scons: Reading SConscript files ...
scons version: 1.3.1
python version: 2 7 2 'final' 0
Checking whether the C++ compiler works... (cached) yes
Checking for C header file unistd.h... (cached) yes
Checking for C library rt... (cached) yes
Checking for C++ header file execinfo.h... (cached) yes
Checking whether backtrace is declared... (cached) yes
Checking whether backtrace_symbols is declared... (cached) yes
Checking for C library pcap... (cached) no
Checking for C library wpcap... (cached) no
Checking for C library nsl... (cached) yes
scons: done reading SConscript files.
scons: Building targets ...
generate_buildinfo(["build/buildinfo.cpp"], ['\n#include <string>\n#include <boost/version.hpp>\n\n#include "mongo/util/version.h"\n\nnamespace mongo {\n const char * gitVersion() { return "%(git_version)s"; }\n const char * compiledJSEngine() { return "%(js_engine)s"; }\n const char * allocator() { return "%(allocator)s"; }\n const char * loaderFlags() { return "%(loader_flags)s"; }\n const char * compilerFlags() { return "%(compiler_flags)s"; }\n std::string sysInfo() { return "%(sys_info)s BOOST_LIB_VERSION=" BOOST_LIB_VERSION ; }\n} // namespace mongo\n'])
/opt/local/bin/python2.7 /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/buildscripts/smoke.py --with-cleanbb jsSlowNightly
cwd [/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo]
nokill requested, not killing anybody
cwd [/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo]
num procs:1
buildlogger: could not find or import buildbot.tac for authentication
Fri Feb 22 11:14:57.123 [initandlisten] MongoDB starting : pid=18143 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=bs-smartos-x86-64-1.10gen.cc
Fri Feb 22 11:14:57.123 [initandlisten]
Fri Feb 22 11:14:57.123 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
Fri Feb 22 11:14:57.123 [initandlisten] ** uses to detect impending page faults.
Fri Feb 22 11:14:57.123 [initandlisten] ** This may result in slower performance for certain use cases
Fri Feb 22 11:14:57.123 [initandlisten]
Fri Feb 22 11:14:57.123 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
Fri Feb 22 11:14:57.123 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
Fri Feb 22 11:14:57.123 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
Fri Feb 22 11:14:57.124 [initandlisten] allocator: system
Fri Feb 22 11:14:57.124 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999, setParameter: [ "enableTestCommands=1" ] }
Fri Feb 22 11:14:57.124 [initandlisten] journal dir=/data/db/sconsTests/journal
Fri Feb 22 11:14:57.124 [initandlisten] recover : no journal files present, no recovery needed
Fri Feb 22 11:14:57.139 [FileAllocator] allocating new datafile /data/db/sconsTests/local.ns, filling with zeroes...
Fri Feb 22 11:14:57.140 [FileAllocator] creating directory /data/db/sconsTests/_tmp
Fri Feb 22 11:14:57.140 [FileAllocator] done allocating datafile /data/db/sconsTests/local.ns, size: 16MB, took 0 secs
Fri Feb 22 11:14:57.140 [FileAllocator] allocating new datafile /data/db/sconsTests/local.0, filling with zeroes...
Fri Feb 22 11:14:57.140 [FileAllocator] done allocating datafile /data/db/sconsTests/local.0, size: 64MB, took 0 secs
Fri Feb 22 11:14:57.143 [websvr] admin web console waiting for connections on port 28999
Fri Feb 22 11:14:57.143 [initandlisten] waiting for connections on port 27999
Fri Feb 22 11:14:57.949 [initandlisten] connection accepted from 127.0.0.1:56914 #1 (1 connection now open)
running /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1
*******************************************
Test : 32bit.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/32bit.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/32bit.js";TestData.testFile = "32bit.js";TestData.testName = "32bit";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:14:57 2013
Fri Feb 22 11:14:57.962 [conn1] end connection 127.0.0.1:56914 (0 connections now open)
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:14:58.113 [initandlisten] connection accepted from 127.0.0.1:51266 #2 (1 connection now open)
null
32bit.js running - this test is slow so only runs at night.
Fri Feb 22 11:14:58.122 [conn2] dropDatabase test_32bit starting
Fri Feb 22 11:14:58.130 [conn2] removeJournalFiles
Fri Feb 22 11:14:58.131 [conn2] dropDatabase test_32bit finished
32bit.js PASS #1 seed=0.650223703822121
Fri Feb 22 11:14:58.132 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.ns, filling with zeroes...
Fri Feb 22 11:14:58.132 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.ns, size: 16MB, took 0 secs
Fri Feb 22 11:14:58.132 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.0, filling with zeroes...
Fri Feb 22 11:14:58.132 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.0, size: 64MB, took 0 secs
Fri Feb 22 11:14:58.133 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.1, filling with zeroes...
Fri Feb 22 11:14:58.133 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.1, size: 128MB, took 0 secs
Fri Feb 22 11:14:58.136 [conn2] build index test_32bit.colltest_32bit { _id: 1 }
Fri Feb 22 11:14:58.137 [conn2] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:14:58.137 [conn2] build index test_32bit.colltest_32bit { a: 1.0 }
Fri Feb 22 11:14:58.138 [conn2] build index done. scanned 1 total records. 0 secs
Fri Feb 22 11:14:58.139 [conn2] build index test_32bit.colltest_32bit { b: 1.0 }
Fri Feb 22 11:14:58.140 [conn2] build index done. scanned 1 total records. 0 secs
Fri Feb 22 11:14:58.140 [conn2] build index test_32bit.colltest_32bit { x: 1.0 }
Fri Feb 22 11:14:58.142 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.142 [conn2] build index test_32bit.colltest_32bit { c: 1.0 }
Fri Feb 22 11:14:58.143 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.144 [conn2] build index test_32bit.colltest_32bit { d: 1.0 }
Fri Feb 22 11:14:58.145 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.145 [conn2] build index test_32bit.colltest_32bit { e: 1.0 }
Fri Feb 22 11:14:58.146 [conn2] build index done. scanned 1 total records. 0.001 secs
Fri Feb 22 11:14:58.147 [conn2] build index test_32bit.colltest_32bit { f: 1.0 }
Fri Feb 22 11:14:58.148 [conn2] build index done. scanned 1 total records. 0.001 secs
32bit.js eta_secs:315.5
Fri Feb 22 11:15:01.967 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.2, filling with zeroes...
Fri Feb 22 11:15:01.967 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.2, size: 256MB, took 0 secs
Fri Feb 22 11:15:13.484 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.3, filling with zeroes...
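The FileAllocator entries in this run show the mmapv1-style preallocation pattern: after the fixed 16MB namespace file, each new datafile doubles the previous one (64, 128, 256MB, ...) up to a roughly 2GB cap, which is why the last file comes out at 2047MB. A minimal sketch of that growth rule (the helper name and defaults are illustrative, not MongoDB code):

```python
def datafile_sizes_mb(n, first_mb=64, cap_mb=2047):
    """Yield the sizes (in MB) of the first n datafiles of a database:
    each file doubles the previous one, capped at ~2GB."""
    size = first_mb
    for _ in range(n):
        yield size
        size = min(size * 2, cap_mb)

# Sizes observed above for test_32bit.0 .. test_32bit.5
print(list(datafile_sizes_mb(6)))  # [64, 128, 256, 512, 1024, 2047]
```

This matches the allocations logged for test_32bit.0 through test_32bit.5.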
Fri Feb 22 11:15:13.484 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.3, size: 512MB, took 0 secs
100000
200000
Fri Feb 22 11:15:51.711 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.4, filling with zeroes...
Fri Feb 22 11:15:51.711 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.4, size: 1024MB, took 0 secs
300000
400000
500000
600000
700000
Fri Feb 22 11:17:32.974 [FileAllocator] allocating new datafile /data/db/sconsTests/test_32bit.5, filling with zeroes...
Fri Feb 22 11:17:32.975 [FileAllocator] done allocating datafile /data/db/sconsTests/test_32bit.5, size: 2047MB, took 0 secs
count: 723263
Fri Feb 22 11:17:38.621 [conn2] CMD: validate test_32bit.colltest_32bit
Fri Feb 22 11:17:38.621 [conn2] validating index 0: test_32bit.colltest_32bit.$_id_
Fri Feb 22 11:17:38.649 [conn2] validating index 1: test_32bit.colltest_32bit.$a_1
Fri Feb 22 11:17:38.675 [conn2] validating index 2: test_32bit.colltest_32bit.$b_1
Fri Feb 22 11:17:38.696 [conn2] validating index 3: test_32bit.colltest_32bit.$x_1
Fri Feb 22 11:17:38.721 [conn2] validating index 4: test_32bit.colltest_32bit.$c_1
Fri Feb 22 11:17:39.074 [conn2] validating index 5: test_32bit.colltest_32bit.$d_1
Fri Feb 22 11:17:39.111 [conn2] validating index 6: test_32bit.colltest_32bit.$e_1
Fri Feb 22 11:17:39.132 [conn2] validating index 7: test_32bit.colltest_32bit.$f_1
Fri Feb 22 11:17:39.158 [conn2] command test_32bit.$cmd command: { validate: "colltest_32bit", full: undefined } ntoreturn:1 keyUpdates:0 locks(micros) r:537722 reslen:1085 537ms
Fri Feb 22 11:17:39.159 [conn2] dropDatabase test_32bit starting
Fri Feb 22 11:17:39.540 [conn2] removeJournalFiles
Fri Feb 22 11:17:40.114 [conn2] dropDatabase test_32bit finished
Fri Feb 22 11:17:40.114 [conn2] command test_32bit.$cmd command: { dropDatabase: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:955072 reslen:61 955ms
32bit.js SUCCESS
Fri Feb 22 11:17:40.134 [conn2] end connection 127.0.0.1:51266 (0 connections now open)
2.7034 minutes
Fri Feb 22 11:17:40.153 [initandlisten] connection accepted from 127.0.0.1:49308 #3 (1 connection now open)
Fri Feb 22 11:17:40.155 [conn3] end connection 127.0.0.1:49308 (0 connections now open)
*******************************************
Test : autosplit_heuristics.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/autosplit_heuristics.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/autosplit_heuristics.js";TestData.testFile = "autosplit_heuristics.js";TestData.testName = "autosplit_heuristics";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:17:40 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:17:40.301 [initandlisten] connection accepted from 127.0.0.1:38033 #4 (1 connection now open)
null
Resetting db path '/data/db/test0'
Fri Feb 22 11:17:40.309 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/test0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:17:40.384 [initandlisten] MongoDB starting : pid=19376 port=30000 dbpath=/data/db/test0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:17:40.384 [initandlisten]
m30000| Fri Feb 22 11:17:40.384 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:17:40.384 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:17:40.384 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:17:40.384 [initandlisten]
m30000| Fri Feb 22 11:17:40.384 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:17:40.384 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:17:40.384 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:17:40.384 [initandlisten] allocator: system
m30000| Fri Feb 22 11:17:40.384 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:17:40.385 [initandlisten] journal dir=/data/db/test0/journal
m30000| Fri Feb 22 11:17:40.385 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] allocating new datafile /data/db/test0/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] done allocating datafile /data/db/test0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] allocating new datafile /data/db/test0/local.0, filling with zeroes...
m30000| Fri Feb 22 11:17:40.400 [FileAllocator] done allocating datafile /data/db/test0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:17:40.403 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:17:40.403 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:17:40.511 [initandlisten] connection accepted from 127.0.0.1:40147 #1 (1 connection now open)
Resetting db path '/data/db/test-config0'
Fri Feb 22 11:17:40.515 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/test-config0 --configsvr --setParameter enableTestCommands=1
m29000| Fri Feb 22 11:17:40.585 [initandlisten] MongoDB starting : pid=19377 port=29000 dbpath=/data/db/test-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m29000| Fri Feb 22 11:17:40.586 [initandlisten]
m29000| Fri Feb 22 11:17:40.586 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m29000| Fri Feb 22 11:17:40.586 [initandlisten] ** uses to detect impending page faults.
m29000| Fri Feb 22 11:17:40.586 [initandlisten] ** This may result in slower performance for certain use cases
m29000| Fri Feb 22 11:17:40.586 [initandlisten]
m29000| Fri Feb 22 11:17:40.586 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m29000| Fri Feb 22 11:17:40.586 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m29000| Fri Feb 22 11:17:40.586 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m29000| Fri Feb 22 11:17:40.586 [initandlisten] allocator: system
m29000| Fri Feb 22 11:17:40.586 [initandlisten] options: { configsvr: true, dbpath: "/data/db/test-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] }
m29000| Fri Feb 22 11:17:40.586 [initandlisten] journal dir=/data/db/test-config0/journal
m29000| Fri Feb 22 11:17:40.586 [initandlisten] recover : no journal files present, no recovery needed
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] allocating new datafile /data/db/test-config0/local.ns, filling with zeroes...
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] creating directory /data/db/test-config0/_tmp
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] done allocating datafile /data/db/test-config0/local.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] allocating new datafile /data/db/test-config0/local.0, filling with zeroes...
m29000| Fri Feb 22 11:17:40.599 [FileAllocator] done allocating datafile /data/db/test-config0/local.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.602 [initandlisten] ******
m29000| Fri Feb 22 11:17:40.602 [initandlisten] creating replication oplog of size: 5MB...
m29000| Fri Feb 22 11:17:40.606 [initandlisten] ******
m29000| Fri Feb 22 11:17:40.606 [initandlisten] waiting for connections on port 29000
m29000| Fri Feb 22 11:17:40.606 [websvr] ERROR: listen(): bind() failed errno:125 Address already in use for socket: 0.0.0.0:30000
m29000| Fri Feb 22 11:17:40.606 [websvr] ERROR: addr already in use
m29000| Fri Feb 22 11:17:40.716 [initandlisten] connection accepted from 127.0.0.1:52563 #1 (1 connection now open)
"localhost:29000"
m29000| Fri Feb 22 11:17:40.717 [initandlisten] connection accepted from 127.0.0.1:55728 #2 (2 connections now open)
ShardingTest test : { "config" : "localhost:29000", "shards" : [ connection to localhost:30000 ] }
Fri Feb 22 11:17:40.723 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:29000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:17:40.750 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:17:40.751 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=19378 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:17:40.751 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:17:40.751 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:17:40.751 [mongosMain] options: { chunkSize: 1, configdb: "localhost:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 11:17:40.751 [mongosMain] config string : localhost:29000
m30999| Fri Feb 22 11:17:40.751 [mongosMain] creating new connection to:localhost:29000
m30999| Fri Feb 22 11:17:40.752 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.752 [initandlisten] connection accepted from 127.0.0.1:54691 #3 (3 connections now open)
m30999| Fri Feb 22 11:17:40.752 [mongosMain] connected connection!
m30999| Fri Feb 22 11:17:40.753 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:17:40.753 [mongosMain] creating new connection to:localhost:29000
m30999| Fri Feb 22 11:17:40.753 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.753 [initandlisten] connection accepted from 127.0.0.1:64791 #4 (4 connections now open)
m30999| Fri Feb 22 11:17:40.753 [mongosMain] connected connection!
m29000| Fri Feb 22 11:17:40.754 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:17:40.758 [mongosMain] created new distributed lock for configUpgrade on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:17:40.759 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838 )
m30999| Fri Feb 22 11:17:40.759 [LockPinger] creating distributed lock ping thread for localhost:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:17:40.759 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 11:17:40.760 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:17:40 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "512753d414c149b7a4b0a7b1" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:17:40.760 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 11:17:40.761 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 11:17:40.763 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 11:17:40.764 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.764 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 11:17:40.765 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.765 [LockPinger] cluster localhost:29000 pinged successfully at Fri Feb 22 11:17:40 2013 by distributed lock pinger 'localhost:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838', sleeping for 30000ms
m29000| Fri Feb 22 11:17:40.765 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 11:17:40.766 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:17:40.766 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' acquired, ts : 512753d414c149b7a4b0a7b1
m30999| Fri Feb 22 11:17:40.768 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:17:40.768 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:17:40.768 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d414c149b7a4b0a7b2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531860768), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 11:17:40.768 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 11:17:40.769 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.769 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 11:17:40.769 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 11:17:40.770 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.770 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d414c149b7a4b0a7b4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531860770), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:17:40.770 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:17:40.771 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' unlocked.
m29000| Fri Feb 22 11:17:40.772 [conn3] build index config.settings { _id: 1 }
m29000| Fri Feb 22 11:17:40.772 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.773 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:17:40.773 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:17:40.773 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:17:40.773 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:17:40.773 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:17:40.773 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:17:40.773 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:17:40.773 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 11:17:40.773 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 11:17:40.774 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.774 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 11:17:40.774 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 11:17:40.775 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.775 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 11:17:40.775 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.775 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 11:17:40.776 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.776 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 11:17:40.776 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:17:40.776 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 11:17:40.777 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 11:17:40.777 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.778 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:17:40.778 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:17:40
m30999| Fri Feb 22 11:17:40.778 [Balancer] created new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:17:40.778 [Balancer] creating new connection to:localhost:29000
m29000| Fri Feb 22 11:17:40.778 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 11:17:40.778 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.778 [initandlisten] connection accepted from 127.0.0.1:64925 #5 (5 connections now open)
m30999| Fri Feb 22 11:17:40.778 [Balancer] connected connection!
m29000| Fri Feb 22 11:17:40.779 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.779 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:17:40.779 [Balancer] trying to acquire new distributed lock for balancer on localhost:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838 )
m30999| Fri Feb 22 11:17:40.779 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 11:17:40.779 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:17:40 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512753d414c149b7a4b0a7b6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 11:17:40.780 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' acquired, ts : 512753d414c149b7a4b0a7b6
m30999| Fri Feb 22 11:17:40.780 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:17:40.780 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:17:40.780 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:17:40.780 [Balancer] no collections to balance
m30999| Fri Feb 22 11:17:40.780 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:17:40.780 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:17:40.780 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838' unlocked.
m30999| Fri Feb 22 11:17:40.924 [mongosMain] connection accepted from 127.0.0.1:60762 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:17:40.926 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 11:17:40.927 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 11:17:40.927 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:17:40.928 [conn1] put [admin] on: config:localhost:29000
m30999| Fri Feb 22 11:17:40.928 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:17:40.928 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:17:40.928 [conn1] connected connection!
m30000| Fri Feb 22 11:17:40.928 [initandlisten] connection accepted from 127.0.0.1:54131 #2 (2 connections now open)
m30999| Fri Feb 22 11:17:40.929 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
m30999| Fri Feb 22 11:17:40.931 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:17:40.931 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:17:40.939 [conn1] connected connection!
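The [Balancer] lines above trace one balancing round: acquire the distributed lock, check each sharded collection for imbalance, move chunks only if needed ("no need to move any chunk" here, since nothing is sharded yet), then release the lock. A rough sketch of that decision loop, with a made-up chunk-count threshold; the real balancer also weighs tags, draining shards, and a threshold that depends on total chunk count:

```python
def balance_round(chunks_by_shard, threshold=8):
    """One illustrative balancing pass: move a chunk from the most-loaded
    shard to the least-loaded one while the imbalance >= threshold.
    Returns the list of (from_shard, to_shard) moves performed."""
    moves = []
    while True:
        most = max(chunks_by_shard, key=lambda s: chunks_by_shard[s])
        least = min(chunks_by_shard, key=lambda s: chunks_by_shard[s])
        if chunks_by_shard[most] - chunks_by_shard[least] < threshold:
            break  # "no need to move any chunk"
        chunks_by_shard[most] -= 1
        chunks_by_shard[least] += 1
        moves.append((most, least))
    return moves

# With a single shard (as in this log) there is nothing to move:
balance_round({"shard0000": 1})  # []
```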
m30999| Fri Feb 22 11:17:40.939 [conn1] creating WriteBackListener for: localhost:30000 serverID: 512753d414c149b7a4b0a7b5
m30000| Fri Feb 22 11:17:40.939 [initandlisten] connection accepted from 127.0.0.1:61319 #3 (3 connections now open)
m30999| Fri Feb 22 11:17:40.939 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 11:17:40.939 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 11:17:40.939 [conn1] creating new connection to:localhost:29000
m30999| Fri Feb 22 11:17:40.940 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:17:40.940 [initandlisten] connection accepted from 127.0.0.1:51914 #6 (6 connections now open)
m30999| Fri Feb 22 11:17:40.940 [conn1] connected connection!
m30999| Fri Feb 22 11:17:40.940 [conn1] creating WriteBackListener for: localhost:29000 serverID: 512753d414c149b7a4b0a7b5
m30999| Fri Feb 22 11:17:40.940 [conn1] initializing shard connection to localhost:29000
m30999| Fri Feb 22 11:17:40.940 BackgroundJob starting: WriteBackListener-localhost:29000
m30999| Fri Feb 22 11:17:40.940 [WriteBackListener-localhost:29000] localhost:29000 is not a shard node
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Fri Feb 22 11:17:40.942 [conn1] couldn't find database [foo] in config db
m30999| Fri Feb 22 11:17:40.942 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:17:40.942 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:17:40.942 [initandlisten] connection accepted from 127.0.0.1:63773 #4 (4 connections now open)
m30999| Fri Feb 22 11:17:40.942 [conn1] connected connection!
m30999| Fri Feb 22 11:17:40.943 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre- m30999| Fri Feb 22 11:17:40.943 [conn1] put [foo] on: shard0000:localhost:30000 m30999| Fri Feb 22 11:17:40.943 [conn1] enabling sharding on: foo { "ok" : 1 } m30000| Fri Feb 22 11:17:40.945 [FileAllocator] allocating new datafile /data/db/test0/foo.ns, filling with zeroes... m30000| Fri Feb 22 11:17:40.945 [FileAllocator] done allocating datafile /data/db/test0/foo.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 11:17:40.945 [FileAllocator] allocating new datafile /data/db/test0/foo.0, filling with zeroes... m30000| Fri Feb 22 11:17:40.945 [FileAllocator] done allocating datafile /data/db/test0/foo.0, size: 64MB, took 0 secs m30000| Fri Feb 22 11:17:40.946 [FileAllocator] allocating new datafile /data/db/test0/foo.1, filling with zeroes... m30000| Fri Feb 22 11:17:40.946 [FileAllocator] done allocating datafile /data/db/test0/foo.1, size: 128MB, took 0 secs m30000| Fri Feb 22 11:17:40.949 [conn4] build index foo.hashBar { _id: 1 } m30000| Fri Feb 22 11:17:40.950 [conn4] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 11:17:40.950 [conn4] info: creating collection foo.hashBar on add index m30999| Fri Feb 22 11:17:40.950 [conn1] CMD: shardcollection: { shardCollection: "foo.hashBar", key: { _id: 1.0 } } m30999| Fri Feb 22 11:17:40.950 [conn1] enable sharding on: foo.hashBar with shard key: { _id: 1.0 } m30999| Fri Feb 22 11:17:40.951 [conn1] going to create 1 chunk(s) for: foo.hashBar using new epoch 512753d414c149b7a4b0a7b7 m30999| Fri Feb 22 11:17:40.951 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 2 version: 1|0||512753d414c149b7a4b0a7b7 based on: (empty) m29000| Fri Feb 22 11:17:40.952 [conn3] build index config.collections { _id: 1 } m29000| Fri Feb 22 11:17:40.953 [conn3] build index done. scanned 0 total records. 
0 secs
 m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 2
 m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "foo.hashBar", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'foo.hashBar'" }
 m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 2
 m30000| Fri Feb 22 11:17:40.954 [conn3] no current chunk manager found for this shard, will initialize
 m29000| Fri Feb 22 11:17:40.954 [initandlisten] connection accepted from 127.0.0.1:47773 #7 (7 connections now open)
 m30999| Fri Feb 22 11:17:40.954 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "collectionsharded" : "foo.hashBar", "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.955 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
 m30000| Fri Feb 22 11:17:40.956 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "foo.hashBar-_id_MinKey", configdb: "localhost:29000" }
 m29000| Fri Feb 22 11:17:40.956 [initandlisten] connection accepted from 127.0.0.1:39233 #8 (8 connections now open)
 m30000| Fri Feb 22 11:17:40.957 [LockPinger] creating distributed lock ping thread for localhost:29000 and process bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070 (sleeping for 30000ms)
 m30000| Fri Feb 22 11:17:40.958 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e57
 m30000| Fri Feb 22 11:17:40.959 [conn4] splitChunk accepted at version 1|0||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.959 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e58", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860959), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.960 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
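The `mNNNNN|` prefix on each interleaved line names the listening port of the emitting process: 30999 is the mongos, 30000 the shard mongod, 29000 the config server. A minimal sketch (my own helper, not part of the test harness) that decodes the prefix, useful when untangling a merged log like this one:

```python
import re
from collections import Counter

def source_port(line):
    """Return the port encoded in a smoke.py 'mNNNNN|' log prefix, or None."""
    m = re.match(r"\s*m(\d+)\|", line)
    return int(m.group(1)) if m else None

sample = [
    "m30999| Fri Feb 22 11:17:40.953 [conn1] setShardVersion failed!",
    "m30000| Fri Feb 22 11:17:40.954 [conn3] no current chunk manager found for this shard, will initialize",
    "m29000| Fri Feb 22 11:17:40.954 [initandlisten] connection accepted from 127.0.0.1:47773 #7 (7 connections now open)",
]
print(Counter(source_port(l) for l in sample))
```

Lines with no prefix (for example the bare `{ "ok" : 1 }` results) come from the test's mongo shell itself.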
 m30999| Fri Feb 22 11:17:40.960 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 3 version: 1|2||512753d414c149b7a4b0a7b7 based on: 1|0||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.961 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
 m30000| Fri Feb 22 11:17:40.961 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], shardId: "foo.hashBar-_id_0.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.962 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e59
 m30000| Fri Feb 22 11:17:40.963 [conn4] splitChunk accepted at version 1|2||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.963 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e5a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860963), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.964 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.964 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 4 version: 1|4||512753d414c149b7a4b0a7b7 based on: 1|2||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.965 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|3||000000000000000000000000min: { _id: 0.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.965 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "foo.hashBar-_id_0.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.966 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e5b
 m30000| Fri Feb 22 11:17:40.967 [conn4] splitChunk accepted at version 1|4||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.967 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e5c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860967), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 0.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 1.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.967 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.968 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 5 version: 1|6||512753d414c149b7a4b0a7b7 based on: 1|4||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.969 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|6||000000000000000000000000min: { _id: 1.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.969 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "foo.hashBar-_id_1.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.970 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e5d
 m30000| Fri Feb 22 11:17:40.980 [conn4] splitChunk accepted at version 1|6||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.980 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e5e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860980), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 1.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 2.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.980 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.981 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 6 version: 1|8||512753d414c149b7a4b0a7b7 based on: 1|6||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.982 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|8||000000000000000000000000min: { _id: 2.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.982 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "foo.hashBar-_id_2.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.983 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e5f
 m30000| Fri Feb 22 11:17:40.984 [conn4] splitChunk accepted at version 1|8||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.984 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e60", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860984), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 2.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 3.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.984 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.985 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 7 version: 1|10||512753d414c149b7a4b0a7b7 based on: 1|8||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.986 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|10||000000000000000000000000min: { _id: 3.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.986 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], shardId: "foo.hashBar-_id_3.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.987 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e61
 m30000| Fri Feb 22 11:17:40.987 [conn4] splitChunk accepted at version 1|10||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.988 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e62", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860988), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 3.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 4.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.988 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.989 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 8 version: 1|12||512753d414c149b7a4b0a7b7 based on: 1|10||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.989 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|12||000000000000000000000000min: { _id: 4.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.989 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], shardId: "foo.hashBar-_id_4.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.990 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e63
 m30000| Fri Feb 22 11:17:40.991 [conn4] splitChunk accepted at version 1|12||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.991 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e64", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860991), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 4.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 5.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.992 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.992 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 9 version: 1|14||512753d414c149b7a4b0a7b7 based on: 1|12||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.993 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|14||000000000000000000000000min: { _id: 5.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.993 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], shardId: "foo.hashBar-_id_5.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.994 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e65
 m30000| Fri Feb 22 11:17:40.995 [conn4] splitChunk accepted at version 1|14||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.995 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e66", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860995), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 5.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 6.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.995 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:40.996 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 10 version: 1|16||512753d414c149b7a4b0a7b7 based on: 1|14||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:40.997 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|16||000000000000000000000000min: { _id: 6.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:40.997 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], shardId: "foo.hashBar-_id_6.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:40.998 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d452997dd3b08a6e67
 m30000| Fri Feb 22 11:17:40.998 [conn4] splitChunk accepted at version 1|16||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:40.999 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:40-512753d452997dd3b08a6e68", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531860999), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 6.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 7.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:40.999 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:41.000 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 11 version: 1|18||512753d414c149b7a4b0a7b7 based on: 1|16||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:41.001 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|18||000000000000000000000000min: { _id: 7.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:41.001 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], shardId: "foo.hashBar-_id_7.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:41.002 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d552997dd3b08a6e69
 m30000| Fri Feb 22 11:17:41.002 [conn4] splitChunk accepted at version 1|18||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:41.003 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:41-512753d552997dd3b08a6e6a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531861003), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 7.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 8.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:41.003 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
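The chunk versions threaded through these lines (`1|0||512753d414c149b7a4b0a7b7`, `1|18||…`, and so on) have the shape `major|minor||epoch`; the log shows each split bumping only the minor component (1|0 becomes 1|1 and 1|2, then 1|3 and 1|4, …) while the epoch stays fixed for the collection. A small parser sketch for this string form (my own helper for reading the log, not MongoDB code):

```python
from collections import namedtuple

ChunkVersion = namedtuple("ChunkVersion", ["major", "minor", "epoch"])

def parse_chunk_version(s):
    """Split a 'major|minor||epoch' chunk version string into its parts."""
    version, _, epoch = s.partition("||")
    major, _, minor = version.partition("|")
    return ChunkVersion(int(major), int(minor), epoch)

v = parse_chunk_version("1|18||512753d414c149b7a4b0a7b7")
print(v)  # ChunkVersion(major=1, minor=18, epoch='512753d414c149b7a4b0a7b7')
```

This makes it easy to check, for example, that each `based on:` version in the ChunkManager reload lines is exactly two minor versions behind the new one, matching one split producing a left and a right chunk.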
 m30999| Fri Feb 22 11:17:41.004 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 12 version: 1|20||512753d414c149b7a4b0a7b7 based on: 1|18||512753d414c149b7a4b0a7b7
{ "ok" : 1 }
 m30999| Fri Feb 22 11:17:41.004 [conn1] splitting: foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|20||000000000000000000000000min: { _id: 8.0 }max: { _id: 10.0 }
 m30000| Fri Feb 22 11:17:41.005 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], shardId: "foo.hashBar-_id_8.0", configdb: "localhost:29000" }
 m30000| Fri Feb 22 11:17:41.005 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753d552997dd3b08a6e6b
 m30000| Fri Feb 22 11:17:41.006 [conn4] splitChunk accepted at version 1|20||512753d414c149b7a4b0a7b7
 m30000| Fri Feb 22 11:17:41.006 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:17:41-512753d552997dd3b08a6e6c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531861006), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 8.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 9.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
 m30000| Fri Feb 22 11:17:41.007 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
 m30999| Fri Feb 22 11:17:41.007 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 13 version: 1|22||512753d414c149b7a4b0a7b7 based on: 1|20||512753d414c149b7a4b0a7b7
{ "ok" : 1 }

----
Setup collection...
----

--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512753d414c149b7a4b0a7b3") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
        foo.hashBar
            shard key: { "_id" : 1 }
            chunks:
                shard0000 12
            { "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : shard0000 { "t" : 1000, "i" : 1 }
            { "_id" : 0 } -->> { "_id" : 1 } on : shard0000 { "t" : 1000, "i" : 5 }
            { "_id" : 1 } -->> { "_id" : 2 } on : shard0000 { "t" : 1000, "i" : 7 }
            { "_id" : 2 } -->> { "_id" : 3 } on : shard0000 { "t" : 1000, "i" : 9 }
            { "_id" : 3 } -->> { "_id" : 4 } on : shard0000 { "t" : 1000, "i" : 11 }
            { "_id" : 4 } -->> { "_id" : 5 } on : shard0000 { "t" : 1000, "i" : 13 }
            { "_id" : 5 } -->> { "_id" : 6 } on : shard0000 { "t" : 1000, "i" : 15 }
            { "_id" : 6 } -->> { "_id" : 7 } on : shard0000 { "t" : 1000, "i" : 17 }
            { "_id" : 7 } -->> { "_id" : 8 } on : shard0000 { "t" : 1000, "i" : 19 }
            { "_id" : 8 } -->> { "_id" : 9 } on : shard0000 { "t" : 1000, "i" : 21 }
            { "_id" : 9 } -->> { "_id" : 10 } on : shard0000 { "t" : 1000, "i" : 22 }
            { "_id" : 10 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 { "t" : 1000, "i" : 4 }

----
Starting inserts of approx size: 18...
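The test then prints `{ "chunkSizeBytes" : 1048576, "insertsForSplit" : 81556, "totalInserts" : 815560 }`. The exact formula lives in the test script, but the figures are consistent with a simple reading: with a 1 MB max chunk size and ~18-byte documents, a chunk is expected to become splittable after roughly 1.4x the chunk size worth of inserts, repeated across the 10 single-unit chunks. A sketch of that hypothesis (the 1.4 factor is my assumption inferred from the numbers, not quoted from the test):

```python
import math

chunk_size_bytes = 1048576   # "chunkSizeBytes" from the log (1 MB)
approx_doc_size = 18         # "Starting inserts of approx size: 18..."
num_chunks = 10              # chunks covering _id 0 through 10

# Hypothesis: splittable after ~1.4x the max chunk size of inserted data.
inserts_for_split = math.ceil(1.4 * chunk_size_bytes / approx_doc_size)
total_inserts = inserts_for_split * num_chunks

print(inserts_for_split, total_inserts)  # 81556 815560
```

Both derived values match the logged JSON exactly, which is why the inserts that follow run for a while before any chunk is "full enough to trigger auto-split".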
----

{ "chunkSizeBytes" : 1048576, "insertsForSplit" : 81556, "totalInserts" : 815560 }
 m30999| Fri Feb 22 11:17:41.049 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|22, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 13
 m30999| Fri Feb 22 11:17:41.050 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
 m30999| Fri Feb 22 11:17:43.765 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209715 splitThreshold: 1048576
 m30999| Fri Feb 22 11:17:43.765 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.765 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209715 splitThreshold: 1048576
 m30999| Fri Feb 22 11:17:43.765 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.766 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209715 splitThreshold: 1048576
 m30999| Fri Feb 22 11:17:43.766 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.766 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209715 splitThreshold: 1048576
 m30999| Fri Feb 22 11:17:43.766 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.766 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209715 splitThreshold: 1048576
 m30999| Fri Feb 22 11:17:43.766 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.767 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209715 splitThreshold: 1048576
 m30999| Fri Feb 22 11:17:43.767 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.841 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209728 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:43.841 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
 m30999| Fri Feb 22 11:17:43.849 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.850 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209728 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:43.850 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
 m30999| Fri Feb 22 11:17:43.858 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.858 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209728 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:43.858 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
 m30999| Fri Feb 22 11:17:43.866 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:43.866 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209728 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:43.866 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
 m30999| Fri Feb 22 11:17:43.874 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:46.781 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:17:46.781 [Balancer] skipping balancing round because balancing is disabled
 m30999| Fri Feb 22 11:17:49.108 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.108 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
 m30999| Fri Feb 22 11:17:49.133 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.133 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.133 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
 m30999| Fri Feb 22 11:17:49.157 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.158 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.158 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
 m30999| Fri Feb 22 11:17:49.180 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.180 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.181 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
 m30999| Fri Feb 22 11:17:49.204 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.205 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.205 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
 m30999| Fri Feb 22 11:17:49.229 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.229 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.229 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
 m30999| Fri Feb 22 11:17:49.252 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.296 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.296 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
 m30999| Fri Feb 22 11:17:49.321 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.321 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.321 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
 m30999| Fri Feb 22 11:17:49.344 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.344 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.345 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
 m30999| Fri Feb 22 11:17:49.367 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:49.367 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:49.368 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
 m30999| Fri Feb 22 11:17:49.390 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:17:52.782 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:17:52.782 [Balancer] skipping balancing round because balancing is disabled
 m30999| Fri Feb 22 11:17:54.595 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.596 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
 m30999| Fri Feb 22 11:17:54.637 [conn1] chunk not full enough to trigger auto-split { _id: 0.321423316494188 }
 m30999| Fri Feb 22 11:17:54.638 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.638 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
 m30999| Fri Feb 22 11:17:54.678 [conn1] chunk not full enough to trigger auto-split { _id: 1.321424542645544 }
 m30999| Fri Feb 22 11:17:54.679 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.679 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
 m30999| Fri Feb 22 11:17:54.717 [conn1] chunk not full enough to trigger auto-split { _id: 2.3214257687969 }
 m30999| Fri Feb 22 11:17:54.717 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.717 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
 m30999| Fri Feb 22 11:17:54.757 [conn1] chunk not full enough to trigger auto-split { _id: 3.321426994948256 }
 m30999| Fri Feb 22 11:17:54.757 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.757 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
 m30999| Fri Feb 22 11:17:54.798 [conn1] chunk not full enough to trigger auto-split { _id: 4.321428221099612 }
 m30999| Fri Feb 22 11:17:54.798 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.798 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
 m30999| Fri Feb 22 11:17:54.836 [conn1] chunk not full enough to trigger auto-split { _id: 5.321429447250969 }
 m30999| Fri Feb 22 11:17:54.897 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.897 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
 m30999| Fri Feb 22 11:17:54.935 [conn1] chunk not full enough to trigger auto-split { _id: 6.321430673402324 }
 m30999| Fri Feb 22 11:17:54.935 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.935 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
 m30999| Fri Feb 22 11:17:54.975 [conn1] chunk not full enough to trigger auto-split { _id: 7.321431899553681 }
 m30999| Fri Feb 22 11:17:54.976 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:54.976 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
 m30999| Fri Feb 22 11:17:55.017 [conn1] chunk not full enough to trigger auto-split { _id: 8.321433125705036 }
 m30999| Fri Feb 22 11:17:55.017 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:17:55.017 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
 m30999| Fri Feb 22 11:17:55.057 [conn1] chunk not full enough to trigger auto-split { _id: 9.321434351856393 }
 m30999| Fri Feb 22 11:17:58.783 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:17:58.783 [Balancer] skipping balancing round because balancing is disabled
 m30999| Fri Feb 22 11:18:00.234 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:18:00.234 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
 m30999| Fri Feb 22 11:18:00.272 [conn1] chunk not full enough to trigger auto-split { _id: 0.321423316494188 }
 m30999| Fri Feb 22 11:18:00.272 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:18:00.272 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
 m30999| Fri Feb 22 11:18:00.308 [conn1] chunk not full enough to trigger auto-split { _id: 1.321424542645544 }
 m30999| Fri Feb 22 11:18:00.308 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:18:00.308 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
 m30999| Fri Feb 22 11:18:00.342 [conn1] chunk not full enough to trigger auto-split { _id: 2.3214257687969 }
 m30999| Fri Feb 22 11:18:00.342 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
 m30000| Fri Feb 22 11:18:00.342 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
 m30999| Fri Feb 22 11:18:00.378 [conn1] chunk not full enough to trigger auto-split { _id: 3.321426994948256 }
 m30999| Fri Feb 22 11:18:00.378 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 }
dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.378 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 } m30999| Fri Feb 22 11:18:00.414 [conn1] chunk not full enough to trigger auto-split { _id: 4.321428221099612 } m30999| Fri Feb 22 11:18:00.414 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.414 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 } m30999| Fri Feb 22 11:18:00.449 [conn1] chunk not full enough to trigger auto-split { _id: 5.321429447250969 } m30999| Fri Feb 22 11:18:00.493 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.493 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 } m30999| Fri Feb 22 11:18:00.527 [conn1] chunk not full enough to trigger auto-split { _id: 6.321430673402324 } m30999| Fri Feb 22 11:18:00.527 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.527 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 } m30999| Fri Feb 22 11:18:00.566 [conn1] chunk not full enough to trigger auto-split { _id: 7.321431899553681 } m30999| Fri Feb 22 11:18:00.566 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576 m30000| Fri Feb 22 11:18:00.566 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 } 
m30999| Fri Feb 22 11:18:00.622 [conn1] chunk not full enough to trigger auto-split { _id: 8.321433125705036 }
m30999| Fri Feb 22 11:18:00.633 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:00.633 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30999| Fri Feb 22 11:18:00.689 [conn1] chunk not full enough to trigger auto-split { _id: 9.321434351856393 }
m30999| Fri Feb 22 11:18:04.783 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:18:04.784 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:18:05.876 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:05.877 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30999| Fri Feb 22 11:18:05.952 [conn1] chunk not full enough to trigger auto-split { _id: 0.321423316494188 }
m30999| Fri Feb 22 11:18:05.952 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:05.952 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30999| Fri Feb 22 11:18:06.026 [conn1] chunk not full enough to trigger auto-split { _id: 1.321424542645544 }
m30999| Fri Feb 22 11:18:06.026 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.026 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30999| Fri Feb 22 11:18:06.085 [conn1] chunk not full enough to trigger auto-split { _id: 2.3214257687969 }
m30999| Fri Feb 22 11:18:06.092 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.092 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30999| Fri Feb 22 11:18:06.169 [conn1] chunk not full enough to trigger auto-split { _id: 3.321426994948256 }
m30999| Fri Feb 22 11:18:06.170 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.170 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30999| Fri Feb 22 11:18:06.243 [conn1] chunk not full enough to trigger auto-split { _id: 4.321428221099612 }
m30999| Fri Feb 22 11:18:06.243 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.243 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30999| Fri Feb 22 11:18:06.317 [conn1] chunk not full enough to trigger auto-split { _id: 5.321429447250969 }
m30999| Fri Feb 22 11:18:06.370 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.370 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30999| Fri Feb 22 11:18:06.443 [conn1] chunk not full enough to trigger auto-split { _id: 6.321430673402324 }
m30999| Fri Feb 22 11:18:06.443 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.443 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30999| Fri Feb 22 11:18:06.512 [conn1] chunk not full enough to trigger auto-split { _id: 7.321431899553681 }
m30999| Fri Feb 22 11:18:06.513 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.513 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30999| Fri Feb 22 11:18:06.581 [conn1] chunk not full enough to trigger auto-split { _id: 8.321433125705036 }
m30999| Fri Feb 22 11:18:06.582 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:06.582 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30999| Fri Feb 22 11:18:06.653 [conn1] chunk not full enough to trigger auto-split { _id: 9.321434351856393 }
m30999| Fri Feb 22 11:18:10.767 [LockPinger] cluster localhost:29000 pinged successfully at Fri Feb 22 11:18:10 2013 by distributed lock pinger 'localhost:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531860:16838', sleeping for 30000ms
m30999| Fri Feb 22 11:18:10.784 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:18:10.784 [Balancer] skipping balancing round because balancing is disabled
m30000| Fri Feb 22 11:18:11.657 [FileAllocator] allocating new datafile /data/db/test0/foo.2, filling with zeroes...
m30000| Fri Feb 22 11:18:11.657 [FileAllocator] done allocating datafile /data/db/test0/foo.2, size: 256MB, took 0 secs
m30999| Fri Feb 22 11:18:12.015 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.015 [conn4] request split points lookup for chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30000| Fri Feb 22 11:18:12.091 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 0.0 } -->> { : 1.0 }
m30000| Fri Feb 22 11:18:12.091 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", splitKeys: [ { _id: 0.321423316494188 } ], shardId: "foo.hashBar-_id_0.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.092 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e6d
m30000| Fri Feb 22 11:18:12.093 [conn4] splitChunk accepted at version 1|22||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.094 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e6e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892094), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 0.321423316494188 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 0.321423316494188 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.094 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.095 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 14 version: 1|24||512753d414c149b7a4b0a7b7 based on: 1|22||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.095 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|5||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 } on: { _id: 0.321423316494188 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.095 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|24, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 14
m30999| Fri Feb 22 11:18:12.096 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.096 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.096 [conn4] request split points lookup for chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30000| Fri Feb 22 11:18:12.169 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 1.0 } -->> { : 2.0 }
m30000| Fri Feb 22 11:18:12.169 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", splitKeys: [ { _id: 1.321424542645544 } ], shardId: "foo.hashBar-_id_1.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.170 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e6f
m30000| Fri Feb 22 11:18:12.171 [conn4] splitChunk accepted at version 1|24||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.172 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e70", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892172), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 1.321424542645544 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 1.321424542645544 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.175 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.176 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 15 version: 1|26||512753d414c149b7a4b0a7b7 based on: 1|24||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.177 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|7||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 } on: { _id: 1.321424542645544 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.177 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|26, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 15
m30999| Fri Feb 22 11:18:12.177 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.177 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.177 [conn4] request split points lookup for chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30000| Fri Feb 22 11:18:12.251 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 2.0 } -->> { : 3.0 }
m30000| Fri Feb 22 11:18:12.251 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0000", splitKeys: [ { _id: 2.3214257687969 } ], shardId: "foo.hashBar-_id_2.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.252 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e71
m30000| Fri Feb 22 11:18:12.253 [conn4] splitChunk accepted at version 1|26||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.254 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e72", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892254), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 2.3214257687969 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 2.3214257687969 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.254 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.255 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 16 version: 1|28||512753d414c149b7a4b0a7b7 based on: 1|26||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.255 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|9||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 } on: { _id: 2.3214257687969 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.255 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|28, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 16
m30999| Fri Feb 22 11:18:12.255 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.255 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.256 [conn4] request split points lookup for chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30000| Fri Feb 22 11:18:12.330 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 3.0 } -->> { : 4.0 }
m30000| Fri Feb 22 11:18:12.330 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0000", splitKeys: [ { _id: 3.321426994948256 } ], shardId: "foo.hashBar-_id_3.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.331 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e73
m30000| Fri Feb 22 11:18:12.332 [conn4] splitChunk accepted at version 1|28||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.333 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e74", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892333), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 3.321426994948256 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 3.321426994948256 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.333 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.334 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 17 version: 1|30||512753d414c149b7a4b0a7b7 based on: 1|28||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.334 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|11||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 } on: { _id: 3.321426994948256 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.334 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|30, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 17
m30999| Fri Feb 22 11:18:12.335 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.335 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.335 [conn4] request split points lookup for chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30000| Fri Feb 22 11:18:12.409 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 4.0 } -->> { : 5.0 }
m30000| Fri Feb 22 11:18:12.409 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0000", splitKeys: [ { _id: 4.321428221099612 } ], shardId: "foo.hashBar-_id_4.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.410 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e75
m30000| Fri Feb 22 11:18:12.411 [conn4] splitChunk accepted at version 1|30||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.411 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e76", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892411), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 4.321428221099612 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 4.321428221099612 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.411 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.412 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 18 version: 1|32||512753d414c149b7a4b0a7b7 based on: 1|30||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.412 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|13||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 } on: { _id: 4.321428221099612 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.412 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|32, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 18
m30999| Fri Feb 22 11:18:12.413 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.413 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.413 [conn4] request split points lookup for chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30000| Fri Feb 22 11:18:12.482 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 5.0 } -->> { : 6.0 }
m30000| Fri Feb 22 11:18:12.482 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0000", splitKeys: [ { _id: 5.321429447250969 } ], shardId: "foo.hashBar-_id_5.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.483 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e77
m30000| Fri Feb 22 11:18:12.484 [conn4] splitChunk accepted at version 1|32||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.485 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e78", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892485), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 5.321429447250969 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 5.321429447250969 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.485 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.486 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 19 version: 1|34||512753d414c149b7a4b0a7b7 based on: 1|32||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.486 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|15||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 } on: { _id: 5.321429447250969 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.486 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|34, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 19
m30999| Fri Feb 22 11:18:12.486 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.532 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.532 [conn4] request split points lookup for chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30000| Fri Feb 22 11:18:12.603 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 6.0 } -->> { : 7.0 }
m30000| Fri Feb 22 11:18:12.603 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0000", splitKeys: [ { _id: 6.321430673402324 } ], shardId: "foo.hashBar-_id_6.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.604 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e79
m30000| Fri Feb 22 11:18:12.605 [conn4] splitChunk accepted at version 1|34||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.605 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e7a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892605), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 6.321430673402324 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 6.321430673402324 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.606 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.607 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 20 version: 1|36||512753d414c149b7a4b0a7b7 based on: 1|34||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.607 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|17||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 } on: { _id: 6.321430673402324 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.607 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|36, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 20
m30999| Fri Feb 22 11:18:12.607 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.607 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.607 [conn4] request split points lookup for chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30000| Fri Feb 22 11:18:12.687 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 7.0 } -->> { : 8.0 }
m30000| Fri Feb 22 11:18:12.687 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0000", splitKeys: [ { _id: 7.321431899553681 } ], shardId: "foo.hashBar-_id_7.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.688 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e7b
m30000| Fri Feb 22 11:18:12.689 [conn4] splitChunk accepted at version 1|36||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.689 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e7c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892689), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 7.321431899553681 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 7.321431899553681 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.690 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.690 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 21 version: 1|38||512753d414c149b7a4b0a7b7 based on: 1|36||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.691 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|19||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 } on: { _id: 7.321431899553681 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.691 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|38, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 21
m30999| Fri Feb 22 11:18:12.691 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:12.691 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } dataWritten: 209718 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:12.691 [conn4] request split points lookup for chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:12.764 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 8.0 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:12.764 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0000", splitKeys: [ { _id: 8.321433125705036 } ], shardId: "foo.hashBar-_id_8.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:12.765 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f452997dd3b08a6e7d
m30000| Fri Feb 22 11:18:12.765 [conn4] splitChunk accepted at version 1|38||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:12.766 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:12-512753f452997dd3b08a6e7e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531892766), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 8.321433125705036 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 8.321433125705036 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:12.766 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:12.767 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 22 version: 1|40||512753d414c149b7a4b0a7b7 based on: 1|38||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:12.767 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|21||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 } on: { _id: 8.321433125705036 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:12.767 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|40, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 22
m30999| Fri Feb 22 11:18:12.768 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:16.326 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.326 [conn4] request split points lookup for chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30000| Fri Feb 22 11:18:16.398 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 9.0 } -->> { : 10.0 }
m30000| Fri Feb 22 11:18:16.398 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 9.0 }, max: { _id: 10.0 }, from: "shard0000", splitKeys: [ { _id: 9.321434351856393 } ], shardId: "foo.hashBar-_id_9.0", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:16.400 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753f852997dd3b08a6e7f
m30000| Fri Feb 22 11:18:16.410 [conn4] splitChunk accepted at version 1|40||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:16.411 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:16-512753f852997dd3b08a6e80", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531896411), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 9.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 9.0 }, max: { _id: 9.321434351856393 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 9.321434351856393 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:16.411 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:16.412 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 23 version: 1|42||512753d414c149b7a4b0a7b7 based on: 1|40||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:16.412 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 } on: { _id: 9.321434351856393 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:16.412 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|42, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 23
m30999| Fri Feb 22 11:18:16.413 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
m30999| Fri Feb 22 11:18:16.413 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|24||000000000000000000000000min: { _id: 0.321423316494188 }max: { _id: 1.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.413 [conn4] request split points lookup for chunk foo.hashBar { : 0.321423316494188 } -->> { : 1.0 }
m30999| Fri Feb 22 11:18:16.479 [conn1] chunk not full enough to trigger auto-split { _id: 0.6428466329883761 }
m30999| Fri Feb 22 11:18:16.480 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|26||000000000000000000000000min: { _id: 1.321424542645544 }max: { _id: 2.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.480 [conn4] request split points lookup for chunk foo.hashBar { : 1.321424542645544 } -->> { : 2.0 }
m30999| Fri Feb 22 11:18:16.544 [conn1] chunk not full enough to trigger auto-split { _id: 1.642847859139732 }
m30999| Fri Feb 22 11:18:16.544 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|28||000000000000000000000000min: { _id: 2.3214257687969 }max: { _id: 3.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.545 [conn4] request split points lookup for chunk foo.hashBar { : 2.3214257687969 } -->> { : 3.0 }
m30999| Fri Feb 22 11:18:16.605 [conn1] chunk not full enough to trigger auto-split { _id: 2.642849085291088 }
m30999| Fri Feb 22 11:18:16.606 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|30||000000000000000000000000min: { _id: 3.321426994948256 }max: { _id: 4.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.606 [conn4] request split points lookup for chunk foo.hashBar { : 3.321426994948256 } -->> { : 4.0 }
m30999| Fri Feb 22 11:18:16.672 [conn1] chunk not full enough to trigger auto-split { _id: 3.642850311442444 }
m30999| Fri Feb 22 11:18:16.672 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|32||000000000000000000000000min: { _id: 4.321428221099612 }max: { _id: 5.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.673 [conn4] request split points lookup for chunk foo.hashBar { : 4.321428221099612 } -->> { : 5.0 }
m30999| Fri Feb 22 11:18:16.737 [conn1] chunk not full enough to trigger auto-split { _id: 4.642851537593801 }
m30999| Fri Feb 22 11:18:16.780 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|34||000000000000000000000000min: { _id: 5.321429447250969 }max: { _id: 6.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.780 [conn4] request split points lookup for chunk foo.hashBar { : 5.321429447250969 } -->> { : 6.0 }
m30999| Fri Feb 22 11:18:16.785 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:18:16.785 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:18:16.842 [conn1] chunk not full enough to trigger auto-split { _id: 5.642852763745156 }
m30999| Fri Feb 22 11:18:16.843 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|36||000000000000000000000000min: { _id: 6.321430673402324 }max: { _id: 7.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.843 [conn4] request split points lookup for chunk foo.hashBar { : 6.321430673402324 } -->> { : 7.0 }
m30999| Fri Feb 22 11:18:16.904 [conn1] chunk not full enough to trigger auto-split { _id: 6.642853989896513 }
m30999| Fri Feb 22 11:18:16.904 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|38||000000000000000000000000min: { _id: 7.321431899553681 }max: { _id: 8.0 } dataWritten: 209717 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:16.904 [conn4] request split points lookup for chunk foo.hashBar { : 7.321431899553681 } -->> { : 8.0 }
m30999| Fri Feb 22 11:18:16.968 [conn1] chunk not full enough to trigger auto-split { _id: 7.642855216047869 }
m30999| Fri Feb 22 11:18:20.884 [conn1] about to initiate autosplit: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|40||000000000000000000000000min: { _id: 8.321433125705036 }max: { _id: 9.0 } dataWritten: 209727 splitThreshold: 1048576
m30000| Fri Feb 22 11:18:20.885 [conn4] request split points lookup for chunk foo.hashBar { : 8.321433125705036 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:20.960 [conn4] max number of requested split points reached (2) before the end of chunk foo.hashBar { : 8.321433125705036 } -->> { : 9.0 }
m30000| Fri Feb 22 11:18:20.960 [conn4] received splitChunk request: { splitChunk: "foo.hashBar", keyPattern: { _id: 1.0 }, min: { _id: 8.321433125705036 }, max: { _id: 9.0 }, from: "shard0000", splitKeys: [ { _id: 8.642856442199225 } ], shardId: "foo.hashBar-_id_8.321433125705036", configdb: "localhost:29000" }
m30000| Fri Feb 22 11:18:20.962 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' acquired, ts : 512753fc52997dd3b08a6e81
m30000| Fri Feb 22 11:18:20.963 [conn4] splitChunk accepted at version 1|42||512753d414c149b7a4b0a7b7
m30000| Fri Feb 22 11:18:20.963 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:18:20-512753fc52997dd3b08a6e82", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:63773", time: new Date(1361531900963), what: "split", ns: "foo.hashBar", details: { before: { min: { _id: 8.321433125705036 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.321433125705036 }, max: { _id: 8.642856442199225 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') }, right: { min: { _id: 8.642856442199225 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('512753d414c149b7a4b0a7b7') } } }
m30000| Fri Feb 22 11:18:20.964 [conn4] distributed lock 'foo.hashBar/bs-smartos-x86-64-1.10gen.cc:30000:1361531860:17070' unlocked.
m30999| Fri Feb 22 11:18:20.964 [conn1] ChunkManager: time to load chunks for foo.hashBar: 0ms sequenceNumber: 24 version: 1|44||512753d414c149b7a4b0a7b7 based on: 1|42||512753d414c149b7a4b0a7b7
m30999| Fri Feb 22 11:18:20.965 [conn1] autosplitted foo.hashBar shard: ns:foo.hashBarshard: shard0000:localhost:30000lastmod: 1|40||000000000000000000000000min: { _id: 8.321433125705036 }max: { _id: 9.0 } on: { _id: 8.642856442199225 } (splitThreshold 1048576)
m30999| Fri Feb 22 11:18:20.965 [conn1] setShardVersion shard0000 localhost:30000 foo.hashBar { setShardVersion: "foo.hashBar", configdb: "localhost:29000", version: Timestamp 1000|44, versionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), serverID: ObjectId('512753d414c149b7a4b0a7b5'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f210 24
m30999| Fri Feb 22 11:18:20.965 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('512753d414c149b7a4b0a7b7'), ok: 1.0 }
---- Inserts completed... ----
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512753d414c149b7a4b0a7b3") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "foo", "partitioned" : true, "primary" : "shard0000" }
      foo.hashBar
        shard key: { "_id" : 1 }
        chunks:
          shard0000 23
        too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:18:20.978 [conn1] warning: mongos collstats doesn't know about: systemFlags
m30999| Fri Feb 22 11:18:20.978 [conn1] warning: mongos collstats doesn't know about: userFlags
{ "sharded" : true, "ns" : "foo.hashBar", "count" : 815560, "numExtents" : 8, "size" : 16311224, "storageSize" : 37797888, "totalIndexSize" : 39277504, "indexSizes" : { "_id_" : 39277504 }, "avgObjSize" : 20.000029427632548, "nindexes" : 1, "nchunks" : 23, "shards" : { "shard0000" : { "ns" : "foo.hashBar", "count" : 815560, "size" : 16311224, "avgObjSize" : 20.000029427632548, "storageSize" : 37797888, "numExtents" : 8, "nindexes" : 1, "lastExtentSize" : 15290368, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 0, "totalIndexSize" : 39277504, "indexSizes" : { "_id_" : 39277504 }, "ok" : 1 } }, "ok" : 1 }
---- DONE! ----
m30999| Fri Feb 22 11:18:20.979 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Fri Feb 22 11:18:20.992 [conn4] end connection 127.0.0.1:63773 (3 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn4] end connection 127.0.0.1:64791 (7 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn6] end connection 127.0.0.1:51914 (7 connections now open)
m30000| Fri Feb 22 11:18:20.992 [conn3] end connection 127.0.0.1:61319 (3 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn5] end connection 127.0.0.1:64925 (7 connections now open)
m29000| Fri Feb 22 11:18:20.992 [conn3] end connection 127.0.0.1:54691 (7 connections now open)
Fri Feb 22 11:18:21.979 shell: stopped mongo program on port 30999
m30000| Fri Feb 22 11:18:21.980 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Fri Feb 22 11:18:21.980 [interruptThread] now exiting
m30000| Fri Feb 22 11:18:21.980 dbexit:
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: going to close listening sockets...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] closing listening socket: 12
m30000| Fri Feb 22 11:18:21.980 [interruptThread] closing listening socket: 13
m30000| Fri Feb 22 11:18:21.980 [interruptThread] closing listening socket: 14
m30000| Fri Feb 22 11:18:21.980 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: going to flush diaglog...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: going to close sockets...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: lock for final commit...
m30000| Fri Feb 22 11:18:21.980 [interruptThread] shutdown: final commit...
m30000| Fri Feb 22 11:18:21.980 [conn1] end connection 127.0.0.1:40147 (1 connection now open)
m29000| Fri Feb 22 11:18:21.980 [conn7] end connection 127.0.0.1:47773 (3 connections now open)
m29000| Fri Feb 22 11:18:21.980 [conn8] end connection 127.0.0.1:39233 (3 connections now open)
m30000| Fri Feb 22 11:18:22.034 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 11:18:22.049 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 11:18:22.049 [interruptThread] journalCleanup...
m30000| Fri Feb 22 11:18:22.049 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 11:18:22.050 dbexit: really exiting now
Fri Feb 22 11:18:22.980 shell: stopped mongo program on port 30000
m29000| Fri Feb 22 11:18:22.980 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Fri Feb 22 11:18:22.980 [interruptThread] now exiting
m29000| Fri Feb 22 11:18:22.980 dbexit:
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: going to close listening sockets...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] closing listening socket: 15
m29000| Fri Feb 22 11:18:22.980 [interruptThread] closing listening socket: 17
m29000| Fri Feb 22 11:18:22.980 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: going to flush diaglog...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: going to close sockets...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: lock for final commit...
m29000| Fri Feb 22 11:18:22.980 [interruptThread] shutdown: final commit...
m29000| Fri Feb 22 11:18:22.981 [conn1] end connection 127.0.0.1:52563 (1 connection now open)
m29000| Fri Feb 22 11:18:22.981 [conn2] end connection 127.0.0.1:55728 (1 connection now open)
m29000| Fri Feb 22 11:18:22.990 [interruptThread] shutdown: closing all files...
m29000| Fri Feb 22 11:18:22.991 [interruptThread] closeAllFiles() finished
m29000| Fri Feb 22 11:18:22.991 [interruptThread] journalCleanup...
m29000| Fri Feb 22 11:18:22.991 [interruptThread] removeJournalFiles
m29000| Fri Feb 22 11:18:22.991 dbexit: really exiting now
Fri Feb 22 11:18:23.980 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 43.699 seconds ***
Fri Feb 22 11:18:24.013 [conn4] end connection 127.0.0.1:38033 (0 connections now open)
43.8797 seconds
Fri Feb 22 11:18:24.036 [initandlisten] connection accepted from 127.0.0.1:48800 #5 (1 connection now open)
Fri Feb 22 11:18:24.037 [conn5] end connection 127.0.0.1:48800 (0 connections now open)
*******************************************
Test : background.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/background.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/background.js";TestData.testFile = "background.js";TestData.testName = "background";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:18:24 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:18:24.203 [initandlisten] connection accepted from 127.0.0.1:42334 #6 (1 connection now open)
null
Fri Feb 22 11:18:24.209 [conn6] CMD: drop test.bg1
Fri Feb 22 11:18:24.210 [initandlisten] connection accepted from 127.0.0.1:40872 #7 (2 connections now open)
Fri Feb 22 11:18:24.211 [FileAllocator] allocating new datafile /data/db/sconsTests/test.ns, filling with zeroes...
Fri Feb 22 11:18:24.211 [FileAllocator] done allocating datafile /data/db/sconsTests/test.ns, size: 16MB, took 0 secs
Fri Feb 22 11:18:24.211 [FileAllocator] allocating new datafile /data/db/sconsTests/test.0, filling with zeroes...
Fri Feb 22 11:18:24.211 [FileAllocator] done allocating datafile /data/db/sconsTests/test.0, size: 64MB, took 0 secs
Fri Feb 22 11:18:24.211 [FileAllocator] allocating new datafile /data/db/sconsTests/test.1, filling with zeroes...
Fri Feb 22 11:18:24.212 [FileAllocator] done allocating datafile /data/db/sconsTests/test.1, size: 128MB, took 0 secs
Fri Feb 22 11:18:24.215 [conn6] build index test.bg1 { _id: 1 }
Fri Feb 22 11:18:24.216 [conn6] build index done. scanned 0 total records. 0.001 secs
0 10000 20000 30000 40000 50000 60000 70000 80000 90000
Fri Feb 22 11:18:27.942 [conn7] build index test.bg1 { i: 1.0 } background
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 0, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 128/99797 0%", "progress" : { "done" : 128, "total" : 99797 }, "numYields" : 3, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(11375) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(25592) } } } ] }
0
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 0, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 3840/109947 3%", "progress" : { "done" : 3840, "total" : 109947 }, "numYields" : 49, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(110018) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(526952) } } } ] }
10000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 1, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 9856/119777 8%", "progress" : { "done" : 9856, "total" : 119777 }, "numYields" : 96, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(287248) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(1104551) } } } ] }
20000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 1, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 15360/129914 11%", "progress" : { "done" : 15360, "total" : 129914 }, "numYields" : 139, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(441567) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(1632933) } } } ] }
30000
Fri Feb 22 11:18:30.007 [conn7] Background Index Build Progress: 19800/137606 14%
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 2, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 21120/139916 15%", "progress" : { "done" : 21120, "total" : 139916 }, "numYields" : 184, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(605477) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(2185753) } } } ] }
40000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 2, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 27520/149991 18%", "progress" : { "done" : 27520, "total" : 149991 }, "numYields" : 234, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(766833) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(2789378) } } } ] }
50000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 3, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 36096/159863 22%", "progress" : { "done" : 36096, "total" : 159863 }, "numYields" : 301, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(912212) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(3542167) } } } ] }
60000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 4, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 45696/169925 26%", "progress" : { "done" : 45696, "total" : 169925 }, "numYields" : 375, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1097449) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(4379971) } } } ] }
70000
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 4, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 52224/179881 29%", "progress" : { "done" : 52224, "total" : 179881 }, "numYields" : 426, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1271744) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(4984808) } } } ] }
80000
Fri Feb 22 11:18:33.006 [conn7] Background Index Build Progress: 52900/181201 29%
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 5, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 58112/189870 30%", "progress" : { "done" : 58112, "total" : 189870 }, "numYields" : 472, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1445095) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(5550709) } } } ] }
90000
{ "n" : 0, "connectionId" : 6, "err" : null, "ok" : 1 }
{ "inprog" : [ { "opid" : 866440, "active" : true, "secs_running" : 6, "op" : "insert", "ns" : "test.system.indexes", "insert" : { "v" : 1, "key" : { "i" : 1 }, "ns" : "test.bg1", "name" : "i_1", "background" : true }, "client" : "127.0.0.1:40872", "desc" : "conn7", "threadId" : "0x12", "connectionId" : 7, "waitingForLock" : false, "msg" : "bg index build Background Index Build Progress: 63872/199880 31%", "progress" : { "done" : 63872, "total" : 199880 }, "numYields" : 517, "lockStats" : { "timeLockedMicros" : { "r" : NumberLong(0), "w" : NumberLong(1617262) }, "timeAcquiringMicros" : { "r" : NumberLong(0), "w" : NumberLong(6103182) } } } ] }
waiting
waiting
Fri Feb 22 11:18:36.002 [conn7] Background Index Build Progress: 129400/200000 64%
waiting
waiting
waiting
Fri Feb 22 11:18:38.326 [conn7] build index done. scanned 200000 total records. 10.383 secs
Fri Feb 22 11:18:38.326 [conn7] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 558 locks(micros) w:9350693 10384ms
{ "n" : 0, "connectionId" : 7, "err" : null, "ok" : 1 }
Fri Feb 22 11:18:39.072 [conn6] end connection 127.0.0.1:42334 (1 connection now open)
Fri Feb 22 11:18:39.072 [conn7] end connection 127.0.0.1:40872 (1 connection now open)
15.0540 seconds
Fri Feb 22 11:18:39.093 [initandlisten] connection accepted from 127.0.0.1:45824 #8 (1 connection now open)
Fri Feb 22 11:18:39.093 [conn8] end connection 127.0.0.1:45824 (0 connections now open)
*******************************************
Test : balance_repl.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_repl.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_repl.js";TestData.testFile = "balance_repl.js";TestData.testName = "balance_repl";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:18:39 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:18:39.265 [initandlisten] connection accepted from 127.0.0.1:57590 #9 (1 connection now open)
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31100, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 0, "node" : 0, "set" : "rs1-rs0" }, "restart" : undefined }
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs0-0' Fri Feb 22 11:18:39.279 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-0 --nopreallocj --setParameter enableTestCommands=1 m31100| note: noprealloc may hurt performance in many applications m31100| Fri Feb 22 11:18:39.371 [initandlisten] MongoDB starting : pid=19530 port=31100 dbpath=/data/db/rs1-rs0-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31100| Fri Feb 22 11:18:39.371 [initandlisten] m31100| Fri Feb 22 11:18:39.371 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31100| Fri Feb 22 11:18:39.371 [initandlisten] ** uses to detect impending page faults. m31100| Fri Feb 22 11:18:39.371 [initandlisten] ** This may result in slower performance for certain use cases m31100| Fri Feb 22 11:18:39.371 [initandlisten] m31100| Fri Feb 22 11:18:39.371 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31100| Fri Feb 22 11:18:39.371 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31100| Fri Feb 22 11:18:39.371 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31100| Fri Feb 22 11:18:39.371 [initandlisten] allocator: system m31100| Fri Feb 22 11:18:39.371 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-0", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31100, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31100| Fri Feb 22 11:18:39.371 [initandlisten] journal dir=/data/db/rs1-rs0-0/journal m31100| Fri Feb 22 11:18:39.372 [initandlisten] recover : no journal files present, no recovery needed m31100| Fri Feb 22 11:18:39.373 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.ns, filling with zeroes... 
m31100| Fri Feb 22 11:18:39.373 [FileAllocator] creating directory /data/db/rs1-rs0-0/_tmp m31100| Fri Feb 22 11:18:39.373 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 11:18:39.373 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.0, filling with zeroes... m31100| Fri Feb 22 11:18:39.374 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.0, size: 16MB, took 0 secs m31100| Fri Feb 22 11:18:39.377 [initandlisten] waiting for connections on port 31100 m31100| Fri Feb 22 11:18:39.377 [websvr] admin web console waiting for connections on port 32100 m31100| Fri Feb 22 11:18:39.380 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31100| Fri Feb 22 11:18:39.380 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31100| Fri Feb 22 11:18:39.481 [initandlisten] connection accepted from 127.0.0.1:54051 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 0, "node" : 1, "set" : "rs1-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/rs1-rs0-1' Fri Feb 22 11:18:39.489 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-1 --nopreallocj --setParameter enableTestCommands=1 m31101| note: noprealloc may hurt performance in many applications m31101| Fri Feb 22 11:18:39.580 [initandlisten] MongoDB starting : pid=19531 port=31101 dbpath=/data/db/rs1-rs0-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31101| Fri Feb 22 11:18:39.581 [initandlisten] m31101| Fri Feb 22 11:18:39.581 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31101| Fri Feb 22 11:18:39.581 [initandlisten] ** uses to detect impending page faults. m31101| Fri Feb 22 11:18:39.581 [initandlisten] ** This may result in slower performance for certain use cases m31101| Fri Feb 22 11:18:39.581 [initandlisten] m31101| Fri Feb 22 11:18:39.581 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31101| Fri Feb 22 11:18:39.581 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31101| Fri Feb 22 11:18:39.581 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31101| Fri Feb 22 11:18:39.581 [initandlisten] allocator: system m31101| Fri Feb 22 11:18:39.581 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-1", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31101, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31101| Fri Feb 22 11:18:39.581 [initandlisten] journal dir=/data/db/rs1-rs0-1/journal m31101| Fri Feb 22 11:18:39.581 [initandlisten] recover : no journal files present, no recovery needed m31101| Fri Feb 22 11:18:39.583 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.ns, filling with zeroes... 
m31101| Fri Feb 22 11:18:39.583 [FileAllocator] creating directory /data/db/rs1-rs0-1/_tmp m31101| Fri Feb 22 11:18:39.583 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 11:18:39.583 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.0, filling with zeroes... m31101| Fri Feb 22 11:18:39.583 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.0, size: 16MB, took 0 secs m31101| Fri Feb 22 11:18:39.586 [initandlisten] waiting for connections on port 31101 m31101| Fri Feb 22 11:18:39.586 [websvr] admin web console waiting for connections on port 32101 m31101| Fri Feb 22 11:18:39.589 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31101| Fri Feb 22 11:18:39.589 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31101| Fri Feb 22 11:18:39.691 [initandlisten] connection accepted from 127.0.0.1:65148 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101 ] { "replSetInitiate" : { "_id" : "rs1-rs0", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101" } ] } } m31100| Fri Feb 22 11:18:39.694 [conn1] replSet replSetInitiate admin command received from client m31100| Fri Feb 22 11:18:39.695 [conn1] replSet replSetInitiate config object parses ok, 2 members specified m31100| Fri Feb 22 11:18:39.695 [initandlisten] connection accepted from 165.225.128.186:56471 #2 (2 connections now open) m31101| Fri Feb 22 11:18:39.696 [initandlisten] connection accepted from 165.225.128.186:34514 #2 (2 connections now open) m31100| Fri Feb 22 11:18:39.697 [conn1] replSet replSetInitiate all members seem up m31100| Fri Feb 22 11:18:39.697 [conn1] ****** m31100| Fri Feb 22 11:18:39.697 [conn1] creating replication oplog of size: 40MB... 
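The `replSetInitiate` payload the shell sends above is assembled by ReplSetTest from the set name and the member host list. A sketch of that construction — `makeReplSetConfig` is a hypothetical helper; the hostnames mirror the log:

```javascript
// Hypothetical helper mirroring how the replSetInitiate config is
// assembled: one member document per host, _id assigned by index.
function makeReplSetConfig(setName, hosts) {
  return {
    _id: setName,
    members: hosts.map(function (host, i) {
      return { _id: i, host: host };
    })
  };
}

var cfg = makeReplSetConfig("rs1-rs0", [
  "bs-smartos-x86-64-1.10gen.cc:31100",
  "bs-smartos-x86-64-1.10gen.cc:31101"
]);
// cfg matches the { replSetInitiate: { _id: "rs1-rs0", members: [ ... ] } }
// document shown in the log above
```

Note that a two-member set like this one has an even vote count, which is why each node later logs "replSet total number of votes is even - add arbiter or give one member an extra vote".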
m31100| Fri Feb 22 11:18:39.697 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.1, filling with zeroes... m31100| Fri Feb 22 11:18:39.698 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.1, size: 64MB, took 0 secs m31100| Fri Feb 22 11:18:39.709 [conn2] end connection 165.225.128.186:56471 (1 connection now open) m31100| Fri Feb 22 11:18:39.710 [conn1] ****** m31100| Fri Feb 22 11:18:39.710 [conn1] replSet info saving a newer config version to local.system.replset m31100| Fri Feb 22 11:18:39.726 [conn1] replSet saveConfigLocally done m31100| Fri Feb 22 11:18:39.726 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31100| Fri Feb 22 11:18:49.380 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:18:49.380 [rsStart] replSet STARTUP2 m31100| Fri Feb 22 11:18:49.380 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31100| Fri Feb 22 11:18:49.380 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31101| Fri Feb 22 11:18:49.589 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:18:49.590 [initandlisten] connection accepted from 165.225.128.186:50991 #3 (2 connections now open) m31101| Fri Feb 22 11:18:49.591 [initandlisten] connection accepted from 165.225.128.186:53042 #3 (3 connections now open) m31101| Fri Feb 22 11:18:49.591 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 11:18:49.591 [rsStart] replSet got config version 1 from a remote, saving locally m31101| Fri Feb 22 11:18:49.591 [rsStart] replSet info saving a newer config version to local.system.replset m31101| Fri Feb 22 11:18:49.593 [rsStart] replSet saveConfigLocally done m31101| Fri Feb 22 11:18:49.594 [rsStart] replSet STARTUP2 m31101| Fri Feb 22 11:18:49.594 [rsMgr] replSet total 
number of votes is even - add arbiter or give one member an extra vote m31101| Fri Feb 22 11:18:49.594 [rsSync] ****** m31101| Fri Feb 22 11:18:49.594 [rsSync] creating replication oplog of size: 40MB... m31101| Fri Feb 22 11:18:49.594 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.1, filling with zeroes... m31101| Fri Feb 22 11:18:49.595 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.1, size: 64MB, took 0 secs m31101| Fri Feb 22 11:18:49.604 [conn3] end connection 165.225.128.186:53042 (2 connections now open) m31101| Fri Feb 22 11:18:49.606 [rsSync] ****** m31101| Fri Feb 22 11:18:49.606 [rsSync] replSet initial sync pending m31101| Fri Feb 22 11:18:49.606 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31100| Fri Feb 22 11:18:50.381 [rsSync] replSet SECONDARY m31100| Fri Feb 22 11:18:51.381 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 thinks that we are down m31100| Fri Feb 22 11:18:51.381 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31100| Fri Feb 22 11:18:51.381 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31101| Fri Feb 22 11:18:51.591 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31101| Fri Feb 22 11:18:51.591 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31100| Fri Feb 22 11:18:57.382 [rsMgr] replSet info electSelf 0 m31101| Fri Feb 22 11:18:57.382 [conn2] replSet RECOVERING m31101| Fri Feb 22 11:18:57.382 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0) m31100| Fri Feb 22 11:18:58.381 [rsMgr] replSet PRIMARY m31100| Fri Feb 22 11:18:59.382 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING m31101| Fri Feb 22 11:18:59.592 [rsHealthPoll] replSet member 
bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31101| Fri Feb 22 11:19:05.606 [rsSync] replSet initial sync pending m31101| Fri Feb 22 11:19:05.606 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:19:05.607 [initandlisten] connection accepted from 165.225.128.186:57241 #4 (3 connections now open) m31101| Fri Feb 22 11:19:05.612 [rsSync] build index local.me { _id: 1 } m31101| Fri Feb 22 11:19:05.615 [rsSync] build index done. scanned 0 total records. 0.002 secs m31101| Fri Feb 22 11:19:05.616 [rsSync] build index local.replset.minvalid { _id: 1 } m31101| Fri Feb 22 11:19:05.617 [rsSync] build index done. scanned 0 total records. 0 secs m31101| Fri Feb 22 11:19:05.617 [rsSync] replSet initial sync drop all databases m31101| Fri Feb 22 11:19:05.617 [rsSync] dropAllDatabasesExceptLocal 1 m31101| Fri Feb 22 11:19:05.617 [rsSync] replSet initial sync clone all databases m31101| Fri Feb 22 11:19:05.617 [rsSync] replSet initial sync data copy, starting syncup m31101| Fri Feb 22 11:19:05.617 [rsSync] oplog sync 1 of 3 m31101| Fri Feb 22 11:19:05.618 [rsSync] oplog sync 2 of 3 m31101| Fri Feb 22 11:19:05.618 [rsSync] replSet initial sync building indexes m31101| Fri Feb 22 11:19:05.618 [rsSync] oplog sync 3 of 3 m31101| Fri Feb 22 11:19:05.618 [rsSync] replSet initial sync finishing up m31101| Fri Feb 22 11:19:05.626 [rsSync] replSet set minValid=5127540f:b m31101| Fri Feb 22 11:19:05.630 [rsSync] replSet initial sync done m31100| Fri Feb 22 11:19:05.631 [conn4] end connection 165.225.128.186:57241 (2 connections now open) m31101| Fri Feb 22 11:19:06.595 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:19:06.595 [initandlisten] connection accepted from 165.225.128.186:62991 #5 (3 connections now open) m31101| Fri Feb 22 11:19:06.630 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:19:06.631 [initandlisten] connection 
accepted from 165.225.128.186:40786 #6 (4 connections now open) m31101| Fri Feb 22 11:19:07.632 [rsSync] replSet SECONDARY m31100| Fri Feb 22 11:19:07.638 [slaveTracking] build index local.slaves { _id: 1 } m31100| Fri Feb 22 11:19:07.639 [slaveTracking] build index done. scanned 0 total records. 0.001 secs Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31200, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 1, "node" : 0, "set" : "rs1-rs1" }, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/rs1-rs1-0' Fri Feb 22 11:19:07.817 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-0 --nopreallocj --setParameter enableTestCommands=1 m31200| note: noprealloc may hurt performance in many applications m31200| Fri Feb 22 11:19:07.905 [initandlisten] MongoDB starting : pid=19558 port=31200 dbpath=/data/db/rs1-rs1-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31200| Fri Feb 22 11:19:07.905 [initandlisten] m31200| Fri Feb 22 11:19:07.905 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31200| Fri Feb 22 11:19:07.905 [initandlisten] ** uses to detect impending page faults. 
m31200| Fri Feb 22 11:19:07.905 [initandlisten] ** This may result in slower performance for certain use cases m31200| Fri Feb 22 11:19:07.905 [initandlisten] m31200| Fri Feb 22 11:19:07.905 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31200| Fri Feb 22 11:19:07.905 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31200| Fri Feb 22 11:19:07.905 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31200| Fri Feb 22 11:19:07.905 [initandlisten] allocator: system m31200| Fri Feb 22 11:19:07.905 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-0", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31200, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31200| Fri Feb 22 11:19:07.906 [initandlisten] journal dir=/data/db/rs1-rs1-0/journal m31200| Fri Feb 22 11:19:07.906 [initandlisten] recover : no journal files present, no recovery needed m31200| Fri Feb 22 11:19:07.907 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.ns, filling with zeroes... m31200| Fri Feb 22 11:19:07.907 [FileAllocator] creating directory /data/db/rs1-rs1-0/_tmp m31200| Fri Feb 22 11:19:07.908 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.ns, size: 16MB, took 0 secs m31200| Fri Feb 22 11:19:07.908 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.0, filling with zeroes... 
m31200| Fri Feb 22 11:19:07.908 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.0, size: 16MB, took 0 secs m31200| Fri Feb 22 11:19:07.911 [initandlisten] waiting for connections on port 31200 m31200| Fri Feb 22 11:19:07.911 [websvr] admin web console waiting for connections on port 32200 m31200| Fri Feb 22 11:19:07.913 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31200| Fri Feb 22 11:19:07.913 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31200| Fri Feb 22 11:19:08.019 [initandlisten] connection accepted from 127.0.0.1:34027 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31200 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31201, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : true, "pathOpts" : { "testName" : "rs1", "shard" : 1, "node" : 1, "set" : "rs1-rs1" }, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/rs1-rs1-1' Fri Feb 22 11:19:08.023 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-1 --nopreallocj --setParameter enableTestCommands=1 m31201| note: noprealloc may hurt performance in many applications m31201| Fri Feb 22 11:19:08.094 [initandlisten] MongoDB starting : pid=19559 port=31201 dbpath=/data/db/rs1-rs1-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31201| Fri Feb 22 11:19:08.095 [initandlisten] m31201| Fri Feb 22 11:19:08.095 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31201| Fri Feb 22 11:19:08.095 [initandlisten] ** uses to detect impending page faults. 
m31201| Fri Feb 22 11:19:08.095 [initandlisten] ** This may result in slower performance for certain use cases m31201| Fri Feb 22 11:19:08.095 [initandlisten] m31201| Fri Feb 22 11:19:08.095 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31201| Fri Feb 22 11:19:08.095 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31201| Fri Feb 22 11:19:08.095 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31201| Fri Feb 22 11:19:08.095 [initandlisten] allocator: system m31201| Fri Feb 22 11:19:08.095 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-1", noprealloc: true, nopreallocj: true, oplogSize: 40, port: 31201, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31201| Fri Feb 22 11:19:08.095 [initandlisten] journal dir=/data/db/rs1-rs1-1/journal m31201| Fri Feb 22 11:19:08.095 [initandlisten] recover : no journal files present, no recovery needed m31201| Fri Feb 22 11:19:08.096 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.ns, filling with zeroes... m31201| Fri Feb 22 11:19:08.096 [FileAllocator] creating directory /data/db/rs1-rs1-1/_tmp m31201| Fri Feb 22 11:19:08.097 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.ns, size: 16MB, took 0 secs m31201| Fri Feb 22 11:19:08.097 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.0, filling with zeroes... 
m31201| Fri Feb 22 11:19:08.097 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.0, size: 16MB, took 0 secs m31201| Fri Feb 22 11:19:08.100 [initandlisten] waiting for connections on port 31201 m31201| Fri Feb 22 11:19:08.100 [websvr] admin web console waiting for connections on port 32201 m31201| Fri Feb 22 11:19:08.102 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31201| Fri Feb 22 11:19:08.102 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31201| Fri Feb 22 11:19:08.224 [initandlisten] connection accepted from 127.0.0.1:47001 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31200, connection to bs-smartos-x86-64-1.10gen.cc:31201 ] { "replSetInitiate" : { "_id" : "rs1-rs1", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31200" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31201" } ] } } m31200| Fri Feb 22 11:19:08.226 [conn1] replSet replSetInitiate admin command received from client m31200| Fri Feb 22 11:19:08.229 [conn1] replSet replSetInitiate config object parses ok, 2 members specified m31200| Fri Feb 22 11:19:08.230 [initandlisten] connection accepted from 165.225.128.186:56222 #2 (2 connections now open) m31201| Fri Feb 22 11:19:08.231 [initandlisten] connection accepted from 165.225.128.186:50247 #2 (2 connections now open) m31200| Fri Feb 22 11:19:08.232 [conn1] replSet replSetInitiate all members seem up m31200| Fri Feb 22 11:19:08.232 [conn1] ****** m31200| Fri Feb 22 11:19:08.232 [conn1] creating replication oplog of size: 40MB... m31200| Fri Feb 22 11:19:08.232 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.1, filling with zeroes... 
m31200| Fri Feb 22 11:19:08.232 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.1, size: 64MB, took 0 secs m31200| Fri Feb 22 11:19:08.247 [conn1] ****** m31200| Fri Feb 22 11:19:08.247 [conn1] replSet info saving a newer config version to local.system.replset m31200| Fri Feb 22 11:19:08.254 [conn2] end connection 165.225.128.186:56222 (1 connection now open) m31200| Fri Feb 22 11:19:08.260 [conn1] replSet saveConfigLocally done m31200| Fri Feb 22 11:19:08.260 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31100| Fri Feb 22 11:19:09.383 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY m31101| Fri Feb 22 11:19:17.384 [conn2] end connection 165.225.128.186:34514 (1 connection now open) m31101| Fri Feb 22 11:19:17.385 [initandlisten] connection accepted from 165.225.128.186:46605 #4 (2 connections now open) m31200| Fri Feb 22 11:19:17.914 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 11:19:17.914 [rsStart] replSet STARTUP2 m31200| Fri Feb 22 11:19:17.914 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31200| Fri Feb 22 11:19:17.914 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is up m31201| Fri Feb 22 11:19:18.103 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 11:19:18.103 [initandlisten] connection accepted from 165.225.128.186:41805 #3 (2 connections now open) m31201| Fri Feb 22 11:19:18.104 [initandlisten] connection accepted from 165.225.128.186:38161 #3 (3 connections now open) m31201| Fri Feb 22 11:19:18.104 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31201 m31201| Fri Feb 22 11:19:18.105 [rsStart] replSet got config version 1 from a remote, saving locally m31201| Fri Feb 22 11:19:18.105 [rsStart] replSet info saving a newer 
config version to local.system.replset m31201| Fri Feb 22 11:19:18.111 [rsStart] replSet saveConfigLocally done m31201| Fri Feb 22 11:19:18.111 [rsStart] replSet STARTUP2 m31201| Fri Feb 22 11:19:18.112 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31201| Fri Feb 22 11:19:18.112 [rsSync] ****** m31201| Fri Feb 22 11:19:18.112 [rsSync] creating replication oplog of size: 40MB... m31201| Fri Feb 22 11:19:18.112 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.1, filling with zeroes... m31201| Fri Feb 22 11:19:18.112 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.1, size: 64MB, took 0 secs m31201| Fri Feb 22 11:19:18.126 [conn3] end connection 165.225.128.186:38161 (2 connections now open) m31201| Fri Feb 22 11:19:18.127 [rsSync] ****** m31201| Fri Feb 22 11:19:18.127 [rsSync] replSet initial sync pending m31201| Fri Feb 22 11:19:18.128 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31200| Fri Feb 22 11:19:18.915 [rsSync] replSet SECONDARY m31100| Fri Feb 22 11:19:19.595 [conn3] end connection 165.225.128.186:50991 (3 connections now open) m31100| Fri Feb 22 11:19:19.596 [initandlisten] connection accepted from 165.225.128.186:59785 #7 (4 connections now open) m31200| Fri Feb 22 11:19:19.914 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31201 thinks that we are down m31200| Fri Feb 22 11:19:19.914 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state STARTUP2 m31200| Fri Feb 22 11:19:19.915 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31201 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31200 is electable' m31201| Fri Feb 22 11:19:20.105 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is up m31201| Fri Feb 22 11:19:20.105 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state SECONDARY m31200| Fri Feb 22 11:19:25.916 [rsMgr] replSet 
info electSelf 0 m31201| Fri Feb 22 11:19:25.916 [conn2] replSet RECOVERING m31201| Fri Feb 22 11:19:25.916 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31200 (0) m31200| Fri Feb 22 11:19:26.915 [rsMgr] replSet PRIMARY m31200| Fri Feb 22 11:19:27.916 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state RECOVERING m31201| Fri Feb 22 11:19:28.106 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state PRIMARY m31201| Fri Feb 22 11:19:34.128 [rsSync] replSet initial sync pending m31201| Fri Feb 22 11:19:34.128 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 11:19:34.128 [initandlisten] connection accepted from 165.225.128.186:33716 #4 (3 connections now open) m31201| Fri Feb 22 11:19:34.135 [rsSync] build index local.me { _id: 1 } m31201| Fri Feb 22 11:19:34.138 [rsSync] build index done. scanned 0 total records. 0.003 secs m31201| Fri Feb 22 11:19:34.139 [rsSync] build index local.replset.minvalid { _id: 1 } m31201| Fri Feb 22 11:19:34.140 [rsSync] build index done. scanned 0 total records. 
0 secs m31201| Fri Feb 22 11:19:34.140 [rsSync] replSet initial sync drop all databases m31201| Fri Feb 22 11:19:34.140 [rsSync] dropAllDatabasesExceptLocal 1 m31201| Fri Feb 22 11:19:34.140 [rsSync] replSet initial sync clone all databases m31201| Fri Feb 22 11:19:34.141 [rsSync] replSet initial sync data copy, starting syncup m31201| Fri Feb 22 11:19:34.141 [rsSync] oplog sync 1 of 3 m31201| Fri Feb 22 11:19:34.141 [rsSync] oplog sync 2 of 3 m31201| Fri Feb 22 11:19:34.141 [rsSync] replSet initial sync building indexes m31201| Fri Feb 22 11:19:34.141 [rsSync] oplog sync 3 of 3 m31201| Fri Feb 22 11:19:34.141 [rsSync] replSet initial sync finishing up m31201| Fri Feb 22 11:19:34.151 [rsSync] replSet set minValid=5127542c:1 m31201| Fri Feb 22 11:19:34.158 [rsSync] replSet initial sync done m31200| Fri Feb 22 11:19:34.158 [conn4] end connection 165.225.128.186:33716 (2 connections now open) m31201| Fri Feb 22 11:19:35.112 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 11:19:35.113 [initandlisten] connection accepted from 165.225.128.186:39096 #5 (3 connections now open) m31201| Fri Feb 22 11:19:35.158 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 11:19:35.158 [initandlisten] connection accepted from 165.225.128.186:33069 #6 (4 connections now open) m31201| Fri Feb 22 11:19:36.159 [rsSync] replSet SECONDARY m31200| Fri Feb 22 11:19:36.167 [slaveTracking] build index local.slaves { _id: 1 } m31200| Fri Feb 22 11:19:36.170 [slaveTracking] build index done. scanned 0 total records. 0.002 secs m31100| Fri Feb 22 11:19:36.328 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/admin.ns, filling with zeroes... m31100| Fri Feb 22 11:19:36.328 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/admin.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 11:19:36.328 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/admin.0, filling with zeroes... 
m31100| Fri Feb 22 11:19:36.328 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/admin.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.331 [conn1] build index admin.foo { _id: 1 }
m31100| Fri Feb 22 11:19:36.332 [conn1] build index done. scanned 0 total records. 0.001 secs
m31101| Fri Feb 22 11:19:36.333 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/admin.ns, filling with zeroes...
m31101| Fri Feb 22 11:19:36.333 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/admin.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:19:36.334 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/admin.0, filling with zeroes...
m31101| Fri Feb 22 11:19:36.334 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361531976000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361531976000, "i" : 1 }
m31101| Fri Feb 22 11:19:36.337 [repl writer worker 1] build index admin.foo { _id: 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 11:19:36.338 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp { "t" : 1361531976000, "i" : 1 }
Fri Feb 22 11:19:36.341 starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
Fri Feb 22 11:19:36.342 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31100| Fri Feb 22 11:19:36.342 [initandlisten] connection accepted from 165.225.128.186:33843 #8 (5 connections now open)
Fri Feb 22 11:19:36.342 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
Fri Feb 22 11:19:36.342 trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m31100| Fri Feb 22 11:19:36.342 [initandlisten] connection accepted from 165.225.128.186:56264 #9 (6 connections now open)
Fri Feb 22 11:19:36.342 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
Fri Feb 22 11:19:36.342 trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
Fri Feb 22 11:19:36.343 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 11:19:36.343 [initandlisten] connection accepted from 165.225.128.186:34675 #5 (3 connections now open)
m31100| Fri Feb 22 11:19:36.343 [initandlisten] connection accepted from 165.225.128.186:47962 #10 (7 connections now open)
m31100| Fri Feb 22 11:19:36.343 [conn8] end connection 165.225.128.186:33843 (6 connections now open)
Fri Feb 22 11:19:36.343 Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 11:19:36.344 [initandlisten] connection accepted from 165.225.128.186:62481 #6 (4 connections now open)
Fri Feb 22 11:19:36.344 replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
Fri Feb 22 11:19:36.344 [ReplicaSetMonitorWatcher] starting
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/admin.ns, filling with zeroes...
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/admin.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/admin.0, filling with zeroes...
m31200| Fri Feb 22 11:19:36.346 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/admin.0, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:36.349 [conn1] build index admin.foo { _id: 1 }
m31200| Fri Feb 22 11:19:36.351 [conn1] build index done. scanned 0 total records. 0.001 secs
m31201| Fri Feb 22 11:19:36.352 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/admin.ns, filling with zeroes...
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31200, is { "t" : 1361531976000, "i" : 1 }
m31201| Fri Feb 22 11:19:36.352 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/admin.ns, size: 16MB, took 0 secs
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361531976000, "i" : 1 }
m31201| Fri Feb 22 11:19:36.352 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/admin.0, filling with zeroes...
m31201| Fri Feb 22 11:19:36.353 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31201
m31201| Fri Feb 22 11:19:36.356 [repl writer worker 1] build index admin.foo { _id: 1 }
m31201| Fri Feb 22 11:19:36.357 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31201, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp { "t" : 1361531976000, "i" : 1 }
Fri Feb 22 11:19:36.360 starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
Fri Feb 22 11:19:36.361 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m31200| Fri Feb 22 11:19:36.361 [initandlisten] connection accepted from 165.225.128.186:56538 #7 (5 connections now open)
Fri Feb 22 11:19:36.361 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
Fri Feb 22 11:19:36.361 trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
Fri Feb 22 11:19:36.361 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
Fri Feb 22 11:19:36.361 trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m31200| Fri Feb 22 11:19:36.361 [initandlisten] connection accepted from 165.225.128.186:47585 #8 (6 connections now open)
Fri Feb 22 11:19:36.361 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m31201| Fri Feb 22 11:19:36.361 [initandlisten] connection accepted from 165.225.128.186:54721 #4 (3 connections now open)
m31200| Fri Feb 22 11:19:36.362 [initandlisten] connection accepted from 165.225.128.186:46153 #9 (7 connections now open)
m31200| Fri Feb 22 11:19:36.362 [conn7] end connection 165.225.128.186:56538 (6 connections now open)
Fri Feb 22 11:19:36.362 Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m31201| Fri Feb 22 11:19:36.363 [initandlisten] connection accepted from 165.225.128.186:33137 #5 (4 connections now open)
Fri Feb 22 11:19:36.363 replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
Resetting db path '/data/db/rs1-config0'
Fri Feb 22 11:19:36.367 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/rs1-config0 --configsvr --nopreallocj --setParameter enableTestCommands=1
m29000| Fri Feb 22 11:19:36.438 [initandlisten] MongoDB starting : pid=19611 port=29000 dbpath=/data/db/rs1-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m29000| Fri Feb 22 11:19:36.438 [initandlisten]
m29000| Fri Feb 22 11:19:36.438 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m29000| Fri Feb 22 11:19:36.438 [initandlisten] ** uses to detect impending page faults.
m29000| Fri Feb 22 11:19:36.438 [initandlisten] ** This may result in slower performance for certain use cases
m29000| Fri Feb 22 11:19:36.438 [initandlisten]
m29000| Fri Feb 22 11:19:36.439 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m29000| Fri Feb 22 11:19:36.439 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m29000| Fri Feb 22 11:19:36.439 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m29000| Fri Feb 22 11:19:36.439 [initandlisten] allocator: system
m29000| Fri Feb 22 11:19:36.439 [initandlisten] options: { configsvr: true, dbpath: "/data/db/rs1-config0", nopreallocj: true, port: 29000, setParameter: [ "enableTestCommands=1" ] }
m29000| Fri Feb 22 11:19:36.439 [initandlisten] journal dir=/data/db/rs1-config0/journal
m29000| Fri Feb 22 11:19:36.439 [initandlisten] recover : no journal files present, no recovery needed
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] allocating new datafile /data/db/rs1-config0/local.ns, filling with zeroes...
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] creating directory /data/db/rs1-config0/_tmp
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] done allocating datafile /data/db/rs1-config0/local.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.440 [FileAllocator] allocating new datafile /data/db/rs1-config0/local.0, filling with zeroes...
m29000| Fri Feb 22 11:19:36.441 [FileAllocator] done allocating datafile /data/db/rs1-config0/local.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.443 [initandlisten] ******
m29000| Fri Feb 22 11:19:36.443 [initandlisten] creating replication oplog of size: 5MB...
m29000| Fri Feb 22 11:19:36.447 [initandlisten] ******
m29000| Fri Feb 22 11:19:36.447 [initandlisten] waiting for connections on port 29000
m29000| Fri Feb 22 11:19:36.447 [websvr] admin web console waiting for connections on port 30000
m29000| Fri Feb 22 11:19:36.568 [initandlisten] connection accepted from 127.0.0.1:48681 #1 (1 connection now open)
"bs-smartos-x86-64-1.10gen.cc:29000"
m29000| Fri Feb 22 11:19:36.569 [initandlisten] connection accepted from 165.225.128.186:57785 #2 (2 connections now open)
ShardingTest rs1 : { "config" : "bs-smartos-x86-64-1.10gen.cc:29000", "shards" : [ connection to rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101, connection to rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 ] }
Fri Feb 22 11:19:36.573 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb bs-smartos-x86-64-1.10gen.cc:29000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:19:36.591 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:19:36.592 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=19612 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:19:36.592 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:19:36.592 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:19:36.592 [mongosMain] options: { chunkSize: 1, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 11:19:36.592 [mongosMain] config string : bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.592 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.593 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.593 [mongosMain] connected connection!
m29000| Fri Feb 22 11:19:36.593 [initandlisten] connection accepted from 165.225.128.186:56121 #3 (3 connections now open)
m30999| Fri Feb 22 11:19:36.594 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:19:36.594 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.594 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:19:36.594 [initandlisten] connection accepted from 165.225.128.186:58413 #4 (4 connections now open)
m30999| Fri Feb 22 11:19:36.594 [mongosMain] connected connection!
m29000| Fri Feb 22 11:19:36.595 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:19:36.600 [mongosMain] created new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:19:36.601 [mongosMain] trying to acquire new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838 )
m30999| Fri Feb 22 11:19:36.601 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:19:36.601 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.ns, filling with zeroes...
m30999| Fri Feb 22 11:19:36.601 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:19:36 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "5127544800fc1508e4df1cdc" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 11:19:36.601 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:19:36.602 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 11:19:36.602 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 11:19:36.604 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 11:19:36.605 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.606 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 11:19:36.607 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.607 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 11:19:36 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838', sleeping for 30000ms
m29000| Fri Feb 22 11:19:36.607 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 11:19:36.608 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:19:36.608 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' acquired, ts : 5127544800fc1508e4df1cdc
m30999| Fri Feb 22 11:19:36.611 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:19:36.611 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:19:36.611 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-5127544800fc1508e4df1cdd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531976611), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 11:19:36.611 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 11:19:36.612 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.612 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 11:19:36.612 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 11:19:36.613 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.614 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-5127544800fc1508e4df1cdf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361531976614), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:19:36.614 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:19:36.614 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' unlocked.
m29000| Fri Feb 22 11:19:36.615 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 11:19:36.616 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m29000| Fri Feb 22 11:19:36.616 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:19:36.616 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:19:36.616 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:19:36.616 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:19:36.616 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:19:36.616 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:19:36.616 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:19:36.617 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 11:19:36.617 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 11:19:36.618 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.618 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 11:19:36.618 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 11:19:36.618 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.618 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 11:19:36.619 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.619 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 11:19:36.619 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.620 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 11:19:36.620 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:19:36.620 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 11:19:36.620 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 11:19:36.621 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.621 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:19:36.621 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:19:36
m30999| Fri Feb 22 11:19:36.621 [Balancer] created new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:19:36.621 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m29000| Fri Feb 22 11:19:36.622 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 11:19:36.622 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.622 [Balancer] connected connection!
m29000| Fri Feb 22 11:19:36.622 [initandlisten] connection accepted from 165.225.128.186:50526 #5 (5 connections now open)
m29000| Fri Feb 22 11:19:36.622 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.623 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:19:36.623 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838 )
m30999| Fri Feb 22 11:19:36.623 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 11:19:36.623 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:19:36 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127544800fc1508e4df1ce1" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 11:19:36.624 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' acquired, ts : 5127544800fc1508e4df1ce1
m30999| Fri Feb 22 11:19:36.624 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:19:36.624 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:19:36.624 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:19:36.624 [Balancer] no collections to balance
m30999| Fri Feb 22 11:19:36.624 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:19:36.624 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:19:36.624 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838' unlocked.
m30999| Fri Feb 22 11:19:36.775 [mongosMain] connection accepted from 127.0.0.1:39839 #1 (1 connection now open)
ShardingTest undefined going to add shard : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.777 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 11:19:36.777 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 11:19:36.778 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:19:36.778 [conn1] put [admin] on: config:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.778 [conn1] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.778 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.779 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.779 [conn1] connected connection!
m31100| Fri Feb 22 11:19:36.779 [initandlisten] connection accepted from 165.225.128.186:47616 #11 (7 connections now open)
m30999| Fri Feb 22 11:19:36.779 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m30999| Fri Feb 22 11:19:36.779 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976779), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.779 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m30999| Fri Feb 22 11:19:36.779 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m30999| Fri Feb 22 11:19:36.779 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.779 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:36.780 [initandlisten] connection accepted from 165.225.128.186:55630 #12 (8 connections now open)
m30999| Fri Feb 22 11:19:36.780 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.780 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m30999| Fri Feb 22 11:19:36.780 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m30999| Fri Feb 22 11:19:36.780 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.780 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.780 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.780 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 11:19:36.780 [initandlisten] connection accepted from 165.225.128.186:44068 #7 (5 connections now open)
m30999| Fri Feb 22 11:19:36.780 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.780 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:36.780 [initandlisten] connection accepted from 165.225.128.186:52897 #13 (9 connections now open)
m30999| Fri Feb 22 11:19:36.780 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.781 [conn1] replicaSetChange: shard not found for set: rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.781 [conn1] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31100| Fri Feb 22 11:19:36.781 [conn11] end connection 165.225.128.186:47616 (8 connections now open)
m30999| Fri Feb 22 11:19:36.781 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976781), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.781 [conn1] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.781 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976781), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.781 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.781 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976781), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.781 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.782 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.782 [conn1] connected connection!
m31101| Fri Feb 22 11:19:36.782 [initandlisten] connection accepted from 165.225.128.186:47479 #8 (6 connections now open)
m30999| Fri Feb 22 11:19:36.782 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.782 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.782 [conn1] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.782 BackgroundJob starting: ReplicaSetMonitorWatcher
m30999| Fri Feb 22 11:19:36.782 [ReplicaSetMonitorWatcher] starting
m30999| Fri Feb 22 11:19:36.782 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.782 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:36.782 [initandlisten] connection accepted from 165.225.128.186:38058 #14 (9 connections now open)
m30999| Fri Feb 22 11:19:36.782 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.784 [conn1] going to add shard: { _id: "rs1-rs0", host: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" }
{ "shardAdded" : "rs1-rs0", "ok" : 1 }
ShardingTest undefined going to add shard : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.785 [conn1] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.785 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.785 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.785 [conn1] connected connection!
m31200| Fri Feb 22 11:19:36.785 [initandlisten] connection accepted from 165.225.128.186:35800 #10 (7 connections now open)
m30999| Fri Feb 22 11:19:36.785 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m30999| Fri Feb 22 11:19:36.785 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976785), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.785 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
m30999| Fri Feb 22 11:19:36.785 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
m30999| Fri Feb 22 11:19:36.785 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.786 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 11:19:36.786 [initandlisten] connection accepted from 165.225.128.186:41224 #11 (8 connections now open)
m30999| Fri Feb 22 11:19:36.786 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.786 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
m30999| Fri Feb 22 11:19:36.786 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m30999| Fri Feb 22 11:19:36.786 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.786 BackgroundJob starting: ConnectBG
m31201| Fri Feb 22 11:19:36.786 [initandlisten] connection accepted from 165.225.128.186:45330 #6 (5 connections now open)
m30999| Fri Feb 22 11:19:36.786 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.786 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m30999| Fri Feb 22 11:19:36.786 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.787 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 11:19:36.787 [initandlisten] connection accepted from 165.225.128.186:47705 #12 (9 connections now open)
m30999| Fri Feb 22 11:19:36.787 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.787 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.787 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.787 [conn1] replicaSetChange: shard not found for set: rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.787 [conn1] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31200| Fri Feb 22 11:19:36.787 [conn10] end connection 165.225.128.186:35800 (8 connections now open)
m30999| Fri Feb 22 11:19:36.787 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976787), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 [conn1] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.788 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976788), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531976788), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.788 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 BackgroundJob starting: ConnectBG
m31201| Fri Feb 22 11:19:36.788 [initandlisten] connection accepted from 165.225.128.186:61872 #7 (6 connections now open)
m30999| Fri Feb 22 11:19:36.788 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.788 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 [conn1] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.788 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.789 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 11:19:36.789 [initandlisten] connection accepted from 165.225.128.186:46311 #13 (9 connections now open)
m30999| Fri Feb 22 11:19:36.789 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.790 [conn1] going to add shard: { _id: "rs1-rs1", host: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" }
{ "shardAdded" : "rs1-rs1", "ok" : 1 }
m30999| Fri Feb 22 11:19:36.791 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31100 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.791 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31101 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.791 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.791 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.791 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.791 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.791 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:36.792 [initandlisten] connection accepted from 165.225.128.186:40845 #15 (10 connections now open)
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.792 [conn1] connected connection!
m31100| Fri Feb 22 11:19:36.792 [initandlisten] connection accepted from 165.225.128.186:49532 #16 (11 connections now open)
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.792 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] connected connection!
m30999| Fri Feb 22 11:19:36.792 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:36.792 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] connected connection!
m31101| Fri Feb 22 11:19:36.792 [initandlisten] connection accepted from 165.225.128.186:63370 #9 (7 connections now open)
m30999| Fri Feb 22 11:19:36.792 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31200 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.792 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31201 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.792 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.792 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.792 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:36.793 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:36.793 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.793 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.793 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200] connected connection!
m31200| Fri Feb 22 11:19:36.793 [initandlisten] connection accepted from 165.225.128.186:40626 #14 (10 connections now open)
m30999| Fri Feb 22 11:19:36.793 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:36.793 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.793 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:19:36.793 [initandlisten] connection accepted from 165.225.128.186:48503 #15 (11 connections now open)
m31201| Fri Feb 22 11:19:36.793 [initandlisten] connection accepted from 165.225.128.186:40100 #8 (7 connections now open)
m30999| Fri Feb 22 11:19:36.793 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201] connected connection!
m30999| Fri Feb 22 11:19:36.793 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.794 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 11:19:36.794 [initandlisten] connection accepted from 165.225.128.186:54107 #6 (6 connections now open)
m30999| Fri Feb 22 11:19:36.794 [conn1] connected connection!
m30999| Fri Feb 22 11:19:36.794 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:29000 serverID: 5127544800fc1508e4df1ce0
m30999| Fri Feb 22 11:19:36.794 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.794 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:19:36.794 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000] bs-smartos-x86-64-1.10gen.cc:29000 is not a shard node
m30999| Fri Feb 22 11:19:36.795 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 11:19:36.795 [conn1] best shard for new allocation is shard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 mapped: 128 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 11:19:36.796 [conn1] put [test] on: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31100| Fri Feb 22 11:19:36.796 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.ns, filling with zeroes...
m31100| Fri Feb 22 11:19:36.797 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.797 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.0, filling with zeroes...
m31100| Fri Feb 22 11:19:36.797 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 11:19:36.800 [conn15] build index test.foo { _id: 1 }
m31100| Fri Feb 22 11:19:36.801 [conn15] build index done. scanned 0 total records. 0 secs
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.ns, filling with zeroes...
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.0, filling with zeroes...
m31101| Fri Feb 22 11:19:36.802 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.0, size: 16MB, took 0 secs
m31101| Fri Feb 22 11:19:36.808 [repl writer worker 1] build index test.foo { _id: 1 }
m31101| Fri Feb 22 11:19:36.810 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:19:36.948 [conn1] enabling sharding on: test
m30999| Fri Feb 22 11:19:36.950 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:19:36.950 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:19:36.950 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.952 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5127544800fc1508e4df1ce2 based on: (empty)
m29000| Fri Feb 22 11:19:36.952 [conn3] build index config.collections { _id: 1 }
m29000| Fri Feb 22 11:19:36.953 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:19:36.954 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 2
m30999| Fri Feb 22 11:19:36.954 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:19:36.954 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), authoritative: true, shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 2
m31100| Fri Feb 22 11:19:36.954 [conn15] no current chunk manager found for this shard, will initialize
m29000| Fri Feb 22 11:19:36.955 [initandlisten] connection accepted from 165.225.128.186:45712 #7 (7 connections now open)
m30999| Fri Feb 22 11:19:36.955 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:19:36.956 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.957 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m29000| Fri Feb 22 11:19:36.957 [initandlisten] connection accepted from 165.225.128.186:41614 #8 (8 connections now open)
m31100| Fri Feb 22 11:19:36.958 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633 (sleeping for 30000ms)
m31100| Fri Feb 22 11:19:36.960 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059523
m31100| Fri Feb 22 11:19:36.961 [conn14] splitChunk accepted at version 1|0||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.962 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059524", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976962), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.962 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
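The chunk-version arithmetic visible in these records (the collection version going 1|0||epoch, then 1|2, 1|4, ... as each splitChunk succeeds, with the left and right halves taking the next two minor versions) can be replayed with a small standalone sketch. This is an illustration of the pattern in the log only, not MongoDB's actual implementation; the `splitVersion` helper name is invented here.

```javascript
// Illustrative model of the "major|minor" chunk-version bumps in this log.
// Each successful splitChunk replaces one chunk with two; the left half
// takes minor+1, the right half minor+2, so the collection's top version
// advances by 2 minor versions per split while the major version stays 1.
function splitVersion(topVersion) {
  const [major, minor] = topVersion.split('|').map(Number);
  return {
    left:  `${major}|${minor + 1}`,
    right: `${major}|${minor + 2}`,
    top:   `${major}|${minor + 2}`,
  };
}

// Replay the sixteen splits this test performs (split keys 0, 100, ..., 1500),
// starting from the initial single-chunk version 1|0.
let top = '1|0';
const history = [top];
for (let i = 0; i < 16; i++) {
  top = splitVersion(top).top;
  history.push(top);
}
```

Replaying it reproduces the versions reported by the ChunkManager lines below: 1|2 after the first split and 1|32 after the last.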
m30999| Fri Feb 22 11:19:36.963 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5127544800fc1508e4df1ce2 based on: 1|0||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.965 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.965 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 100.0 } ], shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.965 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059525
m31100| Fri Feb 22 11:19:36.966 [conn14] splitChunk accepted at version 1|2||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.967 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059526", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976967), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 100.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.967 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.968 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||5127544800fc1508e4df1ce2 based on: 1|2||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.969 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|4||000000000000000000000000min: { _id: 100.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.969 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 100.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 200.0 } ], shardId: "test.foo-_id_100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.970 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059527
m31100| Fri Feb 22 11:19:36.971 [conn14] splitChunk accepted at version 1|4||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.972 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059528", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976972), what: "split", ns: "test.foo", details: { before: { min: { _id: 100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 100.0 }, max: { _id: 200.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.972 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.973 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||5127544800fc1508e4df1ce2 based on: 1|4||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.974 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|6||000000000000000000000000min: { _id: 200.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.974 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 200.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 300.0 } ], shardId: "test.foo-_id_200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.975 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059529
m31100| Fri Feb 22 11:19:36.975 [conn14] splitChunk accepted at version 1|6||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.976 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa44516705952a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976976), what: "split", ns: "test.foo", details: { before: { min: { _id: 200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 200.0 }, max: { _id: 300.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.976 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.977 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||5127544800fc1508e4df1ce2 based on: 1|6||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.978 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|8||000000000000000000000000min: { _id: 300.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.978 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 300.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 400.0 } ], shardId: "test.foo-_id_300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.979 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa44516705952b
m31100| Fri Feb 22 11:19:36.980 [conn14] splitChunk accepted at version 1|8||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.980 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa44516705952c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976980), what: "split", ns: "test.foo", details: { before: { min: { _id: 300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 300.0 }, max: { _id: 400.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.981 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.982 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||5127544800fc1508e4df1ce2 based on: 1|8||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.983 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|10||000000000000000000000000min: { _id: 400.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.983 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 400.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 500.0 } ], shardId: "test.foo-_id_400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.984 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa44516705952d
m31100| Fri Feb 22 11:19:36.985 [conn14] splitChunk accepted at version 1|10||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.985 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa44516705952e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976985), what: "split", ns: "test.foo", details: { before: { min: { _id: 400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 400.0 }, max: { _id: 500.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.986 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.987 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||5127544800fc1508e4df1ce2 based on: 1|10||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.988 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|12||000000000000000000000000min: { _id: 500.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.988 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 500.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 600.0 } ], shardId: "test.foo-_id_500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.988 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa44516705952f
m31100| Fri Feb 22 11:19:36.989 [conn14] splitChunk accepted at version 1|12||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.990 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059530", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976990), what: "split", ns: "test.foo", details: { before: { min: { _id: 500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 500.0 }, max: { _id: 600.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.990 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.991 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||5127544800fc1508e4df1ce2 based on: 1|12||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.992 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|14||000000000000000000000000min: { _id: 600.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.992 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 600.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 700.0 } ], shardId: "test.foo-_id_600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.993 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059531
m31100| Fri Feb 22 11:19:36.994 [conn14] splitChunk accepted at version 1|14||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.994 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059532", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976994), what: "split", ns: "test.foo", details: { before: { min: { _id: 600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 600.0 }, max: { _id: 700.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.995 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:36.996 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||5127544800fc1508e4df1ce2 based on: 1|14||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:36.996 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|16||000000000000000000000000min: { _id: 700.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:36.997 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 700.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 800.0 } ], shardId: "test.foo-_id_700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:36.997 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754488cfa445167059533
m31100| Fri Feb 22 11:19:36.998 [conn14] splitChunk accepted at version 1|16||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:36.999 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:36-512754488cfa445167059534", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531976999), what: "split", ns: "test.foo", details: { before: { min: { _id: 700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 700.0 }, max: { _id: 800.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:36.999 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.000 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||5127544800fc1508e4df1ce2 based on: 1|16||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.001 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|18||000000000000000000000000min: { _id: 800.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.001 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 800.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 900.0 } ], shardId: "test.foo-_id_800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.002 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059535
m31100| Fri Feb 22 11:19:37.003 [conn14] splitChunk accepted at version 1|18||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.003 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059536", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977003), what: "split", ns: "test.foo", details: { before: { min: { _id: 800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 800.0 }, max: { _id: 900.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 900.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.004 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.004 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||5127544800fc1508e4df1ce2 based on: 1|18||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.005 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|20||000000000000000000000000min: { _id: 900.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.006 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 900.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1000.0 } ], shardId: "test.foo-_id_900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.006 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059537
m31100| Fri Feb 22 11:19:37.007 [conn14] splitChunk accepted at version 1|20||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.008 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059538", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977008), what: "split", ns: "test.foo", details: { before: { min: { _id: 900.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 900.0 }, max: { _id: 1000.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.008 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.009 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||5127544800fc1508e4df1ce2 based on: 1|20||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.010 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|22||000000000000000000000000min: { _id: 1000.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.010 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1000.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1100.0 } ], shardId: "test.foo-_id_1000.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.011 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059539
m31100| Fri Feb 22 11:19:37.012 [conn14] splitChunk accepted at version 1|22||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.012 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705953a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977012), what: "split", ns: "test.foo", details: { before: { min: { _id: 1000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.013 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.014 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||5127544800fc1508e4df1ce2 based on: 1|22||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.015 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|24||000000000000000000000000min: { _id: 1100.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.015 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1100.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1200.0 } ], shardId: "test.foo-_id_1100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.015 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705953b
m31100| Fri Feb 22 11:19:37.016 [conn14] splitChunk accepted at version 1|24||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.017 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705953c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977017), what: "split", ns: "test.foo", details: { before: { min: { _id: 1100.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.017 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.018 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||5127544800fc1508e4df1ce2 based on: 1|24||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.019 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|26||000000000000000000000000min: { _id: 1200.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.019 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1200.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1300.0 } ], shardId: "test.foo-_id_1200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.020 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705953d
m31100| Fri Feb 22 11:19:37.021 [conn14] splitChunk accepted at version 1|26||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.022 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705953e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977022), what: "split", ns: "test.foo", details: { before: { min: { _id: 1200.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.022 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.023 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||5127544800fc1508e4df1ce2 based on: 1|26||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.024 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|28||000000000000000000000000min: { _id: 1300.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.024 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1300.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1400.0 } ], shardId: "test.foo-_id_1300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.025 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705953f
m31100| Fri Feb 22 11:19:37.026 [conn14] splitChunk accepted at version 1|28||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.026 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059540", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977026), what: "split", ns: "test.foo", details: { before: { min: { _id: 1300.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.027 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.028 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||5127544800fc1508e4df1ce2 based on: 1|28||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.029 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|30||000000000000000000000000min: { _id: 1400.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.029 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1400.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1500.0 } ], shardId: "test.foo-_id_1400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.029 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059541
m31100| Fri Feb 22 11:19:37.030 [conn14] splitChunk accepted at version 1|30||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.031 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059542", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977031), what: "split", ns: "test.foo", details: { before: { min: { _id: 1400.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.031 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.032 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||5127544800fc1508e4df1ce2 based on: 1|30||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.033 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|32||000000000000000000000000min: { _id: 1500.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.033 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1500.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1600.0 } ], shardId: "test.foo-_id_1500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.034 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059543
m31100| Fri Feb 22 11:19:37.035 [conn14] splitChunk accepted at version 1|32||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.035 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059544", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977035), what: "split", ns: "test.foo", details: { before: { min: { _id: 1500.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.036 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.037 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||5127544800fc1508e4df1ce2 based on: 1|32||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.038 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|34||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.038 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1600.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1700.0 } ], shardId: "test.foo-_id_1600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.038 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059545
m31100| Fri Feb 22 11:19:37.039 [conn14] splitChunk accepted at version 1|34||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.040 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059546", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977040), what: "split", ns: "test.foo", details: { before: { min: { _id: 1600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.040 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.041 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||5127544800fc1508e4df1ce2 based on: 1|34||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.042 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|36||000000000000000000000000min: { _id: 1700.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.042 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1700.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1800.0 } ], shardId: "test.foo-_id_1700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.043 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059547
m31100| Fri Feb 22 11:19:37.044 [conn14] splitChunk accepted at version 1|36||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.045 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa445167059548", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977045), what: "split", ns: "test.foo", details: { before: { min: { _id: 1700.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.045 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.046 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||5127544800fc1508e4df1ce2 based on: 1|36||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.047 [conn1] splitting: test.foo shard: ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|38||000000000000000000000000min: { _id: 1800.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 11:19:37.047 [conn14] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1800.0 }, max: { _id: MaxKey }, from: "rs1-rs0", splitKeys: [ { _id: 1900.0 } ], shardId: "test.foo-_id_1800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 11:19:37.048 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa445167059549
m31100| Fri Feb 22 11:19:37.049 [conn14] splitChunk accepted at version 1|38||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.050 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705954a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977050), what: "split", ns: "test.foo", details: { before: { min: { _id: 1800.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') }, right: { min: { _id: 1900.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5127544800fc1508e4df1ce2') } } }
m31100| Fri Feb 22 11:19:37.050 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m30999| Fri Feb 22 11:19:37.051 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||5127544800fc1508e4df1ce2 based on: 1|38||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:37.052 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|40, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 22
m30999| Fri Feb 22 11:19:37.052 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:37.054 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:37.054 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:19:37.054 [initandlisten] connection accepted from 165.225.128.186:62098 #17 (12 connections now open)
m30999| Fri Feb 22 11:19:37.054 [conn1] connected connection!
m30999| Fri Feb 22 11:19:37.093 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 0.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:37.093 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|3||000000000000000000000000min: { _id: 0.0 }max: { _id: 100.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:37.094 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:37.094 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 0.0 }, max: { _id: 100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:37.095 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754498cfa44516705954b
m31100| Fri Feb 22 11:19:37.095 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:37-512754498cfa44516705954c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531977095), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:37.096 [conn14] moveChunk request accepted at version 1|40||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:37.096 [conn14] moveChunk number of documents: 100
m31100| Fri Feb 22 11:19:37.096 [conn14] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:37.097 [conn14] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m31200| Fri Feb 22 11:19:37.097 [initandlisten] connection accepted from 165.225.128.186:46351 #16 (12 connections now open)
m31100| Fri Feb 22 11:19:37.097 [conn14] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
m31100| Fri Feb 22 11:19:37.097 [conn14] trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
m31200| Fri Feb 22 11:19:37.097 [initandlisten] connection accepted from 165.225.128.186:40234 #17 (13 connections now open)
m31100| Fri Feb 22 11:19:37.097 [conn14] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
m31100| Fri Feb 22 11:19:37.097 [conn14] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m31100| Fri Feb 22 11:19:37.098 [conn14] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m31201| Fri Feb 22 11:19:37.098 [initandlisten] connection accepted from 165.225.128.186:56063 #9 (8 connections now open)
m31200| Fri Feb 22 11:19:37.098 [initandlisten] connection accepted from 165.225.128.186:43875 #18 (14 connections now open)
m31200| Fri Feb 22 11:19:37.098 [conn16] end connection 165.225.128.186:46351 (13 connections now open)
m31100| Fri Feb 22 11:19:37.099 [conn14] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m31201| Fri Feb 22 11:19:37.099 [initandlisten] connection accepted from 165.225.128.186:39391 #10 (9 connections now open)
m31100| Fri Feb 22 11:19:37.100 [conn14] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:37.100 [ReplicaSetMonitorWatcher] starting
m31200| Fri Feb 22 11:19:37.100 [initandlisten] connection accepted from 165.225.128.186:49821 #19 (14 connections now open)
m31200| Fri Feb 22 11:19:37.100 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 100.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:37.100 [migrateThread] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31100| Fri Feb 22 11:19:37.101 [initandlisten] connection accepted from 165.225.128.186:39169 #18 (13 connections now open)
m31200| Fri Feb 22 11:19:37.101 [migrateThread] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31200| Fri Feb 22 11:19:37.101 [migrateThread] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m31200| Fri Feb 22 11:19:37.101 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m31100| Fri Feb 22 11:19:37.101 [initandlisten] connection accepted from 165.225.128.186:36167 #19 (14 connections now open)
m31200| Fri Feb 22 11:19:37.101 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m31200| Fri Feb 22 11:19:37.102 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m31200| Fri Feb 22 11:19:37.102 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 11:19:37.102 [initandlisten] connection accepted from 165.225.128.186:39290 #10 (8 connections now open)
m31100| Fri Feb 22 11:19:37.102 [initandlisten] connection accepted from 165.225.128.186:39920 #20 (15 connections now open)
m31100| Fri Feb 22 11:19:37.103 [conn18] end connection 165.225.128.186:39169 (14 connections now open)
m31200| Fri Feb 22 11:19:37.103 [migrateThread] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 11:19:37.103 [initandlisten] connection accepted from 165.225.128.186:36721 #11 (9 connections now open)
m31200| Fri Feb 22 11:19:37.104 [migrateThread] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m31200| Fri Feb 22 11:19:37.104 [ReplicaSetMonitorWatcher] starting
m31100| Fri Feb 22 11:19:37.104 [initandlisten] connection accepted from 165.225.128.186:45792 #21 (15 connections now open)
m31200| Fri Feb 22 11:19:37.105 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/test.ns, filling with zeroes...
m31200| Fri Feb 22 11:19:37.106 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/test.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:37.106 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/test.0, filling with zeroes...
m31200| Fri Feb 22 11:19:37.106 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/test.0, size: 16MB, took 0 secs
m31200| Fri Feb 22 11:19:37.110 [migrateThread] build index test.foo { _id: 1 }
m31100| Fri Feb 22 11:19:37.110 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:37.111 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m31200| Fri Feb 22 11:19:37.111 [migrateThread] info: creating collection test.foo on add index
m31200| Fri Feb 22 11:19:37.112 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31201| Fri Feb 22 11:19:37.112 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/test.ns, filling with zeroes...
m31201| Fri Feb 22 11:19:37.112 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/test.ns, size: 16MB, took 0 secs
m31201| Fri Feb 22 11:19:37.113 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/test.0, filling with zeroes...
m31201| Fri Feb 22 11:19:37.113 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/test.0, size: 16MB, took 0 secs
m31201| Fri Feb 22 11:19:37.117 [repl writer worker 1] build index test.foo { _id: 1 }
m31201| Fri Feb 22 11:19:37.118 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31201| Fri Feb 22 11:19:37.118 [repl writer worker 1] info: creating collection test.foo on add index
m31100| Fri Feb 22 11:19:37.121 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.131 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.141 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.157 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5, clonedBytes: 145, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.189 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8, clonedBytes: 232, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.253 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 14, clonedBytes: 406, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.382 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 27, clonedBytes: 783, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:37.638 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 52, clonedBytes: 1508, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:37.917 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state SECONDARY
m31200| Fri Feb 22 11:19:38.131 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:38.131 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31200| Fri Feb 22 11:19:38.131 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31100| Fri Feb 22 11:19:38.150 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:38.150 [conn14] moveChunk setting version to: 2|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:38.151 [initandlisten] connection accepted from 165.225.128.186:45938 #20 (15 connections now open)
m31200| Fri Feb 22 11:19:38.151 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:38.152 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31200| Fri Feb 22 11:19:38.152 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 100.0 }
m31200| Fri Feb 22 11:19:38.152 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:38-5127544a4384cdc634ba227e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531978152), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, step1 of 5: 11, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 20 } }
m29000| Fri Feb 22 11:19:38.152 [initandlisten] connection accepted from 165.225.128.186:64688 #9 (9 connections now open)
m31100| Fri Feb 22 11:19:38.161 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 0.0 }, max: { _id: 100.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:38.161 [conn14] moveChunk updating self version to: 2|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m29000| Fri Feb 22 11:19:38.162 [initandlisten] connection accepted from 165.225.128.186:44442 #10 (10 connections now open)
m31100| Fri Feb 22 11:19:38.162 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:38-5127544a8cfa44516705954d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531978162), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:38.162 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:38.162 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:38.162 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:38.162 [conn14] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 100.0 }
m31100| Fri Feb 22 11:19:39.175 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:19:39.175 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 0.0 } -> { _id: 100.0 }
m31100| Fri Feb 22 11:19:39.175 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:39.175 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:39.175 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:39.176 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:39-5127544b8cfa44516705954e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531979176), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 100.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 4, step4 of 6: 1050, step5 of 6: 12, step6 of 6: 1012 } }
m31100| Fri Feb 22 11:19:39.176 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 0.0 }, max: { _id: 100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:345 w:10941 reslen:37 2081ms
m30999| Fri Feb 22 11:19:39.176 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:39.177 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 2|1||5127544800fc1508e4df1ce2 based on: 1|40||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:39.177 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 2000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 23
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:39.178 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:39.178 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:39.178 [conn1] connected connection!
m31101| Fri Feb 22 11:19:39.178 [initandlisten] connection accepted from 165.225.128.186:59280 #12 (10 connections now open)
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 23
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:19:39.178 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 2000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), authoritative: true, shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 23
m31200| Fri Feb 22 11:19:39.178 [conn15] no current chunk manager found for this shard, will initialize
m30999| Fri Feb 22 11:19:39.179 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:19:39.179 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:39.180 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:19:39.180 [conn1] connected connection!
m31201| Fri Feb 22 11:19:39.180 [initandlisten] connection accepted from 165.225.128.186:39345 #11 (10 connections now open)
m30999| Fri Feb 22 11:19:39.181 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:39.182 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 11:19:39.182 [initandlisten] connection accepted from 165.225.128.186:43377 #13 (11 connections now open)
m30999| Fri Feb 22 11:19:39.182 [conn1] connected connection!
m30999| Fri Feb 22 11:19:39.210 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 100.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:39.210 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|5||000000000000000000000000min: { _id: 100.0 }max: { _id: 200.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:39.210 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:39.210 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 100.0 }, max: { _id: 200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:39.211 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127544b8cfa44516705954f
m31100| Fri Feb 22 11:19:39.211 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:39-5127544b8cfa445167059550", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531979211), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:39.212 [conn14] moveChunk request accepted at version 2|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:39.213 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:39.213 [migrateThread] starting receiving-end of migration of chunk { _id: 100.0 } -> { _id: 200.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:39.214 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:39.223 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.233 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.244 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.254 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.270 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.302 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.366 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.494 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:39.751 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0
}, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:40.234 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:40.234 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31200| Fri Feb 22 11:19:40.234 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31100| Fri Feb 22 11:19:40.263 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:40.263 [conn14] moveChunk setting version to: 3|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:40.263 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:40.265 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31200| Fri Feb 22 11:19:40.265 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 100.0 } -> { _id: 200.0 } m31200| Fri Feb 22 11:19:40.265 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:40-5127544c4384cdc634ba227f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531980265), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:40.273 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { 
_id: 100.0 }, max: { _id: 200.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:40.273 [conn14] moveChunk updating self version to: 3|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:40.274 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:40-5127544c8cfa445167059551", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531980274), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:40.274 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:40.274 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:40.274 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:40.274 [conn14] moveChunk starting delete for: test.foo from { _id: 100.0 } -> { _id: 200.0 } m31100| Fri Feb 22 11:19:41.283 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 986ms m31100| Fri Feb 22 11:19:41.283 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 100.0 } -> { _id: 200.0 } m31100| Fri Feb 22 11:19:41.283 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:41.283 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:41.284 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. 
m31100| Fri Feb 22 11:19:41.284 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:41-5127544d8cfa445167059552", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531981284), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 100.0 }, max: { _id: 200.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1009 } } m31100| Fri Feb 22 11:19:41.284 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 100.0 }, max: { _id: 200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:315 w:10984 reslen:37 2073ms m30999| Fri Feb 22 11:19:41.284 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:41.285 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 24 version: 3|1||5127544800fc1508e4df1ce2 based on: 2|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:41.286 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 3000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 24 m30999| Fri Feb 22 11:19:41.286 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:41.286 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 
test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 3000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 24 m30999| Fri Feb 22 11:19:41.287 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:41.289 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:41.289 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:19:41.289 [conn1] connected connection! m31201| Fri Feb 22 11:19:41.289 [initandlisten] connection accepted from 165.225.128.186:42134 #12 (11 connections now open) m30999| Fri Feb 22 11:19:41.315 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 200.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:41.315 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|7||000000000000000000000000min: { _id: 200.0 }max: { _id: 300.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:41.315 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:41.316 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 200.0 }, max: { _id: 300.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", 
secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:41.317 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127544d8cfa445167059553 m31100| Fri Feb 22 11:19:41.317 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:41-5127544d8cfa445167059554", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531981317), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:41.318 [conn14] moveChunk request accepted at version 3|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:41.318 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:41.318 [migrateThread] starting receiving-end of migration of chunk { _id: 200.0 } -> { _id: 300.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:41.319 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:41.328 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.338 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.348 [conn14] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.359 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.375 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.407 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.471 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.599 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 
}, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:41.856 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:42.338 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:42.338 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31200| Fri Feb 22 11:19:42.338 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31100| Fri Feb 22 11:19:42.368 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:42.368 [conn14] moveChunk setting version to: 4|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:42.368 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:42.368 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31200| Fri Feb 22 11:19:42.368 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 200.0 } -> { _id: 300.0 } m31200| Fri Feb 22 11:19:42.369 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:42-5127544e4384cdc634ba2280", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531982369), what: 
"moveChunk.to", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 30 } } m31100| Fri Feb 22 11:19:42.378 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 200.0 }, max: { _id: 300.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:42.378 [conn14] moveChunk updating self version to: 4|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:42.379 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:42-5127544e8cfa445167059555", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531982379), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:42.379 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:42.379 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:42.379 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:42.379 [conn14] moveChunk starting delete for: test.foo from { _id: 200.0 } -> { _id: 300.0 } m30999| Fri Feb 22 11:19:42.625 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:19:42.625 [Balancer] skipping balancing round because balancing is disabled m31100| Fri Feb 22 11:19:43.389 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 984ms m31100| Fri Feb 22 11:19:43.389 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 200.0 } -> { _id: 300.0 } m31100| Fri Feb 22 11:19:43.389 [conn14] MigrateFromStatus::done About to 
acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:43.389 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:43.389 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:43.389 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:43-5127544f8cfa445167059556", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531983389), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 200.0 }, max: { _id: 300.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1009 } } m31100| Fri Feb 22 11:19:43.390 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 200.0 }, max: { _id: 300.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:310 w:11093 reslen:37 2074ms m30999| Fri Feb 22 11:19:43.390 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:43.391 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 4|1||5127544800fc1508e4df1ce2 based on: 3|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:43.395 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 4000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 25 
m30999| Fri Feb 22 11:19:43.395 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:43.395 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 4000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 25 m30999| Fri Feb 22 11:19:43.396 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:43.426 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 300.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:43.426 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|9||000000000000000000000000min: { _id: 300.0 }max: { _id: 400.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:43.426 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:43.426 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 300.0 }, max: { _id: 400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:43.427 [conn14] 
distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127544f8cfa445167059557 m31100| Fri Feb 22 11:19:43.427 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:43-5127544f8cfa445167059558", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531983427), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 300.0 }, max: { _id: 400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:43.428 [conn14] moveChunk request accepted at version 4|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:43.428 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:43.429 [migrateThread] starting receiving-end of migration of chunk { _id: 300.0 } -> { _id: 400.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:43.429 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:43.439 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.449 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.459 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.469 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.485 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.518 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.582 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.710 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: 
{ _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:43.966 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:44.449 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:44.449 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31200| Fri Feb 22 11:19:44.449 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31100| Fri Feb 22 11:19:44.479 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:44.479 [conn14] moveChunk setting version to: 5|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:44.479 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:44.480 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31200| Fri Feb 22 11:19:44.480 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 300.0 } -> { _id: 400.0 } m31200| Fri Feb 22 11:19:44.480 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:44-512754504384cdc634ba2281", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531984480), what: "moveChunk.to", ns: "test.foo", 
details: { min: { _id: 300.0 }, max: { _id: 400.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:44.489 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 300.0 }, max: { _id: 400.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:44.489 [conn14] moveChunk updating self version to: 5|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:44.490 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:44-512754508cfa445167059559", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531984490), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 300.0 }, max: { _id: 400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:44.490 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:44.490 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:44.490 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:44.490 [conn14] moveChunk starting delete for: test.foo from { _id: 300.0 } -> { _id: 400.0 } m31100| Fri Feb 22 11:19:45.501 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms m31100| Fri Feb 22 11:19:45.501 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 300.0 } -> { _id: 400.0 } m31100| Fri Feb 22 11:19:45.501 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:45.501 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:45.501 [conn14] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:45.501 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:45-512754518cfa44516705955a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531985501), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 300.0 }, max: { _id: 400.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1010 } } m31100| Fri Feb 22 11:19:45.501 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 300.0 }, max: { _id: 400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:285 w:11569 reslen:37 2075ms m30999| Fri Feb 22 11:19:45.501 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:45.503 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 26 version: 5|1||5127544800fc1508e4df1ce2 based on: 4|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:45.504 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 5000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 26 m30999| Fri Feb 22 11:19:45.504 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:45.504 
[conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 5000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 26 m30999| Fri Feb 22 11:19:45.505 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:45.537 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 400.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:45.537 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|11||000000000000000000000000min: { _id: 400.0 }max: { _id: 500.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:45.537 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:45.538 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 400.0 }, max: { _id: 500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:45.539 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754518cfa44516705955b m31100| Fri Feb 22 11:19:45.539 [conn14] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:45-512754518cfa44516705955c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531985539), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:45.540 [conn14] moveChunk request accepted at version 5|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:45.540 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:45.541 [migrateThread] starting receiving-end of migration of chunk { _id: 400.0 } -> { _id: 500.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:45.542 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:45.551 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:45.561 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:45.571 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 
22 11:19:45.581 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:45.598 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:45.630 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:45.694 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:45.822 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31201| Fri Feb 22 11:19:45.918 [conn2] end connection 165.225.128.186:50247 (10 connections now open) m31201| Fri Feb 22 11:19:45.918 [initandlisten] 
connection accepted from 165.225.128.186:45106 #13 (11 connections now open) m31100| Fri Feb 22 11:19:46.078 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:46.562 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:46.562 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 400.0 } -> { _id: 500.0 } m31200| Fri Feb 22 11:19:46.563 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 400.0 } -> { _id: 500.0 } m31100| Fri Feb 22 11:19:46.591 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:46.591 [conn14] moveChunk setting version to: 6|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:46.591 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:46.593 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 400.0 } -> { _id: 500.0 } m31200| Fri Feb 22 11:19:46.593 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 400.0 } -> { _id: 500.0 } m31200| Fri Feb 22 11:19:46.593 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:46-512754524384cdc634ba2282", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531986593), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 400.0 }, max: { 
_id: 500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:46.601 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 400.0 }, max: { _id: 500.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:46.601 [conn14] moveChunk updating self version to: 6|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:46.602 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:46-512754528cfa44516705955d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531986602), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:46.602 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:46.602 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:46.602 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:46.602 [conn14] moveChunk starting delete for: test.foo from { _id: 400.0 } -> { _id: 500.0 } m30999| Fri Feb 22 11:19:46.782 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986782), 
ok: 1.0 } m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986783), ok: 1.0 } m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986783), ok: 1.0 } m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:46.783 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", 
ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986783), ok: 1.0 } m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986790), ok: 1.0 } m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:46.790 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986790), ok: 1.0 } m30999| 
Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986791), ok: 1.0 } m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531986791), ok: 1.0 } m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:19:46.791 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m31101| Fri Feb 22 11:19:47.389 [conn4] end connection 165.225.128.186:46605 (10 connections now open) m31101| Fri Feb 22 11:19:47.389 [initandlisten] connection accepted from 165.225.128.186:40295 #14 (11 connections now open) m31100| Fri Feb 22 11:19:47.614 [conn14] Helpers::removeRangeUnlocked time spent waiting 
for replication: 976ms m31100| Fri Feb 22 11:19:47.614 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 400.0 } -> { _id: 500.0 } m31100| Fri Feb 22 11:19:47.614 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:47.614 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:47.615 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:47.615 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:47-512754538cfa44516705955e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531987615), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 400.0 }, max: { _id: 500.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1012 } } m31100| Fri Feb 22 11:19:47.615 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 400.0 }, max: { _id: 500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:293 w:11361 reslen:37 2077ms m30999| Fri Feb 22 11:19:47.615 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:47.616 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 6|1||5127544800fc1508e4df1ce2 based on: 5|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:47.617 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 6000|1, 
versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 27 m30999| Fri Feb 22 11:19:47.617 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:47.617 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 6000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 27 m30999| Fri Feb 22 11:19:47.618 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:47.647 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 500.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:47.647 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|13||000000000000000000000000min: { _id: 500.0 }max: { _id: 600.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:47.647 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:47.648 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 500.0 
}, max: { _id: 600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:47.648 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754538cfa44516705955f m31100| Fri Feb 22 11:19:47.649 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:47-512754538cfa445167059560", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531987649), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:47.650 [conn14] moveChunk request accepted at version 6|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:47.650 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:47.650 [migrateThread] starting receiving-end of migration of chunk { _id: 500.0 } -> { _id: 600.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:47.651 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:47.660 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.671 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 
}, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.681 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.691 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.707 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.739 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.804 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:47.932 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", 
from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:48.108 [conn3] end connection 165.225.128.186:41805 (14 connections now open) m31200| Fri Feb 22 11:19:48.109 [initandlisten] connection accepted from 165.225.128.186:33784 #21 (15 connections now open) m31100| Fri Feb 22 11:19:48.188 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:19:48.626 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:19:48.626 [Balancer] skipping balancing round because balancing is disabled m31200| Fri Feb 22 11:19:48.670 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:48.670 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 500.0 } -> { _id: 600.0 } m31200| Fri Feb 22 11:19:48.671 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 500.0 } -> { _id: 600.0 } m31100| Fri Feb 22 11:19:48.700 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:48.700 [conn14] moveChunk setting version to: 7|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:48.700 [conn20] Waiting for commit to finish m31200| Fri Feb 22 
11:19:48.701 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 500.0 } -> { _id: 600.0 } m31200| Fri Feb 22 11:19:48.701 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 500.0 } -> { _id: 600.0 } m31200| Fri Feb 22 11:19:48.701 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:48-512754544384cdc634ba2283", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531988701), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:48.711 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 500.0 }, max: { _id: 600.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:48.711 [conn14] moveChunk updating self version to: 7|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:48.711 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:48-512754548cfa445167059561", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531988711), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:48.712 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:48.712 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:48.712 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:48.712 [conn14] moveChunk starting delete for: test.foo 
from { _id: 500.0 } -> { _id: 600.0 } m31100| Fri Feb 22 11:19:49.600 [conn7] end connection 165.225.128.186:59785 (14 connections now open) m31100| Fri Feb 22 11:19:49.600 [initandlisten] connection accepted from 165.225.128.186:64933 #22 (15 connections now open) m31100| Fri Feb 22 11:19:49.722 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms m31100| Fri Feb 22 11:19:49.722 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 500.0 } -> { _id: 600.0 } m31100| Fri Feb 22 11:19:49.722 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:49.722 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:49.723 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:49.723 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:49-512754558cfa445167059562", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531989723), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 500.0 }, max: { _id: 600.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1010 } } m31100| Fri Feb 22 11:19:49.723 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 500.0 }, max: { _id: 600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:25 r:314 w:10697 reslen:37 2075ms m30999| Fri Feb 22 11:19:49.723 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:49.724 [conn1] 
ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 7|1||5127544800fc1508e4df1ce2 based on: 6|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:49.725 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 7000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 28 m30999| Fri Feb 22 11:19:49.725 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:49.725 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 7000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 28 m30999| Fri Feb 22 11:19:49.726 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:49.753 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 600.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:49.753 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|15||000000000000000000000000min: { _id: 600.0 }max: { _id: 700.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:49.754 [conn14] moveChunk 
waiting for full cleanup after move m31100| Fri Feb 22 11:19:49.754 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 600.0 }, max: { _id: 700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:49.755 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754558cfa445167059563 m31100| Fri Feb 22 11:19:49.755 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:49-512754558cfa445167059564", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531989755), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:49.756 [conn14] moveChunk request accepted at version 7|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:49.756 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:49.757 [migrateThread] starting receiving-end of migration of chunk { _id: 600.0 } -> { _id: 700.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:49.757 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:49.767 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 
1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:49.777 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:49.787 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:49.797 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:49.814 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:49.846 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:49.910 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:50.038 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:50.295 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:50.778 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:50.778 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 600.0 } -> { _id: 700.0 } m31200| Fri Feb 22 11:19:50.778 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 600.0 } -> { _id: 700.0 } m31100| Fri Feb 22 11:19:50.807 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:50.807 [conn14] moveChunk setting version to: 8|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:50.807 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:50.809 [migrateThread] 
migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 600.0 } -> { _id: 700.0 } m31200| Fri Feb 22 11:19:50.809 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 600.0 } -> { _id: 700.0 } m31200| Fri Feb 22 11:19:50.809 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:50-512754564384cdc634ba2284", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531990809), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:50.817 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 600.0 }, max: { _id: 700.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:50.817 [conn14] moveChunk updating self version to: 8|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:50.818 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:50-512754568cfa445167059565", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531990818), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:50.818 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:50.818 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:50.818 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:50.818 [conn14] moveChunk starting delete for: test.foo from { _id: 600.0 } -> { 
_id: 700.0 } m31100| Fri Feb 22 11:19:51.829 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms m31100| Fri Feb 22 11:19:51.830 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 600.0 } -> { _id: 700.0 } m31100| Fri Feb 22 11:19:51.830 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:51.830 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:51.840 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:51.840 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:51-512754578cfa445167059566", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531991840), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 600.0 }, max: { _id: 700.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } } m31100| Fri Feb 22 11:19:51.840 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 600.0 }, max: { _id: 700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:35 r:341 w:10844 reslen:37 2086ms m30999| Fri Feb 22 11:19:51.840 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:51.841 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 8|1||5127544800fc1508e4df1ce2 based on: 7|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:51.842 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { 
setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 8000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 29 m30999| Fri Feb 22 11:19:51.842 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:51.842 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 8000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 29 m30999| Fri Feb 22 11:19:51.843 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:51.870 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 700.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:51.870 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|17||000000000000000000000000min: { _id: 700.0 }max: { _id: 800.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:51.870 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:51.870 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: 
"rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 700.0 }, max: { _id: 800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:51.871 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754578cfa445167059567 m31100| Fri Feb 22 11:19:51.871 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:51-512754578cfa445167059568", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531991871), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:51.872 [conn14] moveChunk request accepted at version 8|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:51.872 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:51.873 [migrateThread] starting receiving-end of migration of chunk { _id: 700.0 } -> { _id: 800.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:51.873 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:51.883 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.893 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { 
_id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.903 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.913 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.930 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:51.962 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.026 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 
0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.154 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.410 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:19:52.892 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:52.892 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31200| Fri Feb 22 11:19:52.893 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31100| Fri Feb 22 11:19:52.923 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:52.923 [conn14] moveChunk setting version to: 9|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:52.923 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:52.923 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31200| Fri Feb 22 11:19:52.923 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 700.0 } -> { _id: 800.0 } m31200| 
Fri Feb 22 11:19:52.923 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:52-512754584384cdc634ba2285", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531992923), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1018, step4 of 5: 0, step5 of 5: 30 } } m31100| Fri Feb 22 11:19:52.933 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 700.0 }, max: { _id: 800.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:52.933 [conn14] moveChunk updating self version to: 9|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:52.934 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:52-512754588cfa445167059569", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531992934), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:52.934 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:52.934 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:52.934 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:52.934 [conn14] moveChunk starting delete for: test.foo from { _id: 700.0 } -> { _id: 800.0 } m31100| Fri Feb 22 11:19:53.943 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 985ms m31100| Fri Feb 22 11:19:53.943 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 700.0 } -> { 
_id: 800.0 } m31100| Fri Feb 22 11:19:53.943 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:53.943 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:53.944 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:53.944 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:53-512754598cfa44516705956a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531993944), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 700.0 }, max: { _id: 800.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1009 } } m31100| Fri Feb 22 11:19:53.944 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 700.0 }, max: { _id: 800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:313 w:11356 reslen:37 2073ms m30999| Fri Feb 22 11:19:53.944 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:53.945 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 9|1||5127544800fc1508e4df1ce2 based on: 8|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:53.946 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 9000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 30 m30999| Fri Feb 22 11:19:53.946 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:53.946 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 9000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 30 m30999| Fri Feb 22 11:19:53.947 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:19:53.973 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 800.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:19:53.973 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|19||000000000000000000000000min: { _id: 800.0 }max: { _id: 900.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:19:53.974 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:19:53.974 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 800.0 }, max: { _id: 900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", 
secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:19:53.975 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754598cfa44516705956b m31100| Fri Feb 22 11:19:53.975 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:53-512754598cfa44516705956c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531993975), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:53.976 [conn14] moveChunk request accepted at version 9|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:19:53.976 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:19:53.976 [migrateThread] starting receiving-end of migration of chunk { _id: 800.0 } -> { _id: 900.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:19:53.977 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:19:53.986 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:53.996 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.007 [conn14] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.017 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.033 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.065 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.129 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.258 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 
}, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:54.514 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:19:54.627 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:19:54.627 [Balancer] skipping balancing round because balancing is disabled m31200| Fri Feb 22 11:19:54.997 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:19:54.997 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31200| Fri Feb 22 11:19:54.997 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31100| Fri Feb 22 11:19:55.026 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:19:55.026 [conn14] moveChunk setting version to: 10|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:19:55.026 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:19:55.028 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31200| Fri Feb 22 11:19:55.028 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 800.0 } -> { _id: 900.0 } m31200| Fri Feb 22 11:19:55.028 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:55-5127545b4384cdc634ba2286", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531995028), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:19:55.036 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 800.0 }, max: { _id: 900.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:19:55.037 [conn14] moveChunk updating self version to: 10|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:19:55.038 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:55-5127545b8cfa44516705956d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531995038), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:19:55.038 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:55.038 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:55.038 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:19:55.038 [conn14] moveChunk starting delete for: test.foo from { _id: 800.0 } -> { _id: 900.0 } m31100| Fri Feb 22 11:19:56.050 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms m31100| Fri Feb 22 11:19:56.050 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 800.0 } -> { _id: 900.0 } m31100| Fri Feb 22 11:19:56.050 [conn14] MigrateFromStatus::done 
About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:19:56.050 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:19:56.050 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:19:56.050 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:56-5127545c8cfa44516705956e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531996050), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 800.0 }, max: { _id: 900.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1012 } } m31100| Fri Feb 22 11:19:56.051 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 800.0 }, max: { _id: 900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:366 w:11162 reslen:37 2076ms m30999| Fri Feb 22 11:19:56.051 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:19:56.052 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 10|1||5127544800fc1508e4df1ce2 based on: 9|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:19:56.052 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 10000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 
0x1187540 31
m30999| Fri Feb 22 11:19:56.052 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.053 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 10000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 31
m30999| Fri Feb 22 11:19:56.053 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.106 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 900.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:56.106 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|21||000000000000000000000000min: { _id: 900.0 }max: { _id: 1000.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:56.106 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:56.107 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 900.0 }, max: { _id: 1000.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:56.108 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127545c8cfa44516705956f
m31100| Fri Feb 22 11:19:56.108 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:56-5127545c8cfa445167059570", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531996108), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:56.109 [conn14] moveChunk request accepted at version 10|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:56.109 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:56.109 [migrateThread] starting receiving-end of migration of chunk { _id: 900.0 } -> { _id: 1000.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:56.110 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:56.119 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.130 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.140 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.150 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.166 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.198 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.262 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.391 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:56.647 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:19:56.791 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.792 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996792), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996793), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996793), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996793), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.793 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:19:56.794 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361531996794), ok: 1.0 }
m30999| Fri Feb 22 11:19:56.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:19:56.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m31200| Fri Feb 22 11:19:57.129 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:57.129 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31200| Fri Feb 22 11:19:57.130 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31100| Fri Feb 22 11:19:57.159 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:57.159 [conn14] moveChunk setting version to: 11|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:57.159 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:57.160 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31200| Fri Feb 22 11:19:57.160 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 900.0 } -> { _id: 1000.0 }
m31200| Fri Feb 22 11:19:57.161 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:57-5127545d4384cdc634ba2287", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531997161), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1019, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:19:57.169 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 900.0 }, max: { _id: 1000.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:57.169 [conn14] moveChunk updating self version to: 11|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:19:57.170 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:57-5127545d8cfa445167059571", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531997170), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:57.170 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:57.170 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:57.170 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:57.170 [conn14] moveChunk starting delete for: test.foo from { _id: 900.0 } -> { _id: 1000.0 }
m31100| Fri Feb 22 11:19:58.182 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms
m31100| Fri Feb 22 11:19:58.182 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 900.0 } -> { _id: 1000.0 }
m31100| Fri Feb 22 11:19:58.182 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:58.182 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:58.183 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:19:58.183 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:58-5127545e8cfa445167059572", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531998183), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 900.0 }, max: { _id: 1000.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:19:58.183 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 900.0 }, max: { _id: 1000.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:317 w:11088 reslen:37 2076ms
m30999| Fri Feb 22 11:19:58.183 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:19:58.184 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 11|1||5127544800fc1508e4df1ce2 based on: 10|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:19:58.185 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 11000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 32
m30999| Fri Feb 22 11:19:58.185 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:58.185 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 11000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 32
m30999| Fri Feb 22 11:19:58.186 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:19:58.213 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1000.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:19:58.213 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|23||000000000000000000000000min: { _id: 1000.0 }max: { _id: 1100.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:19:58.213 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:19:58.213 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1000.0 }, max: { _id: 1100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1000.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:19:58.214 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127545e8cfa445167059573
m31100| Fri Feb 22 11:19:58.214 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:58-5127545e8cfa445167059574", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531998214), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:58.215 [conn14] moveChunk request accepted at version 11|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:19:58.216 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:19:58.216 [migrateThread] starting receiving-end of migration of chunk { _id: 1000.0 } -> { _id: 1100.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:19:58.216 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:19:58.226 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.236 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.246 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.256 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.273 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.305 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.369 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.497 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:58.753 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:19:59.238 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:19:59.238 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31200| Fri Feb 22 11:19:59.239 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31100| Fri Feb 22 11:19:59.266 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:19:59.266 [conn14] moveChunk setting version to: 12|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:19:59.266 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:19:59.269 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31200| Fri Feb 22 11:19:59.269 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1000.0 } -> { _id: 1100.0 }
m31200| Fri Feb 22 11:19:59.269 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:59-5127545f4384cdc634ba2288", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361531999269), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1021, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:19:59.276 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1000.0 }, max: { _id: 1100.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:19:59.276 [conn14] moveChunk updating self version to: 12|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:19:59.277 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:19:59-5127545f8cfa445167059575", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361531999277), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:19:59.277 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:19:59.277 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:19:59.277 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:19:59.277 [conn14] moveChunk starting delete for: test.foo from { _id: 1000.0 } -> { _id: 1100.0 }
m31100| Fri Feb 22 11:20:00.289 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms
m31100| Fri Feb 22 11:20:00.289 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1000.0 } -> { _id: 1100.0 }
m31100| Fri Feb 22 11:20:00.289 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:00.289 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:00.290 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:00.290 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:00-512754608cfa445167059576", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532000290), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1000.0 }, max: { _id: 1100.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:20:00.290 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1000.0 }, max: { _id: 1100.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1000.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:312 w:11094 reslen:37 2077ms
m30999| Fri Feb 22 11:20:00.290 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:00.292 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 33 version: 12|1||5127544800fc1508e4df1ce2 based on: 11|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:00.293 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 12000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 33
m30999| Fri Feb 22 11:20:00.293 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:00.293 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 12000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 33
m30999| Fri Feb 22 11:20:00.294 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:00.322 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1100.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:00.322 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|25||000000000000000000000000min: { _id: 1100.0 }max: { _id: 1200.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:00.322 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:00.323 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1100.0 }, max: { _id: 1200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:00.323 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754608cfa445167059577
m31100| Fri Feb 22 11:20:00.324 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:00-512754608cfa445167059578", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532000323), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:00.324 [conn14] moveChunk request accepted at version 12|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:00.325 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:00.325 [migrateThread] starting receiving-end of migration of chunk { _id: 1100.0 } -> { _id: 1200.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:00.326 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:00.335 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.345 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.356 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.366 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.382 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.414 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.478 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:00.606 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:20:00.628 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:00.628 [Balancer] skipping balancing round because balancing is disabled
m31100| Fri Feb 22 11:20:00.863 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:20:01.347 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:01.347 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31200| Fri Feb 22 11:20:01.347 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31100| Fri Feb 22 11:20:01.375 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:01.375 [conn14] moveChunk setting version to: 13|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:01.375 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:01.378 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31200| Fri Feb 22 11:20:01.378 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1100.0 } -> { _id: 1200.0 }
m31200| Fri Feb 22 11:20:01.378 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:01-512754614384cdc634ba2289", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532001378), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:20:01.385 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1100.0 }, max: { _id: 1200.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:01.385 [conn14] moveChunk updating self version to: 13|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:01.386 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:01-512754618cfa445167059579", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532001386), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:01.386 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:01.386 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:01.386 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:01.386 [conn14] moveChunk starting delete for: test.foo from { _id: 1100.0 } -> { _id: 1200.0 }
m31100| Fri Feb 22 11:20:02.400 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:20:02.400 [conn14] moveChunk deleted 100
documents for test.foo from { _id: 1100.0 } -> { _id: 1200.0 } m31100| Fri Feb 22 11:20:02.400 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:02.400 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:02.401 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:02.401 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:02-512754628cfa44516705957a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532002401), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1100.0 }, max: { _id: 1200.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1014 } } m31100| Fri Feb 22 11:20:02.401 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1100.0 }, max: { _id: 1200.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1100.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:311 w:11663 reslen:37 2078ms m30999| Fri Feb 22 11:20:02.401 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:02.402 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 13|1||5127544800fc1508e4df1ce2 based on: 12|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:02.403 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 13000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: 
ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 34 m30999| Fri Feb 22 11:20:02.403 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:02.403 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 13000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 34 m30999| Fri Feb 22 11:20:02.404 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:02.432 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1200.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:02.432 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|27||000000000000000000000000min: { _id: 1200.0 }max: { _id: 1300.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:02.432 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:02.433 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1200.0 }, max: { _id: 1300.0 }, maxChunkSizeBytes: 1048576, 
shardId: "test.foo-_id_1200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:02.434 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754628cfa44516705957b m31100| Fri Feb 22 11:20:02.434 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:02-512754628cfa44516705957c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532002434), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:02.435 [conn14] moveChunk request accepted at version 13|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:02.435 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:02.435 [migrateThread] starting receiving-end of migration of chunk { _id: 1200.0 } -> { _id: 1300.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:02.436 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:02.445 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.456 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri 
Feb 22 11:20:02.466 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.476 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.492 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.524 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.589 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.717 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:02.973 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:03.461 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:03.462 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31200| Fri Feb 22 11:20:03.462 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31100| Fri Feb 22 11:20:03.486 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:03.486 [conn14] moveChunk setting version to: 14|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:03.486 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:03.493 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31200| Fri Feb 22 11:20:03.493 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1200.0 } -> { _id: 1300.0 } m31200| Fri Feb 22 11:20:03.493 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:03-512754634384cdc634ba228a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532003493), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1025, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:03.496 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1200.0 }, max: { _id: 1300.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:03.496 [conn14] moveChunk updating self version to: 14|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:03.497 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:03-512754638cfa44516705957d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532003497), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:03.497 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:03.497 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:03.497 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:03.497 [conn14] moveChunk starting delete for: test.foo from { _id: 1200.0 } -> { _id: 1300.0 } m31100| Fri Feb 22 11:20:04.509 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms m31100| Fri Feb 22 11:20:04.509 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1200.0 } -> { _id: 1300.0 } m31100| Fri Feb 22 11:20:04.509 [conn14] 
MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:04.509 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:04.510 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:04.510 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:04-512754648cfa44516705957e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532004510), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1200.0 }, max: { _id: 1300.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1012 } } m31100| Fri Feb 22 11:20:04.510 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1200.0 }, max: { _id: 1300.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1200.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:34 r:292 w:11307 reslen:37 2077ms m30999| Fri Feb 22 11:20:04.510 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:04.511 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 14|1||5127544800fc1508e4df1ce2 based on: 13|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 14000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 35 m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 14000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 35 m30999| Fri Feb 22 11:20:04.512 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:04.549 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1300.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:04.549 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|29||000000000000000000000000min: { _id: 1300.0 }max: { _id: 1400.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:04.549 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:04.549 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1300.0 }, max: { _id: 1400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1300.0", configdb: 
"bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:04.550 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754648cfa44516705957f m31100| Fri Feb 22 11:20:04.550 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:04-512754648cfa445167059580", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532004550), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:04.551 [conn14] moveChunk request accepted at version 14|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:04.551 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:04.551 [migrateThread] starting receiving-end of migration of chunk { _id: 1300.0 } -> { _id: 1400.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:04.552 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:04.562 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.572 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.582 [conn14] moveChunk data 
transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.592 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.609 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.641 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.705 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:04.833 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:05.089 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:05.576 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:05.576 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31200| Fri Feb 22 11:20:05.576 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31100| Fri Feb 22 11:20:05.602 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:05.602 [conn14] moveChunk setting version to: 15|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:05.602 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:05.607 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31200| Fri Feb 22 11:20:05.607 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1300.0 } -> { _id: 1400.0 } m31200| Fri Feb 22 11:20:05.607 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:05-512754654384cdc634ba228b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532005607), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1023, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:05.612 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1300.0 }, max: { _id: 1400.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:05.612 [conn14] moveChunk updating self version to: 15|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:05.613 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:05-512754658cfa445167059581", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532005613), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:05.613 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:05.613 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:05.613 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:05.613 [conn14] moveChunk starting delete for: test.foo from { _id: 1300.0 } -> { _id: 1400.0 } m30999| Fri Feb 22 11:20:06.609 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 11:20:06 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361531976:16838', sleeping for 30000ms m31100| Fri Feb 22 
11:20:06.627 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms m31100| Fri Feb 22 11:20:06.627 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1300.0 } -> { _id: 1400.0 } m31100| Fri Feb 22 11:20:06.627 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:06.627 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:06.627 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:06.627 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:06-512754668cfa445167059582", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532006627), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1300.0 }, max: { _id: 1400.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1013 } } m31100| Fri Feb 22 11:20:06.627 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1300.0 }, max: { _id: 1400.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1300.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:269 w:10214 reslen:37 2078ms m30999| Fri Feb 22 11:20:06.628 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:06.629 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 36 version: 15|1||5127544800fc1508e4df1ce2 based on: 14|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:06.629 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 11:20:06.629 
BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:20:06.629 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:06.629 [conn1] connected connection! m29000| Fri Feb 22 11:20:06.629 [initandlisten] connection accepted from 165.225.128.186:50718 #11 (11 connections now open) m30999| Fri Feb 22 11:20:06.629 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 11:20:06.630 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 15000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 36 m30999| Fri Feb 22 11:20:06.630 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:06.630 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 15000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 36 m30999| Fri Feb 22 11:20:06.631 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:06.660 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1400.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:06.660 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|31||000000000000000000000000min: { _id: 1400.0 
}max: { _id: 1500.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:06.661 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:06.661 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1400.0 }, max: { _id: 1500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:06.662 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754668cfa445167059583
m31100| Fri Feb 22 11:20:06.662 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:06-512754668cfa445167059584", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532006662), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:06.663 [conn14] moveChunk request accepted at version 15|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:06.663 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:06.663 [migrateThread] starting receiving-end of migration of chunk { _id: 1400.0 } -> { _id: 1500.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:06.664 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:06.674 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:06.684 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:06.694 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:06.705 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:06.721 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:06.753 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006794), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.794 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006794), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006795), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006795), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006795), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.795 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006796), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006796), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532006796), ok: 1.0 }
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:06.796 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:06.817 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:06.945 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:07.202 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31200| Fri Feb 22 11:20:07.690 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:07.690 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31200| Fri Feb 22 11:20:07.691 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31100| Fri Feb 22 11:20:07.714 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:07.714 [conn14] moveChunk setting version to: 16|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:07.714 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:07.722 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31200| Fri Feb 22 11:20:07.722 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1400.0 } -> { _id: 1500.0 }
m31200| Fri Feb 22 11:20:07.722 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:07-512754674384cdc634ba228c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532007722), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1025, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:20:07.724 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1400.0 }, max: { _id: 1500.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:07.724 [conn14] moveChunk updating self version to: 16|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:07.725 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:07-512754678cfa445167059585", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532007725), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:07.725 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:07.725 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:07.725 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:07.725 [conn14] moveChunk starting delete for: test.foo from { _id: 1400.0 } -> { _id: 1500.0 }
m31100| Fri Feb 22 11:20:08.739 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms
m31100| Fri Feb 22 11:20:08.739 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1400.0 } -> { _id: 1500.0 }
m31100| Fri Feb 22 11:20:08.739 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:08.739 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:08.740 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:08.740 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:08-512754688cfa445167059586", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532008740), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1400.0 }, max: { _id: 1500.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1013 } }
m31100| Fri Feb 22 11:20:08.740 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1400.0 }, max: { _id: 1500.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1400.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:24 r:290 w:11489 reslen:37 2079ms
m30999| Fri Feb 22 11:20:08.740 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:08.741 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 16|1||5127544800fc1508e4df1ce2 based on: 15|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:08.742 [conn1] setShardVersion  rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 16000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 37
m30999| Fri Feb 22 11:20:08.742 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:08.742 [conn1] setShardVersion  rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 16000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 37
m30999| Fri Feb 22 11:20:08.743 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:08.770 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1500.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:08.770 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|33||000000000000000000000000min: { _id: 1500.0 }max: { _id: 1600.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:08.770 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:08.770 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1500.0 }, max: { _id: 1600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:08.771 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754688cfa445167059587
m31100| Fri Feb 22 11:20:08.771 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:08-512754688cfa445167059588", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532008771), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:08.772 [conn14] moveChunk request accepted at version 16|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:08.773 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:08.773 [migrateThread] starting receiving-end of migration of chunk { _id: 1500.0 } -> { _id: 1600.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:08.773 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:08.783 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:08.793 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:08.803 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:08.814 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:08.830 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:08.862 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:08.926 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:09.054 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:09.311 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31200| Fri Feb 22 11:20:09.795 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:09.795 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31200| Fri Feb 22 11:20:09.795 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31100| Fri Feb 22 11:20:09.823 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:09.823 [conn14] moveChunk setting version to: 17|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:09.823 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:09.825 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31200| Fri Feb 22 11:20:09.825 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1500.0 } -> { _id: 1600.0 }
m31200| Fri Feb 22 11:20:09.825 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:09-512754694384cdc634ba228d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532009825), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 30 } }
m31100| Fri Feb 22 11:20:09.834 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1500.0 }, max: { _id: 1600.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:09.834 [conn14] moveChunk updating self version to: 17|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:09.834 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:09-512754698cfa445167059589", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532009834), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:09.834 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:09.834 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:09.835 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:09.835 [conn14] moveChunk starting delete for: test.foo from { _id: 1500.0 } -> { _id: 1600.0 }
m31100| Fri Feb 22 11:20:10.846 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 989ms
m31100| Fri Feb 22 11:20:10.846 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1500.0 } -> { _id: 1600.0 }
m31100| Fri Feb 22 11:20:10.846 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:10.846 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:10.847 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
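The donor-side "moveChunk.from" metadata events in this log break each migration into six timed steps (milliseconds). Summing the steps recovers nearly all of the total command time reported on the same connection; the few leftover milliseconds are untimed bookkeeping between steps. As a sketch, using the step values from the first migration's event above:

```python
# Step timings (ms) copied from the first "moveChunk.from" metadata event
# in this log; total_reported_ms comes from the matching
# "command admin.$cmd ... 2079ms" line on conn14.
steps = {"step1": 0, "step2": 2, "step3": 0, "step4": 1050, "step5": 11, "step6": 1013}
total_reported_ms = 2079

timed = sum(steps.values())
print(timed, total_reported_ms - timed)  # → 2076 3
```

Step 4 (the clone/catch-up phase, throttled by `secondaryThrottle`) and step 6 (the post-commit range delete, awaited because of `waitForDelete`) dominate, which is consistent with the roughly one-second replication waits logged by `Helpers::removeRangeUnlocked` on both shards.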
m31100| Fri Feb 22 11:20:10.847 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:10-5127546a8cfa44516705958a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532010847), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1500.0 }, max: { _id: 1600.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:20:10.847 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1500.0 }, max: { _id: 1600.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1500.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:30 r:288 w:11084 reslen:37 2076ms
m30999| Fri Feb 22 11:20:10.847 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:10.848 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 17|1||5127544800fc1508e4df1ce2 based on: 16|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:10.849 [conn1] setShardVersion  rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 17000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 38
m30999| Fri Feb 22 11:20:10.849 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:10.849 [conn1] setShardVersion  rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 17000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 38
m30999| Fri Feb 22 11:20:10.850 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:10.878 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1600.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:10.878 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|35||000000000000000000000000min: { _id: 1600.0 }max: { _id: 1700.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:10.878 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:10.878 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1600.0 }, max: { _id: 1700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:10.879 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127546a8cfa44516705958b
m31100| Fri Feb 22 11:20:10.879 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:10-5127546a8cfa44516705958c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532010879), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:10.880 [conn14] moveChunk request accepted at version 17|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:10.880 [conn14] moveChunk number of documents: 100
m31200| Fri Feb 22 11:20:10.881 [migrateThread] starting receiving-end of migration of chunk { _id: 1600.0 } -> { _id: 1700.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:10.881 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:10.891 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:10.901 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:10.911 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:10.921 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:10.937 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:10.970 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:11.034 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:11.162 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:11.418 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31200| Fri Feb 22 11:20:11.905 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:11.905 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31200| Fri Feb 22 11:20:11.905 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31100| Fri Feb 22 11:20:11.931 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m31100| Fri Feb 22 11:20:11.931 [conn14] moveChunk setting version to: 18|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:11.931 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:11.936 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31200| Fri Feb 22 11:20:11.936 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1600.0 } -> { _id: 1700.0 }
m31200| Fri Feb 22 11:20:11.936 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:11-5127546b4384cdc634ba228e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532011936), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1023, step4 of 5: 0, step5 of 5: 31 } }
m31100| Fri Feb 22 11:20:11.941 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1600.0 }, max: { _id: 1700.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:11.941 [conn14] moveChunk updating self version to: 18|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:11.942 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:11-5127546b8cfa44516705958d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532011942), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:11.942 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:11.942 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:11.942 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:11.942 [conn14] moveChunk starting delete for: test.foo from { _id: 1600.0 } -> { _id: 1700.0 }
m30999| Fri Feb 22 11:20:12.631 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:12.631 [Balancer] skipping balancing round because balancing is disabled
m31100| Fri Feb 22 11:20:12.953 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms
m31100| Fri Feb 22 11:20:12.953 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1600.0 } -> { _id: 1700.0 }
m31100| Fri Feb 22 11:20:12.953 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:12.953 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:12.953 [conn14] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. m31100| Fri Feb 22 11:20:12.953 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:12-5127546c8cfa44516705958e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532012953), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1600.0 }, max: { _id: 1700.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 1010 } } m31100| Fri Feb 22 11:20:12.953 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1600.0 }, max: { _id: 1700.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1600.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:315 w:10751 reslen:37 2075ms m30999| Fri Feb 22 11:20:12.953 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:12.955 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 18|1||5127544800fc1508e4df1ce2 based on: 17|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:12.956 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 18000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 39 m30999| Fri Feb 22 11:20:12.956 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 
11:20:12.956 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 18000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 39 m30999| Fri Feb 22 11:20:12.957 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:12.988 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1700.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:12.988 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|37||000000000000000000000000min: { _id: 1700.0 }max: { _id: 1800.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:12.988 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:12.988 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1700.0 }, max: { _id: 1800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:12.989 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127546c8cfa44516705958f m31100| Fri Feb 22 11:20:12.989 [conn14] about to log 
metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:12-5127546c8cfa445167059590", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532012989), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:12.991 [conn14] moveChunk request accepted at version 18|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:12.991 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:12.991 [migrateThread] starting receiving-end of migration of chunk { _id: 1700.0 } -> { _id: 1800.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:12.992 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:13.001 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.012 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.022 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 
} my mem used: 0 m31100| Fri Feb 22 11:20:13.032 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.048 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.081 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.145 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.273 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:13.529 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", 
from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 11:20:14.012 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:14.012 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31200| Fri Feb 22 11:20:14.013 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31100| Fri Feb 22 11:20:14.041 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:14.042 [conn14] moveChunk setting version to: 19|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:14.042 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:14.043 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31200| Fri Feb 22 11:20:14.043 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1700.0 } -> { _id: 1800.0 } m31200| Fri Feb 22 11:20:14.043 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:14-5127546e4384cdc634ba228f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532014043), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1020, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:14.052 [conn14] moveChunk migrate commit accepted 
by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1700.0 }, max: { _id: 1800.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:14.052 [conn14] moveChunk updating self version to: 19|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:14.053 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:14-5127546e8cfa445167059591", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532014053), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:14.053 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:14.053 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:14.053 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:14.053 [conn14] moveChunk starting delete for: test.foo from { _id: 1700.0 } -> { _id: 1800.0 } m31100| Fri Feb 22 11:20:15.064 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 988ms m31100| Fri Feb 22 11:20:15.064 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1700.0 } -> { _id: 1800.0 } m31100| Fri Feb 22 11:20:15.064 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:15.064 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:15.064 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked. 
m31100| Fri Feb 22 11:20:15.065 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:15-5127546f8cfa445167059592", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532015065), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1700.0 }, max: { _id: 1800.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } } m31100| Fri Feb 22 11:20:15.065 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1700.0 }, max: { _id: 1800.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1700.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:309 w:11307 reslen:37 2076ms m30999| Fri Feb 22 11:20:15.065 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:20:15.066 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 19|1||5127544800fc1508e4df1ce2 based on: 18|1||5127544800fc1508e4df1ce2 m30999| Fri Feb 22 11:20:15.067 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 19000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 40 m30999| Fri Feb 22 11:20:15.067 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:15.067 [conn1] setShardVersion rs1-rs1 
bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 19000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 40 m30999| Fri Feb 22 11:20:15.068 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 } m30999| Fri Feb 22 11:20:15.095 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1800.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true } m30999| Fri Feb 22 11:20:15.095 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|39||000000000000000000000000min: { _id: 1800.0 }max: { _id: 1900.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m31100| Fri Feb 22 11:20:15.095 [conn14] moveChunk waiting for full cleanup after move m31100| Fri Feb 22 11:20:15.095 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1800.0 }, max: { _id: 1900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } m31100| Fri Feb 22 11:20:15.096 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 5127546f8cfa445167059593 m31100| Fri Feb 22 11:20:15.096 [conn14] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:15-5127546f8cfa445167059594", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532015096), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:15.097 [conn14] moveChunk request accepted at version 19|1||5127544800fc1508e4df1ce2 m31100| Fri Feb 22 11:20:15.098 [conn14] moveChunk number of documents: 100 m31200| Fri Feb 22 11:20:15.098 [migrateThread] starting receiving-end of migration of chunk { _id: 1800.0 } -> { _id: 1900.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected) m31200| Fri Feb 22 11:20:15.099 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 11:20:15.108 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.118 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.128 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 
m31100| Fri Feb 22 11:20:15.139 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.155 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.187 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.251 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.380 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:15.636 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31201| Fri Feb 22 11:20:15.930 [conn13] end connection 165.225.128.186:45106 (10 connections now open) m31201| Fri Feb 22 11:20:15.931 [initandlisten] connection accepted from 165.225.128.186:38149 #14 (11 connections now open) m31200| Fri Feb 22 11:20:16.122 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 11:20:16.122 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31200| Fri Feb 22 11:20:16.122 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31100| Fri Feb 22 11:20:16.148 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 11:20:16.148 [conn14] moveChunk setting version to: 20|0||5127544800fc1508e4df1ce2 m31200| Fri Feb 22 11:20:16.148 [conn20] Waiting for commit to finish m31200| Fri Feb 22 11:20:16.153 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31200| Fri Feb 22 11:20:16.153 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1800.0 } -> { _id: 1900.0 } m31200| Fri Feb 22 11:20:16.153 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:16-512754704384cdc634ba2290", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532016153), what: "moveChunk.to", ns: 
"test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 1022, step4 of 5: 0, step5 of 5: 31 } } m31100| Fri Feb 22 11:20:16.158 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1800.0 }, max: { _id: 1900.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 100, clonedBytes: 2900, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 11:20:16.158 [conn14] moveChunk updating self version to: 20|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo' m31100| Fri Feb 22 11:20:16.159 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:16-512754708cfa445167059595", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532016159), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 11:20:16.159 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 11:20:16.159 [conn14] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 11:20:16.159 [conn14] doing delete inline for cleanup of chunk data m31100| Fri Feb 22 11:20:16.159 [conn14] moveChunk starting delete for: test.foo from { _id: 1800.0 } -> { _id: 1900.0 } m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0 m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, localTime: new Date(1361532016797), ok: 1.0 } m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.797 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016797), ok: 1.0 } m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016798), ok: 1.0 } m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] 
ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016798), ok: 1.0 } m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30999| Fri Feb 22 11:20:16.798 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016798), ok: 1.0 } m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 
16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016799), ok: 1.0 }
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016799), ok: 1.0 }
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361532016799), ok: 1.0 }
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 11:20:16.799 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:17.171 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 987ms
m31100| Fri Feb 22 11:20:17.171 [conn14] moveChunk deleted 100 documents for test.foo from { _id: 1800.0 } -> { _id: 1900.0 }
m31100| Fri Feb 22 11:20:17.171 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:17.171 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:17.172 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:17.172 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:17-512754718cfa445167059596", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532017172), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1800.0 }, max: { _id: 1900.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 1011 } }
m31100| Fri Feb 22 11:20:17.172 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1800.0 }, max: { _id: 1900.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1800.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:319 w:11955 reslen:37 2076ms
m30999| Fri Feb 22 11:20:17.172 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:17.173 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 20|1||5127544800fc1508e4df1ce2 based on: 19|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:17.174 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 20000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 41
m30999| Fri Feb 22 11:20:17.174 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:17.174 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 20000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 41
m30999| Fri Feb 22 11:20:17.175 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:17.202 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 1900.0 }, to: "rs1-rs1", _secondaryThrottle: true, _waitForDelete: true }
m30999| Fri Feb 22 11:20:17.202 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101lastmod: 1|40||000000000000000000000000min: { _id: 1900.0 }max: { _id: MaxKey }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201
m31100| Fri Feb 22 11:20:17.202 [conn14] moveChunk waiting for full cleanup after move
m31100| Fri Feb 22 11:20:17.203 [conn14] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1900.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true }
m31100| Fri Feb 22 11:20:17.203 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' acquired, ts : 512754718cfa445167059597
m31100| Fri Feb 22 11:20:17.203 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:17-512754718cfa445167059598", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532017203), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:17.205 [conn14] moveChunk request accepted at version 20|1||5127544800fc1508e4df1ce2
m31100| Fri Feb 22 11:20:17.205 [conn14] moveChunk number of documents: 200
m31200| Fri Feb 22 11:20:17.205 [migrateThread] starting receiving-end of migration of chunk { _id: 1900.0 } -> { _id: MaxKey } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101 (1 slaves detected)
m31200| Fri Feb 22 11:20:17.206 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 11:20:17.215 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.226 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 58, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.236 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 87, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.246 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.262 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 174, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.295 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 261, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.359 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31101| Fri Feb 22 11:20:17.394 [conn14] end connection 165.225.128.186:40295 (10 connections now open)
m31101| Fri Feb 22 11:20:17.394 [initandlisten] connection accepted from 165.225.128.186:61602 #15 (11 connections now open)
m31100| Fri Feb 22 11:20:17.487 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 812, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:17.744 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 1537, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 11:20:18.113 [conn21] end connection 165.225.128.186:33784 (14 connections now open)
m31200| Fri Feb 22 11:20:18.113 [initandlisten] connection accepted from 165.225.128.186:38882 #22 (15 connections now open)
m31100| Fri Feb 22 11:20:18.256 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 103, clonedBytes: 2987, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:20:18.632 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:18.632 [Balancer] skipping balancing round because balancing is disabled
m31200| Fri Feb 22 11:20:19.250 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 11:20:19.250 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey }
m31200| Fri Feb 22 11:20:19.251 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey }
m31100| Fri Feb 22 11:20:19.280 [conn14] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 200, clonedBytes: 5800, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 11:20:19.280 [conn14] moveChunk setting version to: 21|0||5127544800fc1508e4df1ce2
m31200| Fri Feb 22 11:20:19.280 [conn20] Waiting for commit to finish
m31200| Fri Feb 22 11:20:19.281 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey }
m31200| Fri Feb 22 11:20:19.281 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1900.0 } -> { _id: MaxKey }
m31200| Fri Feb 22 11:20:19.281 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:19-512754734384cdc634ba2291", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532019281), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 2044, step4 of 5: 0, step5 of 5: 30 } }
m31100| Fri Feb 22 11:20:19.290 [conn14] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", min: { _id: 1900.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 200, clonedBytes: 5800, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 11:20:19.290 [conn14] moveChunk updating self version to: 21|1||5127544800fc1508e4df1ce2 through { _id: MinKey } -> { _id: 0.0 } for collection 'test.foo'
m31100| Fri Feb 22 11:20:19.291 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:19-512754738cfa445167059599", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532019291), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 11:20:19.291 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:19.291 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:19.291 [conn14] doing delete inline for cleanup of chunk data
m31100| Fri Feb 22 11:20:19.291 [conn14] moveChunk starting delete for: test.foo from { _id: 1900.0 } -> { _id: MaxKey }
m31100| Fri Feb 22 11:20:19.605 [conn22] end connection 165.225.128.186:64933 (14 connections now open)
m31100| Fri Feb 22 11:20:19.605 [initandlisten] connection accepted from 165.225.128.186:53699 #23 (15 connections now open)
m31100| Fri Feb 22 11:20:21.325 [conn14] Helpers::removeRangeUnlocked time spent waiting for replication: 1988ms
m31100| Fri Feb 22 11:20:21.325 [conn14] moveChunk deleted 200 documents for test.foo from { _id: 1900.0 } -> { _id: MaxKey }
m31100| Fri Feb 22 11:20:21.325 [conn14] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 11:20:21.325 [conn14] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 11:20:21.326 [conn14] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361531976:14633' unlocked.
m31100| Fri Feb 22 11:20:21.326 [conn14] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:21-512754758cfa44516705959a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38058", time: new Date(1361532021326), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1900.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 2074, step5 of 6: 11, step6 of 6: 2034 } }
m31100| Fri Feb 22 11:20:21.326 [conn14] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1900.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1900.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: true } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:525 w:20654 reslen:37 4123ms
m30999| Fri Feb 22 11:20:21.326 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:21.327 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 21|1||5127544800fc1508e4df1ce2 based on: 20|1||5127544800fc1508e4df1ce2
m30999| Fri Feb 22 11:20:21.328 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 21000|1, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101" } 0x1187540 42
m30999| Fri Feb 22 11:20:21.328 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:21.328 [conn1] setShardVersion rs1-rs1 bs-smartos-x86-64-1.10gen.cc:31200 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 21000|0, versionEpoch: ObjectId('5127544800fc1508e4df1ce2'), serverID: ObjectId('5127544800fc1508e4df1ce0'), shard: "rs1-rs1", shardHost: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201" } 0x1188dc0 42
m30999| Fri Feb 22 11:20:21.329 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|0, oldVersionEpoch: ObjectId('5127544800fc1508e4df1ce2'), ok: 1.0 }
m30999| Fri Feb 22 11:20:21.346 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31101| Fri Feb 22 11:20:21.347 [conn8] end connection 165.225.128.186:47479 (10 connections now open)
m31201| Fri Feb 22 11:20:21.347 [conn7] end connection 165.225.128.186:61872 (10 connections now open)
m31200| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:47705 (14 connections now open)
m31101| Fri Feb 22 11:20:21.347 [conn7] end connection 165.225.128.186:44068 (10 connections now open)
m31201| Fri Feb 22 11:20:21.347 [conn6] end connection 165.225.128.186:45330 (10 connections now open)
m31100| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:55630 (14 connections now open)
m31100| Fri Feb 22 11:20:21.347 [conn14] end connection 165.225.128.186:38058 (14 connections now open)
m31100| Fri Feb 22 11:20:21.347 [conn13] end connection 165.225.128.186:52897 (14 connections now open)
m31100| Fri Feb 22 11:20:21.347 [conn15] end connection 165.225.128.186:40845 (13 connections now open)
m31200| Fri Feb 22 11:20:21.347 [conn11] end connection 165.225.128.186:41224 (14 connections now open)
m31101| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:59280 (8 connections now open)
m31200| Fri Feb 22 11:20:21.347 [conn13] end connection 165.225.128.186:46311 (14 connections now open)
m31100| Fri Feb 22 11:20:21.347 [conn17] end connection 165.225.128.186:62098 (11 connections now open)
m31201| Fri Feb 22 11:20:21.347 [conn11] end connection 165.225.128.186:39345 (8 connections now open)
m31101| Fri Feb 22 11:20:21.347 [conn13] end connection 165.225.128.186:43377 (8 connections now open)
m31201| Fri Feb 22 11:20:21.347 [conn12] end connection 165.225.128.186:42134 (8 connections now open)
m31200| Fri Feb 22 11:20:21.347 [conn15] end connection 165.225.128.186:48503 (14 connections now open)
m29000| Fri Feb 22 11:20:21.359 [conn3] end connection 165.225.128.186:56121 (10 connections now open)
m29000| Fri Feb 22 11:20:21.359 [conn11] end connection 165.225.128.186:50718 (10 connections now open)
m29000| Fri Feb 22 11:20:21.359 [conn4] end connection 165.225.128.186:58413 (10 connections now open)
m29000| Fri Feb 22 11:20:21.359 [conn5] end connection 165.225.128.186:50526 (10 connections now open)
m29000| Fri Feb 22 11:20:21.359 [conn6] end connection 165.225.128.186:54107 (10 connections now open)
Fri Feb 22 11:20:22.346 shell: stopped mongo program on port 30999
Fri Feb 22 11:20:22.347 No db started on port: 30000
Fri Feb 22 11:20:22.347 shell: stopped mongo program on port 30000
Fri Feb 22 11:20:22.347 No db started on port: 30001
Fri Feb 22 11:20:22.347 shell: stopped mongo program on port 30001
ReplSetTest n: 0 ports: [ 31100, 31101 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Fri Feb 22 11:20:22.347 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Fri Feb 22 11:20:22.347 [interruptThread] now exiting
m31100| Fri Feb 22 11:20:22.347 dbexit:
m31100| Fri Feb 22 11:20:22.347 [interruptThread] shutdown: going to close listening sockets...
m31100| Fri Feb 22 11:20:22.347 [interruptThread] closing listening socket: 12
m31100| Fri Feb 22 11:20:22.347 [interruptThread] closing listening socket: 13
m31100| Fri Feb 22 11:20:22.348 [interruptThread] closing listening socket: 14
m31100| Fri Feb 22 11:20:22.348 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: going to flush diaglog...
m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: going to close sockets...
m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: lock for final commit...
m31100| Fri Feb 22 11:20:22.348 [interruptThread] shutdown: final commit...
m31100| Fri Feb 22 11:20:22.348 [conn23] end connection 165.225.128.186:53699 (9 connections now open)
m31100| Fri Feb 22 11:20:22.348 [conn1] end connection 127.0.0.1:54051 (9 connections now open)
m31100| Fri Feb 22 11:20:22.348 [conn9] end connection 165.225.128.186:56264 (9 connections now open)
m31101| Fri Feb 22 11:20:22.348 [conn15] end connection 165.225.128.186:61602 (6 connections now open)
m31100| Fri Feb 22 11:20:22.348 [conn10] end connection 165.225.128.186:47962 (9 connections now open)
m29000| Fri Feb 22 11:20:22.348 [conn7] end connection 165.225.128.186:45712 (5 connections now open)
m29000| Fri Feb 22 11:20:22.348 [conn8] end connection 165.225.128.186:41614 (5 connections now open)
m31200| Fri Feb 22 11:20:22.348 [conn17] end connection 165.225.128.186:40234 (10 connections now open)
m31200| Fri Feb 22 11:20:22.348 [conn18] end connection 165.225.128.186:43875 (10 connections now open)
m31201| Fri Feb 22 11:20:22.348 [conn9] end connection 165.225.128.186:56063 (6 connections now open)
m31200| Fri Feb 22 11:20:22.348 [conn19] end connection 165.225.128.186:49821 (10 connections now open)
m31201| Fri Feb 22 11:20:22.348 [conn10] end connection 165.225.128.186:39391 (6 connections now open)
m31100| Fri Feb 22 11:20:22.348 [conn21] end connection 165.225.128.186:45792 (9 connections now open)
m31101| Fri Feb 22 11:20:22.348 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:20:22.348 [conn19] end connection 165.225.128.186:36167 (9 connections now open)
m31100| Fri Feb 22 11:20:22.348 [conn20] end connection 165.225.128.186:39920 (9 connections now open)
m31101| Fri Feb 22 11:20:22.348 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m29000| Fri Feb 22 11:20:22.348 [conn10] end connection 165.225.128.186:44442 (3 connections now open)
m31200| Fri Feb 22 11:20:22.348 [conn20] end connection 165.225.128.186:45938 (7 connections now open)
m31100| Fri Feb 22 11:20:22.365 [interruptThread] shutdown: closing all files...
m31100| Fri Feb 22 11:20:22.366 [interruptThread] closeAllFiles() finished
m31100| Fri Feb 22 11:20:22.366 [interruptThread] journalCleanup...
m31100| Fri Feb 22 11:20:22.366 [interruptThread] removeJournalFiles
m31100| Fri Feb 22 11:20:22.366 dbexit: really exiting now
Fri Feb 22 11:20:23.347 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Fri Feb 22 11:20:23.348 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Fri Feb 22 11:20:23.348 [interruptThread] now exiting
m31101| Fri Feb 22 11:20:23.348 dbexit:
m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: going to close listening sockets...
m31101| Fri Feb 22 11:20:23.348 [interruptThread] closing listening socket: 15
m31101| Fri Feb 22 11:20:23.348 [interruptThread] closing listening socket: 16
m31101| Fri Feb 22 11:20:23.348 [interruptThread] closing listening socket: 17
m31101| Fri Feb 22 11:20:23.348 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: going to flush diaglog...
m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: going to close sockets...
m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: lock for final commit...
m31101| Fri Feb 22 11:20:23.348 [interruptThread] shutdown: final commit...
m31101| Fri Feb 22 11:20:23.348 [conn1] end connection 127.0.0.1:65148 (5 connections now open)
m31101| Fri Feb 22 11:20:23.348 [conn5] end connection 165.225.128.186:34675 (5 connections now open)
m31101| Fri Feb 22 11:20:23.348 [conn6] end connection 165.225.128.186:62481 (5 connections now open)
m31101| Fri Feb 22 11:20:23.348 [conn11] end connection 165.225.128.186:36721 (5 connections now open)
m31101| Fri Feb 22 11:20:23.349 [conn10] end connection 165.225.128.186:39290 (4 connections now open)
m31101| Fri Feb 22 11:20:23.367 [interruptThread] shutdown: closing all files...
m31101| Fri Feb 22 11:20:23.367 [interruptThread] closeAllFiles() finished
m31101| Fri Feb 22 11:20:23.367 [interruptThread] journalCleanup...
m31101| Fri Feb 22 11:20:23.367 [interruptThread] removeJournalFiles
m31101| Fri Feb 22 11:20:23.368 dbexit: really exiting now
Fri Feb 22 11:20:24.348 shell: stopped mongo program on port 31101
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31200, 31201 ] 31200 number
ReplSetTest stop *** Shutting down mongod in port 31200 ***
m31200| Fri Feb 22 11:20:24.356 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Fri Feb 22 11:20:24.356 [interruptThread] now exiting
m31200| Fri Feb 22 11:20:24.356 dbexit:
m31200| Fri Feb 22 11:20:24.356 [interruptThread] shutdown: going to close listening sockets...
m31200| Fri Feb 22 11:20:24.356 [interruptThread] closing listening socket: 18
m31200| Fri Feb 22 11:20:24.356 [interruptThread] closing listening socket: 19
m31200| Fri Feb 22 11:20:24.356 [interruptThread] closing listening socket: 20
m31200| Fri Feb 22 11:20:24.357 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: going to flush diaglog...
m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: going to close sockets...
m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: lock for final commit...
m31200| Fri Feb 22 11:20:24.357 [interruptThread] shutdown: final commit...
m31200| Fri Feb 22 11:20:24.357 [conn1] end connection 127.0.0.1:34027 (6 connections now open)
m31200| Fri Feb 22 11:20:24.357 [conn22] end connection 165.225.128.186:38882 (6 connections now open)
m31201| Fri Feb 22 11:20:24.357 [conn14] end connection 165.225.128.186:38149 (4 connections now open)
m31200| Fri Feb 22 11:20:24.357 [conn6] end connection 165.225.128.186:33069 (6 connections now open)
m31200| Fri Feb 22 11:20:24.357 [conn8] end connection 165.225.128.186:47585 (6 connections now open)
m31200| Fri Feb 22 11:20:24.357 [conn9] end connection 165.225.128.186:46153 (6 connections now open)
m29000| Fri Feb 22 11:20:24.357 [conn9] end connection 165.225.128.186:64688 (2 connections now open)
m31201| Fri Feb 22 11:20:24.357 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 11:20:24.377 [interruptThread] shutdown: closing all files...
m31200| Fri Feb 22 11:20:24.378 [interruptThread] closeAllFiles() finished
m31200| Fri Feb 22 11:20:24.378 [interruptThread] journalCleanup...
m31200| Fri Feb 22 11:20:24.378 [interruptThread] removeJournalFiles
m31200| Fri Feb 22 11:20:24.378 dbexit: really exiting now
Fri Feb 22 11:20:25.356 shell: stopped mongo program on port 31200
ReplSetTest n: 1 ports: [ 31200, 31201 ] 31201 number
ReplSetTest stop *** Shutting down mongod in port 31201 ***
m31201| Fri Feb 22 11:20:25.357 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Fri Feb 22 11:20:25.357 [interruptThread] now exiting
m31201| Fri Feb 22 11:20:25.357 dbexit:
m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: going to close listening sockets...
m31201| Fri Feb 22 11:20:25.357 [interruptThread] closing listening socket: 21
m31201| Fri Feb 22 11:20:25.357 [interruptThread] closing listening socket: 22
m31201| Fri Feb 22 11:20:25.357 [interruptThread] closing listening socket: 23
m31201| Fri Feb 22 11:20:25.357 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: going to flush diaglog...
m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: going to close sockets...
m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: lock for final commit...
m31201| Fri Feb 22 11:20:25.357 [interruptThread] shutdown: final commit...
m31201| Fri Feb 22 11:20:25.357 [conn1] end connection 127.0.0.1:47001 (3 connections now open)
m31201| Fri Feb 22 11:20:25.357 [conn4] end connection 165.225.128.186:54721 (3 connections now open)
m31201| Fri Feb 22 11:20:25.357 [conn5] end connection 165.225.128.186:33137 (3 connections now open)
m31201| Fri Feb 22 11:20:25.380 [interruptThread] shutdown: closing all files...
m31201| Fri Feb 22 11:20:25.381 [interruptThread] closeAllFiles() finished
m31201| Fri Feb 22 11:20:25.381 [interruptThread] journalCleanup...
m31201| Fri Feb 22 11:20:25.381 [interruptThread] removeJournalFiles
m31201| Fri Feb 22 11:20:25.382 dbexit: really exiting now
Fri Feb 22 11:20:26.357 shell: stopped mongo program on port 31201
ReplSetTest stopSet deleting all dbpaths
Fri Feb 22 11:20:26.361 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
ReplSetTest stopSet *** Shut down repl set - test worked ****
m29000| Fri Feb 22 11:20:26.366 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Fri Feb 22 11:20:26.366 [interruptThread] now exiting
m29000| Fri Feb 22 11:20:26.366 dbexit:
m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: going to close listening sockets...
m29000| Fri Feb 22 11:20:26.366 [interruptThread] closing listening socket: 32
m29000| Fri Feb 22 11:20:26.366 [interruptThread] closing listening socket: 33
m29000| Fri Feb 22 11:20:26.366 [interruptThread] closing listening socket: 34
m29000| Fri Feb 22 11:20:26.366 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: going to flush diaglog...
m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: going to close sockets...
m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: lock for final commit...
m29000| Fri Feb 22 11:20:26.366 [interruptThread] shutdown: final commit...
m29000| Fri Feb 22 11:20:26.366 [conn1] end connection 127.0.0.1:48681 (1 connection now open)
m29000| Fri Feb 22 11:20:26.366 [conn2] end connection 165.225.128.186:57785 (0 connections now open)
Fri Feb 22 11:20:26.367 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
Fri Feb 22 11:20:26.367 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 11:20:26.367 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31101
Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31101 error: 9001 socket exception [1] server [165.225.128.186:31101]
Fri Feb 22 11:20:26.368 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 11:20:26.377 [conn9] end connection 127.0.0.1:57590 (0 connections now open)
m29000| Fri Feb 22 11:20:26.376 [interruptThread] shutdown: closing all files...
m29000| Fri Feb 22 11:20:26.377 [interruptThread] closeAllFiles() finished
m29000| Fri Feb 22 11:20:26.377 [interruptThread] journalCleanup...
m29000| Fri Feb 22 11:20:26.377 [interruptThread] removeJournalFiles
m29000| Fri Feb 22 11:20:26.377 dbexit: really exiting now
Fri Feb 22 11:20:27.366 shell: stopped mongo program on port 29000
Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31101
Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31101 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31101
Fri Feb 22 11:20:27.368 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31101 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31101
*** ShardingTest rs1 completed successfully in 108.103 seconds ***
1.8052 minutes
Fri Feb 22 11:20:27.406 [initandlisten] connection accepted from 127.0.0.1:39368 #10 (1 connection now open)
Fri Feb 22 11:20:27.407 [conn10] end connection 127.0.0.1:39368 (0 connections now open)
*******************************************
Test : balance_tags1.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_tags1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/balance_tags1.js";TestData.testFile = "balance_tags1.js";TestData.testName = "balance_tags1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:20:27 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:20:27.582 [initandlisten] connection accepted from 127.0.0.1:60212 #11 (1 connection now open)
null
Resetting db path '/data/db/balance_tags10'
Fri Feb 22 11:20:27.597 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/balance_tags10 --nopreallocj --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:20:27.688 [initandlisten] MongoDB starting : pid=20728 port=30000 dbpath=/data/db/balance_tags10 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:20:27.689 [initandlisten]
m30000| Fri Feb 22 11:20:27.689 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:20:27.689 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:20:27.689 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:20:27.689 [initandlisten]
m30000| Fri Feb 22 11:20:27.689 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:20:27.689 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:20:27.689 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:20:27.689 [initandlisten] allocator: system
m30000| Fri Feb 22 11:20:27.689 [initandlisten] options: { dbpath: "/data/db/balance_tags10", nopreallocj: true, port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:20:27.689 [initandlisten] journal dir=/data/db/balance_tags10/journal
m30000| Fri Feb 22 11:20:27.689 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] allocating new datafile /data/db/balance_tags10/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] creating directory /data/db/balance_tags10/_tmp
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] done allocating datafile /data/db/balance_tags10/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] allocating new datafile /data/db/balance_tags10/local.0, filling with zeroes...
m30000| Fri Feb 22 11:20:27.691 [FileAllocator] done allocating datafile /data/db/balance_tags10/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:20:27.695 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:20:27.695 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:20:27.799 [initandlisten] connection accepted from 127.0.0.1:51670 #1 (1 connection now open)
Resetting db path '/data/db/balance_tags11'
Fri Feb 22 11:20:27.803 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/balance_tags11 --nopreallocj --setParameter enableTestCommands=1
m30001| Fri Feb 22 11:20:27.893 [initandlisten] MongoDB starting : pid=20729 port=30001 dbpath=/data/db/balance_tags11 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 11:20:27.893 [initandlisten]
m30001| Fri Feb 22 11:20:27.893 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 11:20:27.893 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 11:20:27.893 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 11:20:27.893 [initandlisten]
m30001| Fri Feb 22 11:20:27.893 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 11:20:27.893 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 11:20:27.893 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 11:20:27.893 [initandlisten] allocator: system
m30001| Fri Feb 22 11:20:27.893 [initandlisten] options: { dbpath: "/data/db/balance_tags11", nopreallocj: true, port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 11:20:27.893 [initandlisten] journal dir=/data/db/balance_tags11/journal
m30001| Fri Feb 22 11:20:27.894 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 11:20:27.895 [FileAllocator] allocating new datafile /data/db/balance_tags11/local.ns, filling with zeroes...
m30001| Fri Feb 22 11:20:27.895 [FileAllocator] creating directory /data/db/balance_tags11/_tmp
m30001| Fri Feb 22 11:20:27.895 [FileAllocator] done allocating datafile /data/db/balance_tags11/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:20:27.896 [FileAllocator] allocating new datafile /data/db/balance_tags11/local.0, filling with zeroes...
m30001| Fri Feb 22 11:20:27.896 [FileAllocator] done allocating datafile /data/db/balance_tags11/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:20:27.899 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 11:20:27.899 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 11:20:28.004 [initandlisten] connection accepted from 127.0.0.1:61815 #1 (1 connection now open)
Resetting db path '/data/db/balance_tags12'
Fri Feb 22 11:20:28.007 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30002 --dbpath /data/db/balance_tags12 --nopreallocj --setParameter enableTestCommands=1
m30002| Fri Feb 22 11:20:28.097 [initandlisten] MongoDB starting : pid=20730 port=30002 dbpath=/data/db/balance_tags12 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30002| Fri Feb 22 11:20:28.097 [initandlisten]
m30002| Fri Feb 22 11:20:28.097 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30002| Fri Feb 22 11:20:28.097 [initandlisten] ** uses to detect impending page faults.
m30002| Fri Feb 22 11:20:28.097 [initandlisten] ** This may result in slower performance for certain use cases
m30002| Fri Feb 22 11:20:28.097 [initandlisten]
m30002| Fri Feb 22 11:20:28.097 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30002| Fri Feb 22 11:20:28.097 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30002| Fri Feb 22 11:20:28.097 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30002| Fri Feb 22 11:20:28.097 [initandlisten] allocator: system
m30002| Fri Feb 22 11:20:28.097 [initandlisten] options: { dbpath: "/data/db/balance_tags12", nopreallocj: true, port: 30002, setParameter: [ "enableTestCommands=1" ] }
m30002| Fri Feb 22 11:20:28.097 [initandlisten] journal dir=/data/db/balance_tags12/journal
m30002| Fri Feb 22 11:20:28.098 [initandlisten] recover : no journal files present, no recovery needed
m30002| Fri Feb 22 11:20:28.099 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.ns, filling with zeroes...
m30002| Fri Feb 22 11:20:28.099 [FileAllocator] creating directory /data/db/balance_tags12/_tmp
m30002| Fri Feb 22 11:20:28.099 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:28.100 [FileAllocator] allocating new datafile /data/db/balance_tags12/local.0, filling with zeroes...
m30002| Fri Feb 22 11:20:28.100 [FileAllocator] done allocating datafile /data/db/balance_tags12/local.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:20:28.103 [initandlisten] waiting for connections on port 30002
m30002| Fri Feb 22 11:20:28.103 [websvr] admin web console waiting for connections on port 31002
m30002| Fri Feb 22 11:20:28.209 [initandlisten] connection accepted from 127.0.0.1:34448 #1 (1 connection now open)
"localhost:30000,localhost:30001,localhost:30002"
Fri Feb 22 11:20:28.213 SyncClusterConnection connecting to [localhost:30000]
Fri Feb 22 11:20:28.213 SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:20:28.213 [initandlisten] connection accepted from 127.0.0.1:58478 #2 (2 connections now open)
Fri Feb 22 11:20:28.214 SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:28.214 [initandlisten] connection accepted from 127.0.0.1:58147 #2 (2 connections now open)
m30002| Fri Feb 22 11:20:28.214 [initandlisten] connection accepted from 127.0.0.1:47206 #2 (2 connections now open)
ShardingTest balance_tags1 : { "config" : "localhost:30000,localhost:30001,localhost:30002", "shards" : [ connection to localhost:30000, connection to localhost:30001, connection to localhost:30002 ] }
Fri Feb 22 11:20:28.218 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000,localhost:30001,localhost:30002 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:20:28.236 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=20731 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:20:28.236 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:20:28.236 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:20:28.236 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000,localhost:30001,localhost:30002", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 11:20:28.236 [mongosMain] config string : localhost:30000,localhost:30001,localhost:30002
m30999| Fri Feb 22 11:20:28.236 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:28.237 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.238 [mongosMain] connected connection!
m30000| Fri Feb 22 11:20:28.238 [initandlisten] connection accepted from 127.0.0.1:40427 #3 (3 connections now open)
m30999| Fri Feb 22 11:20:28.238 [mongosMain] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:28.238 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.239 [mongosMain] connected connection!
m30001| Fri Feb 22 11:20:28.238 [initandlisten] connection accepted from 127.0.0.1:55562 #3 (3 connections now open)
m30999| Fri Feb 22 11:20:28.239 [mongosMain] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:28.239 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.239 [initandlisten] connection accepted from 127.0.0.1:64894 #3 (3 connections now open)
m30999| Fri Feb 22 11:20:28.239 [mongosMain] connected connection!
m30999| Fri Feb 22 11:20:28.240 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:20:28.240 [mongosMain] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:28.240 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:28.240 [initandlisten] connection accepted from 127.0.0.1:33503 #4 (4 connections now open)
m30999| Fri Feb 22 11:20:28.240 [mongosMain] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:28.240 BackgroundJob starting: ConnectBG
m30001| Fri Feb 22 11:20:28.241 [initandlisten] connection accepted from 127.0.0.1:42942 #4 (4 connections now open)
m30999| Fri Feb 22 11:20:28.241 [mongosMain] SyncClusterConnection connecting to [localhost:30002]
m30999| Fri Feb 22 11:20:28.241 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.241 [initandlisten] connection accepted from 127.0.0.1:48691 #4 (4 connections now open)
m30000| Fri Feb 22 11:20:28.242 [conn4] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.249 [conn4] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.264 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:28.273 [mongosMain] scoped connection to localhost:30000,localhost:30001,localhost:30002 not being returned to the pool
m30999| Fri Feb 22 11:20:28.273 [mongosMain] created new distributed lock for configUpgrade on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:20:28.274 [mongosMain] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:28.274 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.274 [mongosMain] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:20:28.274 [initandlisten] connection accepted from 127.0.0.1:60590 #5 (5 connections now open)
m30999| Fri Feb 22 11:20:28.274 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.274 [mongosMain] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:28.274 [initandlisten] connection accepted from 127.0.0.1:48409 #5 (5 connections now open)
m30999| Fri Feb 22 11:20:28.274 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.274 [initandlisten] connection accepted from 127.0.0.1:52548 #5 (5 connections now open)
m30999| Fri Feb 22 11:20:28.275 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:28.275 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:20:28.275 [LockPinger] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:28.275 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30000| Fri Feb 22 11:20:28.276 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:28.276 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:28.276 [LockPinger] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:28.276 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:28.276 [initandlisten] connection accepted from 127.0.0.1:58494 #6 (6 connections now open)
m30999| Fri Feb 22 11:20:28.276 [LockPinger] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:28.276 [initandlisten] connection accepted from 127.0.0.1:46581 #6 (6 connections now open)
m30999| Fri Feb 22 11:20:28.276 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:28.276 [initandlisten] connection accepted from 127.0.0.1:52531 #6 (6 connections now open)
m30000| Fri Feb 22 11:20:28.284 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.284 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.291 [conn4] end connection 127.0.0.1:42942 (5 connections now open)
m30002| Fri Feb 22 11:20:28.291 [conn4] end connection 127.0.0.1:48691 (5 connections now open)
m30001| Fri Feb 22 11:20:28.292 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.298 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.299 [conn4] end connection 127.0.0.1:33503 (5 connections now open)
m30002| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.ns, filling with zeroes...
m30000| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags10/config.ns, filling with zeroes...
m30002| Fri Feb 22 11:20:28.306 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.306 [FileAllocator] done allocating datafile /data/db/balance_tags10/config.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:28.306 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.0, filling with zeroes...
m30000| Fri Feb 22 11:20:28.306 [FileAllocator] allocating new datafile /data/db/balance_tags10/config.0, filling with zeroes...
m30002| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags10/config.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:20:28.307 [FileAllocator] allocating new datafile /data/db/balance_tags12/config.1, filling with zeroes...
m30000| Fri Feb 22 11:20:28.307 [FileAllocator] allocating new datafile /data/db/balance_tags10/config.1, filling with zeroes...
m30002| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags12/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:20:28.307 [FileAllocator] done allocating datafile /data/db/balance_tags10/config.1, size: 128MB, took 0 secs
m30002| Fri Feb 22 11:20:28.310 [conn5] build index config.locks { _id: 1 }
m30000| Fri Feb 22 11:20:28.310 [conn5] build index config.locks { _id: 1 }
m30002| Fri Feb 22 11:20:28.310 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:20:28.311 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] allocating new datafile /data/db/balance_tags11/config.ns, filling with zeroes...
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] done allocating datafile /data/db/balance_tags11/config.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] allocating new datafile /data/db/balance_tags11/config.0, filling with zeroes...
m30001| Fri Feb 22 11:20:28.316 [FileAllocator] done allocating datafile /data/db/balance_tags11/config.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:20:28.317 [FileAllocator] allocating new datafile /data/db/balance_tags11/config.1, filling with zeroes...
m30001| Fri Feb 22 11:20:28.317 [FileAllocator] done allocating datafile /data/db/balance_tags11/config.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:20:28.320 [conn5] build index config.locks { _id: 1 }
m30001| Fri Feb 22 11:20:28.320 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:20:28.329 [conn6] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:20:28.329 [conn6] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:20:28.330 [conn6] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:20:28.330 [conn6] build index done. scanned 0 total records. 0.001 secs
m30002| Fri Feb 22 11:20:28.331 [conn6] build index config.lockpings { _id: 1 }
m30002| Fri Feb 22 11:20:28.335 [conn6] build index done. scanned 0 total records. 0.003 secs
m30999| Fri Feb 22 11:20:28.406 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:20:28 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "5127547cd4b973931fc9a223" } }
m30999| { "_id" : "configUpgrade",
m30000| Fri Feb 22 11:20:28.407 [conn6] CMD fsync: sync:1 lock:0
m30999| "state" : 0 }
m30000| Fri Feb 22 11:20:28.407 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.430 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.443 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.453 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.480 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.543 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.566 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.578 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.586 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.599 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.602 [conn6] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 11:20:28.602 [conn6] build index config.lockpings { ping: new Date(1) }
m30001| Fri Feb 22 11:20:28.602 [conn6] build index config.lockpings { ping: new Date(1) }
m30001| Fri Feb 22 11:20:28.604 [conn6] build index done. scanned 1 total records. 0.001 secs
m30002| Fri Feb 22 11:20:28.604 [conn6] build index done. scanned 1 total records. 0.001 secs
m30000| Fri Feb 22 11:20:28.604 [conn6] build index done. scanned 1 total records. 0.002 secs
m30002| Fri Feb 22 11:20:28.627 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:28.680 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:20:28 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
m30999| Fri Feb 22 11:20:28.714 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127547cd4b973931fc9a223
m30999| Fri Feb 22 11:20:28.723 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:20:28.723 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:20:28.723 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:28-5127547cd4b973931fc9a224", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532028723), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:20:28.723 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.746 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.775 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:28.792 [conn5] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 11:20:28.793 [conn5] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:20:28.797 [conn5] build index config.changelog { _id: 1 }
m30001| Fri Feb 22 11:20:28.798 [conn5] build index done. scanned 0 total records. 0 secs
m30002| Fri Feb 22 11:20:28.801 [conn5] build index config.changelog { _id: 1 }
m30002| Fri Feb 22 11:20:28.802 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:20:28.885 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:28.908 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:28.936 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:29.022 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 11:20:29.030 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.054 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.083 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:29.101 [conn5] build index config.version { _id: 1 }
m30002| Fri Feb 22 11:20:29.101 [conn5] build index config.version { _id: 1 }
m30001| Fri Feb 22 11:20:29.101 [conn5] build index config.version { _id: 1 }
m30002| Fri Feb 22 11:20:29.102 [conn5] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:29.102 [conn5] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:20:29.103 [conn5] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:20:29.160 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:29-5127547dd4b973931fc9a226", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532029160), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:20:29.160 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.184 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.213 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:29.295 [mongosMain] upgrade of config server to v4 successful
m30000| Fri Feb 22 11:20:29.295 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.318 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.346 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.500 [conn5] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms
m30999| Fri Feb 22 11:20:29.500 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30000| Fri Feb 22 11:20:29.504 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.526 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.546 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:29.564 [conn6] build index config.settings { _id: 1 }
m30001| Fri Feb 22 11:20:29.567 [conn6] build index config.settings { _id: 1 }
m30002| Fri Feb 22 11:20:29.568 [conn6] build index config.settings { _id: 1 }
m30000| Fri Feb 22 11:20:29.568 [conn6] build index done. scanned 0 total records. 0.004 secs
m30001| Fri Feb 22 11:20:29.572 [conn6] build index done. scanned 0 total records. 0.004 secs
m30002| Fri Feb 22 11:20:29.572 [conn6] build index done. scanned 0 total records. 0.004 secs
m30000| Fri Feb 22 11:20:29.637 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.660 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.681 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.701 [conn6] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:20:29.701 [conn6] build index config.chunks { _id: 1 }
m30002| Fri Feb 22 11:20:29.702 [conn6] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:20:29.705 [conn6] build index done. scanned 0 total records. 0.004 secs
m30000| Fri Feb 22 11:20:29.706 [conn6] info: creating collection config.chunks on add index
m30000| Fri Feb 22 11:20:29.706 [conn6] build index config.chunks { ns: 1, min: 1 }
m30001| Fri Feb 22 11:20:29.706 [conn6] build index done. scanned 0 total records. 0.004 secs
m30001| Fri Feb 22 11:20:29.706 [conn6] info: creating collection config.chunks on add index
m30001| Fri Feb 22 11:20:29.706 [conn6] build index config.chunks { ns: 1, min: 1 }
m30002| Fri Feb 22 11:20:29.706 [conn6] build index done. scanned 0 total records. 0.004 secs
m30002| Fri Feb 22 11:20:29.706 [conn6] info: creating collection config.chunks on add index
m30002| Fri Feb 22 11:20:29.706 [conn6] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 11:20:29.708 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:29.708 [conn6] build index done. scanned 0 total records. 0.002 secs
m30002| Fri Feb 22 11:20:29.708 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:29.774 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.798 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.821 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:29.838 [conn6] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30002| Fri Feb 22 11:20:29.838 [conn6] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30001| Fri Feb 22 11:20:29.838 [conn6] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30002| Fri Feb 22 11:20:29.840 [conn6] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:29.841 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:29.841 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:29.876 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:29.900 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.921 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:29.938 [conn6] build index config.chunks { ns: 1, lastmod: 1 }
m30001| Fri Feb 22 11:20:29.938 [conn6] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:20:29.938 [conn6] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:20:29.940 [conn6] build index done. scanned 0 total records. 0.002 secs
m30002| Fri Feb 22 11:20:29.940 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:29.940 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:29.979 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.001 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.024 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:30.041 [conn6] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:20:30.044 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:30.044 [conn6] info: creating collection config.shards on add index
m30002| Fri Feb 22 11:20:30.044 [conn6] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:20:30.044 [conn6] build index config.shards { host: 1 }
m30001| Fri Feb 22 11:20:30.044 [conn6] build index config.shards { _id: 1 }
m30002| Fri Feb 22 11:20:30.048 [conn6] build index done. scanned 0 total records. 0.003 secs
m30002| Fri Feb 22 11:20:30.048 [conn6] info: creating collection config.shards on add index
m30002| Fri Feb 22 11:20:30.048 [conn6] build index config.shards { host: 1 }
m30001| Fri Feb 22 11:20:30.048 [conn6] build index done. scanned 0 total records. 0.003 secs
m30001| Fri Feb 22 11:20:30.048 [conn6] info: creating collection config.shards on add index
m30001| Fri Feb 22 11:20:30.048 [conn6] build index config.shards { host: 1 }
m30000| Fri Feb 22 11:20:30.048 [conn6] build index done. scanned 0 total records. 0.003 secs
m30002| Fri Feb 22 11:20:30.052 [conn6] build index done. scanned 0 total records. 0.003 secs
m30001| Fri Feb 22 11:20:30.052 [conn6] build index done. scanned 0 total records. 0.004 secs
m30000| Fri Feb 22 11:20:30.151 [conn6] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
m30999| Fri Feb 22 11:20:30.185 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:20:30.185 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:20:30.185 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:20:30.185 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:20:30.185 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:20:30.185 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:20:30
m30999| Fri Feb 22 11:20:30.185 [Balancer] created new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:20:30.185 [Balancer] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:30.185 [mongosMain] waiting for connections on port 30999
m30999| Fri Feb 22 11:20:30.185 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:30.185 [initandlisten] connection accepted from 127.0.0.1:56278 #7 (6 connections now open)
m30999| Fri Feb 22 11:20:30.186 [Balancer] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:30.186 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:30.186 [conn6] build index config.mongos { _id: 1 }
m30001| Fri Feb 22 11:20:30.186 [conn6] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 11:20:30.186 [conn6] build index config.mongos { _id: 1 }
m30001| Fri Feb 22 11:20:30.186 [initandlisten] connection accepted from 127.0.0.1:58330 #7 (6 connections now open)
m30999| Fri Feb 22 11:20:30.186 [Balancer] SyncClusterConnection connecting to [localhost:30002]
m30999| Fri Feb 22 11:20:30.186 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:30.186 [initandlisten] connection accepted from 127.0.0.1:61888 #7 (6 connections now open)
m30002| Fri Feb 22 11:20:30.188 [conn6] build index done. scanned 0 total records. 0.002 secs
m30001| Fri Feb 22 11:20:30.188 [conn6] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:20:30.188 [conn6] build index done. scanned 0 total records. 0.002 secs
m30999| Fri Feb 22 11:20:30.189 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:30.189 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:30.189 [Balancer] inserting initial doc in config.locks for lock balancer
m30000| Fri Feb 22 11:20:30.189 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.213 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.223 [mongosMain] connection accepted from 127.0.0.1:34561 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:20:30.226 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 11:20:30.226 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.240 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.242 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.267 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.289 [conn7] build index config.databases { _id: 1 }
m30002| Fri Feb 22 11:20:30.289 [conn7] build index config.databases { _id: 1 }
m30000| Fri Feb 22 11:20:30.289 [conn7] build index config.databases { _id: 1 }
m30002| Fri Feb 22 11:20:30.291 [conn7] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:20:30.291 [conn7] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:20:30.291 [conn7] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:20:30.320 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:20:30 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127547ed4b973931fc9a228" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30000| Fri Feb 22 11:20:30.320 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.341 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.354 [conn1] put [admin] on: config:localhost:30000,localhost:30001,localhost:30002
m30999| Fri Feb 22 11:20:30.355 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:30.355 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.355 [conn1] connected connection!
m30000| Fri Feb 22 11:20:30.355 [initandlisten] connection accepted from 127.0.0.1:50872 #8 (7 connections now open)
m30999| Fri Feb 22 11:20:30.356 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30000| Fri Feb 22 11:20:30.356 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.366 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.371 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.394 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:30.395 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.420 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.446 [conn5] CMD fsync: sync:1 lock:0
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 11:20:30.497 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:30.497 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.497 [conn1] connected connection!
m30001| Fri Feb 22 11:20:30.497 [initandlisten] connection accepted from 127.0.0.1:48237 #8 (7 connections now open)
m30999| Fri Feb 22 11:20:30.499 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30000| Fri Feb 22 11:20:30.499 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.513 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.530 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127547ed4b973931fc9a228
m30999| Fri Feb 22 11:20:30.530 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:20:30.530 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:20:30.530 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:20:30.531 [Balancer] no collections to balance
m30999| Fri Feb 22 11:20:30.531 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:20:30.531 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:20:30.531 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.538 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.549 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.576 [conn5] CMD fsync: sync:1 lock:0
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30999| Fri Feb 22 11:20:30.633 [conn1] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:30.634 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.634 [conn1] connected connection!
m30002| Fri Feb 22 11:20:30.634 [initandlisten] connection accepted from 127.0.0.1:54222 #8 (7 connections now open)
m30999| Fri Feb 22 11:20:30.635 [conn1] going to add shard: { _id: "shard0002", host: "localhost:30002" }
m30000| Fri Feb 22 11:20:30.635 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.653 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.666 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30002| Fri Feb 22 11:20:30.681 [conn7] CMD fsync: sync:1 lock:0
{ "shardAdded" : "shard0002", "ok" : 1 }
m30999| Fri Feb 22 11:20:30.804 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:30.804 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.804 [conn1] connected connection!
m30000| Fri Feb 22 11:20:30.804 [initandlisten] connection accepted from 127.0.0.1:38220 #9 (8 connections now open)
m30999| Fri Feb 22 11:20:30.804 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5127547ed4b973931fc9a227
m30999| Fri Feb 22 11:20:30.804 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 11:20:30.804 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 11:20:30.805 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:30.805 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.805 [conn1] connected connection!
m30001| Fri Feb 22 11:20:30.805 [initandlisten] connection accepted from 127.0.0.1:57851 #9 (8 connections now open)
m30999| Fri Feb 22 11:20:30.805 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5127547ed4b973931fc9a227
m30999| Fri Feb 22 11:20:30.805 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 11:20:30.805 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Fri Feb 22 11:20:30.806 [conn1] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:30.806 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.806 [conn1] connected connection!
m30002| Fri Feb 22 11:20:30.806 [initandlisten] connection accepted from 127.0.0.1:42564 #9 (8 connections now open)
m30999| Fri Feb 22 11:20:30.806 [conn1] creating WriteBackListener for: localhost:30002 serverID: 5127547ed4b973931fc9a227
m30999| Fri Feb 22 11:20:30.806 [conn1] initializing shard connection to localhost:30002
m30999| Fri Feb 22 11:20:30.806 BackgroundJob starting: WriteBackListener-localhost:30002
m30999| Fri Feb 22 11:20:30.807 [conn1] SyncClusterConnection connecting to [localhost:30000]
m30999| Fri Feb 22 11:20:30.807 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:20:30.807 [initandlisten] connection accepted from 127.0.0.1:44398 #10 (9 connections now open)
m30999| Fri Feb 22 11:20:30.807 [conn1] SyncClusterConnection connecting to [localhost:30001]
m30999| Fri Feb 22 11:20:30.807 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.807 [conn1] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:30.807 [initandlisten] connection accepted from 127.0.0.1:50063 #10 (9 connections now open)
m30999| Fri Feb 22 11:20:30.807 BackgroundJob starting: ConnectBG
m30002| Fri Feb 22 11:20:30.808 [initandlisten] connection accepted from 127.0.0.1:59349 #10 (9 connections now open)
m30000| Fri Feb 22 11:20:30.808 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.833 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.854 [conn10] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:30.940 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 11:20:30.940 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:20:30.940 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.940 [conn1] connected connection!
m30000| Fri Feb 22 11:20:30.940 [initandlisten] connection accepted from 127.0.0.1:53660 #11 (10 connections now open)
m30999| Fri Feb 22 11:20:30.941 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:20:30.941 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.941 [conn1] connected connection!
m30001| Fri Feb 22 11:20:30.941 [initandlisten] connection accepted from 127.0.0.1:42693 #11 (10 connections now open)
m30999| Fri Feb 22 11:20:30.942 [conn1] creating new connection to:localhost:30002
m30999| Fri Feb 22 11:20:30.942 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:20:30.942 [conn1] connected connection!
m30002| Fri Feb 22 11:20:30.942 [initandlisten] connection accepted from 127.0.0.1:44320 #11 (10 connections now open)
m30999| Fri Feb 22 11:20:30.942 [conn1] best shard for new allocation is shard: shard0000:localhost:30000 mapped: 160 writeLock: 0 version: 2.4.0-rc1-pre-
m30000| Fri Feb 22 11:20:30.942 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:30.960 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:30.989 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:31.110 [conn1] put [test] on: shard0000:localhost:30000
m30000| Fri Feb 22 11:20:31.110 [FileAllocator] allocating new datafile /data/db/balance_tags10/test.ns, filling with zeroes...
m30000| Fri Feb 22 11:20:31.110 [FileAllocator] done allocating datafile /data/db/balance_tags10/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] allocating new datafile /data/db/balance_tags10/test.0, filling with zeroes...
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] done allocating datafile /data/db/balance_tags10/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] allocating new datafile /data/db/balance_tags10/test.1, filling with zeroes...
m30000| Fri Feb 22 11:20:31.111 [FileAllocator] done allocating datafile /data/db/balance_tags10/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:20:31.114 [conn9] build index test.foo { _id: 1 }
m30000| Fri Feb 22 11:20:31.115 [conn9] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:20:31.117 [conn1] enabling sharding on: test
m30000| Fri Feb 22 11:20:31.117 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.143 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.168 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:31.281 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:20:31.281 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:20:31.282 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:31.282 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.306 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.332 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:31.451 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5127547fd4b973931fc9a229 based on: (empty)
m30000| Fri Feb 22 11:20:31.451 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.475 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.501 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:31.587 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.612 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.640 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.665 [conn7] build index config.collections { _id: 1 }
m30001| Fri Feb 22 11:20:31.665 [conn7] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:20:31.665 [conn7] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:20:31.670 [conn7] build index done. scanned 0 total records. 0.004 secs
m30002| Fri Feb 22 11:20:31.670 [conn7] build index done. scanned 0 total records. 0.004 secs
m30001| Fri Feb 22 11:20:31.670 [conn7] build index done. scanned 0 total records. 0.004 secs
m30999| Fri Feb 22 11:20:31.758 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000,localhost:30001,localhost:30002", version: Timestamp 1000|0, versionEpoch: ObjectId('5127547fd4b973931fc9a229'), serverID: ObjectId('5127547ed4b973931fc9a227'), shard: "shard0000", shardHost: "localhost:30000" } 0x1180c80 2
m30999| Fri Feb 22 11:20:31.759 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:20:31.759 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000,localhost:30001,localhost:30002", version: Timestamp 1000|0, versionEpoch: ObjectId('5127547fd4b973931fc9a229'), serverID: ObjectId('5127547ed4b973931fc9a227'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x1180c80 2
m30000| Fri Feb 22 11:20:31.759 [conn9] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 11:20:31.759 [conn9] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:20:31.760 [conn9] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:20:31.760 [initandlisten] connection accepted from 127.0.0.1:40271 #12 (11 connections now open)
m30000| Fri Feb 22 11:20:31.760 [conn9] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:31.760 [initandlisten] connection accepted from 127.0.0.1:43078 #12 (11 connections now open)
m30002| Fri Feb 22 11:20:31.760 [initandlisten] connection accepted from 127.0.0.1:60374 #12 (11 connections now open)
m30999| Fri Feb 22 11:20:31.761 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30000| Fri Feb 22 11:20:31.761 [conn10] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.795 [conn10] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.816 [conn10] CMD fsync: sync:1 lock:0
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
m30999| Fri Feb 22 11:20:31.898 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:31.898 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:31.899 [initandlisten] connection accepted from 127.0.0.1:34264 #13 (12 connections now open)
m30001| Fri Feb 22 11:20:31.899 [initandlisten] connection accepted from 127.0.0.1:49945 #13 (12 connections now open)
m30002| Fri Feb 22 11:20:31.900 [initandlisten] connection accepted from 127.0.0.1:43096 #13 (12 connections now open)
m30000| Fri Feb 22 11:20:31.902 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:20:31.902 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257 (sleeping for 30000ms)
m30000| Fri Feb 22 11:20:31.902 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:31.902 [initandlisten] connection accepted from 127.0.0.1:49538 #14 (13 connections now open)
m30000| Fri Feb 22 11:20:31.902 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30001| Fri Feb 22 11:20:31.913 [initandlisten] connection accepted from 127.0.0.1:51906 #14 (13 connections now open)
m30000| Fri Feb 22 11:20:31.913 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30002| Fri Feb 22 11:20:31.913 [initandlisten] connection accepted from 127.0.0.1:33814 #14 (13 connections now open)
m30000| Fri Feb 22 11:20:31.913 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.935 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:31.946 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.956 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:31.981 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.032 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.032 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.065 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.069 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.091 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.108 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.168 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.168 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.205 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.209 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.231 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.248 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.305 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754800cfd6a2130a0abd0
m30000| Fri Feb 22 11:20:32.306 [conn11] splitChunk accepted at version 1|0||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:32.306 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.334 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.355 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.480 [conn12] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
m30000| Fri Feb 22 11:20:32.509 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:32-512754800cfd6a2130a0abd1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532032509), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:32.510 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.539 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.561 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.612 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.640 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.660 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.714 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.739 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.764 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.816 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
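The setup and repeated split sequence recorded above (addShard, enablesharding, shardcollection, then one splitChunk plus distributed-lock cycle per split point) is driven from the test shell. The following is a minimal illustrative sketch of the kind of mongo-shell code behind this log, not the actual test file; the variable `s`, the `ShardingTest` option values, and the split loop bounds are assumptions inferred from the log, and `ShardingTest` option names vary across server versions:

```javascript
// Sketch only (assumed, not verbatim from the test): a 3-shard cluster with a
// sync (SCCC) config-server triplet, matching ports 30000-30002 / 30999 above.
var s = new ShardingTest({ name: "balance_tags1", shards: 3, mongos: 1,
                           other: { sync: true, chunkSize: 1 } });

// Produces "enabling sharding on: test" and the shardcollection lines.
s.adminCommand({ enablesharding: "test" });
s.adminCommand({ shardcollection: "test.foo", key: { _id: 1 } });

// Each split sends a splitChunk command to shard0000, which acquires the
// distributed lock 'test.foo/...' on the config servers, commits the new
// chunk bounds, logs a "split" change event, and releases the lock.
for (var i = 0; i < 6; i++) {
    s.adminCommand({ split: "test.foo", middle: { _id: i } });
}
```

Each iteration corresponds to one received splitChunk request / lock acquired / metadata event / lock unlocked cycle in the log; the surrounding `CMD fsync` bursts are the SyncClusterConnection flushing all three config servers around every metadata write.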
m30000| Fri Feb 22 11:20:32.816 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:111 reslen:37 918ms
m30999| Fri Feb 22 11:20:32.817 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5127547fd4b973931fc9a229 based on: 1|0||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:32.818 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:32.818 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:32.819 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.843 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.868 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:32.919 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:32.944 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:32.968 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.021 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754800cfd6a2130a0abd2
m30000| Fri Feb 22 11:20:33.022 [conn11] splitChunk accepted at version 1|2||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:33.022 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.051 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.072 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.158 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:33-512754810cfd6a2130a0abd3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532033158), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:33.158 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.187 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.208 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.260 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.285 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.310 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.363 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:33.363 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 1.0 } ], shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:60 reslen:103 544ms
m30999| Fri Feb 22 11:20:33.364 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||5127547fd4b973931fc9a229 based on: 1|2||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:33.365 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|4||000000000000000000000000min: { _id: 1.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:33.365 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:33.365 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.390 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.414 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.465 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.490 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.515 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.568 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754810cfd6a2130a0abd4
m30000| Fri Feb 22 11:20:33.569 [conn11] splitChunk accepted at version 1|4||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:33.569 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.601 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.625 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.705 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:33-512754810cfd6a2130a0abd5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532033705), what: "split", ns: "test.foo", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:33.705 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.733 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.755 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.807 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.832 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.857 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:33.909 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:33.909 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 2.0 } ], shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 544ms
m30999| Fri Feb 22 11:20:33.910 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||5127547fd4b973931fc9a229 based on: 1|4||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:33.911 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|6||000000000000000000000000min: { _id: 2.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:33.911 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:33.912 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:33.936 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:33.961 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.012 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.037 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.063 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.114 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754810cfd6a2130a0abd6
m30000| Fri Feb 22 11:20:34.115 [conn11] splitChunk accepted at version 1|6||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:34.115 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.144 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.166 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.251 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:34-512754820cfd6a2130a0abd7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532034251), what: "split", ns: "test.foo", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:34.251 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.280 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.302 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.353 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.378 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.403 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.456 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:34.456 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 3.0 } ], shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 544ms
m30999| Fri Feb 22 11:20:34.457 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||5127547fd4b973931fc9a229 based on: 1|6||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:34.458 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|8||000000000000000000000000min: { _id: 3.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:34.458 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:34.458 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.484 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.511 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.593 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.617 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.642 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.696 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754820cfd6a2130a0abd8
m30000| Fri Feb 22 11:20:34.697 [conn11] splitChunk accepted at version 1|8||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:34.697 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.726 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.750 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.832 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:34-512754820cfd6a2130a0abd9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532034832), what: "split", ns: "test.foo", details: { before: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:34.832 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.861 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.883 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:34.935 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:34.960 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:34.985 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.037 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:35.037 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 4.0 } ], shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:103 579ms
m30999| Fri Feb 22 11:20:35.038 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||5127547fd4b973931fc9a229 based on: 1|8||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:35.039 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|10||000000000000000000000000min: { _id: 4.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:35.039 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:35.039 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.064 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.089 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.139 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.165 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.190 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.242 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754830cfd6a2130a0abda
m30000| Fri Feb 22 11:20:35.243 [conn11] splitChunk accepted at version 1|10||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:35.243 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.271 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.293 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.378 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:35-512754830cfd6a2130a0abdb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532035378), what: "split", ns: "test.foo", details: { before: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:35.378 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.407 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.429 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.481 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.506 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.531 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.583 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:35.583 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 5.0 } ], shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:48 reslen:103 544ms
m30999| Fri Feb 22 11:20:35.584 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||5127547fd4b973931fc9a229 based on: 1|10||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:35.585 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|12||000000000000000000000000min: { _id: 5.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:35.586 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:35.586 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.611 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.640 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.720 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.745 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.773 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.857 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754830cfd6a2130a0abdc
m30000| Fri Feb 22 11:20:35.858 [conn11] splitChunk accepted at version 1|12||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:35.858 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:35.892 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:35.914 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:35.993 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:35-512754830cfd6a2130a0abdd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532035993), what: "split", ns: "test.foo", details: { before: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:35.993 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.027 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.048 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.130 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.150 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.174 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.232 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:36.232 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 6.0 } ], shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:67 reslen:103 646ms
m30999| Fri Feb 22 11:20:36.233 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||5127547fd4b973931fc9a229 based on: 1|12||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:36.234 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|14||000000000000000000000000min: { _id: 6.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:36.234 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:36.234 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.254 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.278 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.335 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.354 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.379 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.437 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754840cfd6a2130a0abde
m30000| Fri Feb 22 11:20:36.438 [conn11] splitChunk accepted at version 1|14||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:36.438 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.465 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.483 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.540 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:36-512754840cfd6a2130a0abdf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532036540), what: "split", ns: "test.foo", details: { before: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:36.540 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.568 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.590 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.642 [conn14] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:36.667 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:36.667 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 11:20:36.667 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.693 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.779 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:36.779 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 7.0 } ], shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 544ms
m30999| Fri Feb 22 11:20:36.780 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||5127547fd4b973931fc9a229 based on: 1|14||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:36.781 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|16||000000000000000000000000min: { _id: 7.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:36.781 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:36.781 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.806 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.830 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.881 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:36.906 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:36.931 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:36.984 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754840cfd6a2130a0abe0
m30000| Fri Feb 22 11:20:36.984 [conn11] splitChunk accepted at version 1|16||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:36.985 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.014 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.038 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.120 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:37-512754850cfd6a2130a0abe1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532037120), what: "split", ns: "test.foo", details: { before: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:37.120 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.149 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.171 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.223 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.248 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.273 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.325 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:37.325 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 8.0 } ], shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:60 reslen:103 544ms
m30999| Fri Feb 22 11:20:37.326 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||5127547fd4b973931fc9a229 based on: 1|16||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:37.327 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:37.327 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:37.327 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.352 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.377 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.428 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.452 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.477 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.530 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754850cfd6a2130a0abe2
m30000| Fri Feb 22 11:20:37.531 [conn11] splitChunk accepted at version 1|18||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:37.531 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.559 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.581 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.705 [conn12] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
m30000| Fri Feb 22 11:20:37.735 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:37-512754850cfd6a2130a0abe3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532037735), what: "split", ns: "test.foo", details: { before: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 9.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 9.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:37.735 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.763 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.785 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.837 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.862 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.887 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:37.940 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:37.940 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 9.0 } ], shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 612ms
m30999| Fri Feb 22 11:20:37.940 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||5127547fd4b973931fc9a229 based on: 1|18||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:37.941 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|20||000000000000000000000000min: { _id: 9.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:37.942 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:37.942 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:37.966 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:37.991 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.042 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.067 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.094 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.145 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754850cfd6a2130a0abe4
m30000| Fri Feb 22 11:20:38.146 [conn11] splitChunk accepted at version 1|20||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:38.146 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.176 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.199 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.281 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:38-512754860cfd6a2130a0abe5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532038281), what: "split", ns: "test.foo", details: { before: { min: { _id: 9.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 9.0 }, max: { _id: 10.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:38.281 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.310 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.333 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.384 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.409 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.436 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.486 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:38.486 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 9.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 10.0 } ], shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 544ms
m30999| Fri Feb 22 11:20:38.487 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||5127547fd4b973931fc9a229 based on: 1|20||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:38.488 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|22||000000000000000000000000min: { _id: 10.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:38.488 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 11.0 } ], shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:38.488 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.515 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.541 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.623 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.648 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.676 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.725 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754860cfd6a2130a0abe6
m30000| Fri Feb 22 11:20:38.726 [conn11] splitChunk accepted at version 1|22||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:38.726 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.759 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.780 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.862 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:38-512754860cfd6a2130a0abe7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532038862), what: "split", ns: "test.foo", details: { before: { min: { _id: 10.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 10.0 }, max: { _id: 11.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 11.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:38.862 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:38.895 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:38.916 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:38.998 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.024 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.052 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.173 [conn14] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms
m30000| Fri Feb 22 11:20:39.203 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:39.203 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 10.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 11.0 } ], shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:68 reslen:103 714ms
m30999| Fri Feb 22 11:20:39.204 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||5127547fd4b973931fc9a229 based on: 1|22||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:39.205 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|24||000000000000000000000000min: { _id: 11.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:39.205 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 11.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:39.206 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.232 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.262 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.339 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.364 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.393 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.442 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754870cfd6a2130a0abe8
m30000| Fri Feb 22 11:20:39.443 [conn11] splitChunk accepted at version 1|24||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:39.443 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.475 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.497 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.578 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:39-512754870cfd6a2130a0abe9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532039578), what: "split", ns: "test.foo", details: { before: { min: { _id: 11.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 11.0 }, max: { _id: 12.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 12.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:39.579 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.611 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.632 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.715 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.740 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.769 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.818 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:39.818 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 11.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 12.0 } ], shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:108 reslen:103 612ms
m30999| Fri Feb 22 11:20:39.819 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||5127547fd4b973931fc9a229 based on: 1|24||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:39.820 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|26||000000000000000000000000min: { _id: 12.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:39.820 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 12.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 13.0 } ], shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:39.820 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.845 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:39.873 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:39.954 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:39.979 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.007 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.057 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754870cfd6a2130a0abea
m30000| Fri Feb 22 11:20:40.058 [conn11] splitChunk accepted at version 1|26||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:40.058 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.087 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.108 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.193 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:40-512754880cfd6a2130a0abeb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532040193), what: "split", ns: "test.foo", details: { before: { min: { _id: 12.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 12.0 }, max: { _id: 13.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 13.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:40.193 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.222 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.243 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.296 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.321 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.346 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.398 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:40.398 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 12.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 13.0 } ], shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 578ms
m30999| Fri Feb 22 11:20:40.399 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||5127547fd4b973931fc9a229 based on: 1|26||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:40.400 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|28||000000000000000000000000min: { _id: 13.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:40.400 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 13.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], shardId: "test.foo-_id_13.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:40.401 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.427 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.453 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.573 [conn14] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms
m30000| Fri Feb 22 11:20:40.603 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.628 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.653 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.705 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754880cfd6a2130a0abec
m30000| Fri Feb 22 11:20:40.706 [conn11] splitChunk accepted at version 1|28||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:40.706 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.734 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.755 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.842 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:40-512754880cfd6a2130a0abed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532040842), what: "split", ns: "test.foo", details: { before: { min: { _id: 13.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 13.0 }, max: { _id: 14.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 14.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:40.842 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.871 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.893 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:40.945 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:40.970 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:40.995 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:41.047 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:41.047 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 13.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 14.0 } ], shardId: "test.foo-_id_13.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:59 reslen:103 647ms
m30999| Fri Feb 22 11:20:41.048 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||5127547fd4b973931fc9a229 based on: 1|28||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:41.049 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|30||000000000000000000000000min: { _id: 14.0 }max: { _id: MaxKey }
m30000| Fri Feb 22 11:20:41.049 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 14.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 15.0 } ], shardId: "test.foo-_id_14.0", configdb: "localhost:30000,localhost:30001,localhost:30002" }
m30000| Fri Feb 22 11:20:41.049 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:41.074 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:41.099 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:41.150 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:41.174 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:41.199 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:41.252 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754890cfd6a2130a0abee
m30000| Fri Feb 22 11:20:41.253 [conn11] splitChunk accepted at version 1|30||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:41.253 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:41.281 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:41.303 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:41.389 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:41-512754890cfd6a2130a0abef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532041389), what: "split", ns: "test.foo", details: { before: { min: { _id: 14.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 14.0 }, max: { _id: 15.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
m30000| Fri Feb 22 11:20:41.389 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:41.418 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:41.439 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:41.491 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:41.518 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:41.547 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:41.628 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:41.628 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 14.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 15.0 } ], shardId: "test.foo-_id_14.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 578ms m30999| Fri Feb 22 11:20:41.628 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||5127547fd4b973931fc9a229 based on: 1|30||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:41.630 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|32||000000000000000000000000min: { _id: 15.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:41.630 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], shardId: "test.foo-_id_15.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:41.630 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.654 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.679 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.730 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.755 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.780 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.833 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754890cfd6a2130a0abf0 m30000| Fri Feb 22 11:20:41.833 [conn11] splitChunk accepted at version 1|32||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:41.833 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.862 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:41.883 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:41.969 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:41-512754890cfd6a2130a0abf1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532041969), what: "split", ns: "test.foo", details: { before: { min: { _id: 15.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 15.0 }, max: { _id: 16.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 16.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:41.969 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:41.998 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.020 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.072 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.096 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.121 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.174 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:42.174 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 15.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 16.0 } ], shardId: "test.foo-_id_15.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:58 reslen:103 544ms m30999| Fri Feb 22 11:20:42.175 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||5127547fd4b973931fc9a229 based on: 1|32||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:42.176 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|34||000000000000000000000000min: { _id: 16.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:42.176 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 16.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 17.0 } ], shardId: "test.foo-_id_16.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:42.176 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.201 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.227 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.311 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.335 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.360 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.413 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127548a0cfd6a2130a0abf2 m30000| Fri Feb 22 11:20:42.414 [conn11] splitChunk accepted at version 1|34||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:42.414 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.443 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.464 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.549 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:42-5127548a0cfd6a2130a0abf3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532042549), what: "split", ns: "test.foo", details: { before: { min: { _id: 16.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 16.0 }, max: { _id: 17.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 17.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:42.549 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.582 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.605 [conn12] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:42.668 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:42.668 [Balancer] skipping balancing round because balancing is disabled m30000| Fri Feb 22 11:20:42.687 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.714 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.744 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.823 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:42.823 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 16.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 17.0 } ], shardId: "test.foo-_id_16.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:74 reslen:103 647ms m30999| Fri Feb 22 11:20:42.824 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||5127547fd4b973931fc9a229 based on: 1|34||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:42.825 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|36||000000000000000000000000min: { _id: 17.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:42.825 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 17.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], shardId: "test.foo-_id_17.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:42.826 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.850 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:42.879 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:42.960 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:42.985 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.013 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.096 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127548a0cfd6a2130a0abf4 m30000| Fri Feb 22 11:20:43.097 [conn11] splitChunk accepted at version 1|36||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:43.097 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.130 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.152 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.233 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:43-5127548b0cfd6a2130a0abf5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532043233), what: "split", ns: "test.foo", details: { before: { min: { _id: 17.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 17.0 }, max: { _id: 18.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 18.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:43.233 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.266 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.288 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.369 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.395 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.424 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.506 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:43.506 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 17.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 18.0 } ], shardId: "test.foo-_id_17.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:77 reslen:103 680ms m30999| Fri Feb 22 11:20:43.507 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||5127547fd4b973931fc9a229 based on: 1|36||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:20:43.508 [conn1] splitting: test.foo shard: ns:test.fooshard: shard0000:localhost:30000lastmod: 1|38||000000000000000000000000min: { _id: 18.0 }max: { _id: MaxKey } m30000| Fri Feb 22 11:20:43.508 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 18.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 19.0 } ], shardId: "test.foo-_id_18.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } m30000| Fri Feb 22 11:20:43.509 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.537 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.566 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.643 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.667 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.696 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.780 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127548b0cfd6a2130a0abf6 m30000| Fri Feb 22 11:20:43.781 [conn11] splitChunk accepted at version 1|38||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:43.781 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.815 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.837 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:43.916 [conn11] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:43-5127548b0cfd6a2130a0abf7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532043916), what: "split", ns: "test.foo", details: { before: { min: { _id: 18.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 18.0 }, max: { _id: 19.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: 19.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } } m30000| Fri Feb 22 11:20:43.916 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:43.949 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:43.971 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:44.053 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:44.078 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:44.106 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:44.189 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:20:44.189 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 18.0 }, max: { _id: MaxKey }, from: "shard0000", splitKeys: [ { _id: 19.0 } ], shardId: "test.foo-_id_18.0", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:79 reslen:103 680ms m30999| Fri Feb 22 11:20:44.190 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||5127547fd4b973931fc9a229 based on: 1|38||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:44.191 [conn10] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:44.223 [conn10] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:44.244 [conn10] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:48.669 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:20:48.670 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:20:48.670 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:20:48 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275490d4b973931fc9a22a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127547ed4b973931fc9a228" } } m30000| Fri Feb 22 11:20:48.670 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:48.699 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:48.724 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:48.785 [conn5] CMD fsync: sync:1 lock:0 --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, 
"minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("5127547dd4b973931fc9a225") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
      test.foo
        shard key: { "_id" : 1 }
        chunks:
          shard0000 21
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : shard0000 { "t" : 1000, "i" : 1 }
        { "_id" : 0 } -->> { "_id" : 1 } on : shard0000 { "t" : 1000, "i" : 3 }
        { "_id" : 1 } -->> { "_id" : 2 } on : shard0000 { "t" : 1000, "i" : 5 }
        { "_id" : 2 } -->> { "_id" : 3 } on : shard0000 { "t" : 1000, "i" : 7 }
        { "_id" : 3 } -->> { "_id" : 4 } on : shard0000 { "t" : 1000, "i" : 9 }
        { "_id" : 4 } -->> { "_id" : 5 } on : shard0000 { "t" : 1000, "i" : 11 }
        { "_id" : 5 } -->> { "_id" : 6 } on : shard0000 { "t" : 1000, "i" : 13 }
        { "_id" : 6 } -->> { "_id" : 7 } on : shard0000 { "t" : 1000, "i" : 15 }
        { "_id" : 7 } -->> { "_id" : 8 } on : shard0000 { "t" : 1000, "i" : 17 }
        { "_id" : 8 } -->> { "_id" : 9 } on : shard0000 { "t" : 1000, "i" : 19 }
        { "_id" : 9 } -->> { "_id" : 10 } on : shard0000 { "t" : 1000, "i" : 21 }
        { "_id" : 10 } -->> { "_id" : 11 } on : shard0000 { "t" : 1000, "i" : 23 }
        { "_id" : 11 } -->> { "_id" : 12 } on : shard0000 { "t" : 1000, "i" : 25 }
        { "_id" : 12 } -->> { "_id" : 13 } on : shard0000 { "t" : 1000, "i" : 27 }
        { "_id" : 13 } -->> { "_id" : 14 } on : shard0000 { "t" : 1000, "i" : 29 }
        { "_id" : 14 } -->> { "_id" : 15 } on : shard0000 { "t" : 1000, "i" : 31 }
        { "_id" : 15 } -->> { "_id" : 16 } on : shard0000 { "t" : 1000, "i" : 33 }
        { "_id" : 16 } -->> { "_id" : 17 } on : shard0000 { "t" : 1000, "i" : 35 }
        { "_id" : 17 } -->> { "_id" : 18 } on : shard0000 { "t" : 1000, "i" : 37 }
        { "_id" : 18 } -->> { "_id" : 19 } on : shard0000 { "t" : 1000, "i" : 39 }
        { "_id" : 19 } -->> { "_id" : { "$maxKey" : 1 
} } on : shard0000 { "t" : 1000, "i" : 40 } m30001| Fri Feb 22 11:20:48.816 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:48.841 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:20:48.922 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275490d4b973931fc9a22a m30999| Fri Feb 22 11:20:48.922 [Balancer] *** start balancing round m30999| Fri Feb 22 11:20:48.922 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:20:48.922 [Balancer] secondaryThrottle: 1 m30000| Fri Feb 22 11:20:48.923 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:48.948 [conn7] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:48.973 [conn7] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:48.994 [conn7] build index config.tags { _id: 1 } m30002| Fri Feb 22 11:20:48.994 [conn7] build index config.tags { _id: 1 } m30001| Fri Feb 22 11:20:48.994 [conn7] build index config.tags { _id: 1 } m30001| Fri Feb 22 11:20:48.997 [conn7] build index done. scanned 0 total records. 0.002 secs m30001| Fri Feb 22 11:20:48.997 [conn7] info: creating collection config.tags on add index m30001| Fri Feb 22 11:20:48.997 [conn7] build index config.tags { ns: 1, min: 1 } m30002| Fri Feb 22 11:20:48.997 [conn7] build index done. scanned 0 total records. 0.002 secs m30002| Fri Feb 22 11:20:48.997 [conn7] info: creating collection config.tags on add index m30002| Fri Feb 22 11:20:48.997 [conn7] build index config.tags { ns: 1, min: 1 } m30000| Fri Feb 22 11:20:48.997 [conn7] build index done. scanned 0 total records. 0.003 secs m30000| Fri Feb 22 11:20:48.998 [conn7] info: creating collection config.tags on add index m30000| Fri Feb 22 11:20:48.998 [conn7] build index config.tags { ns: 1, min: 1 } m30001| Fri Feb 22 11:20:48.999 [conn7] build index done. scanned 0 total records. 0.001 secs m30002| Fri Feb 22 11:20:48.999 [conn7] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 11:20:49.000 [conn7] build index done. 
scanned 0 total records. 0.002 secs m30000| Fri Feb 22 11:20:49.103 [conn7] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms m30999| Fri Feb 22 11:20:49.127 [Balancer] shard0002 has more chunks me:0 best: shard0001:0 m30999| Fri Feb 22 11:20:49.127 [Balancer] collection : test.foo m30999| Fri Feb 22 11:20:49.127 [Balancer] donor : shard0000 chunks on 21 m30999| Fri Feb 22 11:20:49.127 [Balancer] receiver : shard0001 chunks on 0 m30999| Fri Feb 22 11:20:49.127 [Balancer] threshold : 4 m30999| Fri Feb 22 11:20:49.127 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:20:49.127 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 1|1||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:20:49.127 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:20:49.127 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:20:49.128 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.154 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.180 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.229 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.253 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.278 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 
11:20:49.331 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754910cfd6a2130a0abf8 m30000| Fri Feb 22 11:20:49.332 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-512754910cfd6a2130a0abf9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532049332), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:20:49.332 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.360 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.381 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.435 [conn11] moveChunk request accepted at version 1|40||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:20:49.435 [conn11] moveChunk number of documents: 0 m30001| Fri Feb 22 11:20:49.435 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30000| Fri Feb 22 11:20:49.436 [initandlisten] connection accepted from 127.0.0.1:62544 #15 (14 connections now open) m30001| Fri Feb 22 11:20:49.437 [FileAllocator] allocating new datafile /data/db/balance_tags11/test.ns, filling with zeroes... m30001| Fri Feb 22 11:20:49.437 [FileAllocator] done allocating datafile /data/db/balance_tags11/test.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:20:49.437 [FileAllocator] allocating new datafile /data/db/balance_tags11/test.0, filling with zeroes... m30001| Fri Feb 22 11:20:49.437 [FileAllocator] done allocating datafile /data/db/balance_tags11/test.0, size: 64MB, took 0 secs m30001| Fri Feb 22 11:20:49.437 [FileAllocator] allocating new datafile /data/db/balance_tags11/test.1, filling with zeroes... 
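The `[Balancer]` lines preceding this migration spell out the decision: the donor (`shard0000`) holds 21 chunks, the receiver (`shard0001`) holds 0, and the threshold is 4, so one chunk gets moved. A simplified model of that imbalance check (illustrative only; the real mongos balancer logic also weighs tags, drain state, and max-size limits):

```python
def should_move(donor_chunks, receiver_chunks, threshold):
    """Move one chunk when the chunk-count spread meets the threshold."""
    return donor_chunks - receiver_chunks >= threshold

# Values taken from the balancer log lines above.
print(should_move(21, 0, 4))  # True: shard0000 -> shard0001 migration starts
print(should_move(2, 1, 4))   # False: spread of 1 is under threshold
```

This also explains why the threshold drops to 2 in the later round: with fewer chunks per shard in play, the balancer tolerates a smaller spread before it acts.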
m30001| Fri Feb 22 11:20:49.437 [FileAllocator] done allocating datafile /data/db/balance_tags11/test.1, size: 128MB, took 0 secs m30001| Fri Feb 22 11:20:49.440 [migrateThread] build index test.foo { _id: 1 } m30001| Fri Feb 22 11:20:49.441 [migrateThread] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 11:20:49.441 [migrateThread] info: creating collection test.foo on add index m30001| Fri Feb 22 11:20:49.441 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:20:49.441 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.442 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:20:49.445 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:20:49.446 [conn11] moveChunk setting version to: 2|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:20:49.446 [initandlisten] connection accepted from 127.0.0.1:38369 #15 (14 connections now open) m30001| Fri Feb 22 11:20:49.446 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:20:49.452 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.452 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:20:49.452 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-5127549178e37a7f0861eba1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532049452), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: 
{ _id: 0.0 }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:20:49.452 [migrateThread] SyncClusterConnection connecting to [localhost:30000] m30000| Fri Feb 22 11:20:49.453 [initandlisten] connection accepted from 127.0.0.1:43618 #16 (15 connections now open) m30001| Fri Feb 22 11:20:49.453 [migrateThread] SyncClusterConnection connecting to [localhost:30001] m30001| Fri Feb 22 11:20:49.453 [initandlisten] connection accepted from 127.0.0.1:62454 #16 (15 connections now open) m30001| Fri Feb 22 11:20:49.453 [migrateThread] SyncClusterConnection connecting to [localhost:30002] m30002| Fri Feb 22 11:20:49.453 [initandlisten] connection accepted from 127.0.0.1:45677 #15 (14 connections now open) m30000| Fri Feb 22 11:20:49.453 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.456 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:20:49.456 [conn11] moveChunk updating self version to: 2|1||5127547fd4b973931fc9a229 through { _id: 0.0 } -> { _id: 1.0 } for collection 'test.foo' m30000| Fri Feb 22 11:20:49.456 [conn11] SyncClusterConnection connecting to [localhost:30000] m30000| Fri Feb 22 11:20:49.462 [conn11] SyncClusterConnection connecting to [localhost:30001] m30000| Fri Feb 22 11:20:49.462 [initandlisten] connection accepted from 127.0.0.1:45549 #17 (16 connections now open) m30000| Fri Feb 22 11:20:49.464 [conn11] SyncClusterConnection connecting to [localhost:30002] m30001| Fri Feb 22 11:20:49.464 [initandlisten] connection accepted from 127.0.0.1:40060 #17 (16 connections now open) m30002| Fri Feb 22 11:20:49.464 [initandlisten] connection accepted from 127.0.0.1:56139 #16 (15 connections now open) m30000| Fri Feb 22 11:20:49.464 [conn17] CMD fsync: sync:1 
lock:0 m30001| Fri Feb 22 11:20:49.480 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.490 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.519 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.537 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.571 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.599 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.605 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-512754910cfd6a2130a0abfa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532049605), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:20:49.605 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:20:49.631 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.634 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:20:49.661 [conn12] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:20:49.742 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:20:49.742 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:20:49.742 [cleanupOldData-512754910cfd6a2130a0abfb] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 0.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:20:49.742 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:20:49.762 [cleanupOldData-512754910cfd6a2130a0abfb] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:20:49.762 
[cleanupOldData-512754910cfd6a2130a0abfb] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 0.0 }
m30000| Fri Feb 22 11:20:49.762 [cleanupOldData-512754910cfd6a2130a0abfb] moveChunk deleted 0 documents for test.foo from { _id: MinKey } -> { _id: 0.0 }
m30001| Fri Feb 22 11:20:49.767 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:49.802 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:49.878 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:49.878 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:49-512754910cfd6a2130a0abfc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532049878), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 307, step3 of 6: 0, step4 of 6: 10, step5 of 6: 296, step6 of 6: 0 } }
m30000| Fri Feb 22 11:20:49.879 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:49.908 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:49.938 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:50.015 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:18 r:45 w:20 reslen:37 887ms
m30999| Fri Feb 22 11:20:50.015 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:50.016 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 2|1||5127547fd4b973931fc9a229 based on: 1|40||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:50.016 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:20:50.016 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:50.045 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:50.080 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:50.152 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:20:55.152 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:20:55.153 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:20:55.154 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:20:55 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51275497d4b973931fc9a22b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51275490d4b973931fc9a22a" } }
m30000| Fri Feb 22 11:20:55.154 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.189 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.230 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:55.302 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.335 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.375 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:55.439 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275497d4b973931fc9a22b
m30999| Fri Feb 22 11:20:55.439 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:20:55.439 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:20:55.439 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:20:55.440 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:20:55.440 [Balancer] donor : shard0000 chunks on 20
m30999| Fri Feb 22 11:20:55.440 [Balancer] receiver : shard0002 chunks on 0
m30999| Fri Feb 22 11:20:55.440 [Balancer] threshold : 2
m30999| Fri Feb 22 11:20:55.441 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:20:55.441 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 2|1||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:20:55.441 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:20:55.441 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:20:55.441 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.466 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.506 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:55.575 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.600 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.640 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:55.712 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754970cfd6a2130a0abfd
m30000| Fri Feb 22 11:20:55.712 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:55-512754970cfd6a2130a0abfe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532055712), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:20:55.712 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.745 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.775 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:55.849 [conn11] moveChunk request accepted at version 2|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:20:55.849 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:20:55.850 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 1.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30000| Fri Feb 22 11:20:55.850 [initandlisten] connection accepted from 127.0.0.1:35927 #18 (17 connections now open)
m30002| Fri Feb 22 11:20:55.851 [FileAllocator] allocating new datafile /data/db/balance_tags12/test.ns, filling with zeroes...
m30002| Fri Feb 22 11:20:55.851 [FileAllocator] done allocating datafile /data/db/balance_tags12/test.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:20:55.851 [FileAllocator] allocating new datafile /data/db/balance_tags12/test.0, filling with zeroes...
m30002| Fri Feb 22 11:20:55.852 [FileAllocator] done allocating datafile /data/db/balance_tags12/test.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:20:55.852 [FileAllocator] allocating new datafile /data/db/balance_tags12/test.1, filling with zeroes...
m30002| Fri Feb 22 11:20:55.852 [FileAllocator] done allocating datafile /data/db/balance_tags12/test.1, size: 128MB, took 0 secs
m30002| Fri Feb 22 11:20:55.856 [migrateThread] build index test.foo { _id: 1 }
m30002| Fri Feb 22 11:20:55.857 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30002| Fri Feb 22 11:20:55.857 [migrateThread] info: creating collection test.foo on add index
m30002| Fri Feb 22 11:20:55.858 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:20:55.858 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 }
m30002| Fri Feb 22 11:20:55.858 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 }
m30000| Fri Feb 22 11:20:55.860 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:20:55.860 [conn11] moveChunk setting version to: 3|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:20:55.860 [initandlisten] connection accepted from 127.0.0.1:33670 #17 (16 connections now open)
m30002| Fri Feb 22 11:20:55.860 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:20:55.868 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 }
m30002| Fri Feb 22 11:20:55.868 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 }
m30002| Fri Feb 22 11:20:55.868 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:55-51275497aaaba61d9eb250fb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532055868), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 7, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30002| Fri Feb 22 11:20:55.868 [migrateThread] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:20:55.869 [initandlisten] connection accepted from 127.0.0.1:61265 #19 (18 connections now open)
m30002| Fri Feb 22 11:20:55.869 [migrateThread] SyncClusterConnection connecting to [localhost:30001]
m30002| Fri Feb 22 11:20:55.869 [migrateThread] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:20:55.869 [initandlisten] connection accepted from 127.0.0.1:58911 #18 (17 connections now open)
m30002| Fri Feb 22 11:20:55.869 [initandlisten] connection accepted from 127.0.0.1:39075 #18 (17 connections now open)
m30000| Fri Feb 22 11:20:55.869 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:55.871 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:20:55.871 [conn11] moveChunk updating self version to: 3|1||5127547fd4b973931fc9a229 through { _id: 1.0 } -> { _id: 2.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:20:55.871 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.903 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:55.905 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.933 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:55.941 [conn18] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:56.019 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:56-512754980cfd6a2130a0abff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532056019), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:20:56.019 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:56.019 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:56.058 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:56.062 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:56.092 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:56.097 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:20:56.190 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:20:56.190 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:20:56.190 [cleanupOldData-512754980cfd6a2130a0ac00] (start) waiting to cleanup test.foo from { _id: 0.0 } -> { _id: 1.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:20:56.190 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:56.210 [cleanupOldData-512754980cfd6a2130a0ac00] waiting to remove documents for test.foo from { _id: 0.0 } -> { _id: 1.0 }
m30000| Fri Feb 22 11:20:56.210 [cleanupOldData-512754980cfd6a2130a0ac00] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 1.0 }
m30000| Fri Feb 22 11:20:56.211 [cleanupOldData-512754980cfd6a2130a0ac00] moveChunk deleted 1 documents for test.foo from { _id: 0.0 } -> { _id: 1.0 }
m30001| Fri Feb 22 11:20:56.215 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:56.255 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:56.326 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:20:56.326 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:20:56-512754980cfd6a2130a0ac01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532056326), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
m30000| Fri Feb 22 11:20:56.327 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:56.361 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:56.392 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:56.463 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:62 w:20 reslen:37 1022ms
m30999| Fri Feb 22 11:20:56.463 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:20:56.465 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 3|1||5127547fd4b973931fc9a229 based on: 2|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:20:56.465 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:20:56.465 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:56.498 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:56.536 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:56.600 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30000| Fri Feb 22 11:20:58.680 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:58.707 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:58.751 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:20:58.847 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:20:58.872 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:20:58.912 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:20:58.983 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:20:58 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
m30999| Fri Feb 22 11:21:01.600 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:01.601 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:01.601 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:01 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127549dd4b973931fc9a22c" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51275497d4b973931fc9a22b" } }
m30000| Fri Feb 22 11:21:01.601 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:01.633 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:01.671 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:01.740 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:01.772 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:01.810 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:01.877 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127549dd4b973931fc9a22c
m30999| Fri Feb 22 11:21:01.877 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:01.877 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:01.877 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:01.878 [Balancer] shard0002 has more chunks me:1 best: shard0001:1
m30999| Fri Feb 22 11:21:01.878 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:01.878 [Balancer] donor : shard0000 chunks on 19
m30999| Fri Feb 22 11:21:01.878 [Balancer] receiver : shard0001 chunks on 1
m30999| Fri Feb 22 11:21:01.878 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:01.878 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30999| Fri Feb 22 11:21:01.878 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 3|1||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 11:21:01.879 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:01.879 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:01.879 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:01.899 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:01.938 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.013 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.033 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.074 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.150 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 5127549d0cfd6a2130a0ac02
m30000| Fri Feb 22 11:21:02.150 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e0cfd6a2130a0ac03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532062150), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:02.150 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.184 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.216 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.287 [conn11] moveChunk request accepted at version 3|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:02.287 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:21:02.288 [migrateThread] starting receiving-end of migration of chunk { _id: 1.0 } -> { _id: 2.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 11:21:02.289 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:21:02.289 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1.0 } -> { _id: 2.0 }
m30001| Fri Feb 22 11:21:02.289 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1.0 } -> { _id: 2.0 }
m30000| Fri Feb 22 11:21:02.298 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:02.298 [conn11] moveChunk setting version to: 4|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:21:02.298 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:21:02.300 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1.0 } -> { _id: 2.0 }
m30001| Fri Feb 22 11:21:02.300 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1.0 } -> { _id: 2.0 }
m30001| Fri Feb 22 11:21:02.300 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e78e37a7f0861eba2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532062300), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:02.300 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.305 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.308 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:02.308 [conn11] moveChunk updating self version to: 4|1||5127547fd4b973931fc9a229 through { _id: 2.0 } -> { _id: 3.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:02.308 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.326 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.342 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.346 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.369 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.393 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.398 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.491 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e0cfd6a2130a0ac04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532062491), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:02.491 [conn11] SyncClusterConnection connecting to [localhost:30000]
m30000| Fri Feb 22 11:21:02.491 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.492 [initandlisten] connection accepted from 127.0.0.1:51262 #20 (19 connections now open)
m30000| Fri Feb 22 11:21:02.492 [conn11] SyncClusterConnection connecting to [localhost:30001]
m30000| Fri Feb 22 11:21:02.503 [conn11] SyncClusterConnection connecting to [localhost:30002]
m30001| Fri Feb 22 11:21:02.503 [initandlisten] connection accepted from 127.0.0.1:41492 #19 (18 connections now open)
m30002| Fri Feb 22 11:21:02.503 [initandlisten] connection accepted from 127.0.0.1:64097 #19 (18 connections now open)
m30000| Fri Feb 22 11:21:02.503 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.525 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.536 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.555 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.564 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:02.628 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:02.628 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:02.628 [cleanupOldData-5127549e0cfd6a2130a0ac05] (start) waiting to cleanup test.foo from { _id: 1.0 } -> { _id: 2.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:02.628 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.648 [cleanupOldData-5127549e0cfd6a2130a0ac05] waiting to remove documents for test.foo from { _id: 1.0 } -> { _id: 2.0 }
m30000| Fri Feb 22 11:21:02.648 [cleanupOldData-5127549e0cfd6a2130a0ac05] moveChunk starting delete for: test.foo from { _id: 1.0 } -> { _id: 2.0 }
m30000| Fri Feb 22 11:21:02.648 [cleanupOldData-5127549e0cfd6a2130a0ac05] moveChunk deleted 1 documents for test.foo from { _id: 1.0 } -> { _id: 2.0 }
m30001| Fri Feb 22 11:21:02.653 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.694 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.764 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:02.764 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:02-5127549e0cfd6a2130a0ac06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532062764), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:02.771 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.804 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.834 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:02.901 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:26 r:121 w:18 reslen:37 1022ms
m30999| Fri Feb 22 11:21:02.901 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:02.902 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 25 version: 4|1||5127547fd4b973931fc9a229 based on: 3|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:02.902 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:02.903 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:02.936 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:02.974 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:03.037 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:21:08.038 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:08.039 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:08.039 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:08 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754a4d4b973931fc9a22d" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127549dd4b973931fc9a22c" } }
m30000| Fri Feb 22 11:21:08.039 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.072 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.107 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.153 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.182 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.216 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:08.256 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754a4d4b973931fc9a22d
m30999| Fri Feb 22 11:21:08.256 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:08.256 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:08.256 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:08.257 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:08.257 [Balancer] donor : shard0000 chunks on 18
m30999| Fri Feb 22 11:21:08.257 [Balancer] receiver : shard0002 chunks on 1
m30999| Fri Feb 22 11:21:08.257 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:08.257 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_2.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:21:08.257 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 4|1||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:21:08.258 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:08.258 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:08.258 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.283 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.317 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.358 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.383 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.417 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.460 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754a40cfd6a2130a0ac07
m30000| Fri Feb 22 11:21:08.460 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a40cfd6a2130a0ac08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532068460), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:08.461 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.489 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.518 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.563 [conn11] moveChunk request accepted at version 4|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:08.564 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:21:08.564 [migrateThread] starting receiving-end of migration of chunk { _id: 2.0 } -> { _id: 3.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:21:08.565 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:21:08.565 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30002| Fri Feb 22 11:21:08.565 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30000| Fri Feb 22 11:21:08.574 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:08.574 [conn11] moveChunk setting version to: 5|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:21:08.574 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:21:08.575 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30002| Fri Feb 22 11:21:08.575 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30002| Fri Feb 22 11:21:08.576 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a4aaaba61d9eb250fc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532068575), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:08.576 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.584 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:08.584 [conn11] moveChunk updating self version to: 5|1||5127547fd4b973931fc9a229 through { _id: 3.0 } -> { _id: 4.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:08.585 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.601 [conn18] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.605 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.636 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.640 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.699 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a40cfd6a2130a0ac09", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532068699), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:08.699 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.729 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.759 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:08.802 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:08.802 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:08.802 [cleanupOldData-512754a40cfd6a2130a0ac0a] (start) waiting to cleanup test.foo from { _id: 2.0 } -> { _id: 3.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:08.802 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.822 [cleanupOldData-512754a40cfd6a2130a0ac0a] waiting to remove documents for test.foo from { _id: 2.0 } -> { _id: 3.0 }
m30000| Fri Feb 22 11:21:08.822 [cleanupOldData-512754a40cfd6a2130a0ac0a] moveChunk starting delete for: test.foo from { _id: 2.0 } -> { _id: 3.0 }
m30000| Fri Feb 22 11:21:08.822 [cleanupOldData-512754a40cfd6a2130a0ac0a] moveChunk deleted 1 documents for test.foo from { _id: 2.0 } -> { _id: 3.0 }
m30001| Fri Feb 22 11:21:08.827 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.863 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:08.904 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:08.904 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:08-512754a40cfd6a2130a0ac0b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532068904), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 305, step3 of 6: 0, step4 of 6: 10, step5 of 6: 227, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:08.905 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:08.935 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:08.967 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:09.007 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:32 r:92 w:21 reslen:37 749ms
m30999| Fri Feb 22 11:21:09.007 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:09.009 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 5|1||5127547fd4b973931fc9a229 based on: 4|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:09.009 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:09.009 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:09.039 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:09.074 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:09.110 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
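The [Balancer] lines in each round above pick a donor and receiver before issuing moveChunk. A minimal sketch of that selection policy, as it appears in the log (donor = shard with the most chunks, receiver = shard with the fewest, move only when the spread exceeds a threshold) — this is an illustrative simplification, not MongoDB's actual balancer code:

```python
def select_migration(chunk_counts, threshold=2):
    """Pick (donor, receiver) for one collection, or None if balanced.

    chunk_counts: dict mapping shard name -> chunk count, as printed in the
    'donor : ... chunks on N' / 'receiver : ... chunks on N' log lines.
    """
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    # Only migrate when the imbalance reaches the logged 'threshold : 2'.
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None
    return donor, receiver

# The round logged above: shard0000 holds 17 chunks, shard0001 and
# shard0002 hold 2 each; ties on the receiver side break by dict order.
print(select_migration({"shard0000": 17, "shard0001": 2, "shard0002": 2}))
# → ('shard0000', 'shard0001')
```

One chunk is moved per round, then the balancer releases the distributed lock and re-evaluates a few seconds later, which is why the donor count drops by one in each successive round (17, 16, 15, ...).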
m30999| Fri Feb 22 11:21:14.110 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:14.111 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:14.111 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:14 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754aad4b973931fc9a22e" } }
m30000| Fri Feb 22 11:21:14.111 [conn5] CMD fsync: sync:1 lock:0
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754a4d4b973931fc9a22d" } }
m30001| Fri Feb 22 11:21:14.140 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.175 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:14.224 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:14.253 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.289 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:14.361 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754aad4b973931fc9a22e
m30999| Fri Feb 22 11:21:14.361 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:14.361 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:14.361 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:14.362 [Balancer] shard0002 has more chunks me:2 best: shard0001:2
m30999| Fri Feb 22 11:21:14.362 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:14.362 [Balancer] donor : shard0000 chunks on 17
m30999| Fri Feb 22 11:21:14.362 [Balancer] receiver : shard0001 chunks on 2
m30999| Fri Feb 22 11:21:14.362 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:14.362 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_3.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 3.0 }, max: { _id: 4.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30999| Fri Feb 22 11:21:14.363 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 5|1||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 11:21:14.363 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:14.363 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:14.363 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:14.388 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.423 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:14.551 [conn14] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:97 102ms
m30000| Fri Feb 22 11:21:14.565 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:14.593 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.635 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:14.702 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754aa0cfd6a2130a0ac0c
m30000| Fri Feb 22 11:21:14.702 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:14-512754aa0cfd6a2130a0ac0d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532074702), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:14.702 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:14.736 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.766 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:14.839 [conn11] moveChunk request accepted at version 5|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:14.839 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:21:14.840 [migrateThread] starting receiving-end of migration of chunk { _id: 3.0 } -> { _id: 4.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 11:21:14.841 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:21:14.841 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 3.0 } -> { _id: 4.0 }
m30001| Fri Feb 22 11:21:14.841 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 3.0 } -> { _id: 4.0 }
m30000| Fri Feb 22 11:21:14.850 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:14.850 [conn11] moveChunk setting version to: 6|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:21:14.850 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:21:14.852 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 3.0 } -> { _id: 4.0 }
m30001| Fri Feb 22 11:21:14.852 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 3.0 } -> { _id: 4.0 }
m30001| Fri Feb 22 11:21:14.852 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:14-512754aa78e37a7f0861eba3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532074852), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:14.852 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:14.860 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:14.860 [conn11] moveChunk updating self version to: 6|1||5127547fd4b973931fc9a229 through { _id: 4.0 } -> { _id: 5.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:14.860 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:14.877 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:14.882 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.925 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:14.925 [conn15] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:15.009 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:15-512754ab0cfd6a2130a0ac0e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532075009), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:15.009 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:15.042 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:15.074 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:15.145 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:15.145 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:15.145 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:15.146 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:15.146 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:15.146 [cleanupOldData-512754ab0cfd6a2130a0ac0f] (start) waiting to cleanup test.foo from { _id: 3.0 } -> { _id: 4.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:15.146 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:15.166 [cleanupOldData-512754ab0cfd6a2130a0ac0f] waiting to remove documents for test.foo from { _id: 3.0 } -> { _id: 4.0 }
m30000| Fri Feb 22 11:21:15.166 [cleanupOldData-512754ab0cfd6a2130a0ac0f] moveChunk starting delete for: test.foo from { _id: 3.0 } -> { _id: 4.0 }
m30000| Fri Feb 22 11:21:15.166 [cleanupOldData-512754ab0cfd6a2130a0ac0f] moveChunk deleted 1 documents for test.foo from { _id: 3.0 } -> { _id: 4.0 }
m30001| Fri Feb 22 11:21:15.171 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:15.211 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:15.316 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:15.316 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:15-512754ab0cfd6a2130a0ac10", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532075316), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 6: 0, step2 of 6: 476, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:15.316 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:15.350 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:15.380 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:15.487 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_3.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:121 w:19 reslen:37 1124ms
m30999| Fri Feb 22 11:21:15.487 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:15.488 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 6|1||5127547fd4b973931fc9a229 based on: 5|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:15.488 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:15.488 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:15.522 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:15.562 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:15.657 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
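Each "moveChunk.from" event above records per-step millisecond timings on the donor ("step1 of 6: 0, step2 of 6: 476, ..."), where step 2 (taking the distributed lock, with config-server fsyncs) and step 5 (the commit/critical section) dominate. A small sketch of that instrumentation pattern, assuming hypothetical phase labels (the numbered steps in the log are not named in the output):

```python
import time

# Hypothetical labels for the six donor-side phases whose timings appear in
# the moveChunk.from details documents; only the step numbers are from the log.
PHASES = [
    "step1 precondition checks",
    "step2 acquire distributed lock",
    "step3 start clone on recipient",
    "step4 wait for steady state",
    "step5 critical section and commit",
    "step6 schedule async cleanup",
]

def time_phases(phases, run_phase):
    """Run each phase in order, returning {label: elapsed_ms} like the
    'stepN of 6: <ms>' fields in the metadata events."""
    timings = {}
    for label in phases:
        start = time.perf_counter()
        run_phase(label)                      # do the phase's work
        timings[label] = int((time.perf_counter() - start) * 1000)
    return timings

# No-op stand-in work; real phases would clone documents, fsync, etc.
timings = time_phases(PHASES, lambda label: None)
```

This per-phase breakdown is what makes it visible in the log that secondaryThrottle plus three fsync-ing config servers, not the data copy itself (one 29-byte document), accounts for nearly all of each ~1-second migration.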
m30999| Fri Feb 22 11:21:20.658 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:20.658 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:20.659 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:20 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754b0d4b973931fc9a22f" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754aad4b973931fc9a22e" } }
m30000| Fri Feb 22 11:21:20.659 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:20.688 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:20.722 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:20.806 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:20.834 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:20.868 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:20.976 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754b0d4b973931fc9a22f
m30999| Fri Feb 22 11:21:20.976 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:20.976 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:20.976 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:20.978 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:20.978 [Balancer] donor : shard0000 chunks on 16
m30999| Fri Feb 22 11:21:20.978 [Balancer] receiver : shard0002 chunks on 2
m30999| Fri Feb 22 11:21:20.978 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:20.978 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_4.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 4.0 }, max: { _id: 5.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:21:20.978 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 6|1||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:21:20.978 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:20.978 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:20.978 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:21.003 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.039 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.147 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:21.172 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.207 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.317 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754b00cfd6a2130a0ac11
m30000| Fri Feb 22 11:21:21.317 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:21-512754b10cfd6a2130a0ac12", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532081317), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:21.317 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:21.346 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.376 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.489 [conn11] moveChunk request accepted at version 6|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:21.489 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:21:21.489 [migrateThread] starting receiving-end of migration of chunk { _id: 4.0 } -> { _id: 5.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:21:21.490 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:21:21.490 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30002| Fri Feb 22 11:21:21.490 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30000| Fri Feb 22 11:21:21.499 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:21.499 [conn11] moveChunk setting version to: 7|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:21:21.499 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:21:21.500 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30002| Fri Feb 22 11:21:21.500 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30002| Fri Feb 22 11:21:21.501 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:21-512754b1aaaba61d9eb250fd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532081501), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:21.509 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:21.509 [conn11] moveChunk updating self version to: 7|1||5127547fd4b973931fc9a229 through { _id: 5.0 } -> { _id: 6.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:21.510 [conn17] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.511 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:21.545 [conn18] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:21.547 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.580 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.584 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.696 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:21-512754b10cfd6a2130a0ac13", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532081696), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:21.696 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:21.726 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.756 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:21.867 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:21.867 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:21.867 [cleanupOldData-512754b10cfd6a2130a0ac14] (start) waiting to cleanup test.foo from { _id: 4.0 } -> { _id: 5.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:21.867 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:21.887 [cleanupOldData-512754b10cfd6a2130a0ac14] waiting to remove documents for test.foo from { _id: 4.0 } -> { _id: 5.0 }
m30000| Fri Feb 22 11:21:21.887 [cleanupOldData-512754b10cfd6a2130a0ac14] moveChunk starting delete for: test.foo from { _id: 4.0 } -> { _id: 5.0 }
m30000| Fri Feb 22 11:21:21.887 [cleanupOldData-512754b10cfd6a2130a0ac14] moveChunk deleted 1 documents for test.foo from { _id: 4.0 } -> { _id: 5.0 }
m30001| Fri Feb 22 11:21:21.892 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:21.927 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:22.037 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:22.037 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:22-512754b20cfd6a2130a0ac15", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532082037), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 367, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:22.037 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:22.066 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:22.099 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:22.208 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:89 w:20 reslen:37 1229ms
m30999| Fri Feb 22 11:21:22.208 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:22.209 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 7|1||5127547fd4b973931fc9a229 based on: 6|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:22.209 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:22.209 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:22.238 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:22.271 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:22.378 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:21:27.379 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:27.379 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:27.380 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:27 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754b7d4b973931fc9a230" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754b0d4b973931fc9a22f" } }
m30000| Fri Feb 22 11:21:27.380 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:27.413 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:27.453 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:27.553 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:27.586 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:27.625 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:27.723 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754b7d4b973931fc9a230
m30999| Fri Feb 22 11:21:27.723 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:27.723 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:27.723 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:27.725 [Balancer] shard0002 has more chunks me:3 best: shard0001:3
m30999| Fri Feb 22 11:21:27.725 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:27.725 [Balancer] donor : shard0000 chunks on 15
m30999| Fri Feb 22 11:21:27.725 [Balancer] receiver : shard0001 chunks on 3
m30999| Fri Feb 22 11:21:27.725 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:27.725 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_5.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 5.0 }, max: { _id: 6.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30999| Fri Feb 22 11:21:27.725 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 7|1||000000000000000000000000min: { _id: 5.0 }max: { _id: 6.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 11:21:27.725 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:27.726 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 5.0 }, max: { _id: 6.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:27.726 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:27.751 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:27.791 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:27.894 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:27.918 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:27.957 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.064 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754b70cfd6a2130a0ac16
m30000| Fri Feb 22 11:21:28.065 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b80cfd6a2130a0ac17", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532088064), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:28.065 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.090 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.120 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.236 [conn11] moveChunk request accepted at version 7|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:28.236 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:21:28.236 [migrateThread] starting receiving-end of migration of chunk { _id: 5.0 } -> { _id: 6.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 11:21:28.238 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:21:28.238 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.238 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 5.0 } -> { _id: 6.0 }
m30000| Fri Feb 22 11:21:28.247 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 5.0 }, max: { _id: 6.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:21:28.247 [conn11] moveChunk setting version to: 8|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:21:28.247 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:21:28.248 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.248 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.248 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b878e37a7f0861eba4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532088248), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:28.249 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.257 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 5.0 }, max: { _id: 6.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:28.257 [conn11] moveChunk updating self version to: 8|1||5127547fd4b973931fc9a229 through { _id: 6.0 } -> { _id: 7.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:28.257 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.303 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.306 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.347 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.361 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.440 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b80cfd6a2130a0ac18", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532088440), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:28.440 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:28.463 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.493 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:28.542 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:28.542 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:28.543 [cleanupOldData-512754b80cfd6a2130a0ac19] (start) waiting to cleanup test.foo from { _id: 5.0 } -> { _id: 6.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:28.543 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.563 [cleanupOldData-512754b80cfd6a2130a0ac19] waiting to remove documents for test.foo from { _id: 5.0 } -> { _id: 6.0 }
m30000| Fri Feb 22 11:21:28.563 [cleanupOldData-512754b80cfd6a2130a0ac19] moveChunk starting delete for: test.foo from { _id: 5.0 } -> { _id: 6.0 }
m30000| Fri Feb 22 11:21:28.563 [cleanupOldData-512754b80cfd6a2130a0ac19] moveChunk deleted 1 documents for test.foo from { _id: 5.0 } -> { _id: 6.0 }
m30001| Fri Feb 22 11:21:28.569 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:28.604 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:28.679 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:28.679 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:28-512754b80cfd6a2130a0ac1a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532088679), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 5.0 }, max: { _id: 6.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:28.679 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:28.705 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:28.734 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:28.815 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 5.0 }, max: { _id: 6.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_5.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:33 r:115 w:25 reslen:37 1090ms m30999| Fri Feb 22 11:21:28.815 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:28.816 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 8|1||5127547fd4b973931fc9a229 based on: 7|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:28.817 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:28.817 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:28.846 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:28.880 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:28.952 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30000| Fri Feb 22 11:21:28.983 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:29.009 [conn7] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:29.045 [conn7] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:29.122 [conn7] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:29.148 [conn7] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:29.182 [conn7] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:29.258 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:21:28 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms m30000| Fri Feb 22 11:21:32.662 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:32.690 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:32.722 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:32.798 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:32.825 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:32.856 [conn19] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:33.953 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:33.961 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:33.961 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:33 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754bdd4b973931fc9a231" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754b7d4b973931fc9a230" } 
} m30000| Fri Feb 22 11:21:33.961 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:33.991 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.026 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:34.093 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.122 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.157 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:34.229 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754bdd4b973931fc9a231 m30999| Fri Feb 22 11:21:34.229 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:34.229 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:34.229 [Balancer] secondaryThrottle: 1 m30000| Fri Feb 22 11:21:34.230 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.260 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.290 [conn6] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:34.365 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:34.365 [Balancer] donor : shard0000 chunks on 14 m30999| Fri Feb 22 11:21:34.365 [Balancer] receiver : shard0002 chunks on 3 m30999| Fri Feb 22 11:21:34.365 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:34.365 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_6.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 6.0 }, max: { _id: 7.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] m30999| Fri Feb 22 11:21:34.365 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 8|1||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 m30000| Fri Feb 22 11:21:34.366 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:34.366 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", 
fromShard: "shard0000", toShard: "shard0002", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:34.366 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.391 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.427 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:34.502 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.527 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.562 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:34.638 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754be0cfd6a2130a0ac1b m30000| Fri Feb 22 11:21:34.638 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:34-512754be0cfd6a2130a0ac1c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532094638), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:34.638 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.665 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.697 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:34.775 [conn11] moveChunk request accepted at version 8|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:34.776 [conn11] moveChunk number of documents: 1 m30002| Fri Feb 22 11:21:34.776 [migrateThread] starting receiving-end of migration of chunk { _id: 6.0 } -> { _id: 7.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30002| Fri Feb 22 11:21:34.777 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:21:34.777 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 
} -> { _id: 7.0 } m30002| Fri Feb 22 11:21:34.777 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 } m30000| Fri Feb 22 11:21:34.786 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:21:34.786 [conn11] moveChunk setting version to: 9|0||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:21:34.786 [conn17] Waiting for commit to finish m30002| Fri Feb 22 11:21:34.787 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 } -> { _id: 7.0 } m30002| Fri Feb 22 11:21:34.787 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 } m30002| Fri Feb 22 11:21:34.787 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:34-512754beaaaba61d9eb250fe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532094787), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:21:34.787 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:34.796 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:34.796 [conn11] moveChunk updating self version to: 9|1||5127547fd4b973931fc9a229 through { _id: 7.0 } -> { _id: 8.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:34.796 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.822 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 
11:21:34.825 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.853 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.863 [conn18] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:34.911 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:34-512754be0cfd6a2130a0ac1d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532094911), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:34.911 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:34.942 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:34.978 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:35.047 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:35.048 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:35.048 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:35.048 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:35.048 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:35.048 [cleanupOldData-512754bf0cfd6a2130a0ac1e] (start) waiting to cleanup test.foo from { _id: 6.0 } -> { _id: 7.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:21:35.048 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:35.068 [cleanupOldData-512754bf0cfd6a2130a0ac1e] waiting to remove documents for test.foo from { _id: 6.0 } -> { _id: 7.0 } m30000| Fri Feb 22 11:21:35.068 [cleanupOldData-512754bf0cfd6a2130a0ac1e] moveChunk starting delete for: test.foo from { _id: 6.0 } -> { _id: 7.0 } m30000| Fri Feb 22 11:21:35.068 [cleanupOldData-512754bf0cfd6a2130a0ac1e] moveChunk deleted 1 documents for test.foo from { _id: 6.0 } -> { _id: 7.0 } m30001| Fri Feb 22 11:21:35.074 [conn14] CMD fsync: 
sync:1 lock:0 m30002| Fri Feb 22 11:21:35.111 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:35.184 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. m30000| Fri Feb 22 11:21:35.184 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:35-512754bf0cfd6a2130a0ac1f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532095184), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 6: 0, step2 of 6: 409, step3 of 6: 0, step4 of 6: 10, step5 of 6: 261, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:35.184 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:35.210 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:35.242 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:35.320 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:22 r:104 w:18 reslen:37 954ms m30999| Fri Feb 22 11:21:35.321 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:35.322 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 9|1||5127547fd4b973931fc9a229 based on: 8|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:35.322 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:35.322 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:35.352 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:35.389 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:35.457 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30999| Fri Feb 22 11:21:40.458 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:40.459 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:40.459 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:40 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754c4d4b973931fc9a232" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754bdd4b973931fc9a231" } } m30000| Fri Feb 22 11:21:40.459 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:40.493 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:40.533 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:40.641 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:40.676 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:40.719 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:40.846 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754c4d4b973931fc9a232 m30999| Fri Feb 22 11:21:40.846 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:40.846 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:40.846 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:40.848 [Balancer] shard0002 has more chunks me:4 best: shard0001:4 m30999| Fri Feb 22 11:21:40.848 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:40.848 [Balancer] donor : shard0000 chunks on 13 m30999| Fri Feb 22 11:21:40.848 [Balancer] receiver : shard0001 chunks on 4 m30999| Fri Feb 22 11:21:40.848 
[Balancer] threshold : 2 m30999| Fri Feb 22 11:21:40.848 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_7.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 7.0 }, max: { _id: 8.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30999| Fri Feb 22 11:21:40.848 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 9|1||000000000000000000000000min: { _id: 7.0 }max: { _id: 8.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 11:21:40.848 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:40.848 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 7.0 }, max: { _id: 8.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:40.848 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:40.877 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:40.918 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.017 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:41.048 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.088 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.187 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754c40cfd6a2130a0ac20 m30000| Fri Feb 22 11:21:41.187 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c50cfd6a2130a0ac21", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532101187), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0000", to: "shard0001" } } 
m30000| Fri Feb 22 11:21:41.187 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:41.213 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.243 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.359 [conn11] moveChunk request accepted at version 9|1||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:21:41.359 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:21:41.359 [migrateThread] starting receiving-end of migration of chunk { _id: 7.0 } -> { _id: 8.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 11:21:41.360 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:21:41.360 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 7.0 } -> { _id: 8.0 } m30001| Fri Feb 22 11:21:41.361 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 7.0 } -> { _id: 8.0 } m30000| Fri Feb 22 11:21:41.369 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 7.0 }, max: { _id: 8.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:21:41.370 [conn11] moveChunk setting version to: 10|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:21:41.370 [conn15] Waiting for commit to finish m30001| Fri Feb 22 11:21:41.371 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 7.0 } -> { _id: 8.0 } m30001| Fri Feb 22 11:21:41.371 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 7.0 } -> { _id: 8.0 } m30001| Fri Feb 22 11:21:41.371 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c578e37a7f0861eba5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532101371), what: "moveChunk.to", ns: "test.foo", details: { 
min: { _id: 7.0 }, max: { _id: 8.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:21:41.371 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.380 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 7.0 }, max: { _id: 8.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 11:21:41.380 [conn11] moveChunk updating self version to: 10|1||5127547fd4b973931fc9a229 through { _id: 8.0 } -> { _id: 9.0 } for collection 'test.foo' m30000| Fri Feb 22 11:21:41.380 [conn17] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:41.410 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:41.413 [conn17] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.459 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.459 [conn16] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.596 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c50cfd6a2130a0ac22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532101596), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 11:21:41.597 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:41.622 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.653 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:41.767 [conn11] forking for cleanup of chunk data m30000| Fri Feb 22 11:21:41.767 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 
11:21:41.767 [conn11] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 11:21:41.767 [cleanupOldData-512754c50cfd6a2130a0ac23] (start) waiting to cleanup test.foo from { _id: 7.0 } -> { _id: 8.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:21:41.767 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.787 [cleanupOldData-512754c50cfd6a2130a0ac23] waiting to remove documents for test.foo from { _id: 7.0 } -> { _id: 8.0 } m30000| Fri Feb 22 11:21:41.788 [cleanupOldData-512754c50cfd6a2130a0ac23] moveChunk starting delete for: test.foo from { _id: 7.0 } -> { _id: 8.0 } m30000| Fri Feb 22 11:21:41.788 [cleanupOldData-512754c50cfd6a2130a0ac23] moveChunk deleted 1 documents for test.foo from { _id: 7.0 } -> { _id: 8.0 } m30001| Fri Feb 22 11:21:41.795 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.832 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:41.938 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked. 
m30000| Fri Feb 22 11:21:41.938 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:41-512754c50cfd6a2130a0ac24", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532101938), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 7.0 }, max: { _id: 8.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 397, step6 of 6: 0 } } m30000| Fri Feb 22 11:21:41.938 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:41.964 [conn19] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:41.996 [conn19] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:42.108 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 7.0 }, max: { _id: 8.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_7.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:127 w:22 reslen:37 1260ms m30999| Fri Feb 22 11:21:42.108 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:21:42.109 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 10|1||5127547fd4b973931fc9a229 based on: 9|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:21:42.110 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:21:42.110 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:42.139 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:42.174 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:42.279 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
m30999| Fri Feb 22 11:21:47.279 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:21:47.280 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:21:47.280 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:21:47 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512754cbd4b973931fc9a233" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512754c4d4b973931fc9a232" } } m30000| Fri Feb 22 11:21:47.280 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:47.311 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:47.348 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:47.393 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:47.423 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:47.460 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:21:47.495 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754cbd4b973931fc9a233 m30999| Fri Feb 22 11:21:47.495 [Balancer] *** start balancing round m30999| Fri Feb 22 11:21:47.495 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:21:47.495 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:21:47.497 [Balancer] collection : test.foo m30999| Fri Feb 22 11:21:47.497 [Balancer] donor : shard0000 chunks on 12 m30999| Fri Feb 22 11:21:47.497 [Balancer] receiver : shard0002 chunks on 4 m30999| Fri Feb 22 11:21:47.497 [Balancer] threshold : 2 m30999| Fri Feb 22 11:21:47.497 [Balancer] ns: test.foo going to 
move { _id: "test.foo-_id_8.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 8.0 }, max: { _id: 9.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag [] m30999| Fri Feb 22 11:21:47.497 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 10|1||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002 m30000| Fri Feb 22 11:21:47.497 [conn11] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 11:21:47.497 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:21:47.497 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:47.522 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:47.557 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:47.598 [conn14] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:21:47.623 [conn14] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:21:47.663 [conn14] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:21:47.700 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754cb0cfd6a2130a0ac25 m30000| Fri Feb 22 11:21:47.700 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:47-512754cb0cfd6a2130a0ac26", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532107700), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0000", to: "shard0002" } } m30000| Fri Feb 22 11:21:47.700 [conn20] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 
11:21:47.726 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.757 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.803 [conn11] moveChunk request accepted at version 10|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:47.803 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:21:47.804 [migrateThread] starting receiving-end of migration of chunk { _id: 8.0 } -> { _id: 9.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:21:47.805 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:21:47.805 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30002| Fri Feb 22 11:21:47.805 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30000| Fri Feb 22 11:21:47.814 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 11:21:47.814 [conn11] moveChunk setting version to: 11|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:21:47.814 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:21:47.815 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30002| Fri Feb 22 11:21:47.815 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 }
m30002| Fri Feb 22 11:21:47.815 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:47-512754cbaaaba61d9eb250ff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532107815), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:21:47.815 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.824 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:47.824 [conn11] moveChunk updating self version to: 11|1||5127547fd4b973931fc9a229 through { _id: 9.0 } -> { _id: 10.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:47.824 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.841 [conn18] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.854 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.871 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.882 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:47.939 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:47-512754cb0cfd6a2130a0ac27", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532107939), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:21:47.939 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:47.965 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:47.995 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:48.041 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:48.041 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:48.042 [cleanupOldData-512754cc0cfd6a2130a0ac28] (start) waiting to cleanup test.foo from { _id: 8.0 } -> { _id: 9.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:48.042 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.062 [cleanupOldData-512754cc0cfd6a2130a0ac28] waiting to remove documents for test.foo from { _id: 8.0 } -> { _id: 9.0 }
m30000| Fri Feb 22 11:21:48.062 [cleanupOldData-512754cc0cfd6a2130a0ac28] moveChunk starting delete for: test.foo from { _id: 8.0 } -> { _id: 9.0 }
m30000| Fri Feb 22 11:21:48.062 [cleanupOldData-512754cc0cfd6a2130a0ac28] moveChunk deleted 1 documents for test.foo from { _id: 8.0 } -> { _id: 9.0 }
m30001| Fri Feb 22 11:21:48.067 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:48.108 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.144 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:48.144 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:48-512754cc0cfd6a2130a0ac29", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532108144), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 6: 0, step2 of 6: 306, step3 of 6: 0, step4 of 6: 10, step5 of 6: 227, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:48.144 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:48.170 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:48.200 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:48.246 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:29 r:119 w:15 reslen:37 749ms
m30999| Fri Feb 22 11:21:48.246 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:48.248 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 32 version: 11|1||5127547fd4b973931fc9a229 based on: 10|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:48.248 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:48.248 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:48.282 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:48.322 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:48.383 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30999| Fri Feb 22 11:21:53.383 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:53.384 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:53.384 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:53 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754d1d4b973931fc9a234" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754cbd4b973931fc9a233" } }
m30000| Fri Feb 22 11:21:53.384 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.414 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.449 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:53.497 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.526 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.561 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:53.634 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754d1d4b973931fc9a234
m30999| Fri Feb 22 11:21:53.634 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:53.634 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:53.634 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:53.635 [Balancer] shard0002 has more chunks me:5 best: shard0001:5
m30999| Fri Feb 22 11:21:53.635 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:53.635 [Balancer] donor : shard0000 chunks on 11
m30999| Fri Feb 22 11:21:53.635 [Balancer] receiver : shard0001 chunks on 5
m30999| Fri Feb 22 11:21:53.635 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:53.635 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_9.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 9.0 }, max: { _id: 10.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30999| Fri Feb 22 11:21:53.635 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 11|1||000000000000000000000000min: { _id: 9.0 }max: { _id: 10.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 11:21:53.635 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:53.636 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 9.0 }, max: { _id: 10.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:53.636 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.661 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.696 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:53.736 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.761 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.796 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:53.838 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754d10cfd6a2130a0ac2a
m30000| Fri Feb 22 11:21:53.839 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:53-512754d10cfd6a2130a0ac2b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532113838), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:53.839 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:53.864 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:53.894 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:53.942 [conn11] moveChunk request accepted at version 11|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:21:53.942 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:21:53.942 [migrateThread] starting receiving-end of migration of chunk { _id: 9.0 } -> { _id: 10.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 11:21:53.943 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:21:53.943 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 9.0 } -> { _id: 10.0 }
m30001| Fri Feb 22 11:21:53.944 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 9.0 } -> { _id: 10.0 }
m30000| Fri Feb 22 11:21:53.961 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 9.0 }, max: { _id: 10.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 11:21:53.961 [conn11] moveChunk setting version to: 12|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:21:53.961 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:21:53.964 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 9.0 } -> { _id: 10.0 }
m30001| Fri Feb 22 11:21:53.964 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 9.0 } -> { _id: 10.0 }
m30001| Fri Feb 22 11:21:53.964 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:53-512754d178e37a7f0861eba6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532113964), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 20 } }
m30000| Fri Feb 22 11:21:53.964 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:53.971 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 9.0 }, max: { _id: 10.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:21:53.971 [conn11] moveChunk updating self version to: 12|1||5127547fd4b973931fc9a229 through { _id: 10.0 } -> { _id: 11.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:21:53.971 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:54.001 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:54.004 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:54.049 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:54.049 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:54.112 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:54-512754d20cfd6a2130a0ac2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532114112), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:21:54.112 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:54.139 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:54.170 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:54.214 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:21:54.214 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:21:54.214 [cleanupOldData-512754d20cfd6a2130a0ac2d] (start) waiting to cleanup test.foo from { _id: 9.0 } -> { _id: 10.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:21:54.221 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:54.234 [cleanupOldData-512754d20cfd6a2130a0ac2d] waiting to remove documents for test.foo from { _id: 9.0 } -> { _id: 10.0 }
m30000| Fri Feb 22 11:21:54.234 [cleanupOldData-512754d20cfd6a2130a0ac2d] moveChunk starting delete for: test.foo from { _id: 9.0 } -> { _id: 10.0 }
m30000| Fri Feb 22 11:21:54.235 [cleanupOldData-512754d20cfd6a2130a0ac2d] moveChunk deleted 1 documents for test.foo from { _id: 9.0 } -> { _id: 10.0 }
m30001| Fri Feb 22 11:21:54.248 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:54.285 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:54.351 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:21:54.351 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:21:54-512754d20cfd6a2130a0ac2e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532114351), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 9.0 }, max: { _id: 10.0 }, step1 of 6: 0, step2 of 6: 306, step3 of 6: 0, step4 of 6: 18, step5 of 6: 253, step6 of 6: 0 } }
m30000| Fri Feb 22 11:21:54.352 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:54.377 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:54.407 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:54.454 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 9.0 }, max: { _id: 10.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_9.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:41 r:107 w:18 reslen:37 818ms
m30999| Fri Feb 22 11:21:54.454 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:21:54.455 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 12|1||5127547fd4b973931fc9a229 based on: 11|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:21:54.455 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:21:54.455 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:54.484 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:54.519 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:54.590 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30000| Fri Feb 22 11:21:59.259 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:59.288 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:59.338 [conn6] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:59.425 [conn6] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:59.454 [conn6] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:59.484 [conn6] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:59.561 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:21:59 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
m30999| Fri Feb 22 11:21:59.592 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:21:59.593 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:21:59.593 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:21:59 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754d7d4b973931fc9a235" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754d1d4b973931fc9a234" } }
m30000| Fri Feb 22 11:21:59.593 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:59.623 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:59.660 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:21:59.765 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:59.794 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:21:59.830 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:21:59.936 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754d7d4b973931fc9a235
m30999| Fri Feb 22 11:21:59.936 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:21:59.936 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:21:59.936 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:21:59.938 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:21:59.938 [Balancer] donor : shard0000 chunks on 10
m30999| Fri Feb 22 11:21:59.938 [Balancer] receiver : shard0002 chunks on 5
m30999| Fri Feb 22 11:21:59.938 [Balancer] threshold : 2
m30999| Fri Feb 22 11:21:59.938 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_10.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 10.0 }, max: { _id: 11.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
m30999| Fri Feb 22 11:21:59.938 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 12|1||000000000000000000000000min: { _id: 10.0 }max: { _id: 11.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
m30000| Fri Feb 22 11:21:59.938 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:21:59.939 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:21:59.939 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:21:59.968 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.005 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.106 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:00.132 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.172 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.277 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754d70cfd6a2130a0ac2f
m30000| Fri Feb 22 11:22:00.277 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d80cfd6a2130a0ac30", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532120277), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:22:00.277 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:00.303 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.333 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:00.470 [conn19] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms
m30000| Fri Feb 22 11:22:00.482 [conn11] moveChunk request accepted at version 12|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:22:00.483 [conn11] moveChunk number of documents: 1
m30002| Fri Feb 22 11:22:00.483 [migrateThread] starting receiving-end of migration of chunk { _id: 10.0 } -> { _id: 11.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30002| Fri Feb 22 11:22:00.484 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:22:00.484 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
m30002| Fri Feb 22 11:22:00.484 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
m30000| Fri Feb 22 11:22:00.493 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 11:22:00.493 [conn11] moveChunk setting version to: 13|0||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:22:00.493 [conn17] Waiting for commit to finish
m30002| Fri Feb 22 11:22:00.495 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
m30002| Fri Feb 22 11:22:00.495 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 }
m30002| Fri Feb 22 11:22:00.495 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d8aaaba61d9eb25100", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532120495), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:22:00.501 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.503 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:22:00.503 [conn11] moveChunk updating self version to: 13|1||5127547fd4b973931fc9a229 through { _id: 11.0 } -> { _id: 12.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:22:00.503 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:00.536 [conn18] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:00.539 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.575 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.584 [conn18] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.686 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d80cfd6a2130a0ac31", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532120686), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0000", to: "shard0002" } }
m30000| Fri Feb 22 11:22:00.686 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:00.712 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.743 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:00.823 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:00.823 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:00.823 [cleanupOldData-512754d80cfd6a2130a0ac32] (start) waiting to cleanup test.foo from { _id: 10.0 } -> { _id: 11.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:22:00.823 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.843 [cleanupOldData-512754d80cfd6a2130a0ac32] waiting to remove documents for test.foo from { _id: 10.0 } -> { _id: 11.0 }
m30000| Fri Feb 22 11:22:00.843 [cleanupOldData-512754d80cfd6a2130a0ac32] moveChunk starting delete for: test.foo from { _id: 10.0 } -> { _id: 11.0 }
m30000| Fri Feb 22 11:22:00.843 [cleanupOldData-512754d80cfd6a2130a0ac32] moveChunk deleted 1 documents for test.foo from { _id: 10.0 } -> { _id: 11.0 }
m30001| Fri Feb 22 11:22:00.849 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:00.889 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:00.993 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
m30000| Fri Feb 22 11:22:00.993 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:00-512754d80cfd6a2130a0ac33", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532120993), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 6: 0, step2 of 6: 543, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
m30000| Fri Feb 22 11:22:00.993 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:01.020 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:01.050 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:01.130 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:125 w:14 reslen:37 1191ms
m30999| Fri Feb 22 11:22:01.130 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:22:01.132 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 34 version: 13|1||5127547fd4b973931fc9a229 based on: 12|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:22:01.132 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:22:01.132 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:01.167 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:01.210 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:01.301 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
m30000| Fri Feb 22 11:22:02.934 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:02.960 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:02.991 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:03.071 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:03.097 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:03.127 [conn19] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:06.301 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:22:06.302 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:22:06.302 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:22:06 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512754ded4b973931fc9a236" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512754d7d4b973931fc9a235" } }
m30000| Fri Feb 22 11:22:06.302 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:06.331 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:06.366 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:06.441 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:06.469 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:06.504 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:06.577 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754ded4b973931fc9a236
m30999| Fri Feb 22 11:22:06.577 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:22:06.577 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:22:06.577 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:22:06.579 [Balancer] shard0002 has more chunks me:6 best: shard0001:6
m30999| Fri Feb 22 11:22:06.579 [Balancer] collection : test.foo
m30999| Fri Feb 22 11:22:06.579 [Balancer] donor : shard0000 chunks on 9
m30999| Fri Feb 22 11:22:06.579 [Balancer] receiver : shard0001 chunks on 6
m30999| Fri Feb 22 11:22:06.579 [Balancer] threshold : 2
m30999| Fri Feb 22 11:22:06.579 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_11.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 11.0 }, max: { _id: 12.0 }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30999| Fri Feb 22 11:22:06.579 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 13|1||000000000000000000000000min: { _id: 11.0 }max: { _id: 12.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 11:22:06.579 [conn11] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 11:22:06.579 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 11.0 }, max: { _id: 12.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:22:06.579 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:06.605 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:06.640 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:06.714 [conn14] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:06.741 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:06.776 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:06.850 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754de0cfd6a2130a0ac34
m30000| Fri Feb 22 11:22:06.850 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:06-512754de0cfd6a2130a0ac35", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532126850), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:22:06.850 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:06.878 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:06.914 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.022 [conn11] moveChunk request accepted at version 13|1||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:22:07.022 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:22:07.022 [migrateThread] starting receiving-end of migration of chunk { _id: 11.0 } -> { _id: 12.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 11:22:07.023 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:22:07.023 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 11.0 } -> { _id: 12.0 }
m30001| Fri Feb 22 11:22:07.024 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 11.0 } -> { _id: 12.0 }
m30000| Fri Feb 22 11:22:07.032 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 11.0 }, max: { _id: 12.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 11:22:07.032 [conn11] moveChunk setting version to: 14|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:22:07.032 [conn15] Waiting for commit to finish
m30001| Fri Feb 22 11:22:07.034 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 11.0 } -> { _id: 12.0 }
m30001| Fri Feb 22 11:22:07.034 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 11.0 } -> { _id: 12.0 }
m30001| Fri Feb 22 11:22:07.034 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:07-512754df78e37a7f0861eba7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532127034), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:22:07.034 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.042 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 11.0 }, max: { _id: 12.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 11:22:07.043 [conn11] moveChunk updating self version to: 14|1||5127547fd4b973931fc9a229 through { _id: 12.0 } -> { _id: 13.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:22:07.043 [conn17] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.060 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.065 [conn17] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.098 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.116 [conn16] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.225 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:07-512754df0cfd6a2130a0ac36", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532127225), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 11:22:07.225 [conn20] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:07.251 [conn19] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.281 [conn19] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.361 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:07.361 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:07.362 [conn11] forking for cleanup of chunk data
m30000| Fri Feb 22 11:22:07.362 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 11:22:07.362 [conn11] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 11:22:07.362 [cleanupOldData-512754df0cfd6a2130a0ac37] (start) waiting to cleanup test.foo from { _id: 11.0 } -> { _id: 12.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:22:07.362 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.382 [cleanupOldData-512754df0cfd6a2130a0ac37] waiting to remove documents for test.foo from { _id: 11.0 } -> { _id: 12.0 }
m30000| Fri Feb 22 11:22:07.382 [cleanupOldData-512754df0cfd6a2130a0ac37] moveChunk starting delete for: test.foo from { _id: 11.0 } -> { _id: 12.0 }
m30000| Fri Feb 22 11:22:07.382 [cleanupOldData-512754df0cfd6a2130a0ac37] moveChunk deleted 1 documents for test.foo from { _id: 11.0 } -> { _id: 12.0 }
m30001| Fri Feb 22 11:22:07.387 [conn14] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:07.423 [conn14] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:07.498 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
 m30000| Fri Feb 22 11:22:07.498 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:07-512754df0cfd6a2130a0ac38", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532127498), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 11.0 }, max: { _id: 12.0 }, step1 of 6: 0, step2 of 6: 442, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
 m30000| Fri Feb 22 11:22:07.498 [conn20] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:07.524 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:07.554 [conn19] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:07.635 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 11.0 }, max: { _id: 12.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_11.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:106 w:15 reslen:37 1055ms
 m30999| Fri Feb 22 11:22:07.635 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:22:07.636 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 14|1||5127547fd4b973931fc9a229 based on: 13|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:22:07.636 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:22:07.636 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:07.666 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:07.703 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:07.805 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
 m30999| Fri Feb 22 11:22:12.806 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:22:12.807 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
 m30999| Fri Feb 22 11:22:12.807 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:22:12 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "512754e4d4b973931fc9a237" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "512754ded4b973931fc9a236" } }
 m30000| Fri Feb 22 11:22:12.807 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:12.841 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:12.882 [conn5] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:12.989 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.023 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.065 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:13.160 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754e4d4b973931fc9a237
 m30999| Fri Feb 22 11:22:13.160 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:22:13.160 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:22:13.160 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:22:13.162 [Balancer] collection : test.foo
 m30999| Fri Feb 22 11:22:13.162 [Balancer] donor      : shard0000 chunks on 8
 m30999| Fri Feb 22 11:22:13.162 [Balancer] receiver   : shard0002 chunks on 6
 m30999| Fri Feb 22 11:22:13.162 [Balancer] threshold  : 2
 m30999| Fri Feb 22 11:22:13.162 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_12.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 12.0 }, max: { _id: 13.0 }, shard: "shard0000" } from: shard0000 to: shard0002 tag []
 m30999| Fri Feb 22 11:22:13.162 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 14|1||000000000000000000000000min: { _id: 12.0 }max: { _id: 13.0 }) shard0000:localhost:30000 -> shard0002:localhost:30002
 m30000| Fri Feb 22 11:22:13.162 [conn11] warning: secondaryThrottle selected but no replication
 m30000| Fri Feb 22 11:22:13.162 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
 m30000| Fri Feb 22 11:22:13.162 [conn14] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.189 [conn14] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.230 [conn14] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:13.330 [conn14] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.364 [conn14] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.405 [conn14] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:13.501 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' acquired, ts : 512754e50cfd6a2130a0ac39
 m30000| Fri Feb 22 11:22:13.501 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:13-512754e50cfd6a2130a0ac3a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532133501), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0000", to: "shard0002" } }
 m30000| Fri Feb 22 11:22:13.501 [conn20] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.527 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.558 [conn19] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:13.673 [conn11] moveChunk request accepted at version 14|1||5127547fd4b973931fc9a229
 m30000| Fri Feb 22 11:22:13.673 [conn11] moveChunk number of documents: 1
 m30002| Fri Feb 22 11:22:13.673 [migrateThread] starting receiving-end of migration of chunk { _id: 12.0 } -> { _id: 13.0 } for collection test.foo from localhost:30000 (0 slaves detected)
 m30002| Fri Feb 22 11:22:13.675 [migrateThread] Waiting for replication to catch up before entering critical section
 m30002| Fri Feb 22 11:22:13.675 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
 m30002| Fri Feb 22 11:22:13.676 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
 m30000| Fri Feb 22 11:22:13.684 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 11:22:13.684 [conn11] moveChunk setting version to: 15|0||5127547fd4b973931fc9a229
 m30002| Fri Feb 22 11:22:13.684 [conn17] Waiting for commit to finish
 m30002| Fri Feb 22 11:22:13.686 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
 m30002| Fri Feb 22 11:22:13.686 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12.0 } -> { _id: 13.0 }
 m30002| Fri Feb 22 11:22:13.686 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:13-512754e5aaaba61d9eb25101", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532133686), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
 m30000| Fri Feb 22 11:22:13.686 [conn19] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:13.694 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
 m30000| Fri Feb 22 11:22:13.694 [conn11] moveChunk updating self version to: 15|1||5127547fd4b973931fc9a229 through { _id: 13.0 } -> { _id: 14.0 } for collection 'test.foo'
 m30000| Fri Feb 22 11:22:13.694 [conn17] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.727 [conn17] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.731 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.759 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.778 [conn18] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:13.876 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:13-512754e50cfd6a2130a0ac3b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532133876), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0000", to: "shard0002" } }
 m30000| Fri Feb 22 11:22:13.876 [conn20] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.903 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:13.933 [conn19] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:13.960 [conn10] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:13.994 [conn10] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done Global lock acquired
 m30000| Fri Feb 22 11:22:14.013 [conn11] forking for cleanup of chunk data
 m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30000| Fri Feb 22 11:22:14.013 [conn11] MigrateFromStatus::done Global lock acquired
 m30000| Fri Feb 22 11:22:14.013 [cleanupOldData-512754e60cfd6a2130a0ac3c] (start) waiting to cleanup test.foo from { _id: 12.0 } -> { _id: 13.0 }, # cursors remaining: 0
 m30000| Fri Feb 22 11:22:14.013 [conn14] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.030 [conn10] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.033 [cleanupOldData-512754e60cfd6a2130a0ac3c] waiting to remove documents for test.foo from { _id: 12.0 } -> { _id: 13.0 }
 m30000| Fri Feb 22 11:22:14.033 [cleanupOldData-512754e60cfd6a2130a0ac3c] moveChunk starting delete for: test.foo from { _id: 12.0 } -> { _id: 13.0 }
 m30000| Fri Feb 22 11:22:14.034 [cleanupOldData-512754e60cfd6a2130a0ac3c] moveChunk deleted 1 documents for test.foo from { _id: 12.0 } -> { _id: 13.0 }
 m30001| Fri Feb 22 11:22:14.040 [conn14] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.082 [conn14] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.118 [conn10] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.150 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361532031:15257' unlocked.
 m30000| Fri Feb 22 11:22:14.150 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:14-512754e60cfd6a2130a0ac3d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:53660", time: new Date(1361532134150), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
 m30000| Fri Feb 22 11:22:14.150 [conn20] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:14.153 [conn10] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:14.178 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.195 [conn10] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.207 [conn19] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.288 [conn10] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.321 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0002", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:186 w:15 reslen:37 1158ms
 m30999| Fri Feb 22 11:22:14.321 [Balancer] moveChunk result: { ok: 1.0 }
 m30001| Fri Feb 22 11:22:14.322 [conn10] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:14.322 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 36 version: 15|1||5127547fd4b973931fc9a229 based on: 14|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:22:14.323 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:22:14.323 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.355 [conn10] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:14.359 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.398 [conn5] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.458 [conn10] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:14.491 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
 m30001| Fri Feb 22 11:22:14.492 [conn10] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.523 [conn10] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:14.629 [conn10] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:14.662 [conn10] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:14.687 [conn10] CMD fsync: sync:1 lock:0
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("5127547dd4b973931fc9a225") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000", "tags" : [ "a" ] }
	{ "_id" : "shard0001", "host" : "localhost:30001", "tags" : [ "a" ] }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "test", "partitioned" : true, "primary" : "shard0000" }
		test.foo
			shard key: { "_id" : 1 }
			chunks:
				shard0001	7
				shard0002	7
				shard0000	7
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : 0 } on : shard0001 { "t" : 2000, "i" : 0 }
			{ "_id" : 0 } -->> { "_id" : 1 } on : shard0002 { "t" : 3000, "i" : 0 }
			{ "_id" : 1 } -->> { "_id" : 2 } on : shard0001 { "t" : 4000, "i" : 0 }
			{ "_id" : 2 } -->> { "_id" : 3 } on : shard0002 { "t" : 5000, "i" : 0 }
			{ "_id" : 3 } -->> { "_id" : 4 } on : shard0001 { "t" : 6000, "i" : 0 }
			{ "_id" : 4 } -->> { "_id" : 5 } on : shard0002 { "t" : 7000, "i" : 0 }
			{ "_id" : 5 } -->> { "_id" : 6 } on : shard0001 { "t" : 8000, "i" : 0 }
			{ "_id" : 6 } -->> { "_id" : 7 } on : shard0002 { "t" : 9000, "i" : 0 }
			{ "_id" : 7 } -->> { "_id" : 8 } on : shard0001 { "t" : 10000, "i" : 0 }
			{ "_id" : 8 } -->> { "_id" : 9 } on : shard0002 { "t" : 11000, "i" : 0 }
			{ "_id" : 9 } -->> { "_id" : 10 } on : shard0001 { "t" : 12000, "i" : 0 }
			{ "_id" : 10 } -->> { "_id" : 11 } on : shard0002 { "t" : 13000, "i" : 0 }
			{ "_id" : 11 } -->> { "_id" : 12 } on : shard0001 { "t" : 14000, "i" : 0 }
			{ "_id" : 12 } -->> { "_id" : 13 } on : shard0002 { "t" : 15000, "i" : 0 }
			{ "_id" : 13 } -->> { "_id" : 14 } on : shard0000 { "t" : 15000, "i" : 1 }
			{ "_id" : 14 } -->> { "_id" : 15 } on : shard0000 { "t" : 1000, "i" : 31 }
			{ "_id" : 15 } -->> { "_id" : 16 } on : shard0000 { "t" : 1000, "i" : 33 }
			{ "_id" : 16 } -->> { "_id" : 17 } on : shard0000 { "t" : 1000, "i" : 35 }
			{ "_id" : 17 } -->> { "_id" : 18 } on : shard0000 { "t" : 1000, "i" : 37 }
			{ "_id" : 18 } -->> { "_id" : 19 } on : shard0000 { "t" : 1000, "i" : 39 }
			{ "_id" : 19 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 { "t" : 1000, "i" : 40 }
			tag: a  { "_id" : -1 } -->> { "_id" : 1000 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
 m30999| Fri Feb 22 11:22:19.492 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:22:19.493 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
 m30999| Fri Feb 22 11:22:19.493 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:22:19 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "512754ebd4b973931fc9a238" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "512754e4d4b973931fc9a237" } }
 m30000| Fri Feb 22 11:22:19.493 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:19.524 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:19.561 [conn5] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:19.632 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:19.661 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:19.696 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:19.769 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 512754ebd4b973931fc9a238
 m30999| Fri Feb 22 11:22:19.769 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:22:19.769 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:22:19.769 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:22:19.771 [Balancer] ns: test.foo need to split on { _id: -1.0 } because there is a range there
 m30001| Fri Feb 22 11:22:19.771 [conn11] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", splitKeys: [ { _id: -1.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" }
 m30001| Fri Feb 22 11:22:19.772 [initandlisten] connection accepted from 127.0.0.1:48199 #20 (19 connections now open)
 m30002| Fri Feb 22 11:22:19.773 [initandlisten] connection accepted from 127.0.0.1:61722 #20 (19 connections now open)
 m30001| Fri Feb 22 11:22:19.774 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30001:1361532139:10113 (sleeping for 30000ms)
 m30001| Fri Feb 22 11:22:19.774 [conn11] SyncClusterConnection connecting to [localhost:30000]
 m30000| Fri Feb 22 11:22:19.774 [conn16] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:19.774 [initandlisten] connection accepted from 127.0.0.1:46014 #21 (20 connections now open)
 m30001| Fri Feb 22 11:22:19.774 [conn11] SyncClusterConnection connecting to [localhost:30001]
 m30001| Fri Feb 22 11:22:19.775 [initandlisten] connection accepted from 127.0.0.1:38642 #21 (20 connections now open)
 m30001| Fri Feb 22 11:22:19.775 [conn11] SyncClusterConnection connecting to [localhost:30002]
 m30002| Fri Feb 22 11:22:19.775 [initandlisten] connection accepted from 127.0.0.1:36772 #21 (20 connections now open)
 m30000| Fri Feb 22 11:22:19.775 [conn21] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 7 }
 m30001| Fri Feb 22 11:22:19.807 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:19.810 [conn21] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:19.851 [conn15] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:19.861 [conn21] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:19.939 [conn21] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:19.939 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:19.980 [conn21] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:19.985 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.021 [conn15] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.027 [conn21] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:20.110 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.110 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361532139:10113' acquired, ts : 512754eb78e37a7f0861eba8
 m30001| Fri Feb 22 11:22:20.110 [conn11] SyncClusterConnection connecting to [localhost:30000]
 m30001| Fri Feb 22 11:22:20.110 [conn11] SyncClusterConnection connecting to [localhost:30001]
 m30000| Fri Feb 22 11:22:20.110 [initandlisten] connection accepted from 127.0.0.1:46249 #22 (21 connections now open)
 m30001| Fri Feb 22 11:22:20.110 [conn11] SyncClusterConnection connecting to [localhost:30002]
 m30001| Fri Feb 22 11:22:20.110 [initandlisten] connection accepted from 127.0.0.1:42216 #22 (21 connections now open)
 m30002| Fri Feb 22 11:22:20.111 [initandlisten] connection accepted from 127.0.0.1:50697 #22 (21 connections now open)
 m30001| Fri Feb 22 11:22:20.121 [conn11] no current chunk manager found for this shard, will initialize
 m30001| Fri Feb 22 11:22:20.122 [conn11] splitChunk accepted at version 14|0||5127547fd4b973931fc9a229
 m30000| Fri Feb 22 11:22:20.122 [conn22] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.135 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.149 [conn22] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.173 [conn15] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.182 [conn22] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.280 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:20-512754ec78e37a7f0861eba9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:42693", time: new Date(1361532140280), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: -1.0 }, lastmod: Timestamp 15000|2, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') }, right: { min: { _id: -1.0 }, max: { _id: 0.0 }, lastmod: Timestamp 15000|3, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229') } } }
 m30000| Fri Feb 22 11:22:20.280 [conn22] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.308 [conn22] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.346 [conn22] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:20.417 [conn21] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.447 [conn21] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.485 [conn21] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.587 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361532139:10113' unlocked.
 m30001| Fri Feb 22 11:22:20.587 [conn11] command admin.$cmd command: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", splitKeys: [ { _id: -1.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000,localhost:30001,localhost:30002" } ntoreturn:1 keyUpdates:0 locks(micros) r:126 reslen:37 816ms
 m30999| Fri Feb 22 11:22:20.589 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 15|3||5127547fd4b973931fc9a229 based on: 15|1||5127547fd4b973931fc9a229
 m30999| Fri Feb 22 11:22:20.589 [Balancer] split worked: { ok: 1.0 }
 m30999| Fri Feb 22 11:22:20.589 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:22:20.589 [Balancer] *** end of balancing round
 m30000| Fri Feb 22 11:22:20.589 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:20.623 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:20.664 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:20.758 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
 m30000| Fri Feb 22 11:22:29.561 [conn6] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:29.596 [conn6] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:29.628 [conn6] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:29.711 [conn6] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:29.742 [conn6] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:29.769 [conn6] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
 m30999| Fri Feb 22 11:22:29.847 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:22:29 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
 m30000| Fri Feb 22 11:22:33.208 [conn12] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:33.238 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:33.266 [conn12] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:33.353 [conn12] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:33.384 [conn12] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:33.412 [conn12] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
 m30000| Fri Feb 22 11:22:50.247 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:50.269 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:50.306 [conn15] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:50.405 [conn16] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:50.427 [conn16] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:50.466 [conn15] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:50.759 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:22:50.759 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
 m30999| Fri Feb 22 11:22:50.759 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:22:50 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127550ad4b973931fc9a239" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "512754ebd4b973931fc9a238" } }
 m30000| Fri Feb 22 11:22:50.759 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:50.786 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:50.819 [conn5] CMD fsync: sync:1 lock:0
{ "shard0002" : 7, "shard0000" : 7, "shard0001" : 8 }
 m30000| Fri Feb 22 11:22:50.882 [conn5] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:50.909 [conn5] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:50.942 [conn5] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:22:51.018 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127550ad4b973931fc9a239
 m30999| Fri Feb 22 11:22:51.018 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:22:51.018 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:22:51.019 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:22:51.020 [Balancer] chunk { _id: "test.foo-_id_0.0", lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0002" } is not on a shard with the right tag: a
 m30999| Fri Feb 22 11:22:51.020 [Balancer] shard0001 has more chunks me:8 best: shard0000:7
 m30999| Fri Feb 22 11:22:51.020 [Balancer] shard0002 doesn't have right tag
 m30999| Fri Feb 22 11:22:51.020 [Balancer] going to move to: shard0000
 m30999| Fri Feb 22 11:22:51.020 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 3|0||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000
 m30002| Fri Feb 22 11:22:51.021 [conn11] warning: secondaryThrottle selected but no replication
 m30002| Fri Feb 22 11:22:51.021 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 11:22:51.021 [initandlisten] connection accepted from 127.0.0.1:58098 #23 (22 connections now open)
 m30002| Fri Feb 22 11:22:51.022 [initandlisten] connection accepted from 127.0.0.1:41207 #23 (22 connections now open)
 m30002| Fri Feb 22 11:22:51.024 [conn11] SyncClusterConnection connecting to [localhost:30000]
 m30002| Fri Feb 22 11:22:51.024 [LockPinger] creating distributed lock ping thread for localhost:30000,localhost:30001,localhost:30002 and process bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548 (sleeping for 30000ms)
 m30000| Fri Feb 22 11:22:51.024 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.024 [conn11] SyncClusterConnection connecting to [localhost:30001]
 m30000| Fri Feb 22 11:22:51.024 [initandlisten] connection accepted from 127.0.0.1:40174 #23 (22 connections now open)
 m30002| Fri Feb 22 11:22:51.024 [conn11] SyncClusterConnection connecting to [localhost:30002]
 m30001| Fri Feb 22 11:22:51.024 [initandlisten] connection accepted from 127.0.0.1:44990 #24 (23 connections now open)
 m30002| Fri Feb 22 11:22:51.024 [initandlisten] connection accepted from 127.0.0.1:62763 #24 (23 connections now open)
 m30000| Fri Feb 22 11:22:51.032 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:51.049 [conn18] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:51.053 [conn24] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.083 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.091 [conn24] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:51.155 [conn23] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:51.181 [conn24] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:51.189 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.215 [conn24] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:51.216 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.245 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.291 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 5127550baaaba61d9eb25102
 m30002| Fri Feb 22 11:22:51.291 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550baaaba61d9eb25103", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532171291), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0002", to: "shard0000" } }
 m30002| Fri Feb 22 11:22:51.291 [conn11] SyncClusterConnection connecting to [localhost:30000]
 m30002| Fri Feb 22 11:22:51.291 [conn11] SyncClusterConnection connecting to [localhost:30001]
 m30000| Fri Feb 22 11:22:51.291 [initandlisten] connection accepted from 127.0.0.1:45942 #24 (23 connections now open)
 m30002| Fri Feb 22 11:22:51.291 [conn11] SyncClusterConnection connecting to [localhost:30002]
 m30001| Fri Feb 22 11:22:51.291 [initandlisten] connection accepted from 127.0.0.1:42360 #25 (24 connections now open)
 m30002| Fri Feb 22 11:22:51.292 [initandlisten] connection accepted from 127.0.0.1:33710 #25 (24 connections now open)
 m30000| Fri Feb 22 11:22:51.292 [conn24] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:51.318 [conn25] CMD fsync: sync:1 lock:0
 m30000| Fri Feb 22 11:22:51.325 [conn19] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.351 [conn25] CMD fsync: sync:1 lock:0
 m30001| Fri Feb 22 11:22:51.351 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.379 [conn18] CMD fsync: sync:1 lock:0
 m30002| Fri Feb 22 11:22:51.427 [conn11] no current chunk manager found for this shard, will initialize
 m30002| Fri Feb 22 11:22:51.428 [conn11] moveChunk request accepted at version 15|0||5127547fd4b973931fc9a229
 m30002| Fri Feb 22 11:22:51.428 [conn11] moveChunk number of documents: 1
 m30000| Fri Feb 22 11:22:51.429 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 1.0 } for collection 
test.foo from localhost:30002 (0 slaves detected) m30000| Fri Feb 22 11:22:51.430 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:22:51.430 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:22:51.430 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:22:51.439 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:22:51.439 [conn11] moveChunk setting version to: 16|0||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:22:51.439 [initandlisten] connection accepted from 127.0.0.1:57636 #25 (24 connections now open) m30000| Fri Feb 22 11:22:51.439 [conn25] Waiting for commit to finish m30000| Fri Feb 22 11:22:51.441 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:22:51.441 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:22:51.441 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550b0cfd6a2130a0ac3e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532171441), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:22:51.441 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.450 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, 
state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:22:51.450 [conn11] moveChunk updating self version to: 16|1||5127547fd4b973931fc9a229 through { _id: 2.0 } -> { _id: 3.0 } for collection 'test.foo' m30002| Fri Feb 22 11:22:51.450 [conn11] SyncClusterConnection connecting to [localhost:30000] m30002| Fri Feb 22 11:22:51.450 [conn11] SyncClusterConnection connecting to [localhost:30001] m30000| Fri Feb 22 11:22:51.450 [initandlisten] connection accepted from 127.0.0.1:65387 #26 (25 connections now open) m30002| Fri Feb 22 11:22:51.450 [conn11] SyncClusterConnection connecting to [localhost:30002] m30001| Fri Feb 22 11:22:51.450 [initandlisten] connection accepted from 127.0.0.1:53938 #26 (25 connections now open) m30002| Fri Feb 22 11:22:51.450 [initandlisten] connection accepted from 127.0.0.1:37179 #26 (25 connections now open) m30000| Fri Feb 22 11:22:51.451 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.471 [conn12] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.477 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.512 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.524 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.598 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550baaaba61d9eb25104", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532171598), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:22:51.598 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:22:51.624 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.654 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.734 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:22:51.734 [conn11] 
MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:22:51.734 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:22:51.734 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:22:51.735 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:22:51.735 [cleanupOldData-5127550baaaba61d9eb25105] (start) waiting to cleanup test.foo from { _id: 0.0 } -> { _id: 1.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:22:51.735 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.755 [cleanupOldData-5127550baaaba61d9eb25105] waiting to remove documents for test.foo from { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:22:51.755 [cleanupOldData-5127550baaaba61d9eb25105] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 1.0 } m30002| Fri Feb 22 11:22:51.755 [cleanupOldData-5127550baaaba61d9eb25105] moveChunk deleted 1 documents for test.foo from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:22:51.761 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:22:51.801 [conn24] CMD fsync: sync:1 lock:0 { "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 } m30002| Fri Feb 22 11:22:51.871 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:22:51.871 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:51-5127550baaaba61d9eb25106", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532171871), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 407, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } }
m30000| Fri Feb 22 11:22:51.871 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:51.897 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:51.927 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:52.008 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:42 r:72 w:13 reslen:37 987ms
m30999| Fri Feb 22 11:22:52.008 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:22:52.009 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 38 version: 16|1||5127547fd4b973931fc9a229 based on: 15|3||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:22:52.009 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:22:52.009 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:52.044 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:52.084 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:52.144 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 }
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 }
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 }
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 }
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 }
m30999| Fri Feb 22 11:22:57.145 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:22:57.146 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:22:57.146 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:22:57 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51275511d4b973931fc9a23a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127550ad4b973931fc9a239" } }
m30000| Fri Feb 22 11:22:57.146 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:57.180 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.221 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:57.285 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:57.319 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.359 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:57.421 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275511d4b973931fc9a23a
m30999| Fri Feb 22 11:22:57.421 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:22:57.421 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:22:57.421 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:22:57.423 [Balancer] chunk { _id: "test.foo-_id_2.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0002" } is not on a shard with the right tag: a
m30999| Fri Feb 22 11:22:57.423 [Balancer] shard0001 has more chunks me:8 best: shard0000:8
m30999| Fri Feb 22 11:22:57.423 [Balancer] shard0002 doesn't have right tag
m30999| Fri Feb 22 11:22:57.423 [Balancer] going to move to: shard0000
m30999| Fri Feb 22 11:22:57.423 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 16|1||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000
m30002| Fri Feb 22 11:22:57.423 [conn11] warning: secondaryThrottle selected but no replication
m30002| Fri Feb 22 11:22:57.424 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:22:57.424 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:57.449 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.490 [conn24] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:57.558 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:57.583 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.624 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.694 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275511aaaba61d9eb25107
m30002| Fri Feb 22 11:22:57.694 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:57-51275511aaaba61d9eb25108", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532177694), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0002", to: "shard0000" } }
m30000| Fri Feb 22 11:22:57.694 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:57.720 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.750 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.832 [conn11] moveChunk request accepted at version 16|1||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:22:57.832 [conn11] moveChunk number of documents: 1
m30000| Fri Feb 22 11:22:57.832 [migrateThread] starting receiving-end of migration of chunk { _id: 2.0 } -> { _id: 3.0 } for collection test.foo from localhost:30002 (0 slaves detected)
m30000| Fri Feb 22 11:22:57.833 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:22:57.833 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30000| Fri Feb 22 11:22:57.834 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30002| Fri Feb 22 11:22:57.843 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:22:57.843 [conn11] moveChunk setting version to: 17|0||5127547fd4b973931fc9a229
m30000| Fri Feb 22 11:22:57.843 [conn25] Waiting for commit to finish
m30000| Fri Feb 22 11:22:57.844 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30000| Fri Feb 22 11:22:57.844 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2.0 } -> { _id: 3.0 }
m30000| Fri Feb 22 11:22:57.844 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:57-512755110cfd6a2130a0ac3f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532177844), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:22:57.844 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.853 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30002| Fri Feb 22 11:22:57.853 [conn11] moveChunk updating self version to: 17|1||5127547fd4b973931fc9a229 through { _id: 4.0 } -> { _id: 5.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:22:57.853 [conn26] CMD fsync: sync:1 lock:0
{ "shard0002" : 6, "shard0000" : 8, "shard0001" : 8 }
m30001| Fri Feb 22 11:22:57.893 [conn26] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:57.893 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.935 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:57.944 [conn26] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.036 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:58-51275512aaaba61d9eb25109", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532178036), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0002", to: "shard0000" } }
m30000| Fri Feb 22 11:22:58.036 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:58.064 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.096 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.172 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30002| Fri Feb 22 11:22:58.172 [conn11] MigrateFromStatus::done Global lock acquired
m30002| Fri Feb 22 11:22:58.172 [conn11] forking for cleanup of chunk data
m30002| Fri Feb 22 11:22:58.173 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30002| Fri Feb 22 11:22:58.173 [conn11] MigrateFromStatus::done Global lock acquired
m30002| Fri Feb 22 11:22:58.173 [cleanupOldData-51275512aaaba61d9eb2510a] (start) waiting to cleanup test.foo from { _id: 2.0 } -> { _id: 3.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:22:58.173 [conn23] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.193 [cleanupOldData-51275512aaaba61d9eb2510a] waiting to remove documents for test.foo from { _id: 2.0 } -> { _id: 3.0 }
m30002| Fri Feb 22 11:22:58.193 [cleanupOldData-51275512aaaba61d9eb2510a] moveChunk starting delete for: test.foo from { _id: 2.0 } -> { _id: 3.0 }
m30002| Fri Feb 22 11:22:58.193 [cleanupOldData-51275512aaaba61d9eb2510a] moveChunk deleted 1 documents for test.foo from { _id: 2.0 } -> { _id: 3.0 }
m30001| Fri Feb 22 11:22:58.199 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.239 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.309 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked.
m30002| Fri Feb 22 11:22:58.309 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:22:58-51275512aaaba61d9eb2510b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532178309), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } }
m30000| Fri Feb 22 11:22:58.309 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:58.335 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.365 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.446 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:139 w:22 reslen:37 1022ms
m30999| Fri Feb 22 11:22:58.446 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:22:58.447 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 17|1||5127547fd4b973931fc9a229 based on: 16|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:22:58.447 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:22:58.447 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:58.481 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:22:58.522 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:22:58.616 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 }
m30000| Fri Feb 22 11:22:59.847 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:22:59.873 [conn7] CMD fsync: sync:1 lock:0
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 }
m30002| Fri Feb 22 11:22:59.908 [conn7] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:22:59.979 [conn7] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:00.004 [conn7] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:00.040 [conn7] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:23:00.115 [LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:22:59 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 }
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 }
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 }
m30000| Fri Feb 22 11:23:03.489 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:03.518 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:03.548 [conn12] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:23:03.617 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:23:03.617 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:23:03.618 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:23:03 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51275517d4b973931fc9a23b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51275511d4b973931fc9a23a" } }
m30000| Fri Feb 22 11:23:03.618 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:23:03.628 [conn12] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:03.647 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:03.665 [conn12] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:03.685 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:03.698 [conn12] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:23:03.764 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:03.794 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:03.827 [conn5] CMD fsync: sync:1 lock:0
{ "shard0002" : 5, "shard0000" : 9, "shard0001" : 8 }
m30999| Fri Feb 22 11:23:03.901 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275517d4b973931fc9a23b
m30999| Fri Feb 22 11:23:03.901 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:23:03.901 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:23:03.901 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:23:03.903 [Balancer] chunk { _id: "test.foo-_id_4.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 4.0 }, max: { _id: 5.0 }, shard: "shard0002" } is not on a shard with the right tag: a
m30999| Fri Feb 22 11:23:03.903 [Balancer] shard0002 doesn't have right tag
m30999| Fri Feb 22 11:23:03.903 [Balancer] going to move to: shard0001
m30999| Fri Feb 22 11:23:03.903 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 17|1||000000000000000000000000min: { _id: 4.0 }max: { _id: 5.0 }) shard0002:localhost:30002 -> shard0001:localhost:30001
m30002| Fri Feb 22 11:23:03.903 [conn11] warning: secondaryThrottle selected but no replication
m30002| Fri Feb 22 11:23:03.903 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:23:03.903 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:03.929 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:03.964 [conn24] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:23:04.037 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.063 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.098 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.174 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275517aaaba61d9eb2510c
m30002| Fri Feb 22 11:23:04.174 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-51275518aaaba61d9eb2510d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532184174), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0002", to: "shard0001" } }
m30000| Fri Feb 22 11:23:04.174 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.199 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.228 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.311 [conn11] moveChunk request accepted at version 17|1||5127547fd4b973931fc9a229
m30002| Fri Feb 22 11:23:04.311 [conn11] moveChunk number of documents: 1
m30001| Fri Feb 22 11:23:04.311 [migrateThread] starting receiving-end of migration of chunk { _id: 4.0 } -> { _id: 5.0 } for collection test.foo from localhost:30002 (0 slaves detected)
m30001| Fri Feb 22 11:23:04.312 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 11:23:04.312 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30001| Fri Feb 22 11:23:04.313 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30002| Fri Feb 22 11:23:04.322 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:23:04.322 [conn11] moveChunk setting version to: 18|0||5127547fd4b973931fc9a229
m30001| Fri Feb 22 11:23:04.322 [initandlisten] connection accepted from 127.0.0.1:49673 #27 (26 connections now open)
m30001| Fri Feb 22 11:23:04.322 [conn27] Waiting for commit to finish
m30001| Fri Feb 22 11:23:04.323 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30001| Fri Feb 22 11:23:04.323 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4.0 } -> { _id: 5.0 }
m30001| Fri Feb 22 11:23:04.323 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-5127551878e37a7f0861ebaa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532184323), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:23:04.323 [conn16] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.332 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 4.0 }, max: { _id: 5.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 }
m30002| Fri Feb 22 11:23:04.332 [conn11] moveChunk updating self version to: 18|1||5127547fd4b973931fc9a229 through { _id: 6.0 } -> { _id: 7.0 } for collection 'test.foo'
m30000| Fri Feb 22 11:23:04.332 [conn26] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.348 [conn16] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.353 [conn26] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.393 [conn26] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.403 [conn15] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.481 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-51275518aaaba61d9eb2510e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532184481), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, from: "shard0002", to: "shard0001" } }
m30000| Fri Feb 22 11:23:04.481 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.507 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.538 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done Global lock acquired
m30002| Fri Feb 22 11:23:04.617 [conn11] forking for cleanup of chunk data
m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30002| Fri Feb 22 11:23:04.617 [conn11] MigrateFromStatus::done Global lock acquired
m30002| Fri Feb 22 11:23:04.618 [cleanupOldData-51275518aaaba61d9eb2510f] (start) waiting to cleanup test.foo from { _id: 4.0 } -> { _id: 5.0 }, # cursors remaining: 0
m30000| Fri Feb 22 11:23:04.618 [conn23] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.638 [cleanupOldData-51275518aaaba61d9eb2510f] waiting to remove documents for test.foo from { _id: 4.0 } -> { _id: 5.0 }
m30002| Fri Feb 22 11:23:04.638 [cleanupOldData-51275518aaaba61d9eb2510f] moveChunk starting delete for: test.foo from { _id: 4.0 } -> { _id: 5.0 }
m30002| Fri Feb 22 11:23:04.638 [cleanupOldData-51275518aaaba61d9eb2510f] moveChunk deleted 1 documents for test.foo from { _id: 4.0 } -> { _id: 5.0 }
m30001| Fri Feb 22 11:23:04.643 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.679 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.754 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked.
m30002| Fri Feb 22 11:23:04.754 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:04-51275518aaaba61d9eb25110", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532184754), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 4.0 }, max: { _id: 5.0 }, step1 of 6: 0, step2 of 6: 407, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } }
m30000| Fri Feb 22 11:23:04.754 [conn19] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.782 [conn18] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.817 [conn18] CMD fsync: sync:1 lock:0
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
m30002| Fri Feb 22 11:23:04.890 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 4.0 }, max: { _id: 5.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:125 w:14 reslen:37 987ms
m30999| Fri Feb 22 11:23:04.890 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:23:04.892 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 18|1||5127547fd4b973931fc9a229 based on: 17|1||5127547fd4b973931fc9a229
m30999| Fri Feb 22 11:23:04.892 [Balancer] *** end of balancing round
m30000| Fri Feb 22 11:23:04.892 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:04.922 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:04.958 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:23:05.027 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked.
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }
m30999| Fri Feb 22 11:23:10.027 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 11:23:10.028 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 )
m30999| Fri Feb 22 11:23:10.028 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:23:10 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127551ed4b973931fc9a23c" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51275517d4b973931fc9a23b" } }
m30000| Fri Feb 22 11:23:10.028 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:10.062 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:10.103 [conn5] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:23:10.210 [conn5] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:10.244 [conn5] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:10.281 [conn5] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:23:10.347 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127551ed4b973931fc9a23c
m30999| Fri Feb 22 11:23:10.347 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:23:10.347 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:23:10.347 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:23:10.349 [Balancer] chunk { _id: "test.foo-_id_6.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 6.0 }, max: { _id: 7.0 }, shard: "shard0002" } is not on a shard with the right tag: a
m30999| Fri Feb 22 11:23:10.349 [Balancer] shard0001 has more chunks me:9 best: shard0000:9
m30999| Fri Feb 22 11:23:10.349 [Balancer] shard0002 doesn't have right tag
m30999| Fri Feb 22 11:23:10.349 [Balancer] going to move to: shard0000
m30999| Fri Feb 22 11:23:10.349 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 18|1||000000000000000000000000min: { _id: 6.0 }max: { _id: 7.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000
m30002| Fri Feb 22 11:23:10.349 [conn11] warning: secondaryThrottle selected but no replication
m30002| Fri Feb 22 11:23:10.349 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 11:23:10.350 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:10.375 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:10.416 [conn24] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 11:23:10.518 [conn23] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:23:10.545 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:10.583 [conn24] CMD fsync: sync:1 lock:0
m30002| Fri Feb 22 11:23:10.654 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 5127551eaaaba61d9eb25111
m30002| Fri Feb 22 11:23:10.654 [conn11] about to log metadata event: { _id:
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:10-5127551eaaaba61d9eb25112", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532190654), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:23:10.654 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:10.680 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:10.710 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:10.859 [conn18] command admin.$cmd command: { getlasterror: 1, fsync: 1 } ntoreturn:1 keyUpdates:0 reslen:79 102ms m30002| Fri Feb 22 11:23:10.860 [conn11] moveChunk request accepted at version 18|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:23:10.860 [conn11] moveChunk number of documents: 1 m30000| Fri Feb 22 11:23:10.860 [migrateThread] starting receiving-end of migration of chunk { _id: 6.0 } -> { _id: 7.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30000| Fri Feb 22 11:23:10.861 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:23:10.861 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6.0 } -> { _id: 7.0 } m30000| Fri Feb 22 11:23:10.861 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 } m30002| Fri Feb 22 11:23:10.870 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:23:10.870 [conn11] moveChunk setting version to: 19|0||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:23:10.870 [conn25] Waiting for commit to finish m30000| Fri Feb 22 11:23:10.872 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 
6.0 } -> { _id: 7.0 } m30000| Fri Feb 22 11:23:10.872 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6.0 } -> { _id: 7.0 } m30000| Fri Feb 22 11:23:10.872 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:10-5127551e0cfd6a2130a0ac40", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532190872), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:23:10.872 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:10.880 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 6.0 }, max: { _id: 7.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:23:10.881 [conn11] moveChunk updating self version to: 19|1||5127547fd4b973931fc9a229 through { _id: 8.0 } -> { _id: 9.0 } for collection 'test.foo' m30000| Fri Feb 22 11:23:10.881 [conn26] CMD fsync: sync:1 lock:0 { "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 } m30001| Fri Feb 22 11:23:10.917 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:10.917 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:10.957 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:10.962 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.063 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:11-5127551faaaba61d9eb25113", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532191063), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:23:11.063 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:11.089 [conn18] CMD fsync: 
sync:1 lock:0 m30002| Fri Feb 22 11:23:11.120 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:11.200 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:11.200 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:11.200 [cleanupOldData-5127551faaaba61d9eb25114] (start) waiting to cleanup test.foo from { _id: 6.0 } -> { _id: 7.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:23:11.200 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.220 [cleanupOldData-5127551faaaba61d9eb25114] waiting to remove documents for test.foo from { _id: 6.0 } -> { _id: 7.0 } m30002| Fri Feb 22 11:23:11.220 [cleanupOldData-5127551faaaba61d9eb25114] moveChunk starting delete for: test.foo from { _id: 6.0 } -> { _id: 7.0 } m30002| Fri Feb 22 11:23:11.220 [cleanupOldData-5127551faaaba61d9eb25114] moveChunk deleted 1 documents for test.foo from { _id: 6.0 } -> { _id: 7.0 } m30001| Fri Feb 22 11:23:11.226 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.261 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.370 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:23:11.370 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:11-5127551faaaba61d9eb25115", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532191370), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 6: 0, step2 of 6: 510, step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 } } m30000| Fri Feb 22 11:23:11.371 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:11.396 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.426 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.507 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 6.0 }, max: { _id: 7.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:18 r:109 w:14 reslen:37 1157ms m30999| Fri Feb 22 11:23:11.507 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:23:11.508 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 19|1||5127547fd4b973931fc9a229 based on: 18|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:23:11.508 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:23:11.508 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:11.537 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:11.573 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:11.677 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
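The `moveChunk.from` metadata events above carry per-step timings (`step1 of 6` through `step6 of 6`, in milliseconds). As an editor's illustration only (this helper is not part of the test suite), a sketch that extracts and sums those timings from such a log entry:

```python
import re

def step_timings(details: str) -> dict:
    """Extract 'stepN of M: T' timings (milliseconds) from a moveChunk log entry."""
    return {int(n): int(t) for n, t in re.findall(r"step(\d+) of \d+: (\d+)", details)}

# details string copied from the moveChunk.from event in the log above
details = ('{ min: { _id: 6.0 }, max: { _id: 7.0 }, step1 of 6: 0, step2 of 6: 510, '
           'step3 of 6: 0, step4 of 6: 10, step5 of 6: 329, step6 of 6: 0 }')
steps = step_timings(details)
total = sum(steps.values())  # 849 ms across the six steps
```

The six steps sum to 849 ms, while the overall `moveChunk` command line reports 1157 ms; the remainder presumably falls outside the timed steps (for example, acquiring the distributed lock).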
{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 } { "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 } { "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 } { "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 } { "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 } m30999| Fri Feb 22 11:23:16.678 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:23:16.679 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:23:16.679 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:23:16 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275524d4b973931fc9a23d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127551ed4b973931fc9a23c" } } m30000| Fri Feb 22 11:23:16.679 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:16.710 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:16.747 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:16.826 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:16.859 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:16.897 [conn5] CMD fsync: sync:1 lock:0 { "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 } m30999| Fri Feb 22 11:23:16.963 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275524d4b973931fc9a23d m30999| Fri Feb 22 11:23:16.963 [Balancer] *** start balancing round m30999| Fri Feb 22 11:23:16.963 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:23:16.963 [Balancer] secondaryThrottle: 1 m30999| 
Fri Feb 22 11:23:16.965 [Balancer] chunk { _id: "test.foo-_id_8.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 8.0 }, max: { _id: 9.0 }, shard: "shard0002" } is not on a shard with the right tag: a m30999| Fri Feb 22 11:23:16.965 [Balancer] shard0002 doesn't have right tag m30999| Fri Feb 22 11:23:16.965 [Balancer] going to move to: shard0001 m30999| Fri Feb 22 11:23:16.965 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 19|1||000000000000000000000000min: { _id: 8.0 }max: { _id: 9.0 }) shard0002:localhost:30002 -> shard0001:localhost:30001 m30002| Fri Feb 22 11:23:16.965 [conn11] warning: secondaryThrottle selected but no replication m30002| Fri Feb 22 11:23:16.965 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:23:16.965 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:16.992 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.029 [conn24] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:17.099 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.128 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.166 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.236 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275524aaaba61d9eb25116 m30002| Fri Feb 22 11:23:17.236 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-51275525aaaba61d9eb25117", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532197236), what: "moveChunk.start", ns: 
"test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:17.236 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.264 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.297 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.373 [conn11] moveChunk request accepted at version 19|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:23:17.373 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:23:17.374 [migrateThread] starting receiving-end of migration of chunk { _id: 8.0 } -> { _id: 9.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30001| Fri Feb 22 11:23:17.375 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:23:17.375 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 } m30001| Fri Feb 22 11:23:17.375 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 } m30002| Fri Feb 22 11:23:17.384 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:23:17.384 [conn11] moveChunk setting version to: 20|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:23:17.384 [conn27] Waiting for commit to finish m30001| Fri Feb 22 11:23:17.385 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8.0 } -> { _id: 9.0 } m30001| Fri Feb 22 11:23:17.385 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8.0 } -> { _id: 9.0 } m30001| Fri Feb 22 11:23:17.385 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-5127552578e37a7f0861ebab", server: "bs-smartos-x86-64-1.10gen.cc", 
clientAddr: ":27017", time: new Date(1361532197385), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:23:17.385 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.394 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 8.0 }, max: { _id: 9.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:23:17.394 [conn11] moveChunk updating self version to: 20|1||5127547fd4b973931fc9a229 through { _id: 10.0 } -> { _id: 11.0 } for collection 'test.foo' m30000| Fri Feb 22 11:23:17.394 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.412 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.425 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.450 [conn15] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.473 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.543 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-51275525aaaba61d9eb25118", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532197543), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:17.543 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.569 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.601 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:17.680 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:23:17.680 
[conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:17.680 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:17.680 [cleanupOldData-51275525aaaba61d9eb25119] (start) waiting to cleanup test.foo from { _id: 8.0 } -> { _id: 9.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:23:17.680 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.700 [cleanupOldData-51275525aaaba61d9eb25119] waiting to remove documents for test.foo from { _id: 8.0 } -> { _id: 9.0 } m30002| Fri Feb 22 11:23:17.700 [cleanupOldData-51275525aaaba61d9eb25119] moveChunk starting delete for: test.foo from { _id: 8.0 } -> { _id: 9.0 } m30002| Fri Feb 22 11:23:17.700 [cleanupOldData-51275525aaaba61d9eb25119] moveChunk deleted 1 documents for test.foo from { _id: 8.0 } -> { _id: 9.0 } m30001| Fri Feb 22 11:23:17.706 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.743 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.816 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:23:17.816 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:17-51275525aaaba61d9eb2511a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532197816), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 8.0 }, max: { _id: 9.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 295, step6 of 6: 0 } } m30000| Fri Feb 22 11:23:17.816 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.842 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:17.873 [conn18] CMD fsync: sync:1 lock:0 { "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 } m30002| Fri Feb 22 11:23:17.953 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 8.0 }, max: { _id: 9.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:30 r:106 w:8 reslen:37 987ms m30999| Fri Feb 22 11:23:17.953 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:23:17.954 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 20|1||5127547fd4b973931fc9a229 based on: 19|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:23:17.954 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:23:17.954 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:17.989 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:18.026 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:18.090 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
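The bare `{ "shard0002" : …, "shard0000" : …, "shard0001" : … }` lines interleaved through the output are the test's periodic chunk-distribution printouts. Since `moveChunk` only relocates chunks between shards, the total across shards should stay constant from one snapshot to the next. A small editor's sketch (hypothetical check, not in the suite) that parses a few of these snapshots and verifies the invariant:

```python
import json

# Distribution snapshots copied from the log above, in the order they appear.
lines = [
    '{ "shard0002" : 4, "shard0000" : 9, "shard0001" : 9 }',
    '{ "shard0002" : 3, "shard0000" : 10, "shard0001" : 9 }',
    '{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 }',
]

# Each printout happens to be valid JSON, so json.loads is enough to parse it.
totals = [sum(json.loads(line).values()) for line in lines]
assert len(set(totals)) == 1  # every snapshot sums to the same chunk count (22 here)
```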
{ "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 } { "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 } m30000| Fri Feb 22 11:23:20.541 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:20.567 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:20.602 [conn15] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:20.678 [conn16] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:20.705 [conn16] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:20.740 [conn15] CMD fsync: sync:1 lock:0 { "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 } m30000| Fri Feb 22 11:23:21.461 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:21.487 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:21.517 [conn18] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:21.597 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:21.622 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:21.653 [conn18] CMD fsync: sync:1 lock:0 { "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 } { "shard0002" : 2, "shard0000" : 10, "shard0001" : 10 } m30999| Fri Feb 22 11:23:23.090 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:23:23.091 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:23:23.091 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:23:23 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127552bd4b973931fc9a23e" } } m30000| Fri Feb 22 11:23:23.091 [conn5] CMD fsync: sync:1 lock:0 m30999| { "_id" : "balancer", m30999| "state" : 0, 
m30999| "ts" : { "$oid" : "51275524d4b973931fc9a23d" } } m30001| Fri Feb 22 11:23:23.124 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.164 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:23.232 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.265 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.304 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:23.368 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 5127552bd4b973931fc9a23e m30999| Fri Feb 22 11:23:23.368 [Balancer] *** start balancing round m30999| Fri Feb 22 11:23:23.368 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:23:23.368 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:23:23.370 [Balancer] chunk { _id: "test.foo-_id_10.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 10.0 }, max: { _id: 11.0 }, shard: "shard0002" } is not on a shard with the right tag: a m30999| Fri Feb 22 11:23:23.370 [Balancer] shard0001 has more chunks me:10 best: shard0000:10 m30999| Fri Feb 22 11:23:23.370 [Balancer] shard0002 doesn't have right tag m30999| Fri Feb 22 11:23:23.370 [Balancer] going to move to: shard0000 m30999| Fri Feb 22 11:23:23.370 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 20|1||000000000000000000000000min: { _id: 10.0 }max: { _id: 11.0 }) shard0002:localhost:30002 -> shard0000:localhost:30000 m30002| Fri Feb 22 11:23:23.371 [conn11] warning: secondaryThrottle selected but no replication m30002| Fri Feb 22 11:23:23.371 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } 
m30000| Fri Feb 22 11:23:23.371 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.396 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.436 [conn24] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:23.505 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.530 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.570 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.641 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 5127552baaaba61d9eb2511b m30002| Fri Feb 22 11:23:23.642 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:23-5127552baaaba61d9eb2511c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532203641), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:23:23.642 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.667 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.697 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.779 [conn11] moveChunk request accepted at version 20|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:23:23.779 [conn11] moveChunk number of documents: 1 m30000| Fri Feb 22 11:23:23.779 [migrateThread] starting receiving-end of migration of chunk { _id: 10.0 } -> { _id: 11.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30000| Fri Feb 22 11:23:23.780 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:23:23.780 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30000| Fri Feb 22 11:23:23.781 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30002| Fri Feb 22 11:23:23.789 [conn11] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30002", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:23:23.789 [conn11] moveChunk setting version to: 21|0||5127547fd4b973931fc9a229 m30000| Fri Feb 22 11:23:23.789 [conn25] Waiting for commit to finish m30000| Fri Feb 22 11:23:23.791 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30000| Fri Feb 22 11:23:23.791 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10.0 } -> { _id: 11.0 } m30000| Fri Feb 22 11:23:23.791 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:23-5127552b0cfd6a2130a0ac41", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532203791), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:23:23.791 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.800 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 10.0 }, max: { _id: 11.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:23:23.800 [conn11] moveChunk updating self version to: 21|1||5127547fd4b973931fc9a229 through { _id: 12.0 } -> { _id: 13.0 } for collection 'test.foo' m30000| Fri Feb 22 11:23:23.800 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.837 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.837 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.876 [conn12] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:23.882 [conn26] CMD fsync: sync:1 lock:0 { "shard0002" : 1, "shard0000" : 11, "shard0001" 
: 10 } m30002| Fri Feb 22 11:23:23.948 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:23-5127552baaaba61d9eb2511d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532203948), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, from: "shard0002", to: "shard0000" } } m30000| Fri Feb 22 11:23:23.949 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:23.974 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.005 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.085 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:24.085 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:24.086 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:23:24.086 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:24.086 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:24.086 [cleanupOldData-5127552caaaba61d9eb2511e] (start) waiting to cleanup test.foo from { _id: 10.0 } -> { _id: 11.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:23:24.086 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.106 [cleanupOldData-5127552caaaba61d9eb2511e] waiting to remove documents for test.foo from { _id: 10.0 } -> { _id: 11.0 } m30002| Fri Feb 22 11:23:24.106 [cleanupOldData-5127552caaaba61d9eb2511e] moveChunk starting delete for: test.foo from { _id: 10.0 } -> { _id: 11.0 } m30002| Fri Feb 22 11:23:24.106 [cleanupOldData-5127552caaaba61d9eb2511e] moveChunk deleted 1 documents for test.foo from { _id: 10.0 } -> { _id: 11.0 } m30001| Fri Feb 22 11:23:24.114 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.150 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.222 [conn11] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. m30002| Fri Feb 22 11:23:24.222 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:24-5127552caaaba61d9eb2511f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532204222), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 10.0 }, max: { _id: 11.0 }, step1 of 6: 0, step2 of 6: 408, step3 of 6: 0, step4 of 6: 10, step5 of 6: 296, step6 of 6: 0 } } m30000| Fri Feb 22 11:23:24.222 [conn19] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:24.248 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.278 [conn18] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.359 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30000", fromShard: "shard0002", toShard: "shard0000", min: { _id: 10.0 }, max: { _id: 11.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:117 w:11 reslen:37 988ms m30999| Fri Feb 22 11:23:24.359 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:23:24.362 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 43 version: 21|1||5127547fd4b973931fc9a229 based on: 20|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:23:24.362 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:23:24.362 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:24.392 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:24.425 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:24.495 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
{ "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } m30999| Fri Feb 22 11:23:29.496 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:23:29.496 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000,localhost:30001,localhost:30002 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838 ) m30999| Fri Feb 22 11:23:29.497 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:23:29 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275531d4b973931fc9a23f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127552bd4b973931fc9a23e" } } m30000| Fri Feb 22 11:23:29.497 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.527 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:29.562 [conn5] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:29.635 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.667 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:29.706 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:29.772 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' acquired, ts : 51275531d4b973931fc9a23f m30999| Fri Feb 22 11:23:29.772 [Balancer] *** start balancing round m30999| Fri Feb 22 11:23:29.772 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:23:29.772 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:23:29.774 [Balancer] chunk { _id: 
"test.foo-_id_12.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('5127547fd4b973931fc9a229'), ns: "test.foo", min: { _id: 12.0 }, max: { _id: 13.0 }, shard: "shard0002" } is not on a shard with the right tag: a m30999| Fri Feb 22 11:23:29.774 [Balancer] shard0002 doesn't have right tag m30999| Fri Feb 22 11:23:29.774 [Balancer] going to move to: shard0001 m30999| Fri Feb 22 11:23:29.774 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0002:localhost:30002lastmod: 21|1||000000000000000000000000min: { _id: 12.0 }max: { _id: 13.0 }) shard0002:localhost:30002 -> shard0001:localhost:30001 m30002| Fri Feb 22 11:23:29.774 [conn11] warning: secondaryThrottle selected but no replication m30002| Fri Feb 22 11:23:29.774 [conn11] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 11:23:29.774 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.797 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:29.835 [conn24] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:29.908 [conn23] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:29.931 [conn24] CMD fsync: sync:1 lock:0 { "shard0002" : 1, "shard0000" : 11, "shard0001" : 10 } m30002| Fri Feb 22 11:23:29.969 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.045 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' acquired, ts : 51275531aaaba61d9eb25120 m30002| Fri Feb 22 11:23:30.045 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-51275532aaaba61d9eb25121", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532210045), what: 
"moveChunk.start", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:30.045 [conn24] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.068 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.106 [conn25] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:30.115 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.144 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.172 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.173 [conn11] moveChunk request accepted at version 21|1||5127547fd4b973931fc9a229 m30002| Fri Feb 22 11:23:30.173 [conn11] moveChunk number of documents: 1 m30001| Fri Feb 22 11:23:30.174 [migrateThread] starting receiving-end of migration of chunk { _id: 12.0 } -> { _id: 13.0 } for collection test.foo from localhost:30002 (0 slaves detected) m30001| Fri Feb 22 11:23:30.175 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 11:23:30.175 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.175 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12.0 } -> { _id: 13.0 } m30002| Fri Feb 22 11:23:30.184 [conn11] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:23:30.184 [conn11] moveChunk setting version to: 22|0||5127547fd4b973931fc9a229 m30001| Fri Feb 22 11:23:30.184 [conn27] Waiting for commit to finish m30001| Fri Feb 22 11:23:30.186 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.186 [migrateThread] migrate commit flushed to journal for 'test.foo' 
{ _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.186 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-5127553278e37a7f0861ebac", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532210186), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30000| Fri Feb 22 11:23:30.186 [conn22] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.194 [conn11] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30002", min: { _id: 12.0 }, max: { _id: 13.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 29, catchup: 0, steady: 0 }, ok: 1.0 } m30002| Fri Feb 22 11:23:30.194 [conn11] moveChunk moved last chunk out for collection 'test.foo' m30000| Fri Feb 22 11:23:30.194 [conn26] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.209 [conn22] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.224 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.247 [conn22] CMD fsync: sync:1 lock:0 m30000| Fri Feb 22 11:23:30.248 [conn6] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.284 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.287 [conn26] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.318 [conn6] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.354 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-51275532aaaba61d9eb25122", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532210354), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, from: "shard0002", to: "shard0001" } } m30000| Fri Feb 22 11:23:30.354 [conn24] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.380 [conn25] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:30.388 
[LockPinger] cluster localhost:30000,localhost:30001,localhost:30002 pinged successfully at Fri Feb 22 11:23:30 2013 by distributed lock pinger 'localhost:30000,localhost:30001,localhost:30002/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838', sleeping for 30000ms m30002| Fri Feb 22 11:23:30.418 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:30.490 [conn11] forking for cleanup of chunk data m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done About to acquire global write lock to exit critical section m30002| Fri Feb 22 11:23:30.490 [conn11] MigrateFromStatus::done Global lock acquired m30002| Fri Feb 22 11:23:30.490 [cleanupOldData-51275532aaaba61d9eb25123] (start) waiting to cleanup test.foo from { _id: 12.0 } -> { _id: 13.0 }, # cursors remaining: 0 m30000| Fri Feb 22 11:23:30.490 [conn23] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.510 [cleanupOldData-51275532aaaba61d9eb25123] waiting to remove documents for test.foo from { _id: 12.0 } -> { _id: 13.0 } m30002| Fri Feb 22 11:23:30.510 [cleanupOldData-51275532aaaba61d9eb25123] moveChunk starting delete for: test.foo from { _id: 12.0 } -> { _id: 13.0 } m30002| Fri Feb 22 11:23:30.511 [cleanupOldData-51275532aaaba61d9eb25123] moveChunk deleted 1 documents for test.foo from { _id: 12.0 } -> { _id: 13.0 } m30001| Fri Feb 22 11:23:30.516 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.554 [conn24] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.627 [conn11] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30002:1361532171:4548' unlocked. 
m30002| Fri Feb 22 11:23:30.627 [conn11] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:23:30-51275532aaaba61d9eb25124", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:44320", time: new Date(1361532210627), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 12.0 }, max: { _id: 13.0 }, step1 of 6: 0, step2 of 6: 399, step3 of 6: 0, step4 of 6: 10, step5 of 6: 306, step6 of 6: 0 } } m30000| Fri Feb 22 11:23:30.627 [conn24] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.653 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.694 [conn25] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.763 [conn11] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30002", to: "localhost:30001", fromShard: "shard0002", toShard: "shard0001", min: { _id: 12.0 }, max: { _id: 13.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12.0", configdb: "localhost:30000,localhost:30001,localhost:30002", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:106 w:6 reslen:37 989ms m30999| Fri Feb 22 11:23:30.763 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:23:30.765 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 44 version: 22|0||5127547fd4b973931fc9a229 based on: 21|1||5127547fd4b973931fc9a229 m30999| Fri Feb 22 11:23:30.765 [Balancer] *** end of balancing round m30000| Fri Feb 22 11:23:30.765 [conn5] CMD fsync: sync:1 lock:0 m30001| Fri Feb 22 11:23:30.799 [conn5] CMD fsync: sync:1 lock:0 m30002| Fri Feb 22 11:23:30.841 [conn5] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:23:30.901 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532028:16838' unlocked. 
{ "shard0002" : 0, "shard0000" : 11, "shard0001" : 11 } --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("5127547dd4b973931fc9a225") } shards: { "_id" : "shard0000", "host" : "localhost:30000", "tags" : [ "a" ] } { "_id" : "shard0001", "host" : "localhost:30001", "tags" : [ "a" ] } { "_id" : "shard0002", "host" : "localhost:30002" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "test", "partitioned" : true, "primary" : "shard0000" } test.foo shard key: { "_id" : 1 } chunks: shard0001 11 shard0000 11 too many chunks to print, use verbose if you want to force print tag: a { "_id" : -1 } -->> { "_id" : 1000 } undefined m30999| Fri Feb 22 11:23:30.956 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30000| Fri Feb 22 11:23:30.957 [conn3] end connection 127.0.0.1:40427 (24 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn3] end connection 127.0.0.1:55562 (25 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn5] end connection 127.0.0.1:60590 (24 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn5] end connection 127.0.0.1:48409 (25 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn3] end connection 127.0.0.1:64894 (24 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn5] end connection 127.0.0.1:52548 (24 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn6] end connection 127.0.0.1:58494 (23 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn6] end connection 127.0.0.1:46581 (24 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn6] end connection 127.0.0.1:52531 (24 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn7] end connection 127.0.0.1:56278 (21 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn7] end connection 127.0.0.1:58330 (22 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn7] end connection 127.0.0.1:61888 (21 
connections now open) m30000| Fri Feb 22 11:23:30.957 [conn9] end connection 127.0.0.1:38220 (20 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn9] end connection 127.0.0.1:57851 (22 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn10] end connection 127.0.0.1:44398 (20 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn9] end connection 127.0.0.1:42564 (20 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn10] end connection 127.0.0.1:50063 (21 connections now open) m30000| Fri Feb 22 11:23:30.957 [conn11] end connection 127.0.0.1:53660 (20 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn10] end connection 127.0.0.1:59349 (20 connections now open) m30001| Fri Feb 22 11:23:30.957 [conn11] end connection 127.0.0.1:42693 (20 connections now open) m30002| Fri Feb 22 11:23:30.957 [conn11] end connection 127.0.0.1:44320 (20 connections now open) Fri Feb 22 11:23:31.956 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 11:23:31.956 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 11:23:31.957 [interruptThread] now exiting m30000| Fri Feb 22 11:23:31.957 dbexit: m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 11:23:31.957 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 11:23:31.957 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 11:23:31.957 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 11:23:31.957 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: lock for final commit... 
m30000| Fri Feb 22 11:23:31.957 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 11:23:31.957 [conn1] end connection 127.0.0.1:51670 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn2] end connection 127.0.0.1:58478 (17 connections now open) m30001| Fri Feb 22 11:23:31.957 [conn15] end connection 127.0.0.1:38369 (18 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn17] end connection 127.0.0.1:45549 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn18] end connection 127.0.0.1:35927 (17 connections now open) m30001| Fri Feb 22 11:23:31.957 [conn19] end connection 127.0.0.1:41492 (17 connections now open) m30001| Fri Feb 22 11:23:31.957 [conn17] end connection 127.0.0.1:40060 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn20] end connection 127.0.0.1:51262 (17 connections now open) m30002| Fri Feb 22 11:23:31.957 [conn16] end connection 127.0.0.1:56139 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn25] end connection 127.0.0.1:57636 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn26] end connection 127.0.0.1:65387 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn19] end connection 127.0.0.1:61265 (17 connections now open) m30002| Fri Feb 22 11:23:31.957 [conn17] end connection 127.0.0.1:33670 (16 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn24] end connection 127.0.0.1:45942 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn14] end connection 127.0.0.1:49538 (17 connections now open) m30000| Fri Feb 22 11:23:31.957 [conn12] end connection 127.0.0.1:40271 (17 connections now open) m30002| Fri Feb 22 11:23:31.957 [conn12] end connection 127.0.0.1:60374 (15 connections now open) m30001| Fri Feb 22 11:23:31.958 [conn12] end connection 127.0.0.1:43078 (15 connections now open) m30002| Fri Feb 22 11:23:31.958 [conn13] end connection 127.0.0.1:43096 (14 connections now open) m30002| Fri Feb 22 11:23:31.958 [conn14] end connection 
127.0.0.1:33814 (14 connections now open) m30001| Fri Feb 22 11:23:31.958 [conn13] end connection 127.0.0.1:49945 (14 connections now open) m30000| Fri Feb 22 11:23:31.958 [conn16] end connection 127.0.0.1:43618 (17 connections now open) m30001| Fri Feb 22 11:23:31.958 [conn14] end connection 127.0.0.1:51906 (14 connections now open) m30000| Fri Feb 22 11:23:31.958 [conn22] end connection 127.0.0.1:46249 (17 connections now open) m30002| Fri Feb 22 11:23:31.958 [conn19] end connection 127.0.0.1:64097 (12 connections now open) m30000| Fri Feb 22 11:23:31.967 [conn13] end connection 127.0.0.1:34264 (4 connections now open) m30000| Fri Feb 22 11:23:31.968 [conn21] end connection 127.0.0.1:46014 (3 connections now open) m30000| Fri Feb 22 11:23:31.968 [conn23] end connection 127.0.0.1:40174 (2 connections now open) m30000| Fri Feb 22 11:23:31.968 [conn15] end connection 127.0.0.1:62544 (2 connections now open) m30000| Fri Feb 22 11:23:31.988 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 11:23:31.990 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 11:23:31.990 [interruptThread] journalCleanup... m30000| Fri Feb 22 11:23:31.990 [interruptThread] removeJournalFiles m30000| Fri Feb 22 11:23:31.990 dbexit: really exiting now Fri Feb 22 11:23:32.956 shell: stopped mongo program on port 30000 m30001| Fri Feb 22 11:23:32.957 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:23:32.957 [interruptThread] now exiting m30001| Fri Feb 22 11:23:32.957 dbexit: m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: going to close listening sockets... 
m30001| Fri Feb 22 11:23:32.957 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 11:23:32.957 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 11:23:32.957 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 11:23:32.957 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:23:32.957 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 11:23:32.957 [conn1] end connection 127.0.0.1:61815 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn2] end connection 127.0.0.1:58147 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn16] end connection 127.0.0.1:62454 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn18] end connection 127.0.0.1:58911 (12 connections now open) m30002| Fri Feb 22 11:23:32.957 [conn15] end connection 127.0.0.1:45677 (11 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn20] end connection 127.0.0.1:48199 (12 connections now open) m30002| Fri Feb 22 11:23:32.957 [conn20] end connection 127.0.0.1:61722 (11 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn21] end connection 127.0.0.1:38642 (12 connections now open) m30001| Fri Feb 22 11:23:32.957 [conn22] end connection 127.0.0.1:42216 (12 connections now open) m30002| Fri Feb 22 11:23:32.958 [conn21] end connection 127.0.0.1:36772 (9 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn24] end connection 127.0.0.1:44990 (12 connections now open) m30002| Fri Feb 22 11:23:32.958 [conn22] end connection 127.0.0.1:50697 (9 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn25] end connection 
127.0.0.1:42360 (12 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn23] end connection 127.0.0.1:58098 (12 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn26] end connection 127.0.0.1:53938 (12 connections now open) m30001| Fri Feb 22 11:23:32.958 [conn27] end connection 127.0.0.1:49673 (12 connections now open) m30001| Fri Feb 22 11:23:32.987 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:23:32.989 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:23:32.989 [interruptThread] journalCleanup... m30001| Fri Feb 22 11:23:32.989 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:23:32.989 dbexit: really exiting now Fri Feb 22 11:23:33.957 shell: stopped mongo program on port 30001 m30002| Fri Feb 22 11:23:33.957 got signal 15 (Terminated), will terminate after current cmd ends m30002| Fri Feb 22 11:23:33.957 [interruptThread] now exiting m30002| Fri Feb 22 11:23:33.957 dbexit: m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: going to close listening sockets... m30002| Fri Feb 22 11:23:33.957 [interruptThread] closing listening socket: 18 m30002| Fri Feb 22 11:23:33.957 [interruptThread] closing listening socket: 19 m30002| Fri Feb 22 11:23:33.957 [interruptThread] closing listening socket: 20 m30002| Fri Feb 22 11:23:33.957 [interruptThread] removing socket file: /tmp/mongodb-30002.sock m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: going to flush diaglog... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: going to close sockets... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: waiting for fs preallocator... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: lock for final commit... m30002| Fri Feb 22 11:23:33.957 [interruptThread] shutdown: final commit... 
m30002| Fri Feb 22 11:23:33.958 [conn1] end connection 127.0.0.1:34448 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn2] end connection 127.0.0.1:47206 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn18] end connection 127.0.0.1:39075 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn23] end connection 127.0.0.1:41207 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn24] end connection 127.0.0.1:62763 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn25] end connection 127.0.0.1:33710 (7 connections now open) m30002| Fri Feb 22 11:23:33.958 [conn26] end connection 127.0.0.1:37179 (7 connections now open) m30002| Fri Feb 22 11:23:33.984 [interruptThread] shutdown: closing all files... m30002| Fri Feb 22 11:23:33.988 [interruptThread] closeAllFiles() finished m30002| Fri Feb 22 11:23:33.988 [interruptThread] journalCleanup... m30002| Fri Feb 22 11:23:33.988 [interruptThread] removeJournalFiles m30002| Fri Feb 22 11:23:33.988 dbexit: really exiting now Fri Feb 22 11:23:34.957 shell: stopped mongo program on port 30002 *** ShardingTest balance_tags1 completed successfully in 187.381 seconds *** Fri Feb 22 11:23:34.985 [conn11] end connection 127.0.0.1:60212 (0 connections now open) 3.1267 minutes Fri Feb 22 11:23:35.007 [initandlisten] connection accepted from 127.0.0.1:54305 #12 (1 connection now open) Fri Feb 22 11:23:35.008 [conn12] end connection 127.0.0.1:54305 (0 connections now open) ******************************************* Test : btreedel.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/btreedel.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/btreedel.js";TestData.testFile = "btreedel.js";TestData.testName = "btreedel";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:23:35 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:23:35.190 [initandlisten] connection accepted from 127.0.0.1:41651 #13 (1 connection now open) null Fri Feb 22 11:23:35.201 [conn13] build index test.foo { _id: 1 } Fri Feb 22 11:23:35.205 [conn13] build index done. scanned 0 total records. 0.003 secs Fri Feb 22 11:23:39.849 [FileAllocator] allocating new datafile /data/db/sconsTests/test.2, filling with zeroes... Fri Feb 22 11:23:39.849 [FileAllocator] done allocating datafile /data/db/sconsTests/test.2, size: 256MB, took 0 secs Fri Feb 22 11:24:07.875 [FileAllocator] allocating new datafile /data/db/sconsTests/test.3, filling with zeroes... 
Fri Feb 22 11:24:07.875 [FileAllocator] done allocating datafile /data/db/sconsTests/test.3, size: 512MB, took 0 secs 1 insert done count: 1000000 { "_id" : 1, "x" : "a b" } Fri Feb 22 11:24:24.865 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 38 locks(micros) r:180164 nreturned:39199 reslen:4194313 174ms Fri Feb 22 11:24:25.329 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:190051 nreturned:39199 reslen:4194313 190ms Fri Feb 22 11:24:25.766 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:190862 nreturned:39199 reslen:4194313 190ms Fri Feb 22 11:24:26.202 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:196068 nreturned:39199 reslen:4194313 196ms Fri Feb 22 11:24:26.643 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:352461 nreturned:39199 reslen:4194313 202ms Fri Feb 22 11:24:27.209 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:192556 nreturned:39199 reslen:4194313 192ms { "_id" : 200002, "x" : "a b" } Fri Feb 22 11:24:27.767 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:234290 nreturned:39199 reslen:4194313 194ms Fri Feb 22 11:24:28.296 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:176823 nreturned:39199 reslen:4194313 176ms Fri Feb 22 11:24:28.718 [conn13] getmore 
test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:291071 nreturned:39199 reslen:4194313 187ms Fri Feb 22 11:24:29.154 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:193883 nreturned:39199 reslen:4194313 193ms Fri Feb 22 11:24:29.604 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 917 locks(micros) r:406013 nreturned:39199 reslen:4194313 216ms { "_id" : 400002, "x" : "a b" } 2 3 true { "_id" : 400003, "x" : "a b" } Fri Feb 22 11:24:42.867 [conn13] remove test.foo query: { _id: { $gt: 200000.0, $lt: 600000.0 } } ndeleted:399999 keyUpdates:0 numYields: 123 locks(micros) w:23926391 13214ms Fri Feb 22 11:24:42.996 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:128592 nreturned:39199 reslen:4194313 128ms Fri Feb 22 11:24:43.398 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:124170 nreturned:39199 reslen:4194313 124ms Fri Feb 22 11:24:43.836 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:133705 nreturned:39199 reslen:4194313 133ms Fri Feb 22 11:24:44.293 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:123602 nreturned:39199 reslen:4194313 123ms Fri Feb 22 11:24:44.869 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:203996 nreturned:39199 reslen:4194313 204ms Fri Feb 22 11:24:45.393 [conn13] getmore test.foo 
query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:208413 nreturned:39199 reslen:4194313 208ms Fri Feb 22 11:24:45.846 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:193932 nreturned:39199 reslen:4194313 193ms Fri Feb 22 11:24:46.327 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:214002 nreturned:39199 reslen:4194313 214ms Fri Feb 22 11:24:46.777 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:266115 nreturned:39199 reslen:4194313 185ms Fri Feb 22 11:24:47.276 [conn13] getmore test.foo query: { query: { y: null }, orderby: { _id: 1.0 } } cursorid:2432597678457837 ntoreturn:0 keyUpdates:0 locks(micros) r:194124 nreturned:39199 reslen:4194313 194ms 4. n:431286 { "_id" : 999999, "x" : "a b" } btreedel.js success Fri Feb 22 11:24:47.689 [conn13] end connection 127.0.0.1:41651 (0 connections now open) 1.2118 minutes Fri Feb 22 11:24:47.721 [initandlisten] connection accepted from 127.0.0.1:60941 #14 (1 connection now open) Fri Feb 22 11:24:47.722 [conn14] end connection 127.0.0.1:60941 (0 connections now open) ******************************************* Test : bulk_shard_insert.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/bulk_shard_insert.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/bulk_shard_insert.js";TestData.testFile = "bulk_shard_insert.js";TestData.testName = "bulk_shard_insert";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:24:47 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:24:47.898 [initandlisten] connection accepted from 127.0.0.1:65134 #15 (1 connection now open)
null
Seeded with 1361532287907
Resetting db path '/data/db/bulk_shard_insert0'
Fri Feb 22 11:24:47.918 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/bulk_shard_insert0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:24:48.011 [initandlisten] MongoDB starting : pid=22355 port=30000 dbpath=/data/db/bulk_shard_insert0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:24:48.011 [initandlisten]
m30000| Fri Feb 22 11:24:48.011 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:24:48.011 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:24:48.011 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:24:48.011 [initandlisten]
m30000| Fri Feb 22 11:24:48.011 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:24:48.011 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:24:48.011 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:24:48.011 [initandlisten] allocator: system
m30000| Fri Feb 22 11:24:48.011 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:24:48.012 [initandlisten] journal dir=/data/db/bulk_shard_insert0/journal
m30000| Fri Feb 22 11:24:48.012 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] creating directory /data/db/bulk_shard_insert0/_tmp
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:24:48.026 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/local.0, filling with zeroes...
m30000| Fri Feb 22 11:24:48.027 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:24:48.030 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:24:48.030 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:24:48.121 [initandlisten] connection accepted from 127.0.0.1:40234 #1 (1 connection now open)
Resetting db path '/data/db/bulk_shard_insert1'
Fri Feb 22 11:24:48.125 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/bulk_shard_insert1 --setParameter enableTestCommands=1
m30001| Fri Feb 22 11:24:48.219 [initandlisten] MongoDB starting : pid=22356 port=30001 dbpath=/data/db/bulk_shard_insert1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 11:24:48.220 [initandlisten]
m30001| Fri Feb 22 11:24:48.220 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 11:24:48.220 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 11:24:48.220 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 11:24:48.220 [initandlisten]
m30001| Fri Feb 22 11:24:48.220 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 11:24:48.220 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 11:24:48.220 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 11:24:48.220 [initandlisten] allocator: system
m30001| Fri Feb 22 11:24:48.220 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert1", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 11:24:48.220 [initandlisten] journal dir=/data/db/bulk_shard_insert1/journal
m30001| Fri Feb 22 11:24:48.220 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/local.ns, filling with zeroes...
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] creating directory /data/db/bulk_shard_insert1/_tmp
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:24:48.237 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/local.0, filling with zeroes...
m30001| Fri Feb 22 11:24:48.238 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:24:48.241 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 11:24:48.241 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 11:24:48.327 [initandlisten] connection accepted from 127.0.0.1:56952 #1 (1 connection now open)
Resetting db path '/data/db/bulk_shard_insert2'
Fri Feb 22 11:24:48.330 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30002 --dbpath /data/db/bulk_shard_insert2 --setParameter enableTestCommands=1
m30002| Fri Feb 22 11:24:48.420 [initandlisten] MongoDB starting : pid=22357 port=30002 dbpath=/data/db/bulk_shard_insert2 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30002| Fri Feb 22 11:24:48.420 [initandlisten]
m30002| Fri Feb 22 11:24:48.420 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30002| Fri Feb 22 11:24:48.420 [initandlisten] ** uses to detect impending page faults.
m30002| Fri Feb 22 11:24:48.420 [initandlisten] ** This may result in slower performance for certain use cases
m30002| Fri Feb 22 11:24:48.420 [initandlisten]
m30002| Fri Feb 22 11:24:48.420 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30002| Fri Feb 22 11:24:48.420 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30002| Fri Feb 22 11:24:48.420 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30002| Fri Feb 22 11:24:48.420 [initandlisten] allocator: system
m30002| Fri Feb 22 11:24:48.420 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert2", port: 30002, setParameter: [ "enableTestCommands=1" ] }
m30002| Fri Feb 22 11:24:48.421 [initandlisten] journal dir=/data/db/bulk_shard_insert2/journal
m30002| Fri Feb 22 11:24:48.421 [initandlisten] recover : no journal files present, no recovery needed
m30002| Fri Feb 22 11:24:48.436 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/local.ns, filling with zeroes...
m30002| Fri Feb 22 11:24:48.436 [FileAllocator] creating directory /data/db/bulk_shard_insert2/_tmp
m30002| Fri Feb 22 11:24:48.437 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/local.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:24:48.437 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/local.0, filling with zeroes...
m30002| Fri Feb 22 11:24:48.437 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/local.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:24:48.440 [initandlisten] waiting for connections on port 30002
m30002| Fri Feb 22 11:24:48.440 [websvr] admin web console waiting for connections on port 31002
m30002| Fri Feb 22 11:24:48.531 [initandlisten] connection accepted from 127.0.0.1:36630 #1 (1 connection now open)
Resetting db path '/data/db/bulk_shard_insert3'
Fri Feb 22 11:24:48.538 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30003 --dbpath /data/db/bulk_shard_insert3 --setParameter enableTestCommands=1
m30003| Fri Feb 22 11:24:48.618 [initandlisten] MongoDB starting : pid=22358 port=30003 dbpath=/data/db/bulk_shard_insert3 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30003| Fri Feb 22 11:24:48.619 [initandlisten]
m30003| Fri Feb 22 11:24:48.619 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30003| Fri Feb 22 11:24:48.619 [initandlisten] ** uses to detect impending page faults.
m30003| Fri Feb 22 11:24:48.619 [initandlisten] ** This may result in slower performance for certain use cases
m30003| Fri Feb 22 11:24:48.619 [initandlisten]
m30003| Fri Feb 22 11:24:48.619 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30003| Fri Feb 22 11:24:48.619 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30003| Fri Feb 22 11:24:48.619 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30003| Fri Feb 22 11:24:48.619 [initandlisten] allocator: system
m30003| Fri Feb 22 11:24:48.619 [initandlisten] options: { dbpath: "/data/db/bulk_shard_insert3", port: 30003, setParameter: [ "enableTestCommands=1" ] }
m30003| Fri Feb 22 11:24:48.619 [initandlisten] journal dir=/data/db/bulk_shard_insert3/journal
m30003| Fri Feb 22 11:24:48.619 [initandlisten] recover : no journal files present, no recovery needed
m30003| Fri Feb 22 11:24:48.632 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/local.ns, filling with zeroes...
m30003| Fri Feb 22 11:24:48.632 [FileAllocator] creating directory /data/db/bulk_shard_insert3/_tmp
m30003| Fri Feb 22 11:24:48.633 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/local.ns, size: 16MB, took 0 secs
m30003| Fri Feb 22 11:24:48.633 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/local.0, filling with zeroes...
m30003| Fri Feb 22 11:24:48.633 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/local.0, size: 64MB, took 0 secs
m30003| Fri Feb 22 11:24:48.636 [websvr] admin web console waiting for connections on port 31003
m30003| Fri Feb 22 11:24:48.636 [initandlisten] waiting for connections on port 30003
m30003| Fri Feb 22 11:24:48.740 [initandlisten] connection accepted from 127.0.0.1:35811 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 11:24:48.740 [initandlisten] connection accepted from 127.0.0.1:35416 #2 (2 connections now open)
ShardingTest bulk_shard_insert : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001, connection to localhost:30002, connection to localhost:30003 ] }
Fri Feb 22 11:24:48.744 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:24:48.757 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:24:48.758 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=22359 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:24:48.758 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:24:48.758 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:24:48.758 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:24:48.759 [initandlisten] connection accepted from 127.0.0.1:42442 #3 (3 connections now open)
m30000| Fri Feb 22 11:24:48.762 [initandlisten] connection accepted from 127.0.0.1:40998 #4 (4 connections now open)
m30000| Fri Feb 22 11:24:48.762 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:24:48.775 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838 (sleeping for 30000ms)
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/config.ns, filling with zeroes...
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/config.0, filling with zeroes...
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:24:48.775 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/config.1, filling with zeroes...
m30000| Fri Feb 22 11:24:48.776 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:24:48.778 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 11:24:48.778 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.779 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:24:48.780 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:24:48.781 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 11:24:48.782 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:24:48.782 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 51275580ce6119f732c457ec
m30999| Fri Feb 22 11:24:48.784 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:24:48.784 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:24:48.784 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:48-51275580ce6119f732c457ed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532288784), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:24:48.784 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 11:24:48.784 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.785 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 11:24:48.785 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 11:24:48.785 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.786 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:48-51275580ce6119f732c457ef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361532288786), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:24:48.786 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:24:48.786 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30000| Fri Feb 22 11:24:48.787 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 11:24:48.788 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:24:48.788 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 11:24:48.788 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:24:48.788 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 11:24:48.789 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:24:48.790 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.790 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 11:24:48.790 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 11:24:48.790 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.790 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 11:24:48.791 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.791 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:24:48.791 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.791 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:24:48.792 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:24:48.792 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 11:24:48.792 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 11:24:48.793 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.793 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:24:48.793 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:24:48
m30000| Fri Feb 22 11:24:48.794 [conn3] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 11:24:48.794 [initandlisten] connection accepted from 127.0.0.1:55370 #5 (5 connections now open)
m30000| Fri Feb 22 11:24:48.795 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 51275580ce6119f732c457f1
m30999| Fri Feb 22 11:24:48.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30999| Fri Feb 22 11:24:48.945 [mongosMain] connection accepted from 127.0.0.1:33354 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:24:48.947 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 11:24:48.947 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 11:24:48.948 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:24:48.948 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 11:24:48.949 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Fri Feb 22 11:24:48.951 [initandlisten] connection accepted from 127.0.0.1:59009 #2 (2 connections now open)
m30999| Fri Feb 22 11:24:48.952 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30002
m30002| Fri Feb 22 11:24:48.953 [initandlisten] connection accepted from 127.0.0.1:47913 #2 (2 connections now open)
m30999| Fri Feb 22 11:24:48.954 [conn1] going to add shard: { _id: "shard0002", host: "localhost:30002" }
{ "shardAdded" : "shard0002", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30003
m30003| Fri Feb 22 11:24:48.955 [initandlisten] connection accepted from 127.0.0.1:41716 #2 (2 connections now open)
m30999| Fri Feb 22 11:24:48.956 [conn1] going to add shard: { _id: "shard0003", host: "localhost:30003" }
{ "shardAdded" : "shard0003", "ok" : 1 }
m30000| Fri Feb 22 11:24:48.957 [initandlisten] connection accepted from 127.0.0.1:63280 #6 (6 connections now open)
m30999| Fri Feb 22 11:24:48.957 [conn1] creating WriteBackListener for: localhost:30000 serverID: 51275580ce6119f732c457f0
m30999| Fri Feb 22 11:24:48.957 [conn1] creating WriteBackListener for: localhost:30001 serverID: 51275580ce6119f732c457f0
m30001| Fri Feb 22 11:24:48.957 [initandlisten] connection accepted from 127.0.0.1:60370 #3 (3 connections now open)
m30002| Fri Feb 22 11:24:48.958 [initandlisten] connection accepted from 127.0.0.1:50557 #3 (3 connections now open)
m30999| Fri Feb 22 11:24:48.958 [conn1] creating WriteBackListener for: localhost:30002 serverID: 51275580ce6119f732c457f0
m30003| Fri Feb 22 11:24:48.958 [initandlisten] connection accepted from 127.0.0.1:42191 #3 (3 connections now open)
m30999| Fri Feb 22 11:24:48.958 [conn1] creating WriteBackListener for: localhost:30003 serverID: 51275580ce6119f732c457f0
m30999| Fri Feb 22 11:24:48.959 [conn1] couldn't find database [bulk_shard_insert] in config db
m30000| Fri Feb 22 11:24:48.959 [initandlisten] connection accepted from 127.0.0.1:61993 #7 (7 connections now open)
m30001| Fri Feb 22 11:24:48.960 [initandlisten] connection accepted from 127.0.0.1:56609 #4 (4 connections now open)
m30002| Fri Feb 22 11:24:48.961 [initandlisten] connection accepted from 127.0.0.1:41136 #4 (4 connections now open)
m30003| Fri Feb 22 11:24:48.961 [initandlisten] connection accepted from 127.0.0.1:52636 #4 (4 connections now open)
m30999| Fri Feb 22 11:24:48.961 [conn1] put [bulk_shard_insert] on: shard0001:localhost:30001
m30999| Fri Feb 22 11:24:48.963 [conn1] enabling sharding on: bulk_shard_insert
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.ns, filling with zeroes...
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.0, filling with zeroes...
m30001| Fri Feb 22 11:24:48.964 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:24:48.965 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.1, filling with zeroes...
m30001| Fri Feb 22 11:24:48.965 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:24:48.967 [conn4] build index bulk_shard_insert.coll { _id: 1 }
m30001| Fri Feb 22 11:24:48.968 [conn4] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:24:48.968 [conn4] info: creating collection bulk_shard_insert.coll on add index
m30999| Fri Feb 22 11:24:48.968 [conn1] CMD: shardcollection: { shardcollection: "bulk_shard_insert.coll", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:24:48.968 [conn1] enable sharding on: bulk_shard_insert.coll with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:24:48.968 [conn1] going to create 1 chunk(s) for: bulk_shard_insert.coll using new epoch 51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:48.969 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 2 version: 1|0||51275580ce6119f732c457f2 based on: (empty)
m30000| Fri Feb 22 11:24:48.970 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:24:48.971 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:24:48.971 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 11:24:48.972 [initandlisten] connection accepted from 127.0.0.1:64913 #8 (8 connections now open)
Bulk size is 4000
Document size is 141
m30001| Fri Feb 22 11:24:49.280 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.281 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.281 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755815b44ae98eaa729fc') } ], shardId: "bulk_shard_insert.coll-_id_MinKey", configdb: "localhost:30000" }
m30000| Fri Feb 22 11:24:49.282 [initandlisten] connection accepted from 127.0.0.1:37721 #9 (9 connections now open)
m30001| Fri Feb 22 11:24:49.283 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480 (sleeping for 30000ms)
m30001| Fri Feb 22 11:24:49.284 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275581ad0d9d7dc768fedd
m30001| Fri Feb 22 11:24:49.285 [conn4] splitChunk accepted at version 1|0||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:49.286 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:49-51275581ad0d9d7dc768fede", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532289286), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:49.287 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:49.288 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 3 version: 1|2||51275580ce6119f732c457f2 based on: 1|0||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:49.288 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } on: { _id: ObjectId('512755815b44ae98eaa729fc') } (splitThreshold 921)
m30001| Fri Feb 22 11:24:49.667 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa729fc') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.671 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa729fc') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:49.672 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755815b44ae98eaa7493b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa729fc')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:49.673 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275581ad0d9d7dc768fedf
m30001| Fri Feb 22 11:24:49.674 [conn4] splitChunk accepted at version 1|2||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:49.674 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:49-51275581ad0d9d7dc768fee0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532289674), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:49.675 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:49.675 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 4 version: 1|4||51275580ce6119f732c457f2 based on: 1|2||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:49.676 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: ObjectId('512755815b44ae98eaa729fc') }max: { _id: MaxKey } on: { _id: ObjectId('512755815b44ae98eaa7493b') } (splitThreshold 471859) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:49.919 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
Inserted 12000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001 3
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 4 }
m30001| Fri Feb 22 11:24:50.204 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:50.446 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:50.454 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755815b44ae98eaa7493b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:50.455 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755825b44ae98eaa7781b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa7493b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:50.456 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275582ad0d9d7dc768fee1
m30001| Fri Feb 22 11:24:50.457 [conn4] splitChunk accepted at version 1|4||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:50.458 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:50-51275582ad0d9d7dc768fee2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532290458), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:50.458 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:50.459 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 5 version: 1|6||51275580ce6119f732c457f2 based on: 1|4||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:50.459 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: ObjectId('512755815b44ae98eaa7493b') }max: { _id: MaxKey } on: { _id: ObjectId('512755825b44ae98eaa7781b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 20000 documents.
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 4 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 6 } m30001| Fri Feb 22 11:24:50.692 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:50.903 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.136 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.148 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755825b44ae98eaa7781b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.148 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: 
ObjectId('512755825b44ae98eaa7781b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755835b44ae98eaa7a6fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755825b44ae98eaa7781b')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:51.149 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275583ad0d9d7dc768fee3 m30001| Fri Feb 22 11:24:51.150 [conn4] splitChunk accepted at version 1|6||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:51.151 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:51-51275583ad0d9d7dc768fee4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532291151), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:51.151 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
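The metadata event above shows how a split bumps only the minor component of the chunk version: the chunk at lastmod 1000|6 becomes a left half at 1000|7 and a right half at 1000|8, and mongos then reloads "version: 1|8 ... based on: 1|6". A sketch of that bump, illustrative only (not MongoDB internals):

```python
def split_version(collection_version):
    """Model the major|minor chunk-version bump a split performs: the left
    and right halves take the next two minor values, and the collection
    version advances to the right half's version."""
    major, minor = collection_version
    left, right = (major, minor + 1), (major, minor + 2)
    return left, right  # right is also the new collection version
```

The major component only changes on a migration, which is why it stays at 1 throughout this balancer-less test.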
m30999| Fri Feb 22 11:24:51.152 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 6 version: 1|8||51275580ce6119f732c457f2 based on: 1|6||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:51.153 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: ObjectId('512755825b44ae98eaa7781b') }max: { _id: MaxKey } on: { _id: ObjectId('512755835b44ae98eaa7a6fb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) Inserted 32000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 5 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 8 } m30001| Fri Feb 22 11:24:51.496 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : 
ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:51.846 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } Inserted 40000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 5 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 8 } m30001| Fri Feb 22 11:24:52.199 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:52.211 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755835b44ae98eaa7a6fb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:52.213 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", 
keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755845b44ae98eaa7d5db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755835b44ae98eaa7a6fb')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:52.214 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275584ad0d9d7dc768fee5 m30001| Fri Feb 22 11:24:52.215 [conn4] splitChunk accepted at version 1|8||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:52.215 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:52-51275584ad0d9d7dc768fee6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532292215), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:52.216 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30999| Fri Feb 22 11:24:52.217 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 7 version: 1|10||51275580ce6119f732c457f2 based on: 1|8||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:52.217 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }max: { _id: MaxKey } on: { _id: ObjectId('512755845b44ae98eaa7d5db') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:24:52.559 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:52.904 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } Inserted 52000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 6 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { 
"_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 10 } m30001| Fri Feb 22 11:24:53.274 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:53.286 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755845b44ae98eaa7d5db') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:53.287 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755855b44ae98eaa804bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755845b44ae98eaa7d5db')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:53.288 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275585ad0d9d7dc768fee7 m30001| Fri Feb 22 11:24:53.289 [conn4] splitChunk accepted at version 1|10||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:53.290 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:53-51275585ad0d9d7dc768fee8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532293290), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, lastmod: Timestamp 1000|11, 
lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:53.290 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:24:53.291 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 8 version: 1|12||51275580ce6119f732c457f2 based on: 1|10||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:53.291 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { _id: ObjectId('512755845b44ae98eaa7d5db') }max: { _id: MaxKey } on: { _id: ObjectId('512755855b44ae98eaa804bb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:24:53.612 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } Inserted 60000 documents. 
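Only the { ... -->> MaxKey } chunk ever splits in these status dumps because the shard key is an ever-increasing ObjectId: every new document sorts past all existing split points and routes into the open-ended top chunk. A sketch of that routing, using numeric stand-ins for the ObjectId split points (values are hypothetical):

```python
import bisect

def route(split_points, key):
    """Route a key to a chunk index given sorted split points.
    Chunk i covers [split_points[i-1], split_points[i]); the
    final chunk is open-ended up to MaxKey."""
    return bisect.bisect_right(split_points, key)

# Numeric stand-ins for the ObjectId split points in the log.
splits = [100, 200, 300]        # 4 chunks; chunk 3 is the MaxKey chunk
new_keys = [301, 305, 420]      # monotonically increasing inserts
targets = [route(splits, k) for k in new_keys]
```

Every monotonic key lands in the last chunk, which is why the insert load never spreads even as the chunk count grows.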
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 7 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 12 } m30001| Fri Feb 22 11:24:53.883 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.140 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.148 [conn4] max number of requested split points reached (2) before the end 
of chunk bulk_shard_insert.coll { : ObjectId('512755855b44ae98eaa804bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.149 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755865b44ae98eaa8339b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755855b44ae98eaa804bb')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:54.149 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275586ad0d9d7dc768fee9 m30001| Fri Feb 22 11:24:54.150 [conn4] splitChunk accepted at version 1|12||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:54.151 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:54-51275586ad0d9d7dc768feea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532294151), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:54.151 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30999| Fri Feb 22 11:24:54.152 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 9 version: 1|14||51275580ce6119f732c457f2 based on: 1|12||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:54.152 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { _id: ObjectId('512755855b44ae98eaa804bb') }max: { _id: MaxKey } on: { _id: ObjectId('512755865b44ae98eaa8339b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:24:54.395 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } Inserted 72000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 8 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : 
ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 14 } m30001| Fri Feb 22 11:24:54.647 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.901 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.909 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8339b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:54.909 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755865b44ae98eaa8627b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8339b')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:54.910 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275586ad0d9d7dc768feeb m30001| Fri Feb 22 11:24:54.911 [conn4] splitChunk accepted at version 1|14||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:54.911 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:54-51275586ad0d9d7dc768feec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532294911), what: "split", ns: "bulk_shard_insert.coll", details: { before: 
{ min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:54.912 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:24:54.913 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 10 version: 1|16||51275580ce6119f732c457f2 based on: 1|14||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:54.913 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|14||000000000000000000000000min: { _id: ObjectId('512755865b44ae98eaa8339b') }max: { _id: MaxKey } on: { _id: ObjectId('512755865b44ae98eaa8627b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) Inserted 80000 documents. 
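The "max number of requested split points reached (2) before the end of chunk" lines show the shard's split-point scan stopping early: it walks the keys in order, emits a candidate roughly every half chunk's worth of data, and gives up once it has enough candidates rather than scanning the whole chunk. A hedged sketch of that scan (sizes and the cap are illustrative, not the real splitVector implementation):

```python
def split_vector(doc_bytes, keys, max_chunk_bytes, max_split_points=2):
    """Scan documents in key order, emitting a split key each time roughly
    half the max chunk size has accumulated; stop early once
    max_split_points candidates have been found."""
    out, acc = [], 0
    for size, key in zip(doc_bytes, keys):
        acc += size
        if acc >= max_chunk_bytes // 2:
            out.append(key)
            acc = 0
            if len(out) >= max_split_points:
                break
    return out
```

For a top chunk the actual splitChunk request above then uses only the first candidate as its single splitKey, keeping the hot MaxKey chunk small.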
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 9 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 16 } m30001| Fri Feb 22 11:24:55.178 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } 
-->> { : MaxKey } m30001| Fri Feb 22 11:24:55.434 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:55.683 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:55.702 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755865b44ae98eaa8627b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:55.702 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755875b44ae98eaa8915b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8627b')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:24:55.703 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275587ad0d9d7dc768feed m30001| Fri Feb 22 11:24:55.704 [conn4] splitChunk accepted at version 1|16||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:24:55.705 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:55-51275587ad0d9d7dc768feee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532295705), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, 
lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:24:55.705 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:24:55.706 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 11 version: 1|18||51275580ce6119f732c457f2 based on: 1|16||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:24:55.706 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { _id: ObjectId('512755865b44ae98eaa8627b') }max: { _id: MaxKey } on: { _id: ObjectId('512755875b44ae98eaa8915b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) Inserted 92000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 10 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : 
ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 } { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 18 } m30001| Fri Feb 22 11:24:56.048 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey } m30001| Fri Feb 22 11:24:56.386 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey } Inserted 100000 documents. 
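Every splitChunk in this log runs under the collection's distributed lock ('bulk_shard_insert.coll/bs-smartos-...'), acquired with a fresh ts before the config metadata is updated and unlocked immediately after. A toy model of that acquire/unlock discipline (not the real config-server-backed implementation):

```python
import itertools

class DistLock:
    """Toy model of the lock pattern in the log: one holder per resource,
    each successful acquisition tagged with a fresh ts. Illustrative only."""
    _ts = itertools.count(1)

    def __init__(self):
        self.holder = None

    def acquire(self, who):
        if self.holder is not None:
            return None                  # contended: caller must retry
        self.holder = who
        return next(DistLock._ts)        # ts identifying this acquisition

    def unlock(self, who):
        if self.holder == who:
            self.holder = None
```

The fresh ts per acquisition is what makes the 'acquired, ts : 51275582...' values in the log strictly increase across consecutive splits.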
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 10 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 } { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 18 } 
m30001| Fri Feb 22 11:24:56.749 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:56.761 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755875b44ae98eaa8915b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:56.762 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755885b44ae98eaa8c03b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755875b44ae98eaa8915b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:56.763 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275588ad0d9d7dc768feef
m30001| Fri Feb 22 11:24:56.766 [conn4] splitChunk accepted at version 1|18||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:56.767 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:56-51275588ad0d9d7dc768fef0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532296767), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:56.767 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:56.768 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 12 version: 1|20||51275580ce6119f732c457f2 based on: 1|18||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:56.768 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: ObjectId('512755875b44ae98eaa8915b') } max: { _id: MaxKey } on: { _id: ObjectId('512755885b44ae98eaa8c03b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:57.093 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:57.354 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
Inserted 112000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  11
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
        { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
        { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
        { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
        { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
        { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
        { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
        { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
        { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 20 }
m30001| Fri Feb 22 11:24:57.620 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:57.628 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755885b44ae98eaa8c03b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:57.629 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755895b44ae98eaa8ef1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755885b44ae98eaa8c03b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:57.629 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275589ad0d9d7dc768fef1
m30001| Fri Feb 22 11:24:57.630 [conn4] splitChunk accepted at version 1|20||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:57.631 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:57-51275589ad0d9d7dc768fef2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532297631), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:57.631 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:57.632 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 13 version: 1|22||51275580ce6119f732c457f2 based on: 1|20||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:57.632 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: ObjectId('512755885b44ae98eaa8c03b') } max: { _id: MaxKey } on: { _id: ObjectId('512755895b44ae98eaa8ef1b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:57.878 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
Inserted 120000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  12
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
        { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
        { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
        { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
        { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
        { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
        { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
        { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
        { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
        { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 22 }
m30001| Fri Feb 22 11:24:58.138 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:58.389 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:58.401 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755895b44ae98eaa8ef1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:58.401 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558a5b44ae98eaa91dfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755895b44ae98eaa8ef1b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:58.402 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558aad0d9d7dc768fef3
m30001| Fri Feb 22 11:24:58.403 [conn4] splitChunk accepted at version 1|22||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:58.404 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:58-5127558aad0d9d7dc768fef4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532298404), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:58.404 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:58.405 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 14 version: 1|24||51275580ce6119f732c457f2 based on: 1|22||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:58.405 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|22||000000000000000000000000 min: { _id: ObjectId('512755895b44ae98eaa8ef1b') } max: { _id: MaxKey } on: { _id: ObjectId('5127558a5b44ae98eaa91dfb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:24:58.652 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
Inserted 132000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  13
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
        { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
        { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
        { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
        { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
        { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
        { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
        { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
        { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
        { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
        { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 24 }
m30001| Fri Feb 22 11:24:58.936 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.185 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.192 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558a5b44ae98eaa91dfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.193 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558b5b44ae98eaa94cdb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558a5b44ae98eaa91dfb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:24:59.193 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558bad0d9d7dc768fef5
m30001| Fri Feb 22 11:24:59.194 [conn4] splitChunk accepted at version 1|24||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:24:59.195 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:24:59-5127558bad0d9d7dc768fef6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532299195), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:24:59.195 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:24:59.196 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 15 version: 1|26||51275580ce6119f732c457f2 based on: 1|24||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:24:59.196 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|24||000000000000000000000000 min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') } max: { _id: MaxKey } on: { _id: ObjectId('5127558b5b44ae98eaa94cdb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 140000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  14
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
        { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
        { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
        { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
        { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
        { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
        { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
        { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
        { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
        { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
        { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
        { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 26 }
m30001| Fri Feb 22 11:24:59.457 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.715 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:24:59.997 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:00.007 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa94cdb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:00.007 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558b5b44ae98eaa97bbb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa94cdb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:00.008 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558cad0d9d7dc768fef7
m30001| Fri Feb 22 11:25:00.009 [conn4] splitChunk accepted at version 1|26||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:00.010 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:00-5127558cad0d9d7dc768fef8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532300010), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:00.010 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:00.011 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 16 version: 1|28||51275580ce6119f732c457f2 based on: 1|26||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:00.011 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|26||000000000000000000000000 min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') } max: { _id: MaxKey } on: { _id: ObjectId('5127558b5b44ae98eaa97bbb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 152000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  15
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
        { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
        { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
        { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
        { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
        { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
        { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
        { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
        { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
        { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
        { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
        { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
        { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
        { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
        { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 28 }
m30001| Fri Feb 22 11:25:00.340 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:00.673 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey }
Inserted 160000 documents.
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0001 15 { "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 } { "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 } { "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 } { "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 } { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 } { "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 } { "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 } { "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 } { "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 } { "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" 
: 1000, "i" : 19 } { "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 } { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 } { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 } { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 } { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 28 } m30001| Fri Feb 22 11:25:01.069 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:01.080 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558b5b44ae98eaa97bbb') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:01.082 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa97bbb')", configdb: "localhost:30000" } m30001| Fri Feb 22 11:25:01.083 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558dad0d9d7dc768fef9 m30001| Fri Feb 22 11:25:01.083 [conn4] splitChunk accepted at version 1|28||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:01.084 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:01-5127558dad0d9d7dc768fefa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new 
Date(1361532301084), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30001| Fri Feb 22 11:25:01.085 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30999| Fri Feb 22 11:25:01.085 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 17 version: 1|30||51275580ce6119f732c457f2 based on: 1|28||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:01.086 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }max: { _id: MaxKey } on: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30001| Fri Feb 22 11:25:01.336 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:01.579 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey } Inserted 172000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	16
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
			{ "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
			{ "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
			{ "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
			{ "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
			{ "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
			{ "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
			{ "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
			{ "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
			{ "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
			{ "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
			{ "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
			{ "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } on : shard0001 { "t" : 1000, "i" : 29 }
			{ "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 30 }
m30001| Fri Feb 22 11:25:01.854 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.863 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558c5b44ae98eaa9aa9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:01.863 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558d5b44ae98eaa9d97b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558c5b44ae98eaa9aa9b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:01.864 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558dad0d9d7dc768fefb
m30001| Fri Feb 22 11:25:01.865 [conn4] splitChunk accepted at version 1|30||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:01.865 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:01-5127558dad0d9d7dc768fefc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532301865), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:01.866 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:01.873 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 18 version: 1|32||51275580ce6119f732c457f2 based on: 1|30||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:01.873 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }max: { _id: MaxKey } on: { _id: ObjectId('5127558d5b44ae98eaa9d97b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:02.130 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
Inserted 180000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	17
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
			{ "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
			{ "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
			{ "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
			{ "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
			{ "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
			{ "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
			{ "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
			{ "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
			{ "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
			{ "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
			{ "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
			{ "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } on : shard0001 { "t" : 1000, "i" : 29 }
			{ "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } -->> { "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } on : shard0001 { "t" : 1000, "i" : 31 }
			{ "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 32 }
m30001| Fri Feb 22 11:25:02.424 [conn3] insert bulk_shard_insert.coll ninserted:4000 keyUpdates:0 locks(micros) w:105771 105ms
m30001| Fri Feb 22 11:25:02.425 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:02.681 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:02.689 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558d5b44ae98eaa9d97b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:02.691 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558e5b44ae98eaaa085b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558d5b44ae98eaa9d97b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:02.692 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558ead0d9d7dc768fefd
m30001| Fri Feb 22 11:25:02.692 [conn4] splitChunk accepted at version 1|32||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:02.693 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:02-5127558ead0d9d7dc768fefe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532302693), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:02.694 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:02.695 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 19 version: 1|34||51275580ce6119f732c457f2 based on: 1|32||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:02.695 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }max: { _id: MaxKey } on: { _id: ObjectId('5127558e5b44ae98eaaa085b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:02.937 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558e5b44ae98eaaa085b') } -->> { : MaxKey }
Inserted 192000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	18
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
			{ "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
			{ "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
			{ "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
			{ "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
			{ "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
			{ "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
			{ "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
			{ "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
			{ "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
			{ "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
			{ "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
			{ "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } on : shard0001 { "t" : 1000, "i" : 29 }
			{ "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } -->> { "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } on : shard0001 { "t" : 1000, "i" : 31 }
			{ "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } -->> { "_id" : ObjectId("5127558e5b44ae98eaaa085b") } on : shard0001 { "t" : 1000, "i" : 33 }
			{ "_id" : ObjectId("5127558e5b44ae98eaaa085b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 34 }
m30001| Fri Feb 22 11:25:03.211 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558e5b44ae98eaaa085b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:03.488 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558e5b44ae98eaaa085b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:03.496 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558e5b44ae98eaaa085b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:03.496 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127558f5b44ae98eaaa373b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558e5b44ae98eaaa085b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:03.497 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127558fad0d9d7dc768feff
m30001| Fri Feb 22 11:25:03.498 [conn4] splitChunk accepted at version 1|34||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:03.499 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:03-5127558fad0d9d7dc768ff00", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532303499), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:03.499 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:03.500 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 20 version: 1|36||51275580ce6119f732c457f2 based on: 1|34||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:03.500 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|34||000000000000000000000000min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }max: { _id: MaxKey } on: { _id: ObjectId('5127558f5b44ae98eaaa373b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 200000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	19
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("512755815b44ae98eaa729fc") } on : shard0001 { "t" : 1000, "i" : 1 }
			{ "_id" : ObjectId("512755815b44ae98eaa729fc") } -->> { "_id" : ObjectId("512755815b44ae98eaa7493b") } on : shard0001 { "t" : 1000, "i" : 3 }
			{ "_id" : ObjectId("512755815b44ae98eaa7493b") } -->> { "_id" : ObjectId("512755825b44ae98eaa7781b") } on : shard0001 { "t" : 1000, "i" : 5 }
			{ "_id" : ObjectId("512755825b44ae98eaa7781b") } -->> { "_id" : ObjectId("512755835b44ae98eaa7a6fb") } on : shard0001 { "t" : 1000, "i" : 7 }
			{ "_id" : ObjectId("512755835b44ae98eaa7a6fb") } -->> { "_id" : ObjectId("512755845b44ae98eaa7d5db") } on : shard0001 { "t" : 1000, "i" : 9 }
			{ "_id" : ObjectId("512755845b44ae98eaa7d5db") } -->> { "_id" : ObjectId("512755855b44ae98eaa804bb") } on : shard0001 { "t" : 1000, "i" : 11 }
			{ "_id" : ObjectId("512755855b44ae98eaa804bb") } -->> { "_id" : ObjectId("512755865b44ae98eaa8339b") } on : shard0001 { "t" : 1000, "i" : 13 }
			{ "_id" : ObjectId("512755865b44ae98eaa8339b") } -->> { "_id" : ObjectId("512755865b44ae98eaa8627b") } on : shard0001 { "t" : 1000, "i" : 15 }
			{ "_id" : ObjectId("512755865b44ae98eaa8627b") } -->> { "_id" : ObjectId("512755875b44ae98eaa8915b") } on : shard0001 { "t" : 1000, "i" : 17 }
			{ "_id" : ObjectId("512755875b44ae98eaa8915b") } -->> { "_id" : ObjectId("512755885b44ae98eaa8c03b") } on : shard0001 { "t" : 1000, "i" : 19 }
			{ "_id" : ObjectId("512755885b44ae98eaa8c03b") } -->> { "_id" : ObjectId("512755895b44ae98eaa8ef1b") } on : shard0001 { "t" : 1000, "i" : 21 }
			{ "_id" : ObjectId("512755895b44ae98eaa8ef1b") } -->> { "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } on : shard0001 { "t" : 1000, "i" : 23 }
			{ "_id" : ObjectId("5127558a5b44ae98eaa91dfb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } on : shard0001 { "t" : 1000, "i" : 25 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa94cdb") } -->> { "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } on : shard0001 { "t" : 1000, "i" : 27 }
			{ "_id" : ObjectId("5127558b5b44ae98eaa97bbb") } -->> { "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } on : shard0001 { "t" : 1000, "i" : 29 }
			{ "_id" : ObjectId("5127558c5b44ae98eaa9aa9b") } -->> { "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } on : shard0001 { "t" : 1000, "i" : 31 }
			{ "_id" : ObjectId("5127558d5b44ae98eaa9d97b") } -->> { "_id" : ObjectId("5127558e5b44ae98eaaa085b") } on : shard0001 { "t" : 1000, "i" : 33 }
			{ "_id" : ObjectId("5127558e5b44ae98eaaa085b") } -->> { "_id" : ObjectId("5127558f5b44ae98eaaa373b") } on : shard0001 { "t" : 1000, "i" : 35 }
			{ "_id" : ObjectId("5127558f5b44ae98eaaa373b") } -->> { "_id" : { "$maxKey" : 1 } } on : shard0001 { "t" : 1000, "i" : 36 }
m30001| Fri Feb 22 11:25:03.840 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558f5b44ae98eaaa373b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:04.179 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558f5b44ae98eaaa373b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:04.527 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127558f5b44ae98eaaa373b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:04.539 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127558f5b44ae98eaaa373b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:04.540 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755905b44ae98eaaa661b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558f5b44ae98eaaa373b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:04.541 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275590ad0d9d7dc768ff01
m30001| Fri Feb 22 11:25:04.542 [conn4] splitChunk accepted at version 1|36||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:04.543 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:04-51275590ad0d9d7dc768ff02", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532304543), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:04.543 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:04.544 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 21 version: 1|38||51275580ce6119f732c457f2 based on: 1|36||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:04.545 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|36||000000000000000000000000min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }max: { _id: MaxKey } on: { _id: ObjectId('512755905b44ae98eaaa661b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 212000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	20
			too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:04.897 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755905b44ae98eaaa661b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:05.245 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755905b44ae98eaaa661b') } -->> { : MaxKey }
Inserted 220000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	20
			too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:05.687 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755905b44ae98eaaa661b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:05.696 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755905b44ae98eaaa661b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:05.696 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755915b44ae98eaaa94fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755905b44ae98eaaa661b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:05.697 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275591ad0d9d7dc768ff03
m30001| Fri Feb 22 11:25:05.698 [conn4] splitChunk accepted at version 1|38||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:05.698 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:05-51275591ad0d9d7dc768ff04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532305698), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:05.699 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:05.700 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 22 version: 1|40||51275580ce6119f732c457f2 based on: 1|38||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:05.700 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|38||000000000000000000000000min: { _id: ObjectId('512755905b44ae98eaaa661b') }max: { _id: MaxKey } on: { _id: ObjectId('512755915b44ae98eaaa94fb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:05.946 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755915b44ae98eaaa94fb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:06.247 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755915b44ae98eaaa94fb') } -->> { : MaxKey }
Inserted 232000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	21
			too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:06.513 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755915b44ae98eaaa94fb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:06.526 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755915b44ae98eaaa94fb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:06.526 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755925b44ae98eaaac3db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755915b44ae98eaaa94fb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:06.527 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275592ad0d9d7dc768ff05
m30001| Fri Feb 22 11:25:06.529 [conn4] splitChunk accepted at version 1|40||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:06.530 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:06-51275592ad0d9d7dc768ff06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532306530), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:06.530 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:06.531 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 23 version: 1|42||51275580ce6119f732c457f2 based on: 1|40||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:06.531 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|40||000000000000000000000000min: { _id: ObjectId('512755915b44ae98eaaa94fb') }max: { _id: MaxKey } on: { _id: ObjectId('512755925b44ae98eaaac3db') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:06.840 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755925b44ae98eaaac3db') } -->> { : MaxKey }
Inserted 240000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	22
			too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:07.176 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755925b44ae98eaaac3db') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:07.522 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755925b44ae98eaaac3db') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:07.530 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755925b44ae98eaaac3db') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:07.530 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755935b44ae98eaaaf2bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755925b44ae98eaaac3db')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:07.531 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275593ad0d9d7dc768ff07
m30001| Fri Feb 22 11:25:07.532 [conn4] splitChunk accepted at version 1|42||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:07.533 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:07-51275593ad0d9d7dc768ff08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532307532), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:07.533 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:07.534 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 24 version: 1|44||51275580ce6119f732c457f2 based on: 1|42||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:07.534 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|42||000000000000000000000000min: { _id: ObjectId('512755925b44ae98eaaac3db') }max: { _id: MaxKey } on: { _id: ObjectId('512755935b44ae98eaaaf2bb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:07.800 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755935b44ae98eaaaf2bb') } -->> { : MaxKey }
Inserted 252000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  23
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:08.121 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755935b44ae98eaaaf2bb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:08.462 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755935b44ae98eaaaf2bb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:08.474 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755935b44ae98eaaaf2bb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:08.475 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755945b44ae98eaab219b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755935b44ae98eaaaf2bb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:08.476 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275594ad0d9d7dc768ff09
m30001| Fri Feb 22 11:25:08.477 [conn4] splitChunk accepted at version 1|44||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:08.477 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:08-51275594ad0d9d7dc768ff0a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532308477), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:08.478 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:08.479 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 25 version: 1|46||51275580ce6119f732c457f2 based on: 1|44||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:08.479 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: ObjectId('512755935b44ae98eaaaf2bb') } max: { _id: MaxKey } on: { _id: ObjectId('512755945b44ae98eaab219b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 260000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  24
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:08.828 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755945b44ae98eaab219b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:09.179 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755945b44ae98eaab219b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:09.560 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755945b44ae98eaab219b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:09.575 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755945b44ae98eaab219b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:09.575 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755955b44ae98eaab507b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755945b44ae98eaab219b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:09.576 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275595ad0d9d7dc768ff0b
m30001| Fri Feb 22 11:25:09.577 [conn4] splitChunk accepted at version 1|46||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:09.578 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:09-51275595ad0d9d7dc768ff0c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532309578), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:09.579 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:09.579 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 26 version: 1|48||51275580ce6119f732c457f2 based on: 1|46||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:09.580 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: ObjectId('512755945b44ae98eaab219b') } max: { _id: MaxKey } on: { _id: ObjectId('512755955b44ae98eaab507b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 272000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  25
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:09.849 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755955b44ae98eaab507b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:10.108 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755955b44ae98eaab507b') } -->> { : MaxKey }
Inserted 280000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  25
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:10.373 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755955b44ae98eaab507b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:10.381 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755955b44ae98eaab507b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:10.381 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755965b44ae98eaab7f5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755955b44ae98eaab507b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:10.382 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275596ad0d9d7dc768ff0d
m30001| Fri Feb 22 11:25:10.383 [conn4] splitChunk accepted at version 1|48||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:10.383 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:10-51275596ad0d9d7dc768ff0e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532310383), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:10.384 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:10.384 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 27 version: 1|50||51275580ce6119f732c457f2 based on: 1|48||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:10.385 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: ObjectId('512755955b44ae98eaab507b') } max: { _id: MaxKey } on: { _id: ObjectId('512755965b44ae98eaab7f5b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:10.652 [conn3] insert bulk_shard_insert.coll ninserted:4000 keyUpdates:0 locks(micros) w:104964 104ms
m30001| Fri Feb 22 11:25:10.653 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755965b44ae98eaab7f5b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:10.904 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755965b44ae98eaab7f5b') } -->> { : MaxKey }
Inserted 292000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  26
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:11.161 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert1/bulk_shard_insert.2, filling with zeroes...
m30001| Fri Feb 22 11:25:11.162 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert1/bulk_shard_insert.2, size: 256MB, took 0 secs
m30001| Fri Feb 22 11:25:11.179 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755965b44ae98eaab7f5b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:11.187 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755965b44ae98eaab7f5b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:11.188 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755975b44ae98eaabae3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755965b44ae98eaab7f5b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:11.189 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275597ad0d9d7dc768ff0f
m30001| Fri Feb 22 11:25:11.190 [conn4] splitChunk accepted at version 1|50||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:11.190 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:11-51275597ad0d9d7dc768ff10", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532311190), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755975b44ae98eaabae3b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:11.191 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:11.192 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 28 version: 1|52||51275580ce6119f732c457f2 based on: 1|50||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:11.192 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: ObjectId('512755965b44ae98eaab7f5b') } max: { _id: MaxKey } on: { _id: ObjectId('512755975b44ae98eaabae3b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:11.436 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabae3b') } -->> { : MaxKey }
Inserted 300000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  27
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:11.709 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabae3b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:11.984 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabae3b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:11.992 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabae3b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:11.992 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755975b44ae98eaabae3b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755975b44ae98eaabdd1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755975b44ae98eaabae3b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:11.993 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275597ad0d9d7dc768ff11
m30001| Fri Feb 22 11:25:11.994 [conn4] splitChunk accepted at version 1|52||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:11.995 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:11-51275597ad0d9d7dc768ff12", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532311995), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755975b44ae98eaabae3b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755975b44ae98eaabae3b') }, max: { _id: ObjectId('512755975b44ae98eaabdd1b') }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755975b44ae98eaabdd1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:11.995 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:11.996 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 29 version: 1|54||51275580ce6119f732c457f2 based on: 1|52||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:11.996 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: ObjectId('512755975b44ae98eaabae3b') } max: { _id: MaxKey } on: { _id: ObjectId('512755975b44ae98eaabdd1b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:12.284 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabdd1b') } -->> { : MaxKey }
Inserted 312000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  28
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:12.547 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabdd1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:12.881 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabdd1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:12.894 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755975b44ae98eaabdd1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:12.894 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755975b44ae98eaabdd1b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755985b44ae98eaac0bfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755975b44ae98eaabdd1b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:12.895 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275598ad0d9d7dc768ff13
m30001| Fri Feb 22 11:25:12.896 [conn4] splitChunk accepted at version 1|54||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:12.897 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:12-51275598ad0d9d7dc768ff14", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532312897), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755975b44ae98eaabdd1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755975b44ae98eaabdd1b') }, max: { _id: ObjectId('512755985b44ae98eaac0bfb') }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755985b44ae98eaac0bfb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:12.897 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:12.898 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 30 version: 1|56||51275580ce6119f732c457f2 based on: 1|54||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:12.899 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|54||000000000000000000000000 min: { _id: ObjectId('512755975b44ae98eaabdd1b') } max: { _id: MaxKey } on: { _id: ObjectId('512755985b44ae98eaac0bfb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 320000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  29
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:13.247 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755985b44ae98eaac0bfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:13.592 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755985b44ae98eaac0bfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:13.935 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755985b44ae98eaac0bfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:13.947 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755985b44ae98eaac0bfb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:13.948 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755985b44ae98eaac0bfb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755995b44ae98eaac3adb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755985b44ae98eaac0bfb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:13.948 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 51275599ad0d9d7dc768ff15
m30001| Fri Feb 22 11:25:13.949 [conn4] splitChunk accepted at version 1|56||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:13.950 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:13-51275599ad0d9d7dc768ff16", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532313950), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755985b44ae98eaac0bfb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755985b44ae98eaac0bfb') }, max: { _id: ObjectId('512755995b44ae98eaac3adb') }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755995b44ae98eaac3adb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:13.951 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:13.952 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 31 version: 1|58||51275580ce6119f732c457f2 based on: 1|56||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:13.952 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|56||000000000000000000000000 min: { _id: ObjectId('512755985b44ae98eaac0bfb') } max: { _id: MaxKey } on: { _id: ObjectId('512755995b44ae98eaac3adb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 332000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  30
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:14.306 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755995b44ae98eaac3adb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:14.643 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755995b44ae98eaac3adb') } -->> { : MaxKey }
Inserted 340000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  30
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:14.913 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755995b44ae98eaac3adb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:14.921 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755995b44ae98eaac3adb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:14.921 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755995b44ae98eaac3adb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127559a5b44ae98eaac69bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755995b44ae98eaac3adb')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:14.922 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127559aad0d9d7dc768ff17
m30001| Fri Feb 22 11:25:14.923 [conn4] splitChunk accepted at version 1|58||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:14.924 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:14-5127559aad0d9d7dc768ff18", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532314924), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755995b44ae98eaac3adb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755995b44ae98eaac3adb') }, max: { _id: ObjectId('5127559a5b44ae98eaac69bb') }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127559a5b44ae98eaac69bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:14.924 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:14.925 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 32 version: 1|60||51275580ce6119f732c457f2 based on: 1|58||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:14.925 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.coll shard: shard0001:localhost:30001 lastmod: 1|58||000000000000000000000000 min: { _id: ObjectId('512755995b44ae98eaac3adb') } max: { _id: MaxKey } on: { _id: ObjectId('5127559a5b44ae98eaac69bb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:15.136 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559a5b44ae98eaac69bb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:15.362 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559a5b44ae98eaac69bb') } -->> { : MaxKey }
Inserted 352000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  31
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:15.598 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559a5b44ae98eaac69bb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:15.606 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127559a5b44ae98eaac69bb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:15.608 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127559a5b44ae98eaac69bb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127559b5b44ae98eaac989b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127559a5b44ae98eaac69bb')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:15.608 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127559bad0d9d7dc768ff19
 m30001| Fri Feb 22 11:25:15.609 [conn4] splitChunk accepted at version 1|60||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:15.610 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:15-5127559bad0d9d7dc768ff1a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532315610), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127559a5b44ae98eaac69bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127559a5b44ae98eaac69bb') }, max: { _id: ObjectId('5127559b5b44ae98eaac989b') }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127559b5b44ae98eaac989b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:15.610 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:15.611 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 33 version: 1|62||51275580ce6119f732c457f2 based on: 1|60||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:15.611 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|60||000000000000000000000000min: { _id: ObjectId('5127559a5b44ae98eaac69bb') }max: { _id: MaxKey } on: { _id: ObjectId('5127559b5b44ae98eaac989b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30001| Fri Feb 22 11:25:15.871 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559b5b44ae98eaac989b') } -->> { : MaxKey }
Inserted 360000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  32
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:16.135 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559b5b44ae98eaac989b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:16.393 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559b5b44ae98eaac989b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:16.401 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127559b5b44ae98eaac989b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:16.401 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127559b5b44ae98eaac989b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127559c5b44ae98eaacc77b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127559b5b44ae98eaac989b')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:16.402 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127559cad0d9d7dc768ff1b
 m30001| Fri Feb 22 11:25:16.403 [conn4] splitChunk accepted at version 1|62||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:16.404 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:16-5127559cad0d9d7dc768ff1c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532316403), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127559b5b44ae98eaac989b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127559b5b44ae98eaac989b') }, max: { _id: ObjectId('5127559c5b44ae98eaacc77b') }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127559c5b44ae98eaacc77b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:16.404 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:16.405 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 34 version: 1|64||51275580ce6119f732c457f2 based on: 1|62||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:16.405 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|62||000000000000000000000000min: { _id: ObjectId('5127559b5b44ae98eaac989b') }max: { _id: MaxKey } on: { _id: ObjectId('5127559c5b44ae98eaacc77b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30001| Fri Feb 22 11:25:16.662 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559c5b44ae98eaacc77b') } -->> { : MaxKey }
Inserted 372000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  33
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:17.052 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559c5b44ae98eaacc77b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:17.392 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559c5b44ae98eaacc77b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:17.404 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127559c5b44ae98eaacc77b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:17.404 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127559c5b44ae98eaacc77b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127559d5b44ae98eaacf65b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127559c5b44ae98eaacc77b')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:17.406 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127559dad0d9d7dc768ff1d
 m30001| Fri Feb 22 11:25:17.406 [conn4] splitChunk accepted at version 1|64||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:17.407 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:17-5127559dad0d9d7dc768ff1e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532317407), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127559c5b44ae98eaacc77b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127559c5b44ae98eaacc77b') }, max: { _id: ObjectId('5127559d5b44ae98eaacf65b') }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127559d5b44ae98eaacf65b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:17.408 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:17.409 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 35 version: 1|66||51275580ce6119f732c457f2 based on: 1|64||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:17.409 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|64||000000000000000000000000min: { _id: ObjectId('5127559c5b44ae98eaacc77b') }max: { _id: MaxKey } on: { _id: ObjectId('5127559d5b44ae98eaacf65b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 380000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  34
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:17.786 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559d5b44ae98eaacf65b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:18.156 [conn3] insert bulk_shard_insert.coll ninserted:4000 keyUpdates:0 locks(micros) w:103292 103ms
 m30001| Fri Feb 22 11:25:18.157 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559d5b44ae98eaacf65b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:18.507 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559d5b44ae98eaacf65b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:18.519 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127559d5b44ae98eaacf65b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:18.521 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127559d5b44ae98eaacf65b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127559e5b44ae98eaad253b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127559d5b44ae98eaacf65b')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:18.522 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127559ead0d9d7dc768ff1f
 m30001| Fri Feb 22 11:25:18.523 [conn4] splitChunk accepted at version 1|66||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:18.524 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:18-5127559ead0d9d7dc768ff20", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532318524), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127559d5b44ae98eaacf65b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127559d5b44ae98eaacf65b') }, max: { _id: ObjectId('5127559e5b44ae98eaad253b') }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127559e5b44ae98eaad253b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:18.524 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:18.525 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 36 version: 1|68||51275580ce6119f732c457f2 based on: 1|66||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:18.526 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|66||000000000000000000000000min: { _id: ObjectId('5127559d5b44ae98eaacf65b') }max: { _id: MaxKey } on: { _id: ObjectId('5127559e5b44ae98eaad253b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 392000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  35
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:18.807 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559e5b44ae98eaad253b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:19.070 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559e5b44ae98eaad253b') } -->> { : MaxKey }
Inserted 400000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  35
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:19.346 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559e5b44ae98eaad253b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:19.354 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127559e5b44ae98eaad253b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:19.355 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127559e5b44ae98eaad253b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('5127559f5b44ae98eaad541b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127559e5b44ae98eaad253b')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:19.356 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 5127559fad0d9d7dc768ff21
 m30001| Fri Feb 22 11:25:19.357 [conn4] splitChunk accepted at version 1|68||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:19.357 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:19-5127559fad0d9d7dc768ff22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532319357), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127559e5b44ae98eaad253b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127559e5b44ae98eaad253b') }, max: { _id: ObjectId('5127559f5b44ae98eaad541b') }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('5127559f5b44ae98eaad541b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:19.358 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:19.359 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 37 version: 1|70||51275580ce6119f732c457f2 based on: 1|68||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:19.359 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|68||000000000000000000000000min: { _id: ObjectId('5127559e5b44ae98eaad253b') }max: { _id: MaxKey } on: { _id: ObjectId('5127559f5b44ae98eaad541b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30001| Fri Feb 22 11:25:19.606 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559f5b44ae98eaad541b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:19.854 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559f5b44ae98eaad541b') } -->> { : MaxKey }
Inserted 412000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  36
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:20.168 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('5127559f5b44ae98eaad541b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:20.180 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('5127559f5b44ae98eaad541b') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:20.181 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('5127559f5b44ae98eaad541b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a05b44ae98eaad82fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('5127559f5b44ae98eaad541b')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:20.182 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a0ad0d9d7dc768ff23
 m30001| Fri Feb 22 11:25:20.183 [conn4] splitChunk accepted at version 1|70||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:20.183 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:20-512755a0ad0d9d7dc768ff24", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532320183), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('5127559f5b44ae98eaad541b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('5127559f5b44ae98eaad541b') }, max: { _id: ObjectId('512755a05b44ae98eaad82fb') }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a05b44ae98eaad82fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:20.184 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:20.185 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 38 version: 1|72||51275580ce6119f732c457f2 based on: 1|70||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:20.185 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|70||000000000000000000000000min: { _id: ObjectId('5127559f5b44ae98eaad541b') }max: { _id: MaxKey } on: { _id: ObjectId('512755a05b44ae98eaad82fb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30001| Fri Feb 22 11:25:20.469 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaad82fb') } -->> { : MaxKey }
Inserted 420000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  37
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:20.744 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaad82fb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:21.003 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaad82fb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:21.011 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaad82fb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:21.011 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a05b44ae98eaad82fb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a05b44ae98eaadb1db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a05b44ae98eaad82fb')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:21.012 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a1ad0d9d7dc768ff25
 m30001| Fri Feb 22 11:25:21.013 [conn4] splitChunk accepted at version 1|72||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:21.014 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:21-512755a1ad0d9d7dc768ff26", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532321014), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a05b44ae98eaad82fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a05b44ae98eaad82fb') }, max: { _id: ObjectId('512755a05b44ae98eaadb1db') }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a05b44ae98eaadb1db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:21.014 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:21.015 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 39 version: 1|74||51275580ce6119f732c457f2 based on: 1|72||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:21.016 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|72||000000000000000000000000min: { _id: ObjectId('512755a05b44ae98eaad82fb') }max: { _id: MaxKey } on: { _id: ObjectId('512755a05b44ae98eaadb1db') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30001| Fri Feb 22 11:25:21.312 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaadb1db') } -->> { : MaxKey }
Inserted 432000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  38
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:21.675 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaadb1db') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:22.044 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaadb1db') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:22.057 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a05b44ae98eaadb1db') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:22.057 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a05b44ae98eaadb1db') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a15b44ae98eaade0bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a05b44ae98eaadb1db')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:22.058 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a2ad0d9d7dc768ff27
 m30001| Fri Feb 22 11:25:22.059 [conn4] splitChunk accepted at version 1|74||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:22.060 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:22-512755a2ad0d9d7dc768ff28", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532322060), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a05b44ae98eaadb1db') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a05b44ae98eaadb1db') }, max: { _id: ObjectId('512755a15b44ae98eaade0bb') }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a15b44ae98eaade0bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:22.060 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:22.061 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 40 version: 1|76||51275580ce6119f732c457f2 based on: 1|74||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:22.062 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|74||000000000000000000000000min: { _id: ObjectId('512755a05b44ae98eaadb1db') }max: { _id: MaxKey } on: { _id: ObjectId('512755a15b44ae98eaade0bb') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 440000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0001  39
        too many chunks to print, use verbose if you want to force print
 m30001| Fri Feb 22 11:25:22.412 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a15b44ae98eaade0bb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:22.735 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a15b44ae98eaade0bb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:23.040 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a15b44ae98eaade0bb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:23.048 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a15b44ae98eaade0bb') } -->> { : MaxKey }
 m30001| Fri Feb 22 11:25:23.049 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a15b44ae98eaade0bb') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a25b44ae98eaae0f9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a15b44ae98eaade0bb')", configdb: "localhost:30000" }
 m30001| Fri Feb 22 11:25:23.050 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a3ad0d9d7dc768ff29
 m30001| Fri Feb 22 11:25:23.051 [conn4] splitChunk accepted at version 1|76||51275580ce6119f732c457f2
 m30001| Fri Feb 22 11:25:23.051 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:23-512755a3ad0d9d7dc768ff2a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532323051), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a15b44ae98eaade0bb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a15b44ae98eaade0bb') }, max: { _id: ObjectId('512755a25b44ae98eaae0f9b') }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a25b44ae98eaae0f9b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
 m30001| Fri Feb 22 11:25:23.052 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
 m30999| Fri Feb 22 11:25:23.053 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 41 version: 1|78||51275580ce6119f732c457f2 based on: 1|76||51275580ce6119f732c457f2
 m30999| Fri Feb 22 11:25:23.053 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|76||000000000000000000000000min: { _id: ObjectId('512755a15b44ae98eaade0bb') }max: { _id: MaxKey } on: { _id: ObjectId('512755a25b44ae98eaae0f9b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 452000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  40
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:23.314 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a25b44ae98eaae0f9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:23.565 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a25b44ae98eaae0f9b') } -->> { : MaxKey }
Inserted 460000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  40
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:23.839 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a25b44ae98eaae0f9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:23.853 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a25b44ae98eaae0f9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:23.854 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a25b44ae98eaae0f9b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a35b44ae98eaae3e7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a25b44ae98eaae0f9b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:23.855 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a3ad0d9d7dc768ff2b
m30001| Fri Feb 22 11:25:23.855 [conn4] splitChunk accepted at version 1|78||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:23.856 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:23-512755a3ad0d9d7dc768ff2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532323856), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a25b44ae98eaae0f9b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a25b44ae98eaae0f9b') }, max: { _id: ObjectId('512755a35b44ae98eaae3e7b') }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a35b44ae98eaae3e7b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:23.856 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:23.857 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 42 version: 1|80||51275580ce6119f732c457f2 based on: 1|78||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:23.857 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|78||000000000000000000000000min: { _id: ObjectId('512755a25b44ae98eaae0f9b') }max: { _id: MaxKey } on: { _id: ObjectId('512755a35b44ae98eaae3e7b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:24.122 [conn3] insert bulk_shard_insert.coll ninserted:4000 keyUpdates:0 locks(micros) w:107549 107ms
m30001| Fri Feb 22 11:25:24.123 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a35b44ae98eaae3e7b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:24.376 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a35b44ae98eaae3e7b') } -->> { : MaxKey }
Inserted 472000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  41
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:24.646 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a35b44ae98eaae3e7b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:24.654 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a35b44ae98eaae3e7b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:24.654 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a35b44ae98eaae3e7b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a45b44ae98eaae6d5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a35b44ae98eaae3e7b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:24.655 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a4ad0d9d7dc768ff2d
m30001| Fri Feb 22 11:25:24.656 [conn4] splitChunk accepted at version 1|80||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:24.657 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:24-512755a4ad0d9d7dc768ff2e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532324657), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a35b44ae98eaae3e7b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a35b44ae98eaae3e7b') }, max: { _id: ObjectId('512755a45b44ae98eaae6d5b') }, lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a45b44ae98eaae6d5b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:24.657 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:24.658 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 43 version: 1|82||51275580ce6119f732c457f2 based on: 1|80||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:24.658 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|80||000000000000000000000000min: { _id: ObjectId('512755a35b44ae98eaae3e7b') }max: { _id: MaxKey } on: { _id: ObjectId('512755a45b44ae98eaae6d5b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:24.918 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a45b44ae98eaae6d5b') } -->> { : MaxKey }
Inserted 480000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  42
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:25.231 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a45b44ae98eaae6d5b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:25.577 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a45b44ae98eaae6d5b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:25.585 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a45b44ae98eaae6d5b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:25.585 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a45b44ae98eaae6d5b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a55b44ae98eaae9c3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a45b44ae98eaae6d5b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:25.586 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a5ad0d9d7dc768ff2f
m30001| Fri Feb 22 11:25:25.587 [conn4] splitChunk accepted at version 1|82||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:25.588 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:25-512755a5ad0d9d7dc768ff30", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532325588), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a45b44ae98eaae6d5b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a45b44ae98eaae6d5b') }, max: { _id: ObjectId('512755a55b44ae98eaae9c3b') }, lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a55b44ae98eaae9c3b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:25.588 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:25.589 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 44 version: 1|84||51275580ce6119f732c457f2 based on: 1|82||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:25.589 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|82||000000000000000000000000min: { _id: ObjectId('512755a45b44ae98eaae6d5b') }max: { _id: MaxKey } on: { _id: ObjectId('512755a55b44ae98eaae9c3b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30001| Fri Feb 22 11:25:25.835 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a55b44ae98eaae9c3b') } -->> { : MaxKey }
Inserted 492000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  43
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:26.121 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a55b44ae98eaae9c3b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:26.520 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a55b44ae98eaae9c3b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:26.532 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a55b44ae98eaae9c3b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:26.533 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a55b44ae98eaae9c3b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a65b44ae98eaaecb1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a55b44ae98eaae9c3b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:26.534 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a6ad0d9d7dc768ff31
m30001| Fri Feb 22 11:25:26.535 [conn4] splitChunk accepted at version 1|84||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:26.535 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:26-512755a6ad0d9d7dc768ff32", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532326535), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a55b44ae98eaae9c3b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a55b44ae98eaae9c3b') }, max: { _id: ObjectId('512755a65b44ae98eaaecb1b') }, lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a65b44ae98eaaecb1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:26.536 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:26.537 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 45 version: 1|86||51275580ce6119f732c457f2 based on: 1|84||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:26.537 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|84||000000000000000000000000min: { _id: ObjectId('512755a55b44ae98eaae9c3b') }max: { _id: MaxKey } on: { _id: ObjectId('512755a65b44ae98eaaecb1b') } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
Inserted 500000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  44
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:26.894 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a65b44ae98eaaecb1b') } -->> { : MaxKey }
Turning on balancer after half documents inserted.
m30001| Fri Feb 22 11:25:27.271 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a65b44ae98eaaecb1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:27.529 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a65b44ae98eaaecb1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:27.542 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a65b44ae98eaaecb1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:27.543 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a65b44ae98eaaecb1b') }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: ObjectId('512755a75b44ae98eaaef9fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a65b44ae98eaaecb1b')", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:25:27.544 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a7ad0d9d7dc768ff33
m30001| Fri Feb 22 11:25:27.545 [conn4] splitChunk accepted at version 1|86||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:27.546 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:27-512755a7ad0d9d7dc768ff34", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532327546), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a65b44ae98eaaecb1b') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a65b44ae98eaaecb1b') }, max: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30001| Fri Feb 22 11:25:27.546 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30999| Fri Feb 22 11:25:27.547 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 46 version: 1|88||51275580ce6119f732c457f2 based on: 1|86||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:27.547 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|86||000000000000000000000000min: { _id: ObjectId('512755a65b44ae98eaaecb1b') }max: { _id: MaxKey } on: { _id: ObjectId('512755a75b44ae98eaaef9fb') } (splitThreshold 943718) (migrate suggested)
m30999| Fri Feb 22 11:25:27.548 [conn1] moving chunk (auto): ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|88||000000000000000000000000min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }max: { _id: MaxKey } to: shard0002:localhost:30002
m30999| Fri Feb 22 11:25:27.548 [conn1] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 1|88||000000000000000000000000min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Fri Feb 22 11:25:27.549 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a75b44ae98eaaef9fb')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Fri Feb 22 11:25:27.550 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755a7ad0d9d7dc768ff35
m30001| Fri Feb 22 11:25:27.550 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:27-512755a7ad0d9d7dc768ff36", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532327550), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:27.550 [conn4] moveChunk request accepted at version 1|88||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:27.550 [conn4] moveChunk number of documents: 1
m30002| Fri Feb 22 11:25:27.550 [initandlisten] connection accepted from 127.0.0.1:36223 #5 (5 connections now open)
m30002| Fri Feb 22 11:25:27.551 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:27.551 [initandlisten] connection accepted from 127.0.0.1:51527 #5 (5 connections now open)
m30002| Fri Feb 22 11:25:27.552 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/bulk_shard_insert.ns, filling with zeroes...
m30002| Fri Feb 22 11:25:27.552 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/bulk_shard_insert.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 11:25:27.552 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/bulk_shard_insert.0, filling with zeroes...
m30002| Fri Feb 22 11:25:27.553 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/bulk_shard_insert.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 11:25:27.553 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert2/bulk_shard_insert.1, filling with zeroes...
m30002| Fri Feb 22 11:25:27.553 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert2/bulk_shard_insert.1, size: 128MB, took 0 secs
m30002| Fri Feb 22 11:25:27.555 [migrateThread] build index bulk_shard_insert.coll { _id: 1 }
m30002| Fri Feb 22 11:25:27.556 [migrateThread] build index done. scanned 0 total records. 0 secs
m30002| Fri Feb 22 11:25:27.556 [migrateThread] info: creating collection bulk_shard_insert.coll on add index
m30002| Fri Feb 22 11:25:27.557 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:25:27.557 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:27.557 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30001| Fri Feb 22 11:25:27.561 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 105, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:27.561 [conn4] moveChunk setting version to: 2|0||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:27.561 [initandlisten] connection accepted from 127.0.0.1:57701 #6 (6 connections now open)
m30002| Fri Feb 22 11:25:27.561 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 11:25:27.568 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:27.568 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:27.568 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:27-512755a754d20db69cce81b1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532327568), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 11:25:27.568 [initandlisten] connection accepted from 127.0.0.1:58390 #10 (10 connections now open)
m30001| Fri Feb 22 11:25:27.572 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 105, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:27.572 [conn4] moveChunk updating self version to: 2|1||51275580ce6119f732c457f2 through { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') } for collection 'bulk_shard_insert.coll'
m30000| Fri Feb 22 11:25:27.572 [initandlisten] connection accepted from 127.0.0.1:60144 #11 (11 connections now open)
m30001| Fri Feb 22 11:25:27.573 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:27-512755a7ad0d9d7dc768ff37", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532327573), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:27.573 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:27.573 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:27.573 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:27.573 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:27.573 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:27.573 [cleanupOldData-512755a7ad0d9d7dc768ff38] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:27.573 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:27.573 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:27-512755a7ad0d9d7dc768ff39", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532327573), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 11:25:27.574 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 47 version: 2|1||51275580ce6119f732c457f2 based on: 1|88||51275580ce6119f732c457f2
Inserted 512000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  44
                                shard0002  1
                        too many chunks to print, use verbose if you want to force print

m30001| Fri Feb 22 11:25:27.593 [cleanupOldData-512755a7ad0d9d7dc768ff38] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30001| Fri Feb 22 11:25:27.593 [cleanupOldData-512755a7ad0d9d7dc768ff38] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30001| Fri Feb 22 11:25:27.593 [cleanupOldData-512755a7ad0d9d7dc768ff38] moveChunk deleted 1 documents for bulk_shard_insert.coll from { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:27.778 [conn3] no current chunk manager found for this shard, will initialize
Inserted 520000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0001  44
                                shard0002  1
                        too many chunks to print, use verbose if you want to force print

m30002| Fri Feb 22 11:25:28.398 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a75b44ae98eaaef9fb') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:28.408 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a75b44ae98eaaef9fb') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:28.409 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755a85b44ae98eaaf28db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a75b44ae98eaaef9fb')", configdb: "localhost:30000" }
m30000| Fri Feb 22 11:25:28.409 [initandlisten] connection accepted from 127.0.0.1:51202 #12 (12 connections now open)
m30002| Fri Feb 22 11:25:28.410 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576 (sleeping for 30000ms)
m30002| Fri Feb 22 11:25:28.411 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755a854d20db69cce81b2
m30002| Fri Feb 22 11:25:28.412 [conn4] splitChunk accepted at version 2|0||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:28.413 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:28-512755a854d20db69cce81b3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532328413), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: MaxKey }, lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }, max: { _id: ObjectId('512755a85b44ae98eaaf28db') }, lastmod: Timestamp 2000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, lastmod: Timestamp 2000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30002| Fri Feb 22 11:25:28.413 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked.
m30999| Fri Feb 22 11:25:28.414 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 48 version: 2|3||51275580ce6119f732c457f2 based on: 2|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:28.415 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 2|0||000000000000000000000000min: { _id: ObjectId('512755a75b44ae98eaaef9fb') }max: { _id: MaxKey } on: { _id: ObjectId('512755a85b44ae98eaaf28db') } (splitThreshold 943718) (migrate suggested)
m30999| Fri Feb 22 11:25:28.416 [conn1] moving chunk (auto): ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 2|3||000000000000000000000000min: { _id: ObjectId('512755a85b44ae98eaaf28db') }max: { _id: MaxKey } to: shard0003:localhost:30003
m30999| Fri Feb 22 11:25:28.416 [conn1] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 2|3||000000000000000000000000min: { _id: ObjectId('512755a85b44ae98eaaf28db') }max: { _id: MaxKey }) shard0002:localhost:30002 -> shard0003:localhost:30003
m30002| Fri Feb 22 11:25:28.416 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30002", to: "localhost:30003", fromShard: "shard0002", toShard: "shard0003", min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a85b44ae98eaaf28db')", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30002| Fri Feb 22 11:25:28.417 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755a854d20db69cce81b4
m30002| Fri Feb 22 11:25:28.417 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:28-512755a854d20db69cce81b5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532328417), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, from: "shard0002", to: "shard0003" } }
m30002| Fri Feb 22 11:25:28.418 [conn4] moveChunk request accepted at version 2|3||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:28.418 [conn4] moveChunk number of documents: 1
m30003| Fri Feb 22 11:25:28.418 [initandlisten] connection accepted from 127.0.0.1:57000 #5 (5 connections now open)
m30003| Fri Feb 22 11:25:28.418 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey } for collection bulk_shard_insert.coll from localhost:30002 (0 slaves detected)
m30002| Fri Feb 22 11:25:28.419 [initandlisten] connection accepted from 127.0.0.1:40820 #7 (7 connections now open)
m30003| Fri Feb 22 11:25:28.420 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/bulk_shard_insert.ns, filling with zeroes...
m30003| Fri Feb 22 11:25:28.420 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/bulk_shard_insert.ns, size: 16MB, took 0 secs
m30003| Fri Feb 22 11:25:28.420 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/bulk_shard_insert.0, filling with zeroes...
m30003| Fri Feb 22 11:25:28.420 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/bulk_shard_insert.0, size: 64MB, took 0 secs
m30003| Fri Feb 22 11:25:28.420 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/bulk_shard_insert.1, filling with zeroes...
m30003| Fri Feb 22 11:25:28.421 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/bulk_shard_insert.1, size: 128MB, took 0 secs
m30003| Fri Feb 22 11:25:28.424 [migrateThread] build index bulk_shard_insert.coll { _id: 1 }
m30003| Fri Feb 22 11:25:28.425 [migrateThread] build index done. scanned 0 total records. 0 secs
m30003| Fri Feb 22 11:25:28.425 [migrateThread] info: creating collection bulk_shard_insert.coll on add index
m30003| Fri Feb 22 11:25:28.425 [migrateThread] Waiting for replication to catch up before entering critical section
m30003| Fri Feb 22 11:25:28.426 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30003| Fri Feb 22 11:25:28.426 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:28.428 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30002", min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 105, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:25:28.429 [conn4] moveChunk setting version to: 3|0||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:28.429 [initandlisten] connection accepted from 127.0.0.1:45916 #6 (6 connections now open)
m30003| Fri Feb 22 11:25:28.429 [conn6] Waiting for commit to finish
m30003| Fri Feb 22 11:25:28.437 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30003| Fri Feb 22 11:25:28.437 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30003| Fri Feb 22 11:25:28.437 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:28-512755a85ed41a5c89325bb2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532328437), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 11:25:28.437 [initandlisten] connection accepted from 127.0.0.1:56606 #13 (13 connections now open)
m30002| Fri Feb 22 11:25:28.439 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30002", min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 105, catchup: 0, steady: 0 }, ok: 1.0 }
m30002| Fri Feb 22 11:25:28.439 [conn4] moveChunk updating self version to: 3|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755a75b44ae98eaaef9fb') } -> { _id: ObjectId('512755a85b44ae98eaaf28db') } for collection 'bulk_shard_insert.coll'
m30000| Fri Feb 22 11:25:28.439 [initandlisten] connection accepted from 127.0.0.1:62107 #14 (14 connections now open)
m30002| Fri Feb 22 11:25:28.440 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:28-512755a854d20db69cce81b6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532328440), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, from: "shard0002", to: "shard0003" } }
m30002| Fri Feb 22 11:25:28.440 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30002| Fri Feb 22 11:25:28.440 [conn4] MigrateFromStatus::done Global lock acquired
m30002| Fri Feb 22 11:25:28.440 [conn4] forking for cleanup of chunk data
m30002| Fri Feb 22 11:25:28.440 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30002| Fri Feb 22 11:25:28.440 [conn4] MigrateFromStatus::done Global lock acquired
m30002| Fri Feb 22 11:25:28.440 [cleanupOldData-512755a854d20db69cce81b7] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }, # cursors remaining: 0
m30002| Fri Feb 22 11:25:28.441 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked.
m30002| Fri Feb 22 11:25:28.441 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:28-512755a854d20db69cce81b8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532328441), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 11:25:28.442 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 49 version: 3|1||51275580ce6119f732c457f2 based on: 2|3||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:28.460 [cleanupOldData-512755a854d20db69cce81b7] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:28.460 [cleanupOldData-512755a854d20db69cce81b7] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30002| Fri Feb 22 11:25:28.461 [cleanupOldData-512755a854d20db69cce81b7] moveChunk deleted 1 documents for bulk_shard_insert.coll from { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: MaxKey }
m30003| Fri Feb 22 11:25:28.637 [conn3] no current chunk manager found for this shard, will initialize
Inserted 532000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0001  44
                shard0002  1
                shard0003  1
            too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:29.158 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a85b44ae98eaaf28db') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:29.171 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a85b44ae98eaaf28db') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:29.172 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755a95b44ae98eaaf57bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a85b44ae98eaaf28db')", configdb: "localhost:30000" }
m30000| Fri Feb 22 11:25:29.172 [initandlisten] connection accepted from 127.0.0.1:37407 #15 (15 connections now open)
m30003| Fri Feb 22 11:25:29.173 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586 (sleeping for 30000ms)
m30003| Fri Feb 22 11:25:29.174 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755a95ed41a5c89325bb3
m30003| Fri Feb 22 11:25:29.175 [conn4] splitChunk accepted at version 3|0||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:29.175 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:29-512755a95ed41a5c89325bb4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:52636", time: new Date(1361532329175), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: MaxKey }, lastmod: Timestamp 3000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a85b44ae98eaaf28db') }, max: { _id: ObjectId('512755a95b44ae98eaaf57bb') }, lastmod: Timestamp 3000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a95b44ae98eaaf57bb') }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:29.176 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:29.177 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 50 version: 3|3||51275580ce6119f732c457f2 based on: 3|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:29.177 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 3|0||000000000000000000000000min: { _id: ObjectId('512755a85b44ae98eaaf28db') }max: { _id: MaxKey } on: { _id: ObjectId('512755a95b44ae98eaaf57bb') } (splitThreshold 943718) (migrate suggested)
m30003| Fri Feb 22 11:25:29.433 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf57bb') } -->> { : MaxKey }
Inserted 540000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0001  44
                shard0002  1
                shard0003  2
            too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:29.701 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf57bb') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:30.044 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf57bb') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:30.056 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf57bb') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:30.056 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a95b44ae98eaaf57bb') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755a95b44ae98eaaf869b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a95b44ae98eaaf57bb')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:30.057 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755aa5ed41a5c89325bb5
m30003| Fri Feb 22 11:25:30.058 [conn4] splitChunk accepted at version 3|3||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:30.059 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:30-512755aa5ed41a5c89325bb6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:52636", time: new Date(1361532330059), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a95b44ae98eaaf57bb') }, max: { _id: MaxKey }, lastmod: Timestamp 3000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a95b44ae98eaaf57bb') }, max: { _id: ObjectId('512755a95b44ae98eaaf869b') }, lastmod: Timestamp 3000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755a95b44ae98eaaf869b') }, max: { _id: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:30.059 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:30.061 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 51 version: 3|5||51275580ce6119f732c457f2 based on: 3|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:30.061 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 3|3||000000000000000000000000min: { _id: ObjectId('512755a95b44ae98eaaf57bb') }max: { _id: MaxKey } on: { _id: ObjectId('512755a95b44ae98eaaf869b') } (splitThreshold 943718) (migrate suggested)
m30003| Fri Feb 22 11:25:30.355 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf869b') } -->> { : MaxKey }
Inserted 552000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0001  44
                shard0002  1
                shard0003  3
            too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:30.617 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf869b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:30.803 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755aace6119f732c457f3
m30000| Fri Feb 22 11:25:30.806 [conn3] build index config.tags { _id: 1 }
m30000| Fri Feb 22 11:25:30.808 [conn3] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 11:25:30.808 [conn3] info: creating collection config.tags on add index
m30000| Fri Feb 22 11:25:30.808 [conn3] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 11:25:30.810 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:25:30.810 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:30.811 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: MinKey }max: { _id: ObjectId('512755815b44ae98eaa729fc') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:30.811 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:30.811 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:30.812 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755aaad0d9d7dc768ff3a
m30001| Fri Feb 22 11:25:30.812 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:30-512755aaad0d9d7dc768ff3b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532330812), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:30.813 [conn4] moveChunk request accepted at version 2|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:30.813 [conn4] moveChunk number of documents: 0
m30000| Fri Feb 22 11:25:30.813 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:30.814 [initandlisten] connection accepted from 127.0.0.1:37533 #6 (6 connections now open)
m30000| Fri Feb 22 11:25:30.815 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/bulk_shard_insert.ns, filling with zeroes...
m30000| Fri Feb 22 11:25:30.815 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/bulk_shard_insert.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:25:30.815 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/bulk_shard_insert.0, filling with zeroes...
m30000| Fri Feb 22 11:25:30.815 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/bulk_shard_insert.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:25:30.816 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert0/bulk_shard_insert.1, filling with zeroes...
m30000| Fri Feb 22 11:25:30.816 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert0/bulk_shard_insert.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:25:30.819 [migrateThread] build index bulk_shard_insert.coll { _id: 1 }
m30000| Fri Feb 22 11:25:30.820 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:25:30.820 [migrateThread] info: creating collection bulk_shard_insert.coll on add index
m30000| Fri Feb 22 11:25:30.820 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:30.821 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30000| Fri Feb 22 11:25:30.821 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30001| Fri Feb 22 11:25:30.823 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:30.824 [conn4] moveChunk setting version to: 4|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:30.824 [initandlisten] connection accepted from 127.0.0.1:49100 #16 (16 connections now open)
m30000| Fri Feb 22 11:25:30.824 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:25:30.832 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30000| Fri Feb 22 11:25:30.832 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30000| Fri Feb 22 11:25:30.832 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:30-512755aad8a80eaf6b75946b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532330832), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 11:25:30.832 [initandlisten] connection accepted from 127.0.0.1:46248 #17 (17 connections now open)
m30001| Fri Feb 22 11:25:30.834 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:30.834 [conn4] moveChunk updating self version to: 4|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:30.835 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:30-512755aaad0d9d7dc768ff3c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532330835), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:30.835 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:30.835 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:30.835 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:30.835 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:30.835 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:30.835 [cleanupOldData-512755aaad0d9d7dc768ff3d] (start) waiting to cleanup bulk_shard_insert.coll from { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:30.835 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:30.836 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:30-512755aaad0d9d7dc768ff3e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532330836), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: MinKey }, max: { _id: ObjectId('512755815b44ae98eaa729fc') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 11:25:30.837 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 52 version: 4|1||51275580ce6119f732c457f2 based on: 3|5||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:30.837 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:30.855 [cleanupOldData-512755aaad0d9d7dc768ff3d] waiting to remove documents for bulk_shard_insert.coll from { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30001| Fri Feb 22 11:25:30.856 [cleanupOldData-512755aaad0d9d7dc768ff3d] moveChunk starting delete for: bulk_shard_insert.coll from { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30001| Fri Feb 22 11:25:30.856 [cleanupOldData-512755aaad0d9d7dc768ff3d] moveChunk deleted 0 documents for bulk_shard_insert.coll from { _id: MinKey } -> { _id: ObjectId('512755815b44ae98eaa729fc') }
m30003| Fri Feb 22 11:25:30.926 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf869b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:30.935 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755a95b44ae98eaaf869b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:30.935 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755a95b44ae98eaaf869b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755aa5b44ae98eaafb57b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755a95b44ae98eaaf869b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:30.936 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755aa5ed41a5c89325bb7
m30003| Fri Feb 22 11:25:30.937 [conn4] splitChunk accepted at version 3|5||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:30.938 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:30-512755aa5ed41a5c89325bb8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:52636", time: new Date(1361532330938), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755a95b44ae98eaaf869b') }, max: { _id: MaxKey }, lastmod: Timestamp 3000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755a95b44ae98eaaf869b') }, max: { _id: ObjectId('512755aa5b44ae98eaafb57b') }, lastmod: Timestamp 4000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755aa5b44ae98eaafb57b') }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:30.938 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:30.939 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 53 version: 4|3||51275580ce6119f732c457f2 based on: 4|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:30.940 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 3|5||000000000000000000000000min: { _id: ObjectId('512755a95b44ae98eaaf869b') }max: { _id: MaxKey } on: { _id: ObjectId('512755aa5b44ae98eaafb57b') } (splitThreshold 943718) (migrate suggested)
Inserted 560000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  1
                shard0001  43
                shard0002  1
                shard0003  4
            too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:31.182 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755aa5b44ae98eaafb57b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:31.396 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755aa5b44ae98eaafb57b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:31.614 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755aa5b44ae98eaafb57b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:31.622 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755aa5b44ae98eaafb57b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:31.623 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755aa5b44ae98eaafb57b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ab5b44ae98eaafe45b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755aa5b44ae98eaafb57b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:31.623 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755ab5ed41a5c89325bb9
m30003| Fri Feb 22 11:25:31.624 [conn4] splitChunk accepted at version 4|3||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:31.625 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:31-512755ab5ed41a5c89325bba", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:52636", time: new Date(1361532331625), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755aa5b44ae98eaafb57b') }, max: { _id: MaxKey }, lastmod: Timestamp 4000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755aa5b44ae98eaafb57b') }, max: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, lastmod: Timestamp 4000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, max: { _id: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:31.625 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:31.627 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 54 version: 4|5||51275580ce6119f732c457f2 based on: 4|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:31.627 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 4|3||000000000000000000000000min: { _id: ObjectId('512755aa5b44ae98eaafb57b') }max: { _id: MaxKey } on: { _id: ObjectId('512755ab5b44ae98eaafe45b') } (splitThreshold 943718) (migrate suggested)
Inserted 572000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  1
          shard0001  43
          shard0002  1
          shard0003  5
        too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:31.839 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755abce6119f732c457f4
m30003| Fri Feb 22 11:25:31.860 [initandlisten] connection accepted from 127.0.0.1:51569 #7 (7 connections now open)
m30003| Fri Feb 22 11:25:31.860 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ab5b44ae98eaafe45b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:31.860 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa729fc')", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:31.860 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: ObjectId('512755815b44ae98eaa729fc') }max: { _id: ObjectId('512755815b44ae98eaa7493b') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:31.860 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:31.861 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa729fc')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:31.861 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755abad0d9d7dc768ff3f
m30001| Fri Feb 22 11:25:31.861 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:31-512755abad0d9d7dc768ff40", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532331861), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:31.862 [conn4] moveChunk request accepted at version 4|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:31.874 [conn4] moveChunk number of documents: 7999
m30000| Fri Feb 22 11:25:31.874 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:31.884 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:31.894 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:31.905 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 131, clonedBytes: 13755, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:31.915 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 354, clonedBytes: 37170, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:31.931 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 712, clonedBytes: 74760, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:31.963 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1420, clonedBytes: 149100, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:32.027 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2678, clonedBytes: 281190, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:32.084 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ab5b44ae98eaafe45b') } -->> { : MaxKey }
Inserted 580000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  1
          shard0001  43
          shard0002  1
          shard0003  5
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:32.156 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5383, clonedBytes: 565215, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:32.279 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:32.279 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30000| Fri Feb 22 11:25:32.282 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30003| Fri Feb 22 11:25:32.388 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ab5b44ae98eaafe45b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:32.400 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ab5b44ae98eaafe45b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:32.400 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ac5b44ae98eab0133b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ab5b44ae98eaafe45b')", configdb: "localhost:30000" }
m30000| Fri Feb 22 11:25:32.401 [initandlisten] connection accepted from 127.0.0.1:60277 #18 (18 connections now open)
m30999| Fri Feb 22 11:25:32.402 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ac5b44ae98eab0133b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ab5b44ae98eaafe45b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755abad0d9d7dc768ff3f'), when: new Date(1361532331861), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755815b44ae98eaa729fc') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30001| Fri Feb 22 11:25:32.412 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 7999, clonedBytes: 839895, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:32.412 [conn4] moveChunk setting version to: 5|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:32.412 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:25:32.413 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30000| Fri Feb 22 11:25:32.414 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30000| Fri Feb 22 11:25:32.414 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:32-512755acd8a80eaf6b75946c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532332414), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 404, step4 of 5: 0, step5 of 5: 134 } }
m30001| Fri Feb 22 11:25:32.422 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 7999, clonedBytes: 839895, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:32.422 [conn4] moveChunk updating self version to: 5|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:32.423 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:32-512755acad0d9d7dc768ff41", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532332423), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:32.423 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:32.423 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:32.423 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:32.423 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:32.423 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:32.423 [cleanupOldData-512755acad0d9d7dc768ff42] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:32.423 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:32.423 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:32-512755acad0d9d7dc768ff43", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532332423), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 11, step4 of 6: 538, step5 of 6: 10, step6 of 6: 0 } }
m30001| Fri Feb 22 11:25:32.424 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755815b44ae98eaa729fc') }, max: { _id: ObjectId('512755815b44ae98eaa7493b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa729fc')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:11365 w:31 reslen:37 563ms
m30999| Fri Feb 22 11:25:32.425 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 55 version: 5|1||51275580ce6119f732c457f2 based on: 4|5||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:32.426 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:32.443 [cleanupOldData-512755acad0d9d7dc768ff42] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30001| Fri Feb 22 11:25:32.443 [cleanupOldData-512755acad0d9d7dc768ff42] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30003| Fri Feb 22 11:25:32.613 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ab5b44ae98eaafe45b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:32.625 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ab5b44ae98eaafe45b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:32.626 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ac5b44ae98eab022db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ab5b44ae98eaafe45b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:32.627 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755ac5ed41a5c89325bbb
m30003| Fri Feb 22 11:25:32.628 [conn7] splitChunk accepted at version 4|5||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:32.629 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:32-512755ac5ed41a5c89325bbc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532332629), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, max: { _id: MaxKey }, lastmod: Timestamp 4000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }, max: { _id: ObjectId('512755ac5b44ae98eab022db') }, lastmod: Timestamp 5000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755ac5b44ae98eab022db') }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:32.629 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:32.630 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 56 version: 5|3||51275580ce6119f732c457f2 based on: 5|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:32.631 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 4|5||000000000000000000000000min: { _id: ObjectId('512755ab5b44ae98eaafe45b') }max: { _id: MaxKey } on: { _id: ObjectId('512755ac5b44ae98eab022db') } (splitThreshold 943718) (migrate suggested)
m30001| Fri Feb 22 11:25:32.778 [cleanupOldData-512755acad0d9d7dc768ff42] moveChunk deleted 7999 documents for bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa729fc') } -> { _id: ObjectId('512755815b44ae98eaa7493b') }
m30003| Fri Feb 22 11:25:32.847 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ac5b44ae98eab022db') } -->> { : MaxKey }
Inserted 592000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  2
          shard0001  42
          shard0002  1
          shard0003  6
        too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:33.078 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ac5b44ae98eab022db') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:33.300 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ac5b44ae98eab022db') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:33.313 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ac5b44ae98eab022db') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:33.314 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ac5b44ae98eab022db') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ad5b44ae98eab051bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ac5b44ae98eab022db')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:33.314 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755ad5ed41a5c89325bbd
m30003| Fri Feb 22 11:25:33.316 [conn7] splitChunk accepted at version 5|3||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:33.317 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:33-512755ad5ed41a5c89325bbe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532333317), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755ac5b44ae98eab022db') }, max: { _id: MaxKey }, lastmod: Timestamp 5000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755ac5b44ae98eab022db') }, max: { _id: ObjectId('512755ad5b44ae98eab051bb') }, lastmod: Timestamp 5000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755ad5b44ae98eab051bb') }, max: { _id: MaxKey }, lastmod: Timestamp 5000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:33.317 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:33.319 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 57 version: 5|5||51275580ce6119f732c457f2 based on: 5|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:33.319 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 5|3||000000000000000000000000min: { _id: ObjectId('512755ac5b44ae98eab022db') }max: { _id: MaxKey } on: { _id: ObjectId('512755ad5b44ae98eab051bb') } (splitThreshold 943718) (migrate suggested)
Inserted 600000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  2
          shard0001  42
          shard0002  1
          shard0003  7
        too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:33.427 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755adce6119f732c457f5
m30999| Fri Feb 22 11:25:33.429 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa7493b')", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shard: "shard0001" } from: shard0001 to: shard0002 tag []
m30999| Fri Feb 22 11:25:33.429 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: ObjectId('512755815b44ae98eaa7493b') }max: { _id: ObjectId('512755825b44ae98eaa7781b') }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Fri Feb 22 11:25:33.429 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:33.430 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa7493b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:33.430 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755adad0d9d7dc768ff44
m30001| Fri Feb 22 11:25:33.430 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:33-512755adad0d9d7dc768ff45", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532333430), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:33.431 [conn4] moveChunk request accepted at version 5|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:33.459 [conn4] moveChunk number of documents: 12000
m30002| Fri Feb 22 11:25:33.459 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:33.469 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:33.480 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:33.490 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 153, clonedBytes: 16065, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:33.500 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 370, clonedBytes: 38850, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:33.516 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 668, clonedBytes: 70140, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:33.548 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1324, clonedBytes: 139020, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:33.567 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ad5b44ae98eab051bb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:33.613 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2496, clonedBytes: 262080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:33.741 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4889, clonedBytes: 513345, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:33.937 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ad5b44ae98eab051bb') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:33.997 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9932, clonedBytes: 1042860, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:25:34.104 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:25:34.104 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') }
m30002| Fri Feb 22 11:25:34.107 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') }
m30003| Fri Feb 22 11:25:34.314 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ad5b44ae98eab051bb') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:34.325 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ad5b44ae98eab051bb') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:34.326 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ad5b44ae98eab051bb') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ae5b44ae98eab0809b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ad5b44ae98eab051bb')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:34.327 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ad5b44ae98eab051bb') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ae5b44ae98eab0809b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ad5b44ae98eab051bb')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755adad0d9d7dc768ff44'), when: new Date(1361532333430), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755815b44ae98eaa7493b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 612000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  2
          shard0001  42
          shard0002  1
          shard0003  7
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:34.510 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:34.510 [conn4] moveChunk setting version to: 6|0||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:34.510 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 11:25:34.512 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') }
m30002| Fri Feb 22 11:25:34.512 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') }
m30002| Fri Feb 22 11:25:34.512 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:34-512755ae54d20db69cce81b9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532334512), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 643, step4 of 5: 0, step5 of 5: 408 } }
m30001| Fri Feb 22 11:25:34.520 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:34.520 [conn4] moveChunk updating self version to: 6|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:34.521 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:34-512755aead0d9d7dc768ff46", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532334521), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:34.521 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:34.521 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:34.521 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:34.521 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:34.521 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:34.521 [cleanupOldData-512755aead0d9d7dc768ff47] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:34.521 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:34.521 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:34-512755aead0d9d7dc768ff48", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532334521), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 27, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:25:34.521 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755815b44ae98eaa7493b') }, max: { _id: ObjectId('512755825b44ae98eaa7781b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755815b44ae98eaa7493b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:27683 w:39 reslen:37 1091ms
m30999| Fri Feb 22 11:25:34.523 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 58 version: 6|1||51275580ce6119f732c457f2 based on: 5|5||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:34.523 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:34.541 [cleanupOldData-512755aead0d9d7dc768ff47] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') } m30001| Fri Feb 22 11:25:34.541 [cleanupOldData-512755aead0d9d7dc768ff47] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') } m30003| Fri Feb 22 11:25:34.712 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ad5b44ae98eab051bb') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:34.724 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ad5b44ae98eab051bb') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:34.725 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ad5b44ae98eab051bb') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ae5b44ae98eab0903b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ad5b44ae98eab051bb')", configdb: "localhost:30000" } m30003| Fri Feb 22 11:25:34.726 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755ae5ed41a5c89325bbf m30003| Fri Feb 22 11:25:34.727 [conn7] splitChunk accepted at version 5|5||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:34.728 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:34-512755ae5ed41a5c89325bc0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532334728), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755ad5b44ae98eab051bb') }, max: { _id: MaxKey }, lastmod: Timestamp 5000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755ad5b44ae98eab051bb') }, max: { _id: ObjectId('512755ae5b44ae98eab0903b') }, lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:34.728 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:34.729 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 59 version: 6|3||51275580ce6119f732c457f2 based on: 6|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:34.730 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 5|5||000000000000000000000000min: { _id: ObjectId('512755ad5b44ae98eab051bb') }max: { _id: MaxKey } on: { _id: ObjectId('512755ae5b44ae98eab0903b') } (splitThreshold 943718) (migrate suggested) m30003| Fri Feb 22 11:25:35.068 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } Inserted 620000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  2
          shard0002  2
          shard0001  41
          shard0003  8
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:35.314 [cleanupOldData-512755aead0d9d7dc768ff47] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755815b44ae98eaa7493b') } -> { _id: ObjectId('512755825b44ae98eaa7781b') }
m30003| Fri Feb 22 11:25:35.434 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:35.526 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755afce6119f732c457f6
m30999| Fri Feb 22 11:25:35.528 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755825b44ae98eaa7781b')", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:35.528 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: ObjectId('512755825b44ae98eaa7781b') }max: { _id: ObjectId('512755835b44ae98eaa7a6fb') })
shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:25:35.528 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:25:35.528 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755825b44ae98eaa7781b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:35.529 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755afad0d9d7dc768ff49 m30001| Fri Feb 22 11:25:35.529 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:35-512755afad0d9d7dc768ff4a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532335529), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:35.530 [conn4] moveChunk request accepted at version 6|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:35.556 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:25:35.556 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:35.566 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 
1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:35.577 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:35.587 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 78, clonedBytes: 8190, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:35.597 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 294, clonedBytes: 30870, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:35.613 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 609, clonedBytes: 63945, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:35.646 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1304, clonedBytes: 136920, 
catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:35.710 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2649, clonedBytes: 278145, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30003| Fri Feb 22 11:25:35.783 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:35.796 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:35.796 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755af5b44ae98eab0bf1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:35.797 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755af5b44ae98eab0bf1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755afad0d9d7dc768ff49'), when: new Date(1361532335528), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755825b44ae98eaa7781b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri 
Feb 22 11:25:35.838 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5187, clonedBytes: 544635, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:36.094 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10434, clonedBytes: 1095570, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30003| Fri Feb 22 11:25:36.112 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:36.123 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:36.124 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0cebb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:36.125 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0cebb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: 
"bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755afad0d9d7dc768ff49'), when: new Date(1361532335528), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755825b44ae98eaa7781b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 632000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  2
          shard0002  2
          shard0001  41
          shard0003  8
        too many chunks to print, use verbose if you want to force print
m30000| Fri Feb 22 11:25:36.169 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:36.169 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') }
m30000| Fri Feb 22 11:25:36.171 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') }
m30003| Fri Feb 22 11:25:36.355 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:36.366 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey }
m30003| Fri Feb 22
11:25:36.367 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0de5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:36.368 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0de5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755afad0d9d7dc768ff49'), when: new Date(1361532335528), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755825b44ae98eaa7781b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30003| Fri Feb 22 11:25:36.588 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:36.599 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:36.600 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0edfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:36.600 [conn1] warning: splitChunk failed - cmd: { 
splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0edfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755afad0d9d7dc768ff49'), when: new Date(1361532335528), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755825b44ae98eaa7781b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 640000 documents.
m30001| Fri Feb 22 11:25:36.606 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:36.607 [conn4] moveChunk setting version to: 7|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:36.607 [conn16] Waiting for commit to finish
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  2
          shard0002  2
          shard0001  41
          shard0003  8
        too many chunks to print, use verbose if
you want to force print m30000| Fri Feb 22 11:25:36.616 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } m30000| Fri Feb 22 11:25:36.616 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } m30000| Fri Feb 22 11:25:36.616 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:36-512755b0d8a80eaf6b75946d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532336616), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 612, step4 of 5: 0, step5 of 5: 447 } } m30001| Fri Feb 22 11:25:36.617 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:36.617 [conn4] moveChunk updating self version to: 7|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:36.617 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:36-512755b0ad0d9d7dc768ff4b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532336617), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: 
ObjectId('512755835b44ae98eaa7a6fb') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:36.617 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:36.618 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:36.618 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:36.618 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:36.618 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:36.618 [cleanupOldData-512755b0ad0d9d7dc768ff4c] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') }, # cursors remaining: 0 m30001| Fri Feb 22 11:25:36.618 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:25:36.618 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:36-512755b0ad0d9d7dc768ff4d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532336618), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 26, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:36.618 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755825b44ae98eaa7781b') }, max: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755825b44ae98eaa7781b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 
locks(micros) W:35 r:25756 w:28 reslen:37 1090ms m30999| Fri Feb 22 11:25:36.619 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 60 version: 7|1||51275580ce6119f732c457f2 based on: 6|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:36.620 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. m30001| Fri Feb 22 11:25:36.638 [cleanupOldData-512755b0ad0d9d7dc768ff4c] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } m30001| Fri Feb 22 11:25:36.638 [cleanupOldData-512755b0ad0d9d7dc768ff4c] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } m30003| Fri Feb 22 11:25:36.826 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:36.838 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ae5b44ae98eab0903b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:36.838 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b05b44ae98eab0fd9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ae5b44ae98eab0903b')", configdb: "localhost:30000" } m30003| Fri Feb 22 11:25:36.839 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b05ed41a5c89325bc1 m30003| Fri Feb 22 11:25:36.840 [conn7] splitChunk accepted at version 6|3||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:36.841 [conn7] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:36-512755b05ed41a5c89325bc2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532336841), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755ae5b44ae98eab0903b') }, max: { _id: ObjectId('512755b05b44ae98eab0fd9b') }, lastmod: Timestamp 7000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b05b44ae98eab0fd9b') }, max: { _id: MaxKey }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:36.841 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:36.842 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 61 version: 7|3||51275580ce6119f732c457f2 based on: 7|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:36.843 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 6|3||000000000000000000000000min: { _id: ObjectId('512755ae5b44ae98eab0903b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b05b44ae98eab0fd9b') } (splitThreshold 943718) (migrate suggested) m30003| Fri Feb 22 11:25:37.058 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b05b44ae98eab0fd9b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:37.107 [cleanupOldData-512755b0ad0d9d7dc768ff4c] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755825b44ae98eaa7781b') } -> { _id: ObjectId('512755835b44ae98eaa7a6fb') } m30003| Fri Feb 22 11:25:37.278 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b05b44ae98eab0fd9b') } -->> { 
: MaxKey }
Inserted 652000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  3
          shard0002  2
          shard0001  40
          shard0003  9
        too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:37.525 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b05b44ae98eab0fd9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:37.538 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b05b44ae98eab0fd9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:37.538 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b05b44ae98eab0fd9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b15b44ae98eab12c7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b05b44ae98eab0fd9b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:37.539 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b15ed41a5c89325bc3
m30003| Fri Feb 22 11:25:37.540 [conn7] splitChunk accepted at version 7|3||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:37.541 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:37-512755b15ed41a5c89325bc4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr:
"127.0.0.1:51569", time: new Date(1361532337541), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b05b44ae98eab0fd9b') }, max: { _id: MaxKey }, lastmod: Timestamp 7000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755b05b44ae98eab0fd9b') }, max: { _id: ObjectId('512755b15b44ae98eab12c7b') }, lastmod: Timestamp 7000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, lastmod: Timestamp 7000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:37.541 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:37.543 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 62 version: 7|5||51275580ce6119f732c457f2 based on: 7|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:37.543 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 7|3||000000000000000000000000min: { _id: ObjectId('512755b05b44ae98eab0fd9b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b15b44ae98eab12c7b') } (splitThreshold 943718) (migrate suggested)
m30999| Fri Feb 22 11:25:37.622 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755b1ce6119f732c457f7
m30999| Fri Feb 22 11:25:37.624 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755835b44ae98eaa7a6fb')", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shard: "shard0001" } from: shard0001 to: shard0002 tag []
m30999| Fri Feb 22 11:25:37.624 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }max: { _id: ObjectId('512755845b44ae98eaa7d5db') }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Fri Feb 22 11:25:37.624 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:37.624 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755835b44ae98eaa7a6fb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:37.625 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755b1ad0d9d7dc768ff4e
m30001| Fri Feb 22 11:25:37.625 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:37-512755b1ad0d9d7dc768ff4f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532337625), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:37.626 [conn4] moveChunk request accepted at version 7|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:37.643 [conn4] moveChunk number of documents: 12000
m30002| Fri Feb 22 11:25:37.643 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:37.654 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:37.664 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 85, clonedBytes: 8925, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:37.674 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 299, clonedBytes: 31395, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:37.692 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 680, clonedBytes: 71400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:37.708 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1014, clonedBytes: 106470, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:37.740 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll",
from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1606, clonedBytes: 168630, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:37.757 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
Inserted 660000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  3
          shard0002  2
          shard0001  40
          shard0003  10
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:37.805 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2928, clonedBytes: 307440, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:37.933 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5595, clonedBytes: 587475, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:37.990 [conn7]
request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:38.189 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10470, clonedBytes: 1099350, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:38.217 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:38.229 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:38.231 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b25b44ae98eab15b5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b15b44ae98eab12c7b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:38.232 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b25b44ae98eab15b5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b15b44ae98eab12c7b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b1ad0d9d7dc768ff4e'), when: new Date(1361532337624), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755835b44ae98eaa7a6fb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30002| Fri Feb 22 11:25:38.264 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:25:38.264 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30002| Fri Feb 22 11:25:38.269 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30003| Fri Feb 22 11:25:38.459 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:38.473 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:38.473 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b25b44ae98eab16afb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b15b44ae98eab12c7b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:38.474 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b25b44ae98eab16afb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b15b44ae98eab12c7b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b1ad0d9d7dc768ff4e'), when: new Date(1361532337624), who:
"bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755835b44ae98eaa7a6fb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 672000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  3
          shard0002  2
          shard0001  40
          shard0003  10
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:38.702 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:38.702 [conn4] moveChunk setting version to: 8|0||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:38.702 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 11:25:38.705 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30002| Fri Feb 22 11:25:38.705 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30002| Fri Feb 22 11:25:38.705 [migrateThread] about to log metadata event:
{ _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:38-512755b254d20db69cce81ba", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532338705), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 619, step4 of 5: 0, step5 of 5: 441 } }
m30001| Fri Feb 22 11:25:38.712 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:38.712 [conn4] moveChunk updating self version to: 8|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:38.713 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:38-512755b2ad0d9d7dc768ff50", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532338713), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:38.713 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:38.713 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:38.713 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:38.713 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:38.713 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:38.713 [cleanupOldData-512755b2ad0d9d7dc768ff51] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:38.714 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:38.714 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:38-512755b2ad0d9d7dc768ff52", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532338714), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 17, step4 of 6: 1058, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:25:38.714 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755835b44ae98eaa7a6fb') }, max: { _id: ObjectId('512755845b44ae98eaa7d5db') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755835b44ae98eaa7a6fb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:16865 w:39 reslen:37 1089ms
m30999| Fri Feb 22 11:25:38.715 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 63 version: 8|1||51275580ce6119f732c457f2 based on: 7|5||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:38.716 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:38.733 [cleanupOldData-512755b2ad0d9d7dc768ff51] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30001| Fri Feb 22 11:25:38.733 [cleanupOldData-512755b2ad0d9d7dc768ff51] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30003| Fri Feb 22 11:25:38.770 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:38.782 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b15b44ae98eab12c7b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:38.782 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b25b44ae98eab17a9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b15b44ae98eab12c7b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:38.783 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b25ed41a5c89325bc5
m30003| Fri Feb 22 11:25:38.784 [conn7] splitChunk accepted at version 7|5||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:38.785 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:38-512755b25ed41a5c89325bc6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532338785), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: MaxKey }, lastmod: Timestamp 7000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755b15b44ae98eab12c7b') }, max: { _id: ObjectId('512755b25b44ae98eab17a9b') }, lastmod: Timestamp 8000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, lastmod: Timestamp 8000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:38.785 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:38.787 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 64 version: 8|3||51275580ce6119f732c457f2 based on: 8|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:38.787 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 7|5||000000000000000000000000min: { _id: ObjectId('512755b15b44ae98eab12c7b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b25b44ae98eab17a9b') } (splitThreshold 943718) (migrate suggested)
m30003| Fri Feb 22 11:25:39.119 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
Inserted 680000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  3
          shard0002  3
          shard0001  39
          shard0003  11
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:39.446 [cleanupOldData-512755b2ad0d9d7dc768ff51] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755835b44ae98eaa7a6fb') } -> { _id: ObjectId('512755845b44ae98eaa7d5db') }
m30003| Fri Feb 22 11:25:39.480 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:39.718 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755b3ce6119f732c457f8
m30003| Fri Feb 22 11:25:39.751 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:39.752 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755845b44ae98eaa7d5db')", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:39.752 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard:
shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { _id: ObjectId('512755845b44ae98eaa7d5db') }max: { _id: ObjectId('512755855b44ae98eaa804bb') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:39.752 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:39.752 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755845b44ae98eaa7d5db')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:39.753 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755b3ad0d9d7dc768ff53
m30001| Fri Feb 22 11:25:39.753 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:39-512755b3ad0d9d7dc768ff54", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532339753), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:39.754 [conn4] moveChunk request accepted at version 8|1||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:39.763 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:39.764 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b35b44ae98eab1a97b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:39.765 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b35b44ae98eab1a97b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b3ad0d9d7dc768ff53'), when: new Date(1361532339752), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755845b44ae98eaa7d5db') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30001| Fri Feb 22 11:25:39.771 [conn4] moveChunk number of documents: 12000
m30000| Fri Feb 22 11:25:39.771 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:39.781 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:39.791 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:39.802 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 143, clonedBytes: 15015, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:39.812 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 367, clonedBytes: 38535, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:39.828 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 720, clonedBytes: 75600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:39.860 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1371, clonedBytes: 143955, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:39.924 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2741, clonedBytes: 287805, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:39.983 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:39.995 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:39.996 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b35b44ae98eab1b91b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:39.996 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b35b44ae98eab1b91b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b3ad0d9d7dc768ff53'), when: new Date(1361532339752), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755845b44ae98eaa7d5db') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 692000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  3
          shard0002  3
          shard0001  39
          shard0003  11
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:40.053 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5286, clonedBytes: 555030, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:40.226 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:40.239 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:40.239 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1c8bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:40.240 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0
}, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1c8bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b3ad0d9d7dc768ff53'), when: new Date(1361532339752), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755845b44ae98eaa7d5db') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30001| Fri Feb 22 11:25:40.309 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10600, clonedBytes: 1113000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:40.381 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:40.381 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') }
m30000| Fri Feb 22 11:25:40.382 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') }
m30003| Fri Feb 22 11:25:40.453 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:40.465 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:40.465 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1d85b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:40.466 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1d85b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b3ad0d9d7dc768ff53'), when: new Date(1361532339752), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755845b44ae98eaa7d5db') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 700000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  3
                                shard0002  3
                                shard0001  39
                                shard0003  11
                        too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:40.705 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:40.718 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:40.718 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1e7fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:40.719 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1e7fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: 
ObjectId('512755b3ad0d9d7dc768ff53'), when: new Date(1361532339752), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755845b44ae98eaa7d5db') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30999| Fri Feb 22 11:25:40.720 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 65 version: 8|3||51275580ce6119f732c457f2 based on: 8|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:40.720 [conn1] warning: chunk manager reload forced for collection 'bulk_shard_insert.coll', config version is 8|3||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:40.821 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:40.821 [conn4] moveChunk setting version to: 9|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:25:40.821 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:25:40.828 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') } m30000| Fri Feb 22 11:25:40.828 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') } m30000| Fri Feb 22 11:25:40.828 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:40-512755b4d8a80eaf6b75946e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532340828), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: 
{ _id: ObjectId('512755855b44ae98eaa804bb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 609, step4 of 5: 0, step5 of 5: 446 } } m30001| Fri Feb 22 11:25:40.831 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:40.831 [conn4] moveChunk updating self version to: 9|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:40.832 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:40-512755b4ad0d9d7dc768ff55", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532340832), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:40.832 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:40.833 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:40.833 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:40.833 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:40.833 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:40.833 [cleanupOldData-512755b4ad0d9d7dc768ff56] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') }, # cursors remaining: 0 m30001| Fri Feb 22 
11:25:40.833 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:25:40.833 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:40-512755b4ad0d9d7dc768ff57", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532340833), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 16, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:40.833 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755845b44ae98eaa7d5db') }, max: { _id: ObjectId('512755855b44ae98eaa804bb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755845b44ae98eaa7d5db')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:18 r:16593 w:29 reslen:37 1081ms m30999| Fri Feb 22 11:25:40.835 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 66 version: 9|1||51275580ce6119f732c457f2 based on: 8|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:40.835 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:25:40.853 [cleanupOldData-512755b4ad0d9d7dc768ff56] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') } m30001| Fri Feb 22 11:25:40.853 [cleanupOldData-512755b4ad0d9d7dc768ff56] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') } m30003| Fri Feb 22 11:25:40.939 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:40.951 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b25b44ae98eab17a9b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:40.953 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b45b44ae98eab1f79b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b25b44ae98eab17a9b')", configdb: "localhost:30000" } m30003| Fri Feb 22 11:25:40.954 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b45ed41a5c89325bc7 m30003| Fri Feb 22 11:25:40.954 [conn7] splitChunk accepted at version 8|3||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:40.955 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:40-512755b45ed41a5c89325bc8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532340955), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: MaxKey }, lastmod: Timestamp 8000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755b25b44ae98eab17a9b') }, max: { _id: ObjectId('512755b45b44ae98eab1f79b') }, lastmod: Timestamp 9000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b45b44ae98eab1f79b') }, max: { _id: MaxKey }, lastmod: Timestamp 9000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:40.956 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:40.957 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 67 version: 9|3||51275580ce6119f732c457f2 based on: 9|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:40.958 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 8|3||000000000000000000000000min: { _id: ObjectId('512755b25b44ae98eab17a9b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b45b44ae98eab1f79b') } (splitThreshold 943718) (migrate suggested) m30003| Fri Feb 22 11:25:41.175 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b45b44ae98eab1f79b') } -->> { : MaxKey } Inserted 712000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  4
                                shard0002  3
                                shard0001  38
                                shard0003  12
                        too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:41.414 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b45b44ae98eab1f79b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:41.583 [cleanupOldData-512755b4ad0d9d7dc768ff56] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755845b44ae98eaa7d5db') } -> { _id: ObjectId('512755855b44ae98eaa804bb') }
m30003| Fri Feb 22 11:25:41.647 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b45b44ae98eab1f79b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:41.660 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b45b44ae98eab1f79b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:41.660 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b45b44ae98eab1f79b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b55b44ae98eab2267b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b45b44ae98eab1f79b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:41.661 [conn7] distributed lock 
'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b55ed41a5c89325bc9 m30003| Fri Feb 22 11:25:41.662 [conn7] splitChunk accepted at version 9|3||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:41.663 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:41-512755b55ed41a5c89325bca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532341663), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b45b44ae98eab1f79b') }, max: { _id: MaxKey }, lastmod: Timestamp 9000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755b45b44ae98eab1f79b') }, max: { _id: ObjectId('512755b55b44ae98eab2267b') }, lastmod: Timestamp 9000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, lastmod: Timestamp 9000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:41.663 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:41.665 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 68 version: 9|5||51275580ce6119f732c457f2 based on: 9|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:41.665 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 9|3||000000000000000000000000min: { _id: ObjectId('512755b45b44ae98eab1f79b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b55b44ae98eab2267b') } (splitThreshold 943718) (migrate suggested) Inserted 720000 documents. 
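Two figures recur throughout the log above: every fully migrated chunk reports counts of 12000 documents / 1260000 bytes, and every autosplit reports "splitThreshold 943718" while each moveChunk command carries maxChunkSizeBytes: 1048576. A quick sanity check of those numbers (a hypothetical snippet, not part of the test suite; the 90% relation between the threshold and the chunk size is inferred from the values themselves, not stated anywhere in the log):

```javascript
// Figures taken directly from the log records above.
const cloned = 12000;           // counts.cloned for a full chunk migration
const clonedBytes = 1260000;    // counts.clonedBytes for the same migration

// Average document size in bulk_shard_insert.coll.
const avgDocBytes = clonedBytes / cloned;             // 105 bytes per document

// maxChunkSizeBytes from the moveChunk commands; the autosplit threshold
// in the log matches 90% of it (inferred, not documented here).
const maxChunkSizeBytes = 1048576;
const inferredThreshold = Math.floor(0.9 * maxChunkSizeBytes);

console.log(avgDocBytes, inferredThreshold);          // 105 943718
```

At 105 bytes per document, a 12000-document chunk is 1,260,000 bytes, comfortably past the 1 MiB maxChunkSizeBytes, which is consistent with the constant split/migrate churn in this run.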
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  4
                                shard0002  3
                                shard0001  38
                                shard0003  13
                        too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:41.837 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755b5ce6119f732c457f9
m30999| Fri Feb 22 11:25:41.839 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755855b44ae98eaa804bb')", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shard: "shard0001" } from: shard0001 to: shard0002 tag []
m30999| Fri Feb 22 11:25:41.840 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { _id: ObjectId('512755855b44ae98eaa804bb') }max: { _id: ObjectId('512755865b44ae98eaa8339b') }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Fri Feb 22 11:25:41.840 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:41.840 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, 
max: { _id: ObjectId('512755865b44ae98eaa8339b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755855b44ae98eaa804bb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:41.841 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755b5ad0d9d7dc768ff58 m30001| Fri Feb 22 11:25:41.841 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:41-512755b5ad0d9d7dc768ff59", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532341841), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:25:41.842 [conn4] moveChunk request accepted at version 9|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:41.860 [conn4] moveChunk number of documents: 12000 m30002| Fri Feb 22 11:25:41.860 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:41.870 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:41.880 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: 
"clone", counts: { cloned: 58, clonedBytes: 6090, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:41.891 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 257, clonedBytes: 26985, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30003| Fri Feb 22 11:25:41.900 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:41.901 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 453, clonedBytes: 47565, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:41.917 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 763, clonedBytes: 80115, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:41.949 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1321, clonedBytes: 138705, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:42.014 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: 
ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2650, clonedBytes: 278250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30003| Fri Feb 22 11:25:42.127 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:42.142 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5170, clonedBytes: 542850, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30003| Fri Feb 22 11:25:42.362 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:42.374 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:42.375 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b65b44ae98eab2555b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b55b44ae98eab2267b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:42.376 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b65b44ae98eab2555b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b55b44ae98eab2267b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: 
"bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b5ad0d9d7dc768ff58'), when: new Date(1361532341840), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755855b44ae98eaa804bb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } Inserted 732000 documents. m30001| Fri Feb 22 11:25:42.398 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10092, clonedBytes: 1059660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0000 4 shard0002 3 shard0001 38 shard0003 13 too many chunks to print, use verbose if you want to force print m30002| Fri Feb 22 11:25:42.497 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:25:42.497 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } m30002| Fri Feb 22 11:25:42.498 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: 
ObjectId('512755865b44ae98eaa8339b') } m30003| Fri Feb 22 11:25:42.643 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:42.656 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:42.656 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b65b44ae98eab264fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b55b44ae98eab2267b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:42.657 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b65b44ae98eab264fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b55b44ae98eab2267b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b5ad0d9d7dc768ff58'), when: new Date(1361532341840), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755855b44ae98eaa804bb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:42.910 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 
11:25:42.910 [conn4] moveChunk setting version to: 10|0||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:42.910 [conn6] Waiting for commit to finish m30002| Fri Feb 22 11:25:42.912 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } m30002| Fri Feb 22 11:25:42.912 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } m30002| Fri Feb 22 11:25:42.912 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:42-512755b654d20db69cce81bb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532342912), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 635, step4 of 5: 0, step5 of 5: 415 } } m30001| Fri Feb 22 11:25:42.921 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:42.921 [conn4] moveChunk updating self version to: 10|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:42.921 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:42-512755b6ad0d9d7dc768ff5a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532342921), what: 
"moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:25:42.921 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:42.921 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:42.921 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:42.921 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:42.921 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:42.922 [cleanupOldData-512755b6ad0d9d7dc768ff5b] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:25:42.922 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:25:42.922 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:42-512755b6ad0d9d7dc768ff5c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532342922), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 18, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:42.922 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755855b44ae98eaa804bb') }, max: { _id: ObjectId('512755865b44ae98eaa8339b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755855b44ae98eaa804bb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:18135 w:35 reslen:37 1082ms m30999| Fri Feb 22 11:25:42.924 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 69 version: 10|1||51275580ce6119f732c457f2 based on: 9|5||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:42.924 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
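The moveChunk.from event just above times the donor side of the migration in six steps. Summing them shows that step4 (the wait while the recipient clones and catches up) accounts for nearly all of the 1082ms reported on the command line (a hypothetical snippet, not part of the test suite):

```javascript
// Donor-side step timings (ms) copied from the moveChunk.from details above.
const steps = { step1: 0, step2: 1, step3: 18, step4: 1050, step5: 11, step6: 0 };

// Total of the six steps; the remaining ~2ms of the reported 1082ms command
// time falls outside the instrumented steps.
const total = Object.values(steps).reduce((a, b) => a + b, 0);
console.log(total); // 1080
```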
m30001| Fri Feb 22 11:25:42.942 [cleanupOldData-512755b6ad0d9d7dc768ff5b] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } m30001| Fri Feb 22 11:25:42.942 [cleanupOldData-512755b6ad0d9d7dc768ff5b] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') } m30003| Fri Feb 22 11:25:42.947 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:42.959 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b55b44ae98eab2267b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:42.960 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b65b44ae98eab2749b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b55b44ae98eab2267b')", configdb: "localhost:30000" } m30003| Fri Feb 22 11:25:42.961 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b65ed41a5c89325bcb m30003| Fri Feb 22 11:25:42.962 [conn7] splitChunk accepted at version 9|5||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:42.963 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:42-512755b65ed41a5c89325bcc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532342963), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b55b44ae98eab2267b') }, max: { _id: MaxKey }, lastmod: Timestamp 9000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755b55b44ae98eab2267b') }, max: { _id: ObjectId('512755b65b44ae98eab2749b') }, lastmod: Timestamp 10000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b65b44ae98eab2749b') }, max: { _id: MaxKey }, lastmod: Timestamp 10000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:42.963 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:42.964 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 70 version: 10|3||51275580ce6119f732c457f2 based on: 10|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:42.965 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 9|5||000000000000000000000000min: { _id: ObjectId('512755b55b44ae98eab2267b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b65b44ae98eab2749b') } (splitThreshold 943718) (migrate suggested) Inserted 740000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  4
                shard0002  4
                shard0001  37
                shard0003  14
            too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:43.278 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b65b44ae98eab2749b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:43.571 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b65b44ae98eab2749b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:43.673 [cleanupOldData-512755b6ad0d9d7dc768ff5b] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755855b44ae98eaa804bb') } -> { _id: ObjectId('512755865b44ae98eaa8339b') }
m30003| Fri Feb 22 11:25:43.850 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b65b44ae98eab2749b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:43.862 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b65b44ae98eab2749b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:43.862 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b65b44ae98eab2749b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b75b44ae98eab2a37b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b65b44ae98eab2749b')",
configdb: "localhost:30000" } m30003| Fri Feb 22 11:25:43.863 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b75ed41a5c89325bcd m30003| Fri Feb 22 11:25:43.864 [conn7] splitChunk accepted at version 10|3||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:43.865 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:43-512755b75ed41a5c89325bce", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532343865), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b65b44ae98eab2749b') }, max: { _id: MaxKey }, lastmod: Timestamp 10000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755b65b44ae98eab2749b') }, max: { _id: ObjectId('512755b75b44ae98eab2a37b') }, lastmod: Timestamp 10000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, lastmod: Timestamp 10000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:43.865 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:43.867 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 71 version: 10|5||51275580ce6119f732c457f2 based on: 10|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:43.867 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 10|3||000000000000000000000000min: { _id: ObjectId('512755b65b44ae98eab2749b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b75b44ae98eab2a37b') } (splitThreshold 943718) (migrate suggested) Inserted 752000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  4
                shard0002  4
                shard0001  37
                shard0003  15
            too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:43.926 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755b7ce6119f732c457fa
m30999| Fri Feb 22 11:25:43.928 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8339b')", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:43.928 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 10|1||000000000000000000000000min: { _id: ObjectId('512755865b44ae98eaa8339b') }max: { _id: ObjectId('512755865b44ae98eaa8627b') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:43.928 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:43.928 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755865b44ae98eaa8339b')
}, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8339b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:43.929 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755b7ad0d9d7dc768ff5d m30001| Fri Feb 22 11:25:43.929 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:43-512755b7ad0d9d7dc768ff5e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532343929), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:43.930 [conn4] moveChunk request accepted at version 10|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:43.951 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:25:43.952 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:43.962 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:43.972 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: 
"clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:43.982 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 73, clonedBytes: 7665, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:43.993 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 292, clonedBytes: 30660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:44.009 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 644, clonedBytes: 67620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:44.041 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1335, clonedBytes: 140175, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:44.106 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2603, clonedBytes: 273315, 
catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:44.188 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:44.234 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5069, clonedBytes: 532245, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:44.442 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey }
Inserted 760000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  4
                shard0002  4
                shard0001  37
                shard0003  15
            too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:44.490 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10208, clonedBytes: 1071840, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:44.576 [migrateThread] Waiting for replication to catch up before entering
critical section m30000| Fri Feb 22 11:25:44.576 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } m30000| Fri Feb 22 11:25:44.580 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } m30003| Fri Feb 22 11:25:44.689 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:44.701 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:44.701 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b85b44ae98eab2d25b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b75b44ae98eab2a37b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:44.702 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b85b44ae98eab2d25b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b75b44ae98eab2a37b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b7ad0d9d7dc768ff5d'), when: new Date(1361532343928), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755865b44ae98eaa8339b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30003| 
Fri Feb 22 11:25:44.924 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:44.937 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:44.937 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b85b44ae98eab2e1fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b75b44ae98eab2a37b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:44.938 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b85b44ae98eab2e1fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b75b44ae98eab2a37b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755b7ad0d9d7dc768ff5d'), when: new Date(1361532343928), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755865b44ae98eaa8339b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:45.002 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:45.002 [conn4] moveChunk setting version 
to: 11|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:25:45.002 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:25:45.006 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } m30000| Fri Feb 22 11:25:45.006 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } m30000| Fri Feb 22 11:25:45.006 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:45-512755b9d8a80eaf6b75946f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532345006), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 623, step4 of 5: 0, step5 of 5: 430 } } m30001| Fri Feb 22 11:25:45.013 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:45.013 [conn4] moveChunk updating self version to: 11|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:45.014 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:45-512755b9ad0d9d7dc768ff5f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532345014), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { 
min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:45.014 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:45.014 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:45.014 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:45.014 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:45.014 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:45.014 [cleanupOldData-512755b9ad0d9d7dc768ff60] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:45.014 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:45.014 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:45-512755b9ad0d9d7dc768ff61", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532345014), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 21, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:45.014 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755865b44ae98eaa8339b') }, max: { _id: ObjectId('512755865b44ae98eaa8627b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8339b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:14 r:21101 w:30 reslen:37 1086ms m30999| Fri Feb 22 11:25:45.016 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 72 version: 11|1||51275580ce6119f732c457f2 based on: 10|5||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:45.016 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:25:45.034 [cleanupOldData-512755b9ad0d9d7dc768ff60] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } m30001| Fri Feb 22 11:25:45.034 [cleanupOldData-512755b9ad0d9d7dc768ff60] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') } m30003| Fri Feb 22 11:25:45.150 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:45.163 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b75b44ae98eab2a37b') } -->> { : MaxKey } m30003| Fri Feb 22 11:25:45.163 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b95b44ae98eab2f19b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b75b44ae98eab2a37b')", configdb: "localhost:30000" } m30003| Fri Feb 22 11:25:45.164 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b95ed41a5c89325bcf m30003| Fri Feb 22 11:25:45.165 [conn7] splitChunk accepted at version 10|5||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:45.166 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:45-512755b95ed41a5c89325bd0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532345166), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: MaxKey }, lastmod: Timestamp 10000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755b75b44ae98eab2a37b') }, max: { _id: ObjectId('512755b95b44ae98eab2f19b') }, lastmod: Timestamp 11000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b95b44ae98eab2f19b') }, max: { _id: MaxKey }, lastmod: Timestamp 11000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:45.166 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:45.167 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 73 version: 11|3||51275580ce6119f732c457f2 based on: 11|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:45.168 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 10|5||000000000000000000000000min: { _id: ObjectId('512755b75b44ae98eab2a37b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b95b44ae98eab2f19b') } (splitThreshold 943718) (migrate suggested) Inserted 772000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  5
                shard0002  4
                shard0001  36
                shard0003  16
            too many chunks to print, use verbose if you want to force print
m30003| Fri Feb 22 11:25:45.434 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab2f19b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:45.676 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab2f19b') } -->> { : MaxKey }
Inserted 780000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  5
                shard0002  4
                shard0001  36
                shard0003  16
            too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:45.707 [cleanupOldData-512755b9ad0d9d7dc768ff60] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8339b') } -> { _id: ObjectId('512755865b44ae98eaa8627b') }
m30003| Fri Feb 22 11:25:45.910 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab2f19b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:45.917 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab2f19b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:45.918 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b95b44ae98eab2f19b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755b95b44ae98eab3207b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b95b44ae98eab2f19b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:45.918 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755b95ed41a5c89325bd1
m30003| Fri Feb 22 11:25:45.919 [conn7] splitChunk accepted at version 11|3||51275580ce6119f732c457f2
m30003| Fri Feb 22
11:25:45.920 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:45-512755b95ed41a5c89325bd2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532345920), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b95b44ae98eab2f19b') }, max: { _id: MaxKey }, lastmod: Timestamp 11000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755b95b44ae98eab2f19b') }, max: { _id: ObjectId('512755b95b44ae98eab3207b') }, lastmod: Timestamp 11000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755b95b44ae98eab3207b') }, max: { _id: MaxKey }, lastmod: Timestamp 11000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30003| Fri Feb 22 11:25:45.920 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. m30999| Fri Feb 22 11:25:45.922 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 74 version: 11|5||51275580ce6119f732c457f2 based on: 11|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:45.922 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 11|3||000000000000000000000000min: { _id: ObjectId('512755b95b44ae98eab2f19b') }max: { _id: MaxKey } on: { _id: ObjectId('512755b95b44ae98eab3207b') } (splitThreshold 943718) (migrate suggested) m30999| Fri Feb 22 11:25:46.018 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755bace6119f732c457fb m30999| Fri Feb 22 11:25:46.020 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8627b')", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, 
max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shard: "shard0001" } from: shard0001 to: shard0002 tag []
m30999| Fri Feb 22 11:25:46.020 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: ObjectId('512755865b44ae98eaa8627b') }max: { _id: ObjectId('512755875b44ae98eaa8915b') }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Fri Feb 22 11:25:46.020 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:46.021 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8627b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:46.021 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755baad0d9d7dc768ff62
m30001| Fri Feb 22 11:25:46.021 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:46-512755baad0d9d7dc768ff63", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532346021), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:46.022 [conn4] moveChunk request accepted at version 11|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:46.045 [conn4] moveChunk number of documents: 12000
m30002| Fri Feb 22 11:25:46.045 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:46.055 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.065 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 112, clonedBytes: 11760, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.076 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 422, clonedBytes: 44310, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.086 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 739, clonedBytes: 77595, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.102 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1250, clonedBytes: 131250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.134 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2262, clonedBytes: 237510, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:46.149 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab3207b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:46.198 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4120, clonedBytes: 432600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.332 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 7991, clonedBytes: 839055, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:46.372 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab3207b') } -->> { : MaxKey }
Inserted 792000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  5
                                shard0002  4
                                shard0001  36
                                shard0003  17
                        too many chunks to print, use verbose if you want to force print
m30002| Fri Feb 22 11:25:46.471 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:25:46.471 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30002| Fri Feb 22 11:25:46.475 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30001| Fri Feb 22 11:25:46.588 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:46.588 [conn4] moveChunk setting version to: 12|0||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:46.589 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 11:25:46.597 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30002| Fri Feb 22 11:25:46.597 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30002| Fri Feb 22 11:25:46.597 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:46-512755ba54d20db69cce81bc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532346597), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 425, step4 of 5: 0, step5 of 5: 125 } }
m30001| Fri Feb 22 11:25:46.599 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:46.599 [conn4] moveChunk updating self version to: 12|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:46.599 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:46-512755baad0d9d7dc768ff64", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532346599), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:25:46.599 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:46.599 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:46.600 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:46.600 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:46.600 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:46.600 [cleanupOldData-512755baad0d9d7dc768ff65] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:46.600 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:46.600 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:46-512755baad0d9d7dc768ff66", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532346600), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 22, step4 of 6: 543, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:25:46.600 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755865b44ae98eaa8627b') }, max: { _id: ObjectId('512755875b44ae98eaa8915b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755865b44ae98eaa8627b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:23 r:29835 w:36 reslen:37 579ms
m30999| Fri Feb 22 11:25:46.602 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 75 version: 12|1||51275580ce6119f732c457f2 based on: 11|5||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:46.602 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:46.620 [cleanupOldData-512755baad0d9d7dc768ff65] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30001| Fri Feb 22 11:25:46.620 [cleanupOldData-512755baad0d9d7dc768ff65] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30003| Fri Feb 22 11:25:46.620 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab3207b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:46.633 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755b95b44ae98eab3207b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:46.633 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755b95b44ae98eab3207b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755ba5b44ae98eab34f5b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755b95b44ae98eab3207b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:46.634 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755ba5ed41a5c89325bd3
m30003| Fri Feb 22 11:25:46.635 [conn7] splitChunk accepted at version 11|5||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:46.636 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:46-512755ba5ed41a5c89325bd4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532346636), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755b95b44ae98eab3207b') }, max: { _id: MaxKey }, lastmod: Timestamp 11000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755b95b44ae98eab3207b') }, max: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, lastmod: Timestamp 12000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, lastmod: Timestamp 12000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:46.636 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:46.638 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 76 version: 12|3||51275580ce6119f732c457f2 based on: 12|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:46.638 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 11|5||000000000000000000000000min: { _id: ObjectId('512755b95b44ae98eab3207b') }max: { _id: MaxKey } on: { _id: ObjectId('512755ba5b44ae98eab34f5b') } (splitThreshold 943718) (migrate suggested)
m30003| Fri Feb 22 11:25:46.983 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
Inserted 800000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  5
                                shard0002  5
                                shard0001  35
                                shard0003  18
                        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:47.348 [cleanupOldData-512755baad0d9d7dc768ff65] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755865b44ae98eaa8627b') } -> { _id: ObjectId('512755875b44ae98eaa8915b') }
m30003| Fri Feb 22 11:25:47.355 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:47.605 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755bbce6119f732c457fc
m30999| Fri Feb 22 11:25:47.606 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755875b44ae98eaa8915b')", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:47.606 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 12|1||000000000000000000000000min: { _id: ObjectId('512755875b44ae98eaa8915b') }max: { _id: ObjectId('512755885b44ae98eaa8c03b') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:47.607 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:47.607 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755875b44ae98eaa8915b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:47.608 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755bbad0d9d7dc768ff67
m30001| Fri Feb 22 11:25:47.608 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:47-512755bbad0d9d7dc768ff68", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532347608), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:47.609 [conn4] moveChunk request accepted at version 12|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:47.627 [conn4] moveChunk number of documents: 12000
m30000| Fri Feb 22 11:25:47.627 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:47.637 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:47.648 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:47.658 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 107, clonedBytes: 11235, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:47.668 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 432, clonedBytes: 45360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:47.684 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 791, clonedBytes: 83055, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:47.717 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1505, clonedBytes: 158025, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:47.725 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:47.737 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:47.739 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bb5b44ae98eab37e3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:47.740 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bb5b44ae98eab37e3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bbad0d9d7dc768ff67'), when: new Date(1361532347607), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755875b44ae98eaa8915b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30001| Fri Feb 22 11:25:47.781 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2905, clonedBytes: 305025, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:47.909 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5900, clonedBytes: 619500, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30003| Fri Feb 22 11:25:47.999 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.012 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.013 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bb5b44ae98eab38ddb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:48.015 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bb5b44ae98eab38ddb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bbad0d9d7dc768ff67'), when: new Date(1361532347607), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755875b44ae98eaa8915b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 812000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  5
                                shard0002  5
                                shard0001  35
                                shard0003  18
                        too many chunks to print, use verbose if you want to force print
m30000| Fri Feb 22 11:25:48.165 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:48.165 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30001| Fri Feb 22 11:25:48.165 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:48.169 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30003| Fri Feb 22 11:25:48.308 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.320 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.320 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bc5b44ae98eab39d7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:48.321 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bc5b44ae98eab39d7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bbad0d9d7dc768ff67'), when: new Date(1361532347607), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755875b44ae98eaa8915b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30003| Fri Feb 22 11:25:48.522 [FileAllocator] allocating new datafile /data/db/bulk_shard_insert3/bulk_shard_insert.2, filling with zeroes...
m30003| Fri Feb 22 11:25:48.522 [FileAllocator] done allocating datafile /data/db/bulk_shard_insert3/bulk_shard_insert.2, size: 256MB, took 0 secs
m30003| Fri Feb 22 11:25:48.532 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.544 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.545 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bc5b44ae98eab3ad1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:48.545 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bc5b44ae98eab3ad1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bbad0d9d7dc768ff67'), when: new Date(1361532347607), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755875b44ae98eaa8915b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 820000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  5
                                shard0002  5
                                shard0001  35
                                shard0003  18
                        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:48.678 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:48.678 [conn4] moveChunk setting version to: 13|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:48.678 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:25:48.681 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30000| Fri Feb 22 11:25:48.681 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30000| Fri Feb 22 11:25:48.681 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bcd8a80eaf6b759470", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532348681), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 536, step4 of 5: 0, step5 of 5: 516 } }
m30001| Fri Feb 22 11:25:48.688 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:48.688 [conn4] moveChunk updating self version to: 13|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:48.689 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bcad0d9d7dc768ff69", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532348689), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:48.689 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:48.689 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:48.689 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:48.689 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:48.689 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:48.689 [cleanupOldData-512755bcad0d9d7dc768ff6a] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:48.689 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:48.689 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bcad0d9d7dc768ff6b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532348689), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 18, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:25:48.689 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755875b44ae98eaa8915b') }, max: { _id: ObjectId('512755885b44ae98eaa8c03b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755875b44ae98eaa8915b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:19 r:18309 w:27 reslen:37 1082ms
m30999| Fri Feb 22 11:25:48.691 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 77 version: 13|1||51275580ce6119f732c457f2 based on: 12|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:48.692 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:48.709 [cleanupOldData-512755bcad0d9d7dc768ff6a] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30001| Fri Feb 22 11:25:48.709 [cleanupOldData-512755bcad0d9d7dc768ff6a] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30003| Fri Feb 22 11:25:48.776 [conn7] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.789 [conn7] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755ba5b44ae98eab34f5b') } -->> { : MaxKey }
m30003| Fri Feb 22 11:25:48.790 [conn7] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, from: "shard0003", splitKeys: [ { _id: ObjectId('512755bc5b44ae98eab3bcbb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755ba5b44ae98eab34f5b')", configdb: "localhost:30000" }
m30003| Fri Feb 22 11:25:48.791 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755bc5ed41a5c89325bd5
m30003| Fri Feb 22 11:25:48.792 [conn7] splitChunk accepted at version 12|3||51275580ce6119f732c457f2
m30003| Fri Feb 22 11:25:48.792 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bc5ed41a5c89325bd6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532348792), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: MaxKey }, lastmod: Timestamp 12000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }, max: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, lastmod: Timestamp 13000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, lastmod: Timestamp 13000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30003| Fri Feb 22 11:25:48.793 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked.
m30999| Fri Feb 22 11:25:48.795 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 78 version: 13|3||51275580ce6119f732c457f2 based on: 13|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:48.795 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 12|3||000000000000000000000000min: { _id: ObjectId('512755ba5b44ae98eab34f5b') }max: { _id: MaxKey } on: { _id: ObjectId('512755bc5b44ae98eab3bcbb') } (splitThreshold 943718) (migrate suggested)
m30999| Fri Feb 22 11:25:48.797 [conn1] moving chunk (auto): ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 13|3||000000000000000000000000min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }max: { _id: MaxKey } to: shard0002:localhost:30002
m30999| Fri Feb 22 11:25:48.797 [conn1] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0003:localhost:30003lastmod: 13|3||000000000000000000000000min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }max: { _id: MaxKey }) shard0003:localhost:30003 -> shard0002:localhost:30002
m30003| Fri Feb 22 11:25:48.797 [conn7] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30003", to: "localhost:30002", fromShard: "shard0003", toShard: "shard0002", min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755bc5b44ae98eab3bcbb')", configdb:
"localhost:30000", secondaryThrottle: false, waitForDelete: false } m30003| Fri Feb 22 11:25:48.798 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' acquired, ts : 512755bc5ed41a5c89325bd7 m30003| Fri Feb 22 11:25:48.798 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bc5ed41a5c89325bd8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532348798), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, from: "shard0003", to: "shard0002" } } m30003| Fri Feb 22 11:25:48.798 [conn7] moveChunk request accepted at version 13|3||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:48.798 [conn7] moveChunk number of documents: 1 m30002| Fri Feb 22 11:25:48.799 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } for collection bulk_shard_insert.coll from localhost:30003 (0 slaves detected) m30002| Fri Feb 22 11:25:48.800 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:25:48.800 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30002| Fri Feb 22 11:25:48.800 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30003| Fri Feb 22 11:25:48.809 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30003", min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 1, clonedBytes: 105, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30003| Fri Feb 22 11:25:48.809 [conn7] moveChunk setting 
version to: 14|0||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:48.809 [initandlisten] connection accepted from 127.0.0.1:49216 #8 (8 connections now open) m30002| Fri Feb 22 11:25:48.809 [conn8] Waiting for commit to finish m30002| Fri Feb 22 11:25:48.811 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30002| Fri Feb 22 11:25:48.811 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30002| Fri Feb 22 11:25:48.811 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bc54d20db69cce81bd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532348811), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30003| Fri Feb 22 11:25:48.820 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30003", min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 1, clonedBytes: 105, catchup: 0, steady: 0 }, ok: 1.0 } m30003| Fri Feb 22 11:25:48.820 [conn7] moveChunk updating self version to: 14|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755a85b44ae98eaaf28db') } -> { _id: ObjectId('512755a95b44ae98eaaf57bb') } for collection 'bulk_shard_insert.coll' m30000| Fri Feb 22 11:25:48.820 [initandlisten] connection accepted from 127.0.0.1:60089 #19 (19 connections now open) m30003| Fri Feb 22 11:25:48.820 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bc5ed41a5c89325bd9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: 
"127.0.0.1:51569", time: new Date(1361532348820), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, from: "shard0003", to: "shard0002" } } m30003| Fri Feb 22 11:25:48.821 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30003| Fri Feb 22 11:25:48.821 [conn7] MigrateFromStatus::done Global lock acquired m30003| Fri Feb 22 11:25:48.821 [conn7] forking for cleanup of chunk data m30003| Fri Feb 22 11:25:48.821 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30003| Fri Feb 22 11:25:48.821 [conn7] MigrateFromStatus::done Global lock acquired m30003| Fri Feb 22 11:25:48.821 [cleanupOldData-512755bc5ed41a5c89325bda] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey }, # cursors remaining: 0 m30003| Fri Feb 22 11:25:48.821 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30003:1361532329:30586' unlocked. 
m30003| Fri Feb 22 11:25:48.821 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:48-512755bc5ed41a5c89325bdb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:51569", time: new Date(1361532348821), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:25:48.823 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 79 version: 14|1||51275580ce6119f732c457f2 based on: 13|3||51275580ce6119f732c457f2 m30003| Fri Feb 22 11:25:48.841 [cleanupOldData-512755bc5ed41a5c89325bda] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30003| Fri Feb 22 11:25:48.841 [cleanupOldData-512755bc5ed41a5c89325bda] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30003| Fri Feb 22 11:25:48.841 [cleanupOldData-512755bc5ed41a5c89325bda] moveChunk deleted 1 documents for bulk_shard_insert.coll from { _id: ObjectId('512755bc5b44ae98eab3bcbb') } -> { _id: MaxKey } m30002| Fri Feb 22 11:25:49.088 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bc5b44ae98eab3bcbb') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:49.342 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bc5b44ae98eab3bcbb') } -->> { : MaxKey } Inserted 832000 documents. 
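The migration counters above are internally consistent: every full chunk reports `{ cloned: 12000, clonedBytes: 1260000 }` and the one-document MaxKey chunk reports `{ cloned: 1, clonedBytes: 105 }`, i.e. a constant per-document size for this bulk-insert workload:

```python
# counts field from a "moveChunk data transfer progress" line above
full_chunk = {"cloned": 12000, "clonedBytes": 1260000}

# average BSON document size in this workload
avg_doc_bytes = full_chunk["clonedBytes"] // full_chunk["cloned"]
print(avg_doc_bytes)  # 105, matching the single-document chunk's clonedBytes
```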
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "version" : 3,
    "minCompatibleVersion" : 3,
    "currentVersion" : 4,
    "clusterId" : ObjectId("51275580ce6119f732c457ee")
  }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  6
          shard0002  6
          shard0001  34
          shard0003  18
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:49.416 [cleanupOldData-512755bcad0d9d7dc768ff6a] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755875b44ae98eaa8915b') } -> { _id: ObjectId('512755885b44ae98eaa8c03b') }
m30002| Fri Feb 22 11:25:49.602 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bc5b44ae98eab3bcbb') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:49.610 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755bc5b44ae98eab3bcbb') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:49.610 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755bd5b44ae98eab3eb9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755bc5b44ae98eab3bcbb')", configdb: "localhost:30000" }
m30002| Fri Feb 22 11:25:49.611 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755bd54d20db69cce81be
m30002| Fri Feb 22 11:25:49.612 [conn4] splitChunk accepted at version 14|0||51275580ce6119f732c457f2
m30002| Fri Feb 22
11:25:49.613 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:49-512755bd54d20db69cce81bf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532349613), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: MaxKey }, lastmod: Timestamp 14000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }, max: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }, lastmod: Timestamp 14000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }, max: { _id: MaxKey }, lastmod: Timestamp 14000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30002| Fri Feb 22 11:25:49.613 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked. m30999| Fri Feb 22 11:25:49.615 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 0ms sequenceNumber: 80 version: 14|3||51275580ce6119f732c457f2 based on: 14|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:49.615 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 14|0||000000000000000000000000min: { _id: ObjectId('512755bc5b44ae98eab3bcbb') }max: { _id: MaxKey } on: { _id: ObjectId('512755bd5b44ae98eab3eb9b') } (splitThreshold 943718) (migrate suggested) m30999| Fri Feb 22 11:25:49.694 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755bdce6119f732c457fd m30999| Fri Feb 22 11:25:49.695 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755885b44ae98eaa8c03b')", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, 
max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:25:49.696 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { _id: ObjectId('512755885b44ae98eaa8c03b') }max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:25:49.696 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:25:49.696 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755885b44ae98eaa8c03b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:49.697 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755bdad0d9d7dc768ff6c m30001| Fri Feb 22 11:25:49.697 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:49-512755bdad0d9d7dc768ff6d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532349697), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:49.698 [conn4] moveChunk request accepted at version 13|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:49.714 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:25:49.714 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755885b44ae98eaa8c03b') } 
-> { _id: ObjectId('512755895b44ae98eaa8ef1b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:49.725 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:49.735 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:49.745 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 85, clonedBytes: 8925, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:49.755 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 297, clonedBytes: 31185, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:49.771 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 615, 
clonedBytes: 64575, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:49.804 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1302, clonedBytes: 136710, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:25:49.866 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bd5b44ae98eab3eb9b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:49.868 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2489, clonedBytes: 261345, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 Inserted 840000 documents. 
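The chunk counts in the sharding-status blocks above are heavily skewed toward shard0001, the primary shard absorbing the inserts, which is why the Balancer keeps scheduling shard0001 → shard0000 moves. A sketch of the comparison it is effectively making (the exact migration threshold MongoDB applies is not shown in this log):

```python
# chunk counts per shard from the sharding-status output above
chunks = {"shard0000": 6, "shard0001": 34, "shard0002": 6, "shard0003": 18}

donor = max(chunks, key=chunks.get)      # most-loaded shard
receiver = min(chunks, key=chunks.get)   # least-loaded shard
imbalance = chunks[donor] - chunks[receiver]
print(donor, "->", receiver, "imbalance:", imbalance)
```

With 34 of the 64 chunks on one shard, the imbalance of 28 far exceeds any plausible threshold, so migrations continue on every Balancer round, exactly as the log shows.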
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "version" : 3,
    "minCompatibleVersion" : 3,
    "currentVersion" : 4,
    "clusterId" : ObjectId("51275580ce6119f732c457ee")
  }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  6
          shard0002  7
          shard0001  34
          shard0003  18
        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:49.996 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4917, clonedBytes: 516285, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:25:50.132 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bd5b44ae98eab3eb9b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:50.252 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9992, clonedBytes: 1049160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:50.357 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:50.357 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') }
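The repeated `moveChunk data transfer progress` lines let you estimate clone throughput for a 12000-document chunk; a sketch using (timestamp, cloned) pairs read off the progress lines above:

```python
# (seconds past 11:25, documents cloned) sampled from the progress lines above
samples = [(49.725, 0), (49.745, 85), (49.771, 615), (49.868, 2489),
           (49.996, 4917), (50.252, 9992), (50.765, 12000)]

(t0, n0), (t1, n1) = samples[0], samples[-1]
docs_per_sec = (n1 - n0) / (t1 - t0)
print(round(docs_per_sec))  # roughly 11.5k docs/sec at 105 bytes per document
```

Note the ramp-up: the first two polls see 0 documents cloned, then the rate climbs, which is consistent with the roughly 10ms polling interval between progress lines at the start of the transfer.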
m30000| Fri Feb 22 11:25:50.362 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') } m30002| Fri Feb 22 11:25:50.481 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bd5b44ae98eab3eb9b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:50.494 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755bd5b44ae98eab3eb9b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:50.495 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755be5b44ae98eab41a7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755bd5b44ae98eab3eb9b')", configdb: "localhost:30000" } m30000| Fri Feb 22 11:25:50.495 [initandlisten] connection accepted from 127.0.0.1:46999 #20 (20 connections now open) m30999| Fri Feb 22 11:25:50.496 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755be5b44ae98eab41a7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755bd5b44ae98eab3eb9b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bdad0d9d7dc768ff6c'), when: new Date(1361532349696), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755885b44ae98eaa8c03b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:50.765 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: 
"localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:50.765 [conn4] moveChunk setting version to: 15|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:25:50.765 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:25:50.767 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') } m30000| Fri Feb 22 11:25:50.767 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') } m30000| Fri Feb 22 11:25:50.767 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:50-512755bed8a80eaf6b759471", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532350767), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 642, step4 of 5: 0, step5 of 5: 409 } } m30001| Fri Feb 22 11:25:50.775 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:50.775 [conn4] moveChunk updating self version to: 15|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') } for collection 
'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:50.776 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:50-512755bead0d9d7dc768ff6e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532350776), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:50.776 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:50.776 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:50.776 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:50.776 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:50.776 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:50.776 [cleanupOldData-512755bead0d9d7dc768ff6f] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:25:50.776 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:25:50.776 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:50-512755bead0d9d7dc768ff70", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532350776), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 16, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:50.777 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755885b44ae98eaa8c03b') }, max: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755885b44ae98eaa8c03b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:16339 w:36 reslen:37 1080ms m30999| Fri Feb 22 11:25:50.778 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 81 version: 15|1||51275580ce6119f732c457f2 based on: 14|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:50.779 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
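The failed autosplit above ("the collection's metadata lock is taken") is ordinary contention rather than an error: splitChunk and moveChunk both take the per-collection distributed lock, so an autosplit that races a Balancer migration fails fast and mongos simply retries on a later insert batch, which the subsequent successful splitChunk shows. A toy model of that interaction (hypothetical minimal lock model, not MongoDB's implementation):

```python
# hypothetical model: one distributed lock per collection namespace,
# keyed by namespace, holding the "why" string seen in the log
held = {"bulk_shard_insert.coll":
        "migrate-{ _id: ObjectId('512755885b44ae98eaa8c03b') }"}

def try_split(ns):
    """Fail fast, as splitChunk does, when the namespace lock is held."""
    if ns in held:
        return {"ok": 0.0, "errmsg": "the collection's metadata lock is taken"}
    return {"ok": 1.0}

print(try_split("bulk_shard_insert.coll"))  # fails while the migration runs
held.clear()                                # migration commits and unlocks
print(try_split("bulk_shard_insert.coll"))  # a later retry succeeds
```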
m30001| Fri Feb 22 11:25:50.796 [cleanupOldData-512755bead0d9d7dc768ff6f] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') } m30001| Fri Feb 22 11:25:50.796 [cleanupOldData-512755bead0d9d7dc768ff6f] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') } m30002| Fri Feb 22 11:25:50.837 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755bd5b44ae98eab3eb9b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:50.849 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755bd5b44ae98eab3eb9b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:50.850 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755be5b44ae98eab42a1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755bd5b44ae98eab3eb9b')", configdb: "localhost:30000" } m30002| Fri Feb 22 11:25:50.851 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755be54d20db69cce81c0 m30002| Fri Feb 22 11:25:50.852 [conn4] splitChunk accepted at version 14|3||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:50.853 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:50-512755be54d20db69cce81c1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532350853), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }, max: { _id: MaxKey }, lastmod: Timestamp 14000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755bd5b44ae98eab3eb9b') }, max: { _id: ObjectId('512755be5b44ae98eab42a1b') }, lastmod: Timestamp 15000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, lastmod: Timestamp 15000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30002| Fri Feb 22 11:25:50.853 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked. m30999| Fri Feb 22 11:25:50.855 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 82 version: 15|3||51275580ce6119f732c457f2 based on: 15|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:50.855 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 14|3||000000000000000000000000min: { _id: ObjectId('512755bd5b44ae98eab3eb9b') }max: { _id: MaxKey } on: { _id: ObjectId('512755be5b44ae98eab42a1b') } (splitThreshold 943718) (migrate suggested) Inserted 852000 documents. 
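The `(splitThreshold 943718)` figure in the autosplit lines is 90% of the 1 MB `maxChunkSizeBytes` this test runs with; the 0.9 factor is inferred from these two numbers rather than stated anywhere in the log:

```python
max_chunk_size_bytes = 1048576  # maxChunkSizeBytes from the moveChunk requests above

# assumed 90% factor, inferred from the logged threshold
split_threshold = int(max_chunk_size_bytes * 0.9)
print(split_threshold)  # 943718, matching "(splitThreshold 943718)" in the log
```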
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "version" : 3,
    "minCompatibleVersion" : 3,
    "currentVersion" : 4,
    "clusterId" : ObjectId("51275580ce6119f732c457ee")
  }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
      bulk_shard_insert.coll
        shard key: { "_id" : 1 }
        chunks:
          shard0000  7
          shard0002  8
          shard0001  33
          shard0003  18
        too many chunks to print, use verbose if you want to force print
m30002| Fri Feb 22 11:25:51.245 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:51.529 [cleanupOldData-512755bead0d9d7dc768ff6f] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755885b44ae98eaa8c03b') } -> { _id: ObjectId('512755895b44ae98eaa8ef1b') }
m30002| Fri Feb 22 11:25:51.575 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
Inserted 860000 documents.
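Because the shard key is the auto-generated `_id` and an ObjectId ends in a 3-byte per-process counter that increments once per insert, the gap between two chunk bounds approximates the chunk's document count. This holds for the single-writer bulk load in this test (same machine/pid bytes, no counter wrap), not in general:

```python
def oid_counter(oid_hex):
    # last 3 bytes of a 12-byte ObjectId are its increment counter
    return int(oid_hex[-6:], 16)

# min/max bounds of the chunk whose cleanup is logged above
lo = oid_counter("512755885b44ae98eaa8c03b")
hi = oid_counter("512755895b44ae98eaa8ef1b")
print(hi - lo)  # 12000, matching "moveChunk deleted 12000 documents"
```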
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  7
                shard0002  8
                shard0001  33
                shard0003  18
            too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:51.781 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755bfce6119f732c457fe
m30999| Fri Feb 22 11:25:51.783 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755895b44ae98eaa8ef1b')", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:51.783 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 15|1||000000000000000000000000min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:51.783 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:51.783 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755895b44ae98eaa8ef1b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:51.784 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755bfad0d9d7dc768ff71
m30001| Fri Feb 22 11:25:51.784 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:51-512755bfad0d9d7dc768ff72", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532351784), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:51.785 [conn4] moveChunk request accepted at version 15|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:51.811 [conn4] moveChunk number of documents: 12000
m30000| Fri Feb 22 11:25:51.811 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:51.822 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:51.832 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:51.842 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 133, clonedBytes: 13965, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:51.852 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 347, clonedBytes: 36435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:51.869 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 692, clonedBytes: 72660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:51.901 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1374, clonedBytes: 144270, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:25:51.946 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:51.959 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:51.959 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755bf5b44ae98eab458fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:51.960 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755bf5b44ae98eab458fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bfad0d9d7dc768ff71'), when: new Date(1361532351784), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755895b44ae98eaa8ef1b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30001| Fri Feb 22 11:25:51.965 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2616, clonedBytes: 274680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:52.093 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4682, clonedBytes: 491610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:25:52.322 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:52.333 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:52.335 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c05b44ae98eab4689b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:52.336 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c05b44ae98eab4689b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bfad0d9d7dc768ff71'), when: new Date(1361532351784), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755895b44ae98eaa8ef1b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
m30001| Fri Feb 22 11:25:52.350 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9607, clonedBytes: 1008735, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:52.462 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:52.462 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30000| Fri Feb 22 11:25:52.462 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30002| Fri Feb 22 11:25:52.647 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:52.655 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:52.655 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c05b44ae98eab4783b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" }
m30999| Fri Feb 22 11:25:52.656 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c05b44ae98eab4783b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755bfad0d9d7dc768ff71'), when: new Date(1361532351784), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('512755895b44ae98eaa8ef1b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 872000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  7
                shard0002  8
                shard0001  33
                shard0003  18
            too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:52.862 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:52.862 [conn4] moveChunk setting version to: 16|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:52.862 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:25:52.868 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30000| Fri Feb 22 11:25:52.868 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30000| Fri Feb 22 11:25:52.868 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:52-512755c0d8a80eaf6b759472", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532352868), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 649, step4 of 5: 0, step5 of 5: 406 } }
m30001| Fri Feb 22 11:25:52.872 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:25:52.872 [conn4] moveChunk updating self version to: 16|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:25:52.873 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:52-512755c0ad0d9d7dc768ff73", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532352873), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:52.873 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:52.873 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:52.873 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:25:52.873 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:25:52.873 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:25:52.873 [cleanupOldData-512755c0ad0d9d7dc768ff74] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, # cursors remaining: 0
m30001| Fri Feb 22 11:25:52.874 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:25:52.874 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:52-512755c0ad0d9d7dc768ff75", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532352874), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 26, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:25:52.874 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755895b44ae98eaa8ef1b') }, max: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755895b44ae98eaa8ef1b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:25727 w:27 reslen:37 1090ms
m30999| Fri Feb 22 11:25:52.876 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 83 version: 16|1||51275580ce6119f732c457f2 based on: 15|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:52.876 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:25:52.893 [cleanupOldData-512755c0ad0d9d7dc768ff74] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30001| Fri Feb 22 11:25:52.893 [cleanupOldData-512755c0ad0d9d7dc768ff74] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30002| Fri Feb 22 11:25:52.914 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:52.923 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755be5b44ae98eab42a1b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:52.923 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c05b44ae98eab487db') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755be5b44ae98eab42a1b')", configdb: "localhost:30000" }
m30002| Fri Feb 22 11:25:52.924 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755c054d20db69cce81c2
m30002| Fri Feb 22 11:25:52.925 [conn4] splitChunk accepted at version 15|3||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:52.926 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:52-512755c054d20db69cce81c3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532352926), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: MaxKey }, lastmod: Timestamp 15000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755be5b44ae98eab42a1b') }, max: { _id: ObjectId('512755c05b44ae98eab487db') }, lastmod: Timestamp 16000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755c05b44ae98eab487db') }, max: { _id: MaxKey }, lastmod: Timestamp 16000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30002| Fri Feb 22 11:25:52.926 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked.
m30999| Fri Feb 22 11:25:52.928 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 84 version: 16|3||51275580ce6119f732c457f2 based on: 16|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:52.928 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 15|3||000000000000000000000000min: { _id: ObjectId('512755be5b44ae98eab42a1b') }max: { _id: MaxKey } on: { _id: ObjectId('512755c05b44ae98eab487db') } (splitThreshold 943718) (migrate suggested)
m30002| Fri Feb 22 11:25:53.182 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c05b44ae98eab487db') } -->> { : MaxKey }
Inserted 880000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
    { "_id" : "shard0002", "host" : "localhost:30002" }
    { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
        bulk_shard_insert.coll
            shard key: { "_id" : 1 }
            chunks:
                shard0000  8
                shard0002  9
                shard0001  32
                shard0003  18
            too many chunks to print, use verbose if you want to force print
m30002| Fri Feb 22 11:25:53.449 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c05b44ae98eab487db') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:53.624 [cleanupOldData-512755c0ad0d9d7dc768ff74] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755895b44ae98eaa8ef1b') } -> { _id: ObjectId('5127558a5b44ae98eaa91dfb') }
m30002| Fri Feb 22 11:25:53.702 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c05b44ae98eab487db') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:53.711 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c05b44ae98eab487db') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:53.711 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c05b44ae98eab487db') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c15b44ae98eab4b6bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c05b44ae98eab487db')", configdb: "localhost:30000" }
m30002| Fri Feb 22 11:25:53.712 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755c154d20db69cce81c4
m30002| Fri Feb 22 11:25:53.713 [conn4] splitChunk accepted at version 16|3||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:25:53.713 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:53-512755c154d20db69cce81c5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532353713), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755c05b44ae98eab487db') }, max: { _id: MaxKey }, lastmod: Timestamp 16000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755c05b44ae98eab487db') }, max: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, lastmod: Timestamp 16000|4, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, max: { _id: MaxKey }, lastmod: Timestamp 16000|5, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } }
m30002| Fri Feb 22 11:25:53.714 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked.
m30999| Fri Feb 22 11:25:53.715 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 85 version: 16|5||51275580ce6119f732c457f2 based on: 16|3||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:25:53.716 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 16|3||000000000000000000000000min: { _id: ObjectId('512755c05b44ae98eab487db') }max: { _id: MaxKey } on: { _id: ObjectId('512755c15b44ae98eab4b6bb') } (splitThreshold 943718) (migrate suggested)
m30999| Fri Feb 22 11:25:53.878 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755c1ce6119f732c457ff
m30999| Fri Feb 22 11:25:53.881 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558a5b44ae98eaa91dfb')", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:53.881 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 16|1||000000000000000000000000min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:53.881 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:53.881 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558a5b44ae98eaa91dfb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:25:53.882 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755c1ad0d9d7dc768ff76
m30001| Fri Feb 22 11:25:53.882 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:53-512755c1ad0d9d7dc768ff77", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532353882), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:25:53.883 [conn4] moveChunk request accepted at version 16|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:25:53.900 [conn4] moveChunk number of documents: 12000
m30000| Fri Feb 22 11:25:53.900 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:25:53.911 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:53.921 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:53.931 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 127, clonedBytes: 13335, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:53.941 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 339, clonedBytes: 35595, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:53.957 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 677, clonedBytes: 71085, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:25:53.977 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c15b44ae98eab4b6bb') } -->> { : MaxKey }
Inserted 892000 documents.
m30001| Fri Feb 22 11:25:53.989 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1307, clonedBytes: 137235, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0000 8 shard0002 10 shard0001 32 shard0003 18 too many chunks to print, use verbose if you want to force print m30001| Fri Feb 22 11:25:54.054 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2578, clonedBytes: 270690, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:54.182 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5124, clonedBytes: 538020, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:25:54.278 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : 
ObjectId('512755c15b44ae98eab4b6bb') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:54.438 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9882, clonedBytes: 1037610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:25:54.551 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:25:54.551 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } m30000| Fri Feb 22 11:25:54.552 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } m30002| Fri Feb 22 11:25:54.642 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c15b44ae98eab4b6bb') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:54.655 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c15b44ae98eab4b6bb') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:54.655 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c25b44ae98eab4e59b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c15b44ae98eab4b6bb')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:54.656 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { 
_id: ObjectId('512755c25b44ae98eab4e59b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c15b44ae98eab4b6bb')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c1ad0d9d7dc768ff76'), when: new Date(1361532353881), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558a5b44ae98eaa91dfb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" }
Inserted 900000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  8
                                shard0002  10
                                shard0001  32
                                shard0003  18
                        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:54.950 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:54.951 [conn4] moveChunk setting version to: 17|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:54.951 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:25:54.957 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id:
ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } m30000| Fri Feb 22 11:25:54.957 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } m30000| Fri Feb 22 11:25:54.958 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:54-512755c2d8a80eaf6b759473", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532354958), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 649, step4 of 5: 0, step5 of 5: 406 } } m30001| Fri Feb 22 11:25:54.961 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:54.961 [conn4] moveChunk updating self version to: 17|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:54.962 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:54-512755c2ad0d9d7dc768ff78", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532354962), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:54.962 [conn4] MigrateFromStatus::done About to acquire global write 
lock to exit critical section m30001| Fri Feb 22 11:25:54.962 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:54.962 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:54.962 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:54.962 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:54.962 [cleanupOldData-512755c2ad0d9d7dc768ff79] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, # cursors remaining: 0 m30001| Fri Feb 22 11:25:54.962 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:25:54.962 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:54-512755c2ad0d9d7dc768ff7a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532354962), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 17, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:54.962 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558a5b44ae98eaa91dfb') }, max: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558a5b44ae98eaa91dfb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:16903 w:34 reslen:37 1081ms m30999| Fri Feb 22 11:25:54.964 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 86 
version: 17|1||51275580ce6119f732c457f2 based on: 16|5||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:54.965 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. m30001| Fri Feb 22 11:25:54.982 [cleanupOldData-512755c2ad0d9d7dc768ff79] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } m30001| Fri Feb 22 11:25:54.982 [cleanupOldData-512755c2ad0d9d7dc768ff79] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } m30002| Fri Feb 22 11:25:55.013 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c15b44ae98eab4b6bb') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:55.025 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c15b44ae98eab4b6bb') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:55.025 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c25b44ae98eab4f53b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c15b44ae98eab4b6bb')", configdb: "localhost:30000" } m30002| Fri Feb 22 11:25:55.026 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755c354d20db69cce81c6 m30002| Fri Feb 22 11:25:55.027 [conn4] splitChunk accepted at version 16|5||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:55.028 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:55-512755c354d20db69cce81c7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532355028), what: "split", ns: 
"bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, max: { _id: MaxKey }, lastmod: Timestamp 16000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }, max: { _id: ObjectId('512755c25b44ae98eab4f53b') }, lastmod: Timestamp 17000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, lastmod: Timestamp 17000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30002| Fri Feb 22 11:25:55.028 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked. m30999| Fri Feb 22 11:25:55.030 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 87 version: 17|3||51275580ce6119f732c457f2 based on: 17|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:55.031 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 16|5||000000000000000000000000min: { _id: ObjectId('512755c15b44ae98eab4b6bb') }max: { _id: MaxKey } on: { _id: ObjectId('512755c25b44ae98eab4f53b') } (splitThreshold 943718) (migrate suggested) m30002| Fri Feb 22 11:25:55.357 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:55.696 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:55.705 [cleanupOldData-512755c2ad0d9d7dc768ff79] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558a5b44ae98eaa91dfb') } -> { _id: ObjectId('5127558b5b44ae98eaa94cdb') } Inserted 912000 documents. 
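The repeated "(splitThreshold 943718)" in the autosplit messages above sits just below the `maxChunkSizeBytes: 1048576` used by the moveChunk commands in this run. A minimal sketch, assuming the threshold is simply 90% of the 1 MB test chunk size (a relationship inferred from the numbers in this log, not quoted from the server source):

```python
# Hypothetical relation between the two sizes seen in this log:
# the 90% factor is an assumption, checked only against these values.
max_chunk_size_bytes = 1048576            # from the moveChunk commands above
split_threshold = int(max_chunk_size_bytes * 0.9)
print(split_threshold)                    # compare with "(splitThreshold 943718)"
```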
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  9
                                shard0002  11
                                shard0001  31
                                shard0003  18
                        too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:55.967 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755c3ce6119f732c45800
m30999| Fri Feb 22 11:25:55.969 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa94cdb')", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:55.969 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 17|1||000000000000000000000000min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:55.969 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:55.969 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb')
}, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa94cdb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:55.970 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755c3ad0d9d7dc768ff7b m30001| Fri Feb 22 11:25:55.970 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:55-512755c3ad0d9d7dc768ff7c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532355970), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:55.972 [conn4] moveChunk request accepted at version 17|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:55.997 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:25:55.997 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:56.007 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:56.018 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: 
"clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:56.028 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 126, clonedBytes: 13230, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:56.038 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 342, clonedBytes: 35910, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:56.054 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 684, clonedBytes: 71820, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:25:56.064 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:56.078 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:56.079 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c35b44ae98eab5241b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: 
"localhost:30000" } m30999| Fri Feb 22 11:25:56.080 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c35b44ae98eab5241b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c3ad0d9d7dc768ff7b'), when: new Date(1361532355970), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa94cdb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:56.087 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1261, clonedBytes: 132405, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:56.151 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2615, clonedBytes: 274575, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:56.279 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5125, clonedBytes: 538125, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri 
Feb 22 11:25:56.409 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:56.432 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:56.433 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c45b44ae98eab533bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:56.434 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c45b44ae98eab533bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c3ad0d9d7dc768ff7b'), when: new Date(1361532355970), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa94cdb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } Inserted 920000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
        { "_id" : "shard0000", "host" : "localhost:30000" }
        { "_id" : "shard0001", "host" : "localhost:30001" }
        { "_id" : "shard0002", "host" : "localhost:30002" }
        { "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
        { "_id" : "admin", "partitioned" : false, "primary" : "config" }
        { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
                bulk_shard_insert.coll
                        shard key: { "_id" : 1 }
                        chunks:
                                shard0000  9
                                shard0002  11
                                shard0001  31
                                shard0003  18
                        too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:56.535 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9935, clonedBytes: 1043175, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:25:56.620 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:25:56.620 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') }
m30000| Fri Feb 22 11:25:56.620 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') }
m30002| Fri Feb 22 11:25:56.783 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:25:56.795 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> {
: MaxKey } m30002| Fri Feb 22 11:25:56.795 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c45b44ae98eab5435b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:56.796 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c45b44ae98eab5435b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c3ad0d9d7dc768ff7b'), when: new Date(1361532355970), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa94cdb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30002| Fri Feb 22 11:25:57.037 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:57.045 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:57.045 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c45b44ae98eab552fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:57.046 [conn1] 
warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c45b44ae98eab552fb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c3ad0d9d7dc768ff7b'), when: new Date(1361532355970), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa94cdb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:57.048 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:57.048 [conn4] moveChunk setting version to: 18|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:25:57.048 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:25:57.057 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') } m30000| Fri Feb 22 11:25:57.057 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') } m30000| Fri Feb 22 11:25:57.057 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:57-512755c5d8a80eaf6b759474", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532357057), 
what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 621, step4 of 5: 0, step5 of 5: 437 } } m30001| Fri Feb 22 11:25:57.058 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:57.058 [conn4] moveChunk updating self version to: 18|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:57.059 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:57-512755c5ad0d9d7dc768ff7d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532357059), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:57.059 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:57.059 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:57.059 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:57.059 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:57.059 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:57.059 [cleanupOldData-512755c5ad0d9d7dc768ff7e] (start) waiting to cleanup bulk_shard_insert.coll from { _id: 
ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, # cursors remaining: 0 m30001| Fri Feb 22 11:25:57.059 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:25:57.060 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:57-512755c5ad0d9d7dc768ff7f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532357060), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 25, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:57.060 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558b5b44ae98eaa94cdb') }, max: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa94cdb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:25137 w:26 reslen:37 1090ms m30999| Fri Feb 22 11:25:57.061 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 88 version: 18|1||51275580ce6119f732c457f2 based on: 17|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:57.062 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:25:57.079 [cleanupOldData-512755c5ad0d9d7dc768ff7e] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') } m30001| Fri Feb 22 11:25:57.079 [cleanupOldData-512755c5ad0d9d7dc768ff7e] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') } m30002| Fri Feb 22 11:25:57.300 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:57.308 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c25b44ae98eab4f53b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:57.309 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c55b44ae98eab5629b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c25b44ae98eab4f53b')", configdb: "localhost:30000" } m30002| Fri Feb 22 11:25:57.310 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755c554d20db69cce81c8 m30002| Fri Feb 22 11:25:57.311 [conn4] splitChunk accepted at version 17|3||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:57.311 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:57-512755c554d20db69cce81c9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532357311), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: MaxKey }, lastmod: Timestamp 17000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755c25b44ae98eab4f53b') }, max: { _id: ObjectId('512755c55b44ae98eab5629b') }, lastmod: Timestamp 18000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, lastmod: Timestamp 18000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30002| Fri Feb 22 11:25:57.312 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked. m30999| Fri Feb 22 11:25:57.313 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 89 version: 18|3||51275580ce6119f732c457f2 based on: 18|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:57.314 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 17|3||000000000000000000000000min: { _id: ObjectId('512755c25b44ae98eab4f53b') }max: { _id: MaxKey } on: { _id: ObjectId('512755c55b44ae98eab5629b') } (splitThreshold 943718) (migrate suggested) Inserted 932000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0000	10
				shard0002	12
				shard0001	30
				shard0003	18
			too many chunks to print, use verbose if you want to force print
m30002| Fri Feb 22 11:25:57.613 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey }
m30001| Fri Feb 22 11:25:57.813 [cleanupOldData-512755c5ad0d9d7dc768ff7e] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa94cdb') } -> { _id: ObjectId('5127558b5b44ae98eaa97bbb') }
m30002| Fri Feb 22 11:25:57.865 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey }
Inserted 940000 documents.
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0000	10
				shard0002	12
				shard0001	30
				shard0003	18
			too many chunks to print, use verbose if you want to force print
m30999| Fri Feb 22 11:25:58.064 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755c6ce6119f732c45801
m30002| Fri Feb 22 11:25:58.139 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:58.140 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa97bbb')", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:58.140 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 18|1||000000000000000000000000min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:25:58.140 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:25:58.140 [conn4] received moveChunk request: { moveChunk:
"bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa97bbb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:58.141 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755c6ad0d9d7dc768ff80 m30001| Fri Feb 22 11:25:58.141 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:58-512755c6ad0d9d7dc768ff81", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532358141), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:58.142 [conn4] moveChunk request accepted at version 18|1||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:58.147 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.147 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c65b44ae98eab5917b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:58.148 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: 
ObjectId('512755c65b44ae98eab5917b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c6ad0d9d7dc768ff80'), when: new Date(1361532358140), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa97bbb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:58.158 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:25:58.159 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:58.169 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:58.179 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:58.189 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 224, clonedBytes: 23520, catchup: 0, 
steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:58.200 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 545, clonedBytes: 57225, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:58.216 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1061, clonedBytes: 111405, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:58.248 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2114, clonedBytes: 221970, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:58.313 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4078, clonedBytes: 428190, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:25:58.390 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.398 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.398 [conn4] 
received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c65b44ae98eab5a11b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:58.399 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c65b44ae98eab5a11b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c6ad0d9d7dc768ff80'), when: new Date(1361532358140), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa97bbb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:25:58.441 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8005, clonedBytes: 840525, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:25:58.570 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:25:58.570 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } m30000| Fri Feb 22 11:25:58.575 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: 
ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } m30002| Fri Feb 22 11:25:58.643 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.651 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.651 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c65b44ae98eab5b0bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:58.651 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c65b44ae98eab5b0bb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c6ad0d9d7dc768ff80'), when: new Date(1361532358140), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558b5b44ae98eaa97bbb') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } Inserted 952000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0000	10
				shard0002	12
				shard0001	30
				shard0003	18
			too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:58.697 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:25:58.697 [conn4] moveChunk setting version to: 19|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:25:58.697 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:25:58.707 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }
m30000| Fri Feb 22 11:25:58.707 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }
m30000| Fri Feb 22 11:25:58.707 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:58-512755c6d8a80eaf6b759475", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532358707), what: "moveChunk.to", ns:
"bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 410, step4 of 5: 0, step5 of 5: 137 } } m30001| Fri Feb 22 11:25:58.707 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:25:58.707 [conn4] moveChunk updating self version to: 19|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:25:58.708 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:58-512755c6ad0d9d7dc768ff82", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532358708), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:58.708 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:58.708 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:58.708 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:25:58.708 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:25:58.708 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:25:58.708 [cleanupOldData-512755c6ad0d9d7dc768ff83] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> 
{ _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:25:58.709 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:25:58.709 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:58-512755c6ad0d9d7dc768ff84", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532358709), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 16, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:25:58.709 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558b5b44ae98eaa97bbb') }, max: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558b5b44ae98eaa97bbb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:14 r:16333 w:26 reslen:37 568ms m30999| Fri Feb 22 11:25:58.711 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 90 version: 19|1||51275580ce6119f732c457f2 based on: 18|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:58.711 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:25:58.728 [cleanupOldData-512755c6ad0d9d7dc768ff83] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } m30001| Fri Feb 22 11:25:58.728 [cleanupOldData-512755c6ad0d9d7dc768ff83] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } m30002| Fri Feb 22 11:25:58.914 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.923 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c55b44ae98eab5629b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:58.923 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c65b44ae98eab5c05b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c55b44ae98eab5629b')", configdb: "localhost:30000" } m30002| Fri Feb 22 11:25:58.924 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755c654d20db69cce81ca m30002| Fri Feb 22 11:25:58.925 [conn4] splitChunk accepted at version 18|3||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:25:58.926 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:58-512755c654d20db69cce81cb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532358926), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755c55b44ae98eab5629b') }, max: { _id: MaxKey }, lastmod: Timestamp 18000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 
ObjectId('512755c55b44ae98eab5629b') }, max: { _id: ObjectId('512755c65b44ae98eab5c05b') }, lastmod: Timestamp 19000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, lastmod: Timestamp 19000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30002| Fri Feb 22 11:25:58.926 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked. m30999| Fri Feb 22 11:25:58.928 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 91 version: 19|3||51275580ce6119f732c457f2 based on: 19|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:25:58.929 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 18|3||000000000000000000000000min: { _id: ObjectId('512755c55b44ae98eab5629b') }max: { _id: MaxKey } on: { _id: ObjectId('512755c65b44ae98eab5c05b') } (splitThreshold 943718) (migrate suggested) m30002| Fri Feb 22 11:25:59.210 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } Inserted 960000 documents. 
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0000	11
				shard0002	13
				shard0001	29
				shard0003	18
			too many chunks to print, use verbose if you want to force print
m30001| Fri Feb 22 11:25:59.289 [cleanupOldData-512755c6ad0d9d7dc768ff83] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558b5b44ae98eaa97bbb') } -> { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }
m30002| Fri Feb 22 11:25:59.557 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey }
m30999| Fri Feb 22 11:25:59.713 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755c7ce6119f732c45802
m30999| Fri Feb 22 11:25:59.716 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558c5b44ae98eaa9aa9b')", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:25:59.716 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 19|1||000000000000000000000000min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') })
shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:25:59.716 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:25:59.716 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558c5b44ae98eaa9aa9b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:25:59.717 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755c7ad0d9d7dc768ff85 m30001| Fri Feb 22 11:25:59.717 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:25:59-512755c7ad0d9d7dc768ff86", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532359717), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:25:59.718 [conn4] moveChunk request accepted at version 19|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:25:59.744 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:25:59.744 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:25:59.754 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 
1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:59.764 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:59.775 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 137, clonedBytes: 14385, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:59.785 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 351, clonedBytes: 36855, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:59.801 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 692, clonedBytes: 72660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:25:59.833 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1315, clonedBytes: 
138075, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:25:59.894 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } m30001| Fri Feb 22 11:25:59.897 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2654, clonedBytes: 278670, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:25:59.906 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } m30002| Fri Feb 22 11:25:59.907 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c75b44ae98eab5ef3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:25:59.907 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c75b44ae98eab5ef3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c7ad0d9d7dc768ff85'), when: new Date(1361532359716), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558c5b44ae98eaa9aa9b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } 
m30001| Fri Feb 22 11:26:00.026 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5095, clonedBytes: 534975, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:26:00.261 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:00.274 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:00.274 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c85b44ae98eab5fedb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:26:00.275 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c85b44ae98eab5fedb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c7ad0d9d7dc768ff85'), when: new Date(1361532359716), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558c5b44ae98eaa9aa9b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } Inserted 972000 documents. 
m30001| Fri Feb 22 11:26:00.282 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10004, clonedBytes: 1050420, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
	{ "_id" : "shard0002", "host" : "localhost:30002" }
	{ "_id" : "shard0003", "host" : "localhost:30003" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" }
		bulk_shard_insert.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0000	11
				shard0002	13
				shard0001	29
				shard0003	18
			too many chunks to print, use verbose if you want to force print
m30000| Fri Feb 22 11:26:00.389 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:26:00.389 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') }
m30000| Fri Feb 22 11:26:00.391 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') }
m30002| Fri Feb 22 11:26:00.653 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey }
m30002| Fri Feb 22 11:26:00.664 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->>
{ : MaxKey } m30002| Fri Feb 22 11:26:00.665 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c85b44ae98eab60e7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:26:00.666 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c85b44ae98eab60e7b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c7ad0d9d7dc768ff85'), when: new Date(1361532359716), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558c5b44ae98eaa9aa9b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30001| Fri Feb 22 11:26:00.794 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:00.794 [conn4] moveChunk setting version to: 20|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:26:00.794 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:26:00.797 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } m30000| Fri 
Feb 22 11:26:00.797 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } m30000| Fri Feb 22 11:26:00.797 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:00-512755c8d8a80eaf6b759476", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532360797), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 643, step4 of 5: 0, step5 of 5: 408 } } m30001| Fri Feb 22 11:26:00.804 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:00.804 [conn4] moveChunk updating self version to: 20|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:00.806 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:00-512755c8ad0d9d7dc768ff87", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532360806), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:00.806 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:00.806 [conn4] MigrateFromStatus::done Global 
lock acquired m30001| Fri Feb 22 11:26:00.806 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:00.806 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:00.806 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:00.806 [cleanupOldData-512755c8ad0d9d7dc768ff88] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:26:00.806 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:26:00.806 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:00-512755c8ad0d9d7dc768ff89", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532360806), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 25, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:00.806 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558c5b44ae98eaa9aa9b') }, max: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558c5b44ae98eaa9aa9b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:25 r:25444 w:32 reslen:37 1090ms m30999| Fri Feb 22 11:26:00.809 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 92 version: 20|1||51275580ce6119f732c457f2 based on: 19|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 
11:26:00.809 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. m30001| Fri Feb 22 11:26:00.826 [cleanupOldData-512755c8ad0d9d7dc768ff88] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } m30001| Fri Feb 22 11:26:00.826 [cleanupOldData-512755c8ad0d9d7dc768ff88] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } m30002| Fri Feb 22 11:26:01.004 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:01.016 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c65b44ae98eab5c05b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:01.016 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755c85b44ae98eab61e1b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c65b44ae98eab5c05b')", configdb: "localhost:30000" } m30002| Fri Feb 22 11:26:01.023 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' acquired, ts : 512755c954d20db69cce81cc m30002| Fri Feb 22 11:26:01.024 [conn4] splitChunk accepted at version 19|3||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:26:01.025 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:01-512755c954d20db69cce81cd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:41136", time: new Date(1361532361025), what: "split", ns: "bulk_shard_insert.coll", details: { before: { min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: MaxKey }, 
lastmod: Timestamp 19000|3, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: ObjectId('512755c65b44ae98eab5c05b') }, max: { _id: ObjectId('512755c85b44ae98eab61e1b') }, lastmod: Timestamp 20000|2, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') }, right: { min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, lastmod: Timestamp 20000|3, lastmodEpoch: ObjectId('51275580ce6119f732c457f2') } } } m30002| Fri Feb 22 11:26:01.026 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30002:1361532328:18576' unlocked. m30999| Fri Feb 22 11:26:01.028 [conn1] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 93 version: 20|3||51275580ce6119f732c457f2 based on: 20|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:01.028 [conn1] autosplitted bulk_shard_insert.coll shard: ns:bulk_shard_insert.collshard: shard0002:localhost:30002lastmod: 19|3||000000000000000000000000min: { _id: ObjectId('512755c65b44ae98eab5c05b') }max: { _id: MaxKey } on: { _id: ObjectId('512755c85b44ae98eab61e1b') } (splitThreshold 943718) (migrate suggested) Inserted 980000 documents. 
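(Editor's note, not part of the test output.) The "--- Sharding Status ---" dumps in this log are flattened onto single lines, which makes the per-shard chunk distribution hard to read. A minimal sketch of recovering that table from the flattened text — the helper name `chunk_counts` is an assumption for illustration, not a MongoDB API:

```python
import re

# Hypothetical helper (illustration only): recover per-shard chunk counts
# from a flattened printShardingStatus dump like the ones in this log.
CHUNKS = re.compile(r"(shard\d{4})\s+(\d+)")

def chunk_counts(status_text):
    """Map shard name -> chunk count from a printShardingStatus dump."""
    # Only scan the text after "chunks:" so the shard _id entries are skipped.
    tail = status_text.split("chunks:", 1)[1]
    return {shard: int(n) for shard, n in CHUNKS.findall(tail)}

# Sample taken from the status dump above.
status = ('... chunks: shard0000 11 shard0002 13 shard0001 29 shard0003 18 '
          'too many chunks to print, use verbose if you want to force print')
print(chunk_counts(status))
# → {'shard0000': 11, 'shard0002': 13, 'shard0001': 29, 'shard0003': 18}
```

Summing the values (71 chunks here) gives a quick balance check across balancer rounds.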
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0000 12 shard0002 14 shard0001 28 shard0003 18 too many chunks to print, use verbose if you want to force print m30002| Fri Feb 22 11:26:01.434 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30001| Fri Feb 22 11:26:01.609 [cleanupOldData-512755c8ad0d9d7dc768ff88] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558c5b44ae98eaa9aa9b') } -> { _id: ObjectId('5127558d5b44ae98eaa9d97b') } m30002| Fri Feb 22 11:26:01.784 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30999| Fri Feb 22 11:26:01.811 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755c9ce6119f732c45803 m30999| Fri Feb 22 11:26:01.813 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558d5b44ae98eaa9d97b')", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:26:01.813 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: 
shard0001:localhost:30001lastmod: 20|1||000000000000000000000000min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:26:01.814 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:26:01.814 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558d5b44ae98eaa9d97b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:01.815 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755c9ad0d9d7dc768ff8a m30001| Fri Feb 22 11:26:01.815 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:01-512755c9ad0d9d7dc768ff8b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532361815), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:01.816 [conn4] moveChunk request accepted at version 20|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:01.841 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:26:01.842 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:01.852 [conn4] moveChunk data transfer progress: { active: true, ns: 
"bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:01.862 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:01.872 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 126, clonedBytes: 13230, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:01.883 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 310, clonedBytes: 32550, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:01.899 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 650, clonedBytes: 68250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:01.931 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: 
ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1322, clonedBytes: 138810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:01.996 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2550, clonedBytes: 267750, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:02.124 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5047, clonedBytes: 529935, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:26:02.139 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:02.159 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:02.159 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755ca5b44ae98eab64cfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c85b44ae98eab61e1b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:26:02.160 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, from: 
"shard0002", splitKeys: [ { _id: ObjectId('512755ca5b44ae98eab64cfb') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c85b44ae98eab61e1b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c9ad0d9d7dc768ff8a'), when: new Date(1361532361814), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558d5b44ae98eaa9d97b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } Inserted 992000 documents. --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0000 12 shard0002 14 shard0001 28 shard0003 18 too many chunks to print, use verbose if you want to force print m30001| Fri Feb 22 11:26:02.380 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10123, clonedBytes: 1062915, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:26:02.483 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:26:02.483 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558d5b44ae98eaa9d97b') } 
-> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30000| Fri Feb 22 11:26:02.486 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30002| Fri Feb 22 11:26:02.572 [conn3] insert bulk_shard_insert.coll ninserted:4000 keyUpdates:0 locks(micros) w:101700 101ms m30002| Fri Feb 22 11:26:02.573 [conn4] request split points lookup for chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:02.586 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:02.586 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755ca5b44ae98eab65c9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c85b44ae98eab61e1b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:26:02.587 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755ca5b44ae98eab65c9b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c85b44ae98eab61e1b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c9ad0d9d7dc768ff8a'), when: new Date(1361532361814), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558d5b44ae98eaa9d97b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } m30002| Fri Feb 22 11:26:02.831 [conn4] request split points lookup 
for chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:02.839 [conn4] max number of requested split points reached (2) before the end of chunk bulk_shard_insert.coll { : ObjectId('512755c85b44ae98eab61e1b') } -->> { : MaxKey } m30002| Fri Feb 22 11:26:02.840 [conn4] received splitChunk request: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755ca5b44ae98eab66c3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c85b44ae98eab61e1b')", configdb: "localhost:30000" } m30999| Fri Feb 22 11:26:02.840 [conn1] warning: splitChunk failed - cmd: { splitChunk: "bulk_shard_insert.coll", keyPattern: { _id: 1.0 }, min: { _id: ObjectId('512755c85b44ae98eab61e1b') }, max: { _id: MaxKey }, from: "shard0002", splitKeys: [ { _id: ObjectId('512755ca5b44ae98eab66c3b') } ], shardId: "bulk_shard_insert.coll-_id_ObjectId('512755c85b44ae98eab61e1b')", configdb: "localhost:30000" } result: { who: { _id: "bulk_shard_insert.coll", process: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480", state: 2, ts: ObjectId('512755c9ad0d9d7dc768ff8a'), when: new Date(1361532361814), who: "bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480:conn4:8936", why: "migrate-{ _id: ObjectId('5127558d5b44ae98eaa9d97b') }" }, ok: 0.0, errmsg: "the collection's metadata lock is taken" } Inserted 1000000 documents. 
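(Editor's note, not part of the test output.) Each migration above emits a stream of "moveChunk data transfer progress" entries whose `counts` document tracks the clone. A sketch of extracting those counters so a sequence of entries can be turned into a clone-rate timeline — `clone_progress` is an illustrative name, not part of the test harness:

```python
import re

# Hypothetical helper (illustration only): pull the clone state and counters
# out of "moveChunk data transfer progress" log entries.
PROGRESS = re.compile(
    r'state: "(?P<state>\w+)", counts: \{ cloned: (?P<cloned>\d+), '
    r'clonedBytes: (?P<bytes>\d+)'
)

def clone_progress(lines):
    """Yield (state, cloned_docs, cloned_bytes) per progress entry."""
    for ln in lines:
        m = PROGRESS.search(ln)
        if m:
            yield m.group("state"), int(m.group("cloned")), int(m.group("bytes"))

# Abbreviated samples from the entries above.
sample = [
    'm30001| ... state: "clone", counts: { cloned: 10004, clonedBytes: 1050420, catchup: 0, steady: 0 }, ok: 1.0 }',
    'm30001| ... state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }',
]
for rec in clone_progress(sample):
    print(rec)
```

The migration is ready to commit once `state` reaches "steady" with catchup exhausted, which is exactly the transition visible in the entries above.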
--- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0000 12 shard0002 14 shard0001 28 shard0003 18 too many chunks to print, use verbose if you want to force print --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275580ce6119f732c457ee") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0002", "host" : "localhost:30002" } { "_id" : "shard0003", "host" : "localhost:30003" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "bulk_shard_insert", "partitioned" : true, "primary" : "shard0001" } bulk_shard_insert.coll shard key: { "_id" : 1 } chunks: shard0000 12 shard0002 14 shard0001 28 shard0003 18 too many chunks to print, use verbose if you want to force print m30000| Fri Feb 22 11:26:02.866 [conn6] no current chunk manager found for this shard, will initialize Inserted 1000000 count : 1012000 itcount : 1012000 m30001| Fri Feb 22 11:26:02.892 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 
22 11:26:02.893 [conn4] moveChunk setting version to: 21|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:26:02.893 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:26:02.902 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30000| Fri Feb 22 11:26:02.903 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:26:02.903 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30000| Fri Feb 22 11:26:02.903 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:02-512755cad8a80eaf6b759477", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532362903), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 640, step4 of 5: 0, step5 of 5: 420 } } m30001| Fri Feb 22 11:26:02.913 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:02.913 [conn4] moveChunk updating self version to: 21|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:02.914 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:02-512755caad0d9d7dc768ff8c", server: "bs-smartos-x86-64-1.10gen.cc", 
clientAddr: "127.0.0.1:56609", time: new Date(1361532362914), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:02.914 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:02.914 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:02.914 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:02.914 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:02.914 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:02.914 [cleanupOldData-512755caad0d9d7dc768ff8d] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:26:02.914 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:26:02.914 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:02-512755caad0d9d7dc768ff8e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532362914), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 25, step4 of 6: 1050, step5 of 6: 21, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:02.914 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558d5b44ae98eaa9d97b') }, max: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558d5b44ae98eaa9d97b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:25551 w:26 reslen:37 1100ms m30999| Fri Feb 22 11:26:02.916 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 94 version: 21|1||51275580ce6119f732c457f2 based on: 20|3||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:02.917 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
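(Editor's note, not part of the test output.) The `moveChunk.from` and `moveChunk.to` metadata events above break each migration into numbered step timings ("step1 of 6: 0, ... step6 of 6: 0"). A sketch that totals those timings for one event line — `move_chunk_ms` is an illustrative name, not a MongoDB API:

```python
import re

# Hypothetical helper (illustration only): sum the per-step millisecond
# timings recorded in a moveChunk.from / moveChunk.to metadata event.
STEP = re.compile(r"step\d+ of \d+: (\d+)")

def move_chunk_ms(event_line):
    """Total milliseconds across the step timings in a moveChunk event."""
    return sum(int(ms) for ms in STEP.findall(event_line))

# Step timings taken from the moveChunk.from event above.
event = ('details: { min: ..., max: ..., step1 of 6: 0, step2 of 6: 2, '
         'step3 of 6: 25, step4 of 6: 1050, step5 of 6: 21, step6 of 6: 0 }')
print(move_chunk_ms(event))  # → 1098
```

The 1098 ms total is consistent with the 1100 ms the command log reports for the same migration, with step4 (the data clone) dominating.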
m30001| Fri Feb 22 11:26:02.934 [cleanupOldData-512755caad0d9d7dc768ff8d] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30001| Fri Feb 22 11:26:02.934 [cleanupOldData-512755caad0d9d7dc768ff8d] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30001| Fri Feb 22 11:26:03.581 [cleanupOldData-512755caad0d9d7dc768ff8d] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558d5b44ae98eaa9d97b') } -> { _id: ObjectId('5127558e5b44ae98eaaa085b') } m30999| Fri Feb 22 11:26:03.919 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755cbce6119f732c45804 m30999| Fri Feb 22 11:26:03.921 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558e5b44ae98eaaa085b')", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:26:03.921 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 21|1||000000000000000000000000min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:26:03.921 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:26:03.922 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: 
ObjectId('5127558f5b44ae98eaaa373b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558e5b44ae98eaaa085b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:03.922 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755cbad0d9d7dc768ff8f m30001| Fri Feb 22 11:26:03.922 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:03-512755cbad0d9d7dc768ff90", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532363922), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:03.923 [conn4] moveChunk request accepted at version 21|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:03.949 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:26:03.949 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:03.959 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:03.969 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: 
{ cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:03.980 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 136, clonedBytes: 14280, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:03.990 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 347, clonedBytes: 36435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:04.006 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 688, clonedBytes: 72240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:04.038 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1348, clonedBytes: 141540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:04.103 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2600, clonedBytes: 273000, catchup: 0, steady: 0 
}, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:04.231 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5186, clonedBytes: 544530, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:04.487 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10318, clonedBytes: 1083390, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:26:04.576 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:26:04.576 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30000| Fri Feb 22 11:26:04.580 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30001| Fri Feb 22 11:26:05.000 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:05.000 [conn4] moveChunk setting version to: 22|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:26:05.000 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:26:05.007 [migrateThread] migrate commit succeeded 
flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30000| Fri Feb 22 11:26:05.007 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30000| Fri Feb 22 11:26:05.007 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:05-512755cdd8a80eaf6b759478", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532365007), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 626, step4 of 5: 0, step5 of 5: 431 } } m30001| Fri Feb 22 11:26:05.010 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:05.010 [conn4] moveChunk updating self version to: 22|1||51275580ce6119f732c457f2 through { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:05.011 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:05-512755cdad0d9d7dc768ff91", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532365011), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:05.011 
[conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:05.011 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:05.011 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:05.011 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:05.011 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:05.011 [cleanupOldData-512755cdad0d9d7dc768ff92] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:26:05.012 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:26:05.012 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:05-512755cdad0d9d7dc768ff93", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532365012), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 25, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:05.012 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558e5b44ae98eaaa085b') }, max: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558e5b44ae98eaaa085b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:39 r:25091 w:31 reslen:37 1090ms m30999| Fri Feb 22 11:26:05.014 [Balancer] ChunkManager: time to 
load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 95 version: 22|1||51275580ce6119f732c457f2 based on: 21|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:05.014 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. m30001| Fri Feb 22 11:26:05.031 [cleanupOldData-512755cdad0d9d7dc768ff92] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30001| Fri Feb 22 11:26:05.031 [cleanupOldData-512755cdad0d9d7dc768ff92] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30001| Fri Feb 22 11:26:05.704 [cleanupOldData-512755cdad0d9d7dc768ff92] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558e5b44ae98eaaa085b') } -> { _id: ObjectId('5127558f5b44ae98eaaa373b') } m30999| Fri Feb 22 11:26:06.017 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755cece6119f732c45805 m30999| Fri Feb 22 11:26:06.019 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('5127558f5b44ae98eaaa373b')", lastmod: Timestamp 22000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:26:06.019 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 22|1||000000000000000000000000min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }max: { _id: ObjectId('512755905b44ae98eaaa661b') }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:26:06.019 [conn4] warning: secondaryThrottle selected but no 
replication m30001| Fri Feb 22 11:26:06.019 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558f5b44ae98eaaa373b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:06.020 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755cead0d9d7dc768ff94 m30001| Fri Feb 22 11:26:06.020 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:06-512755cead0d9d7dc768ff95", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532366020), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:06.021 [conn4] moveChunk request accepted at version 22|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:06.037 [conn4] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:26:06.038 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:06.048 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 
11:26:06.058 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.068 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 264, clonedBytes: 27720, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.079 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 590, clonedBytes: 61950, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.095 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1111, clonedBytes: 116655, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.127 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2153, clonedBytes: 226065, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.191 [conn4] moveChunk data transfer progress: { 
active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4104, clonedBytes: 430920, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.320 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8079, clonedBytes: 848295, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:26:06.447 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:26:06.447 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } m30000| Fri Feb 22 11:26:06.452 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } m30001| Fri Feb 22 11:26:06.576 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:06.576 [conn4] moveChunk setting version to: 23|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:26:06.576 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:26:06.584 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: 
ObjectId('512755905b44ae98eaaa661b') } m30000| Fri Feb 22 11:26:06.584 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } m30000| Fri Feb 22 11:26:06.584 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:06-512755ced8a80eaf6b759479", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532366584), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 408, step4 of 5: 0, step5 of 5: 137 } } m30001| Fri Feb 22 11:26:06.586 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:06.586 [conn4] moveChunk updating self version to: 23|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:06.587 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:06-512755cead0d9d7dc768ff96", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532366587), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:06.587 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 
11:26:06.587 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:06.587 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:06.587 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:06.587 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:06.588 [cleanupOldData-512755cead0d9d7dc768ff97] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') }, # cursors remaining: 0 m30001| Fri Feb 22 11:26:06.588 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. m30001| Fri Feb 22 11:26:06.588 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:06-512755cead0d9d7dc768ff98", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532366588), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 16, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:06.588 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('5127558f5b44ae98eaaa373b') }, max: { _id: ObjectId('512755905b44ae98eaaa661b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('5127558f5b44ae98eaaa373b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:16617 w:31 reslen:37 569ms m30999| Fri Feb 22 11:26:06.590 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 96 version: 23|1||51275580ce6119f732c457f2 based on: 
22|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:06.590 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. m30001| Fri Feb 22 11:26:06.608 [cleanupOldData-512755cead0d9d7dc768ff97] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } m30001| Fri Feb 22 11:26:06.608 [cleanupOldData-512755cead0d9d7dc768ff97] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } m30001| Fri Feb 22 11:26:07.316 [cleanupOldData-512755cead0d9d7dc768ff97] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('5127558f5b44ae98eaaa373b') } -> { _id: ObjectId('512755905b44ae98eaaa661b') } m30999| Fri Feb 22 11:26:07.593 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755cfce6119f732c45806 m30999| Fri Feb 22 11:26:07.595 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755905b44ae98eaaa661b')", lastmod: Timestamp 23000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shard: "shard0001" } from: shard0001 to: shard0002 tag [] m30999| Fri Feb 22 11:26:07.595 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 23|1||000000000000000000000000min: { _id: ObjectId('512755905b44ae98eaaa661b') }max: { _id: ObjectId('512755915b44ae98eaaa94fb') }) shard0001:localhost:30001 -> shard0002:localhost:30002 m30001| Fri Feb 22 11:26:07.595 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:26:07.596 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", 
from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755905b44ae98eaaa661b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:07.597 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755cfad0d9d7dc768ff99 m30001| Fri Feb 22 11:26:07.597 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:07-512755cfad0d9d7dc768ff9a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532367597), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:26:07.598 [conn4] moveChunk request accepted at version 23|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:07.624 [conn4] moveChunk number of documents: 12000 m30002| Fri Feb 22 11:26:07.624 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:07.634 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.645 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", 
min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.655 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 53, clonedBytes: 5565, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.665 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 257, clonedBytes: 26985, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.681 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 585, clonedBytes: 61425, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.713 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1235, clonedBytes: 129675, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.778 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { 
_id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2434, clonedBytes: 255570, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:07.906 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4901, clonedBytes: 514605, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:08.162 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9786, clonedBytes: 1027530, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:26:08.292 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:26:08.292 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') } m30002| Fri Feb 22 11:26:08.293 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') } m30001| Fri Feb 22 11:26:08.674 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:08.675 [conn4] moveChunk setting version to: 
24|0||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:26:08.675 [conn6] Waiting for commit to finish m30002| Fri Feb 22 11:26:08.678 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') } m30002| Fri Feb 22 11:26:08.678 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') } m30002| Fri Feb 22 11:26:08.678 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:08-512755d054d20db69cce81ce", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532368678), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 666, step4 of 5: 0, step5 of 5: 386 } } m30001| Fri Feb 22 11:26:08.685 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:08.685 [conn4] moveChunk updating self version to: 24|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:08.686 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:08-512755d0ad0d9d7dc768ff9b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532368686), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { 
_id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:26:08.686 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:08.686 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:08.686 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:08.686 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:08.686 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:08.686 [cleanupOldData-512755d0ad0d9d7dc768ff9c] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') }, # cursors remaining: 0 m30001| Fri Feb 22 11:26:08.686 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
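The version strings in the entries above ("moveChunk setting version to: 24|0||...", "moveChunk updating self version to: 24|1||...") follow a consistent pattern across all three migrations in this log: each successful moveChunk increments the major component, the recipient's chunk gets minor 0, and the donor bumps a remaining chunk to minor 1, while the epoch (51275580ce6119f732c457f2) never changes. A minimal Python sketch of that bookkeeping, purely illustrative and not MongoDB internals (the function name and tuple layout are assumptions):

```python
# Illustrative sketch: a chunk version as seen in these logs is printed
# "major|minor||epoch". On a successful migration the major component is
# incremented; the migrated chunk gets minor 0 on the recipient, and the
# donor re-stamps one of its remaining chunks with minor 1. The epoch
# identifies the collection's sharding incarnation and is unchanged.
def bump_on_migration(version):
    major, minor, epoch = version
    to_shard = (major + 1, 0, epoch)   # e.g. 23|1 -> 24|0 on the TO-shard
    donor = (major + 1, 1, epoch)      # donor's remaining chunk becomes 24|1
    return to_shard, donor

before = (23, 1, "51275580ce6119f732c457f2")
to_shard, donor = bump_on_migration(before)
# to_shard == (24, 0, "51275580ce6119f732c457f2")
# donor    == (24, 1, "51275580ce6119f732c457f2")
```

This is why the Balancer's ChunkManager reload lines read "version: 24|1||... based on: 23|1||..." — only the major/minor pair advances between rounds.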
m30001| Fri Feb 22 11:26:08.686 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:08-512755d0ad0d9d7dc768ff9d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532368686), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 26, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:08.686 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755905b44ae98eaaa661b') }, max: { _id: ObjectId('512755915b44ae98eaaa94fb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755905b44ae98eaaa661b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:35 r:26154 w:31 reslen:37 1091ms m30999| Fri Feb 22 11:26:08.689 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 97 version: 24|1||51275580ce6119f732c457f2 based on: 23|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:08.689 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
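Each balancer round in this log walks the same sequence: moveChunk.start, a "clone" phase whose progress documents show cloned/clonedBytes growing toward 12000 docs / 1260000 bytes, a transition to "steady", commit accepted by the TO-shard, a version bump, and a forked cleanup delete on the donor. A minimal Python sketch of that state machine, under the assumption of fixed-size documents (the class and method names here are illustrative, not MongoDB's API):

```python
# Sketch of the migration phases visible in the "moveChunk data transfer
# progress" documents: state goes "clone" -> "steady" -> "done", with the
# counts.cloned / counts.clonedBytes fields growing during the clone phase.
class Migration:
    def __init__(self, ndocs, doc_bytes):
        self.ndocs = ndocs          # documents in the chunk (12000 in this log)
        self.doc_bytes = doc_bytes  # bytes per document (1260000/12000 = 105)
        self.cloned = 0
        self.state = "clone"

    def clone_batch(self, n):
        """Copy up to n documents; enter 'steady' once everything is cloned."""
        if self.state != "clone":
            raise RuntimeError("can only clone in 'clone' state")
        self.cloned = min(self.ndocs, self.cloned + n)
        if self.cloned == self.ndocs:
            self.state = "steady"

    def progress(self):
        """Shape of the counts sub-document reported while migrating."""
        return {"state": self.state,
                "counts": {"cloned": self.cloned,
                           "clonedBytes": self.cloned * self.doc_bytes}}

    def commit(self):
        """TO-shard accepts the commit only from the 'steady' state."""
        if self.state != "steady":
            raise RuntimeError("commit requires 'steady' state")
        self.state = "done"

m = Migration(ndocs=12000, doc_bytes=105)
while m.state == "clone":
    m.clone_batch(4000)
m.commit()
# m.progress() -> {'state': 'done', 'counts': {'cloned': 12000, 'clonedBytes': 1260000}}
```

The donor's cleanup ("moveChunk starting delete ... deleted 12000 documents") runs after commit because waitForDelete is false in every moveChunk request here, so the delete is forked rather than blocking the command.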
m30001| Fri Feb 22 11:26:08.706 [cleanupOldData-512755d0ad0d9d7dc768ff9c] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') }
m30001| Fri Feb 22 11:26:08.706 [cleanupOldData-512755d0ad0d9d7dc768ff9c] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') }
m30001| Fri Feb 22 11:26:09.513 [cleanupOldData-512755d0ad0d9d7dc768ff9c] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755905b44ae98eaaa661b') } -> { _id: ObjectId('512755915b44ae98eaaa94fb') }
m30999| Fri Feb 22 11:26:09.691 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755d1ce6119f732c45807
m30999| Fri Feb 22 11:26:09.693 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755915b44ae98eaaa94fb')", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:26:09.694 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 24|1||000000000000000000000000min: { _id: ObjectId('512755915b44ae98eaaa94fb') }max: { _id: ObjectId('512755925b44ae98eaaac3db') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:26:09.694 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:26:09.694 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755915b44ae98eaaa94fb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:26:09.695 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755d1ad0d9d7dc768ff9e
m30001| Fri Feb 22 11:26:09.695 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:09-512755d1ad0d9d7dc768ff9f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532369695), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:26:09.696 [conn4] moveChunk request accepted at version 24|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:26:09.714 [conn4] moveChunk number of documents: 12000
m30000| Fri Feb 22 11:26:09.714 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:26:09.724 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.735 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.745 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 133, clonedBytes: 13965, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.755 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 340, clonedBytes: 35700, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.771 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 680, clonedBytes: 71400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.803 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1351, clonedBytes: 141855, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.868 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2621, clonedBytes: 275205, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:09.996 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5229, clonedBytes: 549045, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:10.252 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10347, clonedBytes: 1086435, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:26:10.335 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:26:10.335 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30000| Fri Feb 22 11:26:10.339 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30001| Fri Feb 22 11:26:10.764 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:10.764 [conn4] moveChunk setting version to: 25|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:26:10.764 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:26:10.765 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30000| Fri Feb 22 11:26:10.765 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30000| Fri Feb 22 11:26:10.765 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:10-512755d2d8a80eaf6b75947a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532370765), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 620, step4 of 5: 0, step5 of 5: 429 } }
m30001| Fri Feb 22 11:26:10.775 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:26:10.775 [conn4] moveChunk updating self version to: 25|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:26:10.776 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:10-512755d2ad0d9d7dc768ffa0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532370776), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:26:10.776 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:26:10.776 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:26:10.776 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:26:10.776 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:26:10.776 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:26:10.776 [cleanupOldData-512755d2ad0d9d7dc768ffa1] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }, # cursors remaining: 0
m30001| Fri Feb 22 11:26:10.776 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:26:10.776 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:10-512755d2ad0d9d7dc768ffa2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532370776), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 18, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:26:10.776 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755915b44ae98eaaa94fb') }, max: { _id: ObjectId('512755925b44ae98eaaac3db') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755915b44ae98eaaa94fb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:33 r:24018 w:23 reslen:37 1082ms
m30999| Fri Feb 22 11:26:10.778 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 98 version: 25|1||51275580ce6119f732c457f2 based on: 24|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:26:10.779 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:26:10.796 [cleanupOldData-512755d2ad0d9d7dc768ffa1] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30001| Fri Feb 22 11:26:10.796 [cleanupOldData-512755d2ad0d9d7dc768ffa1] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30001| Fri Feb 22 11:26:11.492 [cleanupOldData-512755d2ad0d9d7dc768ffa1] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755915b44ae98eaaa94fb') } -> { _id: ObjectId('512755925b44ae98eaaac3db') }
m30999| Fri Feb 22 11:26:11.781 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755d3ce6119f732c45808
m30999| Fri Feb 22 11:26:11.784 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755925b44ae98eaaac3db')", lastmod: Timestamp 25000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shard: "shard0001" } from: shard0001 to: shard0002 tag []
m30999| Fri Feb 22 11:26:11.784 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 25|1||000000000000000000000000min: { _id: ObjectId('512755925b44ae98eaaac3db') }max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }) shard0001:localhost:30001 -> shard0002:localhost:30002
m30001| Fri Feb 22 11:26:11.784 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:26:11.784 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755925b44ae98eaaac3db')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:26:11.785 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755d3ad0d9d7dc768ffa3
m30001| Fri Feb 22 11:26:11.785 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:11-512755d3ad0d9d7dc768ffa4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532371785), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:26:11.786 [conn4] moveChunk request accepted at version 25|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:26:11.806 [conn4] moveChunk number of documents: 12000
m30002| Fri Feb 22 11:26:11.806 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:26:11.816 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:11.826 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 29, clonedBytes: 3045, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:11.837 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 221, clonedBytes: 23205, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:11.847 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 418, clonedBytes: 43890, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:11.863 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 695, clonedBytes: 72975, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:11.895 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1342, clonedBytes: 140910, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:11.960 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2624, clonedBytes: 275520, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:12.088 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4971, clonedBytes: 521955, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:12.344 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9869, clonedBytes: 1036245, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 11:26:12.462 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 11:26:12.462 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30002| Fri Feb 22 11:26:12.465 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30001| Fri Feb 22 11:26:12.856 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:12.856 [conn4] moveChunk setting version to: 26|0||51275580ce6119f732c457f2
m30002| Fri Feb 22 11:26:12.857 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 11:26:12.861 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30002| Fri Feb 22 11:26:12.861 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30002| Fri Feb 22 11:26:12.861 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:12-512755d454d20db69cce81cf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532372861), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 655, step4 of 5: 0, step5 of 5: 399 } }
m30001| Fri Feb 22 11:26:12.867 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:26:12.867 [conn4] moveChunk updating self version to: 26|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } for collection 'bulk_shard_insert.coll'
m30001| Fri Feb 22 11:26:12.868 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:12-512755d4ad0d9d7dc768ffa5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532372868), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, from: "shard0001", to: "shard0002" } }
m30001| Fri Feb 22 11:26:12.868 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:26:12.868 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:26:12.868 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:26:12.868 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:26:12.868 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:26:12.868 [cleanupOldData-512755d4ad0d9d7dc768ffa6] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }, # cursors remaining: 0
m30001| Fri Feb 22 11:26:12.868 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked.
m30001| Fri Feb 22 11:26:12.868 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:12-512755d4ad0d9d7dc768ffa7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532372868), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 19, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 11:26:12.869 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755925b44ae98eaaac3db') }, max: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755925b44ae98eaaac3db')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:19404 w:30 reslen:37 1084ms
m30999| Fri Feb 22 11:26:12.872 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 2ms sequenceNumber: 99 version: 26|1||51275580ce6119f732c457f2 based on: 25|1||51275580ce6119f732c457f2
m30999| Fri Feb 22 11:26:12.873 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked.
m30001| Fri Feb 22 11:26:12.888 [cleanupOldData-512755d4ad0d9d7dc768ffa6] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30001| Fri Feb 22 11:26:12.888 [cleanupOldData-512755d4ad0d9d7dc768ffa6] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30001| Fri Feb 22 11:26:13.090 [conn4] getmore bulk_shard_insert.coll cursorid:362865977049106 ntoreturn:0 keyUpdates:0 numYields: 16 locks(micros) r:325093 nreturned:39946 reslen:4194350 165ms
m30002| Fri Feb 22 11:26:13.293 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 locks(micros) r:193730 nreturned:39946 reslen:4194350 193ms
m30003| Fri Feb 22 11:26:13.473 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:169509 nreturned:39946 reslen:4194350 169ms
m30000| Fri Feb 22 11:26:13.657 [conn7] getmore bulk_shard_insert.coll cursorid:363141324917685 ntoreturn:0 keyUpdates:0 locks(micros) r:173332 nreturned:39946 reslen:4194350 173ms
m30001| Fri Feb 22 11:26:13.860 [cleanupOldData-512755d4ad0d9d7dc768ffa6] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755925b44ae98eaaac3db') } -> { _id: ObjectId('512755935b44ae98eaaaf2bb') }
m30999| Fri Feb 22 11:26:13.875 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755d5ce6119f732c45809
m30999| Fri Feb 22 11:26:13.878 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755935b44ae98eaaaf2bb')", lastmod: Timestamp 26000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:26:13.878 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 26|1||000000000000000000000000min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }max: { _id: ObjectId('512755945b44ae98eaab219b') }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:26:13.878 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:26:13.878 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755935b44ae98eaaaf2bb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:26:13.879 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755d5ad0d9d7dc768ffa8
m30001| Fri Feb 22 11:26:13.879 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:13-512755d5ad0d9d7dc768ffa9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532373879), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:26:13.881 [conn4] moveChunk request accepted at version 26|1||51275580ce6119f732c457f2
m30001| Fri Feb 22 11:26:13.908 [conn4] moveChunk number of documents: 12000
m30000| Fri Feb 22 11:26:13.908 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:26:13.919 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:13.929 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:13.939 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 108, clonedBytes: 11340, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:13.950 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 302, clonedBytes: 31710, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:13.966 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 603, clonedBytes: 63315, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:13.998 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1224, clonedBytes: 128520, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:14.062 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2351, clonedBytes: 246855, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:14.191 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4924, clonedBytes: 517020, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:14.447 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9953, clonedBytes: 1045065, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:26:14.550 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:26:14.550 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') }
m30000| Fri Feb 22 11:26:14.554 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') }
m30001| Fri Feb 22 11:26:14.959 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:26:14.959 [conn4] moveChunk setting version to: 27|0||51275580ce6119f732c457f2
m30000| Fri Feb 22 11:26:14.959 [conn16] Waiting for commit to finish
m30000| Fri Feb 22 11:26:14.960 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') }
m30000| Fri Feb 22 11:26:14.960 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') }
m30000| Fri Feb 22 11:26:14.960 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:14-512755d6d8a80eaf6b75947b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532374960), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 640, step4 of 5: 0, step5 of 5: 409 } }
m30001| Fri Feb 22 11:26:14.969 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 
1.0 } m30001| Fri Feb 22 11:26:14.970 [conn4] moveChunk updating self version to: 27|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:14.970 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:14-512755d6ad0d9d7dc768ffaa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532374970), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:14.970 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:14.970 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:14.970 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:14.970 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:14.971 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:14.971 [cleanupOldData-512755d6ad0d9d7dc768ffab] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') }, # cursors remaining: 1 m30001| Fri Feb 22 11:26:14.971 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:26:14.971 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:14-512755d6ad0d9d7dc768ffac", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532374971), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 27, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:14.971 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755935b44ae98eaaaf2bb') }, max: { _id: ObjectId('512755945b44ae98eaab219b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755935b44ae98eaaaf2bb')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:33 r:27459 w:31 reslen:37 1093ms m30999| Fri Feb 22 11:26:14.973 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 100 version: 27|1||51275580ce6119f732c457f2 based on: 26|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:14.973 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:26:14.991 [cleanupOldData-512755d6ad0d9d7dc768ffab] (looping 1) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } # cursors:1 m30001| Fri Feb 22 11:26:14.991 [cleanupOldData-512755d6ad0d9d7dc768ffab] cursors: 362865977049106 m30001| Fri Feb 22 11:26:15.097 [conn4] getmore bulk_shard_insert.coll cursorid:362865977049106 ntoreturn:0 keyUpdates:0 locks(micros) r:103294 nreturned:39946 reslen:4194350 103ms m30002| Fri Feb 22 11:26:15.215 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 locks(micros) r:113660 nreturned:39946 reslen:4194350 113ms m30003| Fri Feb 22 11:26:15.377 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:159565 nreturned:39946 reslen:4194350 159ms m30000| Fri Feb 22 11:26:15.559 [conn7] getmore bulk_shard_insert.coll cursorid:363141324917685 ntoreturn:0 keyUpdates:0 locks(micros) r:179300 nreturned:39946 reslen:4194350 179ms m30999| Fri Feb 22 11:26:15.975 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755d7ce6119f732c4580a m30999| Fri Feb 22 11:26:15.977 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755945b44ae98eaab219b')", lastmod: Timestamp 27000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shard: "shard0001" } from: shard0001 to: shard0002 tag [] m30999| Fri Feb 22 11:26:15.977 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 27|1||000000000000000000000000min: { _id: ObjectId('512755945b44ae98eaab219b') }max: { _id: ObjectId('512755955b44ae98eaab507b') }) shard0001:localhost:30001 -> 
shard0002:localhost:30002 m30001| Fri Feb 22 11:26:15.977 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:26:15.978 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755945b44ae98eaab219b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:15.979 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755d7ad0d9d7dc768ffad m30001| Fri Feb 22 11:26:15.979 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:15-512755d7ad0d9d7dc768ffae", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532375979), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:26:15.980 [conn4] moveChunk request accepted at version 27|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:15.997 [conn4] moveChunk number of documents: 12000 m30002| Fri Feb 22 11:26:15.997 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:16.007 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", 
counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.018 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 80, clonedBytes: 8400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.028 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 380, clonedBytes: 39900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.038 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 680, clonedBytes: 71400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.054 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1165, clonedBytes: 122325, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.087 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1947, clonedBytes: 204435, catchup: 0, 
steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.151 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3219, clonedBytes: 337995, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.279 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5584, clonedBytes: 586320, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:16.536 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10272, clonedBytes: 1078560, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:26:16.619 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:26:16.619 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } m30002| Fri Feb 22 11:26:16.623 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } m30001| Fri Feb 22 11:26:16.967 [initandlisten] connection accepted from 127.0.0.1:56384 #7 (7 connections now open) m30001| Fri Feb 22 11:26:17.048 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: 
"localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:17.048 [conn4] moveChunk setting version to: 28|0||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:26:17.049 [conn6] Waiting for commit to finish m30002| Fri Feb 22 11:26:17.050 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } m30002| Fri Feb 22 11:26:17.050 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } m30002| Fri Feb 22 11:26:17.050 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:17-512755d954d20db69cce81d0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532377050), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 620, step4 of 5: 0, step5 of 5: 431 } } m30001| Fri Feb 22 11:26:17.059 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:17.059 [conn4] moveChunk updating self version to: 28|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } for collection 
'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:17.060 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:17-512755d9ad0d9d7dc768ffaf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532377060), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:26:17.060 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:17.060 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:17.061 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:17.061 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:17.061 [cleanupOldData-512755d9ad0d9d7dc768ffb0] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') }, # cursors remaining: 1 m30001| Fri Feb 22 11:26:17.061 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:17.062 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:26:17.062 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:17-512755d9ad0d9d7dc768ffb1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532377062), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 17, step4 of 6: 1050, step5 of 6: 12, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:17.062 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755945b44ae98eaab219b') }, max: { _id: ObjectId('512755955b44ae98eaab507b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755945b44ae98eaab219b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:34 r:17373 w:31 reslen:37 1084ms m30999| Fri Feb 22 11:26:17.064 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 101 version: 28|1||51275580ce6119f732c457f2 based on: 27|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:17.064 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:26:17.081 [cleanupOldData-512755d9ad0d9d7dc768ffb0] (looping 1) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } # cursors:1 m30001| Fri Feb 22 11:26:17.081 [cleanupOldData-512755d9ad0d9d7dc768ffb0] cursors: 362865977049106 m30001| Fri Feb 22 11:26:17.124 [conn7] getmore bulk_shard_insert.coll cursorid:362865977049106 ntoreturn:0 keyUpdates:0 numYields: 3 locks(micros) r:250293 nreturned:39946 reslen:4194350 156ms m30002| Fri Feb 22 11:26:17.240 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:118717 nreturned:39946 reslen:4194350 112ms m30003| Fri Feb 22 11:26:17.395 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:152401 nreturned:39946 reslen:4194350 152ms m30000| Fri Feb 22 11:26:17.567 [conn7] getmore bulk_shard_insert.coll cursorid:363141324917685 ntoreturn:0 keyUpdates:0 locks(micros) r:170135 nreturned:39946 reslen:4194350 170ms m30999| Fri Feb 22 11:26:18.066 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755dace6119f732c4580b m30999| Fri Feb 22 11:26:18.068 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755955b44ae98eaab507b')", lastmod: Timestamp 28000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:26:18.069 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 28|1||000000000000000000000000min: { _id: ObjectId('512755955b44ae98eaab507b') }max: { _id: ObjectId('512755965b44ae98eaab7f5b') }) shard0001:localhost:30001 
-> shard0000:localhost:30000 m30001| Fri Feb 22 11:26:18.069 [conn7] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:26:18.069 [conn7] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755955b44ae98eaab507b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:18.070 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755daad0d9d7dc768ffb2 m30001| Fri Feb 22 11:26:18.070 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:18-512755daad0d9d7dc768ffb3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56384", time: new Date(1361532378070), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:18.071 [conn7] moveChunk request accepted at version 28|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:18.098 [conn7] moveChunk number of documents: 12000 m30000| Fri Feb 22 11:26:18.099 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:18.109 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", 
counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.119 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.129 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.140 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 152, clonedBytes: 15960, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.156 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 462, clonedBytes: 48510, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.188 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1129, clonedBytes: 118545, catchup: 0, steady: 0 }, 
ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.252 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2463, clonedBytes: 258615, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.380 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4950, clonedBytes: 519750, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:18.637 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9922, clonedBytes: 1041810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:26:18.741 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:26:18.741 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30000| Fri Feb 22 11:26:18.745 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30001| Fri Feb 22 11:26:18.997 [cleanupOldData-512755d6ad0d9d7dc768ffab] (looping 201) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } # cursors:1 m30001| Fri 
Feb 22 11:26:18.997 [cleanupOldData-512755d6ad0d9d7dc768ffab] cursors: 362865977049106 m30001| Fri Feb 22 11:26:19.149 [conn7] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:19.149 [conn7] moveChunk setting version to: 29|0||51275580ce6119f732c457f2 m30000| Fri Feb 22 11:26:19.149 [conn16] Waiting for commit to finish m30000| Fri Feb 22 11:26:19.150 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30000| Fri Feb 22 11:26:19.150 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30000| Fri Feb 22 11:26:19.150 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:19-512755dbd8a80eaf6b75947c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532379150), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 641, step4 of 5: 0, step5 of 5: 409 } } m30001| Fri Feb 22 11:26:19.160 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 
11:26:19.160 [conn7] moveChunk updating self version to: 29|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:19.161 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:19-512755dbad0d9d7dc768ffb4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56384", time: new Date(1361532379161), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:26:19.161 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:19.161 [conn7] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:19.161 [conn7] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:19.161 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:19.161 [cleanupOldData-512755dbad0d9d7dc768ffb5] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') }, # cursors remaining: 1 m30001| Fri Feb 22 11:26:19.162 [conn7] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:19.180 [conn7] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:26:19.180 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:19-512755dbad0d9d7dc768ffb6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56384", time: new Date(1361532379180), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 27, step4 of 6: 1050, step5 of 6: 12, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:19.180 [conn7] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: ObjectId('512755955b44ae98eaab507b') }, max: { _id: ObjectId('512755965b44ae98eaab7f5b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755955b44ae98eaab507b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:41 r:27525 w:31 reslen:37 1111ms m30001| Fri Feb 22 11:26:19.181 [cleanupOldData-512755dbad0d9d7dc768ffb5] (looping 1) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } # cursors:1 m30001| Fri Feb 22 11:26:19.181 [cleanupOldData-512755dbad0d9d7dc768ffb5] cursors: 362865977049106 m30999| Fri Feb 22 11:26:19.182 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 102 version: 29|1||51275580ce6119f732c457f2 based on: 28|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:19.182 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:26:19.184 [conn4] getmore bulk_shard_insert.coll cursorid:362865977049106 ntoreturn:0 keyUpdates:0 numYields: 3 locks(micros) r:181308 nreturned:39946 reslen:4194350 101ms m30002| Fri Feb 22 11:26:19.299 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:212108 nreturned:39946 reslen:4194350 112ms m30003| Fri Feb 22 11:26:19.464 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:162274 nreturned:39946 reslen:4194350 162ms m30000| Fri Feb 22 11:26:19.650 [conn7] getmore bulk_shard_insert.coll cursorid:363141324917685 ntoreturn:0 keyUpdates:0 locks(micros) r:184078 nreturned:39946 reslen:4194350 184ms m30999| Fri Feb 22 11:26:20.184 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755dcce6119f732c4580c m30999| Fri Feb 22 11:26:20.187 [Balancer] ns: bulk_shard_insert.coll going to move { _id: "bulk_shard_insert.coll-_id_ObjectId('512755965b44ae98eaab7f5b')", lastmod: Timestamp 29000|1, lastmodEpoch: ObjectId('51275580ce6119f732c457f2'), ns: "bulk_shard_insert.coll", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shard: "shard0001" } from: shard0001 to: shard0002 tag [] m30999| Fri Feb 22 11:26:20.187 [Balancer] moving chunk ns: bulk_shard_insert.coll moving ( ns:bulk_shard_insert.collshard: shard0001:localhost:30001lastmod: 29|1||000000000000000000000000min: { _id: ObjectId('512755965b44ae98eaab7f5b') }max: { _id: ObjectId('512755975b44ae98eaabae3b') }) shard0001:localhost:30001 -> shard0002:localhost:30002 m30001| Fri Feb 22 11:26:20.187 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:26:20.187 [conn4] received moveChunk request: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: 
ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755965b44ae98eaab7f5b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:26:20.188 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' acquired, ts : 512755dcad0d9d7dc768ffb7 m30001| Fri Feb 22 11:26:20.188 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:20-512755dcad0d9d7dc768ffb8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532380188), what: "moveChunk.start", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:26:20.189 [conn4] moveChunk request accepted at version 29|1||51275580ce6119f732c457f2 m30001| Fri Feb 22 11:26:20.206 [conn4] moveChunk number of documents: 12000 m30002| Fri Feb 22 11:26:20.206 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } for collection bulk_shard_insert.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:26:20.217 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.227 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, 
shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 100, clonedBytes: 10500, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.237 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 306, clonedBytes: 32130, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.247 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 509, clonedBytes: 53445, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.263 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 835, clonedBytes: 87675, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.295 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1481, clonedBytes: 155505, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.360 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", 
counts: { cloned: 2691, clonedBytes: 282555, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.488 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5080, clonedBytes: 533400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:20.744 [conn4] moveChunk data transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9776, clonedBytes: 1026480, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 11:26:20.874 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 11:26:20.874 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } m30002| Fri Feb 22 11:26:20.876 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } m30001| Fri Feb 22 11:26:21.092 [cleanupOldData-512755d9ad0d9d7dc768ffb0] (looping 201) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } # cursors:1 m30001| Fri Feb 22 11:26:21.092 [cleanupOldData-512755d9ad0d9d7dc768ffb0] cursors: 362865977049106 m30001| Fri Feb 22 11:26:21.208 [conn7] getmore bulk_shard_insert.coll cursorid:362865977049106 ntoreturn:0 keyUpdates:0 locks(micros) r:161970 nreturned:39946 reslen:4194350 161ms m30001| Fri Feb 22 11:26:21.256 [conn4] moveChunk data 
transfer progress: { active: true, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:26:21.256 [conn4] moveChunk setting version to: 30|0||51275580ce6119f732c457f2 m30002| Fri Feb 22 11:26:21.257 [conn6] Waiting for commit to finish m30002| Fri Feb 22 11:26:21.262 [migrateThread] migrate commit succeeded flushing to secondaries for 'bulk_shard_insert.coll' { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } m30002| Fri Feb 22 11:26:21.262 [migrateThread] migrate commit flushed to journal for 'bulk_shard_insert.coll' { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } m30002| Fri Feb 22 11:26:21.262 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:21-512755dd54d20db69cce81d1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361532381262), what: "moveChunk.to", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 667, step4 of 5: 0, step5 of 5: 387 } } m30001| Fri Feb 22 11:26:21.267 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "bulk_shard_insert.coll", from: "localhost:30001", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 12000, clonedBytes: 1260000, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:26:21.267 [conn4] moveChunk updating self version to: 30|1||51275580ce6119f732c457f2 through { _id: ObjectId('512755975b44ae98eaabae3b') } -> { 
_id: ObjectId('512755975b44ae98eaabdd1b') } for collection 'bulk_shard_insert.coll' m30001| Fri Feb 22 11:26:21.268 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:21-512755ddad0d9d7dc768ffb9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532381268), what: "moveChunk.commit", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, from: "shard0001", to: "shard0002" } } m30001| Fri Feb 22 11:26:21.268 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:21.268 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:21.268 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:26:21.268 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:26:21.268 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:26:21.268 [cleanupOldData-512755ddad0d9d7dc768ffba] (start) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') }, # cursors remaining: 1 m30001| Fri Feb 22 11:26:21.268 [conn4] distributed lock 'bulk_shard_insert.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361532289:23480' unlocked. 
m30001| Fri Feb 22 11:26:21.268 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:26:21-512755ddad0d9d7dc768ffbb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:56609", time: new Date(1361532381268), what: "moveChunk.from", ns: "bulk_shard_insert.coll", details: { min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 17, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:26:21.268 [conn4] command admin.$cmd command: { moveChunk: "bulk_shard_insert.coll", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0001", toShard: "shard0002", min: { _id: ObjectId('512755965b44ae98eaab7f5b') }, max: { _id: ObjectId('512755975b44ae98eaabae3b') }, maxChunkSizeBytes: 1048576, shardId: "bulk_shard_insert.coll-_id_ObjectId('512755965b44ae98eaab7f5b')", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:16872 w:26 reslen:37 1081ms m30999| Fri Feb 22 11:26:21.270 [Balancer] ChunkManager: time to load chunks for bulk_shard_insert.coll: 1ms sequenceNumber: 103 version: 30|1||51275580ce6119f732c457f2 based on: 29|1||51275580ce6119f732c457f2 m30999| Fri Feb 22 11:26:21.271 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:26:21.288 [cleanupOldData-512755ddad0d9d7dc768ffba] (looping 1) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } # cursors:1 m30001| Fri Feb 22 11:26:21.288 [cleanupOldData-512755ddad0d9d7dc768ffba] cursors: 362865977049106 m30002| Fri Feb 22 11:26:21.388 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 locks(micros) r:177774 nreturned:39946 reslen:4194350 177ms m30003| Fri Feb 22 11:26:21.553 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:162028 nreturned:39946 reslen:4194350 162ms m30000| Fri Feb 22 11:26:21.724 [conn7] getmore bulk_shard_insert.coll cursorid:363141324917685 ntoreturn:0 keyUpdates:0 locks(micros) r:168513 nreturned:16114 reslen:1691990 168ms m30999| Fri Feb 22 11:26:22.273 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' acquired, ts : 512755dece6119f732c4580d m30999| Fri Feb 22 11:26:22.276 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361532288:16838' unlocked. 
m30001| Fri Feb 22 11:26:23.003 [cleanupOldData-512755d6ad0d9d7dc768ffab] (looping 401) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } # cursors:1 m30001| Fri Feb 22 11:26:23.003 [cleanupOldData-512755d6ad0d9d7dc768ffab] cursors: 362865977049106 m30001| Fri Feb 22 11:26:23.026 [conn4] getmore bulk_shard_insert.coll cursorid:362865977049106 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:198439 nreturned:39946 reslen:4194350 102ms m30002| Fri Feb 22 11:26:23.151 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 locks(micros) r:121740 nreturned:39946 reslen:4194350 121ms m30001| Fri Feb 22 11:26:23.187 [cleanupOldData-512755dbad0d9d7dc768ffb5] (looping 201) waiting to cleanup bulk_shard_insert.coll from { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } # cursors:1 m30001| Fri Feb 22 11:26:23.187 [cleanupOldData-512755dbad0d9d7dc768ffb5] cursors: 362865977049106 m30003| Fri Feb 22 11:26:23.313 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:159329 nreturned:39946 reslen:4194350 159ms m30001| Fri Feb 22 11:26:24.368 [cleanupOldData-512755dbad0d9d7dc768ffb5] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30001| Fri Feb 22 11:26:24.368 [cleanupOldData-512755dbad0d9d7dc768ffb5] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30001| Fri Feb 22 11:26:24.372 [cleanupOldData-512755ddad0d9d7dc768ffba] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } m30001| Fri Feb 22 11:26:24.382 [cleanupOldData-512755d9ad0d9d7dc768ffb0] 
waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } m30001| Fri Feb 22 11:26:24.384 [cleanupOldData-512755d6ad0d9d7dc768ffab] waiting to remove documents for bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } m30002| Fri Feb 22 11:26:24.481 [conn4] getmore bulk_shard_insert.coll cursorid:361929941131514 ntoreturn:0 keyUpdates:0 locks(micros) r:114801 nreturned:20224 reslen:2123540 114ms m30003| Fri Feb 22 11:26:24.644 [conn7] getmore bulk_shard_insert.coll cursorid:360722615982021 ntoreturn:0 keyUpdates:0 locks(micros) r:162013 nreturned:39946 reslen:4194350 162ms m30001| Fri Feb 22 11:26:25.152 [cleanupOldData-512755dbad0d9d7dc768ffb5] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755955b44ae98eaab507b') } -> { _id: ObjectId('512755965b44ae98eaab7f5b') } m30001| Fri Feb 22 11:26:25.153 [cleanupOldData-512755d6ad0d9d7dc768ffab] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } m30001| Fri Feb 22 11:26:25.896 [cleanupOldData-512755d6ad0d9d7dc768ffab] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755935b44ae98eaaaf2bb') } -> { _id: ObjectId('512755945b44ae98eaab219b') } m30001| Fri Feb 22 11:26:25.896 [cleanupOldData-512755d9ad0d9d7dc768ffb0] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } Inserted 1000000 count : 1011999 itcount : 1000000 m30000| Fri Feb 22 11:26:26.023 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 11:26:26.023 [interruptThread] now exiting m30000| Fri Feb 22 11:26:26.023 dbexit: m30000| Fri Feb 22 11:26:26.023 [interruptThread] shutdown: going to close 
listening sockets... m30000| Fri Feb 22 11:26:26.023 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 11:26:26.023 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 11:26:26.023 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 11:26:26.023 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 11:26:26.023 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 11:26:26.023 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 11:26:26.023 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 11:26:26.023 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 11:26:26.023 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 11:26:26.023 [conn2] end connection 127.0.0.1:35416 (19 connections now open) m30000| Fri Feb 22 11:26:26.023 [conn1] end connection 127.0.0.1:40234 (19 connections now open) m30000| Fri Feb 22 11:26:26.023 [conn3] end connection 127.0.0.1:42442 (19 connections now open) m30000| Fri Feb 22 11:26:26.023 [conn5] end connection 127.0.0.1:55370 (19 connections now open) m30000| Fri Feb 22 11:26:26.023 [conn6] end connection 127.0.0.1:63280 (19 connections now open) m30000| Fri Feb 22 11:26:26.023 [conn7] end connection 127.0.0.1:61993 (19 connections now open) m30000| Fri Feb 22 11:26:26.023 [conn9] end connection 127.0.0.1:37721 (19 connections now open) m30999| Fri Feb 22 11:26:26.023 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed m30000| Fri Feb 22 11:26:26.023 [conn8] end connection 127.0.0.1:64913 (19 connections now open) m30000| Fri Feb 22 11:26:26.024 [conn10] end connection 127.0.0.1:58390 (19 connections now open) m30000| Fri Feb 22 11:26:26.024 [conn17] end connection 127.0.0.1:46248 (19 connections now open) m30999| Fri Feb 22 11:26:26.024 [WriteBackListener-localhost:30000] Detected bad connection created at 1361532288762309 microSec, clearing pool 
for localhost:30000 m30000| Fri Feb 22 11:26:26.024 [conn12] end connection 127.0.0.1:51202 (19 connections now open) m30000| Fri Feb 22 11:26:26.024 [conn19] end connection 127.0.0.1:60089 (19 connections now open) m30999| Fri Feb 22 11:26:26.024 [WriteBackListener-localhost:30000] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30000 ns: admin.$cmd query: { writebacklisten: ObjectId('51275580ce6119f732c457f0') } m30001| Fri Feb 22 11:26:26.024 [conn6] end connection 127.0.0.1:37533 (6 connections now open) m30000| Fri Feb 22 11:26:26.063 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 11:26:26.076 [conn11] end connection 127.0.0.1:60144 (7 connections now open) m30000| Fri Feb 22 11:26:26.077 [conn20] end connection 127.0.0.1:46999 (6 connections now open) m30000| Fri Feb 22 11:26:26.077 [conn15] end connection 127.0.0.1:37407 (6 connections now open) m30000| Fri Feb 22 11:26:26.077 [conn13] end connection 127.0.0.1:56606 (6 connections now open) m30000| Fri Feb 22 11:26:26.077 [conn16] end connection 127.0.0.1:49100 (4 connections now open) m30000| Fri Feb 22 11:26:26.077 [conn18] end connection 127.0.0.1:60277 (5 connections now open) m30000| Fri Feb 22 11:26:26.077 [conn14] end connection 127.0.0.1:62107 (4 connections now open) m30000| Fri Feb 22 11:26:26.077 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 11:26:26.077 [interruptThread] journalCleanup... 
m30000| Fri Feb 22 11:26:26.077 [interruptThread] removeJournalFiles m30000| Fri Feb 22 11:26:26.078 dbexit: really exiting now m30001| Fri Feb 22 11:26:26.607 [cleanupOldData-512755d9ad0d9d7dc768ffb0] moveChunk deleted 12000 documents for bulk_shard_insert.coll from { _id: ObjectId('512755945b44ae98eaab219b') } -> { _id: ObjectId('512755955b44ae98eaab507b') } m30001| Fri Feb 22 11:26:26.607 [cleanupOldData-512755ddad0d9d7dc768ffba] moveChunk starting delete for: bulk_shard_insert.coll from { _id: ObjectId('512755965b44ae98eaab7f5b') } -> { _id: ObjectId('512755975b44ae98eaabae3b') } m30001| Fri Feb 22 11:26:27.023 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:26:27.023 [interruptThread] now exiting m30001| Fri Feb 22 11:26:27.023 dbexit: m30001| Fri Feb 22 11:26:27.023 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 11:26:27.023 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 11:26:27.023 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 11:26:27.023 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 11:26:27.023 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:26:27.023 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:26:27.023 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 11:26:27.023 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:26:27.023 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:26:27.023 [interruptThread] shutdown: final commit... 
m30001| Fri Feb 22 11:26:27.023 [conn1] end connection 127.0.0.1:56952 (5 connections now open) m30001| Fri Feb 22 11:26:27.023 [conn3] end connection 127.0.0.1:60370 (5 connections now open) m30001| Fri Feb 22 11:26:27.023 [conn4] end connection 127.0.0.1:56609 (5 connections now open) m30999| Fri Feb 22 11:26:27.023 [WriteBackListener-localhost:30001] DBClientCursor::init call() failed m30001| Fri Feb 22 11:26:27.023 [conn7] end connection 127.0.0.1:56384 (5 connections now open) m30999| Fri Feb 22 11:26:27.024 [WriteBackListener-localhost:30001] Detected bad connection created at 1361532288951282 microSec, clearing pool for localhost:30001 m30999| Fri Feb 22 11:26:27.024 [WriteBackListener-localhost:30001] WriteBackListener exception : DBClientBase::findN: transport error: localhost:30001 ns: admin.$cmd query: { writebacklisten: ObjectId('51275580ce6119f732c457f0') } m30001| Fri Feb 22 11:26:27.024 [conn5] end connection 127.0.0.1:51527 (5 connections now open) m30999| Fri Feb 22 11:26:27.024 [WriteBackListener-localhost:30000] Socket recv() errno:131 Connection reset by peer 127.0.0.1:30000 m30999| Fri Feb 22 11:26:27.024 [WriteBackListener-localhost:30000] SocketException: remote: 127.0.0.1:30000 error: 9001 socket exception [1] server [127.0.0.1:30000] m30999| Fri Feb 22 11:26:27.024 [WriteBackListener-localhost:30000] DBClientCursor::init call() failed m30999| Fri Feb 22 11:26:27.024 [WriteBackListener-localhost:30000] Assertion: 13632:couldn't get updated shard list from config server m30999| 0xaaa408 0xa74f3c 0x9fde75 0xa3b04a 0xa793be 0xa79f6c 0xafb11e 0xfffffd7fff257024 0xfffffd7fff2572f0 m30999| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo15printStackTraceERSo+0x28 [0xaaa408] m30999| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo11msgassertedEiPKc+0x9c [0xa74f3c] m30999| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo15StaticShardInfo6reloadEv+0xc05 [0x9fde75] m30999| 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo17WriteBackListener3runEv+0xba [0xa3b04a] m30999| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xce [0xa793be] m30999| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7c [0xa79f6c] m30999| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'thread_proxy+0x7e [0xafb11e] m30999| /lib/amd64/libc.so.1'_thrp_setup+0xbc [0xfffffd7fff257024] m30999| /lib/amd64/libc.so.1'_lwp_start+0x0 [0xfffffd7fff2572f0] m30999| Fri Feb 22 11:26:27.025 [WriteBackListener-localhost:30000] Detected bad connection created at 1361532288759554 microSec, clearing pool for localhost:30000 m30999| Fri Feb 22 11:26:27.025 [WriteBackListener-localhost:30000] WriteBackListener exception : couldn't get updated shard list from config server m30002| Fri Feb 22 11:26:27.041 [conn6] end connection 127.0.0.1:57701 (7 connections now open) m30002| Fri Feb 22 11:26:27.041 [conn5] end connection 127.0.0.1:36223 (7 connections now open) m30001| Fri Feb 22 11:26:27.072 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:26:27.110 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:26:27.110 [interruptThread] journalCleanup... m30001| Fri Feb 22 11:26:27.110 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:26:27.111 dbexit: really exiting now m30002| Fri Feb 22 11:26:28.023 got signal 15 (Terminated), will terminate after current cmd ends m30002| Fri Feb 22 11:26:28.023 [interruptThread] now exiting m30002| Fri Feb 22 11:26:28.023 dbexit: m30002| Fri Feb 22 11:26:28.023 [interruptThread] shutdown: going to close listening sockets... 
m30002| Fri Feb 22 11:26:28.023 [interruptThread] closing listening socket: 18 m30002| Fri Feb 22 11:26:28.023 [interruptThread] closing listening socket: 19 m30002| Fri Feb 22 11:26:28.023 [interruptThread] closing listening socket: 20 m30002| Fri Feb 22 11:26:28.023 [interruptThread] removing socket file: /tmp/mongodb-30002.sock m30002| Fri Feb 22 11:26:28.023 [interruptThread] shutdown: going to flush diaglog... m30002| Fri Feb 22 11:26:28.023 [interruptThread] shutdown: going to close sockets... m30002| Fri Feb 22 11:26:28.023 [interruptThread] shutdown: waiting for fs preallocator... m30002| Fri Feb 22 11:26:28.023 [interruptThread] shutdown: lock for final commit... m30002| Fri Feb 22 11:26:28.023 [interruptThread] shutdown: final commit... m30002| Fri Feb 22 11:26:28.023 [conn1] end connection 127.0.0.1:36630 (5 connections now open) m30002| Fri Feb 22 11:26:30.024 [conn15] end connection 127.0.0.1:65134 (0 connections now open) 1.7224 minutes Fri Feb 22 11:26:31.065 [initandlisten] connection accepted from 127.0.0.1:43855 #16 (1 connection now open) Fri Feb 22 11:26:31.066 [conn16] end connection 127.0.0.1:43855 (0 connections now open) ******************************************* Test : capped4.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/capped4.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/capped4.js";TestData.testFile = "capped4.js";TestData.testName = "capped4";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:26:31 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:26:31.248 [initandlisten] connection accepted from 127.0.0.1:48598 #17 (1 connection now open) null Fri Feb 22 11:26:31.258 [conn17] CMD: drop test.jstests_capped4 Fri Feb 22 11:26:31.260 [conn17] build index test.jstests_capped4 { _id: 1 } Fri Feb 22 11:26:31.262 [conn17] build index done. scanned 0 total records. 0.002 secs Fri Feb 22 11:26:31.264 [conn17] build index test.jstests_capped4 { i: 1.0 } Fri Feb 22 11:26:31.266 [conn17] build index done. scanned 0 total records. 0.002 secs Fri Feb 22 11:26:31.304 [conn17] CMD: validate test.jstests_capped4 Fri Feb 22 11:26:31.304 [conn17] validating index 0: test.jstests_capped4.$_id_ Fri Feb 22 11:26:31.304 [conn17] validating index 1: test.jstests_capped4.$i_1 Fri Feb 22 11:26:31.305 [conn17] DatabaseHolder::closeAll path:/data/db/sconsTests/ Fri Feb 22 11:26:31.380 [conn17] end connection 127.0.0.1:48598 (0 connections now open) 333.4959 ms Fri Feb 22 11:26:31.401 [initandlisten] connection accepted from 127.0.0.1:37906 #18 (1 connection now open) Fri Feb 22 11:26:31.402 [conn18] end connection 127.0.0.1:37906 (0 connections now open) ******************************************* Test : command_line_parsing.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/command_line_parsing.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/command_line_parsing.js";TestData.testFile = "command_line_parsing.js";TestData.testName = "command_line_parsing";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:26:31 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:26:31.579 [initandlisten] connection accepted from 127.0.0.1:33781 #19 (1 connection now open) null startMongod WARNING DELETES DATA DIRECTORY THIS IS FOR TESTING ONLY Fri Feb 22 11:26:31.585 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31000 --dbpath /data/db/jstests_slowNightly_command_line_parsing --notablescan --setParameter enableTestCommands=1 m31000| Fri Feb 22 11:26:31.656 [initandlisten] MongoDB starting : pid=23421 port=31000 dbpath=/data/db/jstests_slowNightly_command_line_parsing 64-bit host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 11:26:31.657 [initandlisten] m31000| Fri Feb 22 11:26:31.657 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 11:26:31.657 [initandlisten] ** uses to detect impending page faults. 
m31000| Fri Feb 22 11:26:31.657 [initandlisten] ** This may result in slower performance for certain use cases
m31000| Fri Feb 22 11:26:31.657 [initandlisten]
m31000| Fri Feb 22 11:26:31.657 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31000| Fri Feb 22 11:26:31.657 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31000| Fri Feb 22 11:26:31.657 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31000| Fri Feb 22 11:26:31.657 [initandlisten] allocator: system
m31000| Fri Feb 22 11:26:31.657 [initandlisten] options: { dbpath: "/data/db/jstests_slowNightly_command_line_parsing", notablescan: true, port: 31000, setParameter: [ "enableTestCommands=1" ] }
m31000| Fri Feb 22 11:26:31.657 [initandlisten] journal dir=/data/db/jstests_slowNightly_command_line_parsing/journal
m31000| Fri Feb 22 11:26:31.657 [initandlisten] recover : no journal files present, no recovery needed
m31000| Fri Feb 22 11:26:31.672 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing/local.ns, filling with zeroes...
m31000| Fri Feb 22 11:26:31.672 [FileAllocator] creating directory /data/db/jstests_slowNightly_command_line_parsing/_tmp
m31000| Fri Feb 22 11:26:31.673 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing/local.ns, size: 16MB, took 0 secs
m31000| Fri Feb 22 11:26:31.673 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing/local.0, filling with zeroes...
m31000| Fri Feb 22 11:26:31.673 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing/local.0, size: 64MB, took 0 secs
m31000| Fri Feb 22 11:26:31.675 [websvr] admin web console waiting for connections on port 32000
m31000| Fri Feb 22 11:26:31.675 [initandlisten] waiting for connections on port 31000
m31000| Fri Feb 22 11:26:31.788 [initandlisten] connection accepted from 127.0.0.1:45719 #1 (1 connection now open)
m31000| Fri Feb 22 11:26:31.789 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing/jstests_slowNightly_command_line_parsing.ns, filling with zeroes...
m31000| Fri Feb 22 11:26:31.789 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing/jstests_slowNightly_command_line_parsing.ns, size: 16MB, took 0 secs
m31000| Fri Feb 22 11:26:31.790 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing/jstests_slowNightly_command_line_parsing.0, filling with zeroes...
m31000| Fri Feb 22 11:26:31.790 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing/jstests_slowNightly_command_line_parsing.0, size: 64MB, took 0 secs
m31000| Fri Feb 22 11:26:31.790 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing/jstests_slowNightly_command_line_parsing.1, filling with zeroes...
m31000| Fri Feb 22 11:26:31.790 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing/jstests_slowNightly_command_line_parsing.1, size: 128MB, took 0 secs
m31000| Fri Feb 22 11:26:31.792 [conn1] build index jstests_slowNightly_command_line_parsing.jstests_slowNightly_command_line_parsing { _id: 1 }
m31000| Fri Feb 22 11:26:31.793 [conn1] build index done. scanned 0 total records. 0.001 secs
m31000| Fri Feb 22 11:26:31.806 [conn1] assertion 10111 table scans not allowed:jstests_slowNightly_command_line_parsing.jstests_slowNightly_command_line_parsing ns:jstests_slowNightly_command_line_parsing.jstests_slowNightly_command_line_parsing query:{ a: 1.0 }
m31000| Fri Feb 22 11:26:31.806 [conn1] problem detected during query over jstests_slowNightly_command_line_parsing.jstests_slowNightly_command_line_parsing : { $err: "table scans not allowed:jstests_slowNightly_command_line_parsing.jstests_slowNightly_command_line_parsing", code: 10111 }
startMongod WARNING DELETES DATA DIRECTORY THIS IS FOR TESTING ONLY
Fri Feb 22 11:26:31.811 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31002 --dbpath /data/db/jstests_slowNightly_command_line_parsing2 --config jstests/libs/testconfig --setParameter enableTestCommands=1
m31002| warning "fastsync" should not be put in your configuration file
m31002| warning: remove or comment out this line by starting it with '#', skipping now : version = false
m31002| Fri Feb 22 11:26:31.902 [initandlisten] MongoDB starting : pid=23430 port=31002 dbpath=/data/db/jstests_slowNightly_command_line_parsing2 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31002| Fri Feb 22 11:26:31.902 [initandlisten]
m31002| Fri Feb 22 11:26:31.902 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31002| Fri Feb 22 11:26:31.902 [initandlisten] ** uses to detect impending page faults.
m31002| Fri Feb 22 11:26:31.902 [initandlisten] ** This may result in slower performance for certain use cases
m31002| Fri Feb 22 11:26:31.902 [initandlisten]
m31002| Fri Feb 22 11:26:31.902 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31002| Fri Feb 22 11:26:31.902 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31002| Fri Feb 22 11:26:31.902 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31002| Fri Feb 22 11:26:31.902 [initandlisten] allocator: system
m31002| Fri Feb 22 11:26:31.902 [initandlisten] options: { config: "jstests/libs/testconfig", dbpath: "/data/db/jstests_slowNightly_command_line_parsing2", fastsync: "true", port: 31002, setParameter: [ "enableTestCommands=1" ] }
m31002| Fri Feb 22 11:26:31.903 [initandlisten] journal dir=/data/db/jstests_slowNightly_command_line_parsing2/journal
m31002| Fri Feb 22 11:26:31.903 [initandlisten] recover : no journal files present, no recovery needed
m31002| Fri Feb 22 11:26:31.917 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing2/local.ns, filling with zeroes...
m31002| Fri Feb 22 11:26:31.917 [FileAllocator] creating directory /data/db/jstests_slowNightly_command_line_parsing2/_tmp
m31002| Fri Feb 22 11:26:31.917 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing2/local.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:26:31.917 [FileAllocator] allocating new datafile /data/db/jstests_slowNightly_command_line_parsing2/local.0, filling with zeroes...
m31002| Fri Feb 22 11:26:31.917 [FileAllocator] done allocating datafile /data/db/jstests_slowNightly_command_line_parsing2/local.0, size: 64MB, took 0 secs
m31002| Fri Feb 22 11:26:31.920 [initandlisten] waiting for connections on port 31002
m31002| Fri Feb 22 11:26:31.920 [websvr] admin web console waiting for connections on port 32002
m31002| Fri Feb 22 11:26:32.013 [initandlisten] connection accepted from 127.0.0.1:47592 #1 (1 connection now open)
Expected: { "config" : "jstests/libs/testconfig", "dbpath" : "/data/db/jstests_slowNightly_command_line_parsing2", "fastsync" : "true", "port" : 31002, "setParameter" : [ "enableTestCommands=1" ] }
Actual:   { "config" : "jstests/libs/testconfig", "dbpath" : "/data/db/jstests_slowNightly_command_line_parsing2", "fastsync" : "true", "port" : 31002, "setParameter" : [ "enableTestCommands=1" ] }
Fri Feb 22 11:26:34.018 [conn19] end connection 127.0.0.1:33781 (0 connections now open)
                2635.5171 ms
Fri Feb 22 11:26:34.039 [initandlisten] connection accepted from 127.0.0.1:64415 #20 (1 connection now open)
Fri Feb 22 11:26:34.039 [conn20] end connection 127.0.0.1:64415 (0 connections now open)
*******************************************
Test : cursor8.js ...
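The Expected/Actual pair printed earlier for command_line_parsing.js is a deep, order-insensitive comparison of the parsed server options (in the real test, the actual document comes from the server's getCmdLineOpts command). A minimal sketch of such a comparison in plain JavaScript; the helper name `docEq` is hypothetical, not a mongo shell function:

```javascript
// Deep, key-order-insensitive equality for plain option documents, sketching
// the kind of Expected-vs-Actual check the command-line-parsing test makes.
// `docEq` is a hypothetical helper name, not part of the mongo shell API.
function docEq(a, b) {
  if (a === b) return true;
  if (typeof a !== "object" || typeof b !== "object" || a === null || b === null)
    return false;
  const ka = Object.keys(a).sort();
  const kb = Object.keys(b).sort();
  if (ka.length !== kb.length) return false;
  return ka.every((k, i) => k === kb[i] && docEq(a[k], b[k]));
}

const expected = {
  config: "jstests/libs/testconfig",
  dbpath: "/data/db/jstests_slowNightly_command_line_parsing2",
  fastsync: "true",
  port: 31002,
  setParameter: ["enableTestCommands=1"],
};
// In the real test, `actual` would be the parsed-options document reported
// by the server; here it is the same data with keys in a different order.
const actual = {
  port: 31002,
  config: "jstests/libs/testconfig",
  fastsync: "true",
  setParameter: ["enableTestCommands=1"],
  dbpath: "/data/db/jstests_slowNightly_command_line_parsing2",
};

console.log(docEq(expected, actual)); // true: same keys and values, order ignored
```

Arrays fall out of the same recursion (their keys are the indices), which is enough for option documents of this shape.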
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/cursor8.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/cursor8.js";TestData.testFile = "cursor8.js";TestData.testName = "cursor8";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:26:34 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:26:34.211 [initandlisten] connection accepted from 127.0.0.1:59198 #21 (1 connection now open)
null
Fri Feb 22 11:26:34.220 [conn21] CMD: drop test.cursor8
Fri Feb 22 11:26:34.221 [conn21] build index test.cursor8 { _id: 1 }
Fri Feb 22 11:26:34.224 [conn21] build index done. scanned 0 total records. 0.002 secs
Fri Feb 22 11:26:34.225 [conn21] DatabaseHolder::closeAll path:/data/db/sconsTests/
Fri Feb 22 11:26:34.241 [conn21] end connection 127.0.0.1:59198 (0 connections now open)
                219.2149 ms
Fri Feb 22 11:26:34.260 [initandlisten] connection accepted from 127.0.0.1:58534 #22 (1 connection now open)
Fri Feb 22 11:26:34.261 [conn22] end connection 127.0.0.1:58534 (0 connections now open)
*******************************************
Test : dur_big_atomic_update.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/dur_big_atomic_update.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/dur_big_atomic_update.js";TestData.testFile = "dur_big_atomic_update.js";TestData.testName = "dur_big_atomic_update";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:26:34 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:26:34.432 [initandlisten] connection accepted from 127.0.0.1:56970 #23 (1 connection now open)
null
Fri Feb 22 11:26:34.438 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/dur_big_atomic_update --dur --durOptions 8 --setParameter enableTestCommands=1
m30001| Fri Feb 22 11:26:34.537 [initandlisten] MongoDB starting : pid=23437 port=30001 dbpath=/data/db/dur_big_atomic_update 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 11:26:34.538 [initandlisten]
m30001| Fri Feb 22 11:26:34.538 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 11:26:34.538 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 11:26:34.538 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 11:26:34.538 [initandlisten]
m30001| Fri Feb 22 11:26:34.538 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 11:26:34.538 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 11:26:34.538 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 11:26:34.538 [initandlisten] allocator: system
m30001| Fri Feb 22 11:26:34.538 [initandlisten] options: { dbpath: "/data/db/dur_big_atomic_update", dur: true, durOptions: 8, port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 11:26:34.538 [initandlisten] journal dir=/data/db/dur_big_atomic_update/journal
m30001| Fri Feb 22 11:26:34.539 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 11:26:34.557 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/local.ns, filling with zeroes...
m30001| Fri Feb 22 11:26:34.557 [FileAllocator] creating directory /data/db/dur_big_atomic_update/_tmp
m30001| Fri Feb 22 11:26:34.557 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:26:34.557 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/local.0, filling with zeroes...
m30001| Fri Feb 22 11:26:34.557 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:26:34.561 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 11:26:34.561 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 11:26:34.641 [initandlisten] connection accepted from 127.0.0.1:38178 #1 (1 connection now open)
m30001| Fri Feb 22 11:26:34.643 [conn1] CMD: drop test.foo
m30001| Fri Feb 22 11:26:34.644 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.ns, filling with zeroes...
m30001| Fri Feb 22 11:26:34.644 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:26:34.644 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.0, filling with zeroes...
m30001| Fri Feb 22 11:26:34.644 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:26:34.645 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.1, filling with zeroes...
m30001| Fri Feb 22 11:26:34.645 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:26:34.648 [conn1] build index test.foo { _id: 1 }
m30001| Fri Feb 22 11:26:34.649 [conn1] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:26:34.832 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.2, filling with zeroes...
m30001| Fri Feb 22 11:26:34.832 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.2, size: 256MB, took 0 secs
m30001| Fri Feb 22 11:26:36.437 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.3, filling with zeroes...
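The FileAllocator sizes in this log grow in a fixed pattern: a 16MB namespace file, then datafiles of 64MB, 128MB, 256MB, 512MB, 1024MB, and finally 2047MB. A small sketch of that doubling-with-cap rule as read off this output (this is an inference from the log, not MongoDB's actual allocator code):

```javascript
// Datafile size sequence inferred from the FileAllocator lines above:
// each datafile doubles the previous one, capped at 2047MB (as test.5 shows).
// This models the observed log output only, not mongod's internal logic.
function datafileSizesMB(count) {
  const sizes = [];
  let size = 64; // first datafile (test.0) is 64MB in this log
  for (let i = 0; i < count; i++) {
    sizes.push(Math.min(size, 2047)); // cap just under 2GB
    size *= 2;
  }
  return sizes;
}

console.log(datafileSizesMB(6)); // [ 64, 128, 256, 512, 1024, 2047 ]
```

Preallocating the next file ahead of need is why each "done allocating" line reports "took 0 secs" even for multi-gigabyte files on this filesystem.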
m30001| Fri Feb 22 11:26:36.437 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.3, size: 512MB, took 0 secs
m30001| Fri Feb 22 11:26:38.871 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.4, filling with zeroes...
m30001| Fri Feb 22 11:26:38.871 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.4, size: 1024MB, took 0 secs
m30001| Fri Feb 22 11:26:43.709 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.5, filling with zeroes...
m30001| Fri Feb 22 11:26:43.709 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.5, size: 2047MB, took 0 secs
m30001| Fri Feb 22 11:27:04.572 [conn1] update test.foo query: { $atomic: 1.0 } update: { $set: { big_string: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } } nscanned:2048 nmoved:1024 nupdated:1024 keyUpdates:0 locks(micros) w:29865363 29865ms
m30001| Fri Feb 22 11:27:04.625 [conn1] dropDatabase test starting
m30001| Fri Feb 22 11:27:06.149 [conn1] removeJournalFiles
m30001| Fri Feb 22 11:27:07.622 [conn1] dropDatabase test finished
m30001| Fri Feb 22 11:27:07.622 [conn1] command test.$cmd command: { dropDatabase: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:2997674 reslen:55 2997ms
m30001| Fri Feb 22 11:27:07.623 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.ns, filling with zeroes...
m30001| Fri Feb 22 11:27:07.623 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:27:07.623 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.0, filling with zeroes...
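The 29865ms update line above follows the fixed slow-operation shape these logs use: an op summary, `name:value` counters (nscanned, nmoved, nupdated, ...), lock micros, and a trailing duration in milliseconds. A small sketch of pulling those fields out of such a line; the regexes are assumptions about this 2.4-era format, not an official log parser:

```javascript
// Extract counters and the trailing millisecond duration from a 2.4-era
// mongod slow-operation log line. The parsing rules are inferred from the
// log excerpt above, not from any official log-format specification.
const line =
  'update test.foo query: { $atomic: 1.0 } ' +
  'nscanned:2048 nmoved:1024 nupdated:1024 keyUpdates:0 ' +
  'locks(micros) w:29865363 29865ms';

function parseSlowOp(l) {
  const counters = {};
  // "name:digits" pairs: nscanned:2048, nmoved:1024, w:29865363, ...
  for (const m of l.matchAll(/(\w+):(\d+)/g)) counters[m[1]] = Number(m[2]);
  // trailing total duration, e.g. "29865ms" at end of line
  const ms = l.match(/(\d+)ms\s*$/);
  return { counters, millis: ms ? Number(ms[1]) : null };
}

const parsed = parseSlowOp(line);
console.log(parsed.millis);            // 29865
console.log(parsed.counters.nscanned); // 2048
```

The nmoved:1024 counter is the interesting part of this test: every grown document had to be relocated, which is what makes a big `$set` under `$atomic` slow here.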
m30001| Fri Feb 22 11:27:07.623 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:27:07.624 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.1, filling with zeroes...
m30001| Fri Feb 22 11:27:07.624 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:27:07.626 [conn1] build index test.foo { _id: 1 }
m30001| Fri Feb 22 11:27:07.627 [conn1] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:27:07.729 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 0, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } }
[the identical warning repeats at 11:27:07.748, 11:27:07.770 and 11:27:07.789; only the timestamp changes]
m30001| Fri Feb 22 11:27:07.801 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.2, filling with zeroes...
m30001| Fri Feb 22 11:27:07.801 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.2, size: 256MB, took 0 secs
[the warning repeats at 11:27:07.809, .828, .850, .870, .891, .910 and .929]
m30001| Fri Feb 22 11:27:08.633 [conn1] DurParanoid map check 538ms for 288MB
[the warning repeats, now with secs_running: 1, at 11:27:08.752, .772, .792, .812, .834, .853, .872, .893, .899 and .914]
m30001| Fri Feb 22 11:27:08.930 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.3, filling with zeroes...
m30001| Fri Feb 22 11:27:08.931 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.3, size: 512MB, took 0 secs
[the warning repeats at 11:27:08.934, .953, .972, .995 and 11:27:09.014, .032, .056, .073]
[the warning repeats, now with secs_running: 2, at 11:27:10.096, .117, .136, .156, .176, .196, .217, .237, .257 and .277]
[the warning repeats, now with secs_running: 3, at 11:27:10.877, .888 and .906]
m30001| Fri Feb 22 11:27:10.921 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.4, filling with zeroes...
m30001| Fri Feb 22 11:27:10.921 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.4, size: 1024MB, took 0 secs
[the warning repeats at 11:27:10.926, .948 and .968]
m30001| Fri Feb 22 11:27:10.989 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: {
$msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.008 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.027 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.051 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.070 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { 
timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.087 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.107 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.131 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:11.150 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 3, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.501 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 
2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.520 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.540 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.561 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.579 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, 
locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.600 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.620 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.630 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:13.650 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 5, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.738 [conn1] warning: 
ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.753 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.773 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.792 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.814 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, 
client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.833 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.852 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.873 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:14.892 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 7, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, 
W: 2 } } } m30001| Fri Feb 22 11:27:15.950 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:15.966 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:15.985 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:15.989 [FileAllocator] allocating new datafile /data/db/dur_big_atomic_update/test.5, filling with zeroes... 
m30001| Fri Feb 22 11:27:15.990 [FileAllocator] done allocating datafile /data/db/dur_big_atomic_update/test.5, size: 2047MB, took 0 secs m30001| Fri Feb 22 11:27:16.007 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.028 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.048 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.067 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.088 [conn1] warning: 
ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.108 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.122 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.137 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.156 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, 
client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.177 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:16.197 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 8, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.840 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.852 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 
0, W: 2 } } } m30001| Fri Feb 22 11:27:21.873 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.891 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.911 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.932 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.951 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: 
"query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.972 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:21.992 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:22.013 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 14, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:23.916 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, 
numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:23.935 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:23.955 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:23.975 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:23.996 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:24.015 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive 
lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:24.035 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:24.055 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 16, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:25.945 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 18, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:25.960 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 18, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", 
threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:25.979 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 18, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| (last message repeated at ~20ms intervals from 11:27:25.999 through 11:27:39.583, secs_running 18..31) m30001| Fri Feb 22 11:27:39.585 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 31, op: "query",
ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:39.587 [conn1] warning: ClientCursor::yield can't unlock b/c of recursive lock ns: test.foo top: { opid: 2062, active: true, secs_running: 31, op: "query", ns: "test", query: { $msg: "query not recording (too large)" }, client: "127.0.0.1:38178", desc: "conn1", threadId: "0xc", connectionId: 1, locks: { ^: "W" }, waitingForLock: false, numYields: 0, lockStats: { timeLockedMicros: {}, timeAcquiringMicros: { R: 0, W: 2 } } } m30001| Fri Feb 22 11:27:39.589 [conn1] update test.foo update: { $set: { big_string: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } } nscanned:2048 nmoved:1024 nupdated:1024 keyUpdates:0 31882ms m30001| Fri Feb 22 11:27:39.589 [conn1] dbeval slow, time: 31892ms test m30001| Fri Feb 22 11:27:39.589 [conn1] function (big_string) { m30001| new Mongo().getDB("test").foo.update({}, {$set: {big_string: big_string}}, false, /*multi*/true) m30001| } m30001| Fri Feb 22 11:27:39.590 [conn1] command test.$cmd command: { $eval: function (big_string) { m30001| new Mongo().getDB("test").foo.update({..., args: [ "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
] } ntoreturn:1 keyUpdates:0 locks(micros) W:31925537 reslen:45 31925ms m30001| Fri Feb 22 11:27:39.645 [conn1] dropDatabase test starting m30001| Fri Feb 22 11:27:40.815 [conn1] removeJournalFiles m30001| Fri Feb 22 11:27:42.081 [conn1] dropDatabase test finished m30001| Fri Feb 22 11:27:42.081 [conn1] command test.$cmd command: { dropDatabase: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:2436046 reslen:55 2436ms m30001| Fri Feb 22 11:27:42.081 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:27:42.081 [interruptThread] now exiting m30001| Fri Feb 22 11:27:42.081 dbexit: m30001| Fri Feb 22 11:27:42.081 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 11:27:42.081 [interruptThread] closing listening socket: 12 m30001| Fri Feb 22 11:27:42.082 [interruptThread] closing listening socket: 13 m30001| Fri Feb 22 11:27:42.082 [interruptThread] closing listening socket: 14 m30001| Fri Feb 22 11:27:42.082 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:27:42.082 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:27:42.082 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 11:27:42.082 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:27:42.082 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:27:42.082 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 11:27:42.094 [conn1] end connection 127.0.0.1:38178 (0 connections now open) m30001| Fri Feb 22 11:27:42.097 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:27:42.144 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:27:42.144 [interruptThread] journalCleanup... 
m30001| Fri Feb 22 11:27:42.144 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:27:42.144 dbexit: really exiting now Fri Feb 22 11:27:43.082 shell: stopped mongo program on port 30001 dur big atomic update SUCCESS Fri Feb 22 11:27:43.095 [conn23] end connection 127.0.0.1:56970 (0 connections now open) 1.1476 minutes Fri Feb 22 11:27:43.116 [initandlisten] connection accepted from 127.0.0.1:45474 #24 (1 connection now open) Fri Feb 22 11:27:43.117 [conn24] end connection 127.0.0.1:45474 (0 connections now open) ******************************************* Test : dur_remove_old_journals.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/dur_remove_old_journals.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/dur_remove_old_journals.js";TestData.testFile = "dur_remove_old_journals.js";TestData.testName = "dur_remove_old_journals";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:27:43 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:27:43.260 [initandlisten] connection accepted from 127.0.0.1:33963 #25 (1 connection now open) null Fri Feb 22 11:27:43.273 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/dur_remove_old_journals --dur --smallfiles --syncdelay 5 --setParameter enableTestCommands=1 m30001| Fri Feb 22 11:27:43.354 [initandlisten] MongoDB starting : pid=23617 port=30001 dbpath=/data/db/dur_remove_old_journals 64-bit host=bs-smartos-x86-64-1.10gen.cc m30001| Fri Feb 22 11:27:43.354 [initandlisten] m30001| Fri Feb 22 11:27:43.354 [initandlisten] 
** NOTE: your operating system version does not support the method that MongoDB m30001| Fri Feb 22 11:27:43.354 [initandlisten] ** uses to detect impending page faults. m30001| Fri Feb 22 11:27:43.354 [initandlisten] ** This may result in slower performance for certain use cases m30001| Fri Feb 22 11:27:43.354 [initandlisten] m30001| Fri Feb 22 11:27:43.354 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30001| Fri Feb 22 11:27:43.354 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30001| Fri Feb 22 11:27:43.354 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30001| Fri Feb 22 11:27:43.354 [initandlisten] allocator: system m30001| Fri Feb 22 11:27:43.354 [initandlisten] options: { dbpath: "/data/db/dur_remove_old_journals", dur: true, port: 30001, setParameter: [ "enableTestCommands=1" ], smallfiles: true, syncdelay: 5.0 } m30001| Fri Feb 22 11:27:43.354 [initandlisten] journal dir=/data/db/dur_remove_old_journals/journal m30001| Fri Feb 22 11:27:43.354 [initandlisten] recover : no journal files present, no recovery needed m30001| Fri Feb 22 11:27:43.368 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/local.ns, filling with zeroes... m30001| Fri Feb 22 11:27:43.368 [FileAllocator] creating directory /data/db/dur_remove_old_journals/_tmp m30001| Fri Feb 22 11:27:43.368 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/local.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:27:43.369 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/local.0, filling with zeroes... 
m30001| Fri Feb 22 11:27:43.369 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/local.0, size: 16MB, took 0 secs m30001| Fri Feb 22 11:27:43.372 [initandlisten] waiting for connections on port 30001 m30001| Fri Feb 22 11:27:43.372 [websvr] admin web console waiting for connections on port 31001 m30001| Fri Feb 22 11:27:43.476 [initandlisten] connection accepted from 127.0.0.1:39450 #1 (1 connection now open) m30001| Fri Feb 22 11:27:43.490 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/test.ns, filling with zeroes... m30001| Fri Feb 22 11:27:43.490 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/test.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:27:43.490 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/test.0, filling with zeroes... m30001| Fri Feb 22 11:27:43.490 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/test.0, size: 32MB, took 0 secs m30001| Fri Feb 22 11:27:43.491 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/test.1, filling with zeroes... m30001| Fri Feb 22 11:27:43.491 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/test.1, size: 32MB, took 0 secs m30001| Fri Feb 22 11:27:43.494 [conn1] build index test.foo { _id: 1 } m30001| Fri Feb 22 11:27:43.495 [conn1] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 11:27:43.569 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/test.2, filling with zeroes... m30001| Fri Feb 22 11:27:43.569 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/test.2, size: 64MB, took 0 secs m30001| Fri Feb 22 11:27:43.742 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/test.3, filling with zeroes... 
m30001| Fri Feb 22 11:27:43.742 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/test.3, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:27:43.997 [FileAllocator] allocating new datafile /data/db/dur_remove_old_journals/test.4, filling with zeroes...
m30001| Fri Feb 22 11:27:43.997 [FileAllocator] done allocating datafile /data/db/dur_remove_old_journals/test.4, size: 256MB, took 0 secs
numInserted: 100
m30001| Fri Feb 22 11:27:44.373 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:44.590 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:70652 reslen:51 217ms
m30001| Fri Feb 22 11:27:44.595 [conn1] CMD fsync: sync:1 lock:0
numInserted: 200
m30001| Fri Feb 22 11:27:45.647 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:45.903 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:54118 reslen:51 256ms
m30001| Fri Feb 22 11:27:45.907 [conn1] CMD fsync: sync:1 lock:0
numInserted: 300
m30001| Fri Feb 22 11:27:46.938 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:47.166 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:21883 reslen:51 227ms
m30001| Fri Feb 22 11:27:47.168 [conn1] CMD fsync: sync:1 lock:0
numInserted: 400
m30001| Fri Feb 22 11:27:47.976 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:48.265 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:46933 reslen:51 288ms
m30001| Fri Feb 22 11:27:48.267 [conn1] CMD fsync: sync:1 lock:0
numInserted: 500
m30001| Fri Feb 22 11:27:48.937 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:49.270 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:22057 reslen:51 333ms
m30001| Fri Feb 22 11:27:49.272 [conn1] CMD fsync: sync:1 lock:0
numInserted: 600
m30001| Fri Feb 22 11:27:50.134 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:50.411 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:43736 reslen:51 276ms
m30001| Fri Feb 22 11:27:50.413 [conn1] CMD fsync: sync:1 lock:0
numInserted: 700
m30001| Fri Feb 22 11:27:51.256 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:51.504 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:49278 reslen:51 247ms
m30001| Fri Feb 22 11:27:51.506 [conn1] CMD fsync: sync:1 lock:0
numInserted: 800
m30001| Fri Feb 22 11:27:52.424 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:52.645 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:38834 reslen:51 221ms
m30001| Fri Feb 22 11:27:52.647 [conn1] CMD fsync: sync:1 lock:0
numInserted: 900
m30001| Fri Feb 22 11:27:53.428 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:53.628 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:167936 reslen:51 200ms
m30001| Fri Feb 22 11:27:53.630 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1000
m30001| Fri Feb 22 11:27:54.302 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:54.503 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:4081 reslen:51 201ms
m30001| Fri Feb 22 11:27:54.504 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1100
m30001| Fri Feb 22 11:27:55.245 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:55.452 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:6214 reslen:51 206ms
m30001| Fri Feb 22 11:27:55.455 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1200
m30001| Fri Feb 22 11:27:56.307 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:56.564 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:33923 reslen:51 257ms
m30001| Fri Feb 22 11:27:56.567 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1300
m30001| Fri Feb 22 11:27:57.349 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:57.586 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:37381 reslen:51 236ms
m30001| Fri Feb 22 11:27:57.588 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1400
m30001| Fri Feb 22 11:27:58.221 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:58.463 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:26663 reslen:51 242ms
m30001| Fri Feb 22 11:27:58.465 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1500
m30001| Fri Feb 22 11:27:59.129 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:27:59.372 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:43767 reslen:51 242ms
m30001| Fri Feb 22 11:27:59.374 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1600
m30001| Fri Feb 22 11:28:00.207 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:00.415 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:16474 reslen:51 208ms
m30001| Fri Feb 22 11:28:00.417 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1700
m30001| Fri Feb 22 11:28:01.238 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:01.467 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:39171 reslen:51 229ms
m30001| Fri Feb 22 11:28:01.470 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1800
m30001| Fri Feb 22 11:28:02.181 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:02.426 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:42858 reslen:51 245ms
m30001| Fri Feb 22 11:28:02.428 [conn1] CMD fsync: sync:1 lock:0
numInserted: 1900
m30001| Fri Feb 22 11:28:03.154 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:03.299 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:2439 reslen:51 144ms
m30001| Fri Feb 22 11:28:03.301 [conn1] CMD fsync: sync:1 lock:0
numInserted: 2000
m30001| Fri Feb 22 11:28:04.134 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:04.319 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:36702 reslen:51 184ms
m30001| Fri Feb 22 11:28:04.321 [conn1] CMD fsync: sync:1 lock:0
numInserted: 2100
m30001| Fri Feb 22 11:28:05.233 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:05.401 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:28755 reslen:51 167ms
m30001| Fri Feb 22 11:28:05.403 [conn1] CMD fsync: sync:1 lock:0
numInserted: 2200
m30001| Fri Feb 22 11:28:06.135 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:06.324 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:67835 reslen:51 188ms
m30001| Fri Feb 22 11:28:06.327 [conn1] CMD fsync: sync:1 lock:0
numInserted: 2300
m30001| Fri Feb 22 11:28:06.975 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:07.196 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:66563 reslen:51 221ms
m30001| Fri Feb 22 11:28:07.198 [conn1] CMD fsync: sync:1 lock:0
numInserted: 2400
m30001| Fri Feb 22 11:28:07.842 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:08.064 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:28983 reslen:51 221ms
m30001| Fri Feb 22 11:28:08.066 [conn1] CMD fsync: sync:1 lock:0
numInserted: 2500
m30001| Fri Feb 22 11:28:08.955 [conn1] CMD fsync: sync:1 lock:0
m30001| Fri Feb 22 11:28:09.130 [conn1] command admin.$cmd command: { fsync: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:20686 reslen:51 174ms
m30001| Fri Feb 22 11:28:09.132 [conn1] CMD fsync: sync:1 lock:0
Waiting 20 seconds...
[ { "name" : "/data/db/dur_remove_old_journals/journal/j._0", "isDirectory" : false, "size" : 127369216 }, { "name" : "/data/db/dur_remove_old_journals/journal/lsn", "isDirectory" : false, "size" : 88 } ]
m30001| Fri Feb 22 11:28:29.636 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 11:28:29.636 [interruptThread] now exiting
m30001| Fri Feb 22 11:28:29.636 dbexit:
m30001| Fri Feb 22 11:28:29.636 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 11:28:29.636 [interruptThread] closing listening socket: 12
m30001| Fri Feb 22 11:28:29.636 [interruptThread] closing listening socket: 13
m30001| Fri Feb 22 11:28:29.636 [interruptThread] closing listening socket: 14
m30001| Fri Feb 22 11:28:29.636 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 11:28:29.636 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 11:28:29.636 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 11:28:29.636 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 11:28:29.636 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 11:28:29.636 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 11:28:29.654 [conn1] end connection 127.0.0.1:39450 (0 connections now open)
m30001| Fri Feb 22 11:28:29.676 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 11:28:29.685 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 11:28:29.685 [interruptThread] journalCleanup...
m30001| Fri Feb 22 11:28:29.685 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 11:28:29.688 dbexit: really exiting now
Fri Feb 22 11:28:30.636 shell: stopped mongo program on port 30001
*** success ***
Fri Feb 22 11:28:30.649 [conn25] end connection 127.0.0.1:33963 (0 connections now open)
47.5523 seconds
Fri Feb 22 11:28:30.670 [initandlisten] connection accepted from 127.0.0.1:45818 #26 (1 connection now open)
Fri Feb 22 11:28:30.671 [conn26] end connection 127.0.0.1:45818 (0 connections now open)

*******************************************
        Test : explain1.js ...
      Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain1.js";TestData.testFile = "explain1.js";TestData.testName = "explain1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
         Date : Fri Feb 22 11:28:30 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:28:30.824 [initandlisten] connection accepted from 127.0.0.1:47939 #27 (1 connection now open)
null
Fri Feb 22 11:28:30.834 [conn27] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.838 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain1.js", "testFile" : "explain1.js", "testName" : "explain1", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_slowNightly_explain1; for( var i = 0; i < 80; ++i ) { t.drop(); t.ensureIndex({x:1}); for( var j = 0; j < 1000; ++j ) { t.save( {x:j,y:1} ) }; sleep( 100 ); } 127.0.0.1:27999/admin
Fri Feb 22 11:28:30.840 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain1.js", "testFile" : "explain1.js", "testName" : "explain1", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_slowNightly_explain1; for( var i = 0; i < 500; ++i ) { try { z = t.find( {x:{$gt:0},y:1} ).explain(); t.count( {x:{$gt:0},y:1} ); } catch( e ) {} } 127.0.0.1:27999/admin
Fri Feb 22 11:28:30.842 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain1.js", "testFile" : "explain1.js", "testName" : "explain1", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_slowNightly_explain1; for( var i = 0; i < 200; ++i ) { t.validate({scandata:true}); } 127.0.0.1:27999/admin
Fri Feb 22 11:28:30.892 [initandlisten] connection accepted from 127.0.0.1:52590 #28 (2 connections now open)
Fri Feb 22 11:28:30.894 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.896 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:30.897 [conn28] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:28:30.898 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:30.898 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:30.901 [conn28] build index done. scanned 0 total records.
0.003 secs
Fri Feb 22 11:28:30.912 [initandlisten] connection accepted from 127.0.0.1:60514 #29 (3 connections now open)
Fri Feb 22 11:28:30.912 [initandlisten] connection accepted from 127.0.0.1:55319 #30 (4 connections now open)
Fri Feb 22 11:28:30.915 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.915 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.915 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.916 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.916 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.916 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.917 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.917 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.917 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.918 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.918 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.918 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.919 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.919 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.919 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.920 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.921 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.921 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.922 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.922 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.922 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.923 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.923 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.923 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.924 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.924 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.924 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.925 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.925 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.925 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.926 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.927 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.927 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.927 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.928 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.928 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.929 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.929 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.929 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.930 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.930 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.930 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.931 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.931 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.931 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.932 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.932 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.932 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.933 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.934 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.934 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.935 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.935 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.935 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.936 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.936 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.936 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.937 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.937 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.937 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.938 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.939 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.939 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.939 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.940 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.940 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.941 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.941 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.941 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.942 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.942 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.943 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.943 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.944 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.944 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.945 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.945 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.945 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.948 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.949 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.949 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.950 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.950 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.950 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.951 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.951 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.951 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.952 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.953 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.953 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.953 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.953 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.953 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.954 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.954 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.954 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.955 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.955 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.955 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.956 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.956 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.956 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.958 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.958 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.958 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.959 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.959 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.959 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.960 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.960 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.961 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.961 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.962 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.962 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.963 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.963 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.963 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.964 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.965 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.965 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.966 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.966 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.966 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.967 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.967 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.967 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.969 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.969 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.969 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.970 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.970 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.971 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.971 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.972 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.972 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.973 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.973 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.973 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.974 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.975 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.975 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.976 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.976 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.976 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.977 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.978 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.978 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.979 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.979 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.979 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.980 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.980 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.981 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.982 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.982 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.982 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.983 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.984 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.984 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.984 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.985 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.985 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.986 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.987 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.987 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.988 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.989 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.989 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.989 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.990 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.990 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.991 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.991 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.991 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.992 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.993 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.993 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.994 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.994 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.994 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.996 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.996 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.996 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.997 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.997 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.998 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:30.998 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:30.999 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:30.999 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.000 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.001 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.001 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.001 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.002 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.002 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.003 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.003 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.003 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.009 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.009 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.009 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.010 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.011 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.011 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.012 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.012 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.012 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.013 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.014 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.014 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.015 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.015 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.015 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.016 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.017 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.017 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.017 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.018 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.018 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.019 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.019 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.019 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.020 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.021 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.021 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.022 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.023 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.023 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.023 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.024 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.024 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.025 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.025 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.025 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.026 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.027 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.027 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.028 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.029 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.029 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.029 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.030 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.030 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.031 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.031 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.031 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.032 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.033 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.033 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.034 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.035 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.035 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.035 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.036 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.036 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.037 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.037 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.037 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.039 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.039 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.039 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.040 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.041 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.041 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:31.042 [conn29] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:31.042 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:31.042 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri
Feb 22 11:28:31.043 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.043 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.043 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.045 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.045 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.045 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.046 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.047 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.047 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.048 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.049 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.049 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.049 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.050 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.050 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.051 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.052 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.052 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.052 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.053 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.053 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.054 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.055 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.055 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.055 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.056 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.056 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.057 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.058 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.058 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.059 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.059 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.059 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.060 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.061 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.061 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.061 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.062 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.062 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.064 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.064 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.064 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.065 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.066 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.066 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.067 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.067 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.067 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.068 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.068 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.068 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.070 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.071 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.071 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.071 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.072 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.072 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.073 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.074 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.074 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.074 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.075 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.075 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.076 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.077 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.077 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.078 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.078 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.078 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.079 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.080 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.080 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.081 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.081 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.081 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.083 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.083 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.083 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.084 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.085 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.085 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.086 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.087 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.087 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.088 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.088 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.088 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.090 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.090 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.090 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.091 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.092 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.092 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.093 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.093 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.093 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.094 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.095 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.095 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.096 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.097 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.097 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.098 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.099 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.099 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.099 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.100 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.100 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.101 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.101 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.101 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.103 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.104 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.104 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.105 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.106 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.106 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.107 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.107 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.107 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.109 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.110 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.110 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.110 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.111 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.111 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.112 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.113 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.113 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.118 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.119 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.119 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.121 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.122 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.122 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.123 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.123 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.123 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.124 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.125 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.125 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.126 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.127 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.127 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.128 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.128 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.128 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.130 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.131 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.131 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.131 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.132 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.132 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.133 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.134 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.134 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.134 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.135 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.135 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.136 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.137 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.137 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.138 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.139 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.139 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.139 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.140 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.140 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.141 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.142 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.142 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.142 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.143 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.144 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.144 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.145 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.145 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.145 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.146 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.146 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.146 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.147 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.147 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.149 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.149 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.149 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.150 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.151 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.151 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.151 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.152 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.152 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.153 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.154 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.154 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.155 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.156 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.156 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.157 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.158 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.158 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.159 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.159 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.159 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.160 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.160 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.160 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.161 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.162 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.162 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.163 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.164 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.164 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.165 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.165 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.165 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.166 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.166 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.167 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.168 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.169 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.169 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.169 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.170 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.170 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.171 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.172 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.172 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.173 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.174 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.174 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.174 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.175 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.175 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.177 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.178 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.178 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.178 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.179 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.179 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.180 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.181 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.181 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.182 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.183 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.183 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.184 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.185 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.185 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.186 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.187 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.187 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.188 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.189 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.189 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.190 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.190 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.190 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.191 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.192 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.192 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.193 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.194 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.194 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.196 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.196 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.196 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.197 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.198 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.198 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.199 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.200 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.200 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.200 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.201 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.201 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.202 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.203 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.203 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.205 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.206 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.206 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.206 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.207 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.207 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.208 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.209 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.209 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.210 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.211 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.211 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.211 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.212 [conn29] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.212 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.214 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.215 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.215 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.216 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.216 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.216 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.217 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.218 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.218 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.219 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.220 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.220 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.221 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.222 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.222 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.223 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.224 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.224 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.225 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.226 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.226 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:31.227 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.228 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.228 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.228 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.229 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.230 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.230 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.231 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.231 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.233 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.234 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.234 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.235 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.235 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.235 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.236 [conn29] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.237 [conn29] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:31.237 [conn29] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:31.243 [conn29] end connection 127.0.0.1:60514 (3 connections now open) Fri Feb 22 11:28:31.341 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.347 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:31.348 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:31.348 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:31.348 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:31.348 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:31.660 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:31.661 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:31.662 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:31.662 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:31.662 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:31.662 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:32.010 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:32.011 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:32.012 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:32.012 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:32.012 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:32.013 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:32.414 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:32.416 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:32.417 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:32.417 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:32.417 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:32.417 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:32.836 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:32.838 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:32.838 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:32.838 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:32.838 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:32.838 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:33.236 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:33.237 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:33.238 [conn28] build index done. scanned 0 total records. 0.001 secs Fri Feb 22 11:28:33.238 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:33.238 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:33.238 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:33.601 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:33.602 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:33.603 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:33.603 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:33.603 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:33.603 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:33.610 [conn30] end connection 127.0.0.1:55319 (2 connections now open) Fri Feb 22 11:28:33.735 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:33.736 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:33.736 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:33.736 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:33.736 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:33.737 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:33.868 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:33.869 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:33.869 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:33.869 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:33.869 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:33.870 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.001 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.003 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.003 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.003 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.003 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.003 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.138 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.139 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.140 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.140 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.140 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.140 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:34.272 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.274 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.274 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.274 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.274 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.274 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.406 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.408 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.409 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.409 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.409 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.409 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.544 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.545 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.545 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.546 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.546 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.546 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.677 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.679 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.680 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:34.680 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.680 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.681 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.812 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.813 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.814 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.814 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.814 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.814 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.945 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:34.946 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:34.947 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:34.947 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:34.947 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:34.947 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.078 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.080 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.080 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.080 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.080 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.081 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:35.214 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.215 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.215 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.215 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.215 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.215 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.347 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.348 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.348 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.348 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.348 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.349 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.480 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.482 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.483 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.483 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.483 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.484 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.616 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.617 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.617 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:35.617 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.617 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.618 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.765 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.766 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.766 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.766 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.766 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.767 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.912 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:35.913 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:35.913 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:35.913 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:35.913 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:35.914 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.064 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.065 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.066 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.066 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.066 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.066 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:36.212 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.213 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.214 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.214 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.214 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.214 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.364 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.365 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.365 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.365 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.366 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.366 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.513 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.514 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.514 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.514 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.514 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.515 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.662 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.664 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.665 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:36.665 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.665 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.666 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.813 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.814 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.815 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.815 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.815 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.815 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.964 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:36.965 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:36.965 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:36.965 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:36.965 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:36.966 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.114 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.115 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.115 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.115 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.115 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.116 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:37.266 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.267 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.267 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.267 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.268 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.268 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.417 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.419 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.419 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.419 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.419 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.420 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.569 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.570 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.570 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.570 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.570 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.571 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.708 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.709 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.709 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:37.709 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.709 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.710 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.841 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.843 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.843 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.843 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.843 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.844 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.975 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:37.976 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:37.977 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:37.977 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:37.977 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:37.977 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.108 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.109 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.110 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.110 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.110 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.110 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:38.241 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.242 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.243 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.243 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.243 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.243 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.374 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.375 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.375 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.375 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.375 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.376 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.506 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.507 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.508 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.508 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.508 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.509 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.641 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.642 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.643 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:38.643 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.643 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.643 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.774 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.775 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.775 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.775 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.775 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.776 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.907 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:38.909 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:38.910 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:38.910 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:38.910 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:38.911 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.042 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.043 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.043 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.044 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.044 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.044 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:39.175 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.176 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.177 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.177 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.177 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.177 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.325 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.327 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.328 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.328 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.328 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.329 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.477 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.478 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.478 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.478 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.478 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.479 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.626 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.627 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.628 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:39.628 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.628 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.628 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.761 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.762 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.762 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.763 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.763 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.763 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.895 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:39.896 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:39.896 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:39.896 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:39.896 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:39.897 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.028 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.029 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.030 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.030 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.030 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.031 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:40.163 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.165 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.166 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.166 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.166 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.167 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.314 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.315 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.315 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.316 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.316 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.316 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.464 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.465 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.466 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.466 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.466 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.466 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.614 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.615 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.615 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:40.615 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.615 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.615 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.763 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.764 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.765 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.765 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.765 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.765 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.915 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:40.916 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:40.916 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:40.916 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:40.916 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:40.917 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.064 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:41.065 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:41.066 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.066 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:41.066 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:41.066 [conn28] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:28:41.214 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:41.215 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:41.215 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.215 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:41.215 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:41.216 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.363 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:41.365 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:41.366 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.366 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:41.366 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:41.367 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.515 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:41.516 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:41.516 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.516 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:28:41.516 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:28:41.517 [conn28] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:28:41.664 [conn28] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:28:41.666 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:28:41.667 [conn28] build index done. scanned 0 total records. 
0 secs
Fri Feb 22 11:28:41.667 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:41.667 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:41.667 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:41.815 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:41.816 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:41.816 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:41.816 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:41.816 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:41.817 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:41.974 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:41.977 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:41.977 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:41.977 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:41.977 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:41.978 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.126 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.127 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.128 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.128 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.128 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.128 [conn28] build index done. scanned 0 total records.
0 secs
Fri Feb 22 11:28:42.259 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.260 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.261 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.261 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.261 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.261 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.393 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.395 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.396 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.396 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.396 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.397 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.528 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.529 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.530 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.530 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.530 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.530 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.662 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.663 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.663 [conn28] build index done. scanned 0 total records.
0 secs
Fri Feb 22 11:28:42.663 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.663 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.663 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.795 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.795 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.796 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.796 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.796 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.796 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.927 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:42.928 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:42.928 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:42.928 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:42.928 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:42.929 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.060 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:43.061 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:43.061 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.061 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:43.061 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:43.061 [conn28] build index done. scanned 0 total records.
0 secs
Fri Feb 22 11:28:43.194 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:43.195 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:43.196 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.196 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:43.196 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:43.196 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.327 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:43.329 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:43.329 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.329 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:43.329 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:43.330 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.461 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:43.462 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:43.462 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.462 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:43.462 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:43.463 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.594 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:43.596 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:43.597 [conn28] build index done. scanned 0 total records.
0 secs
Fri Feb 22 11:28:43.597 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:43.597 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:43.598 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.729 [conn28] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:43.730 [conn28] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:43.730 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.730 [conn28] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:43.730 [conn28] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:43.731 [conn28] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:43.865 [conn28] end connection 127.0.0.1:52590 (1 connection now open)
Fri Feb 22 11:28:43.882 [conn27] end connection 127.0.0.1:47939 (0 connections now open)
13.2297 seconds
Fri Feb 22 11:28:43.902 [initandlisten] connection accepted from 127.0.0.1:58610 #31 (1 connection now open)
Fri Feb 22 11:28:43.902 [conn31] end connection 127.0.0.1:58610 (0 connections now open)
*******************************************
Test : explain2.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain2.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain2.js";TestData.testFile = "explain2.js";TestData.testName = "explain2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:28:43 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:28:44.025 [initandlisten] connection accepted from 127.0.0.1:37007 #32 (1 connection now open)
null
Fri Feb 22 11:28:44.027 [conn32] CMD: drop test.jstests_slowNightly_explain2
Fri Feb 22 11:28:44.028 [conn32] build index test.jstests_slowNightly_explain2 { _id: 1 }
Fri Feb 22 11:28:44.029 [conn32] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:28:44.030 [conn32] build index test.jstests_slowNightly_explain2 { x: 1.0 }
Fri Feb 22 11:28:44.032 [conn32] build index done. scanned 0 total records.
0.001 secs
Fri Feb 22 11:28:44.035 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain2.js", "testFile" : "explain2.js", "testName" : "explain2", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 0; i < 50000; ++i ) { db.jstests_slowNightly_explain2.insert( {x:i,y:1} ); } 127.0.0.1:27999/admin
Fri Feb 22 11:28:44.115 [initandlisten] connection accepted from 127.0.0.1:58273 #33 (2 connections now open)
Fri Feb 22 11:28:52.690 [conn32] end connection 127.0.0.1:37007 (1 connection now open)
8810.6980 ms
Fri Feb 22 11:28:52.714 [initandlisten] connection accepted from 127.0.0.1:47163 #34 (2 connections now open)
Fri Feb 22 11:28:52.715 [conn34] end connection 127.0.0.1:47163 (1 connection now open)
*******************************************
Test : explain3.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain3.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain3.js";TestData.testFile = "explain3.js";TestData.testName = "explain3";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:28:52 2013
Fri Feb 22 11:28:52.778 [conn33] end connection 127.0.0.1:58273 (0 connections now open)
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:28:52.882 [initandlisten] connection accepted from 127.0.0.1:60464 #35 (1 connection now open)
null
Fri Feb 22 11:28:52.889 [conn35] CMD: drop test.jstests_slowNightly_explain3
Fri Feb 22 11:28:52.894 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain3.js", "testFile" : "explain3.js", "testName" : "explain3", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_slowNightly_explain1; for( var i = 0; i < 80; ++i ) { t.drop(); t.ensureIndex({x:1}); for( var j = 0; j < 1000; ++j ) { t.save( {x:j,y:1} ) }; sleep( 100 ); } 127.0.0.1:27999/admin
Fri Feb 22 11:28:52.897 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain3.js", "testFile" : "explain3.js", "testName" : "explain3", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_slowNightly_explain1; for( var i = 0; i < 500; ++i ) { try { z = t.find( {x:{$gt:0},y:1} ).sort({x:1}).explain(); } catch( e ) {} } 127.0.0.1:27999/admin
Fri Feb 22 11:28:52.898 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/explain3.js", "testFile" : "explain3.js", "testName" : "explain3", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_slowNightly_explain1; for( var i = 0; i < 200; ++i ) { t.validate({scandata:true}); } 127.0.0.1:27999/admin
Fri Feb 22 11:28:52.948 [initandlisten] connection accepted from 127.0.0.1:47668 #36 (2 connections now open)
Fri Feb 22 11:28:52.948 [initandlisten] connection accepted from 127.0.0.1:52064 #37 (3 connections now open)
Fri Feb 22 11:28:52.951 [conn36] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.954 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:52.955 [conn36] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:52.955 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:52.955 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:52.955 [conn36] build index done. scanned 0 total records.
0 secs
Fri Feb 22 11:28:52.975 [initandlisten] connection accepted from 127.0.0.1:36433 #38 (4 connections now open)
Fri Feb 22 11:28:52.978 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.978 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.978 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.979 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.979 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.979 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.980 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.980 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.980 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.981 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.981 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.981 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.982 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.982 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.982 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.983 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.983 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.983 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.984 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.984 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.984 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.985 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.985 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.985 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.986 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.986 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.986 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.987 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.987 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.987 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.987 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.988 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.988 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.989 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.989 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.989 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.990 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.990 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.990 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.991 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.991 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.991 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.992 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.992 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.992 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.993 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.993 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.993 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.994 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.994 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.994 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.995 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.995 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.995 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.996 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.996 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.996 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.997 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.997 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.997 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.998 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.998 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.998 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:52.999 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:52.999 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:52.999 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.000 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.000 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.000 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.001 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.001 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.001 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.002 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.002 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.002 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.003 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.003 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.004 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.007 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.007 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.007 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.008 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.008 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.008 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.009 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.009 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.009 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.010 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.010 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.010 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.012 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.012 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.012 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.013 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.013 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.013 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.014 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.014 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.014 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.015 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.015 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.015 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.016 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.016 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.016 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.017 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.017 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.017 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.018 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.018 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.018 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.019 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.019 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.019 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.020 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.020 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.020 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.021 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.021 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.021 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.022 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.022 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.022 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.023 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.023 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.023 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.025 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.025 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.025 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.025 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.025 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.025 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.026 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.027 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.027 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.027 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.027 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.027 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.028 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.029 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.029 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.029 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.029 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.029 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.030 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.031 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.031 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.031 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.031 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.031 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.032 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.033 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.033 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.033 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.034 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.034 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.035 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.035 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.035 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.036 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.036 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.036 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.036 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.036 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.036 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.037 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.037 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.037 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.038 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.038 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.038 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.038 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.038 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.038 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.039 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.040 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.040 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.040 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.041 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.041 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.042 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.042 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.042 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.043 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.043 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.043 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.044 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.044 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.044 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.045 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.045 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.045 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.045 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.045 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.045 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.046 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.047 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.047 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.051 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.051 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.051 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.052 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.052 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.052 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.052 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.052 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.052 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.054 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.054 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.054 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.054 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.054 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.054 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.055 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.055 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.055 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.056 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.056 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.056 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.057 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.057 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.057 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.058 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.059 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.059 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.059 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.060 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.060 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.060 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.060 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.060 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.061 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.061 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.061 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.062 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.062 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.062 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.062 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.063 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.063 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.064 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.064 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.064 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.065 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.065 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.065 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri
Feb 22 11:28:53.065 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.066 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.066 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.066 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.066 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.066 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.067 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.067 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.067 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.068 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.068 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.068 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.068 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.069 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.069 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.069 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.069 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.069 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.069 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.069 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.069 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.071 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.071 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.071 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.071 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.072 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.072 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.072 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.072 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.072 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.074 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.074 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.074 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.074 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.075 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.075 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.075 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.075 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.075 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.076 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.077 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.077 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.078 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.078 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.078 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:53.078 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.078 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.078 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.079 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.080 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.080 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.080 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.081 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.081 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.081 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.081 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.081 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.082 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.083 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.083 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.083 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.083 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.083 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.084 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.084 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.084 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.085 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.085 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.085 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.086 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.086 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.086 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.086 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.087 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.087 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.088 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.088 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.088 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.089 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.089 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.089 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.090 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.090 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.090 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.091 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.091 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.091 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.091 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.092 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.092 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:53.092 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.093 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.093 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.093 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.093 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.094 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.095 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.095 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.095 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.095 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.095 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.096 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.097 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.097 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.098 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.098 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.099 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.099 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.099 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.100 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.100 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.101 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.101 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.101 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.102 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.102 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.102 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.102 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.102 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.102 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.104 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.104 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.104 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.104 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.105 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.105 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.105 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.105 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.105 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.107 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.107 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.107 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.107 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.108 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.108 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:53.109 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.109 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.109 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.110 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.110 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.111 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.111 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.111 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.111 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.112 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.113 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.113 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.114 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.114 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.114 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.119 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.119 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.119 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.120 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.120 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.120 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.121 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.121 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.121 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.121 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.122 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.122 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.122 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.123 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.123 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.124 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.124 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.124 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.125 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.125 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.125 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.125 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.126 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.126 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.126 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.126 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.126 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.127 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.128 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.128 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:53.129 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.129 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.129 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.129 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.130 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.130 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.131 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.131 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.131 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.132 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.132 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.132 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.132 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.133 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.133 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.134 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.134 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.134 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.135 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.135 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.135 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.135 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.136 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.136 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.137 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.137 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.137 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.137 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.138 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.138 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.138 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.139 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.139 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.140 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.140 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.140 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.141 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.141 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.141 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.142 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.142 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.142 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.143 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.143 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.143 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:53.144 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.144 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.144 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.145 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.145 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.145 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.146 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.146 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.146 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.147 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.147 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.147 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.148 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.148 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.148 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.149 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.149 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.149 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.150 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.150 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.150 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.151 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.151 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.151 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.151 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.152 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.152 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.153 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.153 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.153 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.154 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.154 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.154 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.154 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.155 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.155 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.156 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.156 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.156 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.157 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.157 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.157 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.157 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.158 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.158 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri 
Feb 22 11:28:53.159 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.159 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.159 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.160 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.161 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.161 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.161 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.161 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.161 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.162 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.162 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.162 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.163 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.164 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.164 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.164 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.164 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.164 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.165 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.165 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_ Fri Feb 22 11:28:53.165 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1 Fri Feb 22 11:28:53.166 [conn38] CMD: validate test.jstests_slowNightly_explain1 Fri Feb 22 11:28:53.166 [conn38] validating index 0: 
test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.166 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
Fri Feb 22 11:28:53.166 [conn38] CMD: validate test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.167 [conn38] validating index 0: test.jstests_slowNightly_explain1.$_id_
Fri Feb 22 11:28:53.167 [conn38] validating index 1: test.jstests_slowNightly_explain1.$x_1
[the same conn38 cycle — CMD: validate, validating index 0, validating index 1 — repeats through 11:28:53.188]
Fri Feb 22 11:28:53.200 [conn38] end connection 127.0.0.1:36433 (3 connections now open)
Fri Feb 22 11:28:53.447 [conn36] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:28:53.449 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:28:53.450 [conn36] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:53.450 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index
Fri Feb 22 11:28:53.450 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 }
Fri Feb 22 11:28:53.450 [conn36] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:28:54.364 [conn37] end connection 127.0.0.1:52064 (2 connections now open)
[the same conn36 cycle — CMD: drop, build index { _id: 1 }, creating collection on add index, build index { x: 1.0 } — repeats roughly every 135 ms through 11:29:03]
Fri Feb 22 11:29:03.106 [conn36] CMD: drop test.jstests_slowNightly_explain1
Fri Feb 22 11:29:03.107 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 }
Fri Feb 22 11:29:03.108 [conn36] build index done. scanned 0 total records.
0 secs Fri Feb 22 11:29:03.241 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:03.242 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:03.243 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.243 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:03.243 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:03.243 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.376 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:03.377 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:03.378 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.378 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:03.378 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:03.378 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.512 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:03.513 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:03.514 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.514 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:03.514 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:03.514 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.649 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:03.650 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:03.650 [conn36] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:03.650 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:03.650 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:03.651 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.788 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:03.789 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:03.790 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.790 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:03.790 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:03.790 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.926 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:03.927 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:03.928 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:03.928 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:03.928 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:03.928 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.064 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:04.066 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:04.067 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.067 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:04.067 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:04.068 [conn36] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:04.203 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:04.204 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:04.205 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.205 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:04.205 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:04.205 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.339 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:04.340 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:04.340 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.340 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:04.340 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:04.341 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.475 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:04.476 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:04.477 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.477 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:04.477 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:04.477 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.611 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:04.612 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:04.613 [conn36] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:04.613 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:04.613 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:04.613 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.748 [conn36] CMD: drop test.jstests_slowNightly_explain1 Fri Feb 22 11:29:04.749 [conn36] build index test.jstests_slowNightly_explain1 { _id: 1 } Fri Feb 22 11:29:04.749 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.749 [conn36] info: creating collection test.jstests_slowNightly_explain1 on add index Fri Feb 22 11:29:04.749 [conn36] build index test.jstests_slowNightly_explain1 { x: 1.0 } Fri Feb 22 11:29:04.749 [conn36] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:04.886 [conn36] end connection 127.0.0.1:47668 (1 connection now open) Fri Feb 22 11:29:04.903 [conn35] end connection 127.0.0.1:60464 (0 connections now open) 12.2062 seconds Fri Feb 22 11:29:04.922 [initandlisten] connection accepted from 127.0.0.1:47408 #39 (1 connection now open) Fri Feb 22 11:29:04.923 [conn39] end connection 127.0.0.1:47408 (0 connections now open) ******************************************* Test : geo_axis_aligned.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_axis_aligned.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_axis_aligned.js";TestData.testFile = "geo_axis_aligned.js";TestData.testName = "geo_axis_aligned";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:29:04 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:29:05.080 [initandlisten] connection accepted from 127.0.0.1:53799 #40 (1 connection now open) null Fri Feb 22 11:29:05.087 [conn40] CMD: drop test.axisaligned [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.090 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.091 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.093 [conn40] build index done. scanned 0 total records. 0.001 secs Fri Feb 22 11:29:05.093 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.095 [conn40] build index done. scanned 9 total records. 
0.002 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.105 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.112 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.112 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.113 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.113 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.118 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.119 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.119 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.120 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.120 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.125 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.125 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.126 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.126 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.127 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.132 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.135 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.135 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.136 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.136 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.141 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.142 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.142 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.143 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.143 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.147 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.148 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.149 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.149 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.150 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.156 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.157 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.157 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.158 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.158 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.163 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.163 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.164 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.164 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.165 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.170 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.171 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.172 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.172 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.172 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.177 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.178 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.179 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.179 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.179 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.185 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.186 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.187 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.187 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.187 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.192 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.193 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.193 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.194 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.194 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.199 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.200 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.200 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.201 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.201 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.208 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.209 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.209 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.210 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.210 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.217 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.218 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.218 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.219 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.219 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.223 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.224 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.225 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.225 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.226 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.230 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.231 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.231 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.232 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.232 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.236 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.238 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.238 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.239 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 2 } Fri Feb 22 11:29:05.248 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.249 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.250 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.250 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.250 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.257 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.258 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.258 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.259 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.259 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.264 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.265 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.266 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.266 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.267 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.272 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.273 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.274 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.274 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.275 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.280 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.281 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.281 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.282 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.282 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.288 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.290 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.290 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.290 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.291 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.296 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.297 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.298 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.298 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.299 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.303 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.304 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.305 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.305 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.306 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.311 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.312 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.312 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.313 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.314 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.319 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.319 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.320 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.320 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.321 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.326 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.327 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.327 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.328 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.328 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.333 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.334 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.334 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.335 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.335 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.340 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.341 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.341 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.342 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.342 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.347 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.348 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.348 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.349 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.350 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.355 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.356 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.356 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.357 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.357 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.362 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.363 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.363 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.364 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.364 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.370 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.370 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.371 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.371 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.372 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 2 } Fri Feb 22 11:29:05.376 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.377 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.378 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.378 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.379 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.383 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.384 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.385 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.385 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.386 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.390 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.391 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.392 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.392 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.393 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.398 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.399 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.399 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.400 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.400 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.405 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.406 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.406 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.407 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.407 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.411 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.412 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.413 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.413 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.413 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.418 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.419 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.419 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.422 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.423 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.428 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.430 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.430 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.431 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.431 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.436 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.437 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.437 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.438 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.438 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.443 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.444 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.444 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.445 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.445 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.450 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.451 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.451 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.451 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.452 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.456 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.457 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.457 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.458 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.458 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.463 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.464 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.464 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.464 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.465 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.470 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.471 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.471 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.471 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.472 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.477 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.478 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.478 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.479 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.479 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.484 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.485 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.485 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.485 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.486 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.492 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.493 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.493 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.494 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.494 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.501 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.502 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.503 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.504 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.504 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.511 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.512 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.513 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.513 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.514 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.521 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.522 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.523 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.524 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.524 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.529 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.530 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.531 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.531 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.531 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.536 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.537 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.537 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.538 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.538 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.542 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.543 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.543 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.544 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.544 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.549 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.550 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.550 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.551 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.551 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.556 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.557 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.557 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.557 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.558 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.562 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.563 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.563 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.564 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.564 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.569 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.569 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.570 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.570 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.571 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.575 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.576 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.576 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.577 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.577 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.582 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.583 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.584 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.584 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.585 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.590 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.591 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.591 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.592 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.592 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.597 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.598 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.598 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.599 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.599 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.604 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.605 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.605 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.606 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.606 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.610 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.611 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.611 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.612 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.612 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.616 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.617 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.618 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.619 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.619 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.623 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.624 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.624 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.625 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.625 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.629 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.630 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.630 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.631 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.631 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.635 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.636 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.636 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.636 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.637 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.640 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.641 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.642 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.642 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.642 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.647 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.647 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.648 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.648 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.649 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.653 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.654 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.654 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.654 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.655 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.658 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.659 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.660 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.660 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.660 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.665 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.666 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.666 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.666 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.667 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.671 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.672 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.672 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.673 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.673 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.678 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.679 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.679 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.679 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.680 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.686 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.687 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.687 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.688 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.688 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.694 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.695 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.695 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.696 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.696 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.701 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.702 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.702 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.702 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.703 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.708 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.708 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.709 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.709 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.710 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.715 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.716 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.716 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.716 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.717 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.722 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.723 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.724 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.724 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.725 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.729 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.730 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.730 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.731 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.732 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.737 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.738 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.739 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.740 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.740 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.746 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.747 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.748 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.749 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.749 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.755 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.756 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.756 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.757 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.757 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.762 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.763 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.764 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.764 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.764 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.769 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.769 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.770 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.770 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.770 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.775 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.776 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.776 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.777 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.777 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.783 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.784 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.784 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.785 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.785 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.790 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.791 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.792 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.792 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.792 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.797 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.798 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.798 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.798 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.799 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.805 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.806 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.806 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.807 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.807 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.812 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.813 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.813 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.813 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.814 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.818 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.819 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.819 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.820 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.820 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 2 }
Fri Feb 22 11:29:05.825 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.826 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.827 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.827 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.827 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.833 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.833 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.834 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.835 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.835 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.840 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.841 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.842 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.842 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.842 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.847 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.847 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.848 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.848 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.849 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.863 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.864 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.864 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.864 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.865 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.869 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.870 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.871 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.872 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.872 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.877 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.877 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.878 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.878 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.878 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.882 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.883 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.884 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.884 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.885 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.888 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.889 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.889 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.890 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.890 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.894 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.894 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.895 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.895 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.895 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.899 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.900 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.901 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.901 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.901 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.905 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.906 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.907 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.907 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.907 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.911 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.912 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.912 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.912 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.913 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.916 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.917 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.918 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.918 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.918 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.922 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.923 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.923 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.924 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.924 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.928 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.929 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.929 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.930 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.930 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.934 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.935 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.935 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.935 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.936 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.939 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.940 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.940 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.941 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.941 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.945 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.945 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.946 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.946 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.946 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.950 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.951 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.951 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.951 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.952 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 2 }
Fri Feb 22 11:29:05.956 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.956 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.957 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.957 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.957 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 2 }
Fri Feb 22 11:29:05.961 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:05.962 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:05.963 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:05.963 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:05.963 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:05.967 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.968 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.968 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.968 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.969 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:05.973 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.973 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.974 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.974 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.974 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:05.978 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.979 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.979 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.980 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.980 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:05.984 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.985 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.985 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.986 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.986 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:05.990 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.991 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.991 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.992 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.992 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:05.996 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:05.997 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:05.997 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:05.997 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:05.998 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.002 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.002 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.003 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.003 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.003 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.007 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.008 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.008 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.009 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.009 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.013 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.014 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.014 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.015 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.015 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.019 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.020 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.020 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.021 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.021 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.025 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.026 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.026 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.027 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.027 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.031 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.031 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.032 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.032 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.033 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.037 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.037 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.038 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.038 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.038 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.043 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.043 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.044 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.044 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.044 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.048 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.049 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.050 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.050 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.050 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.054 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.055 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.055 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.056 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.056 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.060 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.060 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.061 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.061 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.061 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.065 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.066 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.066 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.067 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.067 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.071 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.072 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.072 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.072 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.073 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.077 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.077 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.078 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.078 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.079 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.082 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.083 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.083 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.084 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.084 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.089 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.089 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.090 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.090 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.091 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.095 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.095 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.096 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.096 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.096 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.100 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.101 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.102 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.102 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.102 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.106 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.107 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.107 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.108 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.108 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.112 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.113 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.113 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.113 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.114 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.117 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.118 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.118 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.119 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.119 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.123 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.124 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.124 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.124 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.125 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.129 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.129 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.130 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.130 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.130 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.135 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.135 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.136 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.136 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.136 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.140 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.141 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.141 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.142 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.142 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.146 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.146 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.147 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.147 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.148 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.151 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.152 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.152 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.153 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.153 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.157 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.158 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.158 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.159 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.159 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.163 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.164 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.164 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.164 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.165 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.168 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.169 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.170 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.170 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.170 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.175 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.176 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.176 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.180 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.180 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.181 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.181 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.181 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.185 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.186 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.187 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.187 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.187 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.191 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.192 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.192 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.193 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.193 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.197 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.198 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.198 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.198 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.199 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.202 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.203 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.203 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.204 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.204 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.208 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.209 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.209 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.209 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.210 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.214 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.215 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.215 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.215 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.216 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.220 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.220 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.221 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.221 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.221 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.225 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.226 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.226 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.227 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.227 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.231 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.231 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.232 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.232 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.232 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.236 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.237 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.238 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.238 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.242 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.243 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.243 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.244 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.244 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.248 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.249 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.249 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.250 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.250 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.254 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.254 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.255 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.255 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.255 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.259 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.260 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.260 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.260 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.261 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.264 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.265 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.265 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.266 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.266 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.270 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.271 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.271 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.272 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.272 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.278 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.279 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.279 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.279 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.280 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.283 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.284 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.285 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.285 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.285 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.289 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.290 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.290 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.290 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.291 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.295 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.295 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.296 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.296 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.296 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 2 } Fri Feb 22 11:29:06.300 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.301 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.301 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.302 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.302 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.306 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.307 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.307 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.308 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.308 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.312 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.312 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.313 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.313 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.313 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.317 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.318 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.318 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.319 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.319 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.323 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.324 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.324 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.324 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.325 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.329 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.329 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.330 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.330 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.331 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.335 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.335 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.336 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.336 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.336 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.340 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.341 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.341 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.342 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.342 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.345 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.346 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.347 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.347 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.347 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.351 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.352 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.352 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.352 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.353 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.357 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.357 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.358 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.358 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.359 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.363 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.363 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.364 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.364 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.364 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.368 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.369 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.369 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.370 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.370 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.373 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.374 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.375 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.375 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.375 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.379 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.380 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.380 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.380 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.381 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.385 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.385 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.386 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.386 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.386 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.390 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.391 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.392 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.392 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.392 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.396 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.397 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.397 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.397 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.398 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.401 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.402 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.402 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.403 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.403 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.407 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.408 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.408 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.409 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.409 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.413 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.414 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.414 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.414 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.415 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.419 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.420 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.420 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.420 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.421 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.424 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.425 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.425 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.426 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.426 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.430 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.430 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.431 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.431 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.431 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.435 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.436 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.436 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.436 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.437 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.440 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.441 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.441 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.442 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.442 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.446 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.446 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.447 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.447 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.447 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.453 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.455 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.455 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.456 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.456 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.460 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.461 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.461 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.462 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.462 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.466 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.467 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.467 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.467 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.468 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.472 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.473 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.473 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.474 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.475 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.479 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.480 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.480 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.480 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.481 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.485 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.485 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.486 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.486 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.486 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.490 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.491 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.491 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.492 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.492 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.496 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.497 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.497 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.497 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.498 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.502 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.503 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.503 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.503 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.504 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.508 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.509 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.509 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.510 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.510 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.514 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.515 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.515 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.515 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.516 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.519 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.520 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.520 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.521 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.521 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.525 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.526 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.526 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.526 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.527 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 2 }
Fri Feb 22 11:29:06.531 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.532 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.532 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.532 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.533 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.537 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.538 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.538 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.538 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.539 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.542 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.543 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.543 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.544 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.544 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.546 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.547 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.547 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.548 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.548 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.550 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.551 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.552 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.552 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.552 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.555 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.555 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.556 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.556 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.557 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.559 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.560 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.560 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.560 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.561 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.564 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.565 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.565 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.566 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.566 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.570 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.570 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.571 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.571 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.571 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.575 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.576 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.576 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.577 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.577 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.581 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.581 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.582 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.582 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.582 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.586 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.587 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.587 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.588 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.588 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.592 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.592 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.593 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.593 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.594 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.597 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.598 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.598 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.599 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.599 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.603 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.604 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.604 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.605 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.605 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.609 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.610 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.610 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.610 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.611 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.615 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.616 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.616 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.616 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.617 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.620 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.621 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.621 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.622 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.622 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.626 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.627 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.627 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.628 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.628 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.632 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.632 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.633 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.633 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.633 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 2 }
Fri Feb 22 11:29:06.637 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.638 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.639 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.639 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.639 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.643 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.644 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.645 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.645 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.645 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.649 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.650 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.650 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.650 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.651 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.655 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.655 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.656 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.656 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.656 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.661 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.661 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.662 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.662 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.663 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.677 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.678 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.678 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.678 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.679 [conn40] build index done. scanned 9 total records.
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.683 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.684 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.684 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.685 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.685 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.689 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.689 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.690 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.690 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.690 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.694 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.695 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.695 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.696 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.696 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 2 }
Fri Feb 22 11:29:06.700 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:06.701 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:06.701 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:06.702 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:06.702 [conn40] build index done. scanned 9 total records.
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.706 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.707 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.707 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.708 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.708 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.712 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.713 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.713 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.714 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.714 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.718 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.719 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.719 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.720 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.720 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.724 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.725 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.725 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.725 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.726 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.730 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.730 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.731 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.731 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.732 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.736 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.737 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.737 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.738 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.738 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.743 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.744 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.745 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.745 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.746 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.750 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.751 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.751 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.751 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.752 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.755 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.756 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.756 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.757 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.757 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.761 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.762 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.762 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.763 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.763 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 2 } Fri Feb 22 11:29:06.767 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.768 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.768 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.768 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.769 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.774 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.775 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.775 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.776 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.776 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.780 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.780 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.781 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.781 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.782 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.785 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.786 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.787 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.787 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.787 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.791 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.792 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.792 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.792 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.793 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.796 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.797 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.797 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.798 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.798 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.802 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.802 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.803 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.803 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.804 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.807 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.808 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.808 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.809 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.809 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.813 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.814 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.814 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.815 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.815 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.819 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.820 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.820 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.821 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.821 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.825 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.826 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.826 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.826 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.827 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.831 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.832 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.832 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.832 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.833 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.836 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.837 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.838 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.838 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.838 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.842 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.843 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.843 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.843 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.844 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.849 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.850 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.850 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.851 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.851 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.856 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.856 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.857 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.857 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.858 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.862 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.862 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.863 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.863 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.863 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.867 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.868 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.869 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.869 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.869 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.873 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.874 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.874 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.875 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.875 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.879 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.879 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.880 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.880 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.881 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 2 } Fri Feb 22 11:29:06.885 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.886 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.886 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.887 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.887 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.891 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.892 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.892 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.893 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.893 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.895 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.896 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.897 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.897 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.897 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.900 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.901 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.901 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.901 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.902 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.905 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.906 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.907 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.907 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.907 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.910 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.911 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.911 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.911 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.912 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.914 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.915 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.916 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.916 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.916 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.920 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.921 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.921 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.922 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.922 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.926 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.927 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.927 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.927 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.928 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.931 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.932 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.932 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.933 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.933 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.937 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.938 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.938 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.938 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.939 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.943 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.943 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.944 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.944 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.944 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.948 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.949 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.949 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.950 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.950 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.954 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.955 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.956 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.956 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.956 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.960 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.961 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.961 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.962 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.962 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.966 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.967 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.968 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.969 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.969 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.973 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.974 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.975 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.975 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.976 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.979 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.980 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.981 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.981 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.981 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.986 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.986 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.987 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.987 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.988 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.992 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:06.992 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:06.993 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:06.993 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:06.993 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 2 } Fri Feb 22 11:29:06.999 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.000 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.000 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.001 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.001 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.006 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.006 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.007 [conn40] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:07.007 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 6, 53 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.009 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.010 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.010 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.011 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 7, 54 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.011 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.012 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.012 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.013 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 8, 55 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.014 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.014 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.014 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.015 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 9, 56 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.016 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.016 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.017 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.017 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 50, 520 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.018 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.019 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.019 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.019 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.020 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.022 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.023 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.023 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.023 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.024 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.026 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.027 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.027 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.028 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.028 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.030 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.031 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.031 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.032 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.032 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.034 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.035 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.035 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.036 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.036 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.038 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.039 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.039 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.040 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.040 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.044 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.045 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.045 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.045 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.046 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.049 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.050 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.050 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.051 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.051 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.055 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.056 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.056 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.057 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.057 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.061 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.062 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.062 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.062 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.063 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.067 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.067 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.068 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.068 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.068 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.072 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.073 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.073 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.074 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.074 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.077 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.078 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.079 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.079 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.079 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.083 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.084 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.084 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.084 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.085 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 2 } Fri Feb 22 11:29:07.090 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.091 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.091 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.092 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.092 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.096 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.097 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.097 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.097 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.098 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.103 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.104 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.104 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.104 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.105 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.110 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.110 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.111 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.111 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.112 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.117 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.117 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.118 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.118 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.118 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.124 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.124 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.125 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.125 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.125 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.131 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.131 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.132 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.132 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.132 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.137 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.138 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.138 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.139 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.139 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.144 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.145 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.145 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.146 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.146 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.151 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.152 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.152 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.153 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.153 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.158 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.159 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.159 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.160 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.160 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.165 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.166 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.166 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.167 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.167 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.172 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.173 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.173 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.174 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.174 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.179 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.180 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.180 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.181 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.181 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.186 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.187 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.188 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.188 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.188 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.194 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.194 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.195 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.195 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.196 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.201 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.202 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.202 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.203 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.203 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.208 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.209 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.209 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.209 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.210 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.215 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.215 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.216 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.216 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.217 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.222 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.222 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.223 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.223 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.223 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 3 } Fri Feb 22 11:29:07.229 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.229 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.230 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.230 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.231 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.236 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.237 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.237 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.238 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.243 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.244 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.244 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.245 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.245 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.250 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.251 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.251 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.252 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.252 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.257 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.258 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.258 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.259 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.259 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.264 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.265 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.265 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.266 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.266 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.271 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.272 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.272 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.273 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.273 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.278 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.279 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.279 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.280 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.280 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.285 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.286 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.286 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.287 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.287 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.292 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.293 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.294 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.294 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.294 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.300 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.300 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.301 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.301 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.301 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.307 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.307 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.308 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.308 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.308 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.314 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.314 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.315 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.315 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.315 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.320 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.321 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.322 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.322 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.322 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.329 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.330 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.330 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.331 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.331 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.336 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.337 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.337 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.338 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.338 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.343 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.344 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.345 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.345 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.345 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.350 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.351 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.352 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.352 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.353 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.360 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.360 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.361 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.361 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.362 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.367 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.367 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.368 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.368 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.368 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.374 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.374 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.375 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.375 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.376 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.381 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.382 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.382 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.383 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.383 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.388 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.389 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.389 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.390 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.390 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.395 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.396 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.396 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.397 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.397 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.402 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.403 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.403 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.404 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.404 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.409 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.410 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.410 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.411 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.411 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.417 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.417 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.418 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.418 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.418 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.424 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.425 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.425 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.425 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.426 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.431 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.431 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.432 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.432 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.433 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.438 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.438 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.439 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.439 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.440 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.445 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.446 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.446 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.446 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.447 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.452 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.453 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.453 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.454 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.454 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.459 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.460 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.460 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.460 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.461 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.466 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.467 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.467 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.467 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.468 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.473 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.474 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.474 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.474 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.475 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.480 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.481 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.481 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.482 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.482 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.489 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.490 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.490 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.491 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.491 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.496 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.497 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.497 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.498 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.498 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.504 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.505 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.505 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.505 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.506 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.511 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.512 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.512 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.513 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.513 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.518 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.519 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.519 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.520 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.520 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.526 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.526 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.527 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.527 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.528 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.533 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.534 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.534 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.534 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.535 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.540 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.541 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.541 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.541 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.542 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.547 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.548 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.548 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.548 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.549 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.554 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.555 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.555 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.555 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.556 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.561 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.562 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.563 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.563 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.563 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.568 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.569 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.569 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.570 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.570 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.576 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.576 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.577 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.587 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.587 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.593 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.594 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.594 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.595 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.595 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.600 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.601 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.602 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.602 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.602 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.608 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.609 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.609 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.609 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.610 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.615 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.616 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.616 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.616 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.617 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.622 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.622 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.623 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.623 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.623 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.628 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.629 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.629 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.630 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.630 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.635 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.636 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.636 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.637 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.637 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.642 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.643 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.643 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.644 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.644 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.649 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.650 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.650 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.650 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.651 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.656 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.656 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.657 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.657 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.657 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.662 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.663 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.664 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.664 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.664 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.669 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.670 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.671 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.671 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.671 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.676 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.677 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.677 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.678 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.678 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.683 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.684 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.684 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.685 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.685 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.691 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.692 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.692 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.693 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.693 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.698 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.699 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.699 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.700 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.700 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.705 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.706 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.706 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.707 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.707 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.712 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.713 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.713 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.714 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.714 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.720 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.721 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.721 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.722 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.722 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.727 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.728 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.728 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.729 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.729 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.734 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.735 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.735 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.735 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.736 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.741 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.742 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.742 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.743 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.743 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.748 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.749 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.749 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.750 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.750 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.755 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.756 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.756 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.756 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.757 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.762 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.762 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.763 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.763 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.763 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.768 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.769 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.769 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.770 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.770 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.775 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.776 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.776 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.777 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.777 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.782 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.783 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.783 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.784 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.784 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.789 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.790 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.790 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.791 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.791 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.796 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.797 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.797 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.797 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.798 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.803 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.803 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.804 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.804 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.805 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 3 } Fri Feb 22 11:29:07.810 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.811 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.811 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.811 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.812 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.817 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.818 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.818 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.819 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.819 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.824 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.825 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.825 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.826 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.826 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.831 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.832 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.832 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.833 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.833 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.838 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.839 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.839 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.840 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.840 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.845 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.846 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.846 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.847 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.847 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.852 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.853 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.853 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.854 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.855 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.860 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.861 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.861 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.862 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.862 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.868 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.869 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.869 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.870 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.870 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.875 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.876 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.876 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.877 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.877 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.883 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.883 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.884 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.884 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.885 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.890 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.891 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.891 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.891 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.892 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.897 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.898 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.898 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.899 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.899 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.904 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.905 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.905 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.906 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.906 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.911 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.912 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.912 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.913 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.913 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.919 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.919 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.920 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.920 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.921 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.926 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.927 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.927 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.927 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.928 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.933 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.933 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.934 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.934 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.934 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.939 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.940 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.940 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.941 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.941 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.946 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.947 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.947 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.948 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.948 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 3 } Fri Feb 22 11:29:07.953 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.954 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.954 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.954 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.955 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.960 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.961 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.961 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.961 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.962 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.966 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.967 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.968 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.968 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.968 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.973 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.974 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.974 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.975 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.975 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.980 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.981 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.981 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.981 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.982 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.987 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.987 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.988 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.988 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.988 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:07.993 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:07.994 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:07.994 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:07.995 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:07.995 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.000 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.001 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.001 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.002 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.002 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.007 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.008 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.008 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.008 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.009 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.014 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.014 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.015 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.015 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.015 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.021 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.021 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.022 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.022 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.022 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.028 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.029 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.029 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.029 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.030 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.035 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.036 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.036 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.037 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.037 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.042 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.043 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.044 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.044 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.044 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.050 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.050 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.051 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.051 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.052 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.057 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.058 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.058 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.058 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.059 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.064 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.065 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.065 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.066 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.066 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.072 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.072 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.073 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.073 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.073 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.079 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.080 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.080 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.080 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.081 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.086 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.087 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.087 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.088 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.088 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.093 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.094 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.095 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.095 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.095 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.102 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.103 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.103 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.104 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.104 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.110 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.111 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.111 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.111 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.112 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.117 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.118 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.118 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.119 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.119 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.124 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.125 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.126 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.126 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.126 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.132 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.132 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.133 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.133 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.134 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.139 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.140 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.140 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.141 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.141 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.146 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.147 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.147 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.147 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.148 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.153 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.153 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.154 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.154 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.154 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.160 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.160 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.161 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.161 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.161 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.166 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.167 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.167 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.168 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.168 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.173 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.174 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.174 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.174 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.175 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.180 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.180 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.181 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.181 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.181 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.186 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.187 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.188 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.188 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.188 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.193 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.194 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.194 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.194 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.195 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.200 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.201 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.201 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.201 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.202 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.207 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.208 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.208 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.208 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.209 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.214 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.214 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.215 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.215 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.215 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.220 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.221 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.221 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.222 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.222 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.227 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.227 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.228 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.228 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.228 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.234 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.234 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.235 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.235 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.235 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.241 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.241 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.242 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.242 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.242 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.247 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.248 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.248 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.249 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.249 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.254 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.255 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.255 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.255 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.256 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.261 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.261 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.262 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.262 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.262 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.267 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.268 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.268 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.269 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.269 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.274 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.275 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.275 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.275 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.276 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.281 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.281 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.282 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.282 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.282 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.287 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.288 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.288 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.289 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.289 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.294 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.295 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.295 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.295 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.296 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.301 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.302 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.302 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.302 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.303 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.308 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.309 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.309 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.309 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.310 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.315 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.315 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.316 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.316 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.316 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.321 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.322 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.322 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.323 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.323 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.328 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.329 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.329 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.329 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.330 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.335 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.335 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.336 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.337 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.337 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.342 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.343 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.343 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.344 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.344 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.349 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.350 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.350 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.350 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.351 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.356 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.356 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.357 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.357 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.357 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.362 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.363 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.363 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.364 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.364 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 3 } Fri Feb 22 11:29:08.369 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.370 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.370 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.371 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.371 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.376 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.377 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.377 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.377 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.378 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.383 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.383 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.384 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.384 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.384 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.389 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.390 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.390 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.391 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.391 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.396 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.397 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.397 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.397 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.398 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.403 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.403 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.404 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.404 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.404 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.409 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.410 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.410 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.411 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.411 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.416 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.417 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.417 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.417 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.418 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.423 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.423 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.424 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.424 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.424 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.429 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.430 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.430 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.431 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.431 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.436 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.437 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.437 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.437 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.438 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.443 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.444 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.444 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.444 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.445 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.450 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.450 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.451 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.451 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.451 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.456 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.457 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.457 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.458 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.458 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.463 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.464 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.464 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.464 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.465 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.470 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.471 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.471 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.471 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.472 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.477 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.478 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.478 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.478 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.479 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.484 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.484 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.485 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.485 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.485 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.490 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.491 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.491 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.492 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.492 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.497 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.498 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.498 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.498 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.499 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.504 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.504 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.505 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.505 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.506 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.511 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.514 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.514 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.515 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.515 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.520 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.522 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.522 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.522 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.523 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.528 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.528 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.529 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.529 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.529 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.534 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.534 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.535 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.535 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.535 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.540 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.541 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.541 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.541 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.542 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.546 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.547 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.547 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.547 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.548 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.553 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.553 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.554 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.554 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.554 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.559 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.560 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.560 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.561 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.561 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.566 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.567 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.567 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.568 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.568 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.583 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.584 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.584 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.584 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.585 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.590 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.590 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.591 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.591 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.591 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.596 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.597 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.598 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.598 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.598 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.603 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.604 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.604 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.605 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.605 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.610 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.611 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.611 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.611 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.612 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.617 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.618 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.618 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.618 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.619 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.624 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.625 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.625 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.625 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.626 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.631 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.631 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.632 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.632 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.632 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.637 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.638 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.638 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.639 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.639 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.644 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.645 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.645 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.645 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.646 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.651 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.651 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.652 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.652 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.652 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.658 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.658 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.659 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.659 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.659 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.662 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.663 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.663 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.664 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.664 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.667 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.668 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.668 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.668 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.668 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.671 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.672 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.672 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.673 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.673 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.676 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.677 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.677 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.677 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.677 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.680 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.681 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.681 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.682 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.682 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.687 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.688 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.688 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.688 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.689 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.694 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.695 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.695 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.695 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.696 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.700 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.701 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.701 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.701 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.702 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.706 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.707 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.707 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.707 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.708 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.712 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.713 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.713 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.714 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.714 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.720 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.720 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.721 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.721 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.722 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.727 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.728 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.728 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.728 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.729 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.734 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.735 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.735 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.735 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.736 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.741 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.741 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.742 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.742 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.742 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.748 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.748 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.749 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.749 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.749 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.754 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.755 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.756 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.756 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.756 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.761 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.762 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.762 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.763 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.763 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.768 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.769 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.769 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.770 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.770 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 3 } Fri Feb 22 11:29:08.775 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.776 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.776 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.777 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.777 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.783 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.783 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.784 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.784 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.784 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.789 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.790 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.790 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.791 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.791 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.796 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.797 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.797 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.798 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.798 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.803 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.804 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.804 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.805 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.805 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.810 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.811 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.811 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.811 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.812 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.817 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.817 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.818 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.818 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.818 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.823 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.824 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.824 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.825 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.825 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.830 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.831 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.831 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.832 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.832 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.838 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.839 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.839 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.840 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.840 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.845 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.846 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.846 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.846 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.847 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.852 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.853 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.853 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.853 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.854 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.859 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.860 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.860 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.860 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.861 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.866 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.867 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.867 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.867 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.868 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.873 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.873 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.874 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.874 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.874 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.880 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.880 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.881 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.881 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.881 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.887 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.887 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.888 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.888 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.888 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.894 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.894 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.895 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.895 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.895 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.900 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.901 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.902 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.902 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.902 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.908 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.908 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.909 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.909 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.909 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 3 } Fri Feb 22 11:29:08.915 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.915 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.916 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.916 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.916 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.921 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.922 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.923 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.923 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.923 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.929 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.929 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.930 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.930 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.930 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.935 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.936 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.937 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.937 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.937 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.942 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.942 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.943 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.943 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.943 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.948 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.949 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.949 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.949 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.950 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.954 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.955 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.955 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.956 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.956 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.961 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.962 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.963 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.963 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.963 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.969 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.969 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.970 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.970 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.970 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 3 } Fri Feb 22 11:29:08.976 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:08.977 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:08.977 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:08.977 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:08.978 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:08.983 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:08.984 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:08.984 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:08.985 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:08.985 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:08.990 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:08.991 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:08.991 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:08.992 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:08.992 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:08.998 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:08.999 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:08.999 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:08.999 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.000 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.005 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.006 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.007 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.007 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.007 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.013 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.014 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.014 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.015 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.015 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.021 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.022 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.022 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.023 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.023 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.029 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.030 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.031 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.031 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.032 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.037 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.038 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.038 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.039 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.039 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.044 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.045 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.045 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.046 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.046 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.051 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.052 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.052 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.053 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.053 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 3 }
Fri Feb 22 11:29:09.058 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.059 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.060 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.060 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.060 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.066 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.067 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.067 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.068 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.068 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.072 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.072 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.073 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.074 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.074 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.077 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.078 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.079 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.079 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.079 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.082 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.083 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.084 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.084 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.084 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.088 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.088 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.089 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.090 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.090 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.095 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.096 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.097 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.097 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.098 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.106 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.107 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.107 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.108 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.108 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.116 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.117 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.118 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.118 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.119 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.128 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.129 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.129 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.131 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.132 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.141 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.142 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.143 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.144 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.144 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.154 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.155 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.156 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.157 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.158 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.166 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.167 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.168 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.169 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.169 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.178 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.180 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.181 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.182 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.182 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.191 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.192 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.193 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.193 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.194 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.203 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.204 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.205 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.206 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.206 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.218 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.219 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.220 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.220 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.221 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.229 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.230 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.231 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.231 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.232 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.240 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.241 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.242 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.242 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.243 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.251 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.253 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.253 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.254 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.254 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 3 }
Fri Feb 22 11:29:09.263 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.264 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.264 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.265 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.266 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.274 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.275 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.276 [conn40] build index done. scanned 0 total records. 
0 secs
Fri Feb 22 11:29:09.276 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 6, 53 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.278 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.279 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.279 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.280 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 7, 54 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.281 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.282 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.282 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.283 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 8, 55 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.284 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.285 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.286 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.286 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 9, 56 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.288 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.288 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.289 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.290 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 50, 520 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.291 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.292 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.293 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.293 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.294 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.298 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.299 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.300 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.301 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.301 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.306 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.307 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.308 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.308 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.309 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.314 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.315 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.315 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.316 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.316 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.321 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.323 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.323 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.324 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.324 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.329 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.330 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.331 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.331 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.332 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.341 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.343 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.343 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.344 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.345 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.354 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.355 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.356 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.356 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.357 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.365 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.366 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.366 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.367 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.367 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.375 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.376 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.377 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.378 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.378 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.387 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.388 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.389 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.389 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.390 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.398 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.399 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.399 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.400 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.401 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.409 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.410 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.410 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.411 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.411 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.420 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.421 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.421 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.422 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.422 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 3 }
Fri Feb 22 11:29:09.432 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.433 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.433 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.434 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.434 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
[ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ]
[ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ]
{ "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.443 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.444 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.445 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.445 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.446 [conn40] build index done. scanned 9 total records. 
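The two arrays dumped by the test above are the radius scales and the query centers it sweeps over; each subsequent log block is one (center, radius) combination. A minimal sketch of how such a case list could be generated (the names `radii`, `centers`, and `test_cases` are mine, not the test's, and radius-major order is assumed from the log sequence):

```python
from itertools import product

# The two arrays exactly as printed in the log above.
radii = [0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1,
         0.1, 1, 10, 100, 1, 10, 100, 1000]
centers = [[5, 52], [6, 53], [7, 54], [8, 55], [9, 56],
           [50, 520], [60, 530], [70, 540], [80, 550], [90, 560],
           [5000, 52000], [6000, 53000], [7000, 54000], [8000, 55000],
           [9000, 56000], [50000, 520000], [60000, 530000],
           [70000, 540000], [80000, 550000], [90000, 560000]]

def test_cases(radii, centers, bits=4):
    """Yield one query spec per (radius, center) combination, matching
    the { "center" : ..., "radius" : ..., "bits" : ... } lines above."""
    for r, c in product(radii, centers):
        yield {"center": c, "radius": r, "bits": bits}
```

Each yielded spec corresponds to one drop/reindex/query cycle in the log.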
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.453 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.454 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.455 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.456 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.456 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.463 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.464 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.465 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.466 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.466 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.474 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.475 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.475 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.476 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.476 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.483 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.484 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.485 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.485 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.486 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.492 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.493 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.494 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.495 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.495 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.502 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.503 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.504 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.504 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.505 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.512 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.513 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.514 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.515 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.515 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.523 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.524 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.524 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.525 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.525 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.532 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.533 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.534 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.534 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.535 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.542 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.543 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.544 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.544 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.545 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.552 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.553 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.554 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.554 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.555 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.562 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.564 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.564 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.565 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.565 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.572 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.573 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.574 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.575 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.575 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.582 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.583 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.583 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.584 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.584 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.591 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.592 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.593 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.593 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.594 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 4 }
Fri Feb 22 11:29:09.600 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:09.602 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:09.602 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:09.603 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:09.603 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 4 } Fri Feb 22 11:29:09.610 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.611 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.612 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.613 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.613 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 4 } Fri Feb 22 11:29:09.620 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.622 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.622 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.623 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.623 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 4 } Fri Feb 22 11:29:09.630 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.631 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.631 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.632 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.632 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.639 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.640 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.641 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.642 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.642 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.649 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.650 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.651 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.652 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.652 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.659 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.660 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.661 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.662 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.662 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.670 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.671 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.672 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.672 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.673 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.679 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.681 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.681 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.682 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.682 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.689 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.690 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.690 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.691 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.691 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.698 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.699 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.700 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.700 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.701 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.708 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.709 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.710 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.710 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.711 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.718 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.720 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.720 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.721 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.721 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.728 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.729 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.729 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.730 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.730 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.737 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.738 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.739 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.739 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.740 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.747 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.748 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.748 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.749 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.749 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.757 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.758 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.758 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.771 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.771 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.779 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.780 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.780 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.781 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.781 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.788 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.790 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.790 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.791 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.791 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.798 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.799 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.800 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.800 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.801 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.808 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.809 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.809 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.810 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.810 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.818 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.819 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.819 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.820 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.820 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.828 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.829 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.829 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.830 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.830 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:09.837 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.838 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.839 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.839 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.840 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.846 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.847 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.848 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.848 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.849 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.856 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.857 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.858 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.858 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.859 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.866 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.867 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.868 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.868 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.869 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.876 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.877 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.878 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.878 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.879 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.886 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.887 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.887 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.888 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.888 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.895 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.896 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.897 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.897 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.898 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.905 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.906 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.906 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.907 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.908 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.915 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.916 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.916 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.917 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.917 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.925 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.926 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.926 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.927 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.927 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.934 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.935 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.935 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.936 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.936 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.943 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.944 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.945 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.945 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.946 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.953 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.954 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.955 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.955 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.956 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.963 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.964 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.965 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.965 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.966 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.973 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.974 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.974 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.975 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.975 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.982 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.983 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.984 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.984 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.985 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:09.991 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:09.992 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:09.993 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:09.993 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:09.994 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.001 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.002 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.002 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.003 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.003 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.010 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.012 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.012 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.013 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.013 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.020 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.021 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.022 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.022 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.023 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.030 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.031 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.031 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.032 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.032 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.039 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.040 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.040 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.041 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.041 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.048 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.049 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.050 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.050 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.051 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.058 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.059 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.060 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.060 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.061 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.068 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.069 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.069 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.070 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.071 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.077 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.078 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.079 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.079 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.080 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.086 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.087 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.088 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.089 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.089 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.093 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.094 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.095 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.095 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.095 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.100 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.101 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.101 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.101 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.102 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.107 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.108 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.108 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.109 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.109 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.113 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.114 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.114 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.115 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.115 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.120 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.121 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.121 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.122 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.122 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.127 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.127 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.128 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.128 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.128 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.134 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.135 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.136 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.136 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.136 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.141 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.142 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.142 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.143 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.143 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.147 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.148 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.148 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.149 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.149 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.153 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.154 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.154 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.155 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.155 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.160 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.160 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.161 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.161 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.162 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.166 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.167 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.167 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.168 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.168 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.173 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.173 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.174 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.174 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.175 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.179 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.180 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.180 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.180 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.181 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.185 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.186 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.186 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.186 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.187 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.191 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.192 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.192 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.193 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.193 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.198 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.198 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.199 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.199 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.200 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.204 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.205 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.205 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.206 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.206 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.210 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.211 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.212 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.212 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.212 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.217 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.217 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.218 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.218 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.218 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.223 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.224 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.224 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.224 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.225 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.229 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.230 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.231 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.231 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.231 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.236 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.237 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.238 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.238 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.242 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.243 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.243 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.244 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.244 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.248 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.249 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.249 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.250 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.250 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.254 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.255 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.256 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.256 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.256 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.262 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.263 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.263 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.264 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.264 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.269 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.270 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.270 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.270 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.271 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.275 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.276 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.276 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.277 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.277 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.281 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.282 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.283 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.283 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.283 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.288 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.289 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.289 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.289 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.290 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.294 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.295 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.296 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.296 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.296 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.301 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.302 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.302 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.303 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.303 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 4 } Fri Feb 22 11:29:10.307 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.308 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.308 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.309 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.309 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.313 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.314 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.315 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.315 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.315 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.320 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.321 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.321 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.321 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.322 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.326 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.327 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.327 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.328 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.328 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.333 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.334 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.334 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.334 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.335 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.339 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.340 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.340 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.341 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.341 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.347 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.348 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.348 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.349 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.349 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.354 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.355 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.355 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.355 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.356 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.361 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.361 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.362 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.362 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.362 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.367 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.368 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.369 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.369 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.369 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.374 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.375 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.375 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.375 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.376 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.380 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.381 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.381 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.382 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.382 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.387 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.388 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.388 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.388 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.389 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.394 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.394 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.395 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.395 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.395 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.401 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.401 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.402 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.402 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.402 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.407 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.408 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.408 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.409 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.409 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.413 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.414 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.415 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.415 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.415 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.420 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.421 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.421 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.421 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.422 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.427 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.428 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.428 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.428 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.429 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.433 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.434 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.435 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.435 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.435 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 4 } Fri Feb 22 11:29:10.440 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.441 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.441 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.441 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.442 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.446 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.447 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.448 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.448 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.448 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.453 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.454 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.454 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.455 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.455 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.460 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.461 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.461 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.462 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.462 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.467 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.468 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.468 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.469 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.469 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.474 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.475 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.475 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.475 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.476 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.480 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.481 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.481 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.482 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.482 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.487 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.488 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.488 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.488 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.489 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.494 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.494 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.495 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.495 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.496 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.500 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.501 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.502 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.502 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.502 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.507 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.508 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.508 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.508 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.509 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.513 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.514 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.514 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.515 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.515 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.520 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.520 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.521 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.521 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.522 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.526 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.527 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.528 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.528 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.528 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.533 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.534 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.534 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.534 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.535 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.539 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.540 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.540 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.541 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.541 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.545 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.546 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.546 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.547 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.547 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.552 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.552 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.553 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.553 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.553 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.558 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.559 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.559 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.560 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.560 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.565 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.566 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.566 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.566 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.567 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.572 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.574 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.575 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.575 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.576 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.580 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.581 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.581 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.582 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.582 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.587 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.587 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.588 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.588 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.588 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.593 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.594 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.594 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.594 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.595 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.599 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.600 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.601 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.601 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.601 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.606 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.607 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.607 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.607 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.608 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.611 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.612 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.612 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.613 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.613 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.618 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.618 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.619 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.619 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.619 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.624 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.625 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.625 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.626 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.626 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.631 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.631 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.632 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.632 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.633 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.637 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.638 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.638 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.638 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.639 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.643 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.644 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.644 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.645 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.645 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.649 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.650 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.651 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.651 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.651 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.656 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.657 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.657 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.658 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.658 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.663 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.664 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.664 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.664 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.665 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.669 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.670 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.670 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.671 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.671 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.675 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.676 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.677 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.677 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.677 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.682 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.683 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.683 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.683 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.684 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.688 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.689 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.689 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.690 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.690 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.695 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.696 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.696 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.696 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.697 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.701 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.702 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.702 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.703 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.703 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.707 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.708 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.708 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.709 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.709 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.714 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.714 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.715 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.715 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.715 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.720 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.721 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.721 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.722 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.722 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.727 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.727 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.728 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.728 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.728 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.733 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.734 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.734 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.734 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.735 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.739 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.740 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.740 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.741 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.741 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.745 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.746 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.747 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.747 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.747 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.752 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.753 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.753 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.753 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.754 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.758 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.759 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.760 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.760 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.760 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.765 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.765 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.766 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.766 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.766 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.771 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.772 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.772 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.772 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.773 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.777 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.778 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.778 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.779 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.779 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.784 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.785 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.785 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.785 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.786 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.790 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.791 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.792 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.792 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.792 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.811 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.812 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.812 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.812 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.813 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.817 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.818 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.818 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.819 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.819 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.824 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.824 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.825 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.825 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.826 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.830 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.831 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.832 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.832 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.832 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.837 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.838 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.839 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.839 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.839 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 4 } Fri Feb 22 11:29:10.844 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.844 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.845 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.845 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.846 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.850 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.851 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.851 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.852 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.852 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.857 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.857 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.858 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.858 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.858 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.863 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.864 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.864 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.864 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.865 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.869 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.870 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.871 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.871 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.871 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.876 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.877 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.877 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.877 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.878 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.881 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.882 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.882 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.883 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.883 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.888 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.888 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.889 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.889 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.889 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.894 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.895 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.895 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.895 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.896 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.900 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.901 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.902 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.902 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.902 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.907 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.907 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.908 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.908 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.908 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.913 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.913 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.914 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.914 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.914 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.919 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.920 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.920 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.920 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.921 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.926 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.927 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.927 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.928 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.928 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.933 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.934 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.934 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.935 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.935 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.939 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.940 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.940 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.941 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.941 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.946 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.946 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.947 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.947 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.947 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.952 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.953 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.953 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.954 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.954 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.959 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.959 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.960 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.960 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.960 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.965 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.966 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.966 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.967 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.967 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:10.972 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.973 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.973 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.973 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.974 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:10.978 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.979 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.979 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.980 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.980 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:10.986 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.987 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.987 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.988 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.988 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:10.994 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:10.995 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:10.995 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:10.996 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:10.996 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.002 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.002 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.003 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.003 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.003 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.009 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.010 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.010 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.011 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.011 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.017 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.017 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.018 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.018 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.018 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.023 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.024 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.024 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.024 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.025 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.029 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.030 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.030 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.031 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.031 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.036 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.037 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.037 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.037 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.038 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.042 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.043 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.043 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.044 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.044 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.048 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.048 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.049 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.049 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.049 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.054 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.055 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.055 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.055 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.056 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.060 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.061 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.061 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.062 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.062 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.068 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.068 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.069 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.069 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.070 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.074 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.075 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.075 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.076 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.076 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.080 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.081 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.082 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.082 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.082 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.087 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.088 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.088 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.089 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.089 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.094 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.095 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.095 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.096 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.096 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.101 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.102 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.102 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.103 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.103 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.108 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.108 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.109 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.109 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.110 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.114 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.115 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.115 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.115 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.116 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.120 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.121 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.121 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.121 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.122 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.126 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.127 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.127 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.127 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.128 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.132 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.132 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.133 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.133 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.134 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.138 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.139 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.139 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.139 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.140 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.144 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.145 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.145 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.145 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.146 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.152 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.152 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.153 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.153 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.153 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.159 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.160 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.161 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.161 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.161 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.167 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.167 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.168 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.168 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.168 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.175 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.175 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.176 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.181 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.182 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.182 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.183 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.183 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.187 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.188 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.189 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.189 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.189 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.194 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.195 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.195 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.195 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.196 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.200 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.201 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.202 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.202 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.202 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.207 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.207 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.208 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.208 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.208 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.213 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.213 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.214 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.214 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.215 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.219 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.220 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.220 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.220 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.221 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.225 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.226 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.227 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.227 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.227 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.232 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.233 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.233 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.233 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.234 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.238 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.239 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.239 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.240 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.240 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.244 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.245 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.245 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.246 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.246 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.251 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.251 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.252 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.252 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.252 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.257 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.258 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.258 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.258 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.259 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.263 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.264 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.265 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.265 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.265 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.270 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.271 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.271 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.272 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.272 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.275 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.276 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.277 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.277 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.277 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.283 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.284 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.284 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.285 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.285 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.290 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.291 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.291 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.291 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.292 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.296 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.297 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.298 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.298 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.298 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.303 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.303 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.304 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.304 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.305 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.309 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.310 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.310 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.310 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.311 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.315 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.316 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.316 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.317 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.317 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.322 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.323 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.323 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.323 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.324 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.328 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.329 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.330 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.330 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.330 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.335 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.335 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.336 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.336 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.336 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.341 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.342 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.342 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.342 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.343 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.347 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.348 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.348 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.349 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.349 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.354 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.354 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.355 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.355 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.355 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.360 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.361 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.361 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.362 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.362 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 4 } Fri Feb 22 11:29:11.366 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.367 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.367 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.368 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.368 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.372 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.373 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.373 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.374 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.374 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.380 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.381 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.381 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.381 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.382 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.388 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.388 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.389 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.389 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.389 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.395 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.396 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.396 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.396 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.397 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.402 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.403 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.403 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.404 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.404 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.409 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.410 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.411 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.411 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.411 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.416 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.417 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.417 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.417 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.418 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.422 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.423 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.423 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.424 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.424 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.429 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.429 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.430 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.430 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.431 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.435 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.436 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.436 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.437 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.437 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.441 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.441 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.442 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.442 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.442 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.447 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.448 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.448 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.448 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.449 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.453 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.454 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.454 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.455 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.455 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.460 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.461 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.461 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.461 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.462 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.466 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.467 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.467 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.468 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.468 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.472 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.473 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.473 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.474 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.474 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.479 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.479 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.480 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.480 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.480 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.485 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.486 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.486 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.487 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.487 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.492 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.492 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.493 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.493 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.493 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 4 } Fri Feb 22 11:29:11.498 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.499 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.499 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.499 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.500 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.504 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.505 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.505 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.506 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.506 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.512 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.513 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.513 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.514 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.514 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.518 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.519 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.519 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.520 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.520 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.525 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.525 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.526 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.526 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.526 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.531 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.531 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.532 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.532 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.532 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.537 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.538 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.538 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.538 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.539 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.545 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.546 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.546 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.546 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.547 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.553 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.554 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.554 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.554 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.555 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.561 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.561 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.562 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.562 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.562 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.568 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.569 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.569 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.570 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.570 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.576 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.577 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.577 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.577 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.578 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.582 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.583 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.584 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.584 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.584 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.589 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.590 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.590 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.591 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.591 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.596 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.597 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.597 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.598 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.598 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.602 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.603 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.604 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.604 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.605 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.609 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.610 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.610 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.610 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.611 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.616 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.616 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.617 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.617 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.617 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.622 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.623 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.623 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.624 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.624 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.629 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.630 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.630 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.631 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.631 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 4 } Fri Feb 22 11:29:11.636 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.636 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.637 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.637 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.637 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.642 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.643 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.643 [conn40] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:11.644 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 6, 53 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.644 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.645 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.645 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.646 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 7, 54 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.647 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.647 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.648 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.648 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 8, 55 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.649 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.649 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.650 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.650 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 9, 56 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.651 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.652 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.652 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.652 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 50, 520 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.653 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.654 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.654 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.655 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.655 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.660 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.661 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.661 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.661 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.662 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.666 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.667 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.667 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.668 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.668 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.673 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.673 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.674 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.674 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.675 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.679 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.680 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.680 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.681 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.681 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.685 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.686 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.686 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.686 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.687 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.691 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.692 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.692 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.693 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.693 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.698 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.698 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.699 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.699 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.699 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.704 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.705 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.705 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.706 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.706 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.711 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.712 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.712 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.712 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.713 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.716 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.717 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.717 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.718 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.718 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.723 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.724 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.724 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.725 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.725 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.730 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.731 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.731 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.731 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.732 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.746 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.747 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.747 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.748 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.748 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 4 } Fri Feb 22 11:29:11.752 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.753 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.754 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.754 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.754 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.759 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.760 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.760 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.761 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.761 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.765 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.766 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.766 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.767 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.767 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.771 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.772 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.773 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.773 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.773 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.779 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.780 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.781 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.781 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.781 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.787 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.788 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.788 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.789 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.789 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.795 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.795 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.796 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.796 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.796 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.801 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.802 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.802 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.802 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.803 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.807 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.808 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.808 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.809 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.809 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.815 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.816 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.816 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.817 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.817 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.823 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.824 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.824 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.825 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.825 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.830 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.831 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.831 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.832 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.832 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.836 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.837 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.837 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.838 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.838 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.842 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.843 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.843 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.844 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.844 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.850 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.851 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.851 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.852 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.852 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.859 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.860 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.861 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.861 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.862 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.867 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.868 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.868 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.869 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.869 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.874 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.875 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.875 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.876 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.876 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.881 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.881 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.882 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.882 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.882 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.889 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.890 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.890 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.890 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.891 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 5 } Fri Feb 22 11:29:11.897 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.898 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.898 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.899 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.899 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.904 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.905 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.906 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.906 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.906 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.911 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.912 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.912 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.912 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.913 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.917 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.918 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.918 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.919 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.919 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.926 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.926 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.927 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.927 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.928 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.934 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.934 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.935 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.935 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.935 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.941 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.941 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.942 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.942 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.942 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.947 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.947 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.948 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.948 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.948 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.953 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.954 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.954 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.954 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.955 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.961 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.961 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.962 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.962 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.962 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.968 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.969 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.970 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.970 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.970 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.976 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.976 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.977 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.977 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.977 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.982 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.983 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.983 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.984 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.984 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.988 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.989 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.989 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.990 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.990 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:11.996 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:11.997 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:11.997 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:11.998 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:11.998 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.004 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.005 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.005 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.005 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.006 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.011 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.012 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.012 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.013 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.013 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.017 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.018 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.019 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.019 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.019 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.025 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.025 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.026 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.027 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.027 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.034 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.034 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.035 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.035 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.035 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.042 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.042 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.043 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.043 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.044 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.049 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.050 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.050 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.051 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.051 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.055 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.056 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.056 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.057 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.057 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.062 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.062 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.063 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.063 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.063 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.070 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.071 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.071 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.071 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.072 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.078 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.078 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.079 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.079 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.080 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.085 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.086 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.086 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.087 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.087 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.093 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.094 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.095 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.095 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.096 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.103 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.104 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.104 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.105 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.105 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.115 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.116 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.116 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.117 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.117 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.127 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.128 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.129 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.130 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.130 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.140 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.141 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.141 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.142 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.142 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.150 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.151 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.151 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.152 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.152 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.160 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.162 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.162 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.163 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.164 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.175 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.176 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.176 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.186 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.187 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.188 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.189 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.189 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.197 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.199 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.199 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.200 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.200 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.207 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.208 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.209 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.209 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.210 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.217 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.218 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.218 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.219 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.220 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.229 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.231 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.231 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.232 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.232 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 5 } Fri Feb 22 11:29:12.242 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.243 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.244 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.244 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.245 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.253 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.254 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.255 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.256 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.256 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.263 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.264 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.264 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.265 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.265 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.273 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.274 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.274 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.275 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.275 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.285 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.286 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.286 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.287 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.287 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.297 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.298 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.299 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.299 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.300 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.309 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.311 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.311 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.312 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.313 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.320 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.321 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.322 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.322 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.323 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.330 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.331 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.332 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.332 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.333 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.343 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.344 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.345 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.346 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.346 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.355 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.356 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.357 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.358 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.358 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.368 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.369 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.369 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.370 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.370 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.377 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.378 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.379 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.379 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.380 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.386 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.387 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.388 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.389 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.389 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.398 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.400 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.400 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.401 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.401 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.410 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.411 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.412 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.412 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.413 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.421 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.422 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.422 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.424 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.424 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.431 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.432 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.432 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.433 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.434 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.440 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.441 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.442 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.442 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.443 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.452 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.453 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.454 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.454 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.455 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 5 } Fri Feb 22 11:29:12.464 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.465 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.465 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.466 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.466 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.474 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.475 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.476 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.476 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.477 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.483 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.485 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.485 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.486 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.486 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.493 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.494 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.494 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.495 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.495 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.504 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.506 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.506 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.507 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.507 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.516 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.517 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.518 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.518 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.519 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.527 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.528 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.529 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.529 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.530 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.536 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.537 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.538 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.538 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.539 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.545 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.546 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.547 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.547 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.548 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.557 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.559 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.559 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.560 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.560 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.569 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.571 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.571 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.572 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.572 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.580 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.582 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.582 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.583 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.583 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.590 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.591 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.592 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.592 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.593 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.600 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.601 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.601 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.602 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.602 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.612 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.613 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.614 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.614 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.615 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.624 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.625 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.625 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.626 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.627 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.636 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.638 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.639 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.640 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.641 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 5 } Fri Feb 22 11:29:12.647 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:12.649 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:12.649 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:12.650 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:12.650 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 5 }
Fri Feb 22 11:29:12.657 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.658 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.659 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.659 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.660 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 5 }
Fri Feb 22 11:29:12.669 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.670 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.671 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.672 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.672 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 5 }
Fri Feb 22 11:29:12.682 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.683 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.683 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.684 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.684 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.693 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.694 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.695 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.695 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.696 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.703 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.704 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.705 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.705 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.706 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.713 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.714 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.714 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.715 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.715 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.726 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.727 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.727 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.728 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.728 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.738 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.739 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.740 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.741 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.741 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.750 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.751 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.752 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.752 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.753 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.760 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.761 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.762 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.762 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.763 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.770 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.771 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.772 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.772 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.773 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.783 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.784 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.785 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.785 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.786 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.796 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.797 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.797 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.798 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.798 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.807 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.808 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.809 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.809 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.810 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.817 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.818 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.819 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.819 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.820 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.827 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.828 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.828 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.829 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.829 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.840 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.841 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.841 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.842 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.843 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.852 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.853 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.854 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.855 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.855 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.863 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.864 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.865 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.866 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.866 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.873 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.874 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.874 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.875 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.875 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.882 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.883 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.883 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.884 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.884 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.894 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.895 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.896 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.896 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.897 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 5 }
Fri Feb 22 11:29:12.906 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.907 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.908 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.908 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.909 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.917 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.918 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.919 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.920 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.921 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.927 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.929 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.929 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.930 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.930 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.937 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.938 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.939 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.940 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.940 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.950 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.951 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.951 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.952 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.952 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.962 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.963 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.963 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.964 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.964 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.973 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.974 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.974 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.975 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.975 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.982 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.983 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.984 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.984 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.985 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:12.991 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:12.992 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:12.993 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:12.993 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:12.994 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.003 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.004 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.005 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.005 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.006 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.015 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.016 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.017 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.017 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.018 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.026 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.027 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.027 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.028 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.028 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.035 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.036 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.036 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.037 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.037 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.044 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.045 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.045 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.046 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.046 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.056 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.057 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.057 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.058 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.058 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.067 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.068 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.069 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.069 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.070 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.078 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.079 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.079 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.080 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.080 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.087 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.088 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.089 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.089 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.090 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.096 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.097 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.098 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.098 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.099 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.108 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.109 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.110 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.110 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.111 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.132 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.133 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.134 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.134 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.135 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.143 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.144 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.144 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.145 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.145 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.153 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.154 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.154 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.155 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.155 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.164 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.165 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.165 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.166 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.166 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.174 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.175 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.176 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.176 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.177 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.185 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.186 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.187 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.187 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.188 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.196 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.198 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.198 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.199 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.199 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.205 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.206 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.207 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.207 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.208 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.214 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.215 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.216 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.216 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.217 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.226 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.227 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.228 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.228 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.229 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.238 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.239 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.239 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.240 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.241 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.249 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.250 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.250 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.251 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.251 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.258 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.259 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.259 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.260 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.260 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.267 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.268 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.268 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.269 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.269 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.279 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.280 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.280 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.281 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.281 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.290 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.291 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.292 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.293 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.293 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.301 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.302 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.302 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.303 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.304 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.310 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.311 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.312 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.312 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.313 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.320 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.321 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.321 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.322 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.322 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.331 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.333 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.333 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.334 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.334 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.343 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.344 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.345 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.345 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.346 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.354 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.355 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.356 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.356 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.357 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.363 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.365 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.365 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.366 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.366 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.373 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.374 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.374 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.375 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.376 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.385 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.386 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.387 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.387 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.388 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.397 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.398 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.398 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.399 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.400 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.408 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.409 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.409 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.410 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.410 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.417 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.418 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.419 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.419 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.420 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.427 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.428 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.428 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.429 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.429 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.439 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.440 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.440 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.441 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.441 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.451 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.452 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.452 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.453 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.453 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.462 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.463 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.463 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.464 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.464 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.471 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.472 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.473 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.473 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.474 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.480 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.481 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.482 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.483 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.483 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.493 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.494 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.494 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.495 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.495 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.505 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.506 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.506 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.507 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.507 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.515 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.517 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.517 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.518 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.518 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.525 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.526 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.527 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.527 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.528 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.535 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.536 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.537 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.537 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.538 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.548 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.549 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.550 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.550 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.551 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 5 }
Fri Feb 22 11:29:13.560 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.561 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.562 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.563 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.563 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 5 }
Fri Feb 22 11:29:13.571 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:13.572 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:13.573 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:13.573 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:13.574 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.581 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.582 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.583 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.584 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.584 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.592 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.593 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.594 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.594 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.595 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.602 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.603 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.604 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.605 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.605 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.613 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.614 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.615 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.615 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.616 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.624 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.625 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.626 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.626 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.626 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.633 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.634 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.634 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.635 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.635 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.641 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.642 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.643 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.643 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.644 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.653 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.654 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.654 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.655 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.655 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.664 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.665 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.665 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.666 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.666 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.674 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.675 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.676 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.676 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.677 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.683 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.684 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.684 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.685 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.685 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.692 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.693 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.693 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.694 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.694 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.704 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.705 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.705 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.706 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.706 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.715 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.716 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.716 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.717 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.717 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.725 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.726 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.726 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.727 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.727 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.734 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.735 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.735 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.736 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.736 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.742 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.743 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.744 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.744 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.745 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.754 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.755 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.755 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.756 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.756 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:13.765 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.766 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.766 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.767 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.767 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.775 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.776 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.776 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.777 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.777 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.783 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.784 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.784 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.785 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.785 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.791 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.792 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.792 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.793 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.793 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.799 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.800 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.801 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.801 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.802 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.808 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.809 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.810 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.810 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.811 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.818 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.819 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.819 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.820 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.820 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.827 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.828 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.828 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.829 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.829 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.837 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.838 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.839 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.839 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.840 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.847 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.848 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.849 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.849 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.850 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.859 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.860 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.860 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.861 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.861 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.870 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.871 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.871 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.872 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.872 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.879 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.880 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.880 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.882 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.882 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.889 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.890 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.890 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.891 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.891 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.900 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.901 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.902 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.903 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.903 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.912 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.913 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.913 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.914 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.914 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.923 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.924 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.924 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.925 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.925 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.932 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.933 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.934 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.935 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.935 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.942 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.943 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.943 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.944 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.944 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.953 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.954 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.955 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.955 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.956 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:13.965 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.966 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.966 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.967 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.967 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:13.976 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.978 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.978 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.979 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.979 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:13.985 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.986 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.987 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.988 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.988 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:13.994 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:13.996 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:13.996 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:13.997 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:13.997 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.003 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.005 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.005 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.006 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.007 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.014 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.015 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.015 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.016 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.016 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.022 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.023 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.024 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.024 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.025 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.030 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.031 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.032 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.032 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.033 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.038 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.039 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.040 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.040 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.041 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.047 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.048 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.048 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.049 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.049 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.056 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.057 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.057 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.058 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.058 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.065 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.067 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.067 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.068 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.068 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.074 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.075 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.076 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.076 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.077 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.083 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.084 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.085 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.085 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.085 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.095 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.096 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.097 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.097 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.098 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.107 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.108 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.108 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.109 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.109 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.118 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.119 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.119 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.120 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.121 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.127 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.128 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.129 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.130 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.130 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.137 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.138 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.138 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.139 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.140 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.149 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.150 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.151 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.151 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.152 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.161 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.162 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.163 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.163 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.164 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.172 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.173 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.173 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.174 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.175 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.182 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.183 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.184 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.184 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.185 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.193 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.194 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.195 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.196 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.196 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.204 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.205 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.206 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.206 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.207 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.216 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.217 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.217 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.218 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.219 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.227 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.228 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.229 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.229 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.230 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.238 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.239 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.239 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.240 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.240 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.247 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.248 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.249 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.249 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.250 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.259 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.260 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.261 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.262 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.262 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.271 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.272 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.273 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.274 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.274 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.282 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.284 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.284 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.285 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.285 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.292 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.293 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.293 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.294 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.294 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.301 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.302 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.303 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.303 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.304 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.314 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.315 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.315 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.316 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.316 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.325 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.326 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.327 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.328 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.328 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.336 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.338 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.338 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.339 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.339 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.346 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.347 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.347 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.348 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.348 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.355 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.356 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.357 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.357 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.358 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.367 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.368 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.369 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.369 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.370 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 5 } Fri Feb 22 11:29:14.379 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.380 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.381 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.382 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.382 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.390 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.392 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.392 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.393 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.393 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.399 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.400 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.400 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.401 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.402 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.408 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.409 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.410 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.410 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.411 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.417 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.418 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.419 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.420 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.420 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.427 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.428 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.429 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.429 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.430 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.438 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.439 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.439 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.440 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.440 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.448 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.449 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.449 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.450 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.451 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.459 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.460 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.461 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.461 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.462 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.470 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.471 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.472 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.472 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.473 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.482 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.483 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.483 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.484 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.485 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.494 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.495 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.495 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.496 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.496 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.504 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.505 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.505 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.506 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.506 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.513 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.514 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.515 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.516 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.516 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.526 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.527 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.527 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.528 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.528 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.538 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.539 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.539 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.540 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.540 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.549 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.550 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.550 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.551 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.552 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.558 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.559 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.560 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.560 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.561 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.568 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.569 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.569 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.570 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.570 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.580 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.581 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.582 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.582 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.583 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 5 } Fri Feb 22 11:29:14.591 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.592 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.592 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.593 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.593 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.608 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.609 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.610 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.610 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.610 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.614 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.615 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.615 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.616 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.616 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.620 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.621 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.621 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.621 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.622 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.625 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.626 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.627 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.627 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.627 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.631 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.632 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.632 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.632 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.633 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.636 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.637 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.638 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.638 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.638 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.642 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.643 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.643 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.643 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.644 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.648 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.649 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.649 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.649 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.650 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.654 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.655 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.655 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.655 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.656 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.660 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.661 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.661 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.662 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.662 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.667 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.668 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.668 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.669 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.669 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.673 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.674 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.674 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.675 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.675 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.679 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.680 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.680 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.681 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.681 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.687 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.688 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.688 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.689 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.689 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.696 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.698 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.699 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.699 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.700 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.705 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.706 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.706 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.707 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.707 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.711 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.712 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.712 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.713 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.713 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.718 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.719 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.719 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.720 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.720 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.726 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.727 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.727 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.728 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.728 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 5 } Fri Feb 22 11:29:14.735 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.735 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.736 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.736 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.736 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.742 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.743 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.743 [conn40] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:14.744 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 6, 53 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.744 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.745 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.745 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.746 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 7, 54 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.747 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.747 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.747 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.748 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 8, 55 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.749 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.749 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.749 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.750 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 9, 56 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.751 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.751 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.751 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.752 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 50, 520 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.753 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.753 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.753 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.754 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.754 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.758 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.759 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.759 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.759 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.760 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.763 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.764 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.765 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.765 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.765 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.769 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.770 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.770 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.770 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.771 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.774 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.775 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.776 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.776 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.776 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.780 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.781 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.781 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.781 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.782 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.786 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.787 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.788 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.788 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.788 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.794 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.795 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.795 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.795 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.796 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.801 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.802 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.802 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.803 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.803 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.809 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.810 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.810 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.810 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.811 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.817 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.817 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.818 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.818 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.818 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.823 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.823 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.824 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.824 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.825 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.829 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.830 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.830 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.830 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.831 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.837 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.838 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.838 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.839 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.839 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 5 } Fri Feb 22 11:29:14.845 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.846 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.846 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.846 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.847 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.852 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.853 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.853 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.854 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.854 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.860 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.861 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.861 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.861 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.862 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.867 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.868 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.868 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.868 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.869 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.873 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.874 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.874 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.875 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.875 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.879 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.879 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.880 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.880 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.880 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.885 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.886 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.886 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.887 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.887 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.893 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.893 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.894 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.894 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.895 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.899 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.900 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.900 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.901 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.901 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.906 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.906 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.907 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.907 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.907 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.911 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.912 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.912 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.912 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.913 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.917 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.918 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.918 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.919 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.919 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.925 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.926 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.926 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.926 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.927 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.931 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.932 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.932 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.933 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.933 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.938 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.938 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.939 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.939 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.939 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.943 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.944 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.944 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.945 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.945 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.950 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.951 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.951 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.952 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.952 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.958 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.959 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.959 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.960 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.960 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.965 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.965 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.966 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.966 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.966 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.971 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.972 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.972 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.973 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.973 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 6 } Fri Feb 22 11:29:14.976 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.977 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.978 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.978 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.978 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:14.983 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.984 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.984 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.985 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.985 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:14.991 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.992 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.992 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.992 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.993 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:14.997 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:14.998 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:14.998 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:14.999 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:14.999 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.004 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.005 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.005 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.005 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.006 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.009 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.010 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.010 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.011 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.011 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.016 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.016 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.017 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.017 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.017 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.024 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.024 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.025 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.025 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.026 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.030 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.031 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.031 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.032 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.032 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.037 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.038 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.039 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.039 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.039 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.043 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.043 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.044 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.044 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.045 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.050 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.051 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.051 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.051 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.052 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.058 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.059 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.059 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.059 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.060 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.065 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.065 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.066 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.066 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.067 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.072 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.072 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.073 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.073 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.073 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.077 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.078 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.078 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.079 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.079 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.084 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.085 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.085 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.087 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.087 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.094 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.094 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.095 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.095 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.095 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.100 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.101 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.101 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.102 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.102 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.107 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.108 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.108 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.109 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.109 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 6 } Fri Feb 22 11:29:15.113 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.114 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.114 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.114 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.115 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.120 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.121 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.121 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.122 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.122 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.128 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.129 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.129 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.130 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.130 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.134 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.135 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.136 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.136 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.136 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.142 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.142 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.143 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.143 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.143 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.147 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.148 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.148 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.149 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.149 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.154 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.155 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.155 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.156 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.156 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.162 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.163 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.163 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.163 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.164 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.168 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.169 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.169 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.170 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.170 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 6 } Fri Feb 22 11:29:15.175 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.176 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.176 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.177 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.180 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.181 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.181 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.181 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.182 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.186 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.187 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.187 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.188 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.188 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.194 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.195 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.195 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.195 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.196 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.200 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.201 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.201 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.202 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.202 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.207 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.208 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.208 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.208 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.209 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.212 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.213 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.213 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.214 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.214 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.219 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.219 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.220 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.220 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.220 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.226 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.227 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.227 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.228 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.228 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.232 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.233 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.233 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.234 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.234 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.239 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.240 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.240 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.240 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.241 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.244 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.245 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.246 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.246 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.246 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.251 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.252 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.252 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.252 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.253 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.259 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.259 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.260 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.260 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.260 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.266 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.266 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.267 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.267 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.267 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.272 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.273 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.273 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.274 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.274 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.279 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.279 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.280 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.280 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.280 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.285 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.286 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.286 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.287 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.287 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.293 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.294 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.294 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.294 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.295 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.299 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.300 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.300 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.301 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.301 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.306 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.306 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.307 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.307 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.307 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.311 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.312 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.312 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.312 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.313 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.319 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.319 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.320 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.320 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.320 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.326 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.327 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.327 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.328 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.328 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.332 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.333 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.333 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.334 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.334 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.339 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.340 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.340 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.340 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.341 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.344 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.345 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.345 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.346 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.346 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.351 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.351 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.352 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.352 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.352 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.358 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.359 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.359 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.360 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.360 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.364 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.365 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.365 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.366 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.366 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.371 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.372 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.372 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.372 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.373 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.376 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.377 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.377 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.378 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.378 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.383 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.383 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.384 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.384 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.384 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.390 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.391 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.391 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.392 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.392 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.396 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.397 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.397 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.398 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.398 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.403 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.404 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.404 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.404 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.405 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.408 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.409 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.409 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.410 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.410 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.415 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.415 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.416 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.416 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.416 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.422 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.423 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.423 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.424 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.424 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.428 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.429 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.430 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.430 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.430 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.435 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.436 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.436 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.436 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.437 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.440 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.441 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.441 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.442 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.442 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.446 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.447 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.448 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.448 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.448 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.454 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.455 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.455 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.455 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.456 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.460 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.461 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.461 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.462 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.462 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.467 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.467 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.468 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.468 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.468 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.472 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.472 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.473 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.473 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.473 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.478 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.479 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.479 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.480 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.480 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.486 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.487 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.487 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.487 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.488 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.492 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.493 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.493 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.494 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.494 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.498 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.499 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.500 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.500 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.500 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 6 }
Fri Feb 22 11:29:15.504 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.504 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.505 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.505 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.506 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.510 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.511 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.511 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.512 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.512 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.518 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.519 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.519 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.520 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.520 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.524 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.525 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.525 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.526 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.526 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.531 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.531 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.532 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.532 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.532 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.546 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.547 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.547 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.547 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.548 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.552 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.553 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.554 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.554 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.554 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.560 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.561 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.561 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.562 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.562 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.566 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.567 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.567 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.568 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.568 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.573 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.574 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.574 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.574 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.575 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.578 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.579 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.579 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.580 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.580 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.585 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.585 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.586 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.586 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.586 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.592 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.593 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.593 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.594 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.594 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.599 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.599 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.600 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.600 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.600 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.605 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.606 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.606 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.607 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.607 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.610 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.611 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.611 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.612 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.612 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.617 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.617 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.618 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.618 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.618 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.624 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.625 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.625 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.626 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.626 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.630 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.631 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.632 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.632 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.632 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.637 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.638 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.638 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.638 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.639 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 6 }
Fri Feb 22 11:29:15.642 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.643 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.643 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.644 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.644 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.648 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.649 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.650 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.650 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.650 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.656 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.657 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.657 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.658 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.658 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.663 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.664 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.665 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.665 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.665 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.670 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.671 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.671 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.671 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.672 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.676 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.677 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.677 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.678 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.678 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.683 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.684 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.684 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.684 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.685 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.691 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.691 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.692 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.692 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.692 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.697 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.697 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.698 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.698 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.698 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.703 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.704 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.704 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.705 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.705 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.708 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.709 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.709 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.710 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.710 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.715 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.716 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.716 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.716 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.717 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.723 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.723 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.724 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.724 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.725 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 6 }
Fri Feb 22 11:29:15.729 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:15.730 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:15.730 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:15.730 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:15.731 [conn40] build index done. scanned 9 total records.
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.735 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.736 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.737 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.737 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.737 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.741 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.741 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.742 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.742 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.743 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.747 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.748 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.748 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.749 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.749 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.755 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.756 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.756 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.756 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.757 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.761 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.762 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.762 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.763 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.763 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.768 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.768 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.769 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.769 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.770 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.773 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.774 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.774 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.774 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.775 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.779 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.780 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.781 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.781 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.781 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.786 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.787 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.787 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.787 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.788 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.794 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.794 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.795 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.795 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.795 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.800 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.801 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.801 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.802 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.802 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.808 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.809 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.809 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.810 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.810 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.815 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.815 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.816 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.816 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.816 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.822 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.823 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.823 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.824 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.825 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.830 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.831 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.831 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.831 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.832 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.837 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.837 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.838 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.838 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.838 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.843 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.844 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.844 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.845 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.845 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.850 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.850 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.851 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.851 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.851 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.857 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.858 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.858 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.859 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.859 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.864 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.865 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.865 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.866 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.866 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.871 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.871 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.872 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.872 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.872 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.876 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.877 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.877 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.877 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.878 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.882 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.883 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.883 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.884 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.884 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.890 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.891 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.891 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.892 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.892 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.896 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.897 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.897 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.898 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.898 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.903 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.903 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.904 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.904 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.904 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:15.908 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.909 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.909 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.909 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.910 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.914 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.915 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.915 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.916 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.916 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.922 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.923 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.923 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.923 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.924 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.929 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.930 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.930 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.931 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.931 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.936 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.936 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.937 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.937 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.937 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.942 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.943 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.943 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.944 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.944 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.949 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.949 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.950 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.950 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.950 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.956 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.957 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.958 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.958 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.958 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.963 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.963 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.964 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.964 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.965 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.969 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.970 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.971 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.971 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.971 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.975 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.976 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.976 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.976 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.977 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.981 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.982 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.983 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.983 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.983 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.989 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.990 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.990 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.991 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.991 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:15.996 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:15.996 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:15.997 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:15.997 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:15.997 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.002 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.003 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.003 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.004 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.004 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.008 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.008 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.009 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.009 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.009 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.014 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.015 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.015 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.016 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.016 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.022 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.022 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.023 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.023 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.024 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.028 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.029 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.030 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.030 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.031 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.035 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.036 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.036 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.037 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.037 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 6 } Fri Feb 22 11:29:16.041 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.042 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.042 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.042 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.043 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:16.048 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.048 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.049 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.049 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.049 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:16.054 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.055 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.055 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.056 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.056 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:16.062 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.063 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.063 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.064 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.064 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:16.069 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.069 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.070 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.070 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.071 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 6 } Fri Feb 22 11:29:16.077 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.078 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.078 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.079 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.079 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.084 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.085 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.085 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.086 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.086 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.092 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.093 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.093 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.094 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.094 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.099 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.100 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.101 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.101 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.101 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.106 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.107 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.107 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.108 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.108 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.113 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.113 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.114 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.114 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.114 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.119 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.120 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.120 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.120 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.121 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.126 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.127 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.127 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.128 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.128 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.133 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.133 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.134 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.134 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.134 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.139 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.140 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.140 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.141 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.141 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.144 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.145 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.145 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.146 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.146 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.151 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.151 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.152 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.152 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.152 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.158 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.159 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.160 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.161 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.161 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.167 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.168 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.168 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.169 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.169 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.174 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.175 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.175 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.176 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.176 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.180 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.180 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.181 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.181 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.181 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.187 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.188 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.188 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.188 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.189 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.198 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.199 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.199 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.200 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.200 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.206 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.207 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.208 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.208 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.208 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.214 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.215 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.215 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.216 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.216 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.221 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.222 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.222 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.223 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.223 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.229 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.230 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.231 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.231 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.232 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.237 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.238 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.238 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.238 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.239 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.245 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.246 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.246 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.247 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.247 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.252 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.253 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.253 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.254 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.254 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.260 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.261 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.262 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.262 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.262 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.268 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.269 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.269 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.269 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.270 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.277 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.278 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.278 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.279 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.279 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.284 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.285 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.285 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.285 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.286 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.291 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.292 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.292 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.292 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.293 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.296 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.297 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.297 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.297 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.298 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.302 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.303 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.304 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.304 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.305 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.311 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.311 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.312 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.312 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.312 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.317 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.318 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.318 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.318 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.319 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.324 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.324 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.325 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.325 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.325 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.329 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.330 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.330 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.330 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.331 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.335 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.336 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.336 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.337 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.337 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.341 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.342 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.342 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.343 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.343 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.347 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.347 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.348 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.348 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.348 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.352 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.353 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.354 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.354 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.354 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.358 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.359 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.359 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.360 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.360 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.364 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.364 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.365 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.365 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.365 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.372 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.372 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.373 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.373 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.374 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.380 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.380 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.381 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.381 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.381 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.387 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.388 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.388 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.388 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.389 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.394 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.395 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.395 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.396 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.396 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.402 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.403 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.403 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.404 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.404 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.410 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.411 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.411 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.412 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.412 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.418 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.418 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.419 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.419 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.419 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.424 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.425 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.425 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.426 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.426 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.432 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.433 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.433 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.433 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.434 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.438 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.439 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.439 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.440 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.440 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.446 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.447 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.447 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.447 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.448 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.452 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.452 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.453 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.453 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.453 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.458 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.459 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.459 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.460 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.460 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 6 }
Fri Feb 22 11:29:16.463 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.464 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.464 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.465 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.465 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.469 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.470 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.470 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.471 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.471 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.476 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.476 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.477 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.477 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.477 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.483 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.484 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.484 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.485 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.485 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.489 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.490 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.490 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.491 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.491 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.497 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.498 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.498 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.498 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.499 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.513 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.514 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.514 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.515 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.515 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.521 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.522 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.522 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.523 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.523 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.528 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.529 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.529 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.530 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.530 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.534 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.535 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.535 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.536 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.536 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.541 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.541 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.542 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.542 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.542 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.547 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.548 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.548 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.549 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.549 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.555 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.555 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.556 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.556 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.556 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.561 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.561 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.562 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.562 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.562 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.567 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.568 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.568 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.568 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.569 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.572 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.573 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.573 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.573 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.574 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.578 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.579 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.579 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.580 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.580 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.586 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.586 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.587 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.587 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.587 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.593 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.594 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.595 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.595 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.596 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.603 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.604 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.604 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.605 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.605 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 6 }
Fri Feb 22 11:29:16.611 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.612 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.612 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.613 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.613 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.620 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.621 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.622 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.622 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.623 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.632 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.633 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.633 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.634 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.634 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.643 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.644 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.645 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.645 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.646 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.654 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.655 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.655 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.656 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.656 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.664 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.665 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.665 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.666 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.666 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.675 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.676 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.677 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.678 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.678 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.685 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.686 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.687 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.687 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.688 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.696 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.697 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.698 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.698 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.699 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 10, "bits" : 6 }
Fri Feb 22 11:29:16.706 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:16.707 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:16.707 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:16.708 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:16.708 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.717 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.718 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.719 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.719 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.720 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.727 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.728 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.728 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.729 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.729 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.738 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.739 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.740 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.740 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.741 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.747 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.748 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.749 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.751 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.752 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.759 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.762 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.762 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.763 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.763 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.768 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.769 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.770 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.770 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.771 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.778 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.779 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.779 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.780 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.780 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.789 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.790 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.791 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.791 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.792 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.799 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.800 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.800 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.801 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.801 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.808 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.809 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.810 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.810 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.811 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 6 } Fri Feb 22 11:29:16.816 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.817 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.817 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.818 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.818 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.825 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.826 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.827 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.828 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.828 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.834 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.835 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.835 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.836 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.836 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.842 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.843 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.843 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.844 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.844 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.852 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.853 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.854 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.854 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.855 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.861 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.862 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.862 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.863 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.863 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.870 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.871 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.872 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.872 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.873 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.882 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.883 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.883 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.884 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.884 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.893 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.894 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.895 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.895 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.896 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.904 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.905 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.905 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.906 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.906 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.914 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.915 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.915 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.916 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.916 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.925 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.926 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.927 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.927 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.928 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.937 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.938 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.938 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.939 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.939 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.947 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.948 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.948 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.949 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.950 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.957 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.958 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.958 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.959 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.959 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.966 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.968 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.968 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.969 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.969 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.976 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.977 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.978 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.978 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.979 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.987 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.988 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.989 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.989 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.990 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:16.996 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:16.997 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:16.998 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:16.999 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:16.999 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:17.006 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.007 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.008 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.008 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.009 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 6 } Fri Feb 22 11:29:17.014 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.015 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.015 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.016 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.016 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.023 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.024 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.025 [conn40] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:17.025 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 6, 53 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.027 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.027 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.028 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.028 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 7, 54 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.030 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.030 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.031 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.031 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 8, 55 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.033 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.033 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.034 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.034 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 9, 56 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.036 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.036 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.037 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.037 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 50, 520 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.039 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.039 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.040 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.040 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.041 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.046 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.047 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.048 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.048 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.049 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.054 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.056 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.056 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.057 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.057 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.063 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.064 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.064 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.065 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.065 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.071 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.072 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.072 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.073 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.073 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.079 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.080 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.080 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.081 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.081 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.088 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.090 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.090 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.090 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.091 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.099 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.100 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.100 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.101 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.101 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.108 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.109 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.109 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.110 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.110 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.118 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.119 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.120 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.120 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.121 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.127 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.128 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.129 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.129 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.130 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.138 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.139 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.139 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.140 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.140 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.147 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.148 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.149 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.149 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.150 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.156 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.157 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.158 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.158 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.159 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 6 } Fri Feb 22 11:29:17.165 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.166 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.167 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.167 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.168 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 7 } Fri Feb 22 11:29:17.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.176 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.177 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.177 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.182 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.183 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.184 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.184 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.184 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.192 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.193 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.193 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.193 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.194 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.200 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.201 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.201 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.202 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.202 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.208 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.209 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.210 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.210 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.211 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.218 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.219 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.219 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.220 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.220 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.225 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.226 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.226 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.227 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.227 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.234 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.235 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.236 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.236 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.237 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.243 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.243 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.244 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.245 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.245 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.251 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.252 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.253 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.253 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.254 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.261 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.262 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.262 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.263 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.263 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.268 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.269 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.270 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.270 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.271 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.278 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.279 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.279 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.280 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.280 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.286 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.287 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.287 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.288 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.288 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.295 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.296 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.296 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.297 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.297 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.305 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.306 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.307 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.307 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.308 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.313 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.314 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.315 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.315 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.316 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.323 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.324 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.325 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.326 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.326 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.332 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.333 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.334 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.334 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.335 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 7 }
Fri Feb 22 11:29:17.342 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.343 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.343 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.344 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.344 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.352 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.353 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.354 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.354 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.355 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.360 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.361 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.362 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.363 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.363 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.371 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.372 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.372 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.373 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.373 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.380 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.381 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.381 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.382 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.382 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.390 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.391 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.392 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.392 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.393 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.401 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.402 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.402 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.403 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.403 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.408 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.409 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.410 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.411 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.411 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.419 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.420 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.420 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.421 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.422 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.428 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.429 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.430 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.430 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.431 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.437 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.438 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.439 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.439 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.440 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.447 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.449 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.449 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.450 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.450 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.455 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.456 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.456 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.457 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.457 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.464 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.465 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.466 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.466 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.467 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.473 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.474 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.474 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.475 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.475 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.482 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.483 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.483 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.484 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.484 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.494 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.495 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.496 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.496 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.497 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.502 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.503 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.503 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.503 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.504 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.512 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.513 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.513 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.514 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.514 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.520 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.521 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.522 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.522 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.523 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.529 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.530 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.530 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.531 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.531 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.539 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.540 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.540 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.541 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.541 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.546 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.547 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.547 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.548 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.548 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.555 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.556 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.557 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.557 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.558 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.564 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.564 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.565 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.565 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.566 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.572 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.573 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.574 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.574 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.575 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.582 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.583 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.583 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.584 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.584 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.589 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.590 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.591 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.591 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.591 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.598 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.599 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.599 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.600 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.600 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.606 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.607 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.607 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.608 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.608 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.614 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.615 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.615 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.616 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.616 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.623 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.624 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.624 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.625 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.625 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.630 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.631 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.631 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.632 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.632 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.639 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.640 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.640 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.641 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.641 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.647 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.648 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.648 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.649 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.649 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.655 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.656 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.656 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.657 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.657 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.664 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.665 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.666 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.666 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.666 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.671 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.672 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.672 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.673 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.673 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.680 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.681 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.681 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.682 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.682 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.687 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.688 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.689 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.689 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.690 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 7 }
Fri Feb 22 11:29:17.696 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.697 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.697 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.697 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.698 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.705 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.706 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.706 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.707 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.707 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.712 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.713 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.713 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.714 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.714 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.721 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.722 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.722 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.723 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.723 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.729 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.730 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.730 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.731 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.731 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.737 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.738 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.738 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.739 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.739 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.746 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.747 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.747 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.748 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.748 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.753 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.753 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.754 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.754 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.755 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.761 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.762 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.763 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.763 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.764 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.769 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.770 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.770 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.771 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.771 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.789 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.790 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.791 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.791 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.792 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.799 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.800 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.800 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.801 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.801 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.806 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.807 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.807 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.808 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.808 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.815 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.816 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.816 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.816 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.817 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.822 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.823 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.824 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.824 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.824 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.830 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.831 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.832 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.832 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.833 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.839 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.840 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.841 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.841 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.842 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.846 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.847 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.847 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.848 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.848 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.855 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.856 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.856 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.857 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.857 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.862 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.863 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.864 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.864 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.865 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:17.871 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.871 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.872 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.872 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.873 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.880 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.881 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.881 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.882 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.883 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.887 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.888 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.888 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.889 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.889 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.896 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.897 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.897 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.898 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.898 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.904 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.905 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.905 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.905 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.906 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 7 }
Fri Feb 22 11:29:17.912 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:17.913 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:17.913 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:17.913 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:17.914 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.921 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.922 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.922 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.923 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.923 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.928 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.929 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.929 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.930 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.930 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.937 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.938 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.938 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.939 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.939 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.944 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.945 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.946 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.946 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.947 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.953 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.953 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.954 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.954 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.955 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.962 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.963 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.963 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.964 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.964 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.968 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.969 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.970 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.970 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.971 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.977 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.978 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.979 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.979 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.979 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.985 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.986 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.986 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.987 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.987 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:17.993 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:17.994 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:17.994 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:17.995 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:17.995 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:18.002 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.003 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.003 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.004 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.004 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:18.009 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.010 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.010 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.011 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.011 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:18.018 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.019 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.019 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.019 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.020 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:18.025 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.026 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.027 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.027 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.028 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 7 } Fri Feb 22 11:29:18.033 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.034 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.035 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.035 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.036 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.042 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.043 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.044 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.044 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.045 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.049 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.050 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.050 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.051 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.051 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.058 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.059 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.059 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.060 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.060 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.066 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.066 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.067 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.067 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.068 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.074 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.075 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.075 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.076 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.076 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.083 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.083 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.084 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.084 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.085 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.089 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.090 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.090 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.091 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.091 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.096 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.097 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.097 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.098 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.098 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.102 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.103 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.103 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.104 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.104 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.108 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.109 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.109 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.110 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.110 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.115 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.116 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.116 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.117 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.117 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.120 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.121 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.121 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.122 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.122 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.127 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.128 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.128 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.128 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.129 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.133 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.133 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.134 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.134 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.134 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.139 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.139 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.140 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.140 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.141 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.146 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.146 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.147 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.147 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.147 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.151 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.152 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.152 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.152 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.152 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.157 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.158 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.158 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.159 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.159 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.163 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.164 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.164 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.165 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.165 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 7 } Fri Feb 22 11:29:18.169 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.170 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.170 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.171 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.171 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.177 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.178 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.178 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.179 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.179 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.183 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.184 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.184 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.184 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.184 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.190 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.190 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.191 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.191 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.191 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.196 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.196 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.197 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.197 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.197 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.202 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.203 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.203 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.203 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.204 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.209 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.209 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.210 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.210 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.210 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.214 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.214 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.215 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.215 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.215 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.220 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.221 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.221 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.222 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.222 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.226 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.227 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.227 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.227 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.228 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.233 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.234 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.234 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.234 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.235 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.240 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.241 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.241 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.241 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.242 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.245 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.246 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.246 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.246 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.247 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.252 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.252 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.253 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.253 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.254 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.258 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.258 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.259 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.259 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.259 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.264 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.264 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.265 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.265 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.265 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.271 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.271 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.271 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.272 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.272 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 7 } Fri Feb 22 11:29:18.275 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.276 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.276 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.277 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.277 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.282 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.283 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.283 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.283 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.283 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.288 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.288 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.288 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.289 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.289 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.294 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.294 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.295 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.295 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.295 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.301 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.301 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.302 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.302 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.302 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.307 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.308 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.308 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.309 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.310 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.314 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.315 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.315 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.316 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.316 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.321 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.322 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.322 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.323 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.323 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.329 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.329 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.330 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.330 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.330 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.336 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.336 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.337 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.337 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.338 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.341 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.342 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.342 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.343 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.343 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.348 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.349 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.349 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.349 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.350 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.354 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.355 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.355 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.356 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.356 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.361 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.361 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.362 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.362 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.362 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.367 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.368 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.369 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.369 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.369 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.373 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.374 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.374 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.374 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.375 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.380 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.381 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.381 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.381 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.382 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.386 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.387 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.387 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.387 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.388 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.393 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.393 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.394 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.394 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.394 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.400 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.401 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.401 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.401 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.402 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.405 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.406 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.406 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.407 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.407 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.412 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.413 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.413 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.414 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.414 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.418 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.419 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.419 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.420 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.420 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.424 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.425 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.426 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.426 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.426 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.431 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.432 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.433 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.433 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.433 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.437 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.437 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.438 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.438 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.438 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.443 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.444 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.445 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.445 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.445 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.450 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.450 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.451 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.451 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.452 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.456 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.457 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.458 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.458 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.458 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.464 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.464 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.465 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.465 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.466 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.469 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.470 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.470 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.471 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.471 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.477 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.477 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.478 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.478 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.479 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.483 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.484 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.484 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.485 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.485 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.490 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.491 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.491 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.492 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.492 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.497 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.498 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.498 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.499 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.500 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.504 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.504 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.505 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.505 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.505 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.511 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.512 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.512 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.513 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.513 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.518 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.518 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.519 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.519 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.520 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.525 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.525 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.526 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.526 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.527 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.533 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.533 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.534 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.534 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.535 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.540 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.540 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.541 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.541 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.542 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.547 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.548 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.548 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.548 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.549 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.553 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.554 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.554 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.555 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.555 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 7 }
Fri Feb 22 11:29:18.560 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.561 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.561 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.562 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.562 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.567 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.568 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.569 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.569 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.569 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.575 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.576 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.576 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.577 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.577 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.582 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.583 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.583 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.583 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.584 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.589 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.590 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.590 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.591 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.591 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.596 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.597 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.597 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.598 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.598 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.603 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.604 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.605 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.605 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.605 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.609 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.610 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.610 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.610 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.611 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.616 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.616 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.617 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.617 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.618 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.622 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.623 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.623 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.623 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.624 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.628 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.629 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.629 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.630 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.630 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.635 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.636 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.636 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.636 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.637 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.640 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.641 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.641 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.642 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.642 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.647 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.648 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.648 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.649 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.649 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.653 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.654 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.654 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.655 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.655 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.660 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.660 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.661 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.661 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.661 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.667 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.667 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.668 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.668 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.668 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.672 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.673 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.673 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.674 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.674 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.679 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.680 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.680 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.681 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.681 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.685 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.686 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.686 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.687 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.687 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 7 }
Fri Feb 22 11:29:18.691 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.692 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.693 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.693 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.693 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.698 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.699 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.700 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.700 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.700 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.705 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.706 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.706 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.707 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.707 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.711 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.712 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.712 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.713 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.713 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.718 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.719 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.719 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.720 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.720 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.725 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.726 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.726 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.726 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.727 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.732 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.733 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.734 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.734 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.734 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.739 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.740 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.741 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.741 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.741 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.746 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.747 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.747 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.747 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.748 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.753 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.754 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.754 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.755 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.755 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.760 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.761 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.761 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.762 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.762 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.777 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.778 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.778 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.779 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.779 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.783 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.783 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.784 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.784 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.784 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.790 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.791 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.791 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.791 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.792 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.796 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.797 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.797 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.797 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.798 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.802 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.803 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.803 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.804 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.804 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.811 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.813 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.813 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.814 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.814 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.818 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.818 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.819 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.819 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.819 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.824 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.825 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.826 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.826 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.826 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.831 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.831 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.832 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.832 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.832 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 7 }
Fri Feb 22 11:29:18.837 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.838 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.838 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.838 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.839 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 100, "bits" : 7 }
Fri Feb 22 11:29:18.844 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:18.845 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:18.845 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:18.846 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:18.846 [conn40] build index done. scanned 9 total records.
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.851 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.852 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.852 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.852 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.853 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.858 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.858 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.859 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.859 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.859 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.864 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.865 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.865 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.866 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.866 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.871 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.872 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.872 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.872 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.873 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.877 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.878 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.878 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.879 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.879 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.884 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.884 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.885 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.885 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.885 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.890 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.891 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.891 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.891 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.892 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.897 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.897 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.898 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.898 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.898 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.903 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.904 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.904 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.905 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.905 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.911 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.912 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.912 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.913 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.913 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.917 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.917 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.918 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.918 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.918 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.924 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.924 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.925 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.925 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.926 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.930 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.931 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.931 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.931 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.932 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.936 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.937 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.937 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.938 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.938 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.943 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.944 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.944 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.944 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.945 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.948 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.949 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.949 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.950 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.950 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.955 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.956 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.956 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.957 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.957 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.961 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.962 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.962 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.962 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.963 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:18.967 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.968 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.968 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.969 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.969 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:18.974 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.975 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.975 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.976 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.976 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:18.981 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.982 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.982 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.983 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.983 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:18.987 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.988 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.989 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.989 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.989 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:18.995 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:18.995 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:18.996 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:18.996 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:18.996 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.002 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.002 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.003 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.003 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.003 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.009 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.009 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.010 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.010 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.010 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.014 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.015 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.015 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.016 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.016 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.021 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.022 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.022 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.023 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.023 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.027 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.028 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.028 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.029 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.029 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.034 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.034 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.035 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.035 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.035 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.041 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.042 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.043 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.043 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.043 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.047 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.047 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.048 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.048 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.049 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.054 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.054 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.055 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.055 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.055 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.060 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.060 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.061 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.061 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.061 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.066 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.067 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.067 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.067 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.068 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.073 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.074 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.074 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.075 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.075 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.078 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.079 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.080 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.080 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.080 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.085 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.086 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.086 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.087 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.087 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.091 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.092 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.092 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.093 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.093 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 7 } Fri Feb 22 11:29:19.098 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.099 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.099 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.099 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.100 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.105 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.106 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.106 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.106 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.107 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.111 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.112 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.112 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.113 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.113 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.117 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.118 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.118 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.119 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.119 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.124 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.125 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.125 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.126 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.126 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.131 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.132 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.132 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.132 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.133 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.138 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.139 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.140 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.140 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.140 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.146 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.146 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.147 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.147 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.147 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.152 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.153 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.153 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.153 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.154 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.159 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.160 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.160 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.161 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.161 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.167 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.167 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.168 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.168 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.169 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.175 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.175 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.176 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.179 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.180 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.180 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.181 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.181 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.186 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.187 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.187 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.187 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.188 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.192 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.193 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.193 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.193 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.194 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.198 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.199 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.199 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.200 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.200 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.205 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.206 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.206 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.207 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.207 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.210 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.211 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.211 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.212 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.212 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.217 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.218 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.218 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.219 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.219 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.223 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.224 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.224 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.225 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.225 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 7 } Fri Feb 22 11:29:19.230 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.230 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.231 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.231 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.231 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.237 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.238 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.238 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.238 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.243 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.244 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.244 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.245 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.245 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.250 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.250 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.251 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.251 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.252 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.258 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.258 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.259 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.259 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.259 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.264 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.265 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.265 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.266 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.266 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.271 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.271 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.272 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.272 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.272 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.277 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.278 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.278 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.278 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.279 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.283 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.284 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.284 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.285 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.285 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.290 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.291 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.291 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.291 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.292 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.296 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.297 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.298 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.298 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.298 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.304 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.305 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.305 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.305 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.306 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.309 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.310 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.310 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.311 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.311 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.316 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.317 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.317 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.318 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.318 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.322 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.323 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.324 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.324 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.324 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.329 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.330 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.330 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.330 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.331 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.336 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.337 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.337 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.337 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.338 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.341 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.342 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.342 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.343 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.343 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.348 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.349 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.349 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.350 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.350 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.354 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.355 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.355 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.356 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.356 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 7 } Fri Feb 22 11:29:19.360 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.361 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.362 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.362 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.362 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.367 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.368 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.368 [conn40] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:19.369 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 6, 53 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.370 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.370 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.370 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.371 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 7, 54 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.372 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.372 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.372 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.373 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 8, 55 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.374 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.374 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.375 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.375 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 9, 56 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.376 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.376 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.377 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.377 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 50, 520 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.378 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.378 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.379 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.379 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.379 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.384 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.385 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.385 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.386 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.386 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.391 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.391 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.392 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.392 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.392 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.397 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.398 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.398 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.399 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.399 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.404 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.405 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.405 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.405 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.406 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.410 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.411 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.412 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.412 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.412 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.418 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.418 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.419 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.419 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.419 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.424 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.425 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.425 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.426 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.427 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.432 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.433 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.433 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.433 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.434 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.439 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.440 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.440 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.440 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.441 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.446 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.447 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.447 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.448 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.448 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.451 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.452 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.453 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.453 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.453 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.458 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.459 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.459 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.460 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.460 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.464 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.465 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.465 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.466 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.466 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 7 } Fri Feb 22 11:29:19.471 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.471 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.472 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.472 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.472 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.479 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.479 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.480 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.480 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.481 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.485 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.486 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.486 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.487 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.487 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.493 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.493 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.494 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.494 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.495 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.499 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.500 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.500 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.500 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.501 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.506 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.507 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.507 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.507 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.508 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.512 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.513 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.513 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.514 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.514 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.518 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.519 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.519 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.520 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.520 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.526 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.527 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.528 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.528 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.528 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.533 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.534 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.534 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.534 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.535 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.540 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.540 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.541 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.541 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.541 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.546 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.547 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.547 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.547 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.548 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.552 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.553 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.553 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.554 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.554 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.559 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.560 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.561 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.561 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.561 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.566 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.566 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.567 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.567 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.567 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.572 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.573 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.574 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.574 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.574 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.579 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.580 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.580 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.580 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.581 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.585 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.586 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.586 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.587 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.587 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.593 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.594 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.594 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.595 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.595 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.599 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.600 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.601 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.601 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.601 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 8 } Fri Feb 22 11:29:19.607 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.607 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.608 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.608 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.608 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 8 } Fri Feb 22 11:29:19.613 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:19.614 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:19.614 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:19.615 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:19.615 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.620 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.621 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.621 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.621 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.622 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.628 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.628 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.629 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.629 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.629 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.634 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.635 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.635 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.636 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.636 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.641 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.642 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.642 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.643 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.643 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.648 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.649 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.649 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.650 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.650 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.655 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.655 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.656 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.656 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.656 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.662 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.663 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.663 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.664 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.664 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.669 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.669 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.670 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.670 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.670 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.676 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.676 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.677 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.677 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.678 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.682 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.683 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.684 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.684 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.684 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.689 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.690 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.690 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.690 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.691 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.697 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.697 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.698 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.698 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.698 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.703 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.704 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.704 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.704 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.705 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.720 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.721 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.721 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.722 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.722 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.727 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.727 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.728 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.728 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.728 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.733 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.733 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.734 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.735 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.735 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.741 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.741 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.742 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.742 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.742 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.747 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.747 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.748 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.748 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.748 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:19.753 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.754 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.754 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.755 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.755 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.760 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.760 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.761 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.761 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.761 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.766 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.766 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.767 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.767 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.767 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.772 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.773 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.773 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.773 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.774 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.778 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.779 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.779 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.779 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.780 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.785 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.785 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.786 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.786 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.786 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.791 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.791 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.792 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.792 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.792 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.797 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.797 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.798 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.798 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.798 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.804 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.804 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.805 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.805 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.805 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.810 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.810 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.811 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.811 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.811 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.816 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.817 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.817 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.818 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.818 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.822 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.823 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.823 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.824 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.824 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.828 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.829 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.829 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.830 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.830 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.835 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.836 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.836 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.837 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.838 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.842 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.843 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.843 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.843 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.844 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.849 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.850 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.850 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.850 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.851 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.855 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.856 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.856 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.857 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.857 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.861 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.862 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.862 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.863 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.863 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.869 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.869 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.870 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.870 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.870 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.875 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.875 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.876 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.876 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.876 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:19.881 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.882 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.882 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.883 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.883 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.888 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.888 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.889 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.889 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.889 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.894 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.895 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.895 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.896 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.896 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.900 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.901 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.901 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.901 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.902 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.907 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.907 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.908 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.908 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.908 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.914 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.914 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.915 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.915 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.915 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.920 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.920 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.921 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.921 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.921 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.926 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.926 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.926 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.927 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.927 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.932 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.932 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.933 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.933 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.933 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.938 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.938 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.939 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.939 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.939 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.944 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.945 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.945 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.946 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.946 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.950 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.951 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.951 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.952 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.952 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.956 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.957 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.957 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.958 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.958 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.963 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.964 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.964 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.965 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.965 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.969 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.970 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.970 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.971 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.971 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.977 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.977 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.978 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.978 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.978 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.983 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.984 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.984 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.984 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.985 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.989 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.990 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.990 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.990 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.991 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:19.996 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:19.997 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:19.997 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:19.998 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:19.998 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:20.002 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.003 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.003 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.003 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.004 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 8 }
Fri Feb 22 11:29:20.009 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.009 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.010 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.010 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.010 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.015 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.015 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.016 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.016 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.016 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.021 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.021 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.022 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.022 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.022 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.028 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.028 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.029 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.029 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.029 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.034 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.034 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.035 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.035 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.035 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.040 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.041 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.041 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.042 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.042 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.046 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.047 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.047 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.048 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.048 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.052 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.053 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.053 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.053 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.054 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.059 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.060 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.060 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.061 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.061 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.065 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.066 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.066 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.066 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.067 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.072 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.072 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.073 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.073 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.073 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.078 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.078 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.079 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.079 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.079 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.084 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.084 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.085 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.085 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.085 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.091 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.092 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.092 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.092 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.093 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.097 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.097 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.098 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.098 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.098 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.103 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.104 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.104 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.105 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.105 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.110 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.111 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.111 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.111 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.112 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.116 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.117 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.117 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.117 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.118 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.123 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.124 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.124 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.125 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.125 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.129 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.130 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.130 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.131 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.131 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 8 }
Fri Feb 22 11:29:20.136 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.137 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.137 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.137 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.138 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:20.142 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.143 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.143 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.144 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.144 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:20.148 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.149 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.149 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.150 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.150 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:20.155 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.156 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.156 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.156 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.156 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:20.161 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.162 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.162 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.163 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.163 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 8 }
Fri Feb 22 11:29:20.168 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:20.169 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:20.169 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:20.169 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:20.170 [conn40] build index done. scanned 9 total records.
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.175 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.176 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.176 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.181 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.181 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.182 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.182 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.182 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.189 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.190 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.190 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.191 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.191 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.196 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.197 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.197 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.197 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.198 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.204 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.204 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.205 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.205 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.205 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.210 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.211 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.211 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.211 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.212 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.216 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.217 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.217 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.217 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.218 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.223 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.224 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.224 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.225 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.225 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.229 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.230 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.230 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.231 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.231 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.236 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.237 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.237 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.238 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.242 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.243 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.243 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.243 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.244 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.248 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.249 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.249 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.250 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.250 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.255 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.256 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.256 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.257 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.257 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.261 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.262 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.262 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.263 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.263 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 8 } Fri Feb 22 11:29:20.268 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.268 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.269 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.269 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.269 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.274 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.274 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.275 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.275 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.275 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.280 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.281 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.281 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.282 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.282 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.285 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.286 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.286 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.287 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.287 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.292 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.292 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.293 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.293 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.293 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.298 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.299 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.299 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.300 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.300 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.304 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.305 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.305 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.306 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.306 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.310 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.311 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.311 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.311 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.312 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.316 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.317 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.317 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.318 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.318 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.322 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.323 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.323 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.323 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.324 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.329 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.329 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.330 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.330 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.330 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.335 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.335 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.336 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.336 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.336 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.340 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.341 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.341 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.342 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.342 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.347 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.348 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.348 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.349 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.349 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.353 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.354 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.354 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.355 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.355 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.360 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.361 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.361 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.361 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.362 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.366 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.367 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.367 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.367 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.368 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.372 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.373 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.373 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.373 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.374 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.379 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.380 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.380 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.380 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.381 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.385 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.385 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.386 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.386 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.386 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.391 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.392 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.392 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.393 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.393 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.397 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.398 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.398 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.399 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.399 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.404 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.405 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.405 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.406 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.406 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.410 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.410 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.411 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.411 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.411 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.416 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.416 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.416 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.417 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.417 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.421 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.422 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.422 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.422 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.422 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.426 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.427 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.427 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.427 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.428 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.432 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.433 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.433 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.434 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.434 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.438 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.438 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.439 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.439 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.439 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.444 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.445 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.445 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.445 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.446 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.451 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.451 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.451 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.452 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.452 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.457 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.457 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.457 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.458 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.458 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.462 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.463 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.463 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.464 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.464 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.469 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.470 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.470 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.471 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.471 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.475 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.476 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.476 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.476 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.477 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.482 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.482 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.483 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.483 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.483 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.488 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.488 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.489 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.489 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.489 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.494 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.494 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.494 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.495 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.495 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.501 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.501 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.501 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.502 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.502 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.507 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.507 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.507 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.508 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.508 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.513 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.514 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.514 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.515 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.515 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.519 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.520 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.520 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.521 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.521 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.526 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.527 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.527 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.527 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.528 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.531 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.532 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.532 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.533 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.533 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.538 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.538 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.539 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.539 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.539 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.544 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.545 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.545 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.546 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.546 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.550 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.551 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.551 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.552 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.552 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.556 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.557 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.557 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.558 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.558 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.563 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.563 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.563 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.564 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.564 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.568 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.569 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.569 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.570 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.570 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.575 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.576 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.576 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.576 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.576 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.581 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.582 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.582 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.582 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.582 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.587 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.587 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.588 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.588 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.588 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.597 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.598 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.598 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.599 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.599 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.605 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.607 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.607 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.607 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.608 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.616 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.617 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.617 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.618 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.618 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.625 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.638 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.639 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.639 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.640 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.646 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.647 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.648 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.648 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.649 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.657 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.658 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.659 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.659 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.660 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.667 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.668 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.668 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.669 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.669 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 8 } Fri Feb 22 11:29:20.677 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.678 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.678 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.679 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.679 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.686 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.687 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.687 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.688 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.688 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.695 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.696 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.697 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.697 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.698 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.703 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.704 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.705 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.705 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.706 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.712 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.714 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.714 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.715 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.715 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.721 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.722 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.722 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.723 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.723 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.729 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.730 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.731 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.731 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.732 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.740 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.741 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.741 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.742 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.742 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.748 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.749 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.750 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.750 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.751 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.759 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.760 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.760 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.761 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.761 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.769 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.770 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.771 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.771 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.772 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.779 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.780 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.781 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.781 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.782 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.788 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.790 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.790 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.791 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.791 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.800 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.801 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.801 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.802 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.803 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.809 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.811 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.811 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.812 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.812 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.820 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.821 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.822 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.822 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.823 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.830 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.831 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.831 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.832 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.832 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.839 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.840 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.841 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.841 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.842 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.851 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.852 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.852 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.853 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.853 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.860 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.861 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.862 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.862 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.863 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 8 } Fri Feb 22 11:29:20.872 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.874 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.875 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.876 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.876 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.883 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.884 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.885 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.885 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.886 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.892 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.893 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.894 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.894 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.895 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.901 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.902 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.903 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.903 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.904 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.909 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.910 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.911 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.911 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.912 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.918 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.919 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.919 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.920 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.920 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.928 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.929 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.929 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.930 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.930 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.937 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.938 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.939 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.939 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.940 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.945 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.946 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.946 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.947 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.948 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.954 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.955 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.955 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.956 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.956 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.962 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.963 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.964 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.964 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.965 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.970 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.971 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.971 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.972 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.973 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.979 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.980 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.981 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.981 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.982 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.989 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.990 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.990 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:20.991 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:20.991 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:20.998 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:20.999 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:20.999 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.000 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.000 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:21.008 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.009 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.009 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.010 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.010 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:21.019 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.020 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.020 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.021 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.021 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:21.028 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.029 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.030 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.030 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.031 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:21.039 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.040 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.041 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.041 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.042 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:21.049 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.050 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.050 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.051 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.052 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 8 } Fri Feb 22 11:29:21.059 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.060 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.061 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.062 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.062 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 8 } Fri Feb 22 11:29:21.069 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.070 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.070 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.071 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.071 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 8 } Fri Feb 22 11:29:21.078 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.079 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.080 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.080 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.081 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 8 } Fri Feb 22 11:29:21.088 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.089 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.089 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.090 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.090 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 8 } Fri Feb 22 11:29:21.098 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.099 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.099 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.100 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.100 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 8 } Fri Feb 22 11:29:21.107 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.108 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.108 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.109 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.109 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.117 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.118 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.118 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.119 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.119 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.125 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.126 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.127 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.127 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.128 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.134 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.135 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.136 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.136 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.137 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.143 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.144 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.144 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.145 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.145 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.151 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.152 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.152 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.153 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.153 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.161 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.162 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.162 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.163 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.163 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.171 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.172 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.172 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.173 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.173 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.179 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.180 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.180 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.181 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.181 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.188 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.189 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.190 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.190 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.191 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.198 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.199 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.200 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.200 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.201 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.208 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.209 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.209 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.210 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.210 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.216 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.218 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.218 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.219 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.219 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.226 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.227 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.227 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.228 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.228 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.235 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.236 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.236 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.237 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.237 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.245 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.246 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.247 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.247 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.248 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.255 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.256 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.256 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.257 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.257 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.264 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.265 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.266 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.266 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.267 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.272 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.274 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.274 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.275 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.275 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.282 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.283 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.283 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.284 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.284 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.290 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.291 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.292 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.292 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.293 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.298 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.299 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.300 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.300 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.301 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.308 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.310 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.310 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.311 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.311 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.317 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.318 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.318 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.319 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.319 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.328 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.329 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.330 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.331 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.331 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.339 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.340 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.340 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.341 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.341 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.348 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.349 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.350 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.350 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.351 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.357 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.358 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.359 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.359 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.360 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.368 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.369 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.370 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.371 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.371 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.378 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.379 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.379 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.380 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.380 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.388 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.389 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.390 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.390 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.391 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.398 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.399 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.399 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.400 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.400 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.407 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.408 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.408 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.409 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.409 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.418 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.419 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.419 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.420 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.420 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.427 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.428 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.429 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.429 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.430 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 8 }
Fri Feb 22 11:29:21.437 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.438 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.439 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.440 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.440 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.447 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.448 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.448 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.449 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.449 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.456 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.457 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.458 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.458 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.459 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.465 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.466 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.467 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.467 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.468 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.474 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.475 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.475 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.476 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.476 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.482 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.483 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.484 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.484 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.485 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.492 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.493 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.494 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.495 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.495 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.502 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.503 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.504 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.504 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.505 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.510 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.511 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.512 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.513 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.513 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.520 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.521 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.521 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.522 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.522 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.528 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.529 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.530 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.531 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.531 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.537 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.538 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.538 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.539 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.539 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.546 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.547 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.547 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.548 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.549 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.556 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.557 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.557 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.558 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.558 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.565 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.566 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.566 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.567 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.568 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.575 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.576 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.577 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.577 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.578 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.585 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.586 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.586 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.587 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.587 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.594 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.595 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.596 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.596 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.597 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.605 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.606 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.607 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.607 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.608 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.614 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.616 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.616 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.617 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.617 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 8 }
Fri Feb 22 11:29:21.625 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.626 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.626 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.627 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.627 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.634 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.635 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.636 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.636 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.637 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6, 53 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.644 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.645 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.646 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.646 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.647 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7, 54 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.654 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.655 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.655 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.656 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.656 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8, 55 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.664 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.665 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.665 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.666 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.666 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9, 56 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.674 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.675 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.675 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.676 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.676 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50, 520 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.683 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.684 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.685 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.686 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.686 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.693 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.694 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.694 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.695 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.695 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.702 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.703 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.703 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.704 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.704 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.710 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.711 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.712 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.712 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.713 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.719 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.720 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.721 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.721 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.722 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.729 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.730 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.731 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.731 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.732 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.740 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.741 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.741 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.742 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.742 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.748 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.749 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.749 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.750 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.750 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.758 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.759 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.759 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.760 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.760 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.768 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.769 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.770 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.770 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.771 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.778 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.779 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.779 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.780 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.780 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.787 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.788 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.789 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.789 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.790 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.797 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.798 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.798 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.799 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.799 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.806 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.807 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.808 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.808 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.809 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 8 }
Fri Feb 22 11:29:21.817 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.818 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.818 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.819 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.819 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 5, 52 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.826 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.827 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.828 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.828 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 6, 53 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.830 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.831 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.831 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.832 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 7, 54 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.833 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.834 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.834 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.835 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 8, 55 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.836 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.837 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.837 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.838 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 9, 56 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.839 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.840 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.840 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.841 [conn40] build index test.axisaligned { loc: "2d" }
{ "center" : [ 50, 520 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.842 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.843 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.843 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.844 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.844 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 60, 530 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.851 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.852 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.853 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.854 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.854 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 70, 540 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.863 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.865 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.865 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.866 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.866 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 80, 550 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.875 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.876 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.876 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.877 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.877 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY
DOING NEAR QUERY
DOING DIST QUERY
{ "center" : [ 90, 560 ], "radius" : 1000, "bits" : 8 }
Fri Feb 22 11:29:21.890 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:21.891 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:21.891 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:21.892 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:21.893 [conn40] build index done. scanned 9 total records.
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.900 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.901 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.902 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.903 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.903 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.911 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.912 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.913 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.914 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.914 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.922 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.923 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.923 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.924 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.925 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.933 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.934 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.934 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.935 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.935 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.941 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.942 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.943 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.944 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.944 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.950 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.951 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.952 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.953 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.953 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.962 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.963 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.963 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.964 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.964 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.970 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.971 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.972 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.973 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.973 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.982 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.983 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.983 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.984 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.984 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 8 } Fri Feb 22 11:29:21.993 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:21.994 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:21.995 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:21.995 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:21.996 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY [ 0.0001, 0.001, 0.01, 0.1, 0.001, 0.01, 0.1, 1, 0.1, 1, 10, 100, 1, 10, 100, 1000 ] [ [ 5, 52 ], [ 6, 53 ], [ 7, 54 ], [ 8, 55 ], [ 9, 56 ], [ 50, 520 ], [ 60, 530 ], [ 70, 540 ], [ 80, 550 ], [ 90, 560 ], [ 5000, 52000 ], [ 6000, 53000 ], [ 7000, 54000 ], [ 8000, 55000 ], [ 9000, 56000 ], [ 50000, 520000 ], [ 60000, 530000 ], [ 70000, 540000 ], [ 80000, 550000 ], [ 90000, 560000 ] ] { "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.016 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.018 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.018 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.019 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.019 [conn40] build index done. scanned 9 total records. 
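The arrays printed above are the radius scales and center points the test combines into circle-query descriptors like `{ "center" : [ 5, 52 ], "radius" : 0.0001, "bits" : 9 }`. As an illustration only (this is a hypothetical Python sketch, not the test's actual jstests code, and it uses plain Euclidean distance rather than mongod's 2d index), the WITHIN-style membership each "DOING WITHIN QUERY" step verifies can be brute-forced like this:

```python
import math

def within(points, center, radius):
    """Brute-force analogue of a $within $center query: keep points
    whose Euclidean distance from `center` is at most `radius`."""
    cx, cy = center
    return [p for p in points
            if math.hypot(p[0] - cx, p[1] - cy) <= radius]

# A 3x3 grid around one center, mirroring the 9 documents the log
# shows being scanned for each { loc: "2d" } index build.
center = [5, 52]
pts = [[center[0] + dx, center[1] + dy]
       for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

# Radius 1 keeps the center plus its 4 axis-aligned neighbours;
# the diagonals sit at distance sqrt(2) and fall outside.
print(len(within(pts, center, 1)))  # → 5
```

The test presumably cross-checks results like these against the indexed `$within`, `$near`, and `geoNear` ("DIST") paths for every (center, radius) combination, which is why the drop/build/query cycle repeats for each descriptor.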
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.027 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.029 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.029 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.030 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.030 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.037 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.038 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.038 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.039 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.039 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.046 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.047 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.048 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.049 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.049 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.056 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.057 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.058 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.059 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.059 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.067 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.068 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.069 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.069 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.070 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.077 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.078 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.078 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.079 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.079 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.086 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.087 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.087 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.088 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.088 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.095 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.096 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.097 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.097 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.098 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.105 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.106 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.107 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.108 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.108 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.116 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.117 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.118 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.118 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.119 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.126 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.127 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.127 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.128 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.128 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.135 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.136 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.136 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.137 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.137 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.144 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.145 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.146 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.146 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.147 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.154 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.155 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.156 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.156 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.157 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.165 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.166 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.166 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.167 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.167 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.174 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.175 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.176 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.176 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.177 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.183 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.184 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.184 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.185 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.185 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.192 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.193 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.194 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.194 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.195 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.0001, "bits" : 9 } Fri Feb 22 11:29:22.202 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.203 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.204 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.204 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.205 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.213 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.214 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.214 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.215 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.215 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.222 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.223 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.224 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.224 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.225 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.231 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.232 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.233 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.233 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.234 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.241 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.242 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.242 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.243 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.243 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.251 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.252 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.252 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.253 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.253 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.262 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.263 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.264 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.264 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.265 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.272 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.273 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.273 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.274 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.275 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.281 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.282 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.283 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.283 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.284 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.291 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.292 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.293 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.293 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.294 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.301 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.302 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.303 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.303 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.304 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.312 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.313 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.314 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.314 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.315 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.322 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.323 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.324 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.324 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.325 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.332 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.333 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.333 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.334 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.334 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.342 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.343 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.343 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.344 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.344 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.352 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.353 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.353 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.354 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.354 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.363 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.364 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.364 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.365 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.365 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.372 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.373 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.374 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.374 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.375 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.381 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.382 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.383 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.383 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.384 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.391 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.392 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.392 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.393 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.393 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 9 } Fri Feb 22 11:29:22.401 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.403 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.403 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.404 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.404 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 9 } Fri Feb 22 11:29:22.412 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:22.413 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:22.414 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:22.415 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:22.415 [conn40] build index done. scanned 9 total records. 
0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.422 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.423 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.424 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.424 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.425 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.431 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.432 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.432 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.433 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.433 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.440 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.442 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.442 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.443 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.443 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.450 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.451 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.452 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.452 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.453 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.461 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.462 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.463 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.463 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.464 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.473 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.474 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.475 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.475 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.476 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.482 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.483 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.484 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.484 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.485 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.492 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.494 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.494 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.495 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.495 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.503 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.504 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.504 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.505 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.506 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.514 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.515 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.515 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.516 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.516 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.524 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.525 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.526 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.526 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.527 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.533 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.534 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.535 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.536 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.536 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.543 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.544 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.545 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.545 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.546 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.553 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.554 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.555 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.555 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.556 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.564 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.565 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.566 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.566 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.567 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.574 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.575 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.575 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.576 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.577 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.585 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.586 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.587 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.587 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.588 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.593 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.594 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.594 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.595 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.595 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.600 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.600 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.601 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.601 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.601 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.607 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.608 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.608 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.608 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.609 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.615 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.616 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.616 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.617 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.618 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.624 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.625 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.625 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.625 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.626 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.631 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.632 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.633 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.633 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.633 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.638 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.639 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.639 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.640 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.640 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.645 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.645 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.646 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.646 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.646 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.651 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.652 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.652 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.653 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.653 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.657 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.658 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.658 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.659 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.659 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.664 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.664 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.665 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.665 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.665 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.670 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.671 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.671 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.672 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.672 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.677 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.678 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.678 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.679 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.679 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.684 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.684 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.685 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.685 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.685 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.691 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.692 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.692 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.692 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.693 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.697 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.698 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.698 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.699 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.699 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.704 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.704 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.705 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.705 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.705 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.711 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.712 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.712 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.712 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.713 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.717 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.718 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.718 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.719 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.719 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.724 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.724 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.725 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.725 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.725 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.730 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.731 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.731 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.732 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.732 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:22.737 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.737 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.738 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.738 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.738 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.744 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.744 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.745 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.745 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.746 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.750 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.751 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.751 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.752 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.752 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.756 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.757 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.757 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.758 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.758 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.763 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.763 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.764 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.764 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.764 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.769 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.770 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.770 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.770 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.771 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.776 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.777 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.777 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.777 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.778 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.782 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.783 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.783 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.784 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.784 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.788 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.789 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.789 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.790 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.790 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.795 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.796 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.796 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.796 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.797 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.801 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.802 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.803 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.803 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.803 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.809 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.809 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.810 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.810 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.811 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.815 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.816 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.816 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.817 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.817 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.821 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.822 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.822 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.823 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.824 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.828 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.829 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.829 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.830 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.830 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.835 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.835 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.836 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.836 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.836 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.842 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.842 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.843 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.843 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.843 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.848 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.849 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.849 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.849 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.850 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.854 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.855 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.855 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.855 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.856 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.860 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.861 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.861 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.862 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.862 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.001, "bits" : 9 }
Fri Feb 22 11:29:22.867 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.867 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.868 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.868 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.868 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.874 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.874 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.875 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.875 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.875 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.880 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.881 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.881 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.882 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.882 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.886 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.887 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.887 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.888 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.888 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.892 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.893 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.894 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.894 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.894 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.899 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.900 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.900 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.900 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.901 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.906 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.907 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.907 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.908 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.908 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.912 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.913 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.914 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.915 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.915 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.919 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.920 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.920 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.920 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.921 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.927 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.929 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.929 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.930 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.930 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.935 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.936 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.936 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.936 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.937 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.942 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.943 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.943 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.943 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.944 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.948 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.949 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.949 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.950 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.950 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.954 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.955 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.955 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.956 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.956 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.960 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.961 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.961 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.962 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.962 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.967 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.968 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.968 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.968 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.969 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.974 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.975 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.975 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.975 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.976 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.980 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.981 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.981 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.982 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.982 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.986 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.987 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.987 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.987 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.988 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.992 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.993 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:22.993 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:22.994 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:22.994 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.01, "bits" : 9 }
Fri Feb 22 11:29:22.999 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:22.999 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:23.000 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:23.000 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:23.000 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:23.006 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:23.006 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:23.007 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:23.007 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:23.007 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:23.013 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:23.014 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:23.014 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:23.014 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:23.015 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:23.020 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:23.021 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:23.021 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:23.022 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:23.022 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:23.028 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:23.028 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:23.029 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:23.030 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:23.030 [conn40] build index done. scanned 9 total records. 0 secs
DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 9 }
Fri Feb 22 11:29:23.035 [conn40] CMD: drop test.axisaligned
Fri Feb 22 11:29:23.036 [conn40] build index test.axisaligned { _id: 1 }
Fri Feb 22 11:29:23.036 [conn40] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:29:23.037 [conn40] build index test.axisaligned { loc: "2d" }
Fri Feb 22 11:29:23.037 [conn40] build index done. scanned 9 total records.
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.042 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.042 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.043 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.043 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.044 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.048 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.049 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.049 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.050 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.050 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.055 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.055 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.056 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.056 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.057 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.061 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.062 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.063 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.063 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.063 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.068 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.069 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.069 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.070 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.070 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.076 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.077 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.077 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.077 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.078 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.083 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.083 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.084 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.084 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.084 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.089 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.090 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.090 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.091 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.091 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.096 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.097 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.097 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.097 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.098 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.103 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.103 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.104 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.104 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.105 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.110 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.111 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.111 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.112 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.112 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.117 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.118 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.118 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.118 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.119 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.123 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.124 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.124 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.125 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.125 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.130 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.131 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.131 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.132 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.132 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.137 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.138 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.138 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.138 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.139 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.144 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.145 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.146 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.156 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.156 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.161 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.162 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.163 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.163 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.163 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.169 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.170 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.170 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.171 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.171 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.175 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.176 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.176 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.177 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.177 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.182 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.183 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.183 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.184 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.184 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.189 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.189 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.190 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.190 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.191 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.196 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.197 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.197 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.198 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.198 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.204 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.204 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.205 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.205 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.205 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.211 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.212 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.212 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.212 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.213 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.217 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.218 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.218 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.219 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.219 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.223 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.224 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.225 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.225 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.225 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.230 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.230 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.231 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.231 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.232 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.236 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.237 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.237 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.237 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.238 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.242 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.243 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.243 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.244 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.244 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.249 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.249 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.250 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.250 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.250 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.255 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.256 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.256 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.257 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.257 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.262 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.262 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.263 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.263 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.263 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.267 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.268 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.268 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.269 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.269 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.274 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.274 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.275 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.275 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.275 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.280 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.281 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.281 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.281 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.282 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.287 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.287 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.288 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.288 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.288 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.294 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.294 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.295 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.295 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.295 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.301 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.302 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.302 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.303 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.303 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.308 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.309 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.310 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.310 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.310 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.315 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.316 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.316 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.316 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.317 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.321 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.322 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.322 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.323 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.323 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.327 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.328 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.329 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.329 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.329 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.333 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.334 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.334 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.335 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.335 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.340 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.340 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.341 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.341 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.342 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.346 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.347 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.347 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.348 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.348 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.353 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.354 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.354 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.355 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.355 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.359 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.360 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.361 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.361 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.361 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.365 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.366 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.366 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.367 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.367 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.372 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.372 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.373 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.373 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.373 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.378 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.379 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.379 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.379 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.380 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.385 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.386 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.386 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.386 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.387 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.391 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.392 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.392 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.393 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.393 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.397 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.398 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.398 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.398 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.399 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.403 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.404 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.404 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.405 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.405 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 0.1, "bits" : 9 } Fri Feb 22 11:29:23.409 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.410 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.410 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.411 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.411 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.417 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.418 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.418 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.419 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.419 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.423 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.424 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.424 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.425 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.425 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.431 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.432 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.432 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.432 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.433 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.437 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.438 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.438 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.438 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.439 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.444 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.444 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.445 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.445 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.445 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.450 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.451 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.451 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.452 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.452 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.458 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.458 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.459 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.459 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.459 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.465 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.466 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.466 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.467 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.467 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.473 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.474 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.474 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.474 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.475 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.479 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.480 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.480 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.481 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.481 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.486 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.486 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.487 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.487 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.487 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.492 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.493 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.493 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.493 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.494 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.498 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.499 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.499 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.499 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.500 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.504 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.505 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.505 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.506 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.506 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.511 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.511 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.512 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.512 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.512 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.518 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.518 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.519 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.519 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.519 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.524 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.525 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.525 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.525 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.526 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.530 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.531 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.531 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.531 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.531 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.536 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.537 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.537 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.538 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.538 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.542 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.543 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.544 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.544 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.544 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.550 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.550 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.551 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.551 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.551 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.557 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.557 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.558 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.558 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.558 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.565 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.565 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.566 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.566 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.566 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.573 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.573 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.574 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.574 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.574 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.580 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.581 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.581 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.582 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.582 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.586 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.587 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.587 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.587 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.588 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.592 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.593 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.593 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.594 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.594 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.600 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.600 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.601 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.601 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.601 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.606 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.607 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.607 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.608 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.608 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.613 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.614 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.614 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.615 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.615 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.620 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.620 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.621 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.621 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.621 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.626 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.627 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.627 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.627 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.628 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.632 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.633 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.633 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.633 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.634 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.639 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.640 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.640 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.641 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.641 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.646 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.647 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.647 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.647 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.648 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.653 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.654 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.654 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.654 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.655 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.659 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.660 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.660 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.661 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.661 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.665 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.666 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.666 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.667 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.667 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.671 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.672 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.672 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.673 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.673 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.678 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.678 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.679 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.679 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.679 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.685 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.685 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.686 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.686 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.686 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.691 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.692 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.692 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.693 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.693 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.698 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.698 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.699 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.699 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.699 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.704 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.705 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.705 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.706 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.706 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.711 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.711 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.712 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.712 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.712 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.717 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.717 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.718 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.718 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.718 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.724 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.724 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.725 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.725 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.725 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.732 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.733 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.733 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.733 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.734 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.740 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.741 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.741 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.742 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.742 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.748 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.749 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.749 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.749 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.750 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.754 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.755 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.755 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.755 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.756 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.761 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.762 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.762 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.762 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.763 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.768 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.769 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.770 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.770 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.770 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.776 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.776 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.777 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.777 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.777 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.782 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.783 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.783 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.784 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.784 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.788 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.789 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.789 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.790 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.790 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.795 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.795 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.796 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.796 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.796 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.800 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.801 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.801 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.802 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.802 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.807 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.807 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.808 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.808 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.808 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:23.813 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.814 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.814 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.815 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.815 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.820 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.821 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.821 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.822 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.822 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.826 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.827 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.827 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.828 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.828 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.834 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.835 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.835 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.835 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.836 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.840 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.841 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.841 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.842 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.842 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.847 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.848 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.848 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.849 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.849 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.854 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.854 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.855 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.855 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.855 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.861 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.862 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.862 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.862 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.863 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.869 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.869 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.870 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.870 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.870 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.877 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.878 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.878 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.879 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.879 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.884 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.884 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.885 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.885 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.885 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.890 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.891 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.891 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.891 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.892 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.896 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.897 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.897 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.898 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.898 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.902 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.903 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.903 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.904 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.904 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.909 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.909 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.910 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.910 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.910 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.915 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.916 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.916 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.916 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.917 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.922 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.923 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.923 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.923 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.924 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.928 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.929 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.929 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.930 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.930 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.934 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.935 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.935 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.935 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.936 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.940 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.941 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.941 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.942 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.942 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1, "bits" : 9 } Fri Feb 22 11:29:23.947 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.947 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.948 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.948 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.948 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.954 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.955 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.955 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.956 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.956 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.961 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.962 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.962 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.963 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.963 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.970 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.971 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.971 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.971 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.972 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.978 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.979 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.980 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.980 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.980 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.986 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.987 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.987 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.988 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.988 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.993 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:23.994 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:23.994 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:23.995 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:23.995 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:23.999 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.000 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.000 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.001 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.001 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.007 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.008 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.008 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.008 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.009 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.013 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.014 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.014 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.015 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.015 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.021 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.022 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.022 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.022 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.023 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.028 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.028 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.029 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.029 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.029 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.035 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.035 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.036 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.037 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.037 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.041 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.042 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.042 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.043 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.043 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.048 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.048 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.049 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.049 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.049 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.054 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.055 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.055 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.055 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.056 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.061 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.062 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.062 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.063 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.063 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.067 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.068 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.068 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.069 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.069 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.073 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.074 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.074 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.075 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.075 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.080 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.081 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.081 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.081 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.082 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 10, "bits" : 9 } Fri Feb 22 11:29:24.086 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.087 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.087 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.088 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.088 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.094 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.094 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.095 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.095 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.095 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6, 53 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.100 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.101 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.102 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.102 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.102 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7, 54 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.118 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.119 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.120 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.120 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.120 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8, 55 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.126 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.127 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.127 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.128 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.128 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9, 56 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.133 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.134 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.134 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.135 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.135 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50, 520 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.140 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.140 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.141 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.141 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.141 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.148 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.148 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.149 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.149 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.149 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.157 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.158 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.158 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.159 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.159 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.166 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.167 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.168 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.168 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.168 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.176 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.177 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.177 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.177 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.178 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.182 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.183 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.184 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.184 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.185 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.191 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.192 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.192 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.192 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.193 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.200 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.200 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.201 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.201 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.202 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.208 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.209 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.209 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.210 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.210 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.216 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.217 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.217 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.218 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.218 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.224 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.225 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.225 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.226 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.226 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.232 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.233 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.233 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.233 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.234 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.239 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.240 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.240 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.241 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.241 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.246 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.247 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.248 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.248 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.249 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 100, "bits" : 9 } Fri Feb 22 11:29:24.254 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.255 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.256 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.256 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.256 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5, 52 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.263 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.264 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.264 [conn40] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:29:24.265 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 6, 53 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.266 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.266 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.267 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.267 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 7, 54 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.268 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.269 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.269 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.270 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 8, 55 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.271 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.271 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.271 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.272 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 9, 56 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.273 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.274 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.274 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.274 [conn40] build index test.axisaligned { loc: "2d" } { "center" : [ 50, 520 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.275 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.276 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.276 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.277 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.277 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60, 530 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.283 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.284 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.284 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.285 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.285 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70, 540 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.291 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.292 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.292 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.293 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.293 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80, 550 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.299 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.299 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.300 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.300 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.301 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90, 560 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.306 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.307 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.307 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.307 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.308 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 5000, 52000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.313 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.313 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.314 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.314 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.314 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 6000, 53000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.320 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.320 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.321 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.321 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.321 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 7000, 54000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.328 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.329 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.329 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.330 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.330 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 8000, 55000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.335 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.336 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.336 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.337 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.337 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 9000, 56000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.343 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.344 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.344 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.345 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.345 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 50000, 520000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.350 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.351 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.351 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.352 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.352 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 60000, 530000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.359 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.359 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.360 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.360 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.360 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 70000, 540000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.367 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.368 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.368 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.369 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.369 [conn40] build index done. scanned 9 total records. 
0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 80000, 550000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.375 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.376 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.376 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.377 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.377 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY { "center" : [ 90000, 560000 ], "radius" : 1000, "bits" : 9 } Fri Feb 22 11:29:24.382 [conn40] CMD: drop test.axisaligned Fri Feb 22 11:29:24.383 [conn40] build index test.axisaligned { _id: 1 } Fri Feb 22 11:29:24.383 [conn40] build index done. scanned 0 total records. 0 secs Fri Feb 22 11:29:24.384 [conn40] build index test.axisaligned { loc: "2d" } Fri Feb 22 11:29:24.384 [conn40] build index done. scanned 9 total records. 0 secs DOING WITHIN QUERY DOING NEAR QUERY DOING DIST QUERY Fri Feb 22 11:29:24.403 [conn40] end connection 127.0.0.1:53799 (0 connections now open) 19.4993 seconds Fri Feb 22 11:29:24.424 [initandlisten] connection accepted from 127.0.0.1:52178 #41 (1 connection now open) Fri Feb 22 11:29:24.424 [conn41] end connection 127.0.0.1:52178 (0 connections now open) ******************************************* Test : geo_mnypts.js ... 
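The axisaligned test that finishes above repeatedly indexes nine documents and, for each { center, radius, bits } combination, runs the three query shapes logged as DOING WITHIN QUERY / DOING NEAR QUERY / DOING DIST QUERY. As a rough, hypothetical illustration of the flat-geometry semantics being exercised (the 3x3 grid layout and the helper names `within_circle`, `near_sorted`, `grid3x3` are assumptions for illustration, not the actual jstests code or MongoDB internals):

```python
import math

def within_circle(points, center, radius):
    """Sketch of $within/$center on a flat 2d plane: keep points whose
    Euclidean distance from center is <= radius (assumed semantics)."""
    cx, cy = center
    return [p for p in points if math.hypot(p[0] - cx, p[1] - cy) <= radius]

def near_sorted(points, center):
    """Sketch of $near: all points ordered by increasing distance."""
    cx, cy = center
    return sorted(points, key=lambda p: math.hypot(p[0] - cx, p[1] - cy))

def grid3x3(center, scale=1):
    """Assumed stand-in for the 9 documents the test indexes each
    iteration: an axis-aligned 3x3 grid around the query center."""
    cx, cy = center
    return [(cx + dx * scale, cy + dy * scale)
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)]

pts = grid3x3((50, 520))
print(len(within_circle(pts, (50, 520), 1)))   # centre + 4 axis neighbours = 5
print(near_sorted(pts, (50, 520))[0])          # nearest point is the centre
```

With radius 1 only the center and its four axis-aligned neighbours qualify (the diagonals sit at distance sqrt(2)); with radius 10 or more, all nine points do, matching the "scanned 9 total records" index builds in the log.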
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_mnypts.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_mnypts.js";TestData.testFile = "geo_mnypts.js";TestData.testName = "geo_mnypts";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:29:24 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:29:24.580 [initandlisten] connection accepted from 127.0.0.1:42999 #42 (1 connection now open)
null
Fri Feb 22 11:29:24.589 [conn42] CMD: drop test.testMnyPts
Fri Feb 22 11:29:24.590 [conn42] build index test.testMnyPts { _id: 1 }
Fri Feb 22 11:29:24.593 [conn42] build index done. scanned 0 total records. 0.003 secs
Fri Feb 22 11:29:50.513 [conn42] build index test.testMnyPts { loc: "2d" }
Fri Feb 22 11:29:53.505 [conn42] build index done. scanned 500000 total records.
2.992 secs Fri Feb 22 11:29:53.505 [conn42] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2992442 2992ms Fri Feb 22 11:29:54.758 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 49.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1251512 reslen:48 1251ms Fri Feb 22 11:29:55.600 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 49.0 ] ] } } } cursorid:3846796551665441 ntoreturn:0 keyUpdates:0 locks(micros) r:821136 nreturned:77672 reslen:4194308 821ms Fri Feb 22 11:29:57.176 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 49.0 ] ] } } } cursorid:3846796551665441 ntoreturn:0 keyUpdates:0 locks(micros) r:1019185 nreturned:47227 reslen:2550278 1019ms Fri Feb 22 11:29:59.094 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 49.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1403150 reslen:48 1403ms Fri Feb 22 11:29:59.906 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 49.0 ] ] } } } cursorid:3865307598457810 ntoreturn:0 keyUpdates:0 locks(micros) r:809787 nreturned:77672 reslen:4194308 809ms Fri Feb 22 11:30:01.409 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 49.0 ] ] } } } cursorid:3865307598457810 ntoreturn:0 keyUpdates:0 locks(micros) r:760854 nreturned:47227 reslen:2550278 760ms Fri Feb 22 11:30:03.772 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $box: [ [ 0.0, 50.0 ], [ 49.0, 99.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1820258 reslen:48 1820ms Fri Feb 22 11:30:04.620 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 50.0 ], [ 49.0, 99.0 ] ] } } } cursorid:3885365526749592 ntoreturn:0 keyUpdates:0 locks(micros) 
r:846092 nreturned:77672 reslen:4194308 846ms Fri Feb 22 11:30:06.249 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 50.0 ], [ 49.0, 99.0 ] ] } } } cursorid:3885365526749592 ntoreturn:0 keyUpdates:0 locks(micros) r:1108953 nreturned:47227 reslen:2550278 1108ms Fri Feb 22 11:30:08.541 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $box: [ [ 50.0, 50.0 ], [ 99.0, 99.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1974899 reslen:48 1974ms Fri Feb 22 11:30:09.575 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 50.0 ], [ 99.0, 99.0 ] ] } } } cursorid:3905810863944715 ntoreturn:0 keyUpdates:0 locks(micros) r:1030854 nreturned:77672 reslen:4194308 1030ms Fri Feb 22 11:30:11.213 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 50.0 ], [ 99.0, 99.0 ] ] } } } cursorid:3905810863944715 ntoreturn:0 keyUpdates:0 locks(micros) r:928105 nreturned:47227 reslen:2550278 928ms Fri Feb 22 11:30:14.884 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 99.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3340083 reslen:48 3340ms Fri Feb 22 11:30:15.735 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 99.0 ] ] } } } cursorid:3932996099644979 ntoreturn:0 keyUpdates:0 locks(micros) r:848051 nreturned:77672 reslen:4194308 848ms Fri Feb 22 11:30:17.627 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 99.0 ] ] } } } cursorid:3932996099644979 ntoreturn:0 keyUpdates:0 locks(micros) r:1351430 nreturned:77672 reslen:4194308 1351ms Fri Feb 22 11:30:19.872 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 99.0 ] ] } } } cursorid:3932996099644979 ntoreturn:0 keyUpdates:0 locks(micros) r:1582027 nreturned:77672 reslen:4194308 1582ms Fri Feb 22 11:30:20.863 
[conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 0.0, 0.0 ], [ 49.0, 99.0 ] ] } } } cursorid:3932996099644979 ntoreturn:0 keyUpdates:0 locks(micros) r:448313 nreturned:16883 reslen:911702 448ms Fri Feb 22 11:30:25.005 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 99.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3970192 reslen:48 3970ms Fri Feb 22 11:30:25.911 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 99.0 ] ] } } } cursorid:3976376426744004 ntoreturn:0 keyUpdates:0 locks(micros) r:902459 nreturned:77672 reslen:4194308 902ms Fri Feb 22 11:30:27.713 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 99.0 ] ] } } } cursorid:3976376426744004 ntoreturn:0 keyUpdates:0 locks(micros) r:1280472 nreturned:77672 reslen:4194308 1280ms Fri Feb 22 11:30:29.912 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 99.0 ] ] } } } cursorid:3976376426744004 ntoreturn:0 keyUpdates:0 locks(micros) r:1619995 nreturned:77672 reslen:4194308 1620ms Fri Feb 22 11:30:31.097 [conn42] getmore test.testMnyPts query: { loc: { $within: { $box: [ [ 50.0, 0.0 ], [ 99.0, 99.0 ] ] } } } cursorid:3976376426744004 ntoreturn:0 keyUpdates:0 locks(micros) r:444393 nreturned:16883 reslen:911702 444ms Fri Feb 22 11:30:37.363 [conn42] command test.$cmd command: { count: "testMnyPts", query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6137125 reslen:48 6137ms Fri Feb 22 11:30:38.255 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:889167 nreturned:77672 reslen:4194308 889ms Fri Feb 22 11:30:40.215 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ 
[ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:1444934 nreturned:77672 reslen:4194308 1444ms
Fri Feb 22 11:30:42.183 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:1323555 nreturned:77672 reslen:4194308 1323ms
Fri Feb 22 11:30:43.918 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:1204614 nreturned:77672 reslen:4194308 1204ms
Fri Feb 22 11:30:45.584 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:1161423 nreturned:77672 reslen:4194308 1161ms
Fri Feb 22 11:30:47.499 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:1199094 nreturned:77672 reslen:4194308 1199ms
Fri Feb 22 11:30:48.958 [conn42] getmore test.testMnyPts query: { loc: { $within: { $center: [ [ 0.0, 0.0 ], 139.7571426749364 ] } } } cursorid:4029333538768323 ntoreturn:0 keyUpdates:0 locks(micros) r:928283 nreturned:33817 reslen:1826138 928ms
Fri Feb 22 11:30:49.301 [conn42] end connection 127.0.0.1:42999 (0 connections now open)
1.4152 minutes
Fri Feb 22 11:30:49.337 [initandlisten] connection accepted from 127.0.0.1:37574 #43 (1 connection now open)
Fri Feb 22 11:30:49.338 [conn43] end connection 127.0.0.1:37574 (0 connections now open)
*******************************************
Test : geo_near_random1.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_near_random1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_near_random1.js";TestData.testFile = "geo_near_random1.js";TestData.testName = "geo_near_random1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:30:49 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:30:49.513 [initandlisten] connection accepted from 127.0.0.1:62571 #44 (1 connection now open)
null
Fri Feb 22 11:30:49.527 [conn44] CMD: drop test.nightly.geo_near_random1
starting test: nightly.geo_near_random1
Fri Feb 22 11:30:49.529 [conn44] build index test.nightly.geo_near_random1 { _id: 1 }
Fri Feb 22 11:30:49.530 [conn44] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:30:49.545 [conn44] build index test.nightly.geo_near_random1 { loc: "2d" }
Fri Feb 22 11:30:49.550 [conn44] build index done. scanned 200 total records.
0.005 secs
testing point: [ 0, 0 ] opts: { "sphere" : 0, "nToTest" : 200 }
testing point: [ -53.671337890625, -15.2984619140625 ] opts: { "sphere" : 0, "nToTest" : 200 }
testing point: [ -156.87446899414064, 30.2288818359375 ] opts: { "sphere" : 0, "nToTest" : 200 }
testing point: [ 9.739459228515614, -49.04296875 ] opts: { "sphere" : 0, "nToTest" : 200 }
testing point: [ 39.64959106445312, -83.243408203125 ] opts: { "sphere" : 0, "nToTest" : 200 }
Fri Feb 22 11:30:57.082 [conn44] end connection 127.0.0.1:62571 (0 connections now open)
7764.6720 ms
Fri Feb 22 11:30:57.104 [initandlisten] connection accepted from 127.0.0.1:36545 #45 (1 connection now open)
Fri Feb 22 11:30:57.105 [conn45] end connection 127.0.0.1:36545 (0 connections now open)
*******************************************
Test : geo_near_random2.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_near_random2.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_near_random2.js";TestData.testFile = "geo_near_random2.js";TestData.testName = "geo_near_random2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:30:57 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:30:57.292 [initandlisten] connection accepted from 127.0.0.1:40086 #46 (1 connection now open)
null
Fri Feb 22 11:30:57.297 [conn46] CMD: drop test.nightly.geo_near_random2
starting test: nightly.geo_near_random2
Fri Feb 22 11:30:57.298 [conn46] build index test.nightly.geo_near_random2 { _id: 1 }
Fri Feb 22 11:30:57.300 [conn46] build index done. scanned 0 total records.
0.002 secs
Fri Feb 22 11:30:57.792 [conn46] build index test.nightly.geo_near_random2 { loc: "2d" }
Fri Feb 22 11:30:57.831 [conn46] build index done. scanned 10000 total records. 0.038 secs
testing point: [ 0, 0 ] opts: { "sphere" : 0, "nToTest" : 100 }
testing point: [ -77.14705810546876, -1.40625 ] opts: { "sphere" : 0, "nToTest" : 100 }
testing point: [ -124.55966796875, 58.5736083984375 ] opts: { "sphere" : 0, "nToTest" : 100 }
testing point: [ -67.51740112304688, -65.14892578125 ] opts: { "sphere" : 0, "nToTest" : 100 }
testing point: [ 176.7486755371094, -75.8221435546875 ] opts: { "sphere" : 0, "nToTest" : 100 }
testing point: [ 0, 0 ] opts: { "sphere" : 1, "nToTest" : 100 }
testing point: [ -39.4497509765625, 11.56640625 ] opts: { "sphere" : 1, "nToTest" : 100 }
testing point: [ 125.78945312500002, -42.32373046875 ] opts: { "sphere" : 1, "nToTest" : 100 }
testing point: [ -102.3269091796875, -23.44921875 ] opts: { "sphere" : 1, "nToTest" : 100 }
testing point: [ 118.00666992187502, 38.1533203125 ] opts: { "sphere" : 1, "nToTest" : 100 }
Fri Feb 22 11:31:04.498 [conn46] end connection 127.0.0.1:40086 (0 connections now open)
7412.7581 ms
Fri Feb 22 11:31:04.519 [initandlisten] connection accepted from 127.0.0.1:45990 #47 (1 connection now open)
Fri Feb 22 11:31:04.520 [conn47] end connection 127.0.0.1:45990 (0 connections now open)
*******************************************
Test : geo_polygon.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_polygon.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/geo_polygon.js";TestData.testFile = "geo_polygon.js";TestData.testName = "geo_polygon";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:31:04 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:31:04.693 [initandlisten] connection accepted from 127.0.0.1:57800 #48 (1 connection now open)
null
Fri Feb 22 11:31:04.697 [conn48] CMD: drop test.geo_polygon4
Fri Feb 22 11:31:04.699 [conn48] build index test.geo_polygon4 { _id: 1 }
Fri Feb 22 11:31:04.701 [conn48] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:31:34.676 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:31:34.678 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:31:37.079 [conn48] build index done. scanned 518400 total records.
2.401 secs Fri Feb 22 11:31:37.079 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2401941 2401ms Fri Feb 22 11:31:40.815 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 1.0, 1.0 ], [ 0.0, 2.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3734518 reslen:48 3734ms Fri Feb 22 11:31:47.460 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6624950 reslen:48 6625ms Fri Feb 22 11:31:51.568 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:4107137 reslen:48 4107ms Fri Feb 22 11:31:55.470 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 2.0 ], [ 2.0, 2.0 ], [ 2.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3901289 reslen:48 3901ms Fri Feb 22 11:32:00.036 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ], [ 5.0, 5.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:4565822 reslen:48 4565ms Fri Feb 22 11:32:03.949 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 2.0 ], [ 2.0, 2.0 ], [ 2.0, 0.0 ], [ 1.0, 1.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3911560 reslen:48 3911ms Fri Feb 22 11:32:07.847 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 0.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) 
r:3897775 reslen:48 3897ms Fri Feb 22 11:32:11.383 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 1.0, 0.0 ], [ 2.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3535090 reslen:48 3535ms Fri Feb 22 11:32:14.899 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 0.0 ], [ 1.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3515398 reslen:48 3515ms Fri Feb 22 11:32:19.067 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 2.0 ], [ 0.0, 1.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:4167217 reslen:48 4167ms Fri Feb 22 11:32:22.876 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 1.0 ], [ 0.0, 0.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:3808499 reslen:48 3808ms Fri Feb 22 11:32:22.876 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:32:22.878 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:32:25.095 [conn48] build index done. scanned 518400 total records. 
2.217 secs Fri Feb 22 11:32:25.095 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2217539 2217ms Fri Feb 22 11:32:26.334 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 1.0, 1.0 ], [ 0.0, 2.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1238071 reslen:48 1238ms Fri Feb 22 11:32:33.031 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6671181 reslen:48 6671ms Fri Feb 22 11:32:34.252 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1220597 reslen:48 1220ms Fri Feb 22 11:32:35.259 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 2.0 ], [ 2.0, 2.0 ], [ 2.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1005945 reslen:48 1005ms Fri Feb 22 11:32:36.269 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ], [ 5.0, 5.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1009007 reslen:48 1009ms Fri Feb 22 11:32:37.322 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 2.0 ], [ 2.0, 2.0 ], [ 2.0, 0.0 ], [ 1.0, 1.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1052698 reslen:48 1052ms Fri Feb 22 11:32:38.366 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 0.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) 
r:1043054 reslen:48 1043ms Fri Feb 22 11:32:39.376 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 1.0, 0.0 ], [ 2.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1010159 reslen:48 1010ms Fri Feb 22 11:32:40.398 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 0.0 ], [ 1.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1020803 reslen:48 1020ms Fri Feb 22 11:32:41.592 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 2.0 ], [ 0.0, 1.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1193572 reslen:48 1193ms Fri Feb 22 11:32:42.725 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 1.0 ], [ 0.0, 0.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:1132272 reslen:48 1132ms Fri Feb 22 11:32:42.725 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:32:42.727 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:32:45.095 [conn48] build index done. scanned 518400 total records. 
2.368 secs Fri Feb 22 11:32:45.095 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2368363 2368ms Fri Feb 22 11:32:45.341 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 1.0, 1.0 ], [ 0.0, 2.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:244973 reslen:48 244ms Fri Feb 22 11:32:51.095 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5754023 reslen:48 5754ms Fri Feb 22 11:32:51.377 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:260268 reslen:48 260ms Fri Feb 22 11:32:51.649 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 2.0 ], [ 2.0, 2.0 ], [ 2.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:271680 reslen:48 271ms Fri Feb 22 11:32:51.954 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 10.0 ], [ 10.0, 10.0 ], [ 10.0, 0.0 ], [ 5.0, 5.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:303692 reslen:48 303ms Fri Feb 22 11:32:52.260 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 2.0 ], [ 2.0, 2.0 ], [ 2.0, 0.0 ], [ 1.0, 1.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:305520 reslen:48 305ms Fri Feb 22 11:32:52.513 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 0.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:252597 
reslen:48 252ms Fri Feb 22 11:32:52.829 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 1.0, 0.0 ], [ 2.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:315162 reslen:48 315ms Fri Feb 22 11:32:53.195 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 0.0 ], [ 0.0, 0.0 ], [ 1.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:365994 reslen:48 366ms Fri Feb 22 11:32:53.567 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 2.0 ], [ 0.0, 1.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:371191 reslen:48 371ms Fri Feb 22 11:32:53.813 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ 0.0, 1.0 ], [ 0.0, 0.0 ], [ 0.0, 0.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:245115 reslen:48 245ms Fri Feb 22 11:32:53.813 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:32:53.815 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:32:56.043 [conn48] build index done. scanned 518400 total records. 2.227 secs Fri Feb 22 11:32:56.043 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2228023 2228ms Fri Feb 22 11:33:02.153 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6049740 reslen:48 6049ms Fri Feb 22 11:33:02.729 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:02.730 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:04.860 [conn48] build index done. scanned 518400 total records. 
2.13 secs Fri Feb 22 11:33:04.860 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2130188 2130ms Fri Feb 22 11:33:10.667 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5789215 reslen:48 5789ms Fri Feb 22 11:33:10.842 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:10.843 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:13.011 [conn48] build index done. scanned 518400 total records. 2.167 secs Fri Feb 22 11:33:13.011 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2168066 2168ms Fri Feb 22 11:33:19.173 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6155421 reslen:48 6155ms Fri Feb 22 11:33:19.251 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:19.252 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:21.781 [conn48] build index done. scanned 518400 total records. 2.529 secs Fri Feb 22 11:33:21.781 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2529235 2529ms Fri Feb 22 11:33:27.882 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6096494 reslen:48 6096ms Fri Feb 22 11:33:27.931 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:27.932 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:30.548 [conn48] build index done. scanned 518400 total records. 
2.615 secs Fri Feb 22 11:33:30.548 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2616140 2616ms Fri Feb 22 11:33:36.301 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5751201 reslen:48 5751ms Fri Feb 22 11:33:36.341 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:36.342 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:38.417 [conn48] build index done. scanned 518400 total records. 2.075 secs Fri Feb 22 11:33:38.417 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2075310 2075ms Fri Feb 22 11:33:44.628 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6208534 reslen:48 6208ms Fri Feb 22 11:33:44.665 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:44.666 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:46.738 [conn48] build index done. scanned 518400 total records. 2.071 secs Fri Feb 22 11:33:46.738 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2072017 2072ms Fri Feb 22 11:33:53.197 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6456794 reslen:48 6456ms Fri Feb 22 11:33:53.234 [conn48] CMD: dropIndexes test.geo_polygon4 Fri Feb 22 11:33:53.235 [conn48] build index test.geo_polygon4 { loc: "2d" } Fri Feb 22 11:33:55.740 [conn48] build index done. scanned 518400 total records. 
2.504 secs
Fri Feb 22 11:33:55.740 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2504692 2504ms
Fri Feb 22 11:34:01.620 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5876523 reslen:48 5876ms
Fri Feb 22 11:34:01.657 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:01.658 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:04.198 [conn48] build index done. scanned 518400 total records. 2.539 secs
Fri Feb 22 11:34:04.198 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2540029 2540ms
Fri Feb 22 11:34:10.165 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5964495 reslen:48 5964ms
Fri Feb 22 11:34:10.218 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:10.219 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:12.995 [conn48] build index done. scanned 518400 total records. 2.775 secs
Fri Feb 22 11:34:12.995 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2776056 2776ms
Fri Feb 22 11:34:19.026 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6029223 reslen:48 6029ms
Fri Feb 22 11:34:19.066 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:19.067 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:21.596 [conn48] build index done. scanned 518400 total records. 2.528 secs
Fri Feb 22 11:34:21.596 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2528460 2528ms
Fri Feb 22 11:34:27.791 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6193069 reslen:48 6193ms
Fri Feb 22 11:34:27.827 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:27.828 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:29.929 [conn48] build index done. scanned 518400 total records. 2.1 secs
Fri Feb 22 11:34:29.929 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2100740 2100ms
Fri Feb 22 11:34:36.071 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6139710 reslen:48 6139ms
Fri Feb 22 11:34:36.110 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:36.111 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:38.422 [conn48] build index done. scanned 518400 total records. 2.311 secs
Fri Feb 22 11:34:38.422 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2311357 2311ms
Fri Feb 22 11:34:44.722 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6297420 reslen:48 6297ms
Fri Feb 22 11:34:44.759 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:44.760 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:46.885 [conn48] build index done. scanned 518400 total records. 2.124 secs
Fri Feb 22 11:34:46.885 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2124953 2124ms
Fri Feb 22 11:34:53.312 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6424334 reslen:48 6424ms
Fri Feb 22 11:34:53.349 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:34:53.350 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:34:55.526 [conn48] build index done. scanned 518400 total records. 2.176 secs
Fri Feb 22 11:34:55.526 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2176365 2176ms
Fri Feb 22 11:35:01.648 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6119660 reslen:48 6119ms
Fri Feb 22 11:35:01.686 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:01.687 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:04.629 [conn48] build index done. scanned 518400 total records. 2.942 secs
Fri Feb 22 11:35:04.630 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2942675 2942ms
Fri Feb 22 11:35:11.266 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6632783 reslen:48 6632ms
Fri Feb 22 11:35:11.303 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:11.304 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:13.869 [conn48] build index done. scanned 518400 total records. 2.564 secs
Fri Feb 22 11:35:13.869 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2564981 2565ms
Fri Feb 22 11:35:19.686 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5814599 reslen:48 5814ms
Fri Feb 22 11:35:19.722 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:19.723 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:21.883 [conn48] build index done. scanned 518400 total records. 2.159 secs
Fri Feb 22 11:35:21.883 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2159875 2159ms
Fri Feb 22 11:35:27.979 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6093347 reslen:48 6093ms
Fri Feb 22 11:35:28.021 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:28.022 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:30.201 [conn48] build index done. scanned 518400 total records. 2.179 secs
Fri Feb 22 11:35:30.202 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2179743 2179ms
Fri Feb 22 11:35:35.990 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5786478 reslen:48 5786ms
Fri Feb 22 11:35:36.029 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:36.030 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:38.268 [conn48] build index done. scanned 518400 total records. 2.237 secs
Fri Feb 22 11:35:38.268 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2237431 2237ms
Fri Feb 22 11:35:44.159 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5888711 reslen:48 5888ms
Fri Feb 22 11:35:44.195 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:44.196 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:46.605 [conn48] build index done. scanned 518400 total records. 2.408 secs
Fri Feb 22 11:35:46.605 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2408285 2408ms
Fri Feb 22 11:35:52.759 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6152271 reslen:48 6152ms
Fri Feb 22 11:35:52.797 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:35:52.798 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:35:55.159 [conn48] build index done. scanned 518400 total records. 2.361 secs
Fri Feb 22 11:35:55.160 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2361776 2361ms
Fri Feb 22 11:36:00.972 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5810582 reslen:48 5810ms
Fri Feb 22 11:36:01.010 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:36:01.011 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:36:03.211 [conn48] build index done. scanned 518400 total records. 2.2 secs
Fri Feb 22 11:36:03.211 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2200316 2200ms
Fri Feb 22 11:36:10.061 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6847245 reslen:48 6847ms
Fri Feb 22 11:36:10.098 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:36:10.099 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:36:12.281 [conn48] build index done. scanned 518400 total records. 2.182 secs
Fri Feb 22 11:36:12.281 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2182192 2182ms
Fri Feb 22 11:36:18.579 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6296006 reslen:48 6296ms
Fri Feb 22 11:36:18.616 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:36:18.617 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:36:20.885 [conn48] build index done. scanned 518400 total records. 2.267 secs
Fri Feb 22 11:36:20.885 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2267807 2267ms
Fri Feb 22 11:36:26.552 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5665432 reslen:48 5665ms
Fri Feb 22 11:36:26.594 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:36:26.595 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:36:29.130 [conn48] build index done. scanned 518400 total records. 2.535 secs
Fri Feb 22 11:36:29.130 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2535259 2535ms
Fri Feb 22 11:36:35.463 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6330141 reslen:48 6330ms
Fri Feb 22 11:36:35.500 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:36:35.501 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:36:38.001 [conn48] build index done. scanned 518400 total records. 2.499 secs
Fri Feb 22 11:36:38.001 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2499781 2499ms
Fri Feb 22 11:36:44.154 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:6150814 reslen:48 6150ms
Fri Feb 22 11:36:44.194 [conn48] CMD: dropIndexes test.geo_polygon4
Fri Feb 22 11:36:44.195 [conn48] build index test.geo_polygon4 { loc: "2d" }
Fri Feb 22 11:36:46.759 [conn48] build index done. scanned 518400 total records.
2.564 secs
Fri Feb 22 11:36:46.759 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:2564477 2564ms
Fri Feb 22 11:36:52.412 [conn48] command test.$cmd command: { count: "geo_polygon4", query: { loc: { $within: { $polygon: [ [ -180.0, -180.0 ], [ -180.0, 180.0 ], [ 180.0, 180.0 ], [ 180.0, -180.0 ] ] } } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:5650505 reslen:48 5650ms
Fri Feb 22 11:36:52.472 [conn48] end connection 127.0.0.1:57800 (0 connections now open)
5.7995 minutes
Fri Feb 22 11:36:52.493 [initandlisten] connection accepted from 127.0.0.1:59369 #49 (1 connection now open)
Fri Feb 22 11:36:52.493 [conn49] end connection 127.0.0.1:59369 (0 connections now open)
Fri Feb 22 11:36:52.494 got signal 15 (Terminated), will terminate after current cmd ends
Fri Feb 22 11:36:52.494 [interruptThread] now exiting
Fri Feb 22 11:36:52.494 dbexit:
Fri Feb 22 11:36:52.494 [interruptThread] shutdown: going to close listening sockets...
Fri Feb 22 11:36:52.494 [interruptThread] closing listening socket: 8
Fri Feb 22 11:36:52.494 [interruptThread] closing listening socket: 9
Fri Feb 22 11:36:52.494 [interruptThread] closing listening socket: 10
Fri Feb 22 11:36:52.494 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Fri Feb 22 11:36:52.494 [interruptThread] shutdown: going to flush diaglog...
Fri Feb 22 11:36:52.494 [interruptThread] shutdown: going to close sockets...
Fri Feb 22 11:36:52.494 [interruptThread] shutdown: waiting for fs preallocator...
Fri Feb 22 11:36:52.494 [interruptThread] shutdown: lock for final commit...
Fri Feb 22 11:36:52.494 [interruptThread] shutdown: final commit...
Fri Feb 22 11:36:52.591 [interruptThread] shutdown: closing all files...
Fri Feb 22 11:36:52.627 [interruptThread] closeAllFiles() finished
Fri Feb 22 11:36:52.627 [interruptThread] journalCleanup...
Fri Feb 22 11:36:52.627 [interruptThread] removeJournalFiles
Fri Feb 22 11:36:52.634 dbexit: really exiting now
cwd [/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo]
num procs:1
removing: /data/db/sconsTests//test.ns
removing: /data/db/sconsTests//test.0
removing: /data/db/sconsTests//test.3
removing: /data/db/sconsTests//test.1
removing: /data/db/sconsTests//local.0
removing: /data/db/sconsTests//local.ns
removing: /data/db/sconsTests//test.2
buildlogger: could not find or import buildbot.tac for authentication
Fri Feb 22 11:36:53.126 [initandlisten] MongoDB starting : pid=26685 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=bs-smartos-x86-64-1.10gen.cc
Fri Feb 22 11:36:53.127 [initandlisten]
Fri Feb 22 11:36:53.127 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
Fri Feb 22 11:36:53.127 [initandlisten] ** uses to detect impending page faults.
Fri Feb 22 11:36:53.127 [initandlisten] ** This may result in slower performance for certain use cases
Fri Feb 22 11:36:53.127 [initandlisten]
Fri Feb 22 11:36:53.127 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
Fri Feb 22 11:36:53.127 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
Fri Feb 22 11:36:53.127 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
Fri Feb 22 11:36:53.127 [initandlisten] allocator: system
Fri Feb 22 11:36:53.127 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999, setParameter: [ "enableTestCommands=1" ] }
Fri Feb 22 11:36:53.127 [initandlisten] journal dir=/data/db/sconsTests/journal
Fri Feb 22 11:36:53.127 [initandlisten] recover : no journal files present, no recovery needed
Fri Feb 22 11:36:53.142 [FileAllocator] allocating new datafile /data/db/sconsTests/local.ns, filling with zeroes...
Fri Feb 22 11:36:53.142 [FileAllocator] creating directory /data/db/sconsTests/_tmp
Fri Feb 22 11:36:53.142 [FileAllocator] done allocating datafile /data/db/sconsTests/local.ns, size: 16MB, took 0 secs
Fri Feb 22 11:36:53.142 [FileAllocator] allocating new datafile /data/db/sconsTests/local.0, filling with zeroes...
Fri Feb 22 11:36:53.142 [FileAllocator] done allocating datafile /data/db/sconsTests/local.0, size: 64MB, took 0 secs
Fri Feb 22 11:36:53.145 [initandlisten] waiting for connections on port 27999
Fri Feb 22 11:36:53.145 [websvr] admin web console waiting for connections on port 28999
Fri Feb 22 11:36:53.943 [initandlisten] connection accepted from 127.0.0.1:43082 #1 (1 connection now open)
running /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1
*******************************************
Test : huge_multikey_index.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/huge_multikey_index.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/huge_multikey_index.js";TestData.testFile = "huge_multikey_index.js";TestData.testName = "huge_multikey_index";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:36:53 2013
Fri Feb 22 11:36:53.961 [conn1] end connection 127.0.0.1:43082 (0 connections now open)
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:36:54.128 [initandlisten] connection accepted from 127.0.0.1:47808 #2 (1 connection now open)
null
Fri Feb 22 11:36:54.139 [conn2] CMD: drop test.huge_multikey_index
Fri Feb 22 11:36:54.149 [conn2] end connection 127.0.0.1:47808 (0 connections now open)
224.1070 ms
Fri Feb 22 11:36:54.169 [initandlisten] connection accepted from 127.0.0.1:45791 #3 (1 connection now open)
Fri Feb 22 11:36:54.170 [conn3] end connection 127.0.0.1:45791 (0 connections now open)
*******************************************
Test : index_check10.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_check10.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_check10.js";TestData.testFile = "index_check10.js";TestData.testName = "index_check10";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:36:54 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:36:54.341 [initandlisten] connection accepted from 127.0.0.1:60518 #4 (1 connection now open)
null
setting random seed: 1361533014343
Fri Feb 22 11:36:54.345 [conn4] CMD: drop test.test_index_check10
Fri Feb 22 11:36:54.353 [FileAllocator] allocating new datafile /data/db/sconsTests/test.ns, filling with zeroes...
Fri Feb 22 11:36:54.354 [FileAllocator] done allocating datafile /data/db/sconsTests/test.ns, size: 16MB, took 0 secs
Fri Feb 22 11:36:54.354 [FileAllocator] allocating new datafile /data/db/sconsTests/test.0, filling with zeroes...
Fri Feb 22 11:36:54.354 [FileAllocator] done allocating datafile /data/db/sconsTests/test.0, size: 64MB, took 0 secs
Fri Feb 22 11:36:54.354 [FileAllocator] allocating new datafile /data/db/sconsTests/test.1, filling with zeroes...
Fri Feb 22 11:36:54.354 [FileAllocator] done allocating datafile /data/db/sconsTests/test.1, size: 128MB, took 0 secs
Fri Feb 22 11:36:54.358 [conn4] build index test.test_index_check10 { _id: 1 }
Fri Feb 22 11:36:54.359 [conn4] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:37:09.891 [conn4] build index test.test_index_check10 { a: 1.0 }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 988 { : "aacmkqxropiyyfzkipdksjddvecblfeotdzufknyldboiwytujvuthuhnlckqqoswvqerktwnouigshughatpkbgiugiubeiayeulpehwsfzpczewzsgegckhkrtnwbzyvohrblzozwntnfmuptovl..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1009 { : "abfcgwrgltvlzlexkkphsydxlipchxjxvmyqbjfvzckvobbrllraqlaragrnvseulfrufrfwoeeoribnmjvvipmjkhesquqqxwfhruawgyzzsmzmjdqllqpfehemkapoawyqtfeszgdflgutzvheyg..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 924 { : "abghqyaablufkbxqztsyatrhjahxrlfxcfrbmiqpdpiglgbajrwxsbjtkkdzfdvtdrtnqpncykdlxlviluspvqjeibophyoiqpokuudyogrxhvijlmvlmnkjhddbknkhoogqnyljialbaokryoyzrs..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "ablyiiamdjytvbtdhsdbxggvbpbrbyzbbzzjejxmcjaizoafsulkkubzupfiyuykvfuutwdtsspknbwdjimjqwizhrzirgluecnugzanxslshnefeixqtcevgjramgtvblxettptxelgywljjknesa..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 964 { : "aboypgvwfdmwosbztqpgcsqmlgllewlvqhmimahfhowqxwfpusmwkelxrrkjcidriuudtileicxpvzwpxfwcutxdbhojnrjvvowbtwujcimrcfitqaxvcvatqyyejglmkimgzrsjylugocwscrxmla..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "absgkmasnrysgiybunwciljzbmqbcwmlytilskvhyczkewsqlywtyaswbejswlptcazapgqzuubrivjipiknxepremuxgkydglkxfypslzzsxpmebhdvfpwmjeeyupjkhsdzlbgumakvvjbusroyov..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 904 { : "abuvnpetmgdhcrjfkxhyfjsukpwseuimwgqpsoiatgrhhruvkbutplrwbfjjwcmrmvmhzhbrbcsnoryxkssvwdxkaczrqxamrlbtrtavaweukvcnmaawlmbbpfjvozraxsllgcusvhgomjltjvwzts..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 848 { : "abviltxxqwlphwxwqvapynucgrifbnagnqkftakwqyvcnbtxroqadrlhplhbkbmbsujaribqlekonpjsqscnzmsmwscausharqympsxjqpliysrqgxoabbhxsnmfrlnbsietomjjxaendydcjpdbwa..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "acexejviifoqkemjbferonavtphvmfhyygrdbkjpqrvyyxfmdpnfljwlgvrpcrfgwtrjesyqwxacanrwmgkopubnahbdcuozxjshmrcsbmnubbvmxeuckenqifwgubyznhivyqasrhuvgncbmvwhew..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 888 { : "acmcpkcuuvttkkcjkqwxsovgsmysnsvrmaalqmlcwzusooapjctnxynxxcxzwbrmojqhkwdtgvuekkjklhvnwxrtrajtozwpovczcovilmcifvctdnnuruhqdlscjbdngentvfuwlbffgfvulbovmo..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 943 { : "acttirdsqrjszvyuyxuhjnfoyfafznjaxhlhpcorcqtosylysesljlmbroxrfwobdljfcvbkzrwudcbyibmqutjzmmmzormfipugfqdmchnjamekhlcwnwvymzvvyhltxkoplngnsvyuenacswboqk..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "actvkkymmynpwzridydussmvxqgyyeqgxziibxgurtaojusfttvzmkbnvvhzijjkyxidimuplrehgprctnokatjiiuglmpdbbocywkqhzrjqhhowlfjvavblmfqsszykwckmkswugbimmnjcuavzlq..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 890 { : "aesfqtzcwnpwfmclvocbgeloodpfqancpijgduayggrzjynuuiiqnrvmeyoayqldanepkqjfzucrrmqkpwpyoauqfaiocxyltlhognhulbwmdevkdyipidnbxtogsgytcxmbuzaxsynrbsmbdiqwch..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 983 { : "affaomdoonaxcymnpszoewjbkexqcturmuyehxorztdaqsnihrgdicygcfapdnqksursyldyubyupvjdpmozkmswmhtutglmqaoqdclzoddhqcmkmkuzvitbodapumpmeihqykcmrtyfuwlwqvmqfm..." }
Fri Feb 22 11:37:09.959 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 952 { : "afjqfgrwckultasfayhftqowavpjjeqjgbiohtqgrvuxdhlootdwggzuvmeirhceqsyfzsznmvgfvtucfjheqabgckrqthtcoeyybvqucqjosybeqeywqyandaiiznrbwjgwfnfoqslxqzelxvtpzr..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 882 { : "afvadpcmrqhxpoaciusjfwuwxgelwkytncwmtwvsmsxwljbfkswyzrvcqwiptkzeeocqoedmykogyorulstidygpsslkeeyiuorvofegjqxmcpufchirirtjxplrezutzcjeagcraggtdjmczwbaof..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 840 { : "afzowxsedddwixvqpoitxjuwesazndvtekyqrmcrpppaowtwmnmthhppuvikuvqdlscqskmtcrvhglxoximkrgtxycoatfijspozkidtldqmqyxojxmotwdfofwmoftacjkwwbrbsqpapwyboymqch..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 991 { : "aggiarpgefmpkvzntjktywtrfvzcgwozdvmgtfqppnqsoxvomduherujprxpabworvlyzhvvljcrtftrcxjzxuihgxevfrlqigbmksogybphdgtgfylaqcspyoiehuyapsxeeljaglopkoaliezxen..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 883 { : "agiaxjngwvqztgegnswiwnbdtouadwshhpmszpcjsfkcmqoqhquttaybpxfxzomoznzfjdsmadlkbeztwawnspffhktmgddkqqmzfvwsbmezckruwuegqhnpmttlwsonwfircujbxrsgszcxtkiuwb..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 844 { : "agnqiglbdadzqzzdfankffughutotdxovfzievdbiniursphpehvcxwfetkxnijthnmklacadzwfewxvkfpsppzwigjbnjupffefdnikygnraxvejolgjgtfqcoxwlhqefwvoncvxfmhmtlwztkmki..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 964 { : "ahaejlwplnayfcmxyugnqaylkebsymvzqskbkywnowakcvpnbglvahtrxtucrqtthctiqarimqbchmpegtczxjjvcighefqxpzikaajyhdmiypnasdkunsobedginkublrgieolohooxybwgzgoovg..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 959 { : "ahbmrrycfjgsvqqknyepijehhzooplgsmkugqwotnwbbfdlardumhbezorniuuncfuewskmbmuqvznulvcridwmuhaklukybesxagvkjzplcqknhkcwgibuhfymvveaufkgfvjdhpqnpzypiqkwaro..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 974 { : "ahpirbcmmmktbccvgptadztzxzwffmlpszjirrcqsidrwhboglbsngtrnkvcnanxbghczqqxcoxpuzftpncclsuvmejpfwbymkfkaxhfxwflcynaiqrerlbkiykollixrszmxtxrtunvtblctkrotw..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 918 { : "ahuhirbisdckvrdwfsfyxldcviiegacffdifmqistybjgbgfkrlvdexuudlitvisvefwefbjqcnfxphtyqtbcjyfmhdoamnsnskekfghtawtjnmxoxrjpnlzxqucbjrtcbzoxtjpoalmylngfprygl..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 929 { : "aiezbfoypoqwdlckoworkleylzcdxuyjohclspykeuckprftclslvykhnpnhthtcgyuwemsjpenqvewabhvgmhjpbcmsdsjmkiktwtinswutsjgdtrgtmfsonnufebrugztxpepmkungprvsjlexir..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 856 { : "aifzbqhfdqqszglhmmoqdxvyugcicueskioxehdbzxjpotnxbjfjklbppspcowbgfyqboogkmftzbdzeytuzpsxpezsnuzznobkgsetuyvakbvfwkcuqduvrohwctkeaxoxzbkmfdbuiczkefvyjlf..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "aikipywvohboxnwxjmjyjcpmsmshofwsciglpjqytaavprdruenpxxmqkipuwvnnxmcpcaczkddyczvvsloqapxyabeqdcuhefsbjgrihzadodyctjqxqyieqhrbsoiljhkgyfwawjrslqoselfkxo..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 878 { : "ailrdyurlwqvqzsabkfmrnxwpepuyusqcrqgmlwwhtvsxqkejymjfugjhgsgxipifhkalggwktzmboyivlaqgoggkhvjcfvmjerclwyxqzldeskfquigcbbhhtfownhuyhkdreifxklkyjnldwtout..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 867 { : "airwjwgxtnwfkclloounzfmqpelbpfivegtpvcuotweqhgfiprovsksgrxscstuuzgeppugdetdrwbmnnkeqvtfpnualxgrgnlyctktwafefowdwakcevgksewrkkymiyzzhdeuohinakslpsmtqtk..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "ajwofndaqywfoxqprwwkvezjuvqyemawcvrkdhyztnfgabauzeanwfqvgvgprmnnjutrhwtlfpxgfmskqnrgfhwjyzwmcwilkvjjwtlmsuzcfaftfdgfrjnjxagppikupafoghwtrbvtkkyuxtsagg..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 973 { : "akfjnmsrnigfmgihqflpqiovyqvpnqvlqorhwzojrjtkaomlepgdnyttnnslgkychvmnypdfhwvcwslbfzvjdyzsgsgehfenuuiucnitmzisbltixzbhbffjgxdbixoptzkjahdthxtlbuctiblocd..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 959 { : "akgjrkhmrznkwpcljhatjfoouenfiffcbufyzitxnrahroivokkkujtmjiksrrlmebbhgwnzzumcprgrltvdjwfxlgamivkqorwyebweolddakwbrvmvaljillmpjlnegbecxkedqqwkvufipyiazj..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "aklygwvpljpxbtxlkdwwfoqisztyqymfrktjpmjqklmoolmwrddmaxncdplqatkkxiaztceqmuxryugdcwsackjjkvdqmwnmmpkgowopnrxtinwagajnphvmoqxmclhctgixpymervvjsinaavyiop..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 885 { : "aklyvzqwftnpkiskqejgknmajuitouyqnubdxyhllivgaaprsaeseemgzlafjsnlpqcjjqwynftnithnhofqqvjzardpeiodajlbelbxhqxjannwjondvzucgpxcfjfpydkdjyktjohfuqymnopzxi..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 948 { : "akndipboltmopdaepfrmpgudcdmsropbhxbwrmwowzpidajsnmzrgwvlfyvrrbxwmrrievkjrdcllvbnwfhrizkewkdvfhnawmqllnolfukcbscrwodpcomsxxuyfwcapwtpmnkctesvassdshtxjp..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 940 { : "akugqrcvsevvevmpiynkczppbcizyarvmlgrlhibhngrinassnkebyqqxfhyhwowhsleotzengwwhkzktmddivhwgampseeoluylvkecgazhewntwiliztphxsavyognxduvobaxhlltjsyckpvolh..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "alftgxlrcswnrtaxtllhzsbborzseuomymvqidqipofhjnmmxfiukgmbwprehhthlnylozhtffxlvaavieeroxrwapaxblwhlymvahlpszfglwidmhknuwxwyddqoxgwajgiirlablgwcooazvfwac..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 928 { : "amadzdlchbnipihekdpvthaerspobduoqfliqqkntgvtowumggrxizthxyeegrxgmztptjdbbdcjjtoupowctspmvqdzmiosnehtauqfecaraggbacwhzfqxmheznfbebmgsscexfrcebivrgejgqb..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1004 { : "amaloxelvsahfysxqismezybntjvgpqcbgdhqgvlcocqhgxulfakgxxrjdllhaghgobrmsxxpkniswteqznjgookzamqsjeuygkxnmcizdbwotmovfwwlomprmtpypamwqsryqwwdcmdpwbgwasqzl..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 885 { : "amckmcsotpvwxcfthltyhfgmkskeiffeectzzajvxebhozlzbvfstwmpxxkjssjvzmdjjpkowxrpplnaagvahitanwazdbbkppcmkimwddyepayhzzxhbabhaqomdhnfmzolsqxrsqeoagtybwbhbi..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "amlewzzdzlyedimuweswqzmnnqcizspfvgzsyqjmeyypaojudxvgjchpxmslibonslzsbklwuwhwuafssbkxeqckddrjjmcytwxxfrunyrirttxrkpkelaulkxmukkwjlsigguoicjpueuwiizzqmo..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 840 { : "amouhweecpilkjlyvsjaywljsjlqzixxngsdsbwacglayjomittulrboziksdwxfgzeuhdnfklfegelsceifoookrqcjnyfxgqwztbcizpreuozqrtwehbobrzvyilwxptljltvztvnpqkjeovwaep..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 888 { : "amrlzwvcnrrnhkeexbntoklrazyrxurlvcpymgciotaqcrmugrcxkjrekxyfnfsalbdquvbjdgeuhmooteldhwkffmsgktsdzunibqmudwpvectxugbxidgybhixliledlihnaydabbadagilmphwo..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 851 { : "anqwtwocvgwkdbwvqhcedtysuutwenpxrobvdhteuclnpxplfqasjsszjywtrsepiaesjpbphtldvarpqgazyiribilfvkzgnromphsadxlkntvzqnrbwovfjwrztadufqthswpslsisfjmxkkrhzj..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 947 { : "anvnnuofhguiacvnkavqdussoufyqunehcqptckuzdeaptrovzotoahgcharnexvvknuafdvvucabngoeruvgkgexnaxfjjrghbhlluqdrfxpgacdcbqdahavqgfconwhxtjstvvfrdjylxanntwru..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "anzdzxnrkhjmncwlxekxlthvjlerdffapgczxkthhfpvtpzakucrfgqgtlozcsicftdssoyeailonfnvtsbdgdlildmcfzducziaanzzvhdgttrhxffoonycuahrjfrkxkftiilefizthvhqcwmlzk..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 945 { : "aoopgzjuggscqazpstaorhxoblhypwsdeeytqhokeogilctsxyszyvtwldbzqdjroacpvlhqunvrzbkhulxrpmytbkjsjektbqbotrozwexhtegvrwhcpjxjmezvzrfcjsfflzkluckkonqjoeazdj..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 998 { : "apetwaklpoenqyprduflsojgxdcrawxrqhfrtqilaypsapgqeeaiyfbqritrbnjlwdfpcsziwituerbpgswmmjyhzftnycaozwkwvpgzpsuegdvbzhqehhlyhmncixrsyrkrckcankhhnqhbqgquqb..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 854 { : "apjkptfssbfpbqpnqajfkphqppgxptduymcozyhkosbfpziounlxjnjeyzvbotenboxinoyelqjssrbmrnoaklszgzxsmbluizsjtumuwvabpeyrlscqzzazxrkuxwzrbxmcdlzhajxcipwyqutqqe..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 915 { : "apmliofnywlykarqgdqjghimanvkljhhsnlpqoidztpfsnvfmrhrgyvdhbfgdqafooqjwtiptwdxanmzyxzfdlvssvtiasiiifnskbaenheazadteihusnyhvofauvwriyuioazqbhdkhnutepbdvl..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 876 { : "apxsvozizyqkmwqgmqbpqmguwzdtzwjycmxzqhdltvcnycjmmlgjdivltkdluxnmcqhdebiwqqoxndcjelhwesgfcylcohjmpvruagspfnfnqcontusaueidpruiqtuakroepoxofkfulkvorpakpq..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 889 { : "aqdrcizevykjuihfxwognkblsrwrzzyxnmtpscutsojooeurmvfuiklcbwpcthmeecytbvryuqcyrnldqhhqmrlrmcerhaxulzshxpxweduehjrjqkwoitiqhmlaxzcinzjogjwdmicxvlxwnrrjna..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1005 { : "aqqslmllukoghvlvxzdczpquogcfukipunlleybjvnqnxoxmwtgnlynngddwwjkntjvhvgksqbksfdxkubpwcvbngembwhpkjikleoaracfmrhbekmzcqkpocfgusmwuqigrzdcneivcsaskagqqae..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 932 { : "arlxfmvowcodqztrjafxnbrkygjgcntfxxntljqqwepoixxpadmwbtaemkszdspsiuzltkvemmdqdcvztotyzwnhhbwzmeynicdmkbwgrroiyhqraoacocwbawfbokvepsrhtgtkvohhrqiuqonebs..." }
Fri Feb 22 11:37:09.960 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 990 { : "arptxptjnlgjivuwvfpwhcxgntccmstxzcykbfxphmbllxqmjlovrgwtkkmxcxratjrdmxcagppnltesuszwgcqnlizwdkslrxbcqtlyhyrwudmsjwvahwymzssphtxkzxwelgftfsojihswfpubhj..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "aslzhmhljyxpkqylnaptmjnzqfzgecitaxgbdtabuphvcklpcwezcooerztegizfptwuuqrfpysilzdburvtlnahnuejopyyzvybrjvtqiicwjortlzuqucwnxtepotaiptnprdwrdxtdafoadrnef..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 889 { : "asysgoxyopugxwzwgixhugarknamtqflblalriykkdcitszmlxkxqyobcsjrlbnnstvuwrdrxndaovwxwaeijgzmuuzoggelcptuhmrrmmcvorzmzcuhmelkuituuiupzrpuowplemmpwcjovbfxpx..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "atqjhtxnrtmgscmtyyxrfhxucgovygupiseboxxgkpvxsuqqijkmtraqznjhdfwlhaykrformlyxrvaijuiafyaoggzewyequfaunuwgqkhjfmyhitltoelwnjbcfecymatwesubiztnbytpefwjyi..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1010 { : "atwwvapyoxwclpnqtrpivtvjvjqxnfoqiaaiaxciyjikyeulhwjjosidudeymrvdzaodglymfqrvoeysshjsnsmmxdnmyqflkydiezplbmlaxadipatgfuxyloelwqqmgvlqzhxigpfeilyirdrmsk..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 856 { : "atxvszfcdbloclhlbsomwdshvwxwikdkbtyquccjagwzdzbgmyqglidvzlnhizuzezxdyvntwusdgnamiitizpukrhyhjfdvhrmkzxfjwryigweumjyzytqbmeyxmosbhqaxeehdzmfekcsrxdxujq..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "auhahhtuzcwxgfumishxuucjrwmkcsrfgjjimlbvhnondacqqjvyftfzzkptgwxtbnbmhutbslppssxstinljpmvxfesbiaxlfqdjpofpqiozwwpgaqkhwipaafnikwcohjpeomdqtemthagyrjivh..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 958 { : "ausunpauymdwkheehnooelmmdsbctcnskwlncjjxlnwnkzmozxwqhtomehxwqbyfsvxupqcrnqqsidecljbnorwpuyjteavwvbeckymiapdgizvuhtfctxhfzkrjkzdewwhprdcizadnjeqrewjsjm..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "auvcckfndlrzfyilkshefduydsebfiquqrcgzimptlnffehbusvoomyfwfmssybkpnbvhehfbifdnfsiitjsxdbcfjkqreudpjdjezcurjuoevqhifinwiiayllnzmbqocqbohwtovgawdmbikckrs..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 949 { : "avdyljcqdwenifyrddsjxzjslqsfaqhzmrnjdgjbfuoeenyelbxrujspzypqdilatcaysugpvfzmuefwzpwiizjutlmgvqagtlyqcimaftcwbqfxghlywahigoepwjgoposqebhsamvxofzxdhphsv..." }
Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 978 { : "avhyfiynqguosbmnlvvczxrhnsfyzqyalbielwlzhezpupjgyfyvfysppckdhfmwurilwmhrvtqsmkyplwkuxogpzurmhbcuwmkenozynfdvfqysmdpaqmvxgefwmgzuangzqicrhckwriafjvlqhk...
} Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 981 { : "awtahxkmyvvvlspzhuhrgbymypspwxmidbjlrrmykfxtfakrutmjurvjskabmzowyucyfqcjalarkxjsfspvezdlnntvocnyltaxfpxduzeyuvnfdpgbewwnkcxirgbsxfjaxlgjzvldvgjfqvdile..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "axgzfmfzeilhvjmqiadgcfziycrpnrsmcswpispvwrvjlmgqindwtbwqgtrylajcmyiffwhmjnmvmmrhcosayemnpozszxndysyuufmhwqakkbmssklatmbnatynjhnhcerafliyguspekoehbpwml..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 929 { : "axhukgeyomfwixdtzuqgwmsecudfdxlyfzvkqkivihhufxotuughtzbrilbqysbnergrqgmrcqsxlrbxcuypvgqjhrddkbbkhhbsxubgzwwrbkyqmtvslpiesvwcbzqsgkypvrhqkjlleaesxshccw..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "axnzcumdgkbbjlvqzbnytuuiqnztlcjanvxqmggqxnogeuptnrvhjpmgnvrphikyrmntvvnwdaktdpxnjvedlrodozowqainmhxxtdvbolxvhzyklaliirpgvivmubxpxbhyjwwkuagcfkdrtniwvk..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 923 { : "axolyjfryrupddqrppngnqhmlyuezaagecwhesgjrnevjpgowuoxxrngjhebtzzkkyoarxvvzqywtbumqrjhyanshattoipftonoutxaozpqlyblwusfossneaipwiurexkpcmkfwlfwchyolszfcg..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 961 { : "azcoblwmzikxnkactznynbzsrejhpfcstancttburmhzwsgdnsaenovfrprccvemedxiealeipovrivowshqvaxmkmybcwqqecwxxnmmhtdmgpamjalpjqzvineiwfoeyecsgkfjfjdhutqxwdtowp..." 
} Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 840 { : "azilvrervfjvlhrabamxfxnltuyhmxipxdnwucgxxogotfpcqiprzykrktwhrtshywaphydudjazuerhndjyhecgcyccydpdhndticvfsjztafgyvhxttrcqiaisyujpumveiwkiohcghwwrhrysmy..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1007 { : "azqbzwybgwiludwdvpplhcunkbzoiqpkrqgaclnujyohtddnnfqkhjjzywzidjayopwxjyuwktbxkzpsqesztvbuscxjottmqdaleblqcshjggbgrrwfdyrojsznvsghsnbddxivdfhkdbnzubpyfm..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "aztwbemmfnwnjdmbvsfpzodbbdoxqfydabzhjhrvutzftuecpfsvmutwggnfklmeefcewxnltqvikhckupopcelewlvbrivxpqgevdboneqiawfhcoxxmwzfoqgszteephqmhgskwhpdwazglnvhbk..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "bamyshprnxipgdqxdxmclephkqwhirrzdplhvonvkgphrrdrjjnldhtaulknkxkencojepmvyjkkwqdteogywmpcfkzfqefzxtuolbppvjmhkzliakxnzvissxdqsiljpephoizqkwzfcmrgdddbyf..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "baznosrcgsnqhajehjbpbowqzyyutatfvqnwtuijzguzdmeotwsdnibqlyqiukzypvhssvzfllbnptaoyspsnhjdcazvpjgvqfamgfbrcxdwbavhhglwuiabbzjmbdukoxnuzjpnutauurenpidiyg..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "bbckhnedrrjccmueiuunsbjhoqrrceypfghphsrpfrmlaqfwxbcbxmjgjintsedloewotdiukocxkyhsuffkmgcigbzlqegnvqvtawofjwkqarljgkdgjpuamjsskcyyyegekgejnankjkmmxqwxab..." 
} Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1001 { : "bbgjnzugninyfvtcdrmspocrtkdmesnvmdqzfrlmjhowywmfcmxkgamjiiblhdajitqtdmypuqhrdjuljszhewbxiduwyjfkqxzrxyxhliieftrczhctmfvztgejqxzveiqjcusgoersrtunjszmjj..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "bboxdiptlwjbktsaahxugmrewcdcclwqbfmgbrdpaxiwrebgitcqqgniutochywybvojiehcipcnstqewaybhnnchphancwqategspzngzwipfofaxdlfsruccjbktotswitidehjbklotzzziyyru..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 947 { : "bbpoyuyypberfopwqjnmpyavmlkjrsjdxiiiwaftuuoafdnasireweeppwimnlofzompxcxgeicvsbqeohxzslnamdyqwkokefbwxebymvhgfokwjwzjewnhrazlvhrlfbnqzhqvetbnntbegbball..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 840 { : "bcdzycmxuivikbhvzpvbndnhvjgogvaxiecqqfosmcypfplavpzwztlrfuqxpbhpexelhlmvlicfrihwyvsxkjmigkatudaefcbcboocivutyiwluwxkqeajwdnszbsmtyyihbdwlsxwgkakolyrpb..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "bcjhfbosthydtuwmqdfgrecrwfahxejmnuypdtvqwfkcqlbpwmchlpphultivnaerujlgrgmxewptllfbbxclaxbjjguseycmiqimqjrlsmgesbojityskltailmfsomigajvtlluyvczovvopepfw..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 957 { : "bcoqoeqiewiibobtxafyplqfeycgocgstlskaxmuxltkmcitqzdzcdrakqxrbjqfbnxkuyybeiujbiajyxjjtvasbhflxbvxmaxxefqscrjwnyvzdgvwovidohudavbyiyaethhvnabowmqmkrahrf..." 
} Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "bdkohzkibmniqmlfexacczfjciaqpoabylfydymphgonibehapilaqfoxpoitgrroszeztrwleabfcuorqptfnsiygxfscovuktwjrfqlgluhghtxhpxulkhjrsymvttdoadzefludscpjrmynkhxl..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 940 { : "bdoaccqmobfbcyeopnjgwcwcrpqsphrcmzyjgbcjwjrdmhrujsxztolyarlmteakprxxpkarhwhlxhrlqxbykmehuyhdgzbrkalveqalqrlclokjrgebynqoohfataiumlbvamppfputkayjjrwunl..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1000 { : "bdteijysiudxwibzbdndlfyohdiozyxxqcnpmfjhdeapcvzxzvssaehngyrrgixtdbcsldpankzxcblvbodwjdonmjjyapwbipglxiaryhettljmggwignyugfenruhlmyefuguxplyndopifogfrt..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 856 { : "bdzizmryuotqenviyqbybpnlcivyvstjduvygggltihyqqcxbyezatwfyhnbuhvfirmhntcmkxjkoskbpdlyloeskewgfkxhjkbmvpyqorikxmtadzicbrjntccoljofwbibqlunynireugxhyydcu..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 863 { : "behebljxbarvpdznljdqnyqoxwzsesycgxunwtaxdwhesmzluouixjxytgokficlktyizivochqikrfdwimngdxtddmmzgojzllobafhxrqbaavtrkeokwdqwkmqmaypmrnzedtvhiuaihzranknoo..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 839 { : "bfcrdiwdzygrzojdhkjelzrvkqeyjynymjzctgaivpdjxhbglyzjnptobbztvxltnqkfanvynlfhktvrviogolnnrzhlfbntrmkecuwsewbsoldbwpyujnacqvbpvqfuelqvlhrnqgxmvjkgssqtag..." 
} Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 963 { : "bgcwucnvcxasuhatntnymofqmcgdlkfgkpgstldybdrmbljuwzjqsppjskrytnxqypvkpethbzelukppiypfpysjkwjigpplmrjepwjqqorkzyeydemvjfedbgpqlwsrnqwhecceejqiqsmbaqvwrv..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 999 { : "bgexxnpbpsypjpjweygapgxxjiysnadxldavygbprjvjotagltlbyahblglcfhdiudedwuftowhmdqfmwbsarlguchacmiadwozblfzowxwgbjolnrqyqsqsszydwzrhcnhkqtfauvpjkljacadhwe..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1009 { : "bgvwbwqypxzxrpliupnpdeuacmbqjaxdfoiwajfsdaksbwkllcknsrxwrgdprvbeyuykyyyycvvjrloqknctwaiibukfxmysvownbdutgxdsqfmjdebujewskzmjhwoxhwxfuhpubgwuiwqnxsghhy..." } Fri Feb 22 11:37:09.961 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 955 { : "bgxomwqrdnilhyacaksajiqogfqzmuievusijlkgczzqzjocinjvzdcbehudvkpiddkzxebqwokjjbhvdpeohhpebaqjuuawscejfxzagpwceezyeaebcyctdsslfvnocqvdwyrwwgfzihpgwdzspr..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 931 { : "bhggenjwfeeuselvbjekrpjtanpxnesadjyumtxvnvnxegpsoictdgvahkevhgdbhxjvzbshynotkyvftlytjrejqryykhxbxqyegblvhebouzsdtqrtillscbroenfqstcvqimruewrnbhcykfkud..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 883 { : "bhhezhtrlzkubijnnnmldfiubdemuiaorsvfsjydsuomxafhvnzzjfirtpekdopxmuinriolvudbnraolkmpnhmcddlejmtbldryknebyqhmzlyvmoyezgeizonjxgywyuvnggafribzgotjkrkxaw..." 
} Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 988 { : "bhlstvzsgcihidghnlkutbxwlhpnknjnmdbhtvetenoyrlbuxoimodqvvzjihigagozdwbgjlwvvjerrukcgishbdgpwkhbrytkbizvzgcuwpkpgrosuambnzafhegjvxwdovkqcpfvnxsmywsrkvl..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 820 { : "bhltdwqzncnhzlfwjhbeekpgwcpsopnjeotgfbakkifucffbsblhwqorcxhwkgfnjwibhpewlsordukxffesrwpnrzcfazdzigrrlvpnelrrjqlubwyivdidazbiikmtpttfbpuubrliknzhfabrxb..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "bhyodpepnhngpqhzvwvamlicosqtzorrgorimctddqyihoclxehusaowftpdrscoglbnlrxeubcibjyzvwioxcqkfskvrefzworpvcjjocqchsbioflzsybfekeeuqpyixbkhgkfuthnhhhykyizuw..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1005 { : "biguukfkvvrxpjtenqoiydsobnznwkemtbfffmlrayrutmxegtkatnlfmapyrsalwsynmftqcezlgviqqfdezxyvzuqrfphupbqhjqhnwoqbhexxvbinekmpvscpzjmhckggevnlerjeqfrdzyjaif..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 948 { : "bijbmpfgxprcyhddzdhhbfvmvbhgpwnjjtudtpucgzsjratfkndlozueyhlzxohcnpspjwfhybnrutosgoytvjuagwklrsxebndgjsqbsbwfktlsftgwwkevzjydltzsrxyjpuuqibpylpvjamkvgk..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 858 { : "bionnfelqqkybuqmzrxonjlsebzztzibfaxmnjhmweamtmvqceoyjoajpbzqlrmdyxpdlnfnzxqypoizrnleosoggpqmndwbvcxlusqcfzthoqamjvfzjpymyhlsjcbupnxwqjrxzmxyjoigayopug..." 
} Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 912 { : "bipdlecwghqlsngyphgwywmdeebbwdkkbeepyxylzjzeswovlenratyoblqhgltclwegfqekiqepzsmvgrmvtbfcictgmvpsqecnfymrwmksppiunoanbcdedztgayabfvhltpyfjdzcjhaozjzrmh..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "bjefgogwiuqjvomcjfjfhgqeyhakynrzlohvpytjqvmjafykelkyidbscjjawyerigzkrzkkqgxqxrysatkfedtslbuyrtdtzvqzepjqfmgwdwchbllgdujweetqaxdzjkzazbwrusejjxvkzlyuox..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "bkivqykiuesvpnbkiatfaexpadgyvxanyiyfjpcjwvemhfouvhiiwpigonfyxucmdykalnhryhdvrajtetoymjqemhlctgdbfwzmyyesaemocpdnmtqachoesldgmawjvhwgleerztwtitqevhnunh..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 850 { : "bkukfusytmawcyzjcldzcgrrinnuqnmghpmhzqcehkubutzpwszhzthmlaniccnrurbhaeyljjehyybipulbwvwgzbxukgagdcxlxfxmqrjmcqkmrtftzuwxxdpbebprpoyxlklmkcyrbweoibjbvq..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 953 { : "bldcicexulfdrbvtevqpfduizzrcoipbwmrxybfylvzdufrtycbsncztehlamqpnwumuqvbneoyliodrbcvsrcgrmpiwcomeksqxjbqavdroayxfnkmjbvwuwgvhdlunfyhpbmnmposiqeittzupax..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "blgrnhcmweiyoxyjgfvekqzdjnvyadzvutnwvfvxfxzscmwwotgpyossphrvephjoxwxdlmboowbllviduozhuccfmyehylrluckfpzjcoevsaqhwygewqktqwanoibueyhkxoahlzmidkvjftdkzp..." 
} Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 917 { : "bliwmqcxnwhvdpvadidipzirngiotvnkjdbczvkmvpxnbgjdnoibgqtvmfytcmaebnqxoqwfqirfbsevcjqnseatmghdbthxdicdzbhfcaaletkkbxrywyhsknxfcffozftjcrewxxrtisvnbkjkmu..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "bljwqmystclfnkgstthhvobtboocrjygaxuawljzfyfwapyryyshbpbsukjpaxershcbnzdrovahdwdpvpugrmwfxwoohrefqplgnhaafozqebxopzgbajmyemigqaswsibappjzcjejexxiwwxtik..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "bmblhrfrltkhciqgkhqmphjjgqbeuospbtxebezfomovqoljrmpnmantjotnqvmkwfpwtkideykcghfgeiqbpagnvbhfetpuybiembvoizvvkfbkfbaqmrxnxqelxhlnkporgjuxpnyqopvumahrtd..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 969 { : "bmsoibfzvnciezhszwsqcnztkedxbabghzdflsegnkbedjwbsdhdxtbhludwexxccuckyhawqgqmscxghvzjtwfoiittzovpccnjbdswsqhwuewzzpqxdicrrewyhrkceyfbouplpwmkbyrchwbwgz..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 861 { : "bmuferjjwzstfbwqmuqvhflninwevsacujwchirfzuhcpewretipsrjwfblcykljodgqycchtgjxawlydfvkxdluvercoscrmagnahxrvakfujxggapkglvqervidttqgoooqefzldkfjsnzdjpame..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 935 { : "bmzoniyidpwqvrikrwlxhjcphzihstlmgyijtrndqlmcofaeadmrxazqgtscjvauxjatjjbcbxrqcyeigkzxfaemaiwrohzttiijtffyunqhkyikhwsqojrdugayuzqlxrnufoiuuzpykuofnwqxgu..." 
} Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 957 { : "bornxpexfymaebvjcelezspyvouutkjolttkznzefgoblqbhfpzugbarrkacammmdkcfnkjqnepjwixybzuspygbfhdmrdrjhfynmypuzxldlhphzvcrwpkjaqnwflkrljpdpvjwxebycoycqahpvg..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 826 { : "bpavxoplorycwzawjbhgrvpucjhgaywvvyspthwfkpmiiigqsmelferqwtfljsofdajxingxxypigvrfmggqyadmmatarlqzrtkrwvomegawqlshtgjghoukwhlnnkbizbzszdpdzadntswqljwjyp..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "bpkinrsaqpvyoryoisankjtgqwrbvgzfdpzkxjfzsytxesxbwcngjgupqehqpetqcthlruusrepjbkurixpkyygezqjdvtsccjxiqtjxbgewubdvlkcfcaaucppbumyqhqzoxnxniywtemocsunmfo..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1004 { : "bplboqcsirgxjktlpesouxmzfaxymexjexhngthvctzjmnlarrflmzduldlcnasevvsfeebyulhkarfdofuwqvwraquoesvlhtlnaekstmyxeniylradsbcxtgxpsoqlqbtntrgceqhojdfthxzeis..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 835 { : "bpvjgyqmbtvjsunldyhylmdqapvliajckbntlpxcxheimzrdsqhmpvpqfiagubowlaanpxdfltbmwthzlqonohsizigpcvivvkcrvjlhwwxxlsawuaaaxbmtsumxqbsbpjtzhasbrilnrxibmxtqhc..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 969 { : "bpvnhgdyfnaapazyrpuayvpoksusfwiwqjtfdqbtjavadrcklgnbrlkswuurdfgjnnrnamdabcfnfmgetupcgveazxddgeyfsjcayyrgvrgcgvieiooqdybrmoofykqmtgqzzzsgqvidrnyrkaeyvn..." 
} Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "bqddqxlafksauxobgnwipeymkccdrfuktmfpwcfphgmwhypeqekjvpbssvosxfopkmerzfcxwfspwwolxtinzfrlfkbcmxrhgkylifuaiqgxzrxwmzbriouhxpkgflhmswprzmmdjtybtqomwjdkur..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 873 { : "brcdbxvkakqtniekmlrpbstazanigwfiwzjjzfuddtvelpwkhrhlrcdbyrboepyzdkmkjjynzpzkvmmltzsenwxyxlnixtqojnizukiwdnrddhqadscatlziwbhhaolmuhevadycwkzbaewipkyjua..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "brljtpwhqetwbbumnmadjpnzdskhkaacxzldjtfthfcgonzmtvhcpmrblcsbrmskamgesaatxahvegkhxiclcsrimovsraplufogvjcmteektdvmyucqguxknyxiymotpwwqhwlbatfdpadnrsoqwh..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 958 { : "brptlmnveriitgtzzputfvtumxtnchsdemljcbjjgjphncxlcoobojxlxliydbkoghfsbtnmgloinddxarkuygfxnzsfmymcrszviolpceapblmhzpmwwgagmhizkfitjjaqvffqewzansqypafctw..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 902 { : "btzptijydckebjhmepoeotmebbsdaorjuktggevkdcylmorqstzuplsnenmpdrtifturezmbjqfufawtwvqdxdmkezlgmxkxlxwgfrsmcwtrxdystrneuectqrvnumerlhfqjexweflmkawzalbcuw..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 903 { : "buesmirucwbmbzolhccnyxdzyvgtqepbfszipcdzlkedzqcsvnasteqwrxutlijialooapbaeelhfzzhmtekjfgjpeijfkworiizvscqtgqvyybkohkjoysammkehvokmenpohbgmfgxqnhdejhurf..." 
} Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 872 { : "bukoxckauxfjadrcebrwwzexkbdrrxygguhhuxfgpwtzdpwfscsuzolmjfikayogdyxuxnpdalfqdcwuqglbhfyzpffqtlrgiodspsirfixndmgbkchymsttlndowtqkwmbaagtnsnfyeqcbqswaft..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 970 { : "buozlmbyiqpsjaosolyphhezbgatwfmnfvhvydvwyljxobmlxyeoourvpsbogeokngmtwryubxxhuqddaginzihasnvawfavbuyxisrfhbwttxjiajowmnhbolwrnclsurgxztllhlpqmdlmvwuiix..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "bustbvhcbqvubuonofqzulygzyrnfqpmmtbnkapkdsiemksopzebkiymakvhmjawldrpcluzmylzaumljrlwqdvnndqgdibpfmntoisfbwzlslniikzqxivzcraoqthnlmssbjcxhabpmkiqklnnny..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 923 { : "buzcdtqqkjyrdbxzrqydhecbfjyklyzceqiwroytoanqjvqosoizctqdeohqjswmsivmfqaqbuixybsyvwzicbslwyrzfyttuamllwolgtneapkiilarxoxyhavspmhnwlmlshzxuqjirepotuovvr..." } Fri Feb 22 11:37:09.962 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 970 { : "bvgwhaisxvywvlknitosbggmeasxmhhyxpjfqokiuocsyogxvvyheadntqgrmilynzhfeweneqczqbwfcpniyqgyotymkiqmtbeaaxjleasiqsgyfmpqlfhklcieoqdbzpjfiysdkfbuwibnwqrnzc..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 975 { : "bvkmdplvkdawxzstqpymonstlwptmotcynnniotbrlfxctuipmldybopusaxxirrgvnhofyujebpoamolwtxykoywldelypbkzkblpttimqxeangdktkpthpncmgwipracbpidkvogyjqkemxqiqos..." 
} Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 899 { : "bwapqnvjlpcxdgjpgfesbxtzojkraiectqhznuutsxgfjjvvvrgmgqmcrhjjsyxozaxryvdjgxxcaykrauxusfainuhsosnprcscakblledktqodqignizrgxranfhvzkmatfdvvntnbkypwdovtcy..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 888 { : "bwbubhztpymjvdkfnbixrglesekukbawtnuznvplkjmhkexrtjrlklhoxxxctclqnpdogvfghbwpdqbwkgqckuavqabhnpmmqdeigezyqrbljwzoorxxgteqqmofdaaubygrqayejnxipllpyjnmod..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "bwiqcqyzmubktvcajrklgjpzsgiwgfvamdxqqxyyzhwvphkuvbctaqnxkfgbcbxmmcylvmmghbxoesyvdkdqghkfbttzidmpdryfvfbbbrkwabsudtcdpamqnzrflryecwrxupsskvuspulnecycte..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 996 { : "bxjzgfxnrezlzebqnibbmdwqrivyuufjbbbzsmcvdcelazbozjvcdcwqpsromcmavjtcsirinfatrcttxalvqcsusjiawgxuivdgsrymqnoxshcftxafsnwhvkthnxbxnrskthmiicpkobkmcwndtm..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 877 { : "bxnzdnvtrocozrjcaqytvkhbgcuucwmsghqkcpsvcuxyydreljegbwgxtpamrrdusjrrbmbiylngcldrfolpxksnbxbejvormyctkynwlizigoiboadfotzvynlqkoopuvencodqapeyxzfftsmmsf..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 831 { : "bxuhzreritphbojklykibyypesgzcicxfkneysjlvkxqnsmdkdfaycmtxlabvtlygqdhhujbllgdvmqxsvwixzqnixvsyxyisbdmlbtxsbyosqszpujuvqhofyczcagcnqzxrnpdnylzvvcfsdtbmf..." 
} Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 845 { : "bxxqtlgxryrkpkwkvdubiopqzctdugygguzrwnfmpfdknxbnuytddbnpjrlviyeefrdkfirmhweuvazubcgmdspqubdhfcjqxepoaopqaittlaqiwpqoqhryinlzvqavwrxipausowbzhlyfcfpgys..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 909 { : "bxzncelbomuhlarhwuikojgjgwwlgmnjovbimrdatbbymlvtkbcjiwjzcliidfzwnfggjazhqtjfovbjtvgvqroibivgfronqwkjuhzfzmuurqjlixswapphdyyflrpsgqopvaukaflftzlnferhzz..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 882 { : "bxzomjiuwpnylxcinejvcrdsbmbpehiuynnywmriatanhedxqlewckkynrnuqgqhrijxjebbudzzuahxabfcpyttferqgdwoibvmxqxzkeznxsfauzfqjmvcxhzbgvhqbxzzzdzadvlsctvqctncni..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 892 { : "bymthrpxnnvojikskxdlsfibcifnqdureklgubcfdhgyukkfljwqdyquwldhpxdnmtbnetgrqbqhuvxarqsphassuwyytentjyhsjjdxokphmfpqqpagauwyzpilmtlyiwwnczhmrvweoowpxzedwy..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 976 { : "byqasnixfpcdykutfixdkjtyztfsjrevjwurlcsnhexplguqedefxjdanedvqfohoyurzqffivcqgkedxzcstkiuoyemscwxoabcfqvodsssbbjjafcdecawacxwedujntyvwuabvyryjqjofxmnej..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 834 { : "byrrutqtjvsxdgwjwxqhpxpndjgvxwwtslbssjrmvseuaanmnnryphuqyszmyucgqvxfoilshueqsycjvvupkxsxasjixfwdcnobkpzzkryllqodolzddoxtvszyobbufhtcmwgmnheipkumcgsnfy..." 
} Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 933 { : "byykxrrcdmacbsfttxndmljgreuajpofzcfiiksqjmrxecpsygqpqmgskclbzcfpkozjloxsqmxmlqkcyqdqzgpuzynqmcvejcgxbtrvuimmxxgodtealyzffamhpkqirhpaizquousjyvwcuhqelj..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1006 { : "bzehsypdvjfdxcjnegrtrxigstxtcgtblrdwxvzroerwzzqjjogfcjkyvqbzwidzhmydjievnkdazghgokbrmmwijzyyyulaeillilpmgxuqgrmqxilmhnhetssanhlvuihdptbdhtsrxqslobmyqo..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1000 { : "bzqviccmytzkbtdnueeiimyvtdhettrcdjqlkzzqyqexjkpynrdooxcgxygmhznufhgpadwzqpsmenfqsfktzgevbcvkcqsufghjxhkmkohidfxnwwgxuqzfpsqspqveganupxroqrmokgyypaueqw..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 923 { : "bzynzesjkzrjlvnquamespbekbjxweytjvpjbzrimfwahqfdkkdgdfrohlhozjptlwyzqemixywwhitrbjfwyncllyftbhdtmdtzqmoebnyhwqmtwcldvaixsmvwsdbbsypxnjrfhjhuabemeziebc..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "cafjrbutscibaemgmylslphemngkdmwrctckzkdddnxfssbzmfbeviwmwqvklcoegvaxdlxmmchkjevcjmngccayyouycpaooooufusvqyjdjccaroutsznitnybhigochbyxzmppmuxssltamqpqu..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "cainbtaaomutngnfblbuxkrzayegmpnredyyrekagzutnpzuloefzwfarvbifjfrskdxuywhkeabnsnnkygdnunrepfgtshwmnxvudugdsowiwoahthijjqkuthzspikkbhsybblvxqzjqyzqzicbf..." 
} Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 962 { : "cbfzupjoebqqxvczfhgbfzpdyiktzajnzjkqjczplppfbvgnuutwxnlfelasfdfwvhcufasuhyuofazppgyvnrdygvybpdgkbzgbpimtwapyhhrxiblrodaveztrtmrcswegateuczywomnrlvjfqr..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "cbgzezejidjrujayuwzofuymhratsrvqokngscvczqhvnnyxdruxruzigxkcnskzogbgeqikjghlzagzdorrfqlcnmmvburcdkzcmlibdyxqpctgwfftlshdjutgchgfmnjkdovwfmmdnhsbshbqxu..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 876 { : "ccefygsxxrhskfmiyaazygslvhmxnagktqhpovelcrfkftdadpwueknwifizvofyqsktrwnfwqzsgcbinfetjuzbvghftbwzndwexaqjqkheskeuxqqfzudvhsynswxdflwaljsgyquyifbehhbdce..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 960 { : "ccjgfwakstpjffldxionzzvpijhzfjrkhgwoahmwrlpjndrvbsflqlibjitylkcqswvrqiteyezewtkzyzvxuxlceqcwajcytjnwegblnvtsluchbeuglfhqlgvipygkzmlomkjnurydeyqdfwpbld..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 847 { : "ccrzsgdvgvdkytukymkxemejjgnwqcuvlbfinpowcnyxvyzjuahuxbcpkmpmhvnwkedsxceqsfvqsnacngpuobpdwmhxqrmigjonfpnwzdptlxdkkjwmgihofjjxwnllhfhqgyidtdxtjptyrhgvot..." } Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1010 { : "ccxibpknpnvtzgmihcdbzdgsnpvoxhlixsvwrswadmjqaalfxetyvvthmlpqugvvozhlkcnuhlaagfxlauxzyfledyvrdsfmncjvauticvgbixatxifnnhaftgaxjhoxvqeycuouynqeqiorfrkcma..." 
}
Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 930 { : "ccxuqdyimozqwhtweroqnwxhhfotfnbnwvxfsygpxwowrlzwudvnqvujxqykmxbvvxhwdviaskuruoveixqroyrgaktpevedojapkymrainstrxodvrfkygzzzbvunxogzjwepxpnkaczbrpfovsis..." }
Fri Feb 22 11:37:09.963 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 985 { : "cdfhesyowmehxllcbwyfrueopzifbbaohdrjmnjjbzdbqapjvycnyqezhmhxsbysvfynkdbuwcrwtxuenbeljwktvfdovvpezcpbcwevoonofyijeyxeprylsmhmjppaqmmaotruyiruxsfgkifrdv..." }
[further identical "Btree::insert: key too large to index, skipping test.test_index_check10.$a_1" warnings from 11:37:09.963-11:37:09.967, with key sizes 820-1010 bytes, elided]
Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "ebmhicvjijmucxrmybxbasamuvmhjmheqebmofoxizqoselnhkpqooedfgkvqmqzlmqjdewvmkqfqxumadwvuzsdecqjxhtpipeohczezzzmmgnyqxsdnxsgfhiawkvrmgtlirwjobmwyxftcmkzlu..."
} Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 915 { : "ebsbqazrwrsqrnzxgsdnfzdoakzsvqgpmofyqibtubcyfpyrzacchzukczbcrdoltytunpytiawjyrmsbngxcecxyfukrydrvbgbngivtpwdadqvazeslphegjepzpzqbpwuzbvdbkgyrrksdyssen..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 927 { : "ecizovrfhipykbdwzgvtjmhzsdurasgaxxpijafiprrpdroeweosmcnyfyruupwysstqlnemffgcfunecgxtuqnykjotyxmfhrmisrkgughmmexldqpgufyifdwbhuiouyttclksmcwqrjhyuqtlwp..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "ecsiqvshurzzmiklenmzxvgexpmxitklnjufefvqarhpahjugpxxuiolmrtgntjsitwqsbttydjsqtkrzusjhroumnodpvwfhxykfpawbcvwjeyvgmxhhmpeyndnanvymydnmyqubdkrcbeqiosrok..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 846 { : "eddlfxdrqwqzuvlcsinmlcmoagbvyulbispuveuniniqwziezjlceklrdtwuhzaakwbubonrmjiulwlddytpdahryxxxfcwvdlqoljsweaqdrkibdgvwwjmtffucjrykwbxtozsrywltuvjxpgjgcb..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 870 { : "ededmfdbhtznieqeavogeiymzwnichnmgmgspuiidizbzgsrlssnceesjeuhlhfsccsblatxtynfshovopeismruoiuwvhlbuyvnzltdhkcpiytojwmeoyxbhzujipkzgqmvnqyhwbxdeacuujybse..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "edhrprghxlctzlwrbnlxgenpleoiyhtilmefhnznaqqpmckfjwvulbibpuebsfgzghdyeirmsrcjpsxqicozadwjtptyhgkfufqazgaglugcszttlxkzwjutfjvusjdiyvrjtzvemeimpclpeupcee..." 
} Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 882 { : "edmgjbvvkcbnclaqaaplyussgxztpzjgzvmeviwmsgrzgvomqdmxnubhrfrnxxwmavsxjsbyzzcssinikuuqorkkeipgbyiijzydwojylwhqkmzmkbrmrxusmqpmfelqbqsysizahdcztwhbqrxqax..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1005 { : "edmwoqhpcvyopqxkvhxpeyudcqrqofmuofylwjchmcwzwyknyqffrdysvuhbkczeciadqoekoblapwnsqongrnqackozeuiblnnwnzwfmzjmayhinlewoahfnqbvfjdhcjcbsosxltbktqzqeecpgo..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 854 { : "edsleodnsvwvhwtzcvnhxrtvmvfrpsswbiyqekqmmdocgrbfjvrjcvoolywlthrmvehuqdfwumrlnozrzusnwjspvypzkzoiojlgjpxlhjnfvflgwwduqlvhrlabkdmjdmtykezhmjvlfrqvwfbddo..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 833 { : "edwrxjxaucrqefpemgbkctweytxasccscbidrsvkokbphazrzdcwrixvnvrdwhlmvissmpoihumxqcfnqlxwiiudcetoeiueuhzhnzuwntoexmynvoktyoykszgzoozjloaqlnopxbsfiyfzdfkvcp..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 993 { : "edzimfxwbdeozyzvvrxyrdlkugotcwlrlpsvsxjnjhknyqojyctfsjkzfkbzgarggcfrmjprstiijdvrcmcfidgdxrcvdltivpytbttancilsormnqvpsjkfvzxzmjhpoygonqkmkrptzlmntjxpue..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1002 { : "eebeinodzuqfshbakysaioebxcwibsbjouztwlgbavknnfvfsrqhxsattnfolecgmdhjhmtnkxzhlfocokbzstltafxymrroacrsnxyghqnmmvlaygdanbnqlotbkjmyhbrvygatyesizrtdyxqqjd..." 
} Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 867 { : "eegnhxjelrtiljgvrhijrhkkkoaaclbaegufnizucwquracpjryinromizijdeifbzxuopzgdqpdsgycuiglvcbfmnwgfosuyhfpjpytkkzimjkchxbcjzjokvmjsvewhmmcqgefabdqurvanmcyvr..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 921 { : "eehopvhiltpbtnculgwmojgnfgnqabktcknwssunclrxgepjlsmdumicrwyvwcgztginvdilhirvkelyeqmmwmfligtdkjtcmhkaycjipdikiibgeuhcwmwrqvvtjahfglyzfqkbhatlkcrqxiurdi..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 874 { : "eeiahkmoizjoltiblecbmpdlawqniofjepaynghwkjzyfpieyonomhpkypmazawfqrsabmmbkhpefkjqnrujvjnagzntnmetzngsozuresnxawyynclvuyfliajpqfuomgygzbanrqrinyuucmtazn..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 965 { : "eejtrbeusdlfsojusaetldlznysxdenuarrjbcatjgdeidbkzbwzyznludvxqncsdcjmnfgyyegipmakbutwlvpohkmufzzlpvajeyizuyrskrhekruaeybchgikivsyfgvxlfrubovbwdyfhakssa..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 914 { : "eeqduenkogclptpjfvehogklcgnnoxkynzfpgobnewblmgnxzouvcfwkejwxhzlfqjvngeuqvgqhcmwgawrwiyqysbbrocbtsyguspnnmjfpmgjsrzrmcnfrjozawdjflqjopkptlsjepoljssswgy..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1000 { : "eezoyjbihfondhvcernggpuhavouaxhblebnnsayeptsnkuvvzqyearwkehkdxofanphuhxwzjwsgveyfldtjrifgbdhjwxjghkptkmdxzxgmyhjcnaewxiwmsthkxdhehnrimkqrmflimiwsjpcri..." 
} Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 939 { : "efdygzkcexcjbvtnyyhrislflubygoaqofoqjqavwdrpnapcqulqessvmfuhechwgepfrajcfofqunxqrtgmezndezxqnpktolmxtcpeciltkeqgjsthpuoxvilpxnkosibolkznjpuyfeakujnuyy..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 928 { : "effsqoxauavmtfuogeejdjllxnmyokhiliwyhseptgtewwsusmhwpjclekhjaqfchqqkboexdkxgkddpbcnviwsswsfswdjcybqyhgtgiwwvcqtjhutihrmmnfatzqjggelxofoucvqmyezkczeaou..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 872 { : "eflmwjtscchkfoxggmevgeqzhquqfyvzlobipfepxyzssfzrmnjozjuyowmzcqabnhmrghhtpoykzcwnzrxsausnfzcdcdrgiyyxlcixyclthotvcfwgekixzjiqfufmrrlrfdwordoklddnfdbpti..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "efnwzwlcybxehuuzjytazkuqczufqoolkwrcdvplsbijozovlfvcwebekfohuvhsedxpbsmfcgasetlwxphqjrsfbsaednosfelpweiglufbdqbofaebeeinpbrlwjwhxsatydejonowsmsjpethde..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 840 { : "efstzmhpdthyntyypifccpoprgxdplatmdhkkchaxqmxyikyqzxmmfvougpocmvxotggmuvcpahyuqksowkwsklocfqomdavgxoisdztchahusvgysjovkzlswpfrmkcxcmdoztxtmnrzukpxvtgxr..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 965 { : "eghvxpbfnbktqfrhqsujrfwjyggpdqfeohtxijznjeswsqvkoydlpegirelrodkasuqkfbsryluugvxjritzxbgcygucuuevebrscznhjncsiquxkowoycbhiytrcbaujdjpaivxrracjaqqeyhclr..." 
} Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 854 { : "egmnwamjmltdssjdhhwwkninddsxycgojuihsahaqkgaiqcalzficpzwlydxyeyivjqrhsnhelwuyspxqjsfuaysrnhldnytzfbwifqltoyeatjjnyemfapuhrhnloelverprxxssmgixtuznavmgb..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 990 { : "eguqihnhprfvwrwfhcuidumazjymdkbzomiueaoivhcquxgyiuohuabkvqfotvgkwhedaupgvazkucshnbujzzefyeudyymbjtiueyvgczajvnanjqmsymvnlgjdbyuqujxzvudipqkzpfnoqvtrvh..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 946 { : "egzwxnfiipgmwfspqfpvpkrbngnydcwttrgwipntlokircozbenakewjqwoxhhnaaadnqyprqqpyrwajjbiteeomkigwvbbkmqdemsfgujvdjxueukzcapwawynvcgxgbhwojdxihvzydnapwhsgvq..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 833 { : "ehtqenqnpccafiirmkbhqmskqihswseouwzdhxoqlqvgpwbnggcmtafltzjxvboemvebzljxgxopwryejgrpishjjywhtsbnbtayjnwqdyljgaecupbpqgfqfizbnttuujicrfsnoustdspecmrgub..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 895 { : "ehwfgpduzwetjbxrhxckatcrparowsvkpbfsfvcgokysvnpciervkwamtazyjapziluixhgyihkkratvfopiqosumunsexjlauqrajnllmfnaayyioamvbzufaryibabvqbhdanrofkrvmgtmsjqcc..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 988 { : "eiepgownowuobzltskmrfyltcutprqrlknmxzcqoamkqyclktayyzlfpcghllmnxjdhnvgphckaowtrssjnxjuuaqtftcdvjaydekyfudwkkxqrofyzuriqkoqyuwgsnwbdsqxyvilwmikalunrakf..." 
} Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 960 { : "einhalnnljqtgtxrrhpzqqzinqiwwmuzskkbocxcsonpbwqwzulqnhhxgxpwlwextjotknqihejrictmxlyjaocazvwjihfzqywfcxdyiqexiorpiwtsygnwxclklxlfnpxsoqcoyupktlyewilpke..." } Fri Feb 22 11:37:09.967 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 882 { : "ekjazpsymopsjjqhimvkhtojwpdlrutrmqtaynokiajmmympqjhmnyunxtigfvkkftqojdguasppgywykkfoagjwixuoifdprhdulfhupawguvczzpmspriuonmxeeizfswppygixijkmprnpyppjs..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 943 { : "ekqckhhwpcvpmrvssnvsckydteygovifzgtbdxvhfcdvtbfridqizulapildddwqwvgwwhiyncbhlohmshbkuultvetlpgrlnjuqdqnvzvawatspsxufxdlgmquoytvnyngrfgsphlnmwgvimlueop..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 850 { : "ekwypvmrcdkujuvuoxhjdvlakkwmtgqbrbozbxuhufidfsvvahszmoqxoakjglpvpfubqenfgnfsdkyjiijmexvoydjtiisahcwrskczshspvkgoiuziyznhwajfmiswdhzfgcfrmyjwobhvxxdzaq..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 969 { : "ekyqvnrlcygvxgpneqiqrrnxdyxlhnaldbvusdwtsulojkyxyqkdbrbvgmtdijfoiimhftksswkblkqiiykcrnixztawpejjpcrqanxohnsvorkzsblznsdgpegrjgwjvnyfkhhbwgtlukjqqqqefm..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "elfhkotidtidfvnriquegzdmpbxmxrmzxnaxieurjtwnhvxlyyrdrygwimdphewzbtiygfbkmslqhpiyijkhamthqchqycioczvfpfppltoyekohwzanjfttatstyfigfmwvfxtcemjihpnmtmhdkw..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 944 { : "elktijgwwiiuzscwjvdwyyrufjzemmbgoxcxtwldhfwlopchllgfmagwferqnwvnyrdtbzdqawxyxqkjpeidgybocigmmwbkwvsiunaekrxiuuiuqnumxpsezcwbpalolfhbpkqnhtoabgxezqiled..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 987 { : "elnafvubtykemdohdrvmdljvrvknzicvolenaphieedulnutpgdcnrjlrzilbcfvgkroenjwyqcjloytfgycgrdhxjtndwljvuiwnsiylrozvqbiwsdrdtjwkmyszutdkdqbstfhktznyrbvvsvvwx..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 904 { : "eltkpkltxdmjqialwnpwxhbbcjfnukakuzpvetjwpsgsosspqmiiccdfqbnnpaauvfhkizdblyimqmfhrewwotxlvxljfdyjzphjgtvzcyuotizhdflupvwecsjlfzeevvsxkonxvedvbyjwymwkuu..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 847 { : "elzhknkggxxycqrjxrfuomyayybemujorladofsoolcpydcvqrygtcmabvtsledldhfcugvhnnbxffzfgmtglwuirnyrnybqbyvblrxqeoidqrqmcpdfmgznfoujowtzqqnzzjaljetrdhehfbyqzn..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 956 { : "elzknkqdquwvzvllxgrfscmdbqfbydcavgrznbugjrqfngwjstihsjatafnhwcpqzigjyfycvhuqbwpykmblmmhkzvbenrtrbujdbasqyefagshxghruwbtgghmrpvsrgadxhopjlfglaspmgfnmyp..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 970 { : "emkdulhtdriugcxpckpweepxzerqijzojezxbxudfwrchhibxdblleornnebivigrgopupytuzxghjilgvoelrlprvtbctxuypbnxrmgxofjzyhdkkhqgmxmpdnoyzqdvearooepqdihfzpulubfhv..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "emsecnqwhxgzrlhvxehcbdncijidnencgjecoukgaegcwnntkyndxqofshipnxueinwziolplfesbrcaiaeamdusaxhkxrwwgwleqpeisrhtakmkxpavcnkogwhpbpsymywprxvuoyioskarwqegzx..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 825 { : "emupwxtefjbvqpbsqwbuflxegnlmlpoekuxyapvbeecihpfypktcrvobjedrcigsabghhiflieuqvvzncofmklqzozltmylxsroefxmfnbjdzthobiyvaiayltsavtvmqdwakpxoftgajmrlyrmzbd..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 854 { : "emwfpfngzwkyelwjowkqnecfmlutbqdtoervbdypzhxnmunocknnikesmeyhobhwezavznkotgizxtozgutbvzfyuxkbvcklnsfsahjhysnrrglrmxhgnrjlofojjbbecikakbrnfvezozybjvdckx..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "eofoxeucyvbctqdqlueuazvmmnulujtafkkzzvsxxuqzglhkulftsyuithisimquqktiozfbumscntaeftweekyftywjmkrzzdaggsjxepvaaxxmbzejpwjvkgqbvkwzkpppvtcjiybmlbnmktsqjm..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 863 { : "eolgojmmlxdxwdhulbzawzmumytqhcvldobvohbiysfrnzjgatfliicspdjcbkzbkrbpptqwxyxynhmwmkocdvcxkjnbynwtubnkjhyvfdjbexssnpxtsftqfeoghaqrztevayvaqrhgzckdtxsuwv..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 886 { : "epgysupzsbwrqwnrcefjolfcyqzjtptpvhtsvzwardxqmqrntsangwcrzyieaairwfajjaamganzvykuavlgghfrujorlgtnkipanvmmbjxxhsrzundrejwwgbnqpmjwbpfslvklhtnnileaktaqrk..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 919 { : "epppmxbvecimjqghjqzplhhemlxxpiwhorsldpkywdkigmfmofrfamlricoapgcounkyzwrbvtnfxqskyzttjfojdfxogbiqcibhtgpsutqjlrdzlfsfuwfmxdsdusgqfihkezrtzadvbxnqodhmnl..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1006 { : "eptjebfevtwparhdupkzmzrcdwommddlfxwzrhvqbesnktgtupitlusxureyqsznunjdjnorvvcsdkwbdqtqpimxuyxpzleyjikxxdozilcrjiclmajdninabkatgtluwhlhxetngvarkxqrahhrue..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "eqbpfybzyoeeuwxwfswqdyimbadfxnereyhtjykbzhhfiigwcmsysihjjdykkpksabwzmhoxjpsjpemmwkrdibehmkkbdopulnhgjhnsmhwjakfafftjqmdigjvltvzqfizwjgoorpwejylrxgobry..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 941 { : "eqmayuzmernewnlsoafidtbqbfophvrfepipawbkugtapxriavmconxbvksaqqjkmxdbibcswwwrounewozmxphiekgedwelqbjbcankvhnvcbzcyalyltqygynyoaoavhusmsrjocrqmvyvsvshib..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 970 { : "eqpcmmmxpsnuagrsmgvotoqbcmwslcputpknvlaoptqwjjsmglwngzwpeiluwmcahjgpvhzgagdotmdgqmlegpgfancnqyapumubicxqydvgpwvankedlsvdbxznhmgrcmmkthfohicaqfsbsedutb..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 998 { : "eqpqgivcmcojkqbdzzcsqsfxplugracjocanjrnfpchgdlbxzkhatqmxouccgrhhlsxyvtcwsxvxghwobjqgmevecnfsydssuwbjzmiqnbjtrnlchdkrbyiylnixawertwiuvbngwwdxxukpxyxbku..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 929 { : "eqssseygobbmzicdipvaomvcypljytdbawzrojvibdojuwqnddfkdxbmqpabkzkwnhsfnajtvtmpgzvewwowxwfifpnwymzvtgsdgasunvkblkktkeufczinqcbariawlcabnqwailzmnqxicncxwm..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 951 { : "eqycawptemsgynlkfcjpcubuxlzsstpchjdvisvmmesclyhxsgzrbbknagzqdfebwpluuneflhznpxwtjemhnamnqridviehvpoxbkjtaqorkvswwagpoknpywjdwpcithlquoaywrzfaljfppxhnb..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 959 { : "eqzrutqfolsvbggxuaehmzqfyxyucqofhgrwabwbrqefsqkolojpyivfqyyvsclinfghcmszptqyvyyvapbmtdkvevkbmuuvtvlcibfwnjfzuagvqxpnhrvekfnjhkwmmjhgalccagicahibyipcua..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "erjbdardrpcvnxxjdqzjdvqxjuatwzoqndhiliyebpgmoqkzkheredtjthkskipifrhwuhyhpxjqpeprhuapblgksohdlcyjymjzwmujauemhkephcsacsgggzcipcusbcxbojpzssvimeyoojxzuw..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 860 { : "eruyrlxxijbqieikniniakepwglocafqbfjximiwrxfxswaarnxiyaxglqiyadeabbnsfnvvdknxnfszylkmohwztutsoivbuujidmbhlxozymbyvpuaeudiwflwbdtvpsiidmtqvigwkhljjcetie..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 916 { : "erviviyhczgoagbtaileafkzbuqbsfukagkuemqdoxsuwwkdkrlenfdzljerukovywjkeqkwadahfcearanrjbgegazpuimqazzbyvbcjlqyjfsxgfchtakqckewnvhhxruzukekbgjassfgdsxbyl..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 984 { : "esjxfktennowxnvktfgvjisxuhathgydvykesldqkvxhkqtxhrngjaxdgvmuzqraulujzjwaoxospayzvudjhrpdtxlemympmcuseyiygevfkgsnieibnquwodqgmvrqiilqyactvowiecphrbymyi..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 952 { : "esmsjaeicfqqyhctzyoxwoypfijeczvniwlppoitcmbosiihnktmflqhrltgkbnxmrbubeaifgqndjjjcbfgnfwipejppxejqdwtmiazslojqnahakpipnhdyhaioicdheqftvwplqilrhzgbnwijf..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 830 { : "esncgalbnrqaqllbfeeuoiwzvgvkorlcidbvxjnchxezbcuxkzjsnpitjuqnshvtkzqisjipuqfccembxidcewwjxgjhoxvjiwjohayuititeuhcjcfnogtkafpgkccjjxoszjxotpenmbpdsoktlp..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 830 { : "esrjpfjuhsumfggiskwlowbwvxqwrqymjurqxeiryhmuzetckceqxopdkgfftkvjlrxhhzktxxymlfpbhgqaxojuukerdfifbindutyyyonxmcfdutqhyvnmwypyqbfdmubujgbrglyhybupwsszpy..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1006 { : "esvuecaffbxcaqmqhhehtyulmwurwfbfopvwnynjbvshhmqrzpdpmnouombaqoyjwngkugjdexjnphsbjqeuhtfcbwbdswwxugitvrdqsilpqxaewlxunipujtkzrhweibhnhwwqfppnybmdemgufx..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 837 { : "etfhyuvmbfwyubvqbtzclhpruszebxgqhpnrnmpbnyhrgkyftfbjooqlxjdynfqpumkizgqxxgorbskytdopxytqdsxtcstxhqvsficcbndeqkhtsxjtdouqtvfvmewdmkoepvojvhmmxocakyfsmj..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 980 { : "etpdfjbsrfjqsjhpxfjdqbsqiuwticdvvkjsksujuebnmhwnlglfusyoapvitsjjnepriqqztmkhixkcxmexpjlmhtcpamrxonmciorpjfhbaoovihwdycgaewgevbbdvumxsjsqakxvferphgdtpr..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 886 { : "etxdzsbhmsutyejnnfujjsemmtgicbonzalfcbesxnfggivfvisruihymlacrvkzdihitqyvrcgsbxeousirrsrxruppoxngdvqllnyfcsndkmhnckhbgoelylmdbzogunqzgeknnjnaxxneqhfcoz..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1005 { : "eukowklcwntailiwcoaoeufdqujtnjaztflzzmudmnyjeyjjwubwlllsrqueenclmwcmatteegzaciiltakscdvrniblaxlqrjgkdfywnudekkcygolvhckxjappxbxccnyymnpfkgzxjojclqyvdi..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 958 { : "evgqppiqufhqntbachhroffhldayorkrktzxkgztlakpjnvurtvzlborzefwmncdtpwvdpulrxldvzshewwhkefgbkssxcilnpbcanbvtsotkrfkrzcuakulcmteiimkjkvobupthwvimznwmbjbun..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 956 { : "evjaccchhxwfkpxvxhywquhiqbnkasspnotlybssaetqdglzslxhcqcqcqhxyjzmamngvgqnjreaglxjygiolbrmcuqchmpiottkqysphcautkvtossjkzciiroukmwbcfmnztdxbzgchtfbrsqdps..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 899 { : "evpjwbxjfmognoxjowzmquludkzjmzqjnqrfgcotbvjydblofoabowsxhxavxnhqqqlymgfnzmprerpynrefydfcyjflipbnerhfrmtsmwdskprqmgooaqjgjzuzdsihowepbsdyqwlhuuzivmvtfd..." 
} Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 891 { : "evvtgwrusodwqhdtelljmiziuvejungkwsvmhzypgfllpqdsvlrvzrmwqsdfyoayfgdwbfeemaurevsitdohytdiircdabnclrofyfusmrzsmicdpnssbqxgyrgntnffrtocvqiefmqdctulaeljiq..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 984 { : "evxbjgjlopumiodeexnxsrhbxhejeqiteeeqsccwwngbuosxxrugoaswbqnnyawhdtkgluxnntmfxgdlkebrjvzlsgolhhkbfvnreezqgeyuuldnpuhwisetwqllhpbzdwmprrgtfzjonofdvzqlpw..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 912 { : "evynldxohxgiqdqokmtzyxjyqfzosoltxudyhiijmmxmtnvnrdnrhpodzliwpwwaruprcchspiuezgzsqkeffpbecmvnijuuziapgpqlhrymrmdqtcuqvyvwbljfntvajpvxnxkidmbxrnopnqcirw..." } Fri Feb 22 11:37:09.968 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 906 { : "ewphllabedzjzglteydnlsfmvawkfladssdzjclsicxaarplouflydzerdyjonsnemkjjbdsscxgpkrqwohwokklmyaxxwqgjgqkrgbqkjjcoeezacovlwhrywbsnioikrcenpttsmtxrudtjskotw..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 956 { : "exqcjzrdbivjvnueiqdnoljixuvjdzeqlpttkyfdmgrnhencyyfawbidtgfqybgcfogfsuibkcsmbffvcddyjqptctybdojaywufrkrbwrtgqewuhklleznporfslitmctuhjkdtbwhvafytccnqeo..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 950 { : "eyaeqcfrjtyeyiivxuisdbzxhyqkrgklrwzhclfhhfamvaefnwixxqredwxqknyremnnumqmvuwglynbdrnrotwvnxbmgaalidtomokyeofhwnwkdqdpfqkcutlcgnmafjwuttdnxrbcaviqkjmcbb..." 
} Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "eyaezfqrurvzqyzjxjpupfayonoqgpnpsyvwvpknweusqygcnupscywqjqjxymydouhesedutorzybknwhzphncicbgqkttlkfbuajenftvkksvkqsjtgeshajyluenbvfsoqhgeayhpmcezneejrx..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "eymjdxllqgtttadhruhgxavljfruhjdegltwuhwhswtqrabvtuscmobdzpwxyynxhvdgjoyfocejpgpndashckubctfdmrmyqtzkpznacaqfqvquasyhagrjaybuerkpkfdkkrgazfdpeubwlqkptq..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1009 { : "ezhrfecrkdylqzbbbjavhsfjxpxqlqsabqgyshbtqyadibvehcqxefxvtlnfnfwjfazyyjvlkcpuzcwouxwipilrlvhwoioaaxtjuayekuivolzwfcpdjxtpixbzyenhkaaqakqvteeqyascrzxfhj..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 962 { : "ezisyzeykrzmzwoothjaqbunrtoxcpcnsoldazzpnfcedcrbrxblwhttnppssdkvpsumlbxnirlflaowdnwlouvexgrrjkbmxahjjfmtygkjjbqdyskvkeaudhhlpxkddxewcochwwtaqudxmqpxee..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 947 { : "ezjxtmmigbhbonbbuijanuqkpxpnlgzrlbkqhzyhdkngjawyrtpzhzpekwaobqogtjektcsvydtsgjhksvewwfjludogpknzgeohrjgmrfmgnebylbuwxyvopkuvzqrtubzmxezkyzpppwbvaeocjm..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 973 { : "ezmnifpxmkmhiufcjftaoedbqebtxrcmajxliupylsaimbvztldcjcmjnjhfrakerblrfherbslazbogrigcmnovfgxphdzuwvlpitddodoxemyvugeiwrqcjlyugmmkbmujkpgkcpucrotumjqszh..." 
} Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 941 { : "fatruxsccyjtaviqgsmdfmtnaoaouifapfoyxkmogdbayobrjoetjsxmknqytxjovkjhimoctzqsbrcllnudpofebnknglkgqwppuqavocvpbmujoihonwdxtgdwocpdgvtmxbtifesduygdevwcud..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 914 { : "fayuvgxxgiewnxqggxkcpydedfxuisvskwvalkcwtyayabvzfohwqamslryizucymjwwvhgpxnvugfxkkxewnlxinmatzxtdbgrkorvorjlgpswqvpaleswpfmxtkolapgnjrwlkutmxtiazhjktrw..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "fblmedjdieanmhcellcplieldngffneqyumyavifcyyiheaxexltflysmeudquijowmmhlveehlksopxtdllxuuszpwhdamrciowmsxjdnrgbdeezeaqosdumjqxgzwaeefienledlzwvuztisgvgu..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 959 { : "fbocqqrfvzviooqevnxfujyblogbxdhpjbwdhtrittujzmgwosadsngxoquolinhekbffxjvhzuchxrydsczwqnhkcblavfiduzqaqklrzsccyakagupqbegswjikdrkzcdqtdxlfjgcniwtzkprvz..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 972 { : "fbyqvjegzqrarwnmagmescogdzccpjycnlkxyanfqxeuzhayfcdzczqxatnpngnnnfdbkzbvnbfxicbrkcnzdavipuzvdofkpfisxcghezswxgcrcdwefxhabrtdyopmijegqyygiviynkxlvggnkd..." } Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "fcermiduehqnbkopkewkasznlkqjwvqxsvahusofslrcqfbxwurakwtclqkrzcozhiebjsoioxxgdvyrvhkkmijnxkybjjtmsvesgmkyrsmggdnziiraijacqobyfbanhhvoqlfvrrzvtfzbafovcm..." 
} Fri Feb 22 11:37:09.969 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 949 { : "fcrfuwghttnpmxhveuljxjxnmmgufrrccpxciwdapfakdkkselgdjghebenxiqqzvkesxtitlrwlxcgumgranfyivgnkwrnaopbqbpdrqbaqffgwhorvrigcnhawookimvxxspzwjrbdcddmrhytmv..." }
[... the same "Btree::insert: key too large to index, skipping test.test_index_check10.$a_1" warning repeats from 11:37:09.969 through 11:37:09.973 for well over a hundred additional randomly generated keys, with reported key sizes ranging from roughly 823 to 1011 bytes ...]
Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 946 { : "hgmqmsryyrwrkefnqtxwodncycvgfeuhpzvpapgrvhydnebyauwfffvkoylqdzjwocharlayisdeqlynsmzwdilbnkjygstsokfdnzhhrkuxrrgdromqrvjadcimdekqajolhbscegztapmnuffstb..."
} Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 911 { : "hgofewiszdyheentlfvfjpmqqhdxgnvsjivzyddgbnvszkkzrgvpehlmiqiufpaexkyjbyxomghxmfbymhbdckfiefogsvsdlsqoesdqwczlhbqmllsrnbktkhjfstvopiofneqrbsjospsphtwupq..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 863 { : "hhbyjxlsqfbdryebcvwbosamuirjnccezokakxaoaqogcluiganmqvwstrziqktfxucvzovvaxolfesiokrfxdvimgcdsipiifofpcooeknuqzlqqincwtceldqjzeemeisfxpxmtwgkekomtmpfhr..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 996 { : "hilpgmmifleeaedozfexreyjgcdpobdhdutgqswibipglwrbktyftpnzcvjdtlalyihstxkrdsmjjhghdgdjlwglcdemhkxvmogxkeupbnptfhbfnrwoxjpaayuroodwqyqkxcelsnqefduuaxhrut..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1004 { : "hiqkqflxxfhptlqgvozvcadrywcvjnainauhwojhqbmxkyketdgjrdqkucdazcuonefpjrpglgqsbzmevhogirfchmwrucghwhrexdgjalfrzhxqprsupgolrjdmiqplbfyvtkqpbrlnzibfsehgpb..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 860 { : "hjbjssshhddowtszoongyurtwapakcczrsheqjuigrzutckpmtyexyomohytdzczwxokhseafylxofboltjchszalhgecnybpyykpmxtnseigorfectgrcrlhxnzjzqsnuhgwwgfkxlgbchdfgrpbl..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 875 { : "hjsfmjxtvckdsduztbpbqjyjobhgpsyupxtnldtutppighllcaneiaoxfpalnfnztcuoybtkhxlrjuagtkjkjodtbgjgmdbtgkgfmhlaxefatghpyxdhnbntuwsqgqgecsrohwyuobjibksjgrxnpo..." 
} Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "hjwvdvvkbgpodzltrnjicarpkgiutbtoxbarjuakbjavbdlngactbihqfeqewfekzshmfdrpiyfazbcpssskbzrfitajsaixwrrwlnjpfcctcyflbgtohnvhhymcjbwkiwhrtljtnmrsjxjettwqvm..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 894 { : "hkzmkbmxaxbjlpugkpetvhrqoqybquykqxivqyucmzvxxwsogycvwjtgrdsrobazturcqktzmjvdeoqqkbcwnilycwwbynielsrkbtfhwnobgspxlwkrylvsuyyltpsjvvdfmhohsgsbbeioduhdbb..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 876 { : "hlravsjlahkweevbuvjgxyoghmxahdvbazhgcgbicqeurmxijkkrworbqubpcwhqsvexxylhooqsunfjtzgyooagndtyteccdroibmblnbimfbcqhntzepmjherxxljtivrcgnaoooyamwmftnqzwj..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "hlrrlosoyzklsordbudpjawbxyowlhemplciocwtnzuwxrjheenuwlyrhzqgtclmhybmyslambnzvwwuyynmljbcpkgzzcvgyifimpkyrdebbolhkehvegipueuvoqgnchcbqpepefalwzotscrauj..." } Fri Feb 22 11:37:09.973 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 843 { : "hmnjqiegfozngnshtthspcucefldgqkhdpaywfsneimajvkkmgbmgzvmatobrrcbbekjlsbzfpzfsrbepzijbrucnxkpseqdrglmishuiucbceasfojfgjzvvflgwezxsducdqwpqrhmxcytsxewtx..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 958 { : "hmphzlfgbwzpgbwietuesqvalfsqwsxosxqxzfhkqhdezfpsvsdxuklqhkdnynfobfvjmpkjqmxumecfnnkcwgyqybjmggrcoswmvtrstzlsyeetbrxgpirzilnwxrwwtbmjfvsuzuvmkniyrcxzst..." 
} Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 939 { : "hngqgkrjfdeukxrltoapstvhsvwhfbohuvzuvdnqateqrzjebraiwxaupygerrixfjsbdudtqlkjwlqnifdklpzglfkvbqtagevenebiuuuysovfbalrihsjiniyeibugybhmiqywaltctdziqkppn..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "hodloikahswoakbjulxagzohjpoeuelqhwdokidnukurmlbklcsmgzirlwkvwdnrnwaalijszifwzezdkscwxxlylcacbwxpkifgtuqmlxjdnddhezzmcytmqdphtaqgoxchaodszjoaybksmngdkp..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "hofgdpaldnmjgzwbrtyuwvoemevvyxpygdvrufrssqdmvgdodyzlnpouiqhoetrcpxbjvoxtsjvmiuatmbrehfgmnfnpyeysqtvcoefedvqdjgoagidnpibhzxiaemtggfkcjvwltzbrkposrrulxw..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1005 { : "howspjsiwoebnzsrmbcanoiycbzixzxcdllkctzvnnqhcknufaceruudyjxdghkqqjibudusscooucdkiojkropjlhhdocxdxqgnlhkgcgnrpwjhtzvismvlohirynomqsjksotbubglpdsmwwrpzs..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 980 { : "hozmvdcqjalfcnkxwnjitzrudlgjcbtcakyczrhfgzevkqnwzmemkgayectykszuvrvvvmzuyrakeltovlegiyuemcrxrrabwnniuqvayzpdlswzuxlabeqqgefqogxyusgqkjipfovqjhkewbjqbi..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 907 { : "hppkmrjcanhhjggnaqlwmxlyyguskpypsqolryfwjiovajiorhheosvmxbyaverjdndlqucdrunixbzshumkfbqbeznmoywpcmndlvnabtbokzkodjhfakilbiygbxabpvfzejvayfjdamjzzkhznx..." 
} Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 976 { : "hpqofjdrmjigarlelwojemgalgwztejbgbcgrgvbnzvszpfahzlnvazddvfrqgqdnaxwpguclbtnszaeucxiaksloxcclzthamrbrqzyvlfkvuswtsiibbytolwjxcprnlnfwdmbadwufnhlziqsnz..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 830 { : "hpskeqfeogbsvunynxntsupsevnqiikkromludfgsadiivzfcaewxxwljbfchmejtpemrbpzpqargxateswhutnstepjursxhdlawtrhchrmtpecjqrrgsajqdmuebfzxcsvyscbassugmeslbgznk..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 849 { : "hpydqspffqmxqpvlelothtraklehgpcdvxiwvacypwhwhuszvqnkspygzfgmchloejmoyvdcoggnzfspfykqsacufumnvinelxgymtvhjzrkihhdotsqscqkcogymzrmdkvqvdbkvkbzvcpsqnekci..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 950 { : "hqinpagkgggedskponmafcgahfyjnwwnbopuswvzcfccbszmbqvpoyhwuugwmhhnbhptbplqcvbwwarbybflgjhzqwclkffoqxcfteyqmqxjkkdvbpgeqxpzehqeyqofdbokiqhbmvdszxkfishomc..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 985 { : "hqjvlnxjcigjeyqggnewdnucssyfmrxceeyuwdifvurdldqxuaxsnwfoujblijlgfhzckzixpkdrbratmbfqnmjeldvtjnjyyvdfbzdzyufmigidosgtpkwunlytuezjibgzrdrruzpfmjuerolmzp..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 862 { : "hqqjqdmxwfhcrrbnvaqracliavkowlnwhosfnxypiyeiaqjtfmckiktdvxgwncnwgtyduhosfslookassvvkyahgouekuwhksrclvtltlzuxmjpmrkggwbqcnjbzbhiarliwdarcraiwgfldxffybw..." 
} Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 907 { : "hqskoayrdczaxctmvlfhywhqxkjvweusdelhtwrbaetdulhgtxhyhitvjlszvonxtfphcmumjcfwojvuxvhmbxioxcsuoihwfrawdwfefauypqtcbmbrgsndrlgugvqavfjcfhgmudvsaewcbcgxnf..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 931 { : "hqtcfkqrodpqqywanozhbidxpzywrfevjsaqfehvdjpxdkrgldpbzgojhqosepfzqadsfgwtfguydblnrkdznadjjnlvtecugkogrmpmeoiesklfrqfvdobjcdizaconaobtfzmyersoehuqpwttod..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 824 { : "hqultbxkkyhggwqmpvafzjjihqmlsomzalwlbcnzwvcnfxgszauoipxlbgngrbsrkokozlgzgextnpjvqukowulmhwuwxxcjjpnxhveucydlinfcjkfvzcbcwrivqogznmvpyvhkpnrezbwhqkcbfl..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 918 { : "hriztrjscmdhxbvgfodqylhjpsnfpvemutqsuzcqvexgnzreiggzfqopcvsvoluxvxqwgateqddwvwaxjhyifdicyopvcpdikqfuhybsbtenznotgquxtlywcfwznywfiitmlkxvmhahkjzerddnaj..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 932 { : "hrmnaystzbfrazttamsrnohkrpqyiekmonoxnlpcbodelqqjjjsaksvdmmcbhptfwvymeixeimmsotkclpkgvkocfbkyeeyddeysttyyoajrndrquwolavaqwhtdyckuonvgvusbaxbqemrngkcxyc..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 932 { : "hrnkwdqzqeczufmhtxynwmunjcqjharsytbuxcrcitaxpphjadibpgzatyfluxmhccxguljkdgxbuvijvgjspugsgpekcpncsbwldlrxwnocuckqpjturydbsjuaiisltpnkdnqnizqxlrnkrfvzgo..." 
} Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 839 { : "hrpgmwjvlqldchxwjybvpzmrnqutqcwkpoanfctnoxhoownxkwjkmvdqzdbjswpykehdlxooqmjndvnksrwrrthxakupiqvehgegjmjgslwmuliamkxgwvxdvfecjftpeaobaymrefbflyxcjlxamf..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "hrwkmldwdqvdywxefshiyrvjdzjedddxgqhtlkvbmygylafukodusyymzfchadfrcaefkfnlgunwerpzwwvqtkbopkfkjpzbmclyrcgoyyraeacorsfqouvonsdtzqnuatxichiqavmubevexvwxgz..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "hsemuksvmxsmpiqiedxdqvqxrzgawgpaiaqfjklamqgwhxtzmdzfgsyjjmazxekpotdxcernhxchiegrbekzoxjkwdttjfudjalcoqezwtmffrsygvtsumehoycgzuidazzzubsdbdiaxnvnergooa..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 835 { : "hsewnjqwkodfrypwfuflmbiksmgacsmoavxrrszrxjluvaheumrdhkrezckwmlhekgffxfegsmobhwaqoykmuyyqruzbxzctqxfeliarycgzepbzlnolpjsvcrhkgekofocxexsazjidfweymmhtas..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 984 { : "htgxdpkkzfzwyjqyslydeifekpzuycrxvjggnvdgnpfppyoneiacuzeygecfswmjiygvxqhujxdgaoruofdxijpquvmcpjtrzqutniwgfjepfhfozblvkhvlcdsasuqeqkuloipukxgxiqgpasmpvw..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 893 { : "htiktwglrkpyrqubswxmgsgjjrqjsyvkbqgtkeizwdwaxwzotdpotpwfggosgqlpxtkqzfqlmgvpfuieykwwicclviguxdiguefexbwgmgzlnswwghblcmbbpllotoohmavhyhsbvqlyborrjmbanq..." 
} Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 820 { : "hvxrrgetjwauyqnfptneulemmgzmkyanhkyxrbninrmgxvlqxujghmpzotbkkkfgpguimdongnkyoazibpysqxgdttjuqqklgqarylwcjjuilgcclnpmwrentwgpgfmeycowhyzkkodwmnqicjzkfo..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 990 { : "hwbaglhiieksmdbekxjncexvbqztezfyhhepfbxpzdbsymcrpqgbacnyneagtvaboqjinlpmzxvqgsrercoksbyfbvpnkfjgyluryutaeuvxpqlngbariewztqxaeksdsrugbfgwslidvzqcdsnewn..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 912 { : "hwcnncsxqhbtmutecelyvrihsojemgywwesdumcckbxdzhnuyjvxgpmidmrhytekmqirhflzhyxnluvtlncoxzjqzknfkdddonnebkyrwbjutrolynbnkhcsdrxxzrdjghiuiyuyiyczdfmylpxlpv..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 844 { : "hwctulfscgemkfyrypdzzwnrojmgahcicdjijpjzevirgdknqxdjhwwcgohhhfqrbmtfdwqnesdocshqqjcksjcszfjvtqvpybxtuxwwqzoeykcydsdjhjdtjmsggxuflejhtbdvfxojmjrwlwfpro..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 894 { : "hwhfigaatdxeqhxltxbazybdaabqbtcvghsszaxzjigmesvxsodopxzjzekgrldqaqjecfsvujvikqnosghprjkclsdkyvvxajoylrekemgsvvbfnolnezpxnqdpbdhrhjvkeyxalhtrfdmjppztfj..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 863 { : "hwmwqhivpaxubtulvkqhvlfrbbdhvngcagjcolzghnoyhvecrqlfwplhzitpnlefigbhiuibojhrbcjpjncmelpaoqednpyoqcmmyjyxbvcfvlkqobnfpobkalzptxzfsomxdeaesoczatzqctruvj..." 
} Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "hxfbemfeepvmvbqkldasvowlzcraqspiprerzoucccdpbendhfxijlguuwalinhorgjfalwgyynpsincexnacqpqqjbypdpkpvzhqcmwgefyjtnquwyrrkqedqnraotvrjcfwxbfsslpjabfanivnf..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "hyldnjveqrzfrabapquqmmbdkeuxzzrabqpqvlscxxubltfmvomldemgygmiihdpiqmvzcuzljzkrwycstjcbwgpptlyjakcdggyvizimnjjtjbbirpitjyafuywummnrxygtslekanbpwsdxwbwnb..." } Fri Feb 22 11:37:09.974 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "hzawqdhvunggxseqrvvythdwdrjcvabxmnasjwsvwereupimtxxfhwfxhvduwaegosjutzqzrjxyhtzmvzycablzcebykocdiipwmjpknhtpmwtlolsxhchmkdstqgyelluulszuqonlrpgykenang..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 958 { : "hzidalnocqkozmsgxkrqssjlolxmtzchvbvfcnaceqzrgeljnuanjodxrofsiwplsnoekrnwkbagdsnhfroatfhekgvmtpgbwhcphlllarshfzunjstjjvwhzsixifznctuhmmjkfzxcoziifeuahm..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 972 { : "hzpuhkkgcvvayxigbaisigoxjxxkgapyaldnfpxdsbiymugfgnwjxodjfgxjjrrecfkrpcxdtyrclejuhaafeychjcfhqgelqxuslmftvbeoqpufcaxxsqtgsklacjdjitblxqnndtpadxopboxpfc..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 986 { : "iabulvrbsqkxuwwjhzkdhxqiqdlgeqdnscxmzxfqzbdhpiutgiwizdkwmgrqdrofnzbqmvakreaaokrkajnigitpylzuermjcxnmprxcnaacqkkgqtisctggztzlbrxmbyetqwcnonfznafhfaeoez..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 882 { : "ialkkmudkgemchkjnuiisqijbxwexroyrjnvzcbgtiimgaiqnhsbrzsshssoylyafuofwltqnuctdanxtpwfolqoxxcuooohkdyqcrgrwtzjznzhzzfomghrzlhelxuyhbhotzsqqextyajbniqjmu..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 955 { : "ibpkyxlvosjrvnqbunqugbkcfirvoamtwiwodkseifzztjjcwwhlsrdswctbifgkrbwipvgswxokhzwqqtsotemeeexblhrtlytkpysfniebkjqcakriqplngejwkfdxjuwjmemufectbfvkledqqr..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 877 { : "ibrfqjqyetfcoeazyqlsjbsjvyctgqufdrivcyyfaciqezkinqxsebvnegqnwcddvggtmqddxbocidbryzqpclyuvwkzwayvujoukdmtnmamwufnbmmdvszakgtqtaoefcwxtxltkpgvvbdbjhujji..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 948 { : "ibyqvfhdsqbsgipdazmexsxgybyyvltodwoygrxlxuulgixuuxctkmdgjyplirzekrsrzftrkflgxboesmupmbbmkerqulwwnvgmltiursocmgoatoltxqvbojeigbpueoartnnwgtnnwvgzguwtfq..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 912 { : "icafijxulsexdokulfvonnwpylmrjoyuhynydpqysndigotuxwttzedyacmfnzvqyyghlcwlzfhrqzjsbnwsprczyrktcodbieccziilvbwhvitaqmnyonsenhnqpjdizdaljrhtevfrhexwwngdvb..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 857 { : "iccqldgnveqrcoebjhutdihphcpjcgpewvxasiizvltdphgearnvfegspzfydpkdmffhrylwjkhmenmyvdhbglhfaeuxrorjtkkircnguilxiiarqxuqrxmetmijzjctxpqsigqmfocjyecuhdkmkv..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 853 { : "ickfsgosqfjwwskajxifkwciuitsuabrobzkeypfpfbolncjnesnvoucfmubsoymxecaybefvompoentuekinxmllnyjkcsmymxizsaxvzrclnrejgemwadzfzhyczepnnqsadlgkuulgidluctcbb..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 871 { : "icuhsuolzloohhndpdfooaqsxsgilnossxvywnzkkrvfzacndvxxszzxjphjtkzvsblhvospdnsbjxelsrvmndrajqdvutnbpaydpvunwrlcpmtxagxedhgjmdghkmdumnalldgjzatkcdqrmnqyxe..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "icyajwvkgmmmiadmqtdtcvdrjuynblmaumptaqhixdzqoehdlavvkkfywozzeitsxsszvunwpvueanwrlspgjwryehajoruasnzsewpkstmewweneeqcsduvvgnjdfpqkwaydaxvwfkwxtvllywitt..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 871 { : "idsmjkdsetnwbdeznnxmbpgcpnigolohfzgwrktdooaacmsjpcfoasebsmadpxkfsksdnfsxivlhfvngwlqhjexnozmapchzylbaxgumycslfwrdyjrigqlurumfzoxidoeejpeyaekhjxobpvpiop..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 952 { : "idugzmqpssvrgfrhuwltqmwqwictqjwgajywbwuyojnyxsyxpitplstxfymuvpicwbfqxbwxkkhtnhcuavnsespucubkfivsxfpzuhaagakagujlxmyjrpkkvnqwwrxwsymbbhgeqkiogmsrkkkrqy..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "iefowylkzdyaxivikvolofsvsdudsgpgxoyrmkqailvbrbpcpmufpdystxpziwnneoenfwicgwixblzvguzfclifnlwacknkevnedfjigcocithbxfpfjojlkvcycebkzsgdtiwjsnkqfiafeutjdz..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 994 { : "ifblwduneguatgminswsrpxmjzlthqjsvrkmxbzvzmrlyusgkabtaxjjaykxzhmnixfvrhcmyinmzfjxpbgqgpaqmzuixnrzafigxrirstvxxifyzhibmhwfrpthdlcynatjgnpaluinuoslydmoyd..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 931 { : "ifjdsvsgyeajllqtgpedejoiqpgqaxnvtrsnxsyywzulvfyeefohjfhjpoqjxxsszfzpxhitwehfobvhdppuelpcodgtldtokglcowwbmdbqezupsvqnksiuvabhnpncqsmxopexpvnuycadtrbort..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 988 { : "ifobnhanzsbgffwdkcxrbdjsbswvcmkffseqvzapbgszudjgxzxcgwxujfqlpyzbbayisdpwmitgerzqrqgfkxzurujdnqkmzqxnhlfiwluxtwocjhygwlqhngexzqsgdpaadpoypriehvelnvrolu..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 912 { : "igvstiwacacewfoczmluutpnnzyppwqzwaztdlfqzqbsnwrzqsunspxyhcqbrdclwllnvohfnfcuxitdzirscfivcwxyverpbieejcezoimupacjuklokacgkqzhxafpfbvgxjgwzafksofftodvjl..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 848 { : "igwpvbsnfcjcqabbyqxwnwcuvzntsotdvragtrkmhkljekxzrhqjuhyizjgsceqnlthidsueqrtdxdgywpzlviyxavybujwkunljsowiczgfbmqvjitudwwbnsbgnlgyuqlhzoikxyzrtqfbivzjer..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 919 { : "igwtecncvshasuvavybjllkdvtfzvzrxsvwziebdfwifdsadklvolooydxggvrbcdxowgcpofbkojklibfiqzsiggaatkrlbwsgbnimddpzdtwpjwnzcedzmtzqaahrtrqoyqyyatauatqqihskhzr..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 989 { : "igymagffbpevioxwpzllwbdifcyqlbrfkibcawdyleejbsaasugzfmngxcjtwndqjbzpiefovrdnbimkpuwgxrbhyaqataaqfkuyoebqvtuibesmegddhyxpioswycvttdgtvlgwwehnbohxemrcca..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 862 { : "ihdecegxoqhnhzydimcevnzlylczrqabbvxdxtinzxtrhrpyyokcqlepheorbzjaheppjqolgoncstikwhkkztdchljqbmhucmqlbgkuederjudnmgqslxfpnrrzwifiasrapfcqcurrzouhqsbbvp..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 836 { : "ihkywzzjsavkikdwidqfavojbmkvstjjragjrdangtlzuvbkzaihlbtvikmpoimrpwncdxdrvdyodhhagquptnjcrvczwlnsyujpiqllzfwsaqfuzwqkxivjcesstdpoqpyzefopjyqmxcrwglmppy..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 952 { : "ihmtpyrbtfozdborsdvbqiaeautmcccyojqyqmupoxwhzowlttqgxfnqnazfowumvtftkgxfczpznvqbrtqaddfuvwqbuhmqssnloxoictepxhwahddwwfkzkvpektlhdzwpmzvetacvekdoqlnjjl..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 850 { : "ihualdmghyfivjcyfsbdmqoupciwsoljaqngxdprcyenbtztwmokxcizfovxzwqqfankrjuajonxdzsmknfztttmmvdtfgnktlbnldbfgkjsktndfnrabgaaqlgdqgywmheulnusbcmeyizdrjkphz..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "ihwwlmeddmsrakwjlwszpivmrqdrxikxbxxnpnyvjfwgnlgapqsbxuistxzdicghadxapicujvbzrnjprjvzcxpiruzxtjroewzebidksubtkzfwjyhpfvzjfhpkauasacnsjlghityjnuqzwkonej..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 998 { : "iiduqxhtfedzkemsrhkbqlbkpmwyedticbtiidtzwuwfzfhhymslvytgswmgjbfejsgltzhbtkhtrbiooafrtpcevoeojbrxctomrztfmabpinpkncyecixiguaqjaktnduzpntjbcwcnlqgjlasgc..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1003 { : "iigmxynysauinfmtlbyrfxiorenzggwxpwexfyklzlundaxnmicqeqvkzndrrqdwxvlyiuvacagjhmrtxuodunqdvvwkepcumgeviwiatomugchwoyjrkbvbbwkzzlkpvakkyxtuwvpkevplpcqkfd..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "ijboshdbhvvnuhexrbhpjcunbvvqxxfziaqseykjxmtnapvbhunrgkcmgzjsdcibwddfczsxlokrywojmlqhhhnkjrhkrcrbudrknofonmxqemgkjbldmcpjthsiqezforojoksmgumxllqdouwuww..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "ijjdudxozcdlesoocnabrytnhbfbwenwlimyziymwhfumrwdcsumvmxiaaiixmbgdmyglinloufbhwvpdaemmjqsacpbhjmyqinahrjhbpnwrgigpszcmrnkxoylntyqowdigyxmrfszqrregoeorj..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 865 { : "ijmpikosbeyajdgczgrinvsfxsllixtsabbynnswpnbgksafvtiqcqbemytznnfnmmikktobglqbektjwfvgrggmovsepzfmjbnseugrfsstkqfctmsbempnvwddoeelevlvomdztpzcfwjvztfeae..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 981 { : "ijreohmkrgzejbhutivqgkydttfokmjsmzrppftqjfskyncliantaorptrewmxnobnfeqekgfxgidicvjluwcxivwrgtqynttbozreownhcxdqmsgexivmtolsixcltoopbslnxepernbywxpoeeht..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 869 { : "ijugzpoaacbyatkyxldpmiqzcxqaoxnydoctlqqprzubfjcmzodnyfkzdafucdgyioyyojjntdgzsyaqqhfufmnieftovtnfxkppadhoqjxfmytfnzqjirkureeoqxhecwkwinqlzhcthgkelxrxdo..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "ijvagwmxfaejkbqplvbkeeebveqkwfnblhezppgtiamvkuqlwjpnnvqneyyziqzpmpqclrxfziiwfsdzzrnxrgkuqmbdxeztzabfrtzeznhvjfujrxucniafxgdlqwstbdppfxzmvyeomycwkzufcd..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 891 { : "ikcwfnrbndfnliyiemzzypcokkvtgxamtlvnibbpfbseqpnadlhhliydxfeymhhdqftezxylruvggaqfbomckuhdojehpdryeojznruyrcqbnxrbzfxnutdnoofjaonghkvzrlfntqspebabvukizn..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 843 { : "ikerjobbwqhqlfwrndakzhlrrqhkwkvgaejbbewygxfnyvmrrvbndoanllqviwotacbtzrtafoemclalgbdejybjomhgwgvevhtbzgvbjnzypjuhsfaufpuuemwtnjnhxmmmhejpfezrwyhkniectw..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 872 { : "ikfygqdpchnngfxwacacfvbpgeqjmstkigdecffrrsinaixpnrwqujqfgcyvjmfdtqxflvrfwlvkhamdhtqwrzlxupqljczbxyinzyixzetdezvpkfvbehayojrnuyjlvosdrhgsetlkdglhxwrdaj..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 869 { : "ikzwwlxxglyebsqqszphvgzrxnlyfyegryqclqesocurpxcufkndqkqmhgceejfgjitzxwyqvuvoesvdmjevnujvzfyunojqntojfekvvkkzyumfpdspghbeguryplorfndyaiechztdllbyteethn..." 
} Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "ilalcwfkgwydjusrfchkgkjqbgommpafwrmmglyjmqdvfiwhercbcokjvzcwmmyypkztbsubrvewfbserulgziakhkapnguaiezcsymrbjiehmgkdlgtkheospnzjgevyruhyafvxiupzxwozafjjt..." } Fri Feb 22 11:37:09.975 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "ilcvyvawdnoibonzclvlabppyjiafcmunmhkfmdsevvlwcfpqrfchqlaziwpxlfrogfpwuvfnsqaatvxkiwdeeygtssboaffdxkoakrtnkwhfqdshnunoxxgfhwxdczhdllbkgqsuzmbtdmvidcudm..." } Fri Feb 22 11:37:09.976 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 940 { : "ilsnsegqulgoaapdqzthlsqiiqddrkfomwhwobvjorzaopaanmvmdzgzikwxzhydlxkxulbcvxectxcfjzagukiksivcthkehvyzpxoaususduuihecueuymdyhyuaqkatkijqxppoirqrdhcqheio..." } Fri Feb 22 11:37:09.976 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "imhnndyhaubzhitnopjyprcdbnmhbceapzxwzhxqnhzskgdzieeiamflttfscqzshmklelzcfcoeouhmpiuuypijjjhnqxpguocutaftnoigmqistvvdvrnvvkefoctvleyyyttpymrbhoreacslnh..." } Fri Feb 22 11:37:09.976 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 848 { : "imzfehvovakmwcusvpodxxawxichybqlptojkvxzqqaharoitgigigcrrufmwqxrqjlzlweafgtzaxwipnpeqbkqagwhitkwxskzolxfcinxzjbfpugrqdewrqoojexticizyefmcjklyehwjmikzw..." } Fri Feb 22 11:37:09.976 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "inpnrsirhwniqbokngklugpcdpofxlpbvpziopgboiyxsdegvijioxzycxyfsodlwgosviwquhyoqbvhmmwufuponmtttpjevviqrxxmmhczdwdkydxdttwimkynaycufwcvuykdhbwwjhdvsbfrph..." 
} Fri Feb 22 11:37:09.976 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 887 { : "inxjnwhkqkepvtgbthlccvazpnsroecystepujxohreuwcgmspbiugnjqvymigzpuffwbwohazzdchamcgobzqpccnpzxypwcwbokibzmjkxppasfahejjzwxsgmklwgwakedlctycwmobppfwbsxp..." }
[ ... the same "Btree::insert: key too large to index, skipping" warning for test.test_index_check10.$a_1 repeats many more times with other randomly generated keys (reported sizes roughly 820-1011) between 11:37:09.976 and 11:37:09.979; repeated entries elided ... ]
Fri Feb 22 11:37:09.979 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 960 { : "kmpnqgkvgllgjsgnpcchvrwnkohnpcdwatayjdjbdsfnddxvaysxudrrlqyzxvjkmuifcwyqcgtznljsbhbxaibeyrutkheqvzrjjyynfcfqjtuakuuylsdncozxwblgobjtlvwpxwurjewigjwwvt...
} Fri Feb 22 11:37:09.979 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 836 { : "kocdgojpydxzhioucnujivjqxeizfkinqipvstkftvearslsthzhsvqzuozrtsqrencqmnrtanmnzaoqjaziqtkyjipfnylenihsfgrdiaicuqcokubknujdxtrlmqsagvvscseonyiykgrkrpizmh..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 956 { : "kozxolsmyhrikxckmdxlzarnazagyikgtinvxwizljoozeuvdfcxukjnjqaasyqbnxasnmtizlrorbciejynpfruftydlpldbchnpxhqtmopvflggjnspuxoqsdpgcuklrqxoekhqhjodrjdaylloz..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 848 { : "kpfhinjehhgnabtxhdodhaoiutodxfqsciflcupyyueplaoyswvygqvetcecamsuhqraqpusszxhnjiuxobtprpokglomuvyqqrtqyjyqfxsthifdpxxnengzkwwrszuzzintxivckthoyirzvayye..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 990 { : "kpqogdqquzcjwvqokvwlbfhqxorornjondevrvaslyygzojlhrbxgdkzfvqamddedljesrghnitzekenhnhatphnqiohcyxrlncwogddxzhfhdphvwaovldiryfvunegyjhxmtfmywlggfaqqjwyrc..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "kprqwydnezmclhencxlvtriplzijusfrjffudmgppejxlrquswnqyajjntlukdavgldpprryqhzxawdxubwowquecdylwtrlsxmluotzkwohwdcosoojksuploliuktqtghivnjghhfvncjlzknjbw..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 974 { : "kqgjfavokrwnshijphlqrunilwbvidwfgcitfqycpzwwzegzjgaozekxbgifvdvkpyoyuntfhohqdwmsjpqfeqhfuwhnhxjenllmaxoialhjqrxljrtatghkjhswearsmzorysigupfovmdlpxegir..." 
} Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 830 { : "kqhpxbfgldzrfnpomkjfydoxuxnkxiiinddwunxmrtulfrrzkelxvnpynguqzgncsqeupeddviqcqizjssylrufdeucoczfzxceoncjuyorxqrfxghyckyaygdjgtitnwcrogkhxxwrpyagrxvilmf..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 930 { : "kqiwsdfeyffggqepssxaboeokvnvsywtyqfvgeaixrzjvdbqrhndbkqaurrfphugyqfieoiertwpvitzpqbhvrghqwopnxqnxevdbiuptvteedoxpwmfgsxzlbyufolcmzoxgflvrqkmjageuhrnyq..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 847 { : "kqjkutppwpsiuovcxyfgonkwmwbmmxwbhifamlwelexzsfalojldyzoggsiefsflzxtxrvghceqeehisruwdyshxduypqgpgwyzezhbvughnrhwckzawnceagohjclzmwkzfwgdjtthtmtgmfdkjon..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 843 { : "kqjpupbtndicsqnyglilfuytbmldivkaucnmxhbkijzhjpzxenaqpyotqdbobmkofyxjuzigqmvbaoyvsvonafmxodaqphdjvvcgitrutzfygkjoimsvnxmfyuhdytaigeapoaumrdtwpeaazeotal..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 840 { : "kqyndlbumobmobuhztdtsvhitfduwrdhmihkgkzafbdbzjtnazmngmlumfffgougbhwotzxkenjxyxxwmzkvypcxrpotbmjckovyhfguzljwhjxkzkobjjzqquxerwmnijnkqodhqlvrubfnmzznhq..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 994 { : "krdzhliwqfajdklelhvicufwhbnregfakchyfibujxlagmhenwmcpgysmgvcahcwdetfkxfrahndxzntajingmyrmygsqujtulrcvdgjzqepqqqchttqaopbnhgpgcyxbcnjnejqiappegyjmnjzat..." 
} Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 944 { : "krktndcsuvdywoedtaiwlbaptbpxwtpthjfzbgyczbudtbldbzluwcsvbnnauzzneihkmcmetwidkqpbgyfurqemozmzlpfnrgeaydyoxancqqlldlexwqkyskrpywittimzkvenpqdjyjituialjy..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "krlueoofkwlfntteodrigslhtzbmhcljifremzdsntutnxoxmxulyxrlnvbwqcfoapnrwmtoeofakiagsystdvbercpsfbeulueukitioebqtecwnztxtguxbzmlaboaagxfytfkicupamhfjddrrv..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 964 { : "krvkaydrazcfllcaramztrznberrrhtgrwthizvvwayhmjpakqcfsldpiciyopopgnjxonhxapubwuiktycbfyigepsujpiomzajofvvlkvkacjdaqfuxvyezcxsqdroqgucuvzhbrxluqvzkywbxy..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 892 { : "krwwdhonjkgcxoejbevizuvrfwwvbzgjtzsekxolfqpzgbvtzimfkimclyjrkwqvvczctoasftajvkhhdshiayqfipvtiktwafyadqfbnjvqsfkmewovsflgtghuhuvygibwmvfdtswtajuxgfbkox..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 902 { : "krzgirvmcdnsfiberqswtkagplndxeownxyhdnthovejoknepgglkbvjjrwotgttmrxtsdshguyddgcbmxpmggikcvrbiqfqqoidayvxnpbhtqormukxwwpfnpubpzdcamdxehdhnddhneninpvtxo..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 869 { : "ksfoicnpwlyawegqqvdhrbrhppnjbnbegladzkafruqikghqljixujjqjuktcgdbesepalhudijrymqjhcglstfavetumdmaeyopclekdxysucberdngiasxuyuiiibicbrejutxjbzjwhpiyfpzzd..." 
} Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 834 { : "kshbtopdajuuzpfenvcldodegvewinxebfupgpcgdkkjvltuwuoaohklioepcjjcydveajhutrxplvifueleljmkayfchsldsfdvurreulfjhcbpgsuiqivretzbycszrdyhmvscabwezydgdaboge..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 821 { : "ksqbbbjjdtrloaaqcwanxelbchpjuqzickhmajxefyigguxkjzpamsomfhuojjtbkdhslgrqebfsjxdvnpcpqqijpvobsttpjwsyneuczctriqppeiajwtavfzgfcxnzexnkzlvgsggydbmazcsofs..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 877 { : "ksqntcfzvdgttovjdhlzqhykapycttlzkaibryafccmwucsetmphjfubragjuftvgtogsjkbnyqfskikltpdggudfayqhezjfsayujoldjjycptrslwwipqrkjvxiqbxcuvuwjmonerbomsypzanlc..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 826 { : "ksrpydaotzkxlcxpzliqaxgdqfbljtimumhxebqahbhqbynqiptycwzgoullsvcxcjjqdwilpdihttwzdnyqmkqjtowqtjlteenttwtihubefdjwlkbfedeincyhpsqxjpmdbzcqlsflagayurmxch..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 943 { : "ktkxsqyrdgfdxxhylfqhphdxouenjvgpxhndcjqzxxxhwnqxqxrebmgrkijgucuujvffzccjkcvblawqzpjuoiraomdfiyekbknftcnahxesoprkaavikqlaepeehpjbromlnruvjueqzklcokeerd..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 854 { : "kufegkbkbsmdgtknyckruakeuvwovbbguvhkwvvfplkknqndbpifbiqgzvvktevtmgekzxtarhfirfqfpoyvurugwdisryszffpnliphvjyxsaesurkvccsibwiwopxetbebomoxuymbiezbyzroph..." 
} Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 836 { : "kunmyprmzthexqrpaskfhvtupgjztmwxafzefaufydxtqikkhbqstkfcgovynrowbmcvtcyompdsgkvynbbfgywsqzlqfpjzcmkyznffccgwlngcuooolvwkcwjsegbtpaburkvbbogtdvazyipkhy..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "kusydgvueurbuoxxcycfdnhkwnbszewklerkywaftzdtsypyhosybssipaluprpfnpxivlgconvrpsrjumkiouqpbxquvlomonpncfwasgimsuxzsozpxhvzcalinjhuvjdsxbcztbmqmxuzqylwik..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 916 { : "kuwkyssckzgjibzslijuprmyuijhlmeyvhstshbbpvngaxpvernhhcvvscyjbrrlubjpvbqkztiqcbzlbiaqpleojdlzvdsesmyefsjmlhfbjlwgftmpwjtvstjyoevsfgouuxvjcmrhmuoryzrclp..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 836 { : "kvxlatrftirltdtmspncfthqczbmcbmtklmnvgrairnciamfkkdumpzimjurkfpxxbdzwabowudkhcxtiygrsdhgzvmntljsdvgdidquhvethxamuftwxegxeawrcjppoqewaxsxtujbuglykamdtr..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 874 { : "kwbyfxcxlbftoadrdjfpyekoljnfebqahstlesivrlrjecvgafaikvmtchhuwnjywjfzuhjghkktitlnajounhrgrpxexxegrhzpddvkyvykltbdotetzhgzicggwpgscygxmcqucknqhdyazivusx..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "kwbzhikpeewlnihmgoofqjjrhkgmzwixzykbjgyjtbdlzrqbcyrgkdpjfxpfmdcbeoyedkmscperrmeobndbpcjkeesrqlvhhklofkklrgcqhnlilgbbqydnbmlzigrmpshxsciewaqgxtirnrgbho..." 
} Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 889 { : "kwkhxhuhofbwrtmonpohhebibzetcllvamwcjesdeilikhifbowsbkfbvvebtjycsvqqkkjmougeqeoadsukhugdhpabdtmxfjjymnzlgadrkhetscqvigsxsrlzvhzruravwtkmdlxcsczofvzenh..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 923 { : "kwqevdvgxydatvttgkqksfuatphwhfzzcczunxoddnhlcnpdvonjxrcbupiathirxomxvkbftekezwepyokgeyxzhhfglrhqvuxnwjtiwcbrqotvpxpuokaczfpiwgrrdezdovokyfwyyqyrexhgtu..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 911 { : "kwyqccnxvclrdanhjfgroaamjjcpmpszfnrnznoejlnqujidtmrevfshqgmmynhctxzlbxueuvljmqpxpqohbatkdonlgakpmfijtzfgpsidmfupzehgclzclsvooetaelcwiwitjmeuwxyegqgynt..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "kwyyolunvnqmmksnrsuszhblcgcsksjyjskmmtvoxkfdgqbtarwcbsqrawvvwyytmclymridbzapiuphukkijhcyjuuivftpfpzvuueetlaipiggzidrjdymmnhuxzvocnpzbtojgovcovctmnonzm..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "kxnhrtgvjueppqpflkeozvwdrbxnjlvtmypjuxetiailhmxnpvqesguoguhwoitlmvodvhzuxqaklocebxjdwjbtlgvynuyxsgcrsrvtigfgfhbjszipsdybngkaeeaoodocbjgtyqrslxatmolocj..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "kyjfkqqepmmlcydfzlhyftelyeaexmfhsgrrorikjtzutgyvtclndqgphbjvdcoztnloshoqrdpislgrwpiegucornhjdmuxiblotpyfssgndwqfqtszxncqhofcbssbgakdieomvnuoagfbocgock..." 
} Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 921 { : "kywitzryxjahqjvusqgfuuocmcxrehkhktufdsdhlcvotkftohhxnjskrzjhdujbwxyleaktvobxqqllztnrejtxrrwzozcixkxvqqvsmwlkosiiitmljcvylmribluybujhziiibkvephhohnygnu..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 842 { : "kzvqqokdhpoxqdjdptmljqbqodtzxmznyvtukquttdoikiijaqyrndmojsrvjjwitkgycupehqxofctnawchoatisdjxbuneyhlynyxmwyziykhpdjguifwohdtgkiczixtakapucvixqlklmwyerw..." } Fri Feb 22 11:37:09.980 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 951 { : "labwunuzetynwlrqsmushbynmxsfvjesosebccccgvbgjalxxssulepxdmfiguiqmyruljjhkhntwnntlcmyskijhuohvqunncaboiufkqswzuolnxcmkkvmcoipfxcdpjvigdnntchsmyzfudzcxy..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 860 { : "lasbxdcufyeezyqyzutajdknztcxbxjmrovohzllbwhrcfzehigoynaeyvjmhaqyhktdbvxjezbqqbdrfgwricipkdplcrkcbjjnsoifbrakgqxqrdoeyhagngtvnbxlduqssbdipttsmfvzabgdei..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "lawpgasfilwlesncmmknufhqmrkbzpdnvxpqyhgvsfpgckkcczawpcdsdjkbvyctihhrqvvcpspzszzdweqdjwvngaedwueqhodvwrqfwcqwlgjicxfljscxdtqwvqlowzyaohwrgobwemrcwwlncl..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1000 { : "lbujbyollmswnqhhofpcobefqhyulifnviwnihjqztwtrqtsbscirjkapbbilpjksatigrechwuzlzpfxnnzfpbfnqumouydhjysvswwczffalsyxgkgmlrvfairyialbkzerfuxgystmcmbejhzpz..." 
} Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 973 { : "lbwhuqmwccooxvymlfmbrwzyvkldwvvmofuxtxqhedbqctjiaszbhlbeauzlztfkddvscpxixnuiegbbohdbbqeylxcobxgmsmabjymmqxcqzjwqcgjlyyomlpzbugkyvfektvsrwefnawwmucypjq..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "lcmglczsyekwptrhjpjdgbohusbiwdisogsycaamjpxduzpsrptdbatzoqlfvfekyzbyqsctbtcacwkgocxdfdtfsuvpsvowcfpyfuyuundjexhdtsvyslbcqiwyqmauyghwryvorgfflqdnjmqnkd..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 834 { : "lctokdgegzqqkwyuucrlpvzgsmvuwsekdxlmgidbolqssctmlsmvehruzhaeghiikrhvzpltnicigtratsvymesimtuamwzxsqxjbxisigxfuykuksnfcnfwehqgudfneugfelsjxncmbdmcwbrorr..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 961 { : "ldfdfmghvuziyrjhfyvwaxcoqhuhpwubiitylxajjjqgiuvefoirymjegxistsupdcokwpuryyyirscxhgutkpzhixaijthmnmpclosokchwbgtbgttakpnmjyllcmrymvalektucmgfpuyolvbhyk..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 883 { : "ldkwtddtsqwhovmlsqcloumenwtaxhgzhbseytyyiyejxxsndpoauyfwbhgeqfipijlyfjhxibotcreekcxszzwuydtroseoufubrorcfxyvwknfcpnehpyylgxdyqyromlyldozefrnapfljigoxs..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "lefpizrbjaxiovvckxorutqxlpnselmhzivinzcfbeckxaksraocbkhwibravjlnzkrxhuuyrhiuycyhdkbhoiobyxhsawmhcoqydcttzisegxsxlnexngpxqvldlhjcalffaqfarubrsjaodivyrz..." 
} Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 837 { : "lejukuyayfvpvepdcosjpagiwnprvnzxycsfzidrcyhoiqwilbvdmkpsshbvpghwduijwouzrgaxisshigsgclnqidqwdrhjgdfdwhnjtyidyclwopwlrivhazorifzmrgghmdgsqalqyetnbwonya..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 822 { : "lelhlybblgitabpsbzgihdchunlfufuvmjprxouhplsadapyrzhirpyoxnwgnpyrsoxlvuvlaaedpnvphxhgypodvnvwlytartuyflimjbhpagztootnzlxwxvnenacednttxizcrcqduxhbogdwok..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 962 { : "leowcotpgfmbkoxbpaqjjaptlzhtnfkbjoqpbghhqbeifupfsrlbawirewbbfkdcychevvmsycwdaayrcgeneelhtqyvodiwjcdkfbcnsfkchqnvjasywmwkqqnpuxzksthuigdskgenxlvicemejv..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 975 { : "leuaqufmpyviutqhnfndfhzanvjvlmyebqfznweprfmghmxkjjgiassjezluszvljanjlhnxscltgqkulkgtqmioiwosrupjgxziocosiilyouobtvhgchhjdqjznnltxtvnxuzjgkaqtsizozciih..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 939 { : "lfbxrgwjesrylmnwmsnbrvlekgwfcomtsvvaaexengmtleuqhdlaapszjsqoyrxypbsqqapatmnmoawissfqgvwjuaqyfnkmrzburvuohljwcxjcwemxuazqeiincyfsmzanumsxsmomcnfowlxwqi..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 836 { : "lfiaoyhetxqyvhzsztiwnpcfetlpuomvkxiyoltcptkmrwkkwmchhqsxingkykstyeckseusiaentqjfejzanpkqshbeacuehaycpfhrblcsbicqsoylgplntnbpwurlyvgxssulkhpxkrjwtuwajr..." 
} Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 857 { : "lgjpeetclmnqtruwnignsbsnljsdouueozccmwdbxvnhiafoihhpekadmupceowdizlntwwsfvbfedgczlrjrwhjtbtsrsyoapgongdbyeterkendsqlcyzzybsgfmzrxjfwzvlpuxexszthqhilwt..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1009 { : "lgwkqgiokkjcqotssivyeoltjqzhmmeseititscrvaimpttufjfwyiclihcgfkiaggonwvkswusclduepzthwyuyupfpemubhejpozsxobdayltxzfjoqglytajovoouvwznunpkvmzsiopyknrhog..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 915 { : "lhmpyoubktnfiqsondzlszwyckmrtyeubeymedpylfemwijfinjknvljwoqpqommcfkrnyxzuqkzhryivdzbtkbcbmvpslzifuztbavimqlaqquvkisdyihkbdjrcqbrpegtxogfjjsjwkfkdeuzvl..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 883 { : "lhwnkgykqmwgfssrsxmhwltvnefakhiywvaieqcwhkiudyrgwcpdmkfzwcdczojbjhnwflyvokcppasodeznnikhlqpeafllmrfmroygwgiqctxapszulkymsqrtuxlvxdvokrwovbcxrntpmlqgir..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 874 { : "liijjnyxwuoawxisdctvzokomedgwbviddcbuidoqlmuuoopewnjbqvevrxenkvmriijudynsflytomwnggscywrhjmypzzkctzfoqskogvvurdmxikkreecozibbefyurdscpkqacdyrticjqsdfb..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 850 { : "ljbcgcuqpyruudpfvvdmollwhvzinkwcleekkhjdscualyzcozfqqjcytobzipxnfyubbwxjcngqceagjskzwjaxppklfijkiqztwugkqtncbjgmewbqzddznyccfhjcbbkwrnlrzglszmiwtiejdw..." 
} Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 891 { : "ljbddyyomhnemesrndaowwhxuahotimzkskswjvutsaypykunzhfvvzsyxwhhxvplqoiwmgomtyjzqfllmvllxxftctcqpvlqwdeffcqgckaxmnfubaazonopoirwswbcpiqlllpjcgvdvvnducdef..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 826 { : "ljhpxrwoplptutejpvfhhtrpkuywvicmoezqofgzkdeyfyhtjmzoonqkffqhkzblznyxpvjizczlufexkyvxjilkfmzjrewrkwskwtfvrclzynofmaslgngticekfkwxqgrlsicoyksnmvitjzgrik..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 998 { : "ljzbyifvhxruykuljqratbvkdidyelljxghnadtjfscpqxgfioxevudohkysuokdispyaldlpxsuaoyukfrcrjhxmnjjapukfzcbgryipngfgnllafyjginiiskafkdpzepkhsyqeobkwiiqofikuu..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "lkfmrqinlyrovmkagmpkraqztmfzowecgmqnzmjmupcupzfbpstcdwkmaprmfnobpbmidwxqcsaigogezvwatgnllbmhppljlsttblviomvxnizjvlgkrouhhmtacouoqzbkeiabjbixnzawicvgmt..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 967 { : "lkkuwowtzcuxwboauzdultnhlkxqmtjxbdujtswqjbjjaubpyhglqhduxnhbjknrcdryurjklbtkqljgsrwmedypwndyhjyxpyrhkbrudjmqoganvxgfzwmtnkybxlzbxnokwqohwtezjoeqihqhte..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 858 { : "lkqpjnkaznbbiksigzxutnhquugljkwonutyobkiurzceyjgdczgacjqgjdsgtlqgkpzsdocngwgezlfsmaecawagubazeivhbsnrwymvnsmmzyliuhlkxlzstfctgmyvebsgmqpxnwgjrueajiois..." 
} Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 843 { : "lkzmguwkdkrchovllnxvdtpwlhqudkxadhmwpzrpbtmoottlovouaevhgfdmacaextlbgvnatavuufwxaurdxfjebdorezeopvakcdmerckgzkdequfbkwvorxxfuhkrumzznbrszwlakhqwvjspmo..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 993 { : "llbmdgrctddwphvrfzckkhnwkfsoxbiajiyorcjdhynntqdglrflurfhkpyxynwotyscxynhfhtcohdpbzcpngietfhcvzmslaxhenumhdzacqallzhijxgniwlipmosnzzymxfvmjhjhmhmiogvrx..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "llnrepywpojbhfeqnqvshzrbcevjycustyzrjupzgwzwvxekthndbkdaiaizxuhqpwsjgicyfsqwpiaocxvjnduxsqngcvmufomqginhcslhuunakwaykgzpehnopazzcrzmoyniyvpslmcjygjnjy..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 964 { : "lltmpzgnfkvnrjqkxfguzwiigiulksgrughskukztyyfjaxnxvecosalacsajxedgjgdaimgtpkrpmnoqvwsjltzengrqryaredsvmjpnnhwdrxiqqwmhtfyggdtxghhignddvnvuvaspedaxyybby..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 937 { : "lmedlvlqczoybebfyzlkcqymcputfgytcbbqebfkbahnbwpualsbrfvmbtsalqusgoetjskmpjeqxxrzqcehofldgrgwlexwcdhyvuinwetqioiqsuudcddjforrabwymhpulelqxwazdangbqaswg..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 937 { : "lmhjaropubmpoktunffwajadekhnzwzlfinonwbgksjjsrbdayphnyofxppdfekgjysuurwsfgndftwvgtulrgvezxmemoqsdgkfazldwfeeqoehapmkczbrhmuggfaodgjtjhxwmgtmlprntoobll..." 
} Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1010 { : "lmirfxeqqcfcebavbezkntnfsudamsmhbeyvyczercrofoqbgvtmjjcoefnmvctxhichkregbkdqtvnmdieextrdmbhdrhlfkbbfbqktnkfybybgracgavegmldugxqhzxfacxwhewqgfzfjdqjtio..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 862 { : "lmlbeshrwxiaqlonjrakzaskuxiqfmyksmqfzryqfthsoowprxzokqbxkuyqvxtibwaxizmlfsstawsjhtyiswvxaqtrxcmjerbvzcrjpuustdjhuxqxgnqzvpwhhrzvlguswhcfuijfpzeoljiopi..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 940 { : "lnayotwdeelaolfukkbdpsthgfxkrwpgbzukxooivurgysklxhvmxvpfdpduvedzmfungkferokijewwrmppkmgziwqjmizleoxwvmhkawieizfcqjmjmsemyywytugfkjqtfnhazgsjqvaemhjlzy..." } Fri Feb 22 11:37:09.981 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "lndxwwfvifdzqwrvunsryacpxtjmxdflllflzsjuzevgwmhpylebtksughilhdwkyagtnrmtalazkpoimfqyrgurycuqcedzmamharczzypokdwaysfiejmsdszlwmtynkanvhbdoxghttacubachd..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1002 { : "lnvverbqkelzfgtzoyjajhzspehnkedtkbatdrpwlxszxqguinuobngqgpduerezdkusksmuveqvjqofbnfnetwpjmwpesqrrvhnudxlestqviydrvcddsyxwtraukhmuwjwidzlvobpvuqysqktrc..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 890 { : "logqljinqxindkaanwgpojwpixfnzvpcbenwragunjvxyeievozhzriliwcjyglwxxytiwpflgymwpgxctierjeamotywlrhlghilymvhstcsimxhxksxhmmmpeagfrqarakmwcuruxhpixacowaft..." 
} Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "lplfprmrucnxyewidxlrqkkpyajhgbvxogtnoqlolhtfpalenmwupmchjjdsyupxdhpurihtmvbrhtehjbnswuudmpmbuvpoeshppjprhidoilgybqaheqfrpuvosjspknseevccajazhqdlweoeqv..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 857 { : "lpwnsdgpgbryckaxvmpdjreopggxismyqcbvpictjggierlwxdgbcqdfkmhzyanzaqtkauhkbhissuuvuwfkkvobirajqeehagsrpiobifeqhjsfugbmojsiswbzcmcmwchwhhsozztktlxscxzjet..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 967 { : "lpzpstprhtfzcxtflsfqbpztkjxqvturcelmbadbzuwkbxcitvqepqwgdbbxcidwiaoubmqssczpywmswnifpwpjxocvrcjxrdpftkfqludrtgvtxycdpqnjysqtlafaxxwufcnokoccgmomlutmze..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "lqepdkfdwpbetvjawwhtbtnbfopimtgujjtlhngqfofmyrhqhdeovwjzbkytrklhafzrtfmysyujbzskkqwxodkxeuhstnftcefyuflsbjrwxbzzvhltezqlqpowqlwkyygmzpmmpaaijhhbpphlqa..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "lqgtrjahgeemgbldljyjjqnyxuclbrnpbksvtwtptrywmxwbmbimtnfnaweekfcxoknykxrwztgcimzqtpwcvkvcypiycphgxfquzbkibthcbpprfutbqwztgepjqekzbbqngdubarbgmctwvgartv..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 973 { : "lrfwxhszqtpuctboqzkbgpjoingnoqutwvozwwsjkvclbqgwmvqzhmdhrhaseuesrpysoqugpnnkqyhlwsuimddxlyehuhukfkqzegbmyryuupobsqobhppwlkrvdwqvxwstmbyhihhqahpazwmcto..." 
} Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "lrkmzbfwmjxzryzwnugciwrjtyngvaiomtkiuwuggytiwsihjbdfnxmirtbixulfbpmpsupftkoameuiqmxqhvjgosfpxwliyugybaqibxgnpyxhpqnieohvkoqjneltuwysodvjfohxodjohamadm..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 857 { : "lsctqconngpozoiwmjbvezwhcesmeuhdimiacinmqtcaczmkbhfyqofngsrpgeyzfamuocazfvhdrnpejeiwbilecmrgsmfqkbezskelkxhqcggtcysrhfdoppcpqeqxarntpbebasqgccvdpvenkx..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 932 { : "lsisuvynkypqdvpodutlqfonneprxxiwzrstfdkolufskecsbzahjdliulzfwttezhzbuqdntwamnxyucrzqafbycmhfceyazypqdcucxoeubckqrahbvpximumrfgmbhjlqcfvmgogsuuotzkbfgj..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 984 { : "ltamxqgsumlxhsyxkfeirrkhcpxrvprhyxruervrybvovfylwvnzjrwjqqhkrndjvmlhkacyxpicruyilykrfhnsdqjfrgxzukxgrzduzrpeahmbvnjgeqculuhtizsjounxtnhxnbejogkotuuisp..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 873 { : "ltcjqsvkqnibmxasehbbvifwvtzzapvmdqueetvdccwghggrelxpqpwdvgkovwocztbxrbsfoxzoaohiuixwrbztceadtyccmzypfujkdaouinsqkaldsxpnhlncncsxdrgqkcckxceagblhmgszci..." } Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 993 { : "ltpnkvfhxejpwvcxcivxlgiuodidsspfjeckdlwukcrjlclqqyrtobyjylvhtewbgcpefrqxeaxrfswboeocabyviormdjiinjwfzqqqwykiqewztrfaivcpitrreuqxknoefmgwfkimtbgczvawod..." 
} Fri Feb 22 11:37:09.982 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 826 { : "ltrkqrdeztkkgmxgzsebncwohuscmidslkmybavsivwumtaaraqowpwjxxfvzipreevqtafsxagxybcoryzdcivhmarrdedxloazcfuyqhzvjbedsmidgztrkytyxgttzbbgsmeljlajwaefkpmrpv..." }
[... repeated "Btree::insert: key too large to index, skipping" warnings for test.test_index_check10.$a_1 omitted; each entry differs only in its random key string and size ...]
Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 878 { : "nrmulpibglcwadggafmkuybwhfitwcbvxrbuuihxsdpfavbvntmqkbykpjprpdxlhrbpbmdpgyqiidbzbanyxxpuuuheamfrqvnskkxbhqqpvwtkbezfopzeguqlrfbuyihjivnmbbpzsixxxkybaq...
} Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "nrvezvmxlafbtlazioibkaucjeaomkgfpklfrzeuczhaodtmjcalvpendsrctnwwvvjlepqmvelmtapjvivfrfgjvjcysozbemejmbyhlmoynojtagrqcfdmzwtvepwzhrxzwvflhpvkelruyoskif..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 903 { : "nsgwykzuadezjcfvggjdstpvaggqhzfjdzapzmonrfworvckvbeybgmwszosaxaoeqvoenmlgwozznjvqidobhogceytxdfpokbsdecohgiaaemnslqdfcbvjgybbtnimkhjkelmtaiojukslhhcxt..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "nsjovncgcbvyrfsspddlutzdkpzorgeggyvrosjojznufrkraeclttmcsrggjxpifnxktsveookdmkvrxyqvnatgehhmcbhhttrguiofvultqmvnvfmnoreurngzlozpqvjngyhzrxxirpyhwumhij..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 985 { : "nsrrbvewelqlnfjiehsuxswwqutrymelepnrljoucanpyahdhbymapgjeqkvswtqonoivqyafhpyvjlblunlyrlekdrazmauotmszherofuuzeuuoalxfrhgxzhfrtoacbqffehrekutrjououknvo..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "nsxdgaokzejynnxddsranadpndypbypmdabpnmxihidyupwkrgafdxocbptpsmytuqnoglqplxzkomliauenpiozllhwyucckvjbofdjpjblbeuqjwxzdoxerwubphgnoicctarnrruhtjbssilsds..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 914 { : "ntouigaxecrdjlxsppmobhwiplckijgrmcqomudchyshvkoqwwtkibcfwqndxtzupffyrqigrxjwgqcdqlbnpqxqnuoaanvvzeypbdbirlvlvlgmixuywrdvrlklkpmsgdvdokmtednkrnohtzlteo..." 
} Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "ntraokoylcznukybssfsitgzhpzhzrszmnclmrshtngbllfdvzllcwhefvmfzrccahyzqeosgykxwrtrjgkwbzlzlgigmninfwyqiralhjecnllitloapuhjdnlznbfgwihjztefybuwayckdwvxiw..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 964 { : "ntxrnqtwshgantruhtlktwxdunzasdqxnwevmaaazgyhfmzfqoydoraybkdwebaemumhtvrzqgjteoabfjbmnnqeemwlwfzjefknadieqtjccgqiuzpczqpfoxvcqjcbkgbhldxxwplidyzzexfxvf..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 951 { : "nucrefokikakadmkxaqdsukvfzohnwgigqnzperuibkaxluddqauvfenekwwygbbmspiciotqwsrcjjtmurqozpgtnepffmlniknsyvcsetznolkdpvxjukdsubfigijpgkhoedhursrfkgucleaee..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 831 { : "nulumwkbsvhqlhypmiaemzjrltinnmzrvzfmdcrdzropalpcubllnyouwcjjdanbpxeoukvpiqgmwfiaytbvoqlburdqkdzifrkwksyeckvkmwcgbkoldqpexnidthivpegsdebofiexhztqwrauwr..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 999 { : "nuninpgkgjvpjvydrjzpjchmvhojrgzdpibbioeairxwjlccgigyrbytvxpcrbpmaelqckvzgyrolfzmnbwemibvebmijsukpazmyatvwwprqmgyceismkyayzmkvwmqtkjuzseomgncwddhajjqen..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 892 { : "nvtcvtsufgyopcfaluttilwrpfdkwlfecuddyzplfcfjqiwcsyygcdrniagevjwpissnjeqrmhszzlpzppkallrhgxozffzftiheaslqarvsyhlpgiuspkbmeytlzvrwwbjkkewgrpokfuqtpilfhc..." 
} Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "nvxigsccyvribgfjiyitazunnpshytevvggwlyfevicgtzhpxruodlafuyjpmihpxayfewomkzdhxmqumiraxkudhwrkqngpqjsmvndyvkalwpnehtjqpgjnxavqjzetlqnpsqpwuheletaexulvoz..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 852 { : "nwrgajwajjoqfskafyqcwvwmaqjywqtroeniycxscdihokogfwijboohdnhnekfwpiqnvgllstjkmibilmzyveuyrchnoznxbaemctuauwxybhueqcupncudyihkodwasjjecoyzlpizddgjqrzkkt..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "nwwbxweuqlrgqzwmtrokxjefrhglcfyytbppetsenebxlbswxkjrmevtqivpxrffgajfqmgeedtrqdttljqvcoocuhgxpuvotncieszhqhsbiozhomahtddrbuqjyitmsoeibbbvrxhsxkmebxqosj..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 872 { : "nxcpdiqiyqskoojlxgmjavfvfkqdnkapzicfbvzujuefqgehxsfqpquyvudeopkjhacegogtoyipkhulrvtiiudkpyswhpesrjbldiupjxvgkxsvordnmrlmbwjmqqbwodyeyzqkayizxexbgtllhp..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "nxsbphswybirnzgfxjahkgmcsjofbbirvvvsolokhlfzkopjtxyhvughredtohgvfjvbwwvhzuzchoxnumfndzlogekxbtnfakhcnrehiiujwrifcojiobnvwqcucxowxvvlbftjygvqnbmxvhyqbn..." } Fri Feb 22 11:37:09.986 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 961 { : "nxtpsgqzatyuzxnclsiaxbesnmmjhvnafqptsjpcdbkugfzdvesoicrgcicfkzmkbofnookbbsfhrcutodxnyrpmtlrqyxkgjvrlrqcyywkdhpusfjnugyjrizhjolzyzufwoamdirdzhkpewtlern..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 957 { : "oaalttsbnpxuxxwiuhhgqybgxdfqfmquftdhxclcjdrabsgzghtjujlkfsxdqlhuuzjmjmoeptcddtgjscolbxaqqopbfotwewuxxltzcyfpjvanzjpeyjcgmareigfhalcdpbxjqduoigvvpqiada..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 893 { : "oahgwoilhndmpmvofodafnxvsmutnyhymzshlvccbbwlddurjjyrlvdhdpggqqvfbyzlrsngdaujvgwskcscoblidvpevsrugdvlwxgjivkienxjudljaoplmdbkhaftxxltqmcziwsbzavjrstiqf..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 909 { : "oajnzbvpsxpcpopluvwyngzjkgyephromhddtuegmjhhtpfaqnmjmjilcrobiuwfxrmxiujivmxtdhjebtbfcbmgyxebwmygimbnebokzugdlttfvelxylntyliekcczvsudzsswuqlpkiymgerksv..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 874 { : "oapftagtarkwoznbqwfnwadrrxoxtertakiwlcklfivhmvfmlncmwnsslovsfvnlywmlmkjspmwovdcibuymekmzrbgytwbcicmznromuhqkkznbnbhriuqgqwlynhznhewsanylsiguriowffpkuv..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 940 { : "obbcwfagckokfbuonpazciktevtisajihjfxvxmdrfatmptdgtiujhuvmfpeijouqdufogfxdqwmqkymjltpqgnkxzjbztsazhssezqvmrlijtaiimrxyrfjznjzuogttmvpqjqjwczmfhblsonral..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 940 { : "obbtnqgglgyjuvsqsoklbsuxcyxxfsvteehjsbktyohteaejulehrtgkwirhdebytkqkimfolhnkzpdyxitfpyayyfggihvwgsilkdbgdkxcwuqagakvvahzftlevclncppduizfpjyrhpcbiqlxdr..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 830 { : "obgajrwsrizojuuvdkopymgiqyvycbirekqjknsaoyyorodsqgpovyrrrifimexjbgvakmgaqkikakomdhbsmqjyhgtwwafswecafinsralrhcxyhbvzobrqvitllqtpkrwzehcdrulnzumcbysuib..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 896 { : "obhmylcjvoxfqeolewynvoepmkpkwkdnwjqdimqybkhimnhwvenwyhjhaurqxmyavgqiqltippaxhyegtzgagrxgjrnmtnqhvyjfwzjxhpqmxhxhbuvtejbhmjqocwuuofabvvathmsvziukaqzzyb..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 832 { : "obpdbmrcqzzryutuqwoewtmnrayxqzwccdukgkmzkbveewapdnqpeswlhqenigbjemivnjzlofmtilauhjkhisvizxugmcwjpbxihijywjwyfhqloazdcbvnqutcreyijcbutygnfqxhdyfaljjayx..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 970 { : "obshsdvbzwdbgjudqktgabzdclhkzmhzwbhihiwjiylamymygpqbrbkbtjtkrmfkihqptsssncucnplbzosknkfipioathkilelkdfivqjkutefsatsoztjlwzsdobweednakhmybwwhnecncsrqpi..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 907 { : "obvrvkuqfuqeziruknfnbygkohzbwmmajhynyrmvvmjpjpphdbmcetcdmltbwoynmuqsubexbsgldiqukjbmjvuoqfkotsjzpopiptcndgpvokvwtxpqoqbyrilcxqyzcogfbkmpbyfpggijoddfca..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 828 { : "ochzhstqlpgfcljuktgcspqzabqdaxtzxbcyqjvzkkpmjbvhvzwlrxtuunoewcgdoswucqnyxokzpvuyfclukburdppbcjdnkzwekxnghvaewfwndnhcapckoascqwgpsrljasyrgwwbxceammrygh..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 901 { : "ockriylamtggojmbogmzabupuxkkobysgdryuablgfleppwllddcqbfdodgjomdgkfwvlzcthjiwqppbtblqxnkpucuaemuzgsxsiuogzkdwspyuttqcpnnfgucqrvtfztigzpsdnfsbiujknoshaf..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 867 { : "ocsbagbpjihorclxrmzdggjkpbcsxrbdziabikfmdliwhcnbytlyafollngcgmfwtcqanlgriftuxzmundmucmavuqguzbilwsipgsceunlzwxfkxvuyjwhajoinfsggeeqvsmuyhvzjbeselntdfg..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 831 { : "ocxmsljwdhcabbpclhhmptahsavmitumcrhligndyvmawfyxspbjzqhtwigobhrdtqcpghceasemazsmihkaxclaqrpnbhdvmfapwivkehpnzatksrinwmhrrxeinspsdsjxtjolbzvurlblbyitsr..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 991 { : "ocyamhhmsjjlluuyrfrichytbsqijscocymboweihewkquushzjqvpvaczbomjmizztjpxkcdioimkqglrqnuxkfeipkeoxazocdwdtbsvtvgxnxduqtescrgibmbwsscepohyhvqwrkwfapgzmzpi..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 933 { : "odbbjdbdsvqxehtgfnvtqbolemgncewgqkjtfkiylbkecafbzqiyagnnrjrxipzwmgenywtbfugxzinqtgnqcmnfdfbjtyqeioqmidiglvljeqbuyxpamthbpbhubsqjupyyxumywllcvmwagpvmsf..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "odkfewavedkonzgxkwvmfxyhzzmxlvboakkvvgfmgikymgfsytaxzdsizatohgrhqfsdlnuqglpdfvluswhfqldlavhwgcermrifnfrmzqjfvtetzfagmzjlmykesljlvrcqfcgkjvzvyfahcormoj..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 873 { : "odkqecwkvhinsucgsddsxrhwsnomxhzybwrozwuulctibqbrwkxqolswaywxrysvwelpywrdunzcjczbyipxmqjnfhjgojokfehdhuqdyzhnpoovsmvggceqmzvckqmolvwfwqlqgtwwmuzledeada..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 961 { : "odlcwhmcifwcgrgbgodkhlcsxyywsgnwgkysvfxpszadjakiuftianorqsoxfmfnukuuiavpeuteddaeswlhysujfjkerzpumgebsenlcrxxxvcffjnqaablhfbkmpcbjmtkpehdnexcfvmugwwumg..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "odmednwirctkmsxlphpeiztxwljkufgxymrjivqtseqbzrvxcwvilnlmgwaaekzwywyyjznandwqqttvbtygtegiamuxblioesgigifnafaedzlcmrcdglumyiyfqzpwyexlamhvujezeqomxfsrcz..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 867 { : "odmnzelfemhfcbouvrccvfclphqdgcemozptydgmupjumfekfluttvqbnaapciydperuoxhhimksrrfxufenkdrboogjviavydgwvvfgzacfjqjhgerjzcmkvcjuwkfkuwtofixutuxuuljijdmkep..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "oejvryodvnvxjahuiezzrgyngzamiwtoiuarhbfjuxyvjcaxkespxmmpiytvhrjpxxczncvnbapkhwcidvrohbiwwgiffmvowupgttsqpvoqepymlsedhurwnufurhavlhbklwdrtbybevytucubjq..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "oexwdlphqjkgwuxfjrxklgwvunjrcjjttpcncmyfeohxlzhrnrfcatgweghybdddncbiufkhxnggbnpoqtfblyzkuqvcoiebrtwahfmcpwizmvhucttozyekhpkyhmajaicpdbybxwfbambcdhjjkx..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 999 { : "oeynlaymzwedmwkqwwplutyueswraqmuvqribyhzzjpzawzcvbsrffuzncznxpccakruetpfneoieqtlfyyhvohwjzsjavthuhdjuvomvomzkkktnxvgincbfuidwmognirpdlqsxafyjgotbsdtmu..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "ofbbyglccygjgapnxpmmkvyjopifbvklmgzkbvdgmtzjpkvdreatodwzocowcnkcwscmtxdphyvrwhfgcfphiztitpawzwnkcdqrwgfystmglovqzmamratbfyrrvgtoiieimlzqsubgextuvauvsv..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 956 { : "offbjtiwtdmiyyexmzkaqhkqjgstjamspbmezfjrwdtdopngrptafsqcvlirbnnprbxpwgimisqlptjkpmbzacbgatdnupmklvonblayxcmqderqzmvhubpsghkivqawpmglamlpkgmmqrqzjaeptj..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 983 { : "ofqwoaoxhmiyivlyhfliljumjblxtxyzmehdbxoktogxcbmrrkluiebswhksykwwvjgclpllnzbchiecyyvqfapznxoioswyebaeozdlrnmmgkmjmuhzogfzstjtzooftyncjxmiaisajzkijhayzb..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 825 { : "ofrtwuprhwejuzrsfxnwpqpfdealndwjwhqrmjxohukwfhekplwttmlsiddqnsjffziyrgqfwtbxocgkrxvecynzxvhaakbcolgpunkddisfttxohytdtlaakshqahbbsumpqtayzylezchyfenjdt..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 934 { : "ofrzegpmcpmnmatpthljveqvjvopszqpbvpzmpfafuzqfrfqfbhbaphektunvucdqdtrisdfrkvupwyouxvfspoyuneidyqynvgqfblsiaonbtuynxadavzksrqyfieqsnudvnfiuelfhavnlhgbhl..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 960 { : "ofvrbnalkmbflzhrmxjmokotfnbspbhyzrdznsyyqrndzyzfsefuvgefsveewekczdlxicjbqvoibdpqkslfwuoshquypmnuazmuawpllrrxtkyrzffiafzymkiuzhfdngslpjyqelfiqtexvzibax..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 981 { : "ogrcsaylevcfmkdbyqmxpwybgwbbvsbezzuaanvizgkmgrahkocyxscecjglpuvylhelvslrrbioyftmjntgiqeykngbevtpnundjyrcwhaldvbmdxvtqpnzbviqevkkirwjxtegrtaqnnmzrwcypu..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 972 { : "ogtpbqzjcrqwvuerviycvulaznyizfsnbmiymninmktvdujhequgocbyulimjwddeugnyezrgcpmlrrtjpyfgrzmtakdxaxztnvuesskcbfpqmdwolystjbkjbawcyomlvwzbrkoltfpwusrxkyifl..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 832 { : "ohctgqiksqlzcsewkttspylioavbdlrhyawjalrjyybhxfqdyxqiywbsagliyaufboncohlffqkovrkvnjpzjxxaljocootpetyymbdwshgryqgqddajzzuphqzlxddbyubcjswddxovbpodbsmgtz..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 928 { : "oibtqkpnityhwlpfmcndwemkzfvldioxcwuoowlhidnhdvxpxyobozjmkhywrhnrasgmrgvvvxhwkfntzfzciqacpzvvnvydtnkvmbtjcotxufwanxopqobitjfujydyokuxprhudxymqgigefnaiz..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 882 { : "oibvziialeoolsdxndmeklsnfyxklkgqzcgatyalnbmcdxvivmgqzmsjazalizowklwpibelwfhdcssjqhwedbqlpmtirytfsyeqxtaiajquepiszezszvxodmrmrbbizbkwmlhmvacuxmodcllthx..." 
} Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 950 { : "oipfvtyjylvfmyusrckzaruwlxfxuzffsaxhtyrzxgqjkwgyntpkvdqfrsnoljatgbyvqwubgupnlmefiihyyzxlgsztkqmwcrpfsgnwisvmijxqxropdktczqkhkkkivkeyjtdhjaxevcvvrzykmm..." } Fri Feb 22 11:37:09.987 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 828 { : "oivakmeuwpxkgywfvrwadufcofpdyqzduvmelquanzqmuxeuesjdahmawccqktwaktpjuxssgbpbmdmmdnybmuwzaqsjcbnmjwiwyavqvoiceghrenfakmhowqzsscbdtfvxcofeglhkiomdyjrfhf..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 915 { : "ojaptxanhqfapffzylklotcbekubhyvydxasyjskhojbhxagewbjdzoidhapinwrvuywrcuforwgxknspvaygethlhywdolbphlrptufsmcxozxtgxdygzhfeuuwcvhxkhryszkpigcpvyoqbickyd..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 836 { : "ojberepxurikwvfwmwoliitxwewoclmwczmuadmpcybnfdcjpkcmqroblevhiybgnuoaundeuyfqyoppzoieixtmiyozqcwpcrjrugcaamdefvalltsiphdhtiiobkxbfstonvlbaocosicgxtpres..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 932 { : "ojtmvolqmejwerdvyucgkwmtavmuirgmavtijoaomxftlkkuhqezrnpmgrkwsqnbeanbezzdynhseznunbyxytojspygjwsqvfedlqvcmulumlddrapogusbopmrxlvvecxdcgwxxyoimzkxeyjrmy..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "ojwgrppzvavqbbhladkhvxyjfqfoaukmlaedbggebynkrxosfggsuhjwbnzlbzigrscgkzbmglgpmrieullvwafcvpdpibcudbeotdnkaconczzcuzebbqltxoifzlybdyfwcpywkqdkczocufvrij..." 
} Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 980 { : "ojzomqwovdtkwvlqptndmfjdfgnhwvbzuepkmkvhkcaxaurjqxeuurtrqybmqrbslasblwqoompwknztgsurilaiwdywyrdosohitdfuirjoajbwxfydnajroieqfaxjhklasbeubhkzvmkgorhfrl..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "okcxmlcgbeqicmjwyzjayzksuenaomcburxaqmwxwlsuuuvqwpogvejkksetikygnmzwruqlnwqfesojjrigdbgichgumpsqexjjuzovnhrvnaymtnbethjlvmwxlikfyrswoubxhexcionmiculbr..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 872 { : "okvjcbuhikgtvsclfojfhvbgpapcepxuiryhdoinwruapgfqqekmuvsbxsnkhhtmwkskwvtdfiqjswxobjfkadscfhpcznynypcgcrafvzwtaypmidsdpmrqoaheizdwliyjermffllbxmaektqpsf..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 892 { : "okwzmucooidwbzghqfugattmmyqeltkujrwrqozyhzbmoaneprgtimncpfvklwyzcnapfrdaxdvznrqqxqhvbvwjhuywxupbpoyjnhmlowiuzpewvrulsstxritvoimwmhyiibzxfkbbkyuijhghfl..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 976 { : "olvfstyxgrhzloxpryiddiidnmpuxingqweotteciyvbyygywswxtrlshvfwkicltjwjrpdravsumqtlswummojlodaveevbajyeehcatlgjnejeuxlqetixrwbwgrevoiyelhmvxthxvndlofszdl..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 889 { : "omcowjxscgcqplkfwrbmabgusqkgbfhodsaiwktwbhmflfyyokhffezbfvbypdfxnjjsetsogvrzwoopkezgbjxwgckknbchizukgrmibeueawhthskhdxfncmtmqwyovngavjplmykeijapxckbfu..." 
} Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 941 { : "omgrziozteekoktcwyjzmvwfisvdmjcbdcoehrwblykcrsqsopuaclpemcdhkbfmyzlojfkxfsnlbcytbvepfqnrlfhbgytcbftakobizggaomjuscyhbfvqracwkrebbtlrrugnapnfzfeqjzsrvq..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 983 { : "omkezjtwyvesqstcnifyxveagmgqsabqeqckiqughntsmimwgdmjqpytzcuzcwldchfmtumyfvziwylbjprowzettzlbtcfxbboanyczabmelskciluchwztbzmxgeoohrjlwupujpyvvjyycjcirv..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 921 { : "omychsapckmokjvxneuesccymugfpaxeuzbbgeyetwrydavglxkoosfesmopsbwmjhdbjlovzbhokxwbtoxlepynvwbsaoutihzzhaajelmdoxvvwxockxmwfwrdmraznsdjysovojyafioferbkje..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 981 { : "onwcjxgpokadlbugmvoygoqabyjmuafkypgjvwmveflhbyjvugbdnapgztldijdfrvjtvxrfdqasfptmnwcllfzfejlxlozovhcxvcljwrinzsqhcfqgmghbtnrfsyxjdvqxirgzixjinhqfnkdldo..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 923 { : "ooguzfzadoemqumdhjhxtpiycpfrpvnrddzennduuwgxcmavjjkeadguxmrvfksramavhwkpppwyisrgkytukogqksygwehegwtpqadjfcukiqcsmggidfzxxbubxlpmbjnttxecomtostipyfsxdu..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 973 { : "oohzynosxrvoxxsqrtsfsarylyfshgbbnztilziemhbqszpulbbtqdfdiecvhurithjdlzrpggycypaewvzhgmfgcdjfyfepbbyawbskkjrahjvokwrczlacdifcyogmnhgympdspyntjxoiumqrxm..." 
} Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 847 { : "oojhuqpxwmkwncxcdpoxmfxqtvtovnwczpkbtfebzkfpiwmzqqnjmlntfdslmawhfqjdpkuprzmxnrmlpphmsoszfhckntyebiexgxofaedieuyrfjoqeeeuqwenrkcfytgwxhsaufomplxigdkiet..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 833 { : "ootmgssoihlfbcaybdxephmfglxsqzpakwhtjsrfqobzkutmqfpcqmjbxxqvzvndmbfsxkueycjayvifvxwjdkrwkileemiiynqemopebzqohntdwusglsordmyycpgjyvlviqakvlvpsyjraoayos..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 934 { : "opfdmgakweuulurfccvrdikpuauaesipszebwqjmcfrhkvrwrqkytrlvlqzeqddqtdggfcauvffjrzjmivrquakitgsthmlnyobscopmztlbkrorlctnszewiwzargxysnfzyhewsfjzoppaqqtatm..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 959 { : "oplujdukrpbmpdqflyzsinprcwhboxhuqmouwdqjbdsohledvjwbqewzzufoawgpktgbesumqapsngblzhzrgstevovikljrshwzjyhmkqmmtbeyqkicxvqzywdsghkicqcgqgybxcpxqtdkcchifn..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 936 { : "opvqrhrvyqkleokkuefxkiamulglsosfepgwvtbohagirgpxgbqpvgqimcicyyvxbauxdjksazskcenxgowcfenlxoqykcubtrnbbotplzjspwqqtqstdoewexxmvazfmowqgudaqwdemuibfokzis..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "opxawhqhqrspmwueeswsunybmsazppszcpphfirltnspgeoeoindtqezvnvgzjmhspjjbueryvdjmtndhfimqatkwxjhdwmzngtnvwcfwstsbruafqbqmlovoeeyybiuimsgbtjvwovlyxjlbhzpli..." 
} Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 978 { : "oqnohqchqgphawwgzbdmqqsnwzhbspzlvfqqxvcoaeqqparevgycmevnxrundavhsvbkhudjcvjxelzbrvsglxcvgepvigvxukjojqzdqkxuggmfgwwfhxvdhgerbxwhxhvxpgrvphfvuidhkrudgv..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 924 { : "oqumtqxotkyjkikcxklkbzjibwgxazbwnelhobxhjzigkxjmitaztdytuthgsubxomxhcbwzufitmjrbefsfrcpxtehtvfvlzcgmkvxdekwvaptehobskaurfdfmsnosgcpckouzaiaqilqwzcuife..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 991 { : "orkjamztgpwyfikkoesoudoszafelttbmjlzrvjdricrswooiebygiaipuchdmgbagigxxeilfhgdlgociikukpfzgsxgbzjttizmvtpzrrtxkokqrnbdqywevjborwrtlarcfcqccuciqimifszzb..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 825 { : "orvutjrpmhiavbyybjnxhilnohwttiwasveplezqfzxgtznugiawnhkitpdskvmdrvzmxxcwvlkvjwzlckmivxkqbblywtsasrjseynpbpzsaetjocxvpuoshjlgihirjilbbxefwcwovxtcrrllys..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 953 { : "osatctfpaedmqlvtnnbjqtwhivzwjsczaozedkrmvaulafxddlpcfwlqhftfloxuapgddrvubpatqrypeuidbpyhprvbhqkyvgoumnnuuiyhcpoykolntgkejresfpxjthohdoxfhvosompjrcalnw..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 944 { : "osqtddlppyaqlxpycduaxeaxbssqbwfjicmqtbejffkvubwhnpqsivhxwrzqhtmjnsqokamyntswkbgtdkxvxrzjxmnkkbdcqmdgqzjlhcigolfexlxmkkbnxbnhineaahsclksfguffxcmoonrkwh..." 
} Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 850 { : "otadkipfvljcocsgwuwonpdjweaqhvntqzlehnaipyodimqogjjgwffawjhdwcvcxvmwzxlghekssianqmxkvmahfoouajfwzbruatyymjbsfrmrehyzjylelunizhqfshoeuwejwltqyagtkxblen..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "oteqneeufftaeqjaksyuthgolzdyniebpwqtsjbwmkjfvvetdqakcltydquprniioabizdjmwuvkiadllokrgofmiajqxrcwqexczgicsixnzvanrdtnvytmckervkojhvzbhmxuuokuxxifnruzft..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 871 { : "otiupmkpwnuozsdggvxbuvpohdzrpmhrhehkpbowjxbcluhyaikqbucizibsfeqmtjdvxrnvxvlablmroodmjseznatytzuehnpddsrkbcqwkcxikrimdqmccjphgmqrrrjznxytskbyumczhqccly..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 899 { : "oudreuihimhwntaeotawmqobwgmykvsxouawtreowrfirudmhckrhpqznklvggrprhzyoytdwoyeboiwhipxwugginrfnnbxqnjdlqzbyucpsieerefncjijtboqvlttxpadjbqiftrhqmzhcaeazt..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "oumilldaisiditibixhdpkklhbuawlkfyepohivboubjavpzyfnhdgocpwjwuxgapddnaegmkgkqmtpjqcmndlxfmygjqcrvgrmvtdxrpoiutjghucpsfhrfsoadhmufvraqnpseyvvhnvcwyjfkpk..." } Fri Feb 22 11:37:09.988 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 827 { : "ovcotiyjxkllnbqdxkcxvuicltptzexwgsdofwjvwmhxujoqiwpgpntcygewtizkrjxlzclwsuyshyrvomotpuqcjeiqnplttnweqzoczhsnymalnhvpdqkurffimcadikdazqgdenmifqvezvweue..." 
}
Fri Feb 22 11:37:09.989 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 861 { : "ovevzbepsodwxlpganjbczpksvihecinldjieupenqxvwpltfuwktrbzrwvoelerwtafcksapvfmascwxumdiojbuiytanjermdjawoupiqbjkcakudkkrkiyfqyrqsxvbqgyjqikdthuqmqiwtajn..." }
Fri Feb 22 11:37:09.989 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 961 { : "ovhwxichnrmejpsuhicmizjudefrhtitemuyumqrdkvcnvgobifemkvlqmrtrbormzilburopfeqdtanesvylrlovfntbuyzfjjdudyrunnnizcqpybchzizlttgejlglsrvbxyadtaiisgcqsguba..." }
[... approximately 135 further identical "Btree::insert: key too large to index, skipping test.test_index_check10.$a_1" warnings between 11:37:09.989 and 11:37:09.993, with key sizes ranging from 822 to 1010 bytes, elided ...]
Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "qvloawsmpnxbzrrrrwvdrspqazeoaqwzuprqgxtwfbmrmlnzvxskslbbjtzylzpvtlpfffeipszpzzdkbgmzovdzdammnapffafvoommzfkbcahhxmvsjetcyzydwopznsxnaxwagbbvkknxmvvumc..." }
Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 952 { : "qwbmugbgajnvorawlmiewerxigpsvkdoqksmwwszhrclrtiwzuxvyitfuioyjkwpksakakwtpucpwvxoiirmdrhftqrctavggptyzdrhydkizpnwvcjjcihwvhefakxdocoufysskmsrlpfeygmrya..."
} Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 988 { : "qwrfepacjvbkymompglfsxrsbmecxnfvglxxbnurrhyqxyaocxukcmsathfzcgonbrukwkkwoonlcrpqrxevayykdsqneftvpzytbzttvcryshsxykjmxmbvpmulpyojskfeabifmxhqqkihadfuai..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 990 { : "qwuyngwgzyvkwzwwvbagumactolqvvhkbffndrazybzyugnlnzbqncwkvkjzuudxnjkjonyqrdzxsqohhzworvbiqomqwledunyquifqqjslgfuewbpzgbluesrcbbywupkdsdujjwekuklwifflxi..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 872 { : "qxsnxojhwiqqxszfhqynvcxdkgffgswqwzxucjebagsfuhitmpzbinexokzczieimizqzbnaotgbhrxufubtxdnnxpelgronyiszjzxrzxmgdhmiuvlrahqhksxdauirpusdnpbfwjfgouxfrxzgqn..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 896 { : "qyqufgkhjsdvfjeerdorfyjnaybhjumhkwcdwutijavadaeqsulzcbbffwkvddvewbnvfgnbmzzxiueyytbhtqckedzhwnbjdfirkrhyziufzektjkdzxgbulptmuhxmdjnpwbrarbyujlblqnvnqf..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "qzrfudlxgarlwrcqbwngppbydkhwrttztvsnifvmmejhlitmvtjlasxiemjfahbzztzytfhanwomhgyqblibttqhbqxqgvuogjgicthjnewkqdywujqwueyhqkmxyeulwyfehztkybyehblykunduq..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1010 { : "qzwvkekpwhetzkuvazifzixchfprjidldksmolffwcjolptuklotcrysnwwhppwgrkkblxwmucerfvojuyimsperjanlgybuymavspzkkfoykjgcwmlnefsrypwfqivqlcarhxigigeumuypabfrbf..." 
} Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "qzxmozfzxkwbezhzpkhvzymqujnxdquxlzzwgsxiqbrzlrbhndcsirpbmajxyknntcewkjrgbumirpcwjifkvtbaggtetpafdtklaumrlaqcjzrcwhenipebawcvmrjlmdtawcvukbnrvlloydljlb..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 991 { : "raaxeuolfbbhmcdnhrpzymcixitxuofbjhrlwxvmuzvdyrjgshxrcojgpramvbyujcfmgelogyllrlpmiubhvleqbzeldmrvoeaupyrdcriswqnguqbxyafwwitcxdyslzxrhqsrhreavrprjflakj..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 899 { : "rafozccflhlqcjiodsnykvturqrldwnsrvzavcbxfcwuqbbmyqfwipuikntrddptdwbqrxzuscpvmycdluhrqdihhnqjexmngvtfizkqdpkxvxskprulsnfmsgfimekkrasetboeellqzcbgzobwzb..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 946 { : "ragdtdwmltehrtvwuuaymfvcmnslivwxjtnfwehanvozmtpqqpzfxaznuguggevlvclkludttbfuxgnrsdbhtmrrkwjbgqzkukpisatxzsbnvhnhwlwvfcbqywthngnliluusydloqxevtcllcfuil..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 885 { : "ramsdvvffndytotqcsyqgntoqxfhtrprcztxdoqhtvsvkcctbfskvcgjdblobufsybolyxfhnqfnmgwltlkxgghxlwlmgjhmulavypwgiakaaimdczilaseqegrgykogvaozrkverqmudwfepczrqm..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 861 { : "ratxdkcehmdlscnkpasgbhechjmvrnbrucpfwoqzdeomtlqsbttcnkhtxfeeotuhjstrmohacizepjcwiwvadlmzazayigoimwbohrvhdyydxdxbviztljsymglohnzpckvadqgcabpvrmdsazysks..." 
} Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 833 { : "ravhmzvardyesvpjbkfiiprtdiuchhwgbvvtpxcbhvgbneitlqhyurefihvtuhllqvpoofpfsdmlkygwzchiceanpixzypapkswvrkkyigcnpynaifhnxxksnrnlrtmhomfnbrwcsnpkvrwcenjnnm..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 934 { : "rbjkjewhpjiiyoucwbzkldjaigutrngkmqayjsbgowlrcwlxnxyhhqddzdxhxsxiuqrniafjyvqmrwzquqlbmzdmovxyvfwgnbtxosbggspvkbiomweaxedzyydwdmqhjfevmglasjjyeskrcrfmqe..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 931 { : "rboedlifgrlnabuzdcpsonzvlnvgotegdxtlwsskjdcpdfrahdaqcynofzybkclyhsoifvxrfcopxzodthivrzqlihkdtouqenzwqjvemsdrlfqxrmxeptofdamxygbzbdysicxhgydmkhmbnogdwu..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 936 { : "rbrmbltholwhpvikvzxlhhtegquilehumuclnrrvdcbvlihtexqwldvcmtehcxfocobtbvdvnamtttpnshrvupfxehghnfojrfoedpewajlocoqmsnlltplghqmvsjqpqspheirxlnkszwptzcrtuv..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 856 { : "rcdbbfeanfirdzvwojbhurggmlfuaduhnvdahawsjkriftbqncdqznvrunxrufglmseuwqnnszhxtrlpdqizdcqfivfytkwsxovsdozrzpaybhypvgpwfvncasvsjydkgxjhsieisomeuisixrnzri..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 978 { : "rcgvjtthbslmfhamcegslzhyhyqohfysgiuhoifzoxyvmtscmuopznitsxctcxovtvvxbxcfjwgannaoxqvvjcasjadoqikiihecfebybvniivdlbkfhbgzvtapeqzgjeghasluhccksijnnkpjmyw..." 
} Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "rcuazximcdwmhdouogqzxlmjtshzudzaxflprhmmnforektdltfcwqbplqnndibzyswlcynghbesekajmwngoujkylakxgcgbtbmhtpotjgrrljitphnpialxdklokadfekyjvrcvmpddequzpmkoc..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 937 { : "rcyvabcensefximoqllksxojlxcfnnauzmzwdarmrktmbxodexqmqxivcdvyvjffoarsfpdtqbxpgylotvuddvvksiuoxsolarqjasntoxtydmyxribylpgipmbrhypljvodecibxojhdkkbpervpy..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 987 { : "rdcplrdcvjspsvvpnnqxjpbtwahfeppsrbdcgqkkklesrcxzcrhbutlsjnheofgbxydothlmhnruwbybfvrtbiifbtjdyedzzsykiftlbqgvnymzcztkasycsibpnyeuyirzgqoxtxcvwlteodsgnz..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 902 { : "rdctnjyjicnnixwfumdtnzpquwwcalrzgtuuofnxlzlofyweulztkjkobdktftujyzlwbccgrioqvcpkrpjkgjwawsrivymmthorwhufatbtftgxzmbuebdcystsrpnzqdhqgssydvypwtdyznnsmb..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 965 { : "rdkaalpcbyzfulowvzskfgdfvqzefhiqrwzmathnrysetotozmdqvntbgygdlyxyqlthlvwtdwxyqbenkpamqfootizosttvvmqbjmfwgvtpzovodpoehnlylafnpdhxdzanzidjngzflnqzafjyqn..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 873 { : "rduowrrazzgjpbiucxyjbaonhhcxwpsshsngssgbnmecdgegopcpumnubzzxbtvicfuvymdkwqyxnadqpttfsbczcxmzegxvltmaaqhyhlugpsxgeafzormttftzulfwuoioblaabeulplnagjqpsa..." 
} Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 965 { : "reqwokrejjzgvasuikztughpdnmzippjbyeqknxrxdwummekeqlxrszhatvpbfuexkonjcuhjrsbceugqcbtblveuexrddxkliwxocfflejnolrmjhkohszcddrddouiyscbqgnavasohhjrisvris..." } Fri Feb 22 11:37:09.993 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 904 { : "rfrnyjvuyzjcidnjldnlugguwbmdfatijivdgysbgrkfmwmbijshwvhaonwtrzudypvnljyjbizcshifvwcjbbvsmtrkddqagnubskrvcmanngkbfdcmrocciuqttocuaokyskctzszwqaddqqrdpi..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 893 { : "rftwqymamuhlfbosbdaazxxhaivofflrclpadgnnapbggwnaapzwpbaukmdwemxjuedkqxnylvyzlqhoiywdoowndpmigghobljhiyzrpncryhtjcbkkpaizjcbtnqgvuxunmrfhwlwzybmozottpk..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 851 { : "rgiehkcjnofxempztopvpsjuwpyllsltbmomnnvrwfqxesxlizxngfitipcusljhphkffvjztwxtiovzchmcbuuujmhvvmqygvfhobztypdunalujefygojpojsrcqdqphkmroqwrmsamunlwkceyc..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 976 { : "rgmmuisvylpfhllrsoatoypnbmsdobnkvcgabidgrcoaeplldhlrixrgqasjnggefdxdhgyebkveoomsiygoiejwirdoixnyyjywminksqvvgtdiacmlxhixszhhoipuixvffjqihlsbqmipymxhyn..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 930 { : "rgwjrjbywfhoiqdsdhbssourzgfzbbxnrrsrnrjvmqehsqfihdlzmtawtdboayfvkbsqpzwuuniimyvkgjibenkermlgdfavfituxkwxspfgolfcfmljmitgbbdbonroekjivoxmyqwjhkuoguezlx..." 
} Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "rhlaotruaoocjjnkxkdkiuwcjdwcrhmyvkwehszasmnabbynageuuthvumlrexyoilxvhfvcowdqpdqlxkahamgrhkrrvmjoxylweuqnakpwbneblkbnmfkrxztgddcxmhjshiexspeciqpjrooujl..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "rhnqonnknnyclkpzukangaifewvegutclxbeklbminllooonxllbcdvvecwudurrwuzfwitwepucgphsbgfhotsowszsmvuthzmclugqugmtpbqkinszvaoxkmiplijbsxheakepukmeuhifpixcgw..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 967 { : "rjfeizwalrvhphksnlxeuptshdgjdxfifkoyqnbhqdyrscrajwikpjbixtowveyhvquydzuuawegzluenbyacyslnrcksqnsfoziwxwfidwjjetkqvgjzzoqmbccbqmiaiaphaichwftkbsuipkcne..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 848 { : "rjlepavyssvmzscnbqsazbvqaaiqopchtaympshelgiiagbutlgqjdbeznrsovxmeuwfdrnswzrgcasafebppknxhkermikpalwjaihxihhralrjpmfogzgrvlidnrwvqpnhpmydghrpfhxdjxtvqh..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 886 { : "rjsolcytgrvzfmgystfswsunozujzjtmntvrdpgossielcpqilhdrekhlepbpqrfncbkwlpdzegpzjifkedsdkmuxvnjpfygcauvqjvrdowsogarmbrbgkgjspaseyafbfoylrluulrvskcfsjdnat..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "rjuxsbviavbtddnffmsugivaslaeoqeetzglgosfbhnqhqtndrirqvdxrceiaokkmwqlwbqlrgynzbskkxutgwtnwzolszcfnkypjopvdyweaefqryyrovegyidqrlglkzwkztrkopmfnigeisgsln..." 
} Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 894 { : "rkedzfmowgrnmeuyxccwvmueibfpxozchoxkktliwwzcpfcscmgjcpkeyesjsspdejqxqgyxwumdcwwwgmrilszsxogiknuhiqldbnqrvldzspaqacavosloghwaqwjszmotwmvuatagywezjszbdj..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 989 { : "rmhhyzbrrgbrpmdaisyleakbfhhgdhwiqjxhbfslorxymvihqufticyhxjbvktkmhjjpxupimzwwfxaqvqnmlxzecpbwywvievfqpitklmcaybfahchykfdijetgcqcxqcrqrukxkgjqfcxriangkh..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "rmpkxjjsdftqozvsvzwmphbifioplcxxhtkquejbzsmluluortznpjuilelvpsehgdjcmunxdyahieytytpvkjidiyfrvjwbgjnflhhqskmkajrzzevcwnuhfbmawbrkhzxawalvboanirvwjxqqni..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 907 { : "rnwnnflmjvivpidefqlaovhekvewutdtqynacdnmziubprjchwvxfknoyfesdcfnbbxzawlesyoyhegogqvvegabcpevpuelqvhwzcnddljtbsoeecklfcobfhfydctixqmkxmamnvnslxwbxtajjl..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 966 { : "rohylhofslfybhjgnbywojgzkygtcopxivliwyxyvwftvgnlypicstlvdbkxmysmqrfkhdbvhtewntagvlrlnsivbwzkmnqydsjrxmwvojatbisycbrdljlhnthxuukkgfpxkoooxkkdynrmtmuojx..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 948 { : "rojlfdrcexupvoqbxdlhcxojrmrbeqjhhicsifbctucqpbeiczszmhcednuzcldkbnqsyhqhdpzepyxktxlvgmambuamibirvmllnxhgqrlhgopupvxsqkrkhorzmmtrfqnrxgyvngbulubgeqwfei..." 
} Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 969 { : "rordokkfbinmsldqddmyufnlstbdvrophmaiyhyahckswnlzxxddlnzwfleaspyomekrjqescytijeqbyqxahrlkpdzdsignstekdgxfbebcksssmlcqcesmrooswnvnakumnowsdcaobciegojzrh..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 900 { : "rovvzozeazteebqakovlwlrkyatedupbiyvhpiedgdnbagyjlkvgbjqknuvanxkubdoontvyawguydcypliyektimyzveqnojabgfduocoovmshxqganqdhghvxagvaehrstbiqfrqdmcsitvjvwts..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 965 { : "rowmtqshdaeayokhsimrtxsianhstkwefelntpbwcwropxzrfzdqrzjqxupnfuyizetxqnubpibyuoglufauvgvkxeqafexlzziujegiyfjbodwgrbjresxebpoahzgttebbvyokvkusxtdbpnoqob..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "rpmisjarmcrqlgpjvyjhsorfzzbcrhvputihmmxgyasvifmljjrrzkbujqczmdgfljfgjrwugsrygjtbnygrsfquhxuotxpewilyduwdtcprwuajhfqpiiikkuwmhpyjzyvhulvvklzwdouemdxylu..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "rpowmuwwqyizyrcytukauumjwpdfbuxelgstudiadymapwrbffvlfvgrbhlvnledjzuobntyfxfyxxkjfzebnaynenavwcgnyjcutqinctcppjqnhhzwvmuaukggbfauphmamyiepwebwnqspqhahb..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1009 { : "rpznqbthljiojnubkdllfqofjjmurbdzvjvicwigzombiwlhbgepgpuuanqxhnlkuzhqorlpqfkmgtcwehqenjgllgmvnzgztydnfuwiknkmznxxizgfgdezojqrwesdsepdmhjccbfvqjbzvibzzx..." 
} Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 928 { : "rrftyzkoxkugmghuoxzetbamxuzbuovvofxwhcrezmylzdohnlvppvrvgzlgtolndynfwijjnmrftyvjxuvgfhwvjeelmfmnwkgdomewtqrohofxoqypnduafhafbuzzwmrbjfudnobmhrrgxbpasy..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 870 { : "rrolrhkytgmivhcctxaozjksgvhwryimzkojdtuxooyhkvoitinfevwfxysnqtdwgicebcmlwttgjiznqblimrcuegaorkpjoliesnqftxdbvnrhgcbbryfsjldkzahpjumnlvezsionbtojvqspog..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 963 { : "rrpxgommlaityovfrirretfudgnrfolgiyruyyxrxkcbtsrinhppvuibbeyorfufiwfshgtpsorgncmkyawhwvuujgkclssolpnoyafnybudzbqvxelawsziseuywekiymfzokbxbqndynfjxqownv..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 981 { : "rrwohipdhwqtwohxxkkxpcvcqrzgooqytmdizacufrizfgpmkkednycolzbpytwovejqmgcviyqjutzskctejyjbxetxtlpycpvwmiqikkvbfiyiyfaqgbhpzkkylbfgkgcxpvuftrlpzlvypahpkl..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 987 { : "rsbtnsvetkvmmscpdafymnpmxjvpouwzbjdvhsumrrzlessxoxlqrozzidyleyygqwcmrbdzdelxvccaonzgboglbqswjpjciaaolwlfsqwycstizfupawmxcodhahrdzqrwprhwvtmhzveophkggf..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 891 { : "rsjbyoaffqlfernqoykuhnzataluzcplixxbeeuuspjjfjlqgdathcntmljjsqwothjpaquoitarwxmmuydjzviiumsnuselhqolgeggaofgycicmkjxxsjseithiljsiacggsszvtscykzxvqjlgp..." 
} Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "rsmfheamkzdzrylpykssudurmcaphzaybdtqzicixmoajogdptemjwqmydblpvkqcuihyqcoenkynjopbzieavbzrbhultsfrrtqdhuhnfsuixoauzozxuisagcsrkiltlcoypemgfdtkyrcgfwjqq..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "rssmpquonhwjodbisvqufvujfumnjeqfchkhxyhbtvdjlkmabirpgzzgfockubkwodcxidbxnqhpgulauarzjjudbasjythzhagawqrhwctpjguhrytjbkbpcjseduaksdnoivheghzgzuyowmyeol..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 938 { : "rtdyqifrlnhudsegungguyjjkeeajwimnzmznvlgwwcdsptsbwmccnafnpxqbbfsmoygfbdgrgrfmuzzehyvraomridoptlmfujrjeeaetzibbertgjyymmtryhzwanflfnddatkunwstwxyrjluas..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 954 { : "rtfopfaeongtkwtrybyuohtrgglrwdgwghybpkmffxmpgslblotjrghpcpqyvarntbyltkvptolhsfvwkzpihckylaeaddxbtnslqgsbozcgvfgixnmijeyfsdxtlgzyxmtspqeygudsahjfemhdlp..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 851 { : "rtgwwonozlcfxnirnwsxlrvvlrmpbvpudahevdghjxanrecvhrudsqqfmdfxdaydssifoevmrhpnrqwpofqldzrllvyvcwlmuhkqvgvthvtvzcokkduazgvkyrqzabtcbqqghxolxbneprqkgnyhhy..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 999 { : "rtlgujlwemakzlarshmrlqkewavahmhphllonolrlxliolmllwdiqsiznbibesydounzqclvhcthymmrrqxuywgyvmalmsnabylewjnbndzalvqcgjotzxyrjojqztwltldzqrhwipdqcrchhlcfdr..." 
} Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 936 { : "rtynbpyzieexfnduigguaezgwdcjiulguqfxowcduprkyqgphnfxajlxqpbfjrdjpguyzqynhuwsuwhfotffpeimkkneqoerjpqyaqtspephcgsgzorattblabgbjktanwpczhexninnzmkuntvmaw..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 900 { : "rudgwssggjmiorswvfakhuwboccwrympsckhsscdjnpmctmbqfxzvbspdcnwgvzkdfveiphkndqfmzyewtbtqlhqfiqixtszpcyacpdcvafjjsynwfadnhbkynpxfuqgzldmfhxoqrwlfutmasdcwj..." } Fri Feb 22 11:37:09.994 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 831 { : "rueqhysadsahaezlvjiuicsjifbylpbdqeoiynjqaexdxbuenksfbpuvqvimvwgwpxeuzkvbtvtdlqbwqgufyzbgagxijsptvmqtfundzgsmsmnewfzfyaoxbghvvuzjqcdgxyxnjnpqehfobejwom..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 890 { : "rufhktpcnvcubgtcpsuaeoejjdxbflqxhonnswjqkcfynozqcyfkuckloyzyudqcxoxstfhcsmzfnvdfwghhdyodqmazcvpgkmzitghtoalxkgmiytfokuwajvglxmoqzyqsvbyjjtwgkqvxurlysa..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 958 { : "rulawjzaqvjiwiqyircuvvdabggisvgssxnbavmzoaldepqgctmprbtqkhqbfuqlciyequqjdfcdolqnrysyhgvnofnnfrjhyhnvkevuomwtretqcbtwzxuobhefmrxweelqgxlkuiqmjcybbgaojg..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 970 { : "rusfbqftxmxzpfjrgmcjksedjajygqcnooatqwrrwrakgxbmhysddqxomaysuapjraneevgnhgforalwpoajoixmwsqzveglboqxpbdopbuanlybsfvmwtqwbxqtwvdexrqhjwzmjzvfbulriaxuxn..." 
} Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 890 { : "rutrycbcyfvihzyfxrvdlumqmiuxadygkbbwwejjdmzxsclpxdtmgfcrmlarrhdaqriblmnhxmpgnkxvwqfqburtnkjsbdaigevnfkvrzhyrcgpaemiqqmdckonnidydcrnzgnuadpbwfdzzdpwwim..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 943 { : "rvcxyafbmqrfiparzbtvejuysxhmmhkbodzscyilneptgxkfxdwpuiaxtzfnkvlyrhsfpccaeejutmkggevrwfwijjrlevrnvimmxelnoadolpvbekqdxpuuncaxbwvknszorojhlhpgfgnpnlrfsx..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 917 { : "rvessmyepfbwkydliddxpsmncnfydvxkncnvbyqvjzesggjmqzzgfhafewvpgtjcjqdoaxllvkwhoiijjlwgoxautusszkxsicsejhoxpajxatazoqgbzrcsxujxfeocgdnrejegeimprqnfzsgnic..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1003 { : "rvouwqvdttjxtluqmjzjfbodbrzcsntfxlhypinpvgxqwropazdozlobukehchuqmibzydjmyhzuqgvdaczeflvfzjbukvouqxnnwwiwpzznpqpvckkoyygfxosxvhpdeetsnmgyvesipnuchjqmjb..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 948 { : "rwcarypeurgzuwvlbribiswicrwdodlqeiuhcpxqjcaiucrzvucrmfalxfmniqerbncrjasxtpgkrhpccxxvrbfpncltzvizekrukcrnirhwsirwmapbizwyojvxtwkpwuobaediwzvfbqvozbfrsc..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 853 { : "rwghxtbvxhsvbjvlotqnfrmdukaemmcudcpntahigyndflfroveysnlyxblxaqlnunlfwlkryvkvysmnfobiunjisxmhowdiuwgvlsstvmvrlklcrvppjghmqczcmufekeetfgxwtpemkmgciuzpzy..." 
} Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 985 { : "rwqhpxhncabdikwhwyotwsxtxpghjrtnhhuutugqtymcsqxvzheyhmqnyxboyjubxjpnrgnepsvevcnfymcceshetccjfnptpkmuafhfaxourfvabcmoidpoojarrzgnbdjwvivjzlnkhhwjnzztvl..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "rwsoowtnzgmqmvkwrjtxydmwvbanximgfuhhjykauwjrwzzrbwpuujlekmteocogzqxknmtxbbwqecjjojreusipbhyncxcuejcvxaetganplisesahzwpyndijffijjpiyooxzvogjsyngurhhgfh..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 901 { : "rxqlgrgotrjwerupusnqzvzglodybidbjdqceeyblhwkylatgkhhniwxcwszuyjmhnfftbjzzsuvpywfzmwfwtlnlmsnbzeapxuysquuybwttojvubdfgjfxihyjniiivqwuvdarixbbyqczxseisj..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 985 { : "rxtayiszkiwlqykzsrmwwvhsoxjrryhoswvxerltnpnoodmgsrjxegkfiyxpfwsvuxfssjnqubxrvtwyhxqaqajawkbcgridmjclfhfqvwmkkmlrqiwfihnzuofhpytkovkdaycldpokxrmpycnmuz..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 985 { : "rxxvcjzfesdnawujomqjbbehofooprpapfccohlpqlyvzllkjhaudlwzckrtpboarkegiewtvmlcnkqcespbcgpldlkaunirqogmbkhsahrpzjniilaiyeojwaqdipuejykwzxaylqmokqrpfkssfo..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 986 { : "ryljvygoysrcvghdjwgbaorbcitlrayfnhhurpgoxyfnrcoesclkrzgkwzauxffxtlbxiwhmnkaoynvsoerpiwetqdvdpuxpcfcspqxpuhnktaoqbbaraaneiptekqkjyxfkpquvjymkjyhgwfcwzd..." 
} Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 957 { : "rzukalbppyonknogfjnekdgyfaovymrcaxygrkrisghdyqzntnpiqixhmfaydktapsmspoivqqdmxjymuocjpgtdmejjobmzwpftuxosxjdsusovanxnbjezpkcvcigdohcqannjpdjgalozhvzdsf..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 972 { : "rzyqhhlhkqfoctwnzyhxxbwlehtapzdfsmdsfjqkswcorirosbnkljoqqychvbskwxjxlehywredjzmixkhggdkdqcnhiuzxinjxefifkksmnovdgceyrjauqlslebjuignubaamwzreabiylvlfpg..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 968 { : "sakguhurmtvauqqtrwqfyaopnqucnvvkbmxeptgtdxwapoavjupqowkywjvekwglyynbaxmyotfgetjwvofhlbrrfaxktkwbtgoagnwuzsgvnwxuyvrazwjuwjtbfdcayuzqxlzrtubgxsghquwabs..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 934 { : "sbehzldmcbupfpaouhuazvcxekoswypwkptrpatkegsoozopgmmidaxlgoxuceyexilxlhxtjdaeqmjtifnvfuubhjnrvhladvkdmstdgbxaidazafzewlmoioaxfstacwqodlqqoklpfqjpiisakz..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 936 { : "scaafteatigtdzvsgbljasalxtfohnearudrroulqzucmywddxqqocwoibwzkpqadvwpyohilqzyjcbdvoqilllisatpnidioluyhnzztsajmgnuxhxzsmrlgplkpwaumziwlzliirujfzwryougxm..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 893 { : "sdffcdkcdjglutjrbegkhnmwmskvkbxlypdfhhosounynnjxylfjiubabdqqoqwrzuvquheyjqmpqfazxqdswyiluoennmjvsuaessmwdkluktnoaquhzcjcxszmynwykdkvbdehvcwcxcqytqzbpg..." 
} Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 878 { : "sdkdmzznursywuylmkzbluekqncsgrxdixehafuwtnotqjxmurpizfhvvbiolsxsvbwyemmtzdypjipgqeerjsfwnpbuvdiaoghdhdloupiudrwuybllgtzwcafmiohpaddnivlrouzgwtzdvicoae..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "seaxpvbjdkwiuadcdpunotoeyoqydsqliyzfhubmzuiqmpmsdfehrpbfqoxgygqpkynniubsebhwdhhcvnjnmdgkslczjuwuxoidxanycknchnxgthywrugrsiiywmxnbzwabchclwqagcfiuekmoh..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 980 { : "serakrhvotxxzwdzippxomzalnjkmuxpsxzilvcgujiyfemxswsjjkylesxxnyqcurbncdyodqkvissiqodkmyvnlvqmtpfsgvfvmweypjenlfdzptekwvseavbpulhntfwsllsdowtdoksfkykylz..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 933 { : "sfnzvhyojmshulwxpjyiqbglipkloaomrohmqeuhamnbhbqkjcfxcoflgmcccfafrppdfoofvvjumhzpxhqebhjhjdudhnfvpmkblxkcctrdmynbnyazrfpfxchjbvxwidhuqtuqacocwydjqtulmj..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1001 { : "sgjlrdyxkpikogeqalnmxtujpxeltantuhptxsqqtojmicdgwimvgsuzhzethfuamqufdvwyoxlpqalflmnpctqmxwhocqetroadvfzmfngcfuaqjhuygjpebavpzjygrjmmhecimkscvtoysbqwqt..." } Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "sguznuymnubmecxnuhiqvtoiegilloikozywpqbnvoongyskndvachiqeoomgryytmhvbzxzvsztygrjqbkaoxxddljxpvanumblulagfkztekpawkvrbodlnbobaghnnuufpeoebwlsxibjhmhxjq..." 
}
Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 903 { : "sgxcsxgeltalsevrjkdzpgttrdamafybpwahonnmjyeetjjkivkbyvrhohhpzbdxryuuvdhohzbcgxnbnswyyqctiwymwtaapxmblcpmgfankjgwfyqjwftdecvevnudgdfczbzfpuiedpcbwhytqj..." }
Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "shkkmdeirpnedmkqdotrdzzkypdecgntbvwjkpqjzmmdizcfcujertdnffsfrmjgkclpkgzuhnhiljebrtlgbhgcqybwkfvxyiyiwtlsqmvxpcxqnmovvsirgvkoezgcugwdotrdbrolkwthfndfch..." }
Fri Feb 22 11:37:09.995 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 861 { : "shlrtyqneroycwzvwvjigqucohzutbmsrcwgbrwayenvzujncnnehmnrbhelsgcrjuhkqvcyhjjzupvaudmgmeommnwudlgpnvtimbecynpvncipomsypjsdddkwiymdslcnazbxspgbpacjdbjxxu..." }
[... many further consecutive "Btree::insert: key too large to index, skipping test.test_index_check10.$a_1" warnings between 11:37:09.995 and 11:37:10.000, each reporting a randomly generated string key of 820-1011 bytes, elided ...]
Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 834 { : "udpxfnatokdqeuvpgptviwwzdkhrpbpztdmskghvycpvxokxuhewaiaejlilmvqbhhgcnvtljnidrbhpxujkimsqajjverlhpxualvsdpvrniopbsyxetbtmhkceqpderhlbkdkmfeqpavpydkaqfi..." }
Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1007 { : "ueeswiesjqznzzxwuqzudjjqopagegghnqdytwqvwmwfycmvjpbvgdwpdtzwzlxqoysnueuavujpcutfqaxignlejvemqiuetzrwthhpnwagwnliamrsbywlqiedkfkkzwokxdzhgfhtkksyzrkjhr...
} Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 904 { : "uerjygvcywbqarqjqvwcsxuocaegoptjbpbensaxvxsvcnyqkqunlhfnnygzyhlzehmtiylrqdvubrmyurnsvyfvogwfaryykxzuzaeeijwhztmqanqfjrcwsdmwingndxrptmrkzegptstasprceo..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "ufgkliblatzuczznkyunjohpczgxhbrnewfvvgfsuywvqprmhfzarqfvkooopkknzzomgvsouhtyqugfgiyqgeafzjnbwykjzyrhorfmgglmemubqmuadizqzbkvwrwhbvomzmoxjwpasvperomzwi..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "ufqrfjuatabsqvglfxyxctgoeldxffoniozorlzmlwpydwtwtlptpdhedzwwyukyifyqacpsuuscemglrpodvzpdihrjvspihjyayuuejnhotnrtgixlymrapjpnzzauixvhedjjglncbdukkicnhu..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "uglkkpkqmjleiofycptkefovjenmafuegfygomnwhckrsnynemylrsxrwlgjqofuwerufrjjfucrcrkziwuthbqpyculisevmhqtktjvydqvimcdyscrhhysianmseidzindsgiofjsdrnruiomylp..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 945 { : "uhcxkkwlqlcosfwvqkxupfnduplvbgqadtiuxrvrwgttdvvvixisaqgqbdlbgwmeilzoxkhdyimpfglforqohoynlezftlqogqitztamocitupovqxidfumpnovvahprnngldhxofwxtbliqovpzzs..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 916 { : "uhmyyhnqdtnpboucqtdiyrwqyloewmmxgeasjdrsgdzjnypfhnbituveauipspzakruowmwyciizhaqqfuhhrcxehdvaecohcqggcenjcmlqjkohczqlayoapwqsnqgmwcopzadsjgomduwqkxjghn..." 
} Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 952 { : "uhwbcyksurzhlvxxtoszlsbwkdpswcksjmrtpkiwlkrrmiwlljhalrcpjdznygrjtjxqewyvucpkaphydamxcpaibpnsnhgpajjbowwiiqrhhaywrpgovnkajpwbwmkuvednvsqrkgkfrvliqmabfe..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 903 { : "uhwgwsrwyvwjrqtjsxzeijlmttktwfqbsiikeclznlbddwwkudxbgwbtfpbwqztykkipdagzgvrggoqxkfsezvnejvcdloxlvzrxlptnlunxyjqbwvaljhafkvrguaqarythysoxcjzgfthshbxbja..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 839 { : "uihoqlinonmovmsyxhegcxdmfilinsnspcekqhrbdzpuusgqjxxcsxcsbaksqzcfqgycvwkxsrvgjjvrcuapfwyphsucbufwcfodzfumuomezbkeuhxnonarjxsrxxijylrseufntsjrmtamwvjnac..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 889 { : "uijesubzpncuumzaessuhvtjqryppixlvvyqeikqzhtqrywmdluttaylqovqbuptmklqzwzxgnvdaimkkipvloqxynsuqhnqjrzgcxomenqswixapgpkedrhtovbmcfvdsjabfdxodjcsvfmkqaedk..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 933 { : "uikllzojdyxozjdqjodnwrmibojjionukthcihgceaxwlnlxrlbmiipzmcyfdqnzdokeyezqlmftvnwhgddvcxxzgmcubxkkmmrvadncdzemvoeyoubooaucximrygwkwgpwpbwxddxwljdewrcjly..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 861 { : "uioyjdkddhtphfbrlpsrcefbtrxdqrctuigbbssjhsznhxdsulcekulqposeysmvedlhxbsreymnvtokhcofpffaeibgitscbfpwueeigktsbychspokwpaqgjzgsnrotccsfgjuhjiswyyxxtkwcj..." 
} Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "uirdngbjfwpfcigdboynlcflyhuwetyfifkbaxgrqkwnqipidsmuocfjoabquauhwejkirccpwyytootlceoktpyfcfvozznhmpfowzxhsuqsxicfgvydjcfztmcjyrxfuonysevqwzuydbtlbsbdz..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 975 { : "uitjlehxypmicchbwyvbmmnxolkzigqupebdcrtodpamcdjteeilukigfvscqepedctglzxdunqnvtjapizebfwwspiurifrparbxjprxjwcvdiaqgvxxyzzxinqsfbdwtfqeosleexndchfqwpbgz..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 871 { : "uiuwwxxmnvvrbhoindiorlpymgfiadseentiowwmzutcusthjlhblhorxcmxipbqppybdrcpqmjsdgpdmmsbrbrojxnfrdkhezyzuctcovrssezjsouuedlwgcmctbqamwnlwbujkqyrhvujnuautc..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 885 { : "ujcyxikolshciptmbogicjzqezgzjcjipndtderrqchvfqlktvorqyywsvbxhscixorxazpedlqhgxdvwbzmrrmkrkttovmugwcojgrxqiintykgxrvhtwykgvskulebgbrtofydynfkpbevqlgtcp..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 922 { : "ujucvysdyjufwzncshqygvuxjlhwekabiqtewyiklvmbojdnokhzgzujpixqjsfiqahiimesbramvzkpcnmpfdzzuzxwqncjlwhqbjbreagwczasflqiyloqcreuwsdfulwzvcqhqeelvckmgdmmgs..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 877 { : "uklnsprslnmqvupxqxbzmnsmuyawbjcxcxrtjwkgpriowqgrsfdhvqvlfvjememzjoxeavurrcoqjfzuaxzrvswxtzuoyxrqvuoxpfcxqlhtswwevrgkdkeqxxjmzkhhvnrklswqjylatzarjvldun..." 
} Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 844 { : "ukodnbadwgaikzueywagyvbbbdclqlndmddkcmpbphtnjkwvhpvnqlisisqlfhqbvdaogxcsgtjigxsvixmysgxxhpjyeccnhlswslpxipqexuqlqfuvsjyhanudprwfivhwvijenfxwnkbyonezse..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1006 { : "ulqabnvxxdjvsppnandtfijssxvoyflkgnggpzjqglnjuvkaeiiecqbtvgcqljopthdkuwuxfxkqjxteoimsduguyvxstgzlhrgxptcpvfyztbwqysbktiecgzdgbnwmibnjvexinoyyclswjdrkcv..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 934 { : "ulupdisrdrwhieccvyptwavbpnlktaborojbdgsekujwyqpznbccylkorqgsjemsilxtrqpfemmyxpofskanduouguvuraqdrnoujexytyogcalbuunjkvovubzindiotyntmwolqeuifcgwlvtolv..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1007 { : "umfvhfqndmvhxilwzjcjvcwwvlhehafehncmuljmwzxsfqzoehccegckpryqsbdybpqurbgznachwdcpqtlohttutkuiclnuklnsnkmiexqrokxvxivuthuebesjovuvbhlitrtngiypvbfttybyxq..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 854 { : "umhwdulueggkvnzffwhmtbyypqigpylfaomdceitrzcxldxmgofvfyvfepthnrmehzjsxsrnxcnbxttkjmekwsalrrmdgprmtmgkfgwrarlytwkhjukjsithguysbxfvuqmokxuxhowdcghkelghod..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 998 { : "umsucicdrzxwcxhlroztuxetwkztcetwmboexpvvwqwmkfdhpuooblpnfwptxxerctayjexkupjfzpozxjhucwinbltjpehzlalvayydpeljbrfdsdqypcvohheaagemyamaqbjgyimpbeqktqlyjm..." 
} Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "umvlktsemjuqzbhnxrbnuchmiqlvfhtjabauisbspweetmfllquegrudpyfhqtzwaofzvdihlqoinfjfbmwkfwjlhtxoieobeuauhooyyskpgnnadghctgelwvuvmgvmuduhqrcoatxuhrquysupop..." } Fri Feb 22 11:37:10.000 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 979 { : "unevkfbgyyqabczfqrbhannidhmfqvlzkdhqbeqdqktmeqgrjhsusjbwaywlyfhpphgzsxcnzreyxhualtlpbgfbrmevortzzwqxdfidczyywzjgedmhgagakumxaricnifryzymjzddmshgffbtjn..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 889 { : "unknroauyateqimwwczcinxbazvobhrfizthopqvwzhwfoekuhakabqxywnqjgdypxedbsxytsunrrwfngeegapogppgperkvhwhpwkbinxqhwykzhicykthxhtdxrnrrtgbkzjsrgjbbdiznfifrc..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "uofsrbytxopuzzpwqeftuwyglglgtpryldqueeznbbmumcdnnvgezilupxtwfllxtwmffnnxneklvvcrnlnpqnxmoztrxcuymsbonvwmvpbyladxqecppdcorvllxulyaabmxrjjiismxiwisnlwur..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "uohecqtjrdbydzqgcwrhiffvjhbiygssnxkjpektmomojskdrzgsedobysbkzignvutsrnwsjcsdybkpxxvzbwpawaeamckdeausjfvcxebqzlbjijghvpgdhzqkvehiffwkldiwxzurhteirakocu..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 959 { : "uokdjvjuuslavoapfypkfeljgstzuumalwwomydwuosjvwowawtxmhmwopuysihntsobtdujduxodojmtcimtlmgezymwegtirpqwqammjvuhmrhtrlldyututjsovvedfdtfxapqbexkxdhmnepho..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "uopkfabbsgdplzoyznencqipmosbmbxssdykbvtbyvfoaqpencwoijxubearulwfjwanwqlpgmbswdlzlckccbahceszxlnhgvyaukacheoixtnbddfabcmxmrkhzgwmpiilcvdrhfbskndssiajdq..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 888 { : "uoreelceotqvvdpsdfntmfzktmkkghtdsaizxherpvwzaugrqrmfjyqgykvuzrdbpgzweemilmhqkauuyslnzgumbgcfugckmdsmpujcihxtsleyheybxlxqxxbbtujeqmmypbnbspndjzxcyvnsie..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "uplgmyyrscfvebwpdrpwyhmaetazdsnqqstxzdqakitqdasuzhfuoihwoiarphlnjngipvhilgcixgkxubjqbmzruodcsfgkiyvalchqtpsokqextlvvcclcfhdfnedkrxbgtpbgcdsoznjbztyksa..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 936 { : "uqafrzoshvsitvuzvydcpobwjyrfalblqlzemimkvsuoacbooguwbpsmxjitcrmmflfgjyhczjwvepnzlpzqhzswcembdyylauysviyyhghzdcptwmulllpvyusheuualkbrpnngesxqevrpdkwgzh..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 992 { : "uqbkyljiaidvkvllswroouhayarxjsuepdfmxybhbuaxwqyowtgaerwvazxdjfxnjpkcrmpmsqrmkbsranncphtpewjbhjnxxpldyohkkkikebhqzrvmivokjthwsixfdpezzbzrtredcyukvpnjnw..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 897 { : "uqoelniewcaoguqdjmhdyanvkyqpamfpipktdfefjewzazoetgjgoweubdbfmyuzziswetqfiahognuctemkwogamzwzfewmrlnkqtooavqyqouxsyywbmnywdglpnmshnkaukuuqutqiudvxaqosp..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "urphoznqvgugiyrakigppssxssrydxbyojopodvuybbdhgzdnsexgvwtdtlqomuhcezjvtlxfutzrmlddalwygeyvvpdsmflwgaehvyxrawmgwuvicqzwfkcvuapbbtppcvombfbisjhgkcqiyhtxv..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 977 { : "urxpicdmcdlhfgqfvuywnzstixrznfqblzouskjvlfizxnbbxxfzwgjegtyvyxtimyeepikkkdiutkjxfloncviujdmkbmhewgzhzudrdfmvwtheuarwehtsitidkzqatndjsazhxpwubtwixdeysn..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 960 { : "urzpgwhbgrbxvctjjvhudlvvggbzpfgyeqpmzbkwodgyyxfcfipniflgttjbtoguyqimmsmgudwwqzrxkyvugzgrbgrsvtgpuycjdiikdlcywqdsnscxypcostvhuyrdbhidsfrbnfoktzkcuzzjrv..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 893 { : "usbcytefsrgefjuywnyzamxcvgoxkfcmynejkkmjywjhtbflsbgrldrlwduujknqeqpqnbqgvfewknbxqrhubqpmegajadmpnoxtiuubiptycnwahgnnhglktiumdytqgpmaozmyrpehidsqnpklgn..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "utanjbdzdmtjvihcyfkrljdovlwlsxkcjhzrrnghneesfbejkdybkbpkqbkplapcpchaumeiooitwbwaxvbhagokugyuufxlfvwbukczkszercsxrpcwrskbmffcmaoecjmhljpmjytldpogezgeky..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "utennqqqqkajwcmhwqlcnexwlnwellgsbezagguwtqcltcwkenyzdfqiyrpneldupmeyjwmstjkutjxvyucptvswhoncytzmfonqmyffvstyqwzckpmhblcipcqaztesnsgngrocvfhmhnmlliboxf..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 888 { : "uteybfevqlghfwznpcbikmriqwahinekbncucbuabcssesbpzalaztgmsjmjuazcpwepdiljkvhddntkiyfhwodymzwjpgssreyopaomuylcenjkcvmtoedpqztrginjaydswukoquuyjqykjrowjl..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "uttxjcdhgtqkuxjxarvzqmemkfihgtuzopqwumgwdgniufcfkhaltiuuuyyblrgyrrrdkozholjgyugpupoehtbynucoiehpaqjbsqxukpesdlhitcsuhudfrgybxdgwzvqlhlpqcnhtkipvoayhyq..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1001 { : "utvldvzqbxmvrocxnkyltdufdbrcqtrmfhmvvfktvncsyggnrypcayhvistibnviwoeufojlqmlczvrywivdhxutdhkyhvmnsxojdsddtrnxvvbuipwblezqoqvsudfjhylnobwnxzqymmscvvdfkj..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "utwjhlawhcqacknfubxcrqevamyjywupimgplcsjmgfcejsahrtxqlsbacsgclkiuufkodxpvgxtwlaxmoxqdmlyimxbyodaxgvhiiaeovsgpjdexidzkchfnstucbqlakmgmphuomzjcascselgsw..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 929 { : "uuqgnamkwudoqdtxyvrhousyrtzrgpfidrtkhsshnamfntuwjqhqmuxtsquiedkflajovbgaylygqgryhradgkqgtxdvekgilbslfoarxxfpkqeiiasthufqusxtqmiiprjfkztiuvirgateplqvgh..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "uvgrwobrieolgrvgtwmxzkbixnbwjwcmrhplebopcothjlqqfvhxpnnqhdyriwjjiuytjnsdviqarwadhfrplqwzrpjrysetqnxrhuijpmdwxuzmxopfboajrrewfudsohvrepkflpymjfsdyyvild..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1009 { : "uvlnrtnmgusjfxytcczvusmwktsiqnrngvumvpmnezxkqpxezpojvncqbzamubscxkyksfppdsovjjbgxnqqywgrmimgrmdyokwupbrynalvoqotmptkysedjaesnhzjqmrptvgpgmudkzimwityrk..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 918 { : "uvpovsujqbxnfssovjvdzwfkzstywzwysyiciavyosexndrjlopxnzyuiofycouqjgmyrihenwxxzrvpchjmmvbxntdmafeljsxixwpvvgnlgrybuqvdbpkguqebyqtmyqyrfrbfqwumhgscddzioh..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 948 { : "uvxvgqqrylbkqfktrbpobvjjpelybipvheygllcmofehyabjfdeupyhxdghhzlgeuebolqzlvajkzkbctfbrjaiaznraqqnfhiamblsavqrylfvjzzdupdruigftqzeljkvmbumwzvfcyvpvdcnppy..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 963 { : "uvyurltnndvvgihqptalwniwtewacjbhfmjetrhziilrqcxptcvkywwpyagcvdnruvwgjgnuzmihavndnyynlygcslkvlxsxlvwnqccgwyitjoszmohavqndskgkteamnayxndnntjiikirlqqailt..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 984 { : "uvzsgqprhznmsyapgwpgxybtsajbqtuardjhltrgkgemghxbmtjjfwdunkvpbvhadxpgvigilrwlzonpnlmsrohhamvevhhueacqecifcistumwqtuqrfmprwtiapakimjugofxibdnlbgfvkqckco..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 939 { : "uwkohcimkyhvlpycykfdiifixxlufaghniuvqxqtpazbilwgzainipovwdriqkqzeartfrobpyszemzbvoxypxvtfkqptsvcsskvgfbimgcgfunajbqxebfwkdwyznqoatbunnmiedowfjxnmfylnb..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 983 { : "uwppjatkiukdwxfebvpwphrkdlmjjjzztgjtkzuzaxzsvsmofhhvxjlydjsunrofexmgcrezqbssrepulbniwsnxzkxijwlawinuwaxdkqgrhjkwdcwijjobaxiisrwytyctbekiswjsgqjuxdxmbs..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 832 { : "uxnnulqniaktmhqyuhpqxtwfkvpktjjhcijgabfdminesiidyhhnqeyfjqcvufiwbrsbykbzdfjcvsgsghnwhqteppxnktgvpzbhjywzoncjgqgzzrytstzmemuoouhmhcetcmfznwgjlpzczrosvl..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 866 { : "uxoqkgfhhpzxndxcdwbmotvbehcmuokdoufnmncciqxqiaqcumrywiyyxginvlxsjyhjajcluevhoomslwzuoxtcjhmkzerisvpjhnloopqwuspjfwtmenbuoqzwrlbxhsvheersvygxgxnmjftabk..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 880 { : "uxpckxqcmvnuyngimqbqulgqpiqurtpekhzyxvzmowoaozlysczdvdfniflwtezlkxvdoimhhrvazgprtzxojcravbbfyohupzomjpliqiodvdbtsvmloptizujimigkhnbfmayjfpiaxtzgezttnd..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 957 { : "uxvbcpyczhspsfuydvlzajhwjisebnjhjislgiidhcmvcliobowagolhcblipzsfgdeikjgfhbeszfyqrdesmedysgxlrjkvdvggbkptbbrxonsncrvoqinwgwxqipdtubwriwqnserzucbbrjeabs..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "uxvcufjjqnowqjurndofuzcdqvewtlfkgdhlijwkiwurbkohtfalhvifpccxovqccnxdjctaodifetjwhpakcqwhfafxncpeksicrxuoorryvwvpibpuuqveihyquujyojdupmikxqgvllztgbvvpf..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 883 { : "uxwnqpjzeyyntahabxuhbmqsafiirijdbtjlsbziwxpnwbjnnugepmiuicucouirnbhjmnwpscauimsojolsyeeyaamixxcdnvmebxvcxccbaccqkbaklbovixpaiolzksibagxvqahnfwmeoshext..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 969 { : "uxzwszuizvrlhromtzvzcbhcjvktocfqrsftbhvjlbjsjifmghjyoirskzjdmiteriyyhortaahmrjerzrntfhoveirvexlvlnvyvllcdrwoxmqzyyfhqkcgahnmzrcblcaylbkkcxylbualcxwcvn..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 986 { : "uyhzmbbauwalnrskvftgxrottefcfrpcqsxhcxzvcvhqrpkkxrolwosaztsepeayuwdktxxbbrdxhwlywyqowbcagldhrznpxzkkuhfppidoxgaksteujimmhrgcnfgqfctlraduiagwdjescjzhxi..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1010 { : "uyoizfqkiuisnebgeozcrnnyvoxiipgnuzdjiooepazjpirxuohrfjfpsaxlkdsmzrmkotwzkjvmrgxyznqvhzjhppgppnlvuzhsevkreyfbxbnacnqjiiqencsyjhpyqbdklqlqqdfezpvfrvtvha..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 969 { : "uzkrhukwjznortzwzopqpcwtrsaulvpfbqduavrwzfvjosespzbmtmozrqihledfivoxnkmyjdrhentnnwdwvzxpmsfxxjplefizliiokdxdanwmszterekezumluxrsamruvwrjvncmrkrxbxkmmb..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "uzlozjscbmzijjvrwmqgjdwlisbpvtrtsvmythlnksxjqtulkovqvlvvwacwsqahkbiiwtftosscvqxqttwoeqdqieqrxqttortkkuupgycqniknkqgkczgkyqsmduenruzjwzdgbuivpoqrxubivh..." 
} Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 831 { : "uzuzkgjqfgmkssdtaynhfjbmtlkbwojbulkawxsnxpbsdvzijdltbcqxmzoyodezkiceybjzkktxmvlcnspbwiafjxihxxecrihgleuxxvjdxrvzlhajbdegzsrvbxouluprgajaexbytlhwrbafau..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "vaeqqlffcjrqirftoifegfcrpkiejarfuwfiwmpwbnpvgccftzdwztvfoyuccyivgqfunaoumnrduacgenhexrlearuqqbdrswgzminhgiwqnbucvfgzbsnmwsekjxifqkhtbngcgfxjopglulkgdh..." } Fri Feb 22 11:37:10.001 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 979 { : "vahvudobjlrajdpxgubfdrhureabrzbtqaluxmqzswwpyhyufcgfcofeqmdylwrfmsuehcdornmcwuoozflwhtojfjmfdyocidygddyimtdxkiefxlynyfqunxzuskktvwzkupxkcgwqcnnkxlnqny..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1011 { : "vatfxofyectbzmjufayatkkkxfhytiarwomuoqojisvhtvfvokbwppzqhvwzvfmbdoqqzcsntqrsxjuswjsdazcvixxxjycmudycrhbspwxpiaqnbhfuxwmqiecfdaizjyhvbmobgshmsrbegbybdn..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 928 { : "vatiwngzkmuogcmarwrrjwpfhwieytdolwhrbpsrjqhjesygdstpogzvtrijfihqwjymcjxjmycqswxsiwvtckkyrqondjabckxljnidcthhbnblxxmvqkpimezzpqrbfdeiujunuosdwjlzlnemkz..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "vbiiycvexvswkwxuqqaiklkgndiuycqtdgsdraoxnfmodllkgyjbqenqftmnfqkapojxkemjkkpbstbaffnyiamujfwxgrddmmyqtlgsozbljfpdzxsuyodloomgntkthiazbcyfbjwhymllchyytt..." 
} Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "vbsufrrdlotdvmyaltskdrrdvhlkvqpzeztcqupwprpuzpfjakxcuipywyrxyvxezspbaguygeuekitrfleysckdjkcvrshlozgylzgobgfrzzubuhbhhhdqyapcsmqedvudntfisitucpufpmzknb..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 831 { : "vbzcxyswoafwuulhfothekpysvtcwwrpicicgrecmcotfevhzhkehgdewxmkhfeihanchidwymbrxqtptgtqcapuiolndtjebyqbfqtdkxctbyntwvctoihtzmwhvvebxnjhpgkshxsniytdpaqaaz..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 979 { : "vcakysydlfqvgyxotphqsubxxtfvaklpqkjvodznamxcyhppmwhedmqcldjbsdngxwiyuqwcxnklzkdywerzfxfyydrthpadvtoymmudzkslxvjqifuturhwbzuksdtacgywloqgshrkcubatvbwqz..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 849 { : "vchahueqsypgwfsvapkroyrmwscbihyafypjlcfnakqrtthkuivfngkpponalegdgvpviiwtgcclunvoplzgtboeleeoubgdttiskwdyqfkghcadwbksbticvopmmzltvcxpyckjumcdiybulyutfq..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "vcqgaqoqxdfzuqyjrpffusdxjsjurstzqhruhekyawivhoplufyuwfvbhojmyefiqsjguhimowfmeosgdtithvhzbbogxdqlkijqedhvqfchfercvywrxhcrsmlpedoqqtbflzlkemxzubbgzeupbk..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 846 { : "vcvaycftqprutncntzulwztjpuzymjrbiwvtpmoqktmmodpdirkrjdchfhvbvgiwewzkuuzcpqqgyjtujnapzjjpulyjzgituuqahyhbhoonttsrghwrjmblychfdjaqqmaoqsqbpbnpnoennycltc..." 
} Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "vddjiootrtcppcdncccphrhlogpyzcgobljuncexgcuosrrpzbizdiywxjumreexeigatcelqxeeamtsepcveazmrqnyejmthmjwhsfvaymfqznorfbnchzzkyrauawseaohlvwdgdkkxlglfpkmzo..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 903 { : "vdjicaavdgrnsrlmzysxckigibaqpdvkjkadjkpfufnryckwefncyqslznjiqgbimyhvlcddsimrxkfvbpdxfwtgwzmiyargbtkqcetvgfkxvnlzdexnfzkonpukptrygztqcfcwbsknwelildhzgi..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 879 { : "vdxajqhizsxlnwnjefcjjmibrfdroyuqxafpsdpvqldjrpcqcqadfcbnuohhjrjvhqafogptzyqhseknkmgifnydvjvnvrwglujmhdosnrwzzxzenthsjaljwwcukvlmtreioebdxsapeilqybhkfa..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "vdytjdmdccbtndalufuycjfjuhrevidzunmmjflkcgoleehfamdudfiunegmfzjnctbwwogvexhrkbfbbahycakjwulrjucjcatnoszsoldidhbmqcshvcqnwvvmtnepyldulrvogakznwcdjufomc..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 917 { : "veebqdwaglppxmjdxpvijbzyncrvujckzzxiybafsrqeebuejzhrwskjdnuepnmixqernzocomiqfeyvutsmujduemubrfiuqieufiptjpebgbfzcwpzptipptwqjpkhyrtncgitlyfpurvzvlylyo..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 953 { : "verkrfrytknzvzoywrvkeexyjuarhiymevzdtcdavioigekdzyusgeyvwkccrbffntffkyqrhelwrhelmaxjffvirktgstzbgtipitqyklxmcwqpjwoxghhvgzvehkttydhoajhmscwijkcexzjehf..." 
} Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1000 { : "verpmpkqxjsquzjcdrbsiocifjxgxwhuqnjewumcutcizqiyxaszizhnsabzximhywpejzmzddiddhqcslvjagmujacufpfdeeiuldrsouhlynjmeccbaimirgfmbsoohwzhxeghjfnqhurssnsmea..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "vevobhfwfswgsdpvvqrclfagztjjszjsllejbltncyqcysoeznpnhpxvxgibsbplbayoxlexjfmototdafdagvuukzvkocqemimbqnbticdaxborkkbetdpezjqyzatuwfddaztmotkcaqapadjuzw..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 973 { : "vezrwucsmvntuicubqfrqnbyxteipcopkcowyygqjojfzdkmfxbwdltyupgjazrakmdxcidijjxhkeivynukezkwlwibdoauakedajssmbdamvfqtsrtmkmjtbqzwrxdeiqangaektnptjmlzdkjau..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 843 { : "vfoxkuohbwahgqejnuqhlejqugssnhxxyujebymhltoyzwvititbygtxwmygkabcobzrwiktxjijtrudcaayfdjitglytdqqpmuxdlvnjotrhuzagxrjdohiukvdocovrqtblgjroguqrkmlyykjdr..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 971 { : "vftolagbnswuvhngiosltxxtyrjkmczklvfudzpwcvrkufsaakvjypsaahwelyblbooglalyotlpiibiwvzcjurvewmdkimytfamdfxotnforimnauuinocirhccxmrrtaognqlvmtsxpazuranbja..." } Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 823 { : "vgcweegtrbmkasppdvzhhwbxzvgdjnhjgymoalvyueuffdbgykfskqjhawydzfiujpyyozenxmdczhwkvtpungjrvqmjloiqtncqczksflmshxjskcgzvpgvhyysanjraxovtcawcivesbjylofeia..." 
} Fri Feb 22 11:37:10.002 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 838 { : "vgdwdzfcwoptxkwomsfhklkvrojunmtdekzukysspisgydttqhnibvjgahrnrqtakzcfgkwhdvfijnuebkuwqwogserhluhsysntpbvsbecfmlnhdfuiulqksgqpkpyrceuuqyvdshzuafpqbuxzlw..." }
Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 913 { : "wwrnwqltafrgkshamuqvdiofdfzegxwxmdjixqsivdnyfortmwuenzsbwrayfacubmtrmakixesheydukgaseenbhyfmxzziirfctwmvuugnayagfwmaezlcrfsnsitxqwqgycxnoxlxtoptpbuuac..."
} Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 998 { : "wwvfcxxtthkkkwaatkpdfulhnxqpldsyuhevnxunqqlkxvspzgjspeptrblmmyzgpamrvknqohhxbwfhhglacfruypfdafdctekfawswlobutrwcutzagimjiqboxzhjzirdavjonhxdtumswphcja..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 904 { : "wxmyczxsqjhwsntnlxjydcklxkmtxrhbyuzjsgfibwiyxqchlbjdlqayeakrcsjnchggzurdpleaqvsxfqqfndwhljagkxdwwzziyhdrtcbcujwmzpwdrkoyghdgoietahxujswkfmkrvbjeawskaq..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 848 { : "wxpnftmaqxktirvjckegkjurnmnrxaxfyrywsetyytohlztpovapythnsyqbprgksskpmnyztwuziygpzrjupwjppbnecnumbsrrwsrhqhaunrpkkbdkstsoptrclghojfvlkyncvulcpmybquoqur..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1004 { : "wxpqtpbgtldwkirvechtzsxjiazywwcfnbgnlqeodobalawbifygvymvulvtqvcqgympsutwwyvolzmjqkilysldhbbjumkvsiprwtwmohftwtingbibxcwhjqhophjvqfxhsbwcvlvxwqktmsosvx..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 902 { : "wxxkxdrqnirdzoyfhmbiheimxiqhzevefitrfvocrlgjyvvvsaaimqwjepqdywltcnokmcnmdmkkrznudzbblgwybntdbbucotshywvkoqcykvfuldkzcdeoqefiuefzlggqeqxwixvjonfqkdwhrb..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "wybgzebxczjsuzaobkrbbrmporirvfmylyvvycfohmeqefdlksqojsaldhaapywymagqyrnlksbriykbausbfeyusuhkzpatdvzwmcddyaldhlnlqjrpsbfxbbamzszpxfaofcurzwyaxnuapipevx..." 
} Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 912 { : "wygwsodiacxqbprxgvfsfbqnntqljvlqedrkneemelsjmtrwqlckmbfcswvzyvokbhhtzmbxgfqfofrodxiobctuchisbcskyjjhquvxwbufaekfkdqilcjnazhxtyjwphebwpvyzliwfdadgztnwh..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 826 { : "wywfqlhrbbkguaccfruxcghgwujuxofjnogxtrlhtryeegzlwnrypzbdwmwdzhxmylhmlajhzbfoqupqbgfubritlsgbgcbfbneowimnbljeofgdcwjcwezuvjywuabgqemvlnflagcwnwfvcylqlf..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 934 { : "wzgqamdxdkccfqcjmlpdbpscgerqpgrexbgvxtufafjtbjbptadtowtjrzrjvadgshwsoudgxjwdxrwlebuksofitcvzfwalmztaczekjdwkekzclqclxttuactvxtvylzjvgzfwymjctnbrfnjsbo..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 926 { : "wzqhuzkrbwpgwlndzkjvkgngorhhbgqaafcpfabuzeduuwfmqaekuyojvmmpzbpgwujhvvfzmrxzbtqhgdmurjnaffavuemzkrlgojfqyhndznnuglckgchbdcwknbfjqgboyupbtlgwesxcfdrbgw..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 871 { : "wzvyhfnkirssjmqeqylstfffkzcijtiwjvcsqapbruwgqkxtkxnudskavgcuswdgnbggflkwhrhqzfnjhpgqxjswkfztgvqqkiauujdqxwcebmmueedlkftdfjgyzaxxwhpimezlxxeyzixazugzky..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 916 { : "wzyuswuvgzsguhbcpuyejzgudyvcmkpkcxfupqnkyefklrmxvnspnwgmkwvvvejzbpfsbencmwhiafjpqpenszxcexvojrzraoqilahnofgzfkwlzqbpqqqylssosrvvskihzhcvkmszrksxkilidt..." 
} Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 892 { : "xagkzsvhikvukfwlseokmbejbyaqtlzvalebmaazgoiiifnpboaonqrrhyjzggiaxkucpgjyrmianhzkkjtiuvzngywjritmztlrlheswpczxcntqexqtybwhrgdsjqyrxehsxcsdutswjsisgkpfk..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 963 { : "xaspnphueebiivolxethictqvngqfbzefwpmodidwqxjyypovwojyfoghpjjxykfupnbvexjydqvuumqmyiqocejrjblxknfbfhavgzistkcqtyatakuasvogymttrqtkctdqqwknrimvxzorcyqqs..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "xbiroovpelxyhvfvxwpyyijhdytzgbsznnrygnlossqopcsninoqyhmibvxfyzmlhuwgljzhyldrgweszwyxvxyxxpmodqzvaootxkfjsczkgcrwjwzntnvcqlchsmzjwynucymebovamswdjwymrk..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 931 { : "xcmxhskreqtscjycbiirxlryszwkqwgasxhoqtxredwpvavmnympdvfmfuswlcgxhprmylawykxorqhxtkcuqgbpyllgdvzqlsjqojahzwozqaidsvksunsmbluvfuitjugxxrtjfarbtqrjvizpze..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 988 { : "xcxsyihzahsotwfcqikcsbwpysanmwztucegbwwdawdqlptmleqnaddqtepaahwhfwxxpulsmdqvykpczgttlslcjuaqeawzusjvyalxcdvqsgjsufjcbpibymqjpofzaqopblmsmrzupgxzjgwhox..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 868 { : "xczqgtxjehydveqrtuhfpjzmwbbxxvqjftnmitxzjamyxnyvebcmewujecqridmnqoatucniuyooiaclmqakdeblergzyrzandzyuxyezudhrvczcfkajgorzbihrbldhfroybpsmlcoexmklomiiy..." 
} Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 874 { : "xdezglpybajqtkfeqedlieikeeoettidynxrdguqgvetvjvokuecdwneretdrzgrunrleoyzyvojkexbabmltdziepvmagesxmkwnpltjnflvbaphjijpyetwmfywafvouogwbuxrjkmbtwhxbknmr..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 881 { : "xdkayeafzmgsnaxnmgnycfqnlpwndgbqkvnxiisplusrkaoiaukubihkrbkcmiusvjqkcbyoazimvqafmkwbpbhrjxrohbdaryzxyqgvoogssuqjwlvfoopvfgfccqkahfgxwebqqifcfdsousuvcx..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 950 { : "xdncjegtxvndezzxxsivxavhornuepffinrirlvhugzkexzktcovuncgzjzvjsyzkfffyepqonxrwgrpfccopganzlezxnnrnuojlfussmejnqnqysycpjahwwcqpihfkwiiicgwiewxngtbxekizo..." } Fri Feb 22 11:37:10.006 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 820 { : "xdqjelbeyvfnuxnelidvvzsjggngxpylucdzzltdigfjqiqgtlheolvijevmspxkpeokifbhlobxbjnhwawkrhsbfuqbwkieegcirpgtyfavgtircvwwvydbitmyhwkdexnvamrmkexfwhjuqezigx..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 830 { : "xewbihtvomrhstseteuqdcnglnwlgpmoxnyrkbzinssuekdmirrjvhcxumeokjigxzgqunjzvzndzdgvulocowyqidrzrsfryrcupseaffmneltsgurddsxzizdxmrbcdcowjkxfmkrcjtodwfzuzl..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 991 { : "xfbwmomlbyojrngusnheivrriwjqwjwdqntimzltktgvmvffocnxuakblavbypcyehwihxikklhvvlzlfqcebwxkahfqrqpfalkpduvwuapgaheppgstkqlbskqycuyfkesamurgezpcteihignpll..." 
} Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "xfeedplahvhbpmxahsfrabygoauqnpheiswesdrznwhsdeqipvqofirwnlrhmwmljyraeecpwqysizylzatabmmvcwqlmlgchdzaracylaezhpczygylmkmcaifracdevegobbpxngcrctdijesiiy..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 924 { : "xfgprvhemgrlebvzwgvumcbyckdquwddkgvgydppkzgbxdffwqfaqibbwfqehuvtyqzxatbtinmewuxpvbjmdceqnkyyfizcshjwtphveiquziouadaquuxsqowmasnibpurwoorbfdbbzlhcyzbiv..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 932 { : "xfqvnloeeissogpxbkhzttuhvjwedsagwvyffiolqnqvwhzwhoszoagxohcvztiqdklvademfsisbhamssvftdcmkhjqgdpeacalkojhgwztvrldgowmeympsmvnmcfmadkvlukczfthpnfrxablbf..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 896 { : "xfudjllcrxuacnyvbtrqbocxcfihbsrogoaxybfhdlkkcmmwccwkwwdwiohkuudoktaoynxcdnlewmpzvfukmubjxsirbsudwtzwswqtslipdiogqrvcvvzyfeqkpgjcvsqwsyncpzaavumgojxzzf..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 922 { : "xgjjiatvvqbloquzeewicvfwleagqvnocjtlbphmnfihkbkjpmshjxkthwfdvskvcodyhefjiscmzecngmgcbuorgjplszupudunvpihzsgwvmfznxvnjjhlefmvtwiydcwgfazgkooxjwmibfobea..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "xgtackdjygerwwcwahudvpfgpvznxlerzmdolunttyigzkcjaoxtslqvmywpivachididkqxcvxolxnqnuniupeezpmuvlorzsinidzzondwwfjgkxhyrvcdautuxkfcrqlcdmpkhqmfuwibzzcmaj..." 
} Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 859 { : "xgwfnqalidoccgujklhxwszysfpbphztqyhutymnuxzuwutjueiltvhikkbgoywebrxrriczqetpuqylyrxmetdkdzezrrqtudxtvkmtkrtpnheuzoogvsigidshjgqgkxzncalbkdmqfqjtvlfqtk..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 883 { : "xivhazaqsagbtaljzbvollcumahdpablxvzkfaypzyuomnlezvydffgfjyneohjobuenhdzeztkwazgpcbxxjcewmbpyynbviduweuavptcmdmhswzcietngwlysfnuyvmfeltnevwvlhonuatvmpd..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 961 { : "xjlndpsnzzgwrbazyqufxafiemaaubzmtixvorrroegmjjmvfuhmxjvosscaunnndqzebpxbhgskbbletissugzgmjvzunalthlpyomgiazuadascsqwqaksdgviguoabhqywftdugvmyykdsiepsr..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 962 { : "xkebylsskzogjtdgzkwfztuobobdtdqrxdyieryidosufmhxuavlydxwpgeweuipcmwzvcljjtzqjzazxodecfiqheembfndozimabbpviobnfukmvpajlzogwdwhwjqismtlyxrftjptgjvograbe..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "xkhfyoezthxkxhupnssieaoifrkkfoambvwsdgyxtxhowlyicjckvkwceryftsgdstohkyikuwfhxqjqhwisgynrdyavkhpunkdsqrwtzhoeherzhafktwvlgpynhhxmlzhinariteehyuxxsjkakx..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 984 { : "xlvfsvuywxkqswmlcyvlhuxyrkcsgicogycttnsekmukbqauzrdpxhzisxzhhguposvrlnxadkiclqilohcbzofizbqmktzmwrzutuszjppzdidrlmmfrenvwzxsrhljfgigcnqhnoxqwekoanyxsw..." 
} Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 856 { : "xmejlstotyswpxfvfrhrjprfnckzdoyyrgxshuimownnbjdryjgvsdbfszvvglkjkgtcqhficalgufgqzpbruzyftrlmraatbbtggheeqiytvgqzknjjmilpngwnvwrbaldvbhsmuatmbtbkzjazpc..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 895 { : "xmhbhkyidicjwafenygqqrowtjvihjhtemwyzipunszjrthgmczcehazlwbszlqgrfcpjblmuqweucgwtiapuefzxkuneatzbcynppedbngvqofqmwdhwznzgswyypylaaasiethhkhtbsoyvlvwhl..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "xmksedaajhpwvcokdypioigbgmyekxxplzawsbrzbxvnucvjukibedlkglyeiqxbtyqckxlnfmgoqkesyiehcbpzjwllsfhpiejdtrfutpfnpndhpsnbzwociuqlojboorxkuvzeiqrpkiidnhiear..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 886 { : "xmquxymnsowvahzlahiayyisghacosgksutssgmnildaxobzcftyfysixwbitivimnffpzrsybuntlziqgleucenmaaruzlkihlrqstrfbhmnowpowlkhsxxwtnzqhdgzjezvjlxktinfprzhffsjf..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 990 { : "xmsjyrvtrczhbuskwxvoidijqyybcgmincejljxobtiossjboiutugjursyunaupqqukmhuxutvwksuqqvqrnifswmozvmimgfsienrbpzuifksseroojfvqnsmwjoxgohdvcudufmjeucbfiknsso..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "xnjzxlymbmpszxurzkagvmqbunzalkxztxzysvudolbzjgkwtqksngqpwsmyqwpuirezpdnhsmywovcwhvooqbvluwbtqvdiybhxmercajjvnmcvyzglzlzkrfxcvhoucbncyffzdtlszjcyibeoai..." 
} Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 923 { : "xnnfvxdkgvkhsojwnxfpnnvvymocqqlwhuzuztovqchptjrvzcztxifcrfbsrgxpyyfgubjwyrnatzfgbzikwciktcoabxukoxmcxnwcsgwrchsswshdenbcpmcsvvvdpboxguiofxoyjysjhtipwb..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1008 { : "xntqjdvkcjdzrrimowxsrlyswrpoktpnebvkfdtwdysvzyyasmwbssprvcqpjtesdpzpcdcyiwflbzjnmcsxdigrkzpfvcttopygmrbsdpigwqquwtvrkeoxttduimfdprqgtyopopxjljmzjmrpur..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 828 { : "xnxjnbfyvlmfiukzwipdwdxzfwoiribmecebivwdtgeayykygthdtjlplsbtkqzmarwcygfvrynhdxzgybxpfnswgrldtbenqdjukhnrfwcuioqjytcluaysthbghjqvvppnwvbayraielrjcszoia..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 921 { : "xocmemqpeevgzqittoijqbhfaidvdaobwxumlwwaifrggcepprhgdumckqvxhlgtubepfhyehvgqireljetqlfbjhmhgabbaowsmkrdcglguwtfbitmjrygcxtzcskkgwzmjjhtbhkawyeiiseufia..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 944 { : "xomnvbuahhuyokdjbqcvrwqigwvqgxtkhucbkiypybtbyktvocnmcwiwnjqscbxmlyoanngmjorbpryeozpxmqjhoqpjciyltyurhavjdpbqjjvnrpibapuxsjznrinhcmytzlrbmqxactmzezcjzn..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 892 { : "xowmdfesphjdyksasswmzshnagbmdsjppudkgzwduruztcgrnlqxvdyzbfuarkiucnykksxhgmyheubpyalbtlxznchskqvzbrvqfidndpgcrjitifljxfkwwedgwkyqtvqwhzhehrltuxdcvlcvle..." 
} Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 898 { : "xoxomfsyhjqqptwxaamkhknmxznujnsrewazrffrfhhpdmrpgsnobbyfabhbxockbchemrnrbborheqkcsvgmepzrpszenzzpyrqywfuusyrwwvuedihyeuswpuetcqjyzbksjfwobzyjwrggxwbmt..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 927 { : "xpfwklrxadboinrlgmmfdlpgljiceuhnypldzqbwcgnnpfhrrzrdmrovfbgfwmgzgivleydvjewijkydecraakrwagqpfwuavdwltqkjpshhlseyyuaqimfufygenpudzrgagbasvbbbovcxmokayq..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 853 { : "xpoydsghwsvahbwtyjxljntigkbcnzydynxosykpcrginwctihyovulmjlogdoilkvupcbagrpzxrhzwhkixijuqmobfxdloqanlrfazvkxlxfamtqhcpnkpdoalwgwjfnjcovvldvaecuzzyudkbw..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 905 { : "xpqosyhwlfnxcsoowypuwlisubphbkrybeylsuzrlrxvdgplcvaietyrqyvgazcztgbagvnhxwiapogbnktwlctipqmglxqmswiuxcdbgirpbavicmdacxtuvxcajpvrbmbiartzhbbvzagtcgrnmu..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 899 { : "xpyhebbqoroxtqzbdgejuhfsuwupwzbxuodpcbklxmpqyuyfoczggoxkyoxcwdxzfvlmprmsxojflsthedfvgvrcagsdwqjcxrihlyfnepjxfnyfddnctfjhinnfmfkdjforpcjuvxrlisvwcmumqc..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 878 { : "xqcmqnfqnadwwlceehbsuyjogxkblrradglimahcxvloqvuowyvojbvkizkxfhpjonczhbetlmbrgdwqiceblumrdsvzxoatfldmakzqbtoxjgvnyzxkeuvfmpmrtvlpwlebwuwngtaexzdxvdhvvt..." 
} Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 935 { : "xqcuahfizhazprlbfefbkjkjsszizisikeefaijwdbuxnryltkzbhikrzcqmudewnwwilurggjpqixamjvrjblnnjbdqfuzstshifxxsvylihdufguyvcxtkmeqbnpwuzfrtcwkftcmkmdujzxolox..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "xqfwcqzxnccagnwjmvvtmirlbhpxfxoypsqsknagmmtilzviwgmzgveimzlezigfyyzuyhegmnkxcvmgvvolnboyxknftnwyikcyfnkssehdzmmnhihabzyyqravctutdkvxfpybotmlkbvyepxjgf..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "xqmwvxavrxrtajtmgqywvpotnfchqdxpzucbwfnpquvekuvzuloupdrvpdouqjrfstlpjunxqllxalhhabksewiurpzdfqhrxikfruweczdfvbqyrpqnsbkzsnwkjjvxjcmuqnnqlfhvcongqwggvz..." } Fri Feb 22 11:37:10.007 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 860 { : "xqrbtgedyizhufqdgcscwfhzqpulopsieyswdqdlohvtketvurztkhqzjhqryodwozjnzkhjqmpjzlzkqnwzzhkjktkovupxhtzaultxprkmqmsggyalffsqcpytxlwtdxlucvramitcqdzssyjhqf..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 989 { : "xqrlpvnztwaetlytkucfinfvgcvgsqpjojdtzigbknrkymbakywmifaoolazhenpmhmokbmifcmmfrskeftlsklgnskenrndcjfzmlwanjedkieqqerehvgyouqaafpwzrocgoeusklbwhyuwmisny..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1007 { : "xqvvtgfgxczaymiwxxmwehjfwgrfwjfvnbcdphnahnucbysfwceflcalxwdyjenimytkrxoijtbvmwrrjjljopxdgigjafgqvtbdrcxcuytvgfjphgsuqrkwukqnxvzjfenqdlzppkckhptrzkostr..." 
} Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 949 { : "xqzkfguyogoeldghxegluctejkxbpwjjgdpspdkdlamwygopwigskhwscczswkwhnselnujombiwdubcorxxuqifgvnhybkjaipgeabprjkwsbnnibqlzptlkzuvyiijgrtjkahpgtacdqyqubuksb..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 963 { : "xrppxoruncizrjjsrdvagdualrrhdugeubyvruvcmtbefgcayxbizbekavlmcsysklauhavzihlwqyzkovzjtzpoxgqzrnytblszrknftpwaqqoxkucfywfnyprwoztgghfdmyigvbttuncitfagqw..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 989 { : "xsbxpxeezkobimfpeybopkyqynyutqstokaddemtzbenyvnjppygukvprjyhhbmyckeylizgiuevveibbqdhasyavivbfbrxmfknmxlwnsagkbwigrntspwtbbstujxldkuygxgbkboyymeakghpix..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 863 { : "xsdyhmtoqaafyqfydzayfzuzuiglbvvuwsdozmouzcfffuabwhmgclpzogxobhuhzhrjtilrtaiosxinokpphlcdwprkbepixlofovmlyfyefdlolzruddplvxcosmgiiljazmesskpzcgbvswrrnj..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 828 { : "xsuerxtsnoumzkcjkieethbsfysvverfoprmyqruobzsocgixbixultnijrdhscvexjdcimmrllyvwhwmslpptrhlsipyvgjxlchhzeilhsdvcxetecurwztayaamfjswcuwgxyfgyxaxtyvegmrvx..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 855 { : "xudxjncojgpnkcnagvtfkmtfoehtguewgzdvsrljzelrvpzyonikxhsgdlxjpqijhlqfpxwekudaoloxejgbplhmsmqsvrgqzhocnxzgphglsgvzzolsxlujvcntjzrkfssmtbugqbcwzntoyknxuo..." 
} Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 997 { : "xuiwvhjqxgcyfioraqopbeevpvmdhsyxxdjdzsntrovfvvlccshaqnqkcrytlvvkaqjusbynttaengnmfbqvapmhaouustkowrneszdjyaajjcufbtamtncsvzxitxbgizfviyeljeilmsvougspdf..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 908 { : "xupipbqtoydbnbvogjpokysqbrfksimwbymhhbwinsteutnkkemuehyebrwswztnpvpmasdgfuhnhdaqjubyajjnzrpdtwutgvqobdurokpywdgobjasbscvlfucnavfcrhilkwpwkpkfrqetjdajy..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 995 { : "xushboogpydpxnlistvvliqzpsvyrlzubldvuugeixxzxsluzjzzqxpkpgxapgxvjxghmnhrrehcmxdcgaadakugqssechyrcgqdpcvqujjtjvoglohanuvkdamxylbyzgglndbbovodlorhdtjybg..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 902 { : "xusvtlrcmrykeauwltkmmlcxivhgnhppzpdnplrwgbaulgeyxdixakqjzyuxjndilaxmgdhlilrlpywxcsvayimunsiarpzfxiizfeyjncjxvotfvngiwjumlswudxkcfiroyatrytowemmccbzlbm..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 966 { : "xvprhlprdznsfskqsovwfxnfoocuxmlacijnpormgblivzwhcuyifbvsnoccrkkrmfvvgjdcpldxzvzphburkdumrzxxftrhveyfnzfomldcjjobrdmmqaohmrqldynojxtcrdxgokflpiemhudcoh..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 878 { : "xvvwxixxkizlpoyrkpxrtchnlzclxfsqtksygtdrbrkbnubqnmswayeilwhvdoizytjdvaanjdmnhocvvlwkciawfmwnkbepokbrasdxtonfkamhuikpjuvkbiilvhsulhpxihzxkcfroeonpdvvmq..." 
} Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 895 { : "xwmkqkuwhaibqnuzxttdlqmaqclivrskfblyjkvxnvfakeriokqzrksbsqwplfphokdffhbnomjmuoilwqomgwreqcidcbpzsjyccrdesmsuttupdqudgeqhpfalpjxtnxwuegwgbvwqwtjasgewen..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 964 { : "xxjzahsozswvcutivcnszsiupozblaujrtiktltvstiainwqwadigecfnmknrpztldnncirngyujmaakdzfbkjkwfjhpukpdbdxnuhedrrpxklcisnfkcygknqedbllxvtolniplmkglynxldixwcd..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 942 { : "xylhfoeekmyteploqrmzqihgcdtzvomdmjvxnayuptanwugkdgntwwvcabgaahzxxagorsytpffhezqgdzoccplixivlcauqmqmgzxnkowzkhfldnzljazghdxjiakyzuvfrlhmdzqpvavxcmgtoax..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 875 { : "xzeytfrfqmykaamiqdedsypagofwqordntayhpmduxwfhhjslykfsmyjhbevjnmjeimhreehzspowhizaijaojkgtpfpscxczvlkzqhzkkbkgwirjbuuzldpfqnmlvbbylkkqamodmtlkjltlrwndd..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 860 { : "xzgifdlgxyjbfvwzcnlhvvtaezkembqmsaobvwrqpkttgxtfaclaqlkgopknpangfdrecnepxkbxkpbmkrqpljaesldrluhtmxeubxuwhqkczbakwuoxnaabalclzqknpriigascfkqgttflymhzqw..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 920 { : "xzpgoxpxaazjdiwtfybhmtmkjkjwjptydqhrldshmlrlceasumppsorcjglmepsjetnnegnjdhzkhljdodcyyymnhxvvligbwxdloufpgkbhksvtzqvnnhgjowsewwbogexjlekakbdmapvaviewqc..." 
} Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 929 { : "xztazsbzvggksxjwfoxbpdxomzatoebaeogiwqsuogstvywpecxhskjzksvfapminhlvkklkbypvqdqgtgicefwdllizwdjozsfnkvzvugstsnprkelocrgsaefrmqhvonhngjikdjnsfwagtekrfq..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 966 { : "xzxgyskxadmkfitrmujqqaizxupynuowxqqxgfuityfqxwydaqqsbzxddmdrxustwwitcaaekbcfhsvsqvswyqrmhkrarwhnovmrrzjmcsfnuecgqiasslnmkcpmyjabrjhyrbicyuhfiqrlcupchw..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 869 { : "yalemvewhaytvthhokzvmkhaokyufzhevpsjxrrvgsmevuehssjnmcavnqevwoemagqeylzljpampabzgoduaeagialdewwzofesftmhosakmavwqkwrtcfpsglybafpkescmxqmpjitxigdczslpd..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 829 { : "yariweycywifubnvffimklybgfutonibzavyvszaodlkmsvkekvhzebhokfittjpjyqyfimftfqtmvsujhxmehruqppftkowlnrbxhguipebcsqrxpdslvokpvrrvbprcqqwvkxirtkznfuaiiknbl..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 874 { : "ybvxxmvifxbkypjymyppzrgbanwihqebribchgidobttukyrtkqlhhgjokuidocmhideubjywlmtgvldraxhntmsusfqoykemcjgmneplhojlmlwutjpiglxfvryqonbqnlbsmuxcjpyncifhhjgph..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1000 { : "ybwpwmzykkjalpxggkdsywvxowepzcdkgkatjqphwoqpnhtuvwvittpdvnqvtonybkqvmefheocpjovxgtzuygyvqvxyvpmnrcwcfjfhafxbizqzlnqqyxdhwpqsnlrcoiujiyoqkhhonlbpolwzwn..." 
} Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 1005 { : "ycgxatuogpecedxrozhekxvzshuyqmqorrctblcheonexnrwgwdftjwauzcuceygssekjmbvczzpfkqfnkclobgwqzfmgrxlhhvgzwleazguuhvdqocyutbcsvxsdshoxiletzpmaocsaaudffznbm..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 828 { : "ycttiikjadrnkfizukrwnwzugwrtwzxkpnxowvghlwihmfgfpxalpjhpaecgdlamfnshlfirazeiukyotdkrrynwvxqyfislmqcbykzmgmctantgkkmjacpzgkqamnilexogulwnuzsxjnfkvrsnej..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 978 { : "yctuorazdzxhuukqybupicyhvdhxsonujgkhlqkpjfrcmrlhwzrkqwiiybwbyjvfilzgsxoegytzqwdzqxuoooslpcgrlyrxoglrwvcmmwfrpeflrifcbfobskfwnkctegaqheisetmrrzawwyiybl..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 972 { : "yctvvbeuhxfxmgobvsxlnocuhhypwfbkoouswxganpcnkannjeiadscncncukevhlrbijfiniuqdhfxvimrztwkkaweprlzigkitcmttyzipbvbamjyodzfwbzenaxgzvhrkvflybeuyorayxadfxa..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 864 { : "ycwwuybldlaqnrcguxookjrxmnzkitwtkotbhldmhngbgoyhzbyueckjfxxrrgstumphrmsqeqkibqtxmzeiuredthghltzerqxmdqamopdprehrbevhjaarppicvotyzdjwpgxiqmfgqpzzumoysc..." } Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 943 { : "yddjtfcaplywflxypzdeocpygljvkvyvqcbvazyipxjfxdgpfanymhumujhdqldxuxjthjupugomjtclzoujcamabsyhfbhmlcxinviyqpcbxvtcqcyqznijuenbllalyfuprkbwceqyylelhhesmq..." 
} Fri Feb 22 11:37:10.008 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 828 { : "ydoumauncjhhqnpmtskthdvhiggiodjnkbcsplhsbhriizymiyqomqpxhvsswhktdzbqcqtkejcywuwwocmaiafpyxyawblaezwxqkufqhuvxtwgngydfunocqhkphqysbzinxpytqekvyugmgxnjt..." }
[Fri Feb 22 11:37:10.008 - 11:37:10.013: further "Btree::insert: key too large to index, skipping test.test_index_check10.$a_1" warnings for additional oversized keys (lengths 821-1010 bytes) elided]
} Fri Feb 22 11:37:10.013 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 841 { : "zzblfueubfxugcztkjjezwcvowytmmxrywyomvaisjdfwwvdhsgvroqowhortgvvuddzggynsfbzzamwvpluzltbssmaftikfifoerxkjhguhbgppkfcbqvisbmconrmspdocvsxovoplqhwvfeijn..." } Fri Feb 22 11:37:10.013 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1 853 { : "zzrevkxavzxzxojotkhlxbxlffutpstvmwtfudyindrcixleyabdmtvfxdparqfqufzntnsysowaiiyjbdicrkkntrpaefsqnsjytrszvjcezhluaceglqxiyfcuoabxvlbjoraiseeffjawbtuqnb..." } Fri Feb 22 11:37:10.015 [conn4] warning: not all entries were added to the index, probably some keys were too large Fri Feb 22 11:37:10.019 [conn4] build index done. scanned 10000 total records. 0.128 secs Fri Feb 22 11:37:10.019 [conn4] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:128305 128ms Fri Feb 22 11:37:10.020 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:10.020 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:10.020 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:12.857 [conn4] test.test_index_check10 ERROR: key too large len:962 max:819 962 test.test_index_check10.$a_1 Fri Feb 22 11:37:12.983 [conn4] test.test_index_check10 ERROR: key too large len:975 max:819 975 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.227 [conn4] test.test_index_check10 ERROR: key too large len:938 max:819 938 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.245 [conn4] test.test_index_check10 ERROR: key too large len:980 max:819 980 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.277 [conn4] test.test_index_check10 ERROR: key too large len:991 max:819 991 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.363 [conn4] test.test_index_check10 ERROR: key too large len:874 max:819 874 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.639 [conn4] test.test_index_check10 ERROR: key too large len:877 max:819 877 
test.test_index_check10.$a_1 Fri Feb 22 11:37:13.668 [conn4] test.test_index_check10 ERROR: key too large len:823 max:819 823 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.723 [conn4] test.test_index_check10 ERROR: key too large len:920 max:819 920 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.818 [conn4] test.test_index_check10 ERROR: key too large len:894 max:819 894 test.test_index_check10.$a_1 Fri Feb 22 11:37:13.944 [conn4] test.test_index_check10 ERROR: key too large len:1007 max:819 1007 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.118 [conn4] test.test_index_check10 ERROR: key too large len:826 max:819 826 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.141 [conn4] test.test_index_check10 ERROR: key too large len:835 max:819 835 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.151 [conn4] test.test_index_check10 ERROR: key too large len:860 max:819 860 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.168 [conn4] test.test_index_check10 ERROR: key too large len:908 max:819 908 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.185 [conn4] test.test_index_check10 ERROR: key too large len:923 max:819 923 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.213 [conn4] test.test_index_check10 ERROR: key too large len:894 max:819 894 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.234 [conn4] test.test_index_check10 ERROR: key too large len:882 max:819 882 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.300 [conn4] test.test_index_check10 ERROR: key too large len:982 max:819 982 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.330 [conn4] test.test_index_check10 ERROR: key too large len:876 max:819 876 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.361 [conn4] test.test_index_check10 ERROR: key too large len:863 max:819 863 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.466 [conn4] test.test_index_check10 ERROR: key too large len:932 max:819 932 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.528 [conn4] test.test_index_check10 ERROR: key too large 
len:843 max:819 843 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.568 [conn4] test.test_index_check10 ERROR: key too large len:859 max:819 859 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.602 [conn4] test.test_index_check10 ERROR: key too large len:930 max:819 930 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.611 [conn4] test.test_index_check10 ERROR: key too large len:960 max:819 960 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.695 [conn4] test.test_index_check10 ERROR: key too large len:840 max:819 840 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.716 [conn4] test.test_index_check10 ERROR: key too large len:1000 max:819 1000 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.738 [conn4] test.test_index_check10 ERROR: key too large len:846 max:819 846 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.751 [conn4] test.test_index_check10 ERROR: key too large len:1007 max:819 1007 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.860 [conn4] test.test_index_check10 ERROR: key too large len:935 max:819 935 test.test_index_check10.$a_1 Fri Feb 22 11:37:14.897 [conn4] test.test_index_check10 ERROR: key too large len:857 max:819 857 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.025 [conn4] test.test_index_check10 ERROR: key too large len:880 max:819 880 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.030 [conn4] test.test_index_check10 ERROR: key too large len:882 max:819 882 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.034 [conn4] test.test_index_check10 ERROR: key too large len:938 max:819 938 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.204 [conn4] test.test_index_check10 ERROR: key too large len:879 max:819 879 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.458 [conn4] test.test_index_check10 ERROR: key too large len:951 max:819 951 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.572 [conn4] test.test_index_check10 ERROR: key too large len:981 max:819 981 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.627 [conn4] test.test_index_check10 
ERROR: key too large len:843 max:819 843 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.842 [conn4] test.test_index_check10 ERROR: key too large len:1000 max:819 1000 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.864 [conn4] test.test_index_check10 ERROR: key too large len:920 max:819 920 test.test_index_check10.$a_1 Fri Feb 22 11:37:15.990 [conn4] test.test_index_check10 ERROR: key too large len:1000 max:819 1000 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.015 [conn4] test.test_index_check10 ERROR: key too large len:908 max:819 908 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.029 [conn4] test.test_index_check10 ERROR: key too large len:961 max:819 961 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.032 [conn4] test.test_index_check10 ERROR: key too large len:977 max:819 977 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.051 [conn4] test.test_index_check10 ERROR: key too large len:856 max:819 856 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.268 [conn4] test.test_index_check10 ERROR: key too large len:980 max:819 980 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.339 [conn4] test.test_index_check10 ERROR: key too large len:1011 max:819 1011 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.385 [conn4] test.test_index_check10 ERROR: key too large len:904 max:819 904 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.449 [conn4] test.test_index_check10 ERROR: key too large len:979 max:819 979 test.test_index_check10.$a_1 2340 Fri Feb 22 11:37:16.512 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:16.512 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:16.513 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:16.676 [conn4] test.test_index_check10 ERROR: key too large len:892 max:819 892 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.802 [conn4] test.test_index_check10 ERROR: key too large len:822 max:819 822 test.test_index_check10.$a_1 Fri Feb 22 11:37:16.906 [conn4] 
test.test_index_check10 ERROR: key too large len:861 max:819 861 test.test_index_check10.$a_1 2552 Fri Feb 22 11:37:16.928 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:16.928 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:16.928 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:16.980 [conn4] test.test_index_check10 ERROR: key too large len:1004 max:819 1004 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.139 [conn4] test.test_index_check10 ERROR: key too large len:919 max:819 919 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.171 [conn4] test.test_index_check10 ERROR: key too large len:889 max:819 889 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.217 [conn4] test.test_index_check10 ERROR: key too large len:823 max:819 823 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.243 [conn4] test.test_index_check10 ERROR: key too large len:987 max:819 987 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.281 [conn4] test.test_index_check10 ERROR: key too large len:969 max:819 969 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.421 [conn4] test.test_index_check10 ERROR: key too large len:908 max:819 908 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.574 [conn4] test.test_index_check10 ERROR: key too large len:973 max:819 973 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.583 [conn4] test.test_index_check10 ERROR: key too large len:980 max:819 980 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.689 [conn4] test.test_index_check10 ERROR: key too large len:895 max:819 895 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.695 [conn4] test.test_index_check10 ERROR: key too large len:964 max:819 964 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.831 [conn4] test.test_index_check10 ERROR: key too large len:828 max:819 828 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.848 [conn4] test.test_index_check10 ERROR: key too large len:939 max:819 939 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.922 
[conn4] test.test_index_check10 ERROR: key too large len:928 max:819 928 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.951 [conn4] test.test_index_check10 ERROR: key too large len:845 max:819 845 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.971 [conn4] test.test_index_check10 ERROR: key too large len:914 max:819 914 test.test_index_check10.$a_1 Fri Feb 22 11:37:17.975 [conn4] test.test_index_check10 ERROR: key too large len:978 max:819 978 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.050 [conn4] test.test_index_check10 ERROR: key too large len:960 max:819 960 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.127 [conn4] test.test_index_check10 ERROR: key too large len:967 max:819 967 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.300 [conn4] test.test_index_check10 ERROR: key too large len:962 max:819 962 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.316 [conn4] test.test_index_check10 ERROR: key too large len:829 max:819 829 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.366 [conn4] test.test_index_check10 ERROR: key too large len:905 max:819 905 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.544 [conn4] test.test_index_check10 ERROR: key too large len:981 max:819 981 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.575 [conn4] test.test_index_check10 ERROR: key too large len:981 max:819 981 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.664 [conn4] test.test_index_check10 ERROR: key too large len:969 max:819 969 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.747 [conn4] test.test_index_check10 ERROR: key too large len:949 max:819 949 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.859 [conn4] test.test_index_check10 ERROR: key too large len:975 max:819 975 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.914 [conn4] test.test_index_check10 ERROR: key too large len:964 max:819 964 test.test_index_check10.$a_1 Fri Feb 22 11:37:18.965 [conn4] test.test_index_check10 ERROR: key too large len:944 max:819 944 test.test_index_check10.$a_1 Fri Feb 
22 11:37:18.983 [conn4] test.test_index_check10 ERROR: key too large len:956 max:819 956 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.124 [conn4] test.test_index_check10 ERROR: key too large len:919 max:819 919 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.127 [conn4] test.test_index_check10 ERROR: key too large len:925 max:819 925 test.test_index_check10.$a_1 4160 Fri Feb 22 11:37:19.227 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:19.227 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:19.227 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:19.514 [conn4] test.test_index_check10 ERROR: key too large len:1006 max:819 1006 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.612 [conn4] test.test_index_check10 ERROR: key too large len:892 max:819 892 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.636 [conn4] test.test_index_check10 ERROR: key too large len:943 max:819 943 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.791 [conn4] test.test_index_check10 ERROR: key too large len:866 max:819 866 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.931 [conn4] test.test_index_check10 ERROR: key too large len:980 max:819 980 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.938 [conn4] test.test_index_check10 ERROR: key too large len:959 max:819 959 test.test_index_check10.$a_1 Fri Feb 22 11:37:19.967 [conn4] test.test_index_check10 ERROR: key too large len:891 max:819 891 test.test_index_check10.$a_1 Fri Feb 22 11:37:20.044 [conn4] test.test_index_check10 ERROR: key too large len:841 max:819 841 test.test_index_check10.$a_1 4804 Fri Feb 22 11:37:20.209 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:20.209 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:20.210 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:20.909 [conn4] test.test_index_check10 ERROR: key too large len:944 max:819 944 test.test_index_check10.$a_1 Fri Feb 22 
11:37:21.188 [conn4] test.test_index_check10 ERROR: key too large len:892 max:819 892 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.344 [conn4] test.test_index_check10 ERROR: key too large len:858 max:819 858 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.375 [conn4] test.test_index_check10 ERROR: key too large len:865 max:819 865 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.501 [conn4] test.test_index_check10 ERROR: key too large len:944 max:819 944 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.624 [conn4] test.test_index_check10 ERROR: key too large len:992 max:819 992 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.661 [conn4] test.test_index_check10 ERROR: key too large len:926 max:819 926 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.834 [conn4] test.test_index_check10 ERROR: key too large len:891 max:819 891 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.945 [conn4] test.test_index_check10 ERROR: key too large len:896 max:819 896 test.test_index_check10.$a_1 Fri Feb 22 11:37:21.968 [conn4] test.test_index_check10 ERROR: key too large len:924 max:819 924 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.057 [conn4] test.test_index_check10 ERROR: key too large len:921 max:819 921 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.112 [conn4] test.test_index_check10 ERROR: key too large len:836 max:819 836 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.172 [conn4] test.test_index_check10 ERROR: key too large len:988 max:819 988 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.469 [conn4] test.test_index_check10 ERROR: key too large len:890 max:819 890 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.546 [conn4] test.test_index_check10 ERROR: key too large len:921 max:819 921 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.645 [conn4] test.test_index_check10 ERROR: key too large len:872 max:819 872 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.708 [conn4] test.test_index_check10 ERROR: key too large len:992 max:819 992 
test.test_index_check10.$a_1 Fri Feb 22 11:37:22.765 [conn4] test.test_index_check10 ERROR: key too large len:937 max:819 937 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.775 [conn4] test.test_index_check10 ERROR: key too large len:932 max:819 932 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.778 [conn4] test.test_index_check10 ERROR: key too large len:888 max:819 888 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.898 [conn4] test.test_index_check10 ERROR: key too large len:879 max:819 879 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.983 [conn4] test.test_index_check10 ERROR: key too large len:956 max:819 956 test.test_index_check10.$a_1 Fri Feb 22 11:37:22.992 [conn4] test.test_index_check10 ERROR: key too large len:995 max:819 995 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.088 [conn4] test.test_index_check10 ERROR: key too large len:863 max:819 863 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.230 [conn4] test.test_index_check10 ERROR: key too large len:924 max:819 924 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.449 [conn4] test.test_index_check10 ERROR: key too large len:845 max:819 845 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.526 [conn4] test.test_index_check10 ERROR: key too large len:946 max:819 946 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.560 [conn4] test.test_index_check10 ERROR: key too large len:909 max:819 909 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.586 [conn4] test.test_index_check10 ERROR: key too large len:951 max:819 951 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.823 [conn4] test.test_index_check10 ERROR: key too large len:970 max:819 970 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.851 [conn4] test.test_index_check10 ERROR: key too large len:987 max:819 987 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.870 [conn4] test.test_index_check10 ERROR: key too large len:851 max:819 851 test.test_index_check10.$a_1 Fri Feb 22 11:37:23.924 [conn4] test.test_index_check10 ERROR: key too large 
len:953 max:819 953 test.test_index_check10.$a_1 Fri Feb 22 11:37:24.003 [conn4] test.test_index_check10 ERROR: key too large len:940 max:819 940 test.test_index_check10.$a_1 Fri Feb 22 11:37:24.109 [conn4] test.test_index_check10 ERROR: key too large len:938 max:819 938 test.test_index_check10.$a_1 Fri Feb 22 11:37:24.152 [conn4] test.test_index_check10 ERROR: key too large len:870 max:819 870 test.test_index_check10.$a_1 Fri Feb 22 11:37:24.279 [conn4] test.test_index_check10 ERROR: key too large len:850 max:819 850 test.test_index_check10.$a_1 6885 Fri Feb 22 11:37:24.858 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:24.859 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:24.859 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:24.937 [conn4] test.test_index_check10 ERROR: key too large len:843 max:819 843 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.052 [conn4] test.test_index_check10 ERROR: key too large len:953 max:819 953 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.102 [conn4] test.test_index_check10 ERROR: key too large len:929 max:819 929 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.206 [conn4] test.test_index_check10 ERROR: key too large len:941 max:819 941 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.209 [conn4] test.test_index_check10 ERROR: key too large len:831 max:819 831 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.499 [conn4] test.test_index_check10 ERROR: key too large len:1005 max:819 1005 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.623 [conn4] test.test_index_check10 ERROR: key too large len:846 max:819 846 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.668 [conn4] test.test_index_check10 ERROR: key too large len:974 max:819 974 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.769 [conn4] test.test_index_check10 ERROR: key too large len:910 max:819 910 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.837 [conn4] test.test_index_check10 ERROR: key too 
large len:1008 max:819 1008 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.858 [conn4] test.test_index_check10 ERROR: key too large len:836 max:819 836 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.930 [conn4] test.test_index_check10 ERROR: key too large len:992 max:819 992 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.933 [conn4] test.test_index_check10 ERROR: key too large len:937 max:819 937 test.test_index_check10.$a_1 Fri Feb 22 11:37:25.936 [conn4] test.test_index_check10 ERROR: key too large len:861 max:819 861 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.148 [conn4] test.test_index_check10 ERROR: key too large len:870 max:819 870 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.155 [conn4] test.test_index_check10 ERROR: key too large len:930 max:819 930 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.202 [conn4] test.test_index_check10 ERROR: key too large len:845 max:819 845 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.295 [conn4] test.test_index_check10 ERROR: key too large len:920 max:819 920 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.368 [conn4] test.test_index_check10 ERROR: key too large len:840 max:819 840 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.379 [conn4] test.test_index_check10 ERROR: key too large len:985 max:819 985 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.639 [conn4] test.test_index_check10 ERROR: key too large len:971 max:819 971 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.684 [conn4] test.test_index_check10 ERROR: key too large len:930 max:819 930 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.838 [conn4] test.test_index_check10 ERROR: key too large len:906 max:819 906 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.867 [conn4] test.test_index_check10 ERROR: key too large len:882 max:819 882 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.932 [conn4] test.test_index_check10 ERROR: key too large len:999 max:819 999 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.961 [conn4] 
test.test_index_check10 ERROR: key too large len:893 max:819 893 test.test_index_check10.$a_1 Fri Feb 22 11:37:26.966 [conn4] test.test_index_check10 ERROR: key too large len:929 max:819 929 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.121 [conn4] test.test_index_check10 ERROR: key too large len:866 max:819 866 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.180 [conn4] test.test_index_check10 ERROR: key too large len:840 max:819 840 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.463 [conn4] test.test_index_check10 ERROR: key too large len:858 max:819 858 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.513 [conn4] test.test_index_check10 ERROR: key too large len:849 max:819 849 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.519 [conn4] test.test_index_check10 ERROR: key too large len:894 max:819 894 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.555 [conn4] test.test_index_check10 ERROR: key too large len:923 max:819 923 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.567 [conn4] test.test_index_check10 ERROR: key too large len:996 max:819 996 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.632 [conn4] test.test_index_check10 ERROR: key too large len:828 max:819 828 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.753 [conn4] test.test_index_check10 ERROR: key too large len:981 max:819 981 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.923 [conn4] test.test_index_check10 ERROR: key too large len:958 max:819 958 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.982 [conn4] test.test_index_check10 ERROR: key too large len:974 max:819 974 test.test_index_check10.$a_1 Fri Feb 22 11:37:27.991 [conn4] test.test_index_check10 ERROR: key too large len:944 max:819 944 test.test_index_check10.$a_1 Fri Feb 22 11:37:28.002 [conn4] test.test_index_check10 ERROR: key too large len:888 max:819 888 test.test_index_check10.$a_1 Fri Feb 22 11:37:28.048 [conn4] test.test_index_check10 ERROR: key too large len:833 max:819 833 test.test_index_check10.$a_1 Fri Feb 22 
11:37:28.104 [conn4] test.test_index_check10 ERROR: key too large len:877 max:819 877 test.test_index_check10.$a_1 Fri Feb 22 11:37:28.189 [conn4] test.test_index_check10 ERROR: key too large len:827 max:819 827 test.test_index_check10.$a_1 Fri Feb 22 11:37:28.530 [conn4] test.test_index_check10 ERROR: key too large len:869 max:819 869 test.test_index_check10.$a_1 Fri Feb 22 11:37:28.656 [conn4] test.test_index_check10 ERROR: key too large len:905 max:819 905 test.test_index_check10.$a_1 8931 Fri Feb 22 11:37:28.742 [conn4] test.test_index_check10 ERROR: key too large len:960 max:819 960 test.test_index_check10.$a_1 Fri Feb 22 11:37:28.742 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:28.743 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:28.743 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:30.403 [conn4] test.test_index_check10 ERROR: key too large len:874 max:819 874 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.466 [conn4] test.test_index_check10 ERROR: key too large len:905 max:819 905 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.497 [conn4] test.test_index_check10 ERROR: key too large len:855 max:819 855 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.576 [conn4] test.test_index_check10 ERROR: key too large len:937 max:819 937 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.677 [conn4] test.test_index_check10 ERROR: key too large len:954 max:819 954 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.728 [conn4] test.test_index_check10 ERROR: key too large len:859 max:819 859 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.821 [conn4] test.test_index_check10 ERROR: key too large len:843 max:819 843 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.846 [conn4] test.test_index_check10 ERROR: key too large len:854 max:819 854 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.847 [conn4] test.test_index_check10 ERROR: key too large len:830 max:819 830 test.test_index_check10.$a_1 Fri Feb 
22 11:37:30.862 [conn4] test.test_index_check10 ERROR: key too large len:881 max:819 881 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.898 [conn4] test.test_index_check10 ERROR: key too large len:957 max:819 957 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.919 [conn4] test.test_index_check10 ERROR: key too large len:901 max:819 901 test.test_index_check10.$a_1 Fri Feb 22 11:37:30.939 [conn4] test.test_index_check10 ERROR: key too large len:913 max:819 913 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.039 [conn4] test.test_index_check10 ERROR: key too large len:845 max:819 845 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.315 [conn4] test.test_index_check10 ERROR: key too large len:913 max:819 913 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.404 [conn4] test.test_index_check10 ERROR: key too large len:908 max:819 908 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.415 [conn4] test.test_index_check10 ERROR: key too large len:943 max:819 943 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.473 [conn4] test.test_index_check10 ERROR: key too large len:955 max:819 955 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.564 [conn4] test.test_index_check10 ERROR: key too large len:999 max:819 999 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.614 [conn4] test.test_index_check10 ERROR: key too large len:842 max:819 842 test.test_index_check10.$a_1 Fri Feb 22 11:37:31.831 [conn4] test.test_index_check10 ERROR: key too large len:909 max:819 909 test.test_index_check10.$a_1 Fri Feb 22 11:37:32.078 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:37:32.078 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:37:32.078 [conn4] validating index 1: test.test_index_check10.$a_1 Fri Feb 22 11:37:33.214 [conn4] CMD: drop test.test_index_check10 Fri Feb 22 11:37:33.222 [conn4] build index test.test_index_check10 { _id: 1 } Fri Feb 22 11:37:33.223 [conn4] build index done. scanned 0 total records. 
0 secs
Fri Feb 22 11:37:50.510 [conn4] build index test.test_index_check10 { a: -1.0, b: -1.0, c: 1.0, d: 1.0 }
Fri Feb 22 11:37:50.647 [conn4] build index done. scanned 10000 total records. 0.137 secs
Fri Feb 22 11:37:50.647 [conn4] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:137725 137ms
Fri Feb 22 11:37:50.648 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:50.648 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:50.648 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
198
Fri Feb 22 11:37:51.028 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:51.029 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:51.029 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
392
Fri Feb 22 11:37:51.473 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:51.473 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:51.473 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
973
Fri Feb 22 11:37:52.594 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:52.594 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:52.595 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
989
Fri Feb 22 11:37:52.682 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:52.682 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:52.682 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
1838
Fri Feb 22 11:37:54.204 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:54.204 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:54.204 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
3338
Fri Feb 22 11:37:56.471 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:37:56.471 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:37:56.471 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
5953
Fri Feb 22 11:38:00.410 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:00.411 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:00.411 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
6180
Fri Feb 22 11:38:00.851 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:00.851 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:00.852 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
6317
Fri Feb 22 11:38:01.159 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:01.159 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:01.159 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
6506
Fri Feb 22 11:38:01.508 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:01.508 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:01.509 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
7957
Fri Feb 22 11:38:03.862 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:03.862 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:03.862 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
8708
Fri Feb 22 11:38:05.006 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:05.006 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:05.006 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
Fri Feb 22 11:38:06.699 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:06.699 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:06.699 [conn4] validating index 1: test.test_index_check10.$a_-1_b_-1_c_1_d_1
Fri Feb 22 11:38:06.710 [conn4] CMD: drop test.test_index_check10
Fri Feb 22 11:38:06.716 [conn4] build index test.test_index_check10 { _id: 1 }
Fri Feb 22 11:38:06.717
[conn4] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:38:23.352 [conn4] build index test.test_index_check10 { a: 1.0, b: 1.0, c: -1.0, d: -1.0, e: 1.0 }
Fri Feb 22 11:38:23.400 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 882 { : "ajhcjsrurafwbjwncxhfxqpqaefzxgvbfiixecyafexdjcqztpceejdeffauszwlscouoxrnlayszkuomyqjtudctemiwqapsmibncbycuzgkcsrhlmrfhwksnyjzweucqoegnmoredoevvkzysajcoqyagt", : "hdumbopeucggdynfdmjyqzmgglyxvrkcivbfrdyxkdktzmengyulbgedttbxmakgezqgrowitudulgpklomjbcjlipiwunwoaldmppfnaebrlhzktowhxkkwoctmxyastxshikvsekjkkyof", : "fypdrqhyktdllelfbgycrrmdoyvafttvrhjsvrxtnoyfquxyrbiopvoqseiimzfxcejufbzdnuulsjhnhfcoskwfwcdrovyprhbcrncbjvdrflcfqgpketohzhfwuyodmcslowakbpivfvbffjyrnd...", : "zzehafvoussxhinhxkmyuiyafrqqqvtazteguynvrlxkbwtdfzvkohokfzzcfjhogwhgwagetoiavwycwwxesznhksndkstfgfxaxdgnzywpfrhainvxmjsegrnyzajyguxdylxyihatdfbxthjbqu...", : "zyeqjbmzaxrgghutrgqlmazbqliieoyehvsbqtwjunebvtyzyalewabcyqajcskgdvxxglwujputhpioqhjjlsvuazkysmunhwxxsarllmrawpbyzukmwmcqnfbxzrgjlmmdzkrfnfsvfmljfvhgjt..."
} Fri Feb 22 11:38:23.400 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 843 { : "aroorxypsbnwnpzisgdnqeeefazcpgtstlkzzixkriqcwgjhosufsnzxqvkdqgbcvhgbxhxjvbulzewrzbmyvomlfchgtgiamfkrljyqojuqivnjfkmfstlduoxhsgrdzicscsjthnklbqotqxwitduukcmx", : "wqntyfbakuvqbfpsnrpscnquoxsdxyvfozshrqmxbjjdtquqhveypfmkporfhfsgiwregbvcdkjqoyqpjkgzinbtnsxwwxgccwzctzvnrqjxrgvczcaokyslhwenkpaniumwiufrqdgwlmxczwfmet...", : "rdbwwwfvtqhpzzplitsxcnyvcuvavjrwecwsygzaozolmcxccdumkuctrgbcpdkabazhfqrsvrkqkbamssqbjyfapfndycwaodbpinmziporlynxkuwfmmczgdipwsrxxwqtohuibupzyridoxxlwo...", : "iqwylqldotkryvlodjzljgnkhozcijcjixogyiqkrfjqrczatglavsydyikhfggzfwfbktyxongndzbxurpzgduelfxfzhxnjvqwovrmlolncuzp", : "shxixdhqimsxspuqrqlylpvurmchspjyzkrmjpzsfeuatnheseszjeenqyudzizjnyjbujjfxdsgpkgmsixzmlqvtnoztfdvwafhoikaxowjkprkxfjznzzgtzkcezbhyvtdwwrifvbzhzgcaatzfb..." } Fri Feb 22 11:38:23.401 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 849 { : "axgqhliauuvqglngdljvkgrpzqcoucrnjchssyetpbidflsuysysyymyrletfxyjlwdrrviezezeglhbipbsoevbfq", : "jijwddildujvnmawmzjddcrdinmitodynxfkyhkinbfuelmrmhsekajbvmuocmvazbqojldnxuriubksillompdlclohhqntgfinbclkcqhshjpvxcylcgmkwtrhxlakpejlyofypphjsiwmufbbid...", : "iupjregwuohjlazrwpdpbbozhtdwajoamgqhmqrvikiwezizkutjwxtfkuvabtkcbozvvstimdteqtjdnequvpsneedczgrerloentfmiizmoowhsbkuiwuremussvemprydpvcrwraszcrshqldcv...", : "gxyvuycbzwqcajuxvnkdeddkkpgqjnejxepxufpcrthqlvgkpcxurcmglcoysgffujydwybahuuwxlftjxutvfgeadvggvtsocmzukxhamehcgzioziidakisdhkzadjpwiptirqwebcesyuojknjl...", : "dyxargofnvnapbmbszbrclddfrijmnajuzhoecniyajbqzpkpoyfemaahzfxjjakxnoqxzitpecgqdktwpxnewgiqqhxscmbzhpwgfwpwrmbhjcytxjwyucmwthbdcmavijzkaurpkkhenpqswuqdp..." 
} Fri Feb 22 11:38:23.402 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 893 { : "cfbghhlquktexegaqyaicyzxoogtlfjqdcyocqwrrudzggcunbakxuijzuafcrqfatfpgdbxdpfdkvqfvcelcxdffxoighzkflxhclupunjlvjvvewauyhlyqqiuvihknpflxvlktigowxllqjkstm...", : "dfhpzrzxbyguojhtpnqymtatmxgkjmqewapklsafbouqrclbmbgwpoltcqdahynbajapurxdsdzuxcywkraeiwplzbyygzoqgrnggbpwlizcpbjhiaepwhkqkrqjcxuauymuboashmkcewfltlegpr...", : "eqdrskwdzwxcaeobtwnqfmscxpizrjfqndduceixtngtzxrjtshcrqnrvdvlxmhyalvthtliawieuuczoplvxtnobbscshktqvbjedequwucrjzzwbhdcsxjrctufwnzrdsqig", : "vfclkqfcfpcgslnllcpuojehjtqstjaxqxpdfmwudbfyhvaoxdgsqyitfvzgdphtmvtxzzubaxkxzabgmvgqmnguqfkkmgxsnfoufpmmucbxbroatqhovpxqmpomrisrpslystjcczbtxgfdwkmvpm...", : "dzhfgwmhttxdregewgoglzbfwflckbmxjqhgegzaaahiwqvxvohmsnppocgfsotyzzirahaibsedcarsrwocciozugmjtnjtymtekwdoadowqxkgpywpfindltxcgclypigzaevwpfvdxcpcijbyvu..." } Fri Feb 22 11:38:23.403 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 823 { : "dlfxjtllsocwgasysvbzwxgvmatxdnjbzszajnvivomatiqkhgzxhdqkesbhztbmnvxcqsxbgiptypfnogakqudskmqfpybsqfgyvehzbepgbhewrqrzqxipblfbxglksbsurbliabvgnibchyuobrotsmv", : "saedugjgyrqhkszyuqeuulpntpbwjdxxdyclnjnoqpzacngyllvlcdpvsgvpurfegwuxcvygrontqpykdgaqsrtuypvqfdvafutaiqrnfoznaxxxosrwtmfhrrunpfgfkbjgqocvlvdlaxglqronxs...", : "aaczgrukpwzkphhgbmevinotdoqujaqxscmkigbeakcejowfwjmdykqlywbogortuzpvyroqqdaseoxrhaxdaibmsodhnjhynmdzpnhvmcemymgvsomutvzwxnfefiwbgaladlpqlxtjcrvojq", : "kurwjxcirbfamgmqlzgzvkwqnmovupmppfrhrdkfrdaoltnmzysfxhtlllqrboatjqscfybiizumabfnvjwfixbmbtmbzmqgabtkjqhwfrrbdgjymareicsumsttvplsfaylinxterxtoohuudimao...", : "ykaujcicmaztxsqmrymbmwnspbvkqyuckbcwdkhufguwczcozhmgisdeoulkaeacdtzuyercaxsxatwezxxocqivnohofvimipewibmtxozvhcxoariherydefckewbvmdgnorhbcjvwgyvow" } Fri Feb 22 11:38:23.403 [conn4] test.system.indexes Btree::insert: key too large to index, skipping 
test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 833 { : "dqyclqjujdabclsvcyzcdnvhigjximsbshjcgapomsygtjxzesbkasjwdbmppeuydyrfhkyqobjkyzbpsvfxdrkwxjglsbphiobibdwkarcjoqfmngsjoriikhbuoupbvcwccdqlxgfykidttybjel...", : "ycukhselsxcxxfzglumeuwtdsitagndbjckawkkwerwvvsobkugasterjeghqgzrsmuvwyumotogoxsiteowgvrlsghkgyqygjzvykrelsrzzobcapccdibkybnalfqquwlxvgfbvcbtvzbtevxyzctxwgwv", : "cftnirreqereqraxcvatrbpmpxrrpkgpiksfclsnhcxlaufxjeyqbbyewmrcwmpzgxgzrwknphxbfunoysxgamvrmcjfzqy", : "sclmpbyrkrfxdjznmrtnqevlrajrzqijgdswnojngzuivjnkhcnsdenahjvpzmnqwxwsfzrnhqtelywwgrqtuudxrwbndkfsizzanfnedsutgjpkwwxvbnymxbxmtuxrwqnjkkoxectupeepdumtjb...", : "ztjvthivlszyzavpigmmpacszvuwsrgtzzdtmonnnghjxtjvzqvmvdwgczkplnrvbsmkcxzmuokurahnlcefvmfuplxkkfvwrijhhfhclnmqrqdkpxkvnnhdvltwrnioewphdkhqwufurlobixbufr..." } Fri Feb 22 11:38:23.403 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 869 { : "dvxwoutzcvkwbmxbjfwuzkhwayrklcqqovctuqzmzjgriaprarusmjbuacvlwgcotwegjnjzfoybziwwzkvipyqozpehtfbklfcnvhadkimfvzbbqvnliykgrcrduyponoaanvrjleulokfmoymcjh...", : "nqjbttnasokggzglkzfuooslkeyrojkcwjymqvzporzuxgkqxdpbjbdyxoyctnehgdenhayvubtanmbvlasotgztjjqgpjruqqwrcgs", : "gdbcdqvakkbtgabqmrnvwjnwuugkuhoslopedomsyntgracwrwqposkhfadkgsuzrlrpbypbvvesauxkapkhupyoayrjpzrfydzruiuvqfiynrftkzvxaxtseqpddjktxkrwgcrhpwqvo", : "syjlhlwmjiglbqlbdkzikvsltkrizgwpbtfsrzdxjppcrwhzasjhidqlorpnrjhowlxxhuxtflgrnyqzguxdzorapfupqgyotwemadmhkeaneassvhdxkzrxirifbfceiomedkhsiujlrlyrvmjceh...", : "qlxplnvyuvccukcpxqofzqhcxwoqwizkdhkgqscpeqqrcqblzasouwxpncbzdbgfodhovvckzxelgqnxnmpgjnfgubhvtqbaqjkypzhcbecsfmxkfmyewkjvbybkambzgrksxwpslhrtoyaszuhgee..." 
} Fri Feb 22 11:38:23.403 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 840 { : "dwoxqirsfdyftnzlcsvujfzkyvavqlgnmsejiquwqljyzswkjudatzcbefrkeoakucqlyfahfpebjpjnfwjttqfdhebkurgpthacrqifkiqogvdypotspvhhkgabicyhnjqtvqqxfxhwaiosagvdrj...", : "yqdlqiihwpgengzdhmwsjweyespeamumqezuiczioatvcvdwsczpnthxcjafcnrdvwjmwqunahlcvtzrslhpfcasjpvflbpslsojfombxolqhlvthskufqfoajajwboxmeotzryzroojnihfhrtoue...", : "wnodgdbdzwbrvqyhxagcdulssyyibamubbgystltwgpzuuoysgtsrg", : "cjjofqnsdvovymcseocquqfkmigyponoujpzdniobpphzvdzxtvumgotqozalbsjqfwmrfutkkmwldcjfwndyruuctckhoekobgvxejxuzghdumvltaakyzmrvaxspmmhriaidvkznqnhjzazyweqi...", : "bukznlhnveqyycwjkszmeghyzurcjcmardlqibqlrgrmxrrptgydcnnafbczbqgsyyotfdeghpzbvnqevimoydwrxsuhsboqeyplzineaxgrmpypuirkfdvktteaehzcjfyysebbyfnkxsrylaelnl..." } Fri Feb 22 11:38:23.403 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 844 { : "ecxlorsfklxzfnziomkrzccxzldgwayuzrlyzczqepmowwvsodwjdhvxmopeehmprkjvtwuevinbnnkukuuxefmsaidxwlzpxjlrbaarbcqssuyxrxprxcousguzposgdqvshdohnigijkuovdjwlt...", : "gqsvkngftpscrfpdsfrcucxtmwhweohbxsolfoiymiaeglrjdfwxlplghfriocrfhufoofngbbgkakejoxxtqccwdvdlmbrvosanxbxydmuwtagbvnbcfjsmtfpzmynjddddkcdzpayfymllyeoesp...", : "lkzuvyhquzgfvelqabvnxdgtcyzfzafuazrcsdnxdenmoillhwhlnqtkyyoramqaxnwnjcecjoisjnaakphcmrfcqdwaicbmktkuoaowmpoaal", : "htfptigqusogiabiuzvxpsuejnbylyqystwsmhfgwupwqucsbdyvpuczgqxazfgubaxblezqdovqyzvtxmqnkjggpbggpjgbsezxuivuzwzkfmnijrpcpripxfnvhqeulywintsxcncwkudrzotptk...", : "afrvpiojakltzgxjahscllioiewrvnjlgtcwdeubwunexlrykwlkqzzwdbacxylpglkjhgugojecmxlydxcqxekttqkmosvdibibjlqvzbbaulpujacqyldzldwyyqxzmdwarwdnrldtdckxwrbutp" } Fri Feb 22 11:38:23.403 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 898 { : 
"edsekgkmqagqaaealbuwrhjpueuwjzfdondqpwhqkaswvwcsclljntwmosqlacfvsuycrcqekseejurkbbghdzfjecsuvbrqtqjhkqwqtbopbiviccmsoypxwgmtbyshcpvraqwtsavuchzuflminx...", : "enxnklplviyoyoaqueitjghfwvknbnofanfuunxpyoqqvcgeqtksjdwrsqvrjufkyvvoypedcvkwrmmrihfpyrvfumucaicvpgcywicokklniqrpegoomtacetkpovshtyfcgzrxbvswdezylhatvb...", : "mndllansdemzdxupihexsurrcqjdgjqqbeeccwehhsdxkkdzkikaoaeripckiurwdugpdmwilztwcmqzcugzwegihlrbkuayacxdsgwuytfqezomlkmygk", : "folnlmniznuauvfcynylxolxszyaxgmejdcgpmuyjyroopvflmnrgeyjhtgzrjujbmyrlgzyxytwdajphnojyxdkcdktachuxpowvrcdquaiebniswgdfojdmdjxkymjywrajpszccorcqqgmewwdz...", : "mmbsiogpwvcfkeojvhfvxucqtffzkbevkpbflftsettiwfjgwvntizfrqgknpmwdotpllucpjwltxplbnqmzzhpicromlfwzhvhzllhudwyvzsuouzwkjzbgaexyigmqfnuxgswynhaleixnvyfjpv..." } Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 845 { : "ekxodlvbuvvshjksfuwmvlxxrjrcwiegonkaamrkpigkteszkkiirsspkxyzsyxolsnpfknwaignqmijyrxrohjofpmlftislribzzbeolwcyhsjtukqxhxugbjlvotvtdhwtqpmbglbnhzuhctfyj...", : "gdbpkjicbcdmxfziushhqaizchejsrksrzjnxvfqtvgkypojgkep", : "fzreyuqghkkmrhtujtreogurylsaktozqqsenapmbnixjbfbvmnxbhabzvpowwwquyyhfzwtastmcafgmhknznztznekrgripffsausbmlxythvuwsvytzgflpvmqpugibmawnzffreeaeqrwohjqu...", : "slnfznkwfhcoljhxypvlaylfeletwphmxqzlpiabbjjxnmwogxvfukgqdaagltwimikonujujsygkskeghivcrycpjpncnemwncamwmoxdtqprzqqbrntsmqdfnxudvueyrxktfycbcmqdnvwgsasu...", : "ujfcldjrijarapwiyluufbrbwxahkfjphlycqgivfpzfefiiekveddhonkpwwbwladuojvryybdrjzvphnioihtzrpouokpphvauxxulpsgalcapafiicqxiiczqxvhsccmgqmyfzxqjposfsjkyax..." 
} Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 833 { : "emyxcsexnepydzskkjfdajarnugmczacotqjdvhniibbvevstzmxyzkjakdeqxaahbqiejybiqwkdmelflvdywzfsoefqddeozauhqjbumzxmtfxsqwnmdffwilomxquwnyblugtdpluesxejtgwzz...", : "gknqobjarhkshedhtzmxyjlahagjrqtsbpjwsmqhwxdctowlkribysbrbjmyypqnmiklnjynewgipabuubxjluuwnsnvlkcibuvefcxaudnfyrfqmtemifsqiajfqxrlfdkpkofkrufjkey", : "tittwcdcuirzelbmflowbbrjqugaxnyatkuhsykqgoqvfuyibugykjezjbddqgaeopbwvvzxtqaadgfyufamdtrkkmynfkbd", : "izaleirfmfrrnupcscvexicczimpkzeqelzhacamxgpzvstqhedwbgsqjrgafyyeoarhtbfduqgmubdwxpfwwtdbmtsdfbgzxwuvgzsqxpxzpbqbkjndlelhccjwhpbfqtrowueizwmqwbjktzjcly...", : "nmlerldakecsdkuplqfihxgdgxuihblvtvdppsaujazhoeawihrlbgjpqmwncxhrfoqvfxlyqiwrtgchtccswofomeopwfceelbhttluxmnnhjqssvqnlhpbtygodbtjhhsikmrdixzcuwwgoivcno..." } Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 833 { : "eqlqfdcbpbvbqddkfvhrqqvwghtxbdrozglhdiquqqzxmzsboapnzriyrxvvofqcngkgskyppeybdzpuwkitzczjcxhcdglewuncjjztyryvijsxxrqiifnjcgxyheiojxwccnellbehiro", : "nfpulleeprjffeximnnbrqmczawaevagwjgfktoiovkhuwkmyhnknshdtceftbmpggribhugifpdtdjfsvxfxekderfrlaupgwwlyqmoiepjjixtyorzssfutbduoqhbratnjrrzhuadtzplbqtzok...", : "smybgvjjgjdvolylgekabgzsyxfminloizgbvqpdwyuguadgiyghfbkafylrpzgnlctvokxjirbiiwuwddynfhbqwbdmsgahlombybrvhefzfvueifqojlaefbdieexbdmsrgowbtpphjrpswzokyv...", : "vrtyjcosomehubucgzvohduuhdxubnqzvemdracbhdbnmpmsplvkwcqnluebjnmxgpqbrnthgvznmkixdbdcfnruubzbtubcxpyduyoekspdgxbjslyvrmeuwrxzbyvhttefdcynyznvasxtlapfek", : "znepnuttfbkbvnaxfgokqqwoyscgqlztvswqaglfsgtibocvjzgudttdwaubywiqvsaojkodvyvpqncozxsfoqdhmecgqqvqsyrwdxywocpnujdnjejdslzcxdvxupiifbtckfyfbsuslwqmtzpcbwtejcqcdi" } Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 832 { : 
"eqznhpoykfnqzgpfwrjdokqhcrirbkinvlhonqdmlrhjkcrhqxksliitafqfqwbofbldyxcdjyngdmlmxbydbudwvmwtegmngsgkxnlvjrjgpifywxiasglunegerfwfjtrhrdvwdpgdebzfdgdtsq...", : "sdjzxmblpvqojerzlwrfxagwgukonlqaznrwwiqaxdymrhjnsdcvpxlvlwbrjkaykqdukborzzoyrktrdjmkdhckyeetefnpzha", : "xspbqwhtnqzccovdbhnzkyvbqmixjrajaimkhsgtkbeaobogctxpdsjbrpmlzhhonqzlfteyxojgjpqedjmuglthjybhxbowmgsrhofjtvrgyfkjkskvmkjhcpdezewverkzdcyfvlbymgsigdtljc...", : "gysacodscsryoaofzdwzkqigtecrwawkgfioqbjvdragiwncsiwprgofororsfsbhvpqpebctzrjvktpkragmsreqexlmwfspqpkkyxriymbkmssmwxwcrptzmzxzaqjrehpcqbspvhnglhmnlxblctq", : "gortfpdezopoygmkrysgwzdhvnyeibrxynyyeggahkigwpiuifbzsovvfcfsbxtpvplocfajhqjlsrscovqtjlajrbhdhbxzdnohknuozeudfofwznbojkcjcrbikwfeuzflytccfvizktlvyzqdut..." } Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 954 { : "ethtlibucztwarlvsjfquupdjoxltnxfvoxxfubquksjhymsvykkgyfvmfhktdwvcxjmwfldtsaxayaumkhxuqenlvzpttpipxztweelnibkcjjqegxhnqzvfuqhvxmhpnptaphwopsjunzadzyjyf...", : "rgnthikmpgpwdcjdavxmexijncfnlhuzczxpvtimyxqmlazzfyrfjxwnkhirzmnmychhopuxfqqfauhqzcwzspzffitnpefyclfgslhmrisanldpuvyztappfibfnhvujmblevlrnkvgywqfwlcrnj...", : "emcwyxnoufbpnsganvtuivwkbcvlpeimmejfamdzbqwwabafstkaupaypdfharbuhtlsxktwfkvvqmtvpoxqyxhkkiopqndfgfyirkaxeywfmpntatkzxvupdcxzcaohoposjandhqdoupbfkzfdrd...", : "qnpcazdqgahswietrssqnlttbtskxormdgtqszhsotcxbfdtuitmxoztdzzuiglclddzvzmkqcbdlmyooejpnstqbuzxvrgulqxxdxamoemjrfbovvbxvylegeykkjdgaqvntuiqfumyzpcamrxtuh...", : "xgyhqrofaxnsdcujrqcvacentzyaehtdqwsaagerxhwslorvbufdvugzbracqdrxxhnphfhiufmkkmshmzqljtqhdgysnjznodupjjputebqlnflyoylljtbqlxhqredskcvxnkmeghhwjztteuzpv..." 
} Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 885 { : "ezuowkhlcsrvffbxpjflmhawluounyebvzupjijbntqbfavvivjkdpyjjgfqtutrxevatzdksznnichloyqxjssjczldrtetjbbdcmtjjwkueuljjvtsdasyalixlagnnhubkmbfqxmkldwmkppjnq...", : "faxwoigwaozqwoqqqusxanrzpaqzkcpynwsdiwzyxabbidmssdkykpryzhvzktqcvqrjvmnverhexgzmjnlihlegpaxewfilitugbphjlzpoofvlmfmbhbbulzbtvxhyuhizhwllbagbkqocrcnnct...", : "ahirmrjfdziadspmzwcxavjnqxgdxewkaqncizcrgbxomrecugobhimishyqhpnesobcbwgbvhqhetlamscdhfpavhfvuwlrlugkspibradvytnp", : "wfifojkqrmbxziuoxxlpyaqkoljwxsktkbjoulmqfpnhlaucomlkdzvkchzhylgoykvexolgpczadlabkzgevczzwahaadcranyghrurpabnnpcfyangfgrhalubtjfifjtjwscyvebomqxeswokwl...", : "fdmggfgjywtnkvnmmqjxyfcnarykxosokddhpkslliojmiacdjvlnlgikcnkwzkykkulyiniydbvznifiznyxxqzxblthkqkhvnnfbbnvexfdwgcrchyhegjvvpwdjhczvaqxruilxjcqwtxobreiz..." } Fri Feb 22 11:38:23.404 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 830 { : "fgrdiiggjjopegkeuewnwuvotvhkdtefioklxqsokaimfdfbdeipcvacsoipfjkboqomqwketzhsdwcfdhof", : "pdzmjssrnsafzpjuguqjjcqihyjdbbyygianzkedkfnrexcgxswhnesryavneagiqblkjghtyiqlbrrlmgkqyugpdxfwsqpdidxvpykhvvrcuxalnydplbqfvpttuvngssondcxejvrgqndwcpmo", : "zdevjbyvjiduicknbajsvxrcgvzdkhwriztnwgelzmuhuojnaakqpfhxrbqhxwbcumvcbwlnchvshnmgeclrizzbdqfvxbttwuvlgaqcccnflbyximfamyvsziuurdooxcudddymdfrsarxziicryi...", : "hydkgvpjwnwggpxwjxfaumjsuyswwltphirthyeuvmnyqyapwwlmprxmspmopzkhxozxresznuhvezytcgdgeguhtaxnanoxvvzkmmjkwaoiymeyzuuinnntmumdtqmsbjvprncscvyqdvxcufgopl...", : "tftxcyezubflbvmmuvpsxrsmtyrgordrgroojxyortoxukohyapjymvsfvurshhxlmoymvvbwkazvmyshdqbvpejhlrztwvnjcnhaqjyuqrmozzoluatrvnjpdnybvxgikgclbtmgpvfiuqhlddliy..." 
} Fri Feb 22 11:38:23.405 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 889 { : "fndulrmvwshcfymtyndlxyoqelrdwtikijrmyqrsszaizhfzwyrrgxzbvnpbwwtthatxvtklwlrpeswjbqdpvhjtbophnkgfplnyjcuikmqendrxwczoopvijdwdrenomtsocmxebvxkagxfjhcdnc...", : "cyfchqfjwtdbizjwfgnakygwfnbkgcgqvddihjawrjsuicnzspaqfunixjgnjhlanmglcenfmuqownmmqygljekwqeisphgbcrkysifffakdthgvnkdgfmfimwgopryrwfpzwidrbxpwnqjnhyplvn...", : "wwnezictqjvefxfpkciicurscqofnsbhhktpvwiluruizcmswisanlrcvvetrdnhssvrbabhtgpakftrxsrufbcvvusobvnvjqioajtrixgpxojtmsovelyuejlfcvlhfbjgkzxbcikbaxtipog", : "uyjczekeqrtgctrkfkefkgidvpnczqiptinezxcowldkropmwpidphxhrifafexecurnyfrcgoewnfapcceytbmrejqdulqpfcgzjsvkszsvymloxuqcwfdgkjouyqvoxkropyysjdprezzztdsfqh...", : "ubwjtgloxpnzmdjxhzzsibemuxxskuvguswbdtpnjrqahhvqcaahknosfetdlqzdxgdegizylhslutccdhnzfwbschsmecnxegkrlkaxltdtxssupqkndqczmlxoaxkxexvdapdlfcuzskdbmpzepoybbmvppz" } Fri Feb 22 11:38:23.405 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 853 { : "foyrfljbmhatxnaeodwvzcvrjaltntjwoysbflhluwmnpvlvuxphuwcjozlglavnhrxadjlwfrsvcssvfkwllkwrmfzroegvyqoebvnqoquodqaqpgscyvqsoutoxvekkgskxnvtxbzucddezpggcg...", : "gnvjafxsvotksohtyrcptdfsxbkvjovxaetbxfotexydtfmqjgbxlunhajjtezrngbskrvvewnbkssyuhjukhjifrmspeyuwfqmsxtrkcyczxgppundihhjohmartvlqpjzabgpiqxfqtqcweieqmq...", : "wneurljggsaqyrhsvxmflvzimuyucpoecmwunylvxxtzzrthraelljjdcfquhubmoxhxfnkrdxpqmfpeixspsvyvrdvnwrsqiapqxmftlqazibkygqnnngukowtcjirvianfcoxgz", : "absmwnumrwpeyewthlpheqibkrqbpnxgfqffuzeovwxnorszuewlnqxgepqsoeervfrmzokyeqwuyoxvhqrhdtoskltqbcovryjzmvjtpxcwmuenkxztpdomehsnohrjdtxvimubmdseqocdhuaeldbofawnnaa", : "fieqsneuvnbnxfaxtskllgbbvdrepvyelalmvaaagaxiakgvdybwjiivyycomjnrcwiyblnrzfhhrwvcmlgfchqxaljgypwzwrvbwjtrunqrsrrqiuqsqlkdfibewqupdaegspsybxshccqnvlfead..." 
} Fri Feb 22 11:38:23.405 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 878 { : "fqrsknkkbyexvmjrexnulbwaiiyeenleinukwzpwtplmfakcsikpqbfooyvehrboaxnbgrefxmeingybwhbexthdzlanvmlieyzqznayotrpryleqqtolnwfgemwcukkxhofdjtkbzkpkvasgtvzhs...", : "byhnqbpovwqbwaudmzczdanwzuhlwjcrgwyzgrlmfaemaecwmwrbmfgjwzrdhfiwzcewcasroajydmkmrpvzpoamwinxxwmxzdubzhwvjtikehpqdcnkhldkhpmyfjziqhaauycrs", : "kgehnvnyflmjtjaerxtwbtumkqlrkvgfkkgguqymotedvnofoxewsqalvgahszjhjeohxnhtrlmsexzovmgbxmlomrqyvtvqvklrezzpxbpilszurtvhjtraagpygqfrmkgvumylapuke", : "ufmfwrrwcqisfvlfoomwnptjvncpmpsoivdvioieoyqhyirameibvuybuspzdsgrbkscllodnhosgkcmhdvrvqaxqthgixwezksxidmzdrccuiqbvsdwhlpeszhgskjauyarjoccaumdhkcfnztgbw...", : "drofbykmguxhnzlapumzguvzlxhqdfrnnygkbbnwfixltkytwpjfxsakzgupxwsakvaatavhypvjjhnslhhommtshqylomnyawrfltgaqzsuzjrpexkdpkywmnrlnshfqtubbyxundomvdisxuscmo..." } Fri Feb 22 11:38:23.405 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 825 { : "fwjhestshyskfzxlunnfzhfuxoyeiupstxztmpbksrfuyeosxlpwgenshpzergkxweowtwzonbdqnzrqakjvaiswmqspkxymhuoyvfxbgbwqlhmuczvwesnitrpblohufwqeoxaetuxheryfpygqoz...", : "qghlzeibmztwdovhecahlzttfqyvbkzfysuusiohgjjtkdupgfrlbsnbvjmvargwepbojrbkxzjwhjoboansbljygxumbjahyisztfcacrzskzwykjgpmkiagyqdoygfifm", : "igjttyolxzalkwmfgavxwyurrbnmliyuemehxihsxuqahcwjufvkgizpgvraspsqizcnzclxaezcdlzjlevkxhxbvqevtwjahbjimidvrergddjcdkpvmwfrrgxbkukglmptebnbafvduenbpvptwa...", : "vozgrzhxsvicyfaxietybwdnwwpstqpoeoshuwfaezkpccndfdczjrfgfquhrwuuyipzdjwtuszxheloslbcftzexamlwrmtjloecreonnwflwoihddpyn", : "woqrgvifgdptoaouftxepofiyvndoxephkcmtyqimkoprekbvkqeypxxfuvazjgetabjrqcwxueqxwdcvwjlzkeeizozrrdxwpjzsisoklkrnjkrczaqfxwsuudkhzpaqrxcjjcxdxitpwhitbnudwjynyfaio" } Fri Feb 22 11:38:23.405 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 889 { : 
"gbdxsbteufvcvmptotitzfvepaauwguytttjgoupsiixwlssxtvksineflsaupmxmyblzhvqdskbrrjslnqakpzlwtmfcydryumgnsxvoazxjzmjlzxyidsnjzlmzilkgdtirmysbsggbnthu", : "aasebwdyaipyegboaugmsrjxvlzpvbysvnafexccwvgeueqiscfvtetkpensdfaokfsacayrdlirqfxoqhovopdwutbrxgpihhilqqvqgehjxkfwyulbjhzmcvekgraqfnpsocmoaltgrsstdncgiq...", : "klwzdaqpfniftjifknriofrvezhjsuvaslgcipdmivknixrjeejbnywclwiphzpfpbcnacqxhrlpzgpuplyqhwzblvadklzzrunteveyfiurxycqwgpvblwtjcdadjhavxtujngqibsozusdmimyun...", : "dxfevlrwckpbiuzfiiavwtuiecducdovanylmhaxkqmosnlqiudtdsskfmblbmixybdcujfdqhhcsumhlvcnfbanzoytgabpwxiihxkiatqaoqtxwmliojqgkyzsgtudgxalkzladwfveapceltsni...", : "vtknifqczmdrcdgbkvwjpxjfziwcgskmnecuzmttixlglvnfshtwsgepjavwmvphplfuggtjlsanldqgrikifsfsshdykhpihetdeipxdabmnrmylhaarpjxfjrkqpwcpjdmzhczxihhjxartvfcaw..." } Fri Feb 22 11:38:23.405 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 823 { : "gkcqiuwbumicspebsakvaftztkgizknbvoxyjursjwjkdxzooiitqkzdcdrpiawvhqastqngqpajftibpaqjdjqxxeecvjlydxfqmytuvqngmwwsqpuiprneheetrmxwobqkodobympkvufhfyukgr...", : "yaqfzmmovzxxzbhuuylopwngtmkropvkpxvzhnzkqhysbxmliwvzwmrhmbiofwejrlmdcoqcjshxhbytxirmyuxllchfzwxrstbxetnkrdpxkxzcbsumkajjthwtwmxddmqwwjzxbailtubpeyqrik...", : "ylpxurpgjouczjohxajcjcntkbxecnvloedpvawqrhtjqtmocahfenxctphbgivdltbdadeyaprvabwdi", : "gpicizjlcvjubcxdhcmvyliktexqityttuxikhtzpjkkxdlzsudjypuohqraafqxetpbhztmahgxdcuxfaneqxxsncngnlfavdwmnckzrsknexyiqfbesyfiqairuktjbfzdcuykrlcrrpaluktnmk...", : "ajqrmujvfitjzpvhxbwdgdccyxkqbocpghoqvgyaoezaeuqhlfyzkownoafexkjnnasidboadreeamoapvqanctntvrevwiwapunsirejkkmlgukzyzjznilympfktrdsacvapxxqcgjseopamozmg" } Fri Feb 22 11:38:23.406 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 899 { : "guskmahekuyrardfdzgppvseausqgxwzbsgcqamyemzyilkzmqtiajjgkrlaveqxxhzaazqlggmkenmnzmcvroetkpudooxgitbnuvcnjzlavulmjrtjrztylvreedmuffooywobohufgouqofkqsp...", : 
"ztkdfjnwsylbktpkgjtsavcwqrxfrkfldrgldecfdomoybiwwlrqducixbtstdhpqhbldgmrmqxtgmsigfrfurgxbfyqcrgsirtycfbmnnfrbkprwenpghtvallwgygciqamparvglsetmkvlrzpls...", : "aoyrvmlogrrkjkpwchmzwrrglyxtzmtlfkhdqnulzahrpqdastsxbyllzgwwxgtpjrebbvdrrphoossyzmiunfxdogwgbwfbptvcbvgqbxhuslhlcvddbmahozoxjjvpgzmpjhhuehvqkchjmztbhe...", : "tfbnbneprwbwhmbjdpobfquougekdicfcrevovookbycaqftennzwznkoiyztystrfqghsnzuejbflchgslpwgqwqjlcywdqgsdgwiofznxjijmfwqmkezywjtetajwjuhhgrmrakrznknnhkzrtlh...", : "fikpyvzruzcctynlpontitxthwzsrdilyqqnkcgkvarfzarpgmknydaxbfdvxsatbrcdjhmrnsxtpjzygxkzhserrlvpanjczfifsoprrmapvixcppcvrnewglufhdeimcbgiweyqvkjzspax" } Fri Feb 22 11:38:23.406 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 822 { : "hadkspsdwhhyqplvrggplgjdijdmyfrofvirfvmunxrvbtitfkcoyzxpusspadjdzxdeaiuscrgwydfmvfjoqgwucusfjfblsiinbuatxwjmrmlnqxrsepjucmutpoittkliuryejyedhiuwzwobxj...", : "oduzmrjpdrayzehdawrfywdczqdwlhluwjgapdetgpwceemszcswyhfgkhknajlnuakadfkojwpknwbugrxranuelnvtumnknxrjzvkreswpjdlwxbamtacxf", : "ukfjqswgxljukkevumkqbxsbbgzykynuchcpqgnomtturplfxylufqcypbskzlvvcjkpimurdwthjsychzpyfeheszfjyemjosgpokrvdmeqstgglrbjllerxiynnnieskbrcqzphldbvjrqigl", : "bujrdjonuidnhslwskfuunypvwhqlxjsifftwozvwfhxlygwamnsgucbuzuxqcwgmjutylisugwewmnjviwyxghfuesnkgadqkwzvpyipesosxmwrvwkhbnpulfwmxguydyaisjrihkpmusphrxjnn...", : "syfbqjhcsruulwobngtborwaipzlbqfzfdmmmmuvpqptkzsmagnfkinfidedrhllkzmekfqtvpgcjymoncwkaqfteibbrhefotktqjecsntjwhjgahnxyloehxuxopubhcndfufsvvhi" } Fri Feb 22 11:38:23.406 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 844 { : "hfmpcxgosmzegaorvhadozvijwhbqzoahbzmlibcwfrmkpcexqdywmaniyvuyrkehkmgqpzzickrmzupnuqoliajfdgcjhwstdedlueclabgmfabnpzmijbwdvijpevlbaubzozohxenuzhvsuhduy...", : "uxoxjbibkpbellwuzjfhuznjdnaxlrnahcvzniubpqcwfjkntjmrjecgpooxkuikwpnskajyidpgvwjgluooqicffbaviqunmwjacunqudetopwtdbrsovfqspxlwbwdlhdkmtfzkpkjgprvfnbkwk", : 
"zdoxbfizfcdmydivaujwtnhykfhvnzfavscnoixbqejbjhghxtbcctignxifpzomvxjhnytlnozoyzpyujeogyvrghlhsviotifrblaxyhqnnwvozxaupwdvtnjdnaojk", : "keuzjvfryormtmxxtedbatkbnvhwtktundclqxqijfjxmlaalelrownoijxksqwfaauiekqyezibmqxfxctsgfqwyuvjnouueelymzunljunougqviutuehbkbcftbugcwbipwrgdupoxcvrvezfjs...", : "wzylybhozrnpzzjqktzfcivejrhqdbwlfrnyunlovgyjlggnjggziddxkkobcgjokecjuteurvahagrdjczawqzfncqhreceorxjtashccglzdxfjerncnnkqnvmfwsxirtttoesjbqauzfgxrhvtm..." } Fri Feb 22 11:38:23.406 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 850 { : "hmfsiurtnnwdymxquywmtyweiochjcxprabbjrkcznncohvvqeedmtqxoovcrcdyfyvmupifxjeuljjonpppilzllijyuazqfkdxmaizabmywmiyhyseqsnuuuxhvfnibnlveyuoeocvvudeyqyokp...", : "pftmxzxyljgtfhgpmnbigxcofoziubghsylkrsrzjjkebbvoftcplnrmhseiucqkslzvffkxxrgkxtsvfokcnpvragzmukbufqkgwayltmowqzsxcabwnncdszmdwylottucmmkspdwvod", : "qnlwtcngfovsnrnckbevvlojkkhnpqhtlgiqrnmnubamzmnrlzngtucqobrvjlfcddlwnozkgsykteddfjsmgfmktqjvldncbdtbcsjthbgzzdfvtgsysxyfbholymjfygkjgysuhwupwmjkpsxtir...", : "kykvlurkxbindpnuhuymkxdnekzppexkeskkykevptomiilpqlngkatstbfgdwkuvpjpipjaezmyiiqclfkzdhetkecyrjktoudrbzpcsnyfgbirkcfqdukecuwzeifplcefyglberovqearcrjknv...", : "oebxdbtvwthssyyzskknzhoeczsdhpemorxjcgvvawmqqskjwefghlzlvbartfuvouhulrkcjqsuffktpqdcmohkypdbyharumsuezna" } Fri Feb 22 11:38:23.407 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 830 { : "hnpiaosovnsrnfrpmyzwgozedimkodzarirrizwzojllzxlmcxscberdqmypcsfzuuquaqtsevqbnyjonfffmunrdmibyfalrxixiadonrybcdbldlxxthqsfflgmh", : "nzstdliosxwlvlxvyrelrddzqxsjmzddtsjeiphgqcmfrmisgmcrwsicbtbkatpncjbbizfedjrrrxznyhzrqrohbrlavcbrrxdcsqcamkpmumcsxgstbdarxeczteozdubdysledkrkvylmjcgkye...", : "aornbxjopgkucxdbenkdnadevppbiuvdakyjvvykhmczqudxckpxdeavomfoexqrsuarluzuapenveoeqfbhotnvkfsdgdwdbariwomdwveepbypnvuujwn", : 
"tehqokpestdijphqntcvchvoirhwllynpetzkouxaxftighxjjkmroraghnfuioqvgfgfembduatkuihpxbeylzohsxnqbhuwxkmrsoaudhqnfgpbykcrhdhbhipoeyezymytmztoxcsskvijojmxi...", : "dvivkowixzawiehdelfmuyxwjvxgaetzkgaclapjtieeuxxsryeizgupwfppapcwhnaipddhtpjnqodspjwvukdorjakupaobtpulpbftkeoxpwcbjuslkpebamkkptvsjlyuzzvhnmbulsjxaxzih..." } Fri Feb 22 11:38:23.407 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 831 { : "hqyrmvedhplqtmaavauxqtvqipvpqzjmknenqllcmdemcwxucgtiywuhfrawsgavdhbucgktslbkwmzlasybezxngtjueykpchwqdllcomwrtznaxojkdxzhhdrlstrimognbkvdgupdmdpqutgesp...", : "fqwirhcmtnjqsyfwuemfuuxocdzvwzjnimtlcgppajiebvwqxjdzdfgwrprcmtzouylrhjshixdwktofmnbegtefycdmlnkxiqhvgkikimxmbdngvpzkphwvstpxzepayrqbxevozzxajjkdqxwgno...", : "kszmdwrbawnwwofevvzpzbqxoeziehjschblkoodmfovstmwrscopztemygrulusccpnfvivpcjhnbztmsccvhlcqeaeanppwyowheugndjoz", : "lzrdrgbsampmaktlspvnjhqoxweyokbirwndcrzlzhtmpqeqwbeofktirhkconfdpyexhutcwvgsruuzdtzsskikeitcqgdsfaqqtuttjmestqowmncpfjmeqbvmopwdrbrkomscfym", : "ynqcdkiuozrjkuwrtjuadgahklvbgcwuzprzhvfygsqiuzbefbrkxadjnznglbzuctngifeoiroviiminvqsjbjrudpqudfhfmmdqkndmtxapvxoxrbyzexfsqfrlwvqvdektikdyolrhocfbftoim..." 
} Fri Feb 22 11:38:23.407 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 831 { : "icbcissrxqsnxfzlocqulgnuhiilazxooqaggstjcokpubibwfnhzsamtrnybqxochfkuesxjvltifhshinrmbg", : "fdielhiefonsslqkzlfltksovvgjkerhfdijnvadqpevczlytelokpjwbtohhdppyujdshpdvjtyynvodneovxttgwsusidwayrpnxmusnfqkbihcrtfmuijcrjujigdvdujoebsykvattjdhnyxdt...", : "nlukknevttrjfmqgxwrotagnonredyhtvioftllgszhthccnlocfolnqqucfloxucwelqshlbpmmzklvpzckiwglmzbfjyrspyzhggubjevojvhizzfrnypqqbxbosxhhjvpgncwzbmorhoqbisfxx...", : "nrxclhgvhcpqbhjxmgvifwklchrecusxpgembklzhgramztfrpzavbywzspkaptpbbrfkaaanxvrycruqtalsvjiccylzzxorlvrmvydswpzdgchqwdsdlencomunrfgegtptrlnmocrhfdcypjduhf", : "pvtinlvcibhaemvckbgsbuborijmxwmxdgzhlcggvyzkegvrazrmrwmtdwqwbreuhtfderduhpdlpcfxynssrncsjtgrllunpvdvbfjrkhsgvgafomjmfuzfobcrowzbnpcoyvaouymirzpzlhmdvx..." } Fri Feb 22 11:38:23.407 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 914 { : "imydxznoetxhftthqwvkkcwogxkdlpgehhwtrmqdvqqgmupsasrwoekwfmwecvfizqcdfvrrqglcqfzzkevnuqdwkndrkluutunilcfviwjaevywifgkzakadntmijjnkkapcgmxvgkmlpeuuixyvu...", : "cnmjcfpdumsuxwfzryflnjvrpbkapoujkvtmfwupzcqqbyisrrigyasehcpodbcpbfvwidgsgbzwvjljkbmjhsvfeogtpwfgnvbvlyclkkgduvezlweekveqqtkuzsdurfxziabmksecjathujykcq...", : "bdfghxvovgjemngyhxjllgbtkkmwddnhujawyecniavjcoaxkpsyrvggnlneqimmcgmuntthysbtnkojspcaqtyaahdnpajfyjoruvqrucrvylozeqlxvcpepilnnzbukrfcalvbchecftd", : "xuqrjszizhhonpwapfetbithcoxbsapjdeikwgemaamcizhhtjaekwepgjaemvtuefaepysesroqfbghgpewchhtamnuaismiqaanvilanambibgmsykwknzzthhpmvmcgqxzqercdcbjbhxaqkfxvraulab", : "xyeccuvcutvegyrfixkmpoubwyygvhjmaelzokrnnxhnxnpfwekozemvovmfolodwztmdgklsvyqfqdsauaalysfrfrqksvngewnnwvtyzcyrynobruvvhwwqfcokgkssxcyrquwngpbwyydvetzeq..." 
} Fri Feb 22 11:38:23.416 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 935 { : "qsohvsmyivnpeuxxdcdmelerzqkzwzzrjaxofbmacfoedegghmbxsddycepqdbslljynmrefikxlkchclgictvfnhwefyjepkibplwxaeuoxtnhydjejsainedjqeqqprlkyekfdsucsgaqkqymujt...", : "ofvxgbefxukjwxfwnwaulkysywzxslreunotucfyhtqafovqmzxjptajmyesaffvbojarifwcqpnzyqhvmydzlhaveerfdcyefhstyjodicrxoopdwmpbbapfpopcrtdoqywzazewxmmwxjasaigje...", : "wbnufqgqmvofqizdheethdkfznwxrukhpyrduxzatqgjxrwwmpbzovxocvcehvqguvwsfsdyhmuambkeegxrwmfgiyhfroavbqfkftoubciwminomlvzhzzssmckumubbbtpibsbezdassprthvqro...", : "ucgpewldfdiwmehlzcidubjtnaoubsikmjoqqwksoqewpishghttwiapmxxyfnvvyzkbdpzhmzxwvrhblqjhnpebpwtugmvqdwytfipenrhrzmdlidmpsdfdqanqcgttnanjjcpaqkkeav", : "pctmflpebsowsodtkcmslalwufwnyrgkzjrdsrkzppsrclysgflikqncwfwclvxpjibhhilbrivaqofegtdkmparwxiwaepmwpfddgisrolfcrzmsbydsgisvffsvnruafpzgicxgizvbmwqhyhbza..." } Fri Feb 22 11:38:23.416 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 887 { : "rarwincwfcqmnyshkwqeeuxrglqmzbyzkxudalnbwjwvpbwgeefmpbftswusawswrxsuaonadpfyhzozqlzdwvmnhtszflwexaggusfauyxrwqsbvtjfvwcnxathuefisnihrqjzzaefmvxqctszey...", : "zptkiwlpidniiorfvctdchcjfrmkdxlwakazwwsjdovooqiivoigujqibtmtcwsazyjtffmghtowjmpnjtlwkzabjyifutsychldnpawwbzytmdffhsqmyejjoshcokptjwfsbzdf", : "ickidxkmszgnodnvibouezobdpkyoyxesftdgbdpzttgahmvwgkcyfkywmyaihrcjvyiplwrbmqyyxdkzcsrbmhmmrjgkhnkjzwrtthpliixddojfnexicmbnpkdjcrtvjahtwgenxyrxfbheafpal...", : "upjfamfokpzwxbevlffbloskfncnwgnkpfesrjtmkzjffmgwdrcpabeieqtcmtrujhpwqlreqjzkpjzrvlxwkdwjqpfuuidgiwroagcpprcqhnsutwawnmfkndpnhiwswrdclonorhtlwubqgxndru...", : "omtpmzymqgmjqpygwrqaaycgkgxcsktsfuxxrywiglffkiuakjahhtfdpfezddtxhyoqefoihrtcgntmjlicvosckwaetgfqxnawzyoobjvefpohuxfqipfdnlyptrkxfuhyokowunsbsxcwugdlzd..." 
} Fri Feb 22 11:38:23.416 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 871 { : "rdhebopqttlbohercfolgydtxmtovqddaykqdcgaxheuntixogtunyowwyouzwtnpnatpmndhjiwpfihcjizfrotnvwmcyearovpjiqtcowbacdcrwgcncmycwqychzcsyfizcbaptocorzyegmgha...", : "qvkzrhsmzwdgqjbrvorywowpxlvrqlvhgfkxjxmvoxjpqkriovkzhrgeflxtgsetzfesnhgfqaispqqhdwfahymrkdwyvlinavkzlteqkuekiyzqppjvujgcblz", : "epfgcyztpzlasrgxoirkmtnsuebgeduddiskqzgmmuygxqkyiamqyqqjiugbmqhrtynnnhwdtfmeznqsqoahtjazpwckgctfbcfzjstedmrrmvmxoothftjfwhoajjtegndiiaimlczmqusri", : "zojiqfmcpuibhlnhnwcnfkpttkeilphywarieoyqyjdlqihplzailtojcemcetuuknhttitlcolonszyvbbijxlcamlntsjckvxhhcpgpzodwudylksbsolhkzsshlpqouxhgueapigrbjkovfbsnz...", : "kpgwyzhheskmqnmnkheglspltkwcfnvebhzlfdmpcpcramjujtruyubalwqnrtvzsczxlxvnjedaqornunifmvjppnnwllwqlkrdsjkcyfpoinymlnoqmdligecxfatkmoqmtyvpckoaqmindmvtaf..." } Fri Feb 22 11:38:23.417 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 831 { : "rwxkysqyasxwalbppvivzavziomultnfgunnhlscagnxdeilktuhdwbxphrejvpdnavhhhscpbqxzmfecdewkzveripmdnahfmqlnclogmsjgqoxrnlkgsthtfhuvazeolxyjskwxjjvlvuolan", : "albiizygjmdbzrrzdxoasyigwgascdowwkoihdslodxwklbxxmotgzjqtzwsqusowwquxqzyybeiylijqfpqkrhiceookngvvbjdszesaknvexufctmukseuuyxbceooneilqbuwcdbmylwlyupfvf...", : "fpdixxpcchhszlnbcycojiltdpzcejoqjfgswcvgvgicilslhtublmevhniccuavbfodgjqueawxwfgtfqrisfijeqgsatkmwfrxcwqqgilcnxaogdcvslthdanryknpzppslcbrroadfyqhkmqnad...", : "cwskfrjwkpqqxlxsuzbarqkqllcomepzhlqkuhnsrhcrlghrbnhgoqzptopbdzebbuoqyuwlrxdjptmoyzuqitqdvglxxryotaqmxhjxnkukrrakwtlccwdkfdgvjfngussgsokkfxemrjswhifjpt...", : "yjewxjkhacmcmdeowdjkrraoagduypcvweaygbgocyyxrmrbxabudrlbchfaultcxdgepfkgcacvlmrmjiazlgmafv" } Fri Feb 22 11:38:23.417 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 862 { : 
"scpfwzijjfchyofdrsqucgiuejyhkobrbdbsolcdorckophmqalrbrrazwnooihqltwcngbuznizltdrqwcmsvmahqpuekgbkhhuczywovifgwmkvyyysmfkftebuevcswtyfpvamomzhclkpuimms...", : "koustxcvjflbmsigtjgtehihssivmqcoprquobnpgosexkivdfegbqqkcgehhzrhgjkvazcvqeszesoiuhmfkjmpcceoihohannbeimypjpsdovizbzuqmsfvqnkekjitedpskrzgrkrfpltafhdpp...", : "ciquhwekvpltvsnqupjvaiqhczredhfeglmwjnhhsioburswbtemarzkmxmjzcrmpzbpjjtipiendvxraxzvpwowkfpzuoakzqyziqedeesmmulhshyhsnfieoojqdkysfxwegvpkoybktmjtimcjv...", : "msjszwfxtgrjekvoeebfhuxhzuyveujxveszzznhvhlmgmbyhtzcgujhvltkpkdtubfknfraxslbfbbzqarpuudmpjhakmucldgyykgsijtxzzialsmedjouccdkufacahemfcgnljfwawxyzxqfaz...", : "qauqsyhiziodbzpkiqunzqjmkoxdbhlhiowmbaqsmxozvibgyxnjoeudzrvdocpokbuxbguixfmixgiawgcjatrkqknv" } Fri Feb 22 11:38:23.417 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 828 { : "shtdyinpdynozrcvvuhcgoihzrtknrprmplukoapfintkbquxtytjurzjhglkinvyzpxdgwplqxygjhqmrxbnhfabzchakewsrjbclpkwmtuhvveqivnxjqaangcvcywkdwttsckmtzytnjupue", : "kbbbkxobzfqkuzzmfpolmohaobhiriixjevtjhqecjiosdmsfdpubgindylbglktuesbfezolwnvctyvbfdgphlboiysuxkpiiumidaeyrzgzbvhpllamlzdncfowog", : "bmmbfbycaowgnrjkiacuhtbehudgfpqswiqdkbsgszrtxitmibbtqipsruvzolbqbghsqkgoscfdmaxxhepnepkcdbviupzwqhybopixfurhqxrwlxkgugesskylvxorehlcdllecsoowlrprjkmjbiyagkwsj", : "gttwcgvbgtdutugljkmzhvdhcivtrqhzufyuwyylwfcnccfhisutdjsdlkzviucnmvuzqjfqwclhjojlofqtacjzhskwvomckpzrcufqxppbtaaygdcilitrvfklpiigrzykmmuguouaynrplixfgk...", : "dfcaecmugraldorzpeuvdgnudhebbbmzdgbhbzcgabwwrbxzgpdccekrrqmwcqmggntfgpbliycugunpyqrfhgvcumnxuawsmiyrrsluerdbxraokfloioruvhrceerwlhzqoubmfvbmrpyhgqytab..." 
} Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 841 { : "ssotvjtekficbpwskblnvcwhbgnoyucynmsyvzzymvkwcepefexrmqkaxrwzgbmsmuravyxwvpvrxxaocqukysitxuecaqryhdrsbyrishpyecwlteq", : "ecwwtejzfjapmnoadtqshekjeffercuknrlpfwaafjfoqnzhikibvdsblxganiclyksyxoiaxrajilghdumllsqletlzskjmuipvnojbqopnzakbtcdpfyah", : "nmaulywwpdaotuofgckdsgzrmlghnlrtbfwfcaclmqbsqchyeddfmhvzfwxrsgdeitcjsjdjhsklfllpwiupysrctbwlljkdildqdqgumbjheqljvreshlzbnwjobpreguilwbzlorzsvidytizvgc...", : "qaxkoqvkvfeqpyqacadostcbmhoannxqleaydxqbzsblvuaxvcaqpqrxnvfhuyfchcjkhlnktklutptjttzcucjlrcpitkstzeguvycyktfzmhwtfhdmhiipdvzreypuhktfzbeaatzrmbrhlllqpo...", : "ikalhvqiliwzobskorjarqblzttdgiazqtwhadajlmhfdthriclaaypacorainmeeyhisuvrpzchhcxgcuexhzqlafkblrhcioynasqmukhegmkrgerlbkbmjuxmbgchmvkoieoahdxlxocrkmcald..." } Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 842 { : "sufmsttbvalexlvlhdkicuheovzxpzcgwniduvmgmtatzaazcomropmquuwdnbsjbfbfkmafqmnnriygizcpelmmkqhiiomlklbdqyoeedvyewknhkbibtklacvhxpprnoixpudlmjeauhauxmopmd...", : "oenibqrszopjhpgyyyrxhphsvwzycupctyfhjtyczjvxhyjxehjtpatmkzlofawqfbqsdktmjkksscmdueurhdhuyiyrkkcomjpyzgjzlyhbhwlorywsbqbhracuzimvwpcurawrhvutafxehnulsn...", : "dnezpuytavdsxdjqtrhmwtykfaecjrvsxpcqipcwqpgcqbutbyuyszkadqlliwqujasdxzokzucpygjazxxzheecalllgohqjekgkfdohixftidpwt", : "lbzenemklweorvvntsfzxxhvuoolcetnaaquqxbjwntjssvglcqooebweovyomkhmwfpgqnhacfdbdknyowsapfxflwrojminwkqrzpxeeanmhvfyp", : "ojunornbcaqyytvtgpnfurvmqflradkwqzlakitpvnczewficgaklsuowzzumcrzvxhwtwogaxrzajadfjgpdoizxyjezktqhogowbdparjpgxxsaafgzikdwtfbrpouqtfstndyprhqnycldxvfze..." 
} Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 834 { : "szbzilhtuqlavaggkwdfdpzqcmrsxgjmomerkhanlggdoyejokqsfmkehorgwqidobkjrmhcakfsaplstsqggvlqgznmxxitteqjgdtudlsciyrnx", : "uacakbxzydksbswgirsxjgcpskyiritniwyzrobyepxdirqqmbwxahfetkrogywtclgyuqlzdclvblmttzxruwnjzrrzjgpincyviwmkeniteduvjqjgvhyielkjclhbpvfvzdoikkglvzsfeyceizhhrquyo", : "atzkosnnwxmdexfogennjjrrcikyymyhscwiyrvhkfpsmpnniylrtihfxoyulcqkewbvpjxzbkpxxkfvqrondxqtyaigcnzharyucauhomwfltsrsoautokqjtifeuoftfnvieqfmmmunlpjzsragb...", : "fczndwvkyxslnwjqacreltdcwhoyxmlzdobcdgahqfgwbzafzpfzvljynvifzlszldorwskmmnjdntfibucaspekjvjptivhmsdsgryihibmmdpbzmhdflnxbjkgnkypnbcindsmocwxaqwrkoiutq...", : "qvukfejmcqqnebeltnngmrnsxugyhkulqoyhjofquyebgfopacdzxvromckisxrqqstixlfjravocaplbwqesltwvlbyfjaoshynfpyfgrjtlptpqhtaqozfqqnjimpkhedikgkthoqyqictqqbkcl..." } Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 909 { : "tbugphtuduccpcqnlskcojbwtnpabkkcyymrmeztqiajpimrgulsdfuouugtwtztwrxnafzpvdubfwnvurpmjnduoouxptgqgdcgwltccprylqtotxajotcssmfilhtcrsokuwdqomjznlfbmoomai...", : "hznkdvruedapjfebonhbfktjntkkqwjmydwawemczsehjowdkbpzojhpuuvjnmxmriednntikceijxvyebngtxktnqhvilxezytkuxsvtpgvtodnroxapivhtsdwywihbqcmkymevrwrwmrffqyhtq...", : "xsqzdysomzxxhugkpquvozcwsiklryztwsiksebzkoornfiisawqcsydsjsdpikoixfkxlsdtnttaueuokxjoykbvnmtutubcqnczjjgzuxqdtzebgcsagvbdjqwwumjcnktvldyssfxtbijdenhfx...", : "diwsfqazbjywtlneyzhhlqghjckaltnzmstpnhjxjtboqrdhwauozaxtyuajhvznynwvuizeygatdhbkswxnarsrpnzpanxtqnqdagmyvifcngdmeptmyznpqouoyiwmzwqmbjbenaclzoxilplbbi...", : "lrtbomeuqkbgbhqehqjisidetstgdhiabxtwfyczkdqlhkiarqicltgpktddwdotezqeubwwkwfekfmrjpjlvnilxoihmtyvyyavbkwbbvimnvfezrxguaicpceisqsugbyxcfjvpnamwcvjsiqqoo..." 
} Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 835 { : "terxwmvemgeuwitxuttfmhljfqwtuhpuejrnqclnbtbqemyzpqaqvrhclofpyxslmhbggcuicbjakvekgbwugumamgroustfsqtfqhbtdzvvlbzdootaecsjdvghhlflnyxzvrkbiaqhgeuaznpdff...", : "orlllhgvhvunyokcaaztzipijqmjvbniirbvfqyivfgvqgoooqpkmorchvwpyetftzbjzwcakyecbnvndigbsflbuvvwazuarclrelvdwtgclqwxgzaolbxotjnufjlighkuxiwefgegqevvqiyhec...", : "azctcagbpymlyhcifbrimjepflgqqgalielghqehomftsdfsetbytcyqnvxlxykmyoaiypvtescgfihdzogvwimyjkxopezvywiaazazwfllzsimcnasshooxmdqwielylkslemmofzhpjaqljmuhl...", : "mrlthazvjzuxycntocctdrbmtyptrptjjlpijjbefnbdiiwcpwbdewhlpyzrhfjieitergrhxoudxmvqbdyiufrsceisejrdggumdlyuaprnseewltdfwcnkisgvsxggecmxmzkigclnokgilowixepsqj", : "nzzbfsnrvdfaoqjravntpwbmvfnexrvszwcpuxcbcdwhjhmbltfxlyueisoykdpmywdwszckhkadsrjdvfmksxzegezcqccjsgifviv" } Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 907 { : "tigczlwkiiawwioyiunzxvdphpeoxjtycvudtvjukvzlhzvzygiuwqcdoscmepydiwjvorzgnqtkilfyeaxwcyqiyebxucrwpxltoinrimoatreqbybxoctxrperpkdnyddtjeapnvgwqtnvrlhzpo...", : "vsxylienqivslrhvbetctibkwcpekqrpnempvrcmohqxmwttycewjtxgbfpmmfbvaaogtgaphqyxnvfkvmjffgjyjwfqkamxglrwjvnwsmpgyczihcibpaffshepikamjwfgjtotvfilqyrmxbuhot...", : "uslywaypzpcvdcsomwycwtfotepbbzyuvgoftqzwdxkphzktlplkfvburntfnpxapuoagmlgjczjmdudyvggrovsfozvlpeonrlgmpzwhulvkqnmkbfxeuzrmkirsirwamaseqrforetgepck", : "erfbdlrkojxwozszafrawhvtlsxfkrojrwxqjgcdloebglheymvwioaejjuzpnpldtzncvblmxgksdohehfacazjxhyieginymtjbidgaxlfcmjtwiwaobuywcyxnasdehbrhkuvierlykqgdfnemn...", : "tmhpozutgsjhnyqywabueornsultunveieaihiavpowontexmmunkhdwnzdeafuhgcrmmupzvggyqheyyhjlwtccympqmuwdbrxieflaxuvuiihiktnxcpjtiwtanmxgshoeibibghelwluqpnhxna..." 
} Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 841 { : "tljdesrpjsdcttumdjyyfvdkviiilxiuqgacnhczqhldycwcvelguyejjtjtmbztctpgxctwvclyxttolmgyyfulxihmpvaegelcwbtyjkwkjbyjmildmfprwqgphodmdkokjglqrcjvpjgtmjzjqk...", : "tjebsfugiovyxkkcoywnxsbtxceztihjugswgjgomcphtbkzjybxzgivtkovnogercfklqndnhuyjxwztgxadwufbmpenhhfvdnzmkfbiudguureqqqdxkgrqdxwpfztusgtivhytjcqqcadcdilmh...", : "eowcppcvsdkaemjxvqaieplasdpsxjsdmmbwcczulgodarkjlurybgrfaahefsypmgmgthotrslkmqvixcwiwcpllumjguovgjqwtlqlehibdggqhiifnmejfbhfxbsawxposahoojbseqicsynpkw...", : "kvcnkmmdcufxuwzkqkqusrmxjknjoxsuvaljbvouutxjamfzfomjrrokekewycgkmdjgnzsdlyumbzyobrhizslppiquegqeqcybvnmwiictvmvsgsfnddpwetsbfvyuatjpaqiweagndqhkepppqy...", : "xukaezsikcscqzpemrmzevghnxbzhwsjvmiiponobttcbiowutwuyjzstpbmgwykceplcudiiejgztjxmsrqzlxwrvptornzdfostaysjh" } Fri Feb 22 11:38:23.418 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 853 { : "tozfjntldqmvpnyusmsgufuavlkgwifsqslduoymkunovnlwdxdrjfttvgforqwbthphnantlpsyuegktapmwkawrfeqvtypapkhtpnedoffnqmhdfcjyfnzkkbflbwlwdasrltbxihobpztlbyuip...", : "ceijbtqsndudpyryqatifdohvjvvwikhpclwsnlulwzxlscudrkqxgrcezblaozpxaemkivsqthtyujotxqvdvlygndwkzposxuzdgbmwyfzzwvbxsuplwbnyqduvqahgstyxcjkgnkwigqroenlfy...", : "vxdlcopbqdjbfrwrbfnxstdfljzoymtaxyghcseeiqlxihssqppymozgvovoxynavrqcvatochpwbcodqpopcbxhceaslidcklduqvlizvtlgsxpwacmp", : "neqxvtqxexmoiugxmvvylqbbiuzroakoouskbdumgiqhjmtmcrhhwharotojvjfcxinheaofcmhwwtlgspxmtyznitytmicflfxeejauostofjeltrcajxdynejnonlkfbsqssokooemgt", : "qubaorczxcsvlteblhspeamvqbumunzfynhuoffmtdfhakhzuxfahwqxpfkoqbzsarzhmasvhtulzipsejwimqagvepacohqwcagvwrqvfuxubozxbdsekcnkhukgifjasniaaztcwrcbphytltdus..." 
} Fri Feb 22 11:38:23.419 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 907 { : "tqwomgexmehtidzvsfeiefeofdwfsqyvaluavstzbpraiqrtkfpfvghvhtfxckqpgliwspdeaqxhiaeyholvxaspjktmsvqeqvaenufwdtijdvcxpsonkmodtiqlpbglliblzjeifzymbvhalspyxt...", : "hkpillsbwpfttwmbeudjaudstvlrxhdsquthhqwksslnmcgmjglgolnssdggihplyolceorzgnjzzmgvdsoktymbtsaxabxzhwniogtkyxqhjufevqniamowckbbrxfkkoheyznksodbtxrlmftu", : "jegnrxoylmqpdfeyhioeudjkbbstkyefhojksorpjpsuwgkfvjlmshvqppxvuefnaugrnggpteomvqkooqrmcmkyekyluwgspwzzsqmxttctqyewpzslmcovoksbjvythlnlqrepvsdguceeibbkct...", : "myjzyaigvimczkppznjtterxonvnmscujrrxfcyxffigxoppokkremtyvtfrasodzvzvdeuhnhefraqovqmshaisxttfkidwmkzvfumelqsurhowubzddymxuxjrferwpbsleqjefyzfmzduyepsyo...", : "ghcgddooqxkswpmxrnxsbizuztlsgxolnbaznnwdzsejhipdaxwnamshmlvoanikkqcvpunvwtakqrbpwtrqryunwgnlcpjklernzziqmndgbightnbzjsclunonvjtwbotctefkxbirdbyeckjsyz..." } Fri Feb 22 11:38:23.419 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 865 { : "tttzsliytjhcbzvbbhffuwrjupskhrrupadmofqfxtqpihouxqqvmzzropkmavxyeraledzlmjxjesadpghpaazvcbtdxshhtbimdhnozztczrpcxcmefozqcwitdpubntshonudeotrphzdojgeps...", : "txnzamwsyzivkfvgkplgsocfmowexqafytmcpynoutbpsmgjjdzpfkrkwniultcqmuetpsibhwhzvipxhtmtnjwfjyzxavpymcfnxotatqimdmfjmspugzttsroykzckfbhyawnuomjnskwvejvvupumywqjwbn", : "zeuqyuvbgdlnbjrtslmvxeukavrogrcigfaalgdciojvkgadrsuucmxadbzxbbmsocaqrvfooxwuossjoivslypflkqrkhpcmcvxovhqqnmtvdxkaplnnuskhtqfyateuofgnhvzlzqzgpzpxzicqvllcjutx", : "oagcwboyrqyentiyvuzgdhpthrjqtpfoaftqfrjgaeeaggfpohfjfywvcxrfawglmuieqlcklshvsrsxsxsmvxkddktyyucyogbbipqiurnbsedtbvnadywkxwhxsmkkhqcsgmkrhafkvxlw", : "uxpauleiohkjbjvbityigismhusnycqozfzhcyppeqxtqvanhjqhdbohhtecihdccvhoquqjukepffkswkypivzamsbsdshqfhujbywmbuenvofczlhtximfrjahjrouyeyzxzshbabvlliixefyza..." 
} Fri Feb 22 11:38:23.419 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 821 { : "udiowhhoezxorvoyagtvdembgkfxjahaqrmhxjfaipumtagppwhlszkbpsiflpnormyefvwkussmsbyrrqfauayjyvopynavdmqprcqokxibujpmhlozqogmlvdaaoafwstxbfmrkwopfautzlplpl...", : "kkehdbkntxyucxsjjiwetkdujphsrzmpddtqjxkkixoufmjayzmdpbvqgnegqccdjcolsmbtakzhjtxfahkdiaogqgfpdyemsgdxfjloavcuxcnsbxmxlhjzbjzlsxkhovqfzamlhyeymthxirbeur...", : "onvdyzegkvssxkfhqxxjhbghsobswfagobnchkayboipolqwglgwxsbkenruavdsvijanqzjbsanfsqeddwuyaikfbxexiaovvgwdljgfwhwmhxrfintdizhrapnyx", : "cbtxaypqzhgjnolkrlzvvnxcajvjuzupqvwsdjamxbpfnamfisqyhtgzhzpmhliogoktfgswfhgmlytsqvkxwwxerhyaqaauupkwq", : "uwwqenbwullqbjqlnkpzdswcjabfipdylvsvymnvdfkywtxlvqirazhjzcqrhfsuykkmqekhozamtelkrazzaaqcrfahastcriofadnftatzfmheyanofswxeplfvtbbbqmpkpbaqtsejfxgrgsydt..." } Fri Feb 22 11:38:23.419 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 868 { : "upjuwqlntcqjyqdetdcuiddaenuqluncfitzwwwigxclibcnxavzsozchirprwvnunuvkhnebnjizqcgpiszykoliabgjjaudfiafjmubyixdqteqxqqnoqtxymabmybgbfksjjrlvccvhdufcouwv...", : "wmstnfuuftosuzidwhenttpbdbmgegtiytdhwuiwrfallitqltmjchygansdkanhqklvnofxreeuywuxhdtrldfzszgrlbgqvjflupnadhmszgvvyoafnlvuhwmcxmeduabl", : "zacfsknacwcjrkxweuvcabtdnahbeqmqqyrnaosjcmavnfzwitayuofkegiqeufaohzygqzeuwsxsttvwxavytivpueuvltqsmbszmgmtuzpfnwuraeeumchugndjmfaarpldxrcmphenzdtsmovzr...", : "iquqltiyuujyjmausmjbjkhovfqpbgiuzysonsbvowrdaxmrxyqxpfwyjmengezpnjmlkjxlyhvixzbvgzguclswrwujqrxspgbymlptaucrdogmdfqzpwxoatwjhcmgfklawsnkjiwmtngrfsiarf...", : "eoevrfvwcwrzjcpkokahidaftaakdlwcjbhrtsweghiwdvacmbwphxohobsrokuvmcpfjzflnpbvfhxegiymhscdwiyjfvxxbsanewfzceoqvaezjmldumeyiyivmutcslezkhixevfiomculmk" } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 919 { : 
"urfravwvdqpxnnopyxtijvzckddidibzhehyqgjckobzilrwviercwqogqyiefshubsxweubcyjmfrvoatljibhvkujizdmzlnlsmwqohateehfxgzwpebqkzyoriyemmbfmugttdeiuwajqhbrngd...", : "swtsiryspyrjikvpmmcqhhdxuagwbdxhgfjefwvcoeumsxicybkrnhcosgqtxqpsorkuxblpxvylmzujavqakpkvwykrlsqmetlcyqltaegnuaexombgoncamdyjkcyljiouasluyvgczhmafdofvc...", : "ptwoljeiqegztniuuxodsfkpsdildefvnvripajcnyugazzcpcmutzlzusjmtgxxoiorkawdwnibsygchanipqajahboxquojmcknvabvqihegwlfpsnczvdotsbraesutticstzofbyikvgsvzesr...", : "wvbqjgtotfeowulfwmykggylufytabjfylrcainwsbtzldzmvjauanbreyhlvjopvfdvlgoqtznogcveoqhymtkywqmocfwnwitxwhjnndgtwiyuthvqcsfkkssvbjkmfkmtzuvbpemldqotetmwvk...", : "abafwronnvowcendvccuzlorohevtmxmvyoeunzbgkaygflfebtajngkcodcwpmuavthaskhmevzxsoemwehdqbfpdinvydylmdifdghngvdcloymydxockmulhgxgxnszvdgoawogiqfyxzqczfbd..." } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 885 { : "usxbdrnhgykmhsdowtcxjdywqztmzfrotpblaccqrqfevfegfwqkezwsitgyzbiuxtcozrmnhagqmssjzmdumwmmvijdhhpvisptmqqesdjqahxwcwlzimstcyoywaamvmpmabljimtznxtjeruyxh...", : "spzcvpoblmntxofwxefvexvmhumjonrvwejyjzgsgfszoslwicwbmsdxfnpvapbzbkmcybgnndrpymlooxasokqmqpnabuwtccirfbpqfuryrkotkqadukngxklyoczhsoneelscehsrvzhzoxvzyd...", : "oqynligoxjrcciivzuubmurfrzkewiqtchttxgrdfehxqdkjloshnriremtjnftwbasuojjaexfkmfonheoimrcoolyujjezrhwbzzmiikxvltjrxpxxwkqhztpopsgfprlttaasmpagbjcqeyqtfd...", : "aeoxmjyzktpgmasappjrxviokwnazzurmnpifdqxoztuqntkvvctszfpbhjhkytdwkzhrckjimlplbqziyestyxfpnpbmthdwwscircyamcjdeasmedowbnjjnqelkxmtixhkgexfrgdxjdcferkuswrgtcrhu", : "pxsurccsgwgnbrbwnpcopbejohdpyzyzrflrejoltguoogrpzeqqubtycazzkerkovjaazwpvamhggolguvwsareaeebwlxqwhvgqpizpewnagdzu" } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 827 { : "usztkviqqckbmmjfwxdbgaspnhfnnqvcpkbssphxrrhugjkltyewdomjndjdvwvqfkdqt", : 
"hgkmsgvarbnphcwoxetnjetjpucecxlasawrrdfvypsljnfqaqvdoknvautktvibgyhoaexxkcvibanfhrmahlztllwzrpobyhxulybaozqfwgqmjunhqxphqxdahoubgtebnbkuigxieacfi", : "xkokijoxkwxwitvnnlndhkvilcfzazpoqyvmwoponbrtcmufmdusqwqbgfawcbylwtcslhlnpjnhpmoztqhgxwqvknmfpnkasxxpunfwrtirwpyjcpzigvbmadirkzncormnjbgezmtyqjzmiwpjjw...", : "hufsmtsogltgjgwkijbwxrjiwranikqykgregjajlmvwolirnivfegrqopfjqdwszvagrtwmomffstyoinrxyitsidlzqnnldgrlcgsckcfcswxwcgozkpkusbeeryproledvxoplbkimzdtsxreyi...", : "xgwzjjbvqiuuekuifpnydnhilrzzariokcwwdbpggfsyrwzkdvnvtyhdtumwbgjpzikvuikzeniedhympiimthzvltflhgnbbrcahvpurwnfsfntliaamqqwrfcysykkzbinfztwcacufuzyqjmosr..." } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 875 { : "uxwaeoktbgzbnzsbbgjcxpnalqjlvldiqjkxbvgryuluefoipqobsglqssgbglpizqwmksqukjmyuzzazsfmooqttnvafhtfnzuovsxvqrnlsdsvlijfjktutvnuvrxubyrylcxwvp", : "evftwjoxaojtozsmtxbrmndwkbuunggvaslksftpqguyluiwhtlecynmrzmzxjafwnujlnkyvaqyhqfntgtsxjfomgmjopmxnyysqxhmferuhazgkzlatzpxlizblpjpuiiugdjjmojugppqnawdvc...", : "lquyddzldparegdljjvzenkszlpstzzbyiyeqermusqutftljhjjdeqaxspnptrqkbwopjjfpolvosakzwmhiowtsnilnacpgwmzqacswjogaobligemxuwtarmmavschyvgjapmmjgjsdpxyqpszm...", : "gflpxherkspnqqlovhidlgfgunhibwlporztabwozfrivkmumswgqmujgtzbftbczvpmefiyvayjydlfxlqeqnvucvdnqkductismgasdvlycxgassohgrlqbehbsgwqptzkpjuigpeiykhurxlcyz...", : "tbcqnqzxufwqycbkjmhowhzssznyrgkqjcrefvqdjvtlvnzsqezvlnhbalxcbcbahikltmtbyzgaxwoaarfnzgofqvdilchcnkxxewnzhpjwjonmvyuwruwaextjpggbsqfqroxewhgzohngaryorv..." 
} Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 875 { : "uxxkntaioaunuutpggoynsiancmapuwutysuvfdocrcvhdagvxcvxhfpxfegbsgsehrteicktxzkbeblvpabgxtrzqfdzuzbojtvjsomxvhufftnibwuufydegphkpuaqac", : "bvrcfbbttiosuiinorgtmlipgvyrlpybtggjvsjhkqpvxaexotuxaaqfgcjsmyomtluabjmpygolwdfwlyitmhrjaxmonjitgveotzrbvgrxchklwqbcszwrxwkksotogubexdhkkpwdkkuafgzozj...", : "edvfrszsazrdugvghktuexrltrrpklxvqklcmofpcwnnlobenteuypsbbbfywxuikqbtjxzbdreajberhhbwlyadmqkwvsywdfsyevgxkqelkobnfvfqefcbsqcmhieijmaeczceazsisksrqgckag...", : "goshpcxymxujciwjrfdiyilqielwbmlmebvjadtpwclskvayqweegplyoeuhzzmsghtdszgdurxcvuzsduyagpihqsanozjtloaabouqvzrtscokzgrnxvytkpvoekxvjzmzpyteadpimtlrpdavia...", : "netcyyxqbbfycxoqeefgzszlltpuboawvdjlageumspqmnucksfzoxohatollshbqkaxqkcpsqlvkncccyipsxhpzpzejeymyrqzsrfxsthhpxmzwymztdkejhyonofczlduaqpoqzrlpxkjoqpltj..." } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 831 { : "vcyzmdyfydxnxpeephhmvszhqpaigmknabheboaucbfgombufovdcrgqocmyjntmlsolnhvda", : "rsnthdgvbdvygdmuqbnoazarfcwitqlwzuzicbdgtugapxfgdyvjbartljrnpdgaukvuaykhpihbimqilbxjcflyjnckpxjdmjmmwnpnczixtzudxqzvqpnnvsweeuwmimzlwdkwpvmenwsxravoor...", : "lomrjgcifrcqfvlxgbkowmwlbwfooqflzwufhcwdaneuseeggughgsasgqriswimgasafbgsjuapjmjjpodufqrgmmaozrmjdplevvgmtozlihfnaskralvvqokuopibjnnvbyxmjdyzzyvnduxshb...", : "qujsvoayqzvrytvssjucdpepqavtzuxljmajqqotulhhbwujtkwmufcjifafbyvmwqnihcqtxofwxzgovcziqawgwvdraxowgrgnlmejypgzmkwgqlxkruplgiucquiurfkuiwmpzfyahaowvsipnb...", : "wptswjcxvehlsqqyaywpwppqkucxmivehoddsoooozipgobtxatcnluzqivgfpvmzqvayzonnztwuedxcoyaokvvexslakcblmmsdeuoxmkxwqalhlbdfycfunxctfasiyunjlhyabakrrfudpsfmf..." 
} Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 821 { : "veiyfxwockobfekdeurmcguwfxixqevphrjsgjyudiwpzbidttghsinkxywoxokxhhuxmcuayylhkhtniluphaetbbzasszqbihfofetbwfrfuubhsmpjuzqxavsggngidpedlqwkxmddbhtrmtf", : "qtbdrrzuosmfftetdmmkxqvwevbvudfpfejxwcymnzjlcylzeqgxvivcwssssohtroiwqxompmjblraicvziunbllwvombxswzzacaqwxcgsapvhzlymwfgnsbwhk", : "kvfesalrxuciekrophpqsqjtzwftodvpyqsqhwziuhmnmrisyuegeppssbzabwhvnmyquaborjwrcmbihdqwqnzuqpsvvtjpatcarmehuauociyaytmurebzzypssksyevxmrsyodgfpzdgmaxszgt...", : "kwasugjiinuvylagyayncrjzcdvvnndhzgzqidntzripmdzwsjhyapcqpqudvkcavbderippnsqesbuphujnfwchmtlyxoubhprpjvsdvtlxdxmozcdhytywapvvlliqoekthccutacro", : "fdwbgarankmjoekcwfqbtgqedhvpndfweaxlxgivrevcwzeyrpklsmamfonohuqxsywzxcfxjsaxqamycsooynhxwqdbmqqsgblhrnvjrurwgtikrrroocuwryvavpmjhoabdecvtglyskcfwwdpjs..." } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 853 { : "vhbuvuhspimbglnkvulqdfigvbhzmymxubarxitmrwxjoimyrjbcewvbxqadmdcktggfdtshhncxyyfhemhpkqybdxqttdaqjydribputbqhfwpjekmvkhvfnhajjvggmmafszdgonwcqiwqmnwngk...", : "dhmxcoliqhsbscdypoosbozhjctypgdfgnkpbiexwypawlaxfykhovxcfgbwfrencjvnnyuutbyntsxygvdsvlkokakdbfvgizizttvqwcmyukaoqsmydxytgrgfzaswhfwsahuygfjalfjbytnovt...", : "ngymzikwufnnhlrrhzzvmlaealvwxcnlunhvmthfkqixuhfsyplhqrkvoxoycksrolikttykaizmjecvqokpcggrvrnfvsvcqwgfygvqonstsfeoafvoshpmbxhowxqlnqytpnljplkvuyivpylpsj...", : "zkovpbwpmziswfcfjzbjcrvfzjmunleekfnvemecpupotxrpcuartklgslamguujakuauhnowqlyqagvnxojstdycpeiph", : "phyxnxyjcnkqpjtaopxpnworhmuqndaortwbjsopkbdufizqyklueogarieyfbtxbzomqwympybljmkyoamsivapwfdnpezoorpcitrngttvtlifovgubiudhsgqsvylmrsujfzqlgvfykwdmbbhff..." 
} Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 935 { : "vmwaxltosjjgwsltzidphrnxakybjwkadxkmdfoehlzowweuehqdlfzbxbovmfswtwndngrbprjyygwcjjeivjjzgvossresaoimbsidvtrwuxrvhmvksumyfqnlojasewtlgvadcbyfsyaeujhwvh...", : "lrpnudlpmaodbrksosnuktqbffekskdnlfjrredfyfiqjabbikbddjeugfnhpwcaebnqrflfjhugemaklzpslxczzpozzxmbzibdqlnbcvozaawxxaennugfwroggbcjlbzmclxjpchljxruvytegr...", : "wkqmjykysjkwfyeskchudizshmryhjhqoysdswgyskittnoxeoevzwvyqwhkohlnfkgexspbrmcqpmwtdpllbscikmgkaxpifxlozbysdqsyoasbyqogmoetzdeoybvxkjardrqkilmojrpqryjlmh...", : "nxjcwycugdmypjbyiuhgkstvjsdjbucajfnprjgmqrwuunetgsbluobytpucncpqjgqihqgrmgntplgrsthrrxptucaczctquljzmzkfnlnyzsbllfkibkjmfahsfknpwzyqnsjuhirylbpnhbmdmz...", : "zyrulayazzajtvnbvurxwsezlwqodgaixdkpgvgnxpiqtvlrlwlyiakuvprkcgcfbodrsygsudkwljypjxjoytfmnxhazrhhdclafnutrgxynuwcwfyeljdgtnpekpokanskyedyyojglxlpmyptjq..." } Fri Feb 22 11:38:23.420 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 883 { : "vofzlajdjmiiwuewynsoeapzajleyvjkttccdcyszaduaixsuasjrnnmhmyvqxeuwofphpmdtaoscxoryklatawzrhyaguhwzvibthpwyyzujlisyqtyvddrtwnwaqgumokxirptmlblzjcexsrnfc...", : "esvfqwmbtwjlcpgmdcfgwzuybftkdrtezktdwxbtzmfgefbiwzcisrvytyanwbdcwfrluunwyqdxoauqtdjmtggbiwzuifpmarochcxrslijvkoaobygcuuxvheganhvqbhislqihpptewpzvqmqjk...", : "nprvbdwrewhgalywwhcworsjvqpobrdycjlbsrxggwfnaijukzeuhisukcsawigbwmatiipryzexjbribhnycopzpgopjbkwfxeovgrwkodefcekpqbuyrcmijufewjgcmatizu", : "jtlnsbcwhddhxgtsejkhaauactazwfjofisqdvmmzshvddyehyrsradzkwxirvuwwuzpwnebdyngxixfkmixcosxtuwmcydtvmpailziwcdypbrewcjqhnroogjmmykaihgzbblbtfusqtfmppehhc...", : "dbmuoegjcnlnjpqsuvqrhazxeuccbuwntbozadlcmegrtpenxxcjaxlcualtxzfndkrmsudznnykwxdsvhmxgfhtjmzxefzogzxynlsemghxdoyuwhyrwlrqzrfzxdydtlpphrgxfuiudtviywcpwe..." 
} Fri Feb 22 11:38:23.421 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 829 { : "vulmnoubyksarxjakonuqndfbfptyglobeglwieymyymbmfregldsbgbbujsxxutsnysokwtkxgymzypmaztbtcgetdnedtxrpqtnyuurkynhvvnecdxgddpxdmohdrxubgvzrwpenvweygjyrorfbwi", : "bozaoatrpbwosfwqhhoesqijxkxdxwsanaosmfsuxelcqyoquktsaptvzlarnyipofwvowimpcyxpivoxsimfshozcg", : "guyadafhvtgxfalqcscrrmzoavrsyhorwjoyezokvplbisuffnybalajsszdyroaqqjvodprnhaumpkxjifruanrchmryirvgswdffvckvjveopkotxhvfhohqqxzbajxsrayliykhbrwehseminaouo", : "vdhzeihixqnpuzuxwxszqeceixvehhjjcxdwreqoswoyqjqvtxfxpqixgkozctdztapmbutgoqvtbvvmiifamtiefsvgjukyfwrtmeimzoxssjyrboqrejnhhyrqbvztxaelqzktqhhoxsasmjfdeb...", : "rkrirlusajfrhiyvwzggcjuzcelvttziuiuotwdorzjfkqjuktgqwnqdtnzwaogorhhbghxbdbdlvjyczllfsiayohvzpbhpkcukkztujhoqercyglynenxspivutehsfvubactxotkukjpxynqzin..." } Fri Feb 22 11:38:23.421 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 914 { : "wcpcfgpivfpjpvgynxeqgrbpabztkowhnrnovpprtvgkhhdhrmvwznrwqcvyxobjwjxqyrjcilhwawrkxoipbjlofxfupzbdkazgtlzupkcamkdjprfiiqmgeagrkrolixlhvvooilqgrrqruuhoyq", : "bqhsyhfeqopjwwncwzohzsejphdawgmqrjxpdpcqutakdwcqsueyvzinxphgzteulampvuqnkglrjyyerpzwkbjdyxegriohguxsuoalsdmrwniradcwmukhtdevxwgbpskmtwfylgdmceiamryfwf...", : "dligrnpyllvdgwcrbwninjtltjtquzhvxywdlljwkfskzwwyatzaebiqaaieszbtcnuadtfowpnidwursdtegwhetdnnfkmcojjuxschumswvgztyljvsbrgkmqkpejfagmmkocgblplrmuzktfgva...", : "cvtwexvddxbuahterzpypnrkwaqflvzvpfdjdxzovsnmwfeopfxmpcazgmgqntelyvpptzbjziasryruvaixlxzncxcpbkmxqvxmkulichgzgmacqckaewknjwccblsmzmjtwponogopwedwhegjfl...", : "yvvxnghlaihtxrkyqkvjuwwaknhermpfmvdxnsnhncbqsembloexjspkpnofcvwryzauekiudcneuzqakjtpgyncixbewsisopzdwuoehtvudnnxvfboqvbspvmogazcremmtbggmvawiaidbuksml..." 
} Fri Feb 22 11:38:23.421 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 854 { : "wcpedixczcwjcmyuzzigauhbbekteapkzhjoptfhrpbnfziiwreklllbgm", : "gvsdzesilloiflrxelvvzolcgaggfpoecvhjtaprlptvufzkbkgergdriyolcccwmqwrudkmapaojajoukhntbvlcckbvwexnjixrhvprntttbyazjqgajutbuqoabyahsgemjtwwbcagpclertdum...", : "nwtsdlgptmokgjfnikshpvwtlygewumltmxaltzsdvtbwjjzdjicksymkhcleuonzmkdtlsomkqmuskfzwyxllzgcnblrhvdpnzirgvygrgdrizrfwpecupsjxbihtmbvxagiyymvriypcxoawdueq...", : "fglpwjzpznpsvwtznbjlatihqumnfmkdtjcgcphlkwuxygsistslytetuosvobmibozlrvzphxudtvrmokfjpoilfilxirfuprgbydehractalxlkphqxqwxioxejcigpyqzkkxjixqbqwpwifvvzh...", : "bnjuwdxhfckjbriaksgkovixbmfdgctyfitklpumxxpgjftsocsihhimguedxrwqwesdjdplcgpbcxprdtvkretarpptpgmtjkowqqjslsrsmjtbwipoyiexvhbndsmquwymnycmihpizvdoesmskm..." } Fri Feb 22 11:38:23.421 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 825 { : "wkjsnwrztzbzerkbbiuqxbhzwubguhnzgwwafrksisrrykrwwjjcqvwafiasqinksinvldijbrccacztwvtfbdejomnie", : "ewsvqpvnkeqnnccydypqsftumvffrkmzwszfkirkdpzszgpmgvzpywwdklpmotgqqhmfbqbpjffoprppztzicqhqcfnyetavtakjsqtmnoqdibznnhyhwlshfjxfpkyxdnbbofhokjvmgtftkzhvvxmjuc", : "rvkxpbkcoqswvkkokhqrfrotdpfrmluyotwpzklhybblogqvpjfhfyugdtdcncvpzhrjmjlsdwxqjfzwolptgcqytuzfxivrwkdgxrjpxbjwgaocqkloiszfreuiecjgaygbgpfkxpfemvoebabvrq...", : "irxkyzzvgbdlmkfdgiwfoabdtwnzpqxcqleudgpfyxqmbjrnscnytmlwnafccdplwpqsjghnruzjfxluhvfvxhmrveouiloqyptcawxestibowvslbmrwewpdbhmbvmrazeqhqzuwzzkvxznfiddkl...", : "zvovnbshsxovsrfmzaxsxfjurrnloisjylhlpyagcrtvolorskwjuxrviwjokdszexmijknatixtwmbeledrlbomlvnabjxdjvfdtrsudyrctjgszqwpvgiqcwfyzodtqjnyptqkaewncnesprdeqc..." 
} Fri Feb 22 11:38:23.422 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 844 { : "wuzkrhhkjajtgefpetaonvicgsxtblozsvwacoicuggfvqwxkgjphttxqtyfrugajihutyiwfdoxxszxxvmnpdykihozvknlhfmiodkmkrbysmcfsgnspdmwdoxlokzvackrz", : "tphliuzaxhtxhvzaiiwlxeexknxcrcwvwrqqnftphmythvvemrikcdpdvtkythrbmafzpjslkwdgtbubiqgnvrdonqwzsidnmvsvgswapngpsgyfiuilbkqnwbqdknqiaaxewdvvjfbmvlughpikpi...", : "okpqkhmziydgwgwubkqjhmuoetizyazaomfjyvupbdbynlblyszirmuxealxalxxuvmrnrrjyslozyakricxkgzhbzvsgblorgwwuyprisyytmmylelhqsuumcsqgvqncuyzjgxidfksjer", : "xgvjoikxpdoslgerjmrwyzpktmvijxrebqfbgeqjyuzvjllhkmchdyjixuofeahgopvywycgpxytmbpxkzowlqqwznbkgoaxjjcpjxbbblmcfmmkfkpzmdqfreabzcafziriwxopilyhxnqdzdzysc...", : "skrmrqwruulhrktzojucxisqouspiadscuxwstssbyhbyicxqszxuvhwwdqxukykjlftxvgiiowfauivzkpfijuurhyruetjuxfgbjexqkbtvoszffmwksvjtucguugvfhvydiytqzzimshpnrtezv..." } Fri Feb 22 11:38:23.422 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 861 { : "xenyxgkukrdvkrlzjedobrhfsucmrwbvqocmumxiyiilnsktcbzezrndgitsyowocdicroklqltfviilrvpkubrhplccdqxmlobdbhfchksflnyzmgntzrojierianw", : "hvngomsunwzqseyzpouujeztxymcqkjoeimzdscroyutfxynwfvgycsimmagbmbzhsedwmqkkzvztiokvcqvyulqvufsuqoelzgmcanissitjtkwslqmvnxsnimmjsksjnxyimzhzhxqflzxnsrcfh...", : "zxwduaaopxkfouzhmjswlhydamwfnuqkfjcvrimaqhnvjyzhwhtusdlbofuocbvfizzcalerumxwnqnocpaptlvxrsghzsoqqpmqdxqbdqscmqqotmptxvltgacigbhczuusmmbsjqlrwudqzwxgqd...", : "oaodpbtawiiitdpksnnmckgtvtkhtmjuttaafpbeffqpbjymdlhdabitblwxobdiqnoqsrvcxarekqaerjqqanivgdtrnxmxvlznstbovtiwafkgcfksspzoadocjhfsicfotdjzfatsagehpgpime...", : "pnxzyhzenqggqpbtcyvlmljiezgbpbdshsviaqorwcmlbezmrwmzkfjjutfzkoharxvdrrcqadqlectenwmkvxnflfxfxlwxduittusplgxpwfarxdmrejyqxbgecncxreftkimrdjyheemirhrwgj..." 
} Fri Feb 22 11:38:23.422 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 827 { : "xnzfmjkypgxvjexrknuramtufnxiaqmhoywgbwycihehactligrruapgrmsfgsilzbgfyqbcqwyiqsghtrxfvfhgdnpcbqgwxiggdgpojfpjf", : "jnkrzrvsrkwswaqqwhyztwcjmbghgchdggvrbsbgxddvmjsdydtrfxvvyxaouptsilrsakfqitdddxlpmwvuxpslmjjetgamfwmlnbkumjbogsnotsvfvmrgtlzvkwyptevawkgtczboafareremrkafoclt", : "nzhavdhdnwpfjgfgeotkrkrwqqobalwoqvugfyqbvvozgsfnsdqinynmttmovlrfavnzlrcebdwwdnwdhifctghlreioxcewutaxosgrhhynjrzcrcnzusxhqdowpvckkuuyllgmyhhucebebwsnzqvhghsl", : "drlsrnhtgyhvnktkzduxraokkdmmwilznsgwnahbjqjbxbtebwdautzainpqmfimisxjzdnuehxbvxfchmcreharqfbhgcrczsdbduudwnqsdxzvmvhhcedaocansyjhuikdpceermcdcxxrphatky...", : "oxyejxiczpsjkumlzmqecjbhlggnedmbytinhymaarhdycwwndngxxgjmgtzapzhmdzqqmifcomsrqcdcprocbvuqvwittzfdbtaltxvhdymscxpqjgflotakaaymepattbgdxhnlquyhyygukviwy..." } Fri Feb 22 11:38:23.423 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 825 { : "xpaxraaksvkxglczvqrpqhkkszvoyoglrjipahttnbbcpfsjdjuxljpenwxnosemgbrnqmdzkmtpjmpbpjbyphlusdcsgcmrrypihwupbyrukzifkqbjuueppwccjjzohuszqtsymqbflgararnhlr...", : "zlxjkexbpucyabjcbzscqnixxaitbyfjtxgqfwpsjgfuuzglqcmrmzyhopnjpmawciqibqcoovijrpkrktygzccdnwrmaponngtihlgmvxmscevhxtdwezviwzpuxamhmsayroeeuhkivzuqomwxqw...", : "vilrntebnysuyrzdrmpavnsyxoxhdrphqtoheselcpdbefwwzovkushkyrshfgoukepcpzkombzptpgiokmzlsyjeqtsvwuwfpngluuymrlomgpwulurmtovervygmftsuqjkodzpkapgrktrrdvqk...", : "apgmvbgsqnymewrmwsurhozovjgxatadouvrurqkmdsuromaylapuwnkmeenwnpzatbvnsoksexgqarfdbhdgwmbydwyslzbozxhnvh", : "qfcazykvrencvartbohjtnnumfifskjosgudfsfzwoociolfuaiuuoxcdjzvsmsbmigtmlajeiaxfkhmgdtjhafkzpkhtvppmvpvnzycasjekfdgoycagctnpha" } Fri Feb 22 11:38:23.423 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 891 { : 
"xxhfurbadfcbhnhktkuvfajhsmtpdljrfursvsxhzvfxvakhfjzztmlxkcatqnuuakunbkplctzvhgxgthlpzjvtoyuobatmoxcflmfuqfcbmmzohethihmwnrhshotenhfgyhqaimkpfqnwgftjgh...", : "pkzydbmawrjxzniikpvxzbjutymtyxaojpqdkilabcrjyofobkvijrjyhadauxtrjttdonxiilrpyabmnowqbjuxennfgfwg", : "firnznucbegxldfbbdeqwiugyfahkccgsvpmtyvqpizxhtjrkjhxsoiaecgdveteievwusnucwshwbjbftmwgbxtjqshruifmvkgykmeyxhqhgaqrygtsljlbrlizqnrxpjovpxdvctromxoeahaed...", : "onmpionenjzsitvbsnbnqrhoonirzwdwcsqenvkhwwbpbuxatlimfwbuucngcgiovlbnmmcvyrkofewyassmzbqkstdlqqzuyhnmqrfspdfocvetqhpibufxpxrmetqmqrfpvfjcophhofklqeckyy...", : "gxpwibjdlghxhmffcckugxafhailbvvqlonhktafdjsnfzvllcxbqsgbqdhqpqnjbmyhgqnmxbphhgdlnjyljzqbcsgqwskysgkcirezqjgnvlsfmflbdzqzswmgycspbvfxrktfnoxoqnlgmbfdhp..." } Fri Feb 22 11:38:23.423 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 861 { : "yimjmgrllgkvwyqpdhujftcqpkyoakoiriadnaaljbokhqaunjihovmkjkpolxgltzuhbkyfvonimzansnydqnoracndaybrkzeigcspqfsiqejzyryuljiqanqcfdvvwkjqvmcbrumflegtnzldlq...", : "cxwtlaugpjtccchsbvspkkbmyuslcofzuzknuggxwtsmnobxetboxoelmexavjqglzvderdbhzifbfgtovfdpgovttzmvvuhplsoczhuopmkxrhlzfxnxryqdegponiqchxb", : "yiyumbnzogvlqwaddrpirfkmbqndxgaxggbiabcbkrohgdfluobmwkbtypzhgseboumzerbwuwzgrqvttdgzwmclttrkpjhhngevgakuepckanvxdmypggenxeatvipjsmwibllfbgteuzmochuldn...", : "ojzdcvrroetbzzfsivqiuysbtkjlchwobekojrveforivwebnbpyndotzfnvawiykvngytbnodpwsgybdqffwkaoovisgehwuznchyqwtrwezmwuaduyslhjwotcdxlpvjvzxrpsmbbrxozsokv", : "ylfhvwavyiiurtcmbzsxugapsbesqfbrlhlwsgmnxgakvbcdenlqggtzqdxwmolqcjhzigvreekukgzhomzmxgmvwmlsylebqtvvzpxrzdepxlzcnsfhhmsvpmtcgbqbwatzjctqmtnwhvpohquhig..." 
} Fri Feb 22 11:38:23.424 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 843 { : "yzwsxsoturwlrckaofzayvxoasuqoqvnombvhxooarsgjjjdabhtaypjokptxpldgyahguyromzdbvpnvxwlanateqcslzjqjbgacysslyobosgglhqyzwixmofuizndlkeqskijzrfqijajosqnsp...", : "qqbdcnseutvszojmgwnvdbxqsxzabpawlryxeuuklrsfohypfapqyslbppgxfgrrjxtzdosjpbzbqzdxhjjrgdgnizsggdlivebxorhtayezzbsqfyamueqnjudgmbzkwrwhguixbsgsougqlcjrli...", : "theoztthcoximtwhckgqurttscmhdrmpjijvqpbwfciofxsljzjcntyaaoxhnnnohpgsmsjwdhlacjdxzwjkgkfckviovapklmxajuxiakkvqqzuafiouvpxrpmuzswpatnwckpygjpgdeqwxomtwc...", : "blxfbsodqvokfdffrtcmxeifccpvyekwngighalkpbbgnyzjyadahutcnektjpphibifxrsegclwiypyskkfgputheymdlqqaxxbacwivrwqtcxdocvmszoiclukrdqvhdyrfmrhrugvkdimfoggby...", : "awrqzgxmlmmwrcogleuiimvegaeeuzuhobmohlfwyvwhmsostyirpchwggjxfgxjpdlqjdtvgri" } Fri Feb 22 11:38:23.424 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 932 { : "yzxemilbifbxaarlnslltzvdbcbrvkrsoesutfmdmwcncxtrhoierpxocvqowfathqyaoozgerdtjhavihiueotwvtbbczlwkuecwsocopuwnibxkeyfmvrdlcjmayqxequndcfbsdmjtktrf", : "ktgidezsionnqptuwhhgzlsljcsxcukbpwrcwumzkhaennhkeqruglofggnjkrrmephjvlzxtdikrbziecdknyhurmtjesqumdgjfpmdjtvxtlxxdsaxqwwbxpmqbbveqdbkjnxwvghkmtccyrcepw...", : "grwntugolxoxmunhpqiazxvydsjgxblmmeituwyxmyjwlxtfwykxnovwvlcjvbzvbanjjdambpysnsgoyfobbkbyltrprvfaqnkfybkmlqnssoxjjlvwrihpqqygmppvojhdfztqpgqebprckawtqa...", : "uiyapthoetorrlhghfgtpzltdlmmjkczztpwfzdccksgmuqivchjawdngcstworwxsjjgpzhdwhxdzkqtkwcyxlzaobrxlwrzzeqmbshiuswihhhmjqyuvwxzkixixlprhooxiglstzyckbqgfypgv...", : "fkiueyaewfagulghvsbquavtfstqnodygjidpxkebhhtjqryiyayxyrvfdrkndemmfnxzhwbenukdnogrfkpujmjxkavaulziuhmxfxxtpmnnpxodyunmxqrfkohoykoyfjjbtxfzrymfftqszevjy..." 
} Fri Feb 22 11:38:23.424 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 855 { : "zciaibtohmdrzellbkspfcgtreejyktenrkhgtggbnkdxavnssgcywzqsskbeumtnvdmnjqbqaokbacvpybdidtcsqavdkgdshwakxrwgqmyuywihkgzmcvriremkutip", : "mzrvexbzpyvbvchinptnoddipotvsoohnglcbpasygdzwlvyeswolmmhpilzcguebjacknwidndxwzbkctzvazpacuadudjhvqgtfoijjcokvlzwvvlmzcncpoiqwzefcbfjusvjmakqwvklkvacam...", : "hmhxsbzkctonlwkgxewjhkvdbprlxbjgjkomwatrxitwukuouoxnqizepeelmxpnsappnwbmuzhzqmxesngfbtsqtujpvexzksejevnbdckkqqsdeingfidfxgzqjlitiqtisbwpguedznklpvzviv...", : "sheyfebrhheagnygfongjnemlnptzscccasaazvsxzsswuntcuoevpmmisrrbdkyowcwrrdmrmxwnsrnxuyizbuxnrvempmivrsreurlqdxuzfjbxzjwcyw", : "cxfcxwvbsnsofrkpyjtrsxshkcnydxxkiytcnklhardrzislzzgusdjtvaurqhmknrgydllsusxbhoiuaxfiozfhtxnbshkpccihdwmmaistpvfdogcmnfnhsavecryconmumimgylcvyhvhekkihj..." } Fri Feb 22 11:38:23.424 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 888 { : "zeyykudplnaukftljpgqqihkgzwvbndeliekgudufeqzdhtlhdgbfluppdkobkijrtcqguftbvmjtxtomuucijuhbauvzxjzdqhjztmygbjowhlcugrdpfjksznobukixbsuzwxvpeluhujjnwbdqh...", : "lghfwpgpsijvriwfintfctcwvhpuhaupjiodrsdlysmcfftozqtznomqjkomftyykvapvdjqzyihnmxhhdocdiacjukumprevyyjerljhubrqznooeyfrplaxphjlpxjesnxjqjznxtgfwiplrqnrf...", : "haqltwkjwrpolpbjuiocwlmxosakhudtsgwstcchtvxmofhbeyddsmwnkdzvfphjiffdeqhybebovswhidyvnlunvjdiktlhjlgzdxhgckwphxcairskyoinlswcxmhssuuirwymqnhgukynqijvmp...", : "axcxcyiyloaifdhhybyrxgtupvvfjcvscucuurssgditbcjtodnbkjxzixmwxylhbzxhqfzfdphwgzyyfdzatpzgnixgsdgazdzjajuajwbnprrib", : "vazqimyoneazzacdrzwscihalnpsorjofmxxobohrlojwxtecytliskkfohlwkmeguwltvjlnurzithogeuodgfmykkvgixgvfscfblfbwjffdgwzmstnlkabscssrcnrcblsoywckcdrzfwzpujyi..." 
} Fri Feb 22 11:38:23.424 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 954 { : "znxwzvfwxcgndaysuwpyyfzahczakpuvorzjxhgbkdbdcfiuexdpeerdlxlnnnufmlxovymwauvolbxljtamgqretwdnrgoqivrwlbncioyjhebosewgknywdvsmbpmngmqeobwomrgtvoddftrabx...", : "mdakgcjslzgzanqpxwjfwboqruurdgrrytrcgsgknyqigvudvxqhyeivyfiobzxnzlvfacrderxdxittzfgldhoyyuowzfeziwpocufyxywmtolommqydpzbctfoisuvmrrshstbcmcgvlsaggwqyv...", : "yranjpsbdmeybtyxrprmpezqpveikhbpahaqffmqquwxzbzsysrvtdzxuhdolhnvjfhfjukkgntuqjrxhafpryronzfvfhjlojntppssldqcqygduewxjlbjdporznvaeffzgbyssqznzacymnxeln...", : "gtwdsldknonryhzvctuwjpmpvavnubkojineqejuzowzdtvigmdamkreoegswyvjptrzflttqphndlveutfniukpuwmxhpddnkponsrdzlsktputvkhqimusgtybwyjxxnmlivyutpbprsanwbqerk...", : "fsfyuozrmqkhflinirskrzkpdgoixbnouqcuvyqmwyhqiqskldklovzdzrekcbymelwvhkpuizbvfibuzesomfmeehqpexoasjezkyfdgeqiwbsfcdkndxbxjaktfkcwkubyjfolobcjjzvnmloygi..." } Fri Feb 22 11:38:23.424 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 836 { : "znytpypsjivrlfgnlpuazedohqpjwigskaasywptqaluashlgpemtjitpvzejpnurldydhbtsbqmvpzillcfbtrwmmpvmyffilkfuuisjhptrcnuxwkaqvbrqduuryovmykkux", : "hcxuqphfzbbkxkdgqhrirmdsbbhdnfztsbkihwdfpfnwwovduognfwwwzpderheoudtryyiqaicabnblselccvhffmnlapnjyzdvxlorebzpswuyqqeghsmuatvlvuwneybfpjgkqvruclthjeqrszhsecrcnv", : "dbbntptrxlsagsiqgiqmmmklbdbesmwhjwdtvlzadjpidaswlltoirxnuxusgjhilhqihwpfeszbyqevljudhqqxydfzzsthjuvelskdzkeevhiqwfxxxqbvkyzbsjzgghqqbvjllwexhsehhvwufd...", : "sqxfubfjdixwcchcxipsnegcjhygfcclevtytzoumvenazfqkjchhppnslolvxigewtxmvttarfmmavcytnlkxrembknwkciowebintusyjwhyscmjcfcnhlgmjdshdqknhnl", : "bdxhezhowyjpetkusldolfnjwvgvnptywuitdjyozreffohbwzkptxhhfiwkjxejwkzwwuurgzznuimqipwjloutsdlfjejakkhrdeamojbgmypvfpmcvtvknensgmvpixypurtrizdqbnbvexsnhh..." 
} Fri Feb 22 11:38:23.425 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1 866 { : "zvtdrclxlevkwxxeudoasaujhccbbsgqksiyiaovyclwqcyfpeaetergupijarvbeufeyabayranxizchjkftexynuzrtayfzbpljngfduenxwczfilttvazdsnuevgorsrlasupepiffrsqsupkpq...", : "wgfmwlvsongirpkvtsjmqtioowsnxrarzcttwojpduijiwfmoofyqzsdjmdjaahwgndedgtlhpddgijhtyilozijrrhxdpqgjxcaztnowjhwrsxxzxuugkryqankoysgvdzswicahvvuamyqmycz", : "xdfdlnfvnzzshnsnywkucfjzzlhhwgbqxcbsfosspabyvyqxlfwhpkaillkdvrberzxqyjmdhhtcpydcpmelcwifxwamgjhfjbxyobcjnfymtrzowauqtnayfssnbqbtyrqjjhitdfbdtubutbpyzr...", : "urdvnjhrjxeepgmdfbcvraviuleznhuvzzuafasfxssvumchwhzbxgykgbggcrykzkhwjsnqetyebfqrjhovgjucxbtsnvmzbhkiwzxkbcedeqgquxkxgcdlyphrtuocevoyxvfgbxjlknevdustvr...", : "zhklvgijhotilljojxpkpwkmrqbjsuujevlmcxaavoxndoimsxqluexlhehjghcshgprhokhovbvsuhrynciglfpusegcpycmmgwgcjduggawhvheilkmzfbvjptbvjmt" } Fri Feb 22 11:38:23.427 [conn4] warning: not all entries were added to the index, probably some keys were too large Fri Feb 22 11:38:23.431 [conn4] build index done. scanned 10000 total records. 
0.078 secs
Fri Feb 22 11:38:23.431 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:23.431 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:23.431 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
194
Fri Feb 22 11:38:23.690 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:23.690 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:23.690 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
319
Fri Feb 22 11:38:23.868 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:23.868 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:23.868 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
606
Fri Feb 22 11:38:24.278 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:24.278 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:24.278 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:24.604 [conn4] test.test_index_check10 ERROR: key too large len:897 max:819 897 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:24.812 [conn4] test.test_index_check10 ERROR: key too large len:856 max:819 856 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:24.821 [conn4] test.test_index_check10 ERROR: key too large len:888 max:819 888 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:25.346 [conn4] test.test_index_check10 ERROR: key too large len:850 max:819 850 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:25.569 [conn4] test.test_index_check10 ERROR: key too large len:838 max:819 838 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:25.609 [conn4] test.test_index_check10 ERROR: key too large len:828 max:819 828 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
2880
Fri Feb 22 11:38:27.987 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:27.987 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:27.987 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:29.174 [conn4] test.test_index_check10 ERROR: key too large len:851 max:819 851 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:29.727 [conn4] test.test_index_check10 ERROR: key too large len:825 max:819 825 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
4350
Fri Feb 22 11:38:30.920 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:30.920 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:30.921 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:31.870 [conn4] test.test_index_check10 ERROR: key too large len:821 max:819 821 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
6892
Fri Feb 22 11:38:34.729 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:34.729 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:34.730 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
7374
Fri Feb 22 11:38:35.426 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:35.426 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:35.426 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:35.587 [conn4] test.test_index_check10 ERROR: key too large len:857 max:819 857 test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
7495
Fri Feb 22 11:38:35.684 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:35.684 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:35.684 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
Fri Feb 22 11:38:39.747 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:39.747 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:39.747 [conn4] validating index 1: test.test_index_check10.$a_1_b_1_c_-1_d_-1_e_1
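[annotation] The `max:819` figure in the ERROR lines above is this mongod's per-entry B-tree key cap; in 2.4-era source it appears to be derived as BucketSize/10 = 8192/10 = 819 bytes (that derivation is an inference from the code of that era, not stated in the log). Keys over the cap are not rejected, they are simply skipped at index-build time and later flagged by `validate`. A minimal sketch of the size check being exercised, with approximate sizes (string bytes only, ignoring BSON type and field overhead, so totals will not match the log's `len:` values exactly):

```python
# Assumed derivation of the 819-byte cap seen in the log (BucketSize/10).
KEY_MAX = 8192 // 10  # 819

def key_fits(*components: str) -> bool:
    """Return True if the concatenated key components fit under KEY_MAX."""
    total = sum(len(c.encode("utf-8")) for c in components)
    return total <= KEY_MAX

# A five-field compound key built from ~170-byte strings overshoots the cap,
# which is roughly what triggers "key too large to index, skipping" above.
long_fields = ["x" * 170] * 5   # 850 bytes of string data
short_fields = ["x" * 100] * 5  # 500 bytes of string data
print(key_fits(*long_fields))   # False: 850 > 819
print(key_fits(*short_fields))  # True
```

This skip-silently behavior is why the test alternates index builds with `validate` passes: the build reports success ("scanned 10000 total records") even though some entries were never indexed.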
Fri Feb 22 11:38:39.788 [conn4] CMD: drop test.test_index_check10
Fri Feb 22 11:38:39.794 [conn4] build index test.test_index_check10 { _id: 1 }
Fri Feb 22 11:38:39.795 [conn4] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:38:58.735 [conn4] build index test.test_index_check10 { a: -1.0, b: 1.0, c: 1.0, d: -1.0 }
Fri Feb 22 11:38:58.829 [conn4] build index done. scanned 10000 total records. 0.094 secs
Fri Feb 22 11:38:58.830 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:38:58.830 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:38:58.830 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
1078
Fri Feb 22 11:39:00.120 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:00.120 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:00.121 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
1082
Fri Feb 22 11:39:00.204 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:00.204 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:00.204 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
2936
Fri Feb 22 11:39:03.675 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:03.675 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:03.675 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
3154
Fri Feb 22 11:39:04.119 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:04.119 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:04.119 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
7429
Fri Feb 22 11:39:09.607 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:09.607 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:09.608 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
8172
Fri Feb 22 11:39:10.889 [conn4] CMD: validate
test.test_index_check10
Fri Feb 22 11:39:10.889 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:10.889 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
9299
Fri Feb 22 11:39:13.002 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:13.002 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:13.003 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
9852
Fri Feb 22 11:39:14.070 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:14.070 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:14.070 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
Fri Feb 22 11:39:14.359 [conn4] CMD: validate test.test_index_check10
Fri Feb 22 11:39:14.359 [conn4] validating index 0: test.test_index_check10.$_id_
Fri Feb 22 11:39:14.359 [conn4] validating index 1: test.test_index_check10.$a_-1_b_1_c_1_d_-1
Fri Feb 22 11:39:14.383 [conn4] CMD: drop test.test_index_check10
Fri Feb 22 11:39:14.388 [conn4] build index test.test_index_check10 { _id: 1 }
Fri Feb 22 11:39:14.389 [conn4] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:39:31.154 [conn4] build index test.test_index_check10 { a: 1.0, b: -1.0, c: -1.0 }
Fri Feb 22 11:39:31.216 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 901 { : "adwnjzexuokslpljxmzrpwafedwykzhthjuamksxmmfwegtoibxcppxoustxwppvztbdbpwdbmibgvyxlsfzbkuurvzvlylgiafrczgvqccyzjwkdlyrltnmbszllnlwskvnmxawogajpxbvnsesbz...", : "sfcqqpqaqeailsslunhnevlqdchprxkwxhtanjehhczviwdlvhwfsdmjfejtdhxrgqpatwgnllchdmymsfjwotmetkevquovmatvoretckvscywukondvpeflyirynveuhmlybbirfwamgewhwgksh...", : "ujqqgggivnziifitypsxphiffqdcmkohljkwcfhypymxxzieirwkmdclgmzivvjdhssuzhssmzsgeidweyklwdlqsuipzogzgupwqalwdrgjwbusqqlmmfbvxtimfnoyppdqpubkokvorzudbmbfrj..." 
} Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 831 { : "aebvkvswnlkvqcpycmrhnsncknnzltiemlozuqakllnxgoovjunexucvsfvtiyuwxmqlldlcqqwfjuhyddtdyviadvbtjxxyyeqoportgomdofzkvktmhxsedmwtdhgtujclkvckxsmxirquipffru...", : "wqslfpmtqmtkriqgiukiojtabhwdzsvtwuzwzpxcxtdrtpntybnumoyfjlsgmpgqprxspwjjozeygevlyxigqnncrpduholvnirtfszooilleojspmrenqxcbrtchnbmnbpzulkkgmsnhdqohljbod...", : "uqtwpfupwaijhzminqvckrnnuqbjakeduoqflgwfzkpzirbltlqedlehfcosfvggsjjkubmxjofcskutcnjalmechbcinvuqhxdjzeccvwggxsxplptjpafwnktbrfrpmycubopiosunzmbjvhmslq..." } Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 887 { : "ahwknpqitmlcksapyawtxggmistblxhbcqplejwnmzctvwlysnetkamvmwblwehuwtdbkytpegonxodztkezeyperqqzarrjgvdxeflkkpksdhwozdxgywuewcwtrowgcmdxadiqpzvynrsofqfpww...", : "xflvpteulcarrcykjmdibiiyuvmpathahifdbymaondzlirvoilxxkntgoqvatvjhrawscbylyjijejxhdataowhcgpoeciylzptxqgmxjzljcjkdunfnbkvqmznqrhgnhmqnrufsxdxugjowkknex...", : "sevkzgohjbxgszefxqkiqlzscfynkrwdbjgiopuvperrzaqigeoapggrejuhsuaifxsqhsrjiaixddmzjbgmujcmnojjrbhroysnqcllebwdsqixzbolqpdxnlttgobpbqzfvvdajymgciljhwwlji..." } Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 870 { : "antmpurgchdrumxqhlgjfaugwbbshhxxmaanbnkpdjjywonljcwbwdvfggzchvsefoywyjhdujcvgblgbwmagyuviuzcpyjphjtzfyfpjyxozkxbrpxdkjyzejwxvvywguvbukvbrvqfeovcadcagx...", : "ayfxafkvktweohuwzvaglrpxbqrkrhziyeuhrpnviboztdrbxmtrpfymmwzvojcjnqsktrbqvserbawmycdjcijcpvtebgvzvnxodinhzbvgglyczxgolsflumsrztdglenwtcdclpvjdkffhhgixt...", : "wfyfhutphbsxooeaqrmttyajyevpbfptmljreliqvffmufbtqpdnvahdldoxnryxbvmzpnkahcmnaoitbpcguhzhvvrruwawksokbludgyqthfldzngsdlfjkmnnkeijvlcdygytsccqroukxhrohv..." 
} Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 848 { : "aovhxursywebkrioxuqkfiuafctzujnvdwohbfomakakbijdxdurhrjzgbjccptvjkmwwvohwaxkclhavowrkkuudvncdhnjgoyilqjlipmfjkleqiogwesurtrambsagpzwfyrndaaubcfvdtdtgb...", : "dxbvnocpsejqodlwenlxlseqrrnmrnkmzbnfxumixrjxupeilgtazxwnyyzksfiowfpytkvdvecvpcpjcjaaabvdksriiopaefvllovmzoebypzoplylztpvicxiynwurlwhgiypjclhrcgexndzyr...", : "qhkeavtyhfjsgwdptsqqzkqmyrlwdgyopldtfcsfytoymjsjsaojnwjomaaortpsjazpsngwidtfwznviioghbforjltweyuvrqbkddvgzhredybhxnetjyrunwqpwwslkgljufohgapkakvyltzqd..." } Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 896 { : "apigsbnmwntxhfolwspxtxirqgrqjpdnqhnmrnpzmepogstsmmviebzhpxrxnzeywwismfckagnhtimqualtjbacegknywgkkrlhzxkpftrqadychnvtufewjpetznkjrxvbgmqfkhotnmdyczrlzc...", : "orepjuvoaezcggvwmkruhvqhzscdjcbkpmqcrlogdcsaspyzowafpmzcvalsqobsydfinjmombqiqptgmatialuunecnkkxnqrckwjrqaxgtqeuuqdrfrlsfutvwyybgkotmycgdgrserbnhetdewc...", : "qarxtcfesyuhnwemeljfrjvdianmzsareqzozblainpcpdhwyppxurtkhvxyrtipsnddgyeppjoafkhqfmwnduikqhbhlbrrahrzbdddjqsqeoqfsolrvupknxcueplrnhmyzhuxbfianqsypaviyw..." } Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 894 { : "atrbrhlnfxjpknyorwnqyewkurocapnhlvtczuwgkkkbdvjtjzbydswhctiicgizfeaezwjtdupredtpsrgauknsraysyynkcfddwkfxwgutuythlqufxastljqodkxizusymawbagomwiijnpbxea...", : "ozskvmsswxgrjnevibxidoyqhilrdwupumfukggjdvkxzcdklbldvwpcsjdukhhebdmxhgdiuyretvjjfvvhbbavocqsnpnxnjupuhflrdiitlpwfockilncvronhogufcafrfkjvxksltsyyabmsm...", : "jfqzmfrghpdygtklesubrdqtxkfmhzvbodyipfiqvhfgiulrwiqrfztyjwrehlwefhclclijmxglttjwmgezcjlpuwsunltrbliwnqnqnfvarsfqvdfitrsjftbfkiffyfxrevbtpzqewflrwmnpgr..." 
} Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 823 { : "avflwfxkmtoqixgxennmhtorhanvkszreauyiyahnelsuvkmraetpkqpjutpubseuemkfrgigrtcifcoukxotldutbrqrjpwckwfevrkpcltjmuacxahncnzjgonamyabdxvapgakozwjytjxddlwn...", : "dgxegjlpkdpvurpjsfjdyytofmercimsmaswyyjpyveomrqybkuvmgqjipdkzpfajoqdxwyhsmsaeohbmkrtvepfbpwfgewqmdhyvirovzlixnqeklvsisytvxhoaoweaalcmftqbhxiulkxuwcmcn...", : "trccgkvvxyblegvcyizvwzqmckvagxaiyezvubpevtrhsirutaorywjrotwffdyyqivfmxajokkeayldwnpxoeeolmxjzrbsunfhhslxithcetutnwidoxusrlahavuzxizbbxymyrpucwcpldfrww..." } Fri Feb 22 11:39:31.217 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 956 { : "awscflxwcsjtpkuumtydanxjkcsmasetkmjncghrnrqnitohypfxwncutkjfmjkcjjjcmnghcvzkzofvgpruxscapwtjvrdxzkjloqogtxkpcydncfjgpvsjyktrhqnlnpvzfqesrytestdyedfucf...", : "ittltrlnfrkxbedshplrdmpextzbzuoqroslhkonvigkfpecizxvrlpwscspdhxlynspmharikbdjwthugxrjuvupwdznilwnegnmfyygsrpzpjlgmljirtbwbgpzjmsfcxsopfqriebhaectvtswx...", : "wpgusfddnglldimbykebtcsrwgotqhfgputcpfbcrpnkyfsdbuacijrmhnitlrbktsxfljtaqhuijwxznoavmcyrkkkxlxlsdhrvfefevcfezdznkfsgdcwsnstbrpbwyliluteciyxwjiicefyket..." } Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 845 { : "beynibkszsfudskuubwteaavbkopibusamxrzbidvksdwmboqvoygxuyylzmiwlifhudfizhbpxsdkenqynsdsbtjbqqanqeimzvtcbhqeirgqwtqcdfnbheyiovecwujlyyeoswvppkjdxbxkxpwr...", : "ylmqqnlgzsarikkobebtrydxhpnwzlptbxsnkxnnhxyjnqlazvnsllduixfbizhafrvlemxurfmpbarvkhyxsycjcfuplalpjwfksjzkbjchjmmqwmqzicajknapnnpbohzrdslohoxslelzvamxuo...", : "sbktplmowwifnwfuhpxyohlstwgbvolfmzbvgnsekwchktpjnlyfgwtiuohcluqwjbkcyrpycicwfgcuqgaxieqopnkncitgmeyuxwqyovoclfsqmaucnyjgbvkxjkdbljnugowpxcybqedtkterjk..." 
} Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 832 { : "bgfcdtgftfjhohlpgwcsyelsaddgxsbblbqfunkirikmbemowtkdsvffwpvpsrxgzrfvwlxullhhdjpxduxyjpfdrcqvcwkatpixgpriedohpowixmbbgkgvttqavexeignhsulyjbaxbzzvkjoepu...", : "btxwnczxithvqqdnpypaiekjzapxxoabgsenelvfcrjxenabyjzdeamdhwllctlrfiffeegfgljqefzkvxpoxczsxntabbbvbimootwbyoqpatzqbmmqzuvrbexkdyojgyooenwtnzprxekltqsxrn...", : "pceqtdxabjcsxbeconhfiftormlwjrheiwievyfjmfzqcirpxnoiqehpkzsubvvrjjxoeoqmwopowmcfyolpceweizzngdzlpmufzmzpoivrjyougmltlgrkbuhaeeuvjxefqjuumgwsvgzbbtlzps..." } Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 850 { : "bjwxphcuzivnmnwahjzlrfytyrgtnqapavdkspnwqnycykcdrgibjhcinzzvxaswnkbmqfpqlnvlpsjuwlnkfkdncuyqqpumonbbzqfpjpdblrjzkxkgfkfnsqvfmmnrjthzbxrqnklkygcdcypmxu...", : "vegbybzmstulotnvcwqvqvrxkeckvnacihvsygthtibjsrwjndkrobsmevmfkwdfnqwxwzmpaeiqvbrcdclbqgaildurhrywujatgtolebcxnfbynxzvgnfamimvlmkyvnaxkfxyaryicxswmndnni...", : "segvisuqalnmfgumgvneuefcekjykaafbwixortwpkisqoltpcbsoanvpmnjmmhnltxabjodpfmzwaaviationnszkumxhfnuidstpqozhbvuumdgtbdhzofpvwuqqhmskusubiykjmvlzwuutwinm..." } Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 839 { : "bmtanncephazvbyquyntnsswpjaduemlkhzcklwgkyapnieovitqmonibwbvmwtbwktqtsgvblmjfkozlygymmathmweclfudrnzizdpwyuhenqqhzwgacetbdqvdsyuumltizrybeutdbfbzedjag...", : "fblrvecuxkfssvajjwfvferjpcpaacokneztigkkfidhguredrluumdojukmbdewkaahymetayuxfmlhegxcuuirgegnpvddvzyvtesfyfhfaeeofzggsdmuzrufcuqhjkksuearxfypctmgngdmde...", : "gssvgiwmokvsjtbcdbmgxgjdckxduyqktsrbpsdzaqbjvlwrvvvmdvfaksgeudameqqrltvwdnbvgxxojuvwpwctzqqlthtdrvaynhddhqhlqpehblqsbetbamtwdjygyzthscikmwruauvybrylok..." 
} Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 846 { : "bnekjsnnyynvgfggjbnmfksmrxgiylpciuxycnmblccijveuydhtnwkeymsnhftymnqgxvhkuuxmbciluccwxgobzovjyftfuqletlpzdvkumtadsklppvaiabutxyqapefmaaipuetokdygpvqsio...", : "lgnmwoxkqiohucmlkxrtqacydllbhxdseylkvxlnocgqeqroljvhmcefufferccfhszephegzoindcnuzliienalnxaqpycoujfuzlskqqhaarwjbxwtktnzqxiywtrzwogynnbtkcpxbzamaiybrp...", : "bihwwalbcnzzkuouzficmjzfgswskeesxfhdyokwftokifzvowjxheeoydgfybzxwppokmypidjepjqampeolbaynvqtnhbfarbhbapcmurcgfrpmfcfzofwccfpzugcpynnfdenucpzxpqiivhcnb..." } Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 921 { : "bolzusqbzrwjohawzbhsftmbzztsvpdthcfmdrxijrjjsztgnfowkmopfkgwfufpkjwlqozdvpxtgcmrzsloskxtdhecfomsalndepqrwdnrjlhgousrmrnfiepqagtzaukanyfxiqjnevtrufdhxl...", : "awjvbyphqhfhvhumeobonrajwbvkqjkmtdkpoousyekwywrahbahmdguqdktkbwhadcjzgxarekkoxzofhnsvtpwcdoiwacwsllkanzlohvzmitybkokfkvrqgmugdigdxnkewnymxjviylrgdhybc...", : "pepmuvqusgfkprwsborwfmoznhynsvngqtsyiphtcixepubzzmudilrztidchhiwhhyuzfgmvuplewdcnigzcwprgxsnezlexdnlrdtrxfrshoxohskmmhndymtkwdcslfzuibwacvwubqaxnyzhml..." } Fri Feb 22 11:39:31.218 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 904 { : "bsghntjmzjuqxyqbqkkhklvitaswrnusttrbmecglnkupikiupviwqdgiwgmvlbsugohcqcfcssykcodxnqlgiqyeznzvrmcrmlpmthxdxgdgmlhnnfriaygaxnkatolayopdxamuilivjudhdudly...", : "zsdwbjiuynrmhkbpagepbqxaruqtikjzjpjymrxyxeuwevzuzdwpawdkemgtmslgletugikosmhutvlrcfxsmffbhnowehzlphwwwdrkbnrxtuboclqmblmnnriguzraybsueuoviqzzdzofnfunqf...", : "ztthvvggzirlkorihjicgeudipcnaawnlkeicwlzkydptqsaeoefxquuzrkwccxbydbbgbopxpjwbsgysqmrtyopaegtdppnzavsaqqvaebsedplkcayjjihmtfieitqzvpvttzikdqyigfqswhstc..." 
} Fri Feb 22 11:39:31.219 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 822 { : "bwkppolakmwtotkxpyhclpwrdprhblxfippiyoaiqbggvnceiflcqcyiuljipgwvtbljfxzakepwzupohebyaresijkjufoekeylhxpibprtnmfqvdjbouuowydvqatwwonqbdalmspoawgxwbxjdc...", : "srbcnxzlyoceczfiradoorfstxzxebxjzubsaafawmvsfmceivuykgeqctlrzetblbkodnygcczdtwqiujxtpojywwqwqyvcqmdnxtsawdstwewarvmmywzwuqdtluikgkcqaziumoxvgdbwsqhmyg...", : "myvvqsomuemzoizkqvsxxkzoiykqimcdxwlcucpubiquioqqjyxxwquzecfuzdxxrelujgixgdxkfbqdorhxrmqxnipitlqqemzyrxizjionnlxrlmsceenlpqxsyefmkdrxewdcyhakmngztoldjh..." } Fri Feb 22 11:39:31.219 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 842 { : "chirkqdzzmkamftixjnbslmperuxbfeexfvajjjawpyndldxjgbeqpqnsppopbrjuhewdognnxgqztrkcuqjikvhysqzumfuajqjxdwbnwxjynbbrxuddleqiygzfdajqgciinimfxcjkqejpistyy...", : "eznfozjbwdrfdmyfdxulbsvmfcyfwleieircupgizzesnizwpcfbcsymthdfiouiblmekesggcupdqsbbbxckizhmqqzxcsedwqdomhijmbpymyikzzektpeagvsibhrtlrozaundlilktrpmgjxdw...", : "apjkfealjyiwluvxzzeamqamchqzxvzzoholkgiwqdxitgtwgudserywmsvlbgyhjnvmatesfpvnhuhxfjaaotzinqpiiluzntbxfvijsrlwdjdjurnbnhoxdlqlaaektlrqodslmptkwhbtrqwufe..." } Fri Feb 22 11:39:31.220 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 843 { : "ckbuhvstlnnpmhgqqlztfrtdprdcdvsacyvvdxaoabrnbjehyiqswlhlkajodateflojvucqgibwqtoirqwkrjmqondhfqsyqetjjywavjhjpfgvsxgdzcztvdosoatavfzimirspcbhwvpabrgrus...", : "wphuhbwuwoikcerfnunqlxhmiqudsirupbsgmmiqvwnwvvnbyfbrowynwdrqoavfhexwuefqynbafhwogiqyosgkguuobnhxdyloboyitrmyiozxxjmjkvbovkwowjejarkhjavakxvtjoavfulagn...", : "arolwkhguggiklxnuhoxmfolghueuoergctfrbtlvlyujlzwyljhtrqmdytdceapjzanupainrdkqwisxkpdlpkrjptfdbersofowbxpqbxedzvzqxcxohiuxgkdpdpkzxjegysbfmiqnlrqklwtui..." 
} Fri Feb 22 11:39:31.220 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 947 { : "cnhnydzsmonentbkpbfsjfvatrceienkncycgtvjromkhqeeodcaftojuyrlkuncdkafdavpgyeeeldnjacsudgtktitbnxttokrwengaoohuzrzmcshtruirpxyuqixqbeonlcvetlqnchxudmhpn...", : "qgjuoldpuydopeykhitenfbgnlfqlbyebjmpkfxjjykxxwmjipiqpzmdqmnxmuybgiuhflylkgnmfmkqtibosfnchdncmskgfcvpudiimyldxlyaolblmmjcpwcjelgqzocxrolnlxzgamvmzthliu...", : "gkblstlznnjsxwvdvdamcztczlxyjaupsrtnxlxkchdffirvlbwsrjwntytmnulvfztrhevdivltivochnyjenltwsftxuajkepcydhmosxyjeambnugnqffczkjwfekzjudpksdcsclcyxxlisben..." }
[ ~64 further identical "Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1" warnings, key sizes 820-964 bytes, randomly generated string keys, logged Fri Feb 22 11:39:31.220 through 11:39:31.229, elided ]
Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 871 { : "hznibzzvxndvpzwtrplybkvelsnepwcnivzqkbnscsscbikhcblratiakwaspdpupczhbbaunxosoochvbwuxzkpotudhiqlkexguyruaoyksnlbzutnaqtmbhiyklserdlzlwqhawqjhjyuhgnptn...", : "lymiyujlmwpckjybugulgucegtgriphncpczpjmxosjuwrsunrbnqaatvjgspmvsoukpekogmplatyutpobpfhqgmtcopcqhyxvpolxsqifhvyrfowpddnjruvbfvgvuhmsghvbanurdgjdueskjfj...", : "ieunnsbnvohpngsopalwvoygksvizdzueaijhefblivwatzlfodxfjdfywyptyjblvapbdwikxquqydgbgoipxlupggddooyntoxpdrcsmubtvdbpezorkxizkwyudhqastkjcyhsdlviukgoerqtl...
} Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 836 { : "iaqryqitslgdnxnzqbzjhcruyhidzguztrdpyawlrknbhvjobunsvhesyssziyskwssgyiwksakvdnrtgofjebbopyhsofirttgcioppsqhnfzbnpgrvvgcaqzmzllhdexzquklnfzxhychkfyxrqm...", : "aaonncrvhuzwesvbanlxhzhjydgmweaeqymydyqvbuqaylaqzywuhmjemvighkpyysvgvewmklisasavgmggqbxggpofkggofrxbdevrodndumjlulaujxecvjdxoyvzgchtpcrljhwhtjendgutnw...", : "pdkzufzwaywfrzaksszqwmiipgvolpgwxrkobsyykjyekdzmcnxjmqvlcuotwmaadtzsoxhdgfmnetrnvkcgqplzfbbovkshohsidijdbrygbxhrlnyegvyihxcpywpcasrkxhodyhlngoarekfgnq..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 838 { : "ibsmpdhvoedrkildtazkhgnzosguywfssqkipgyrejwwtolrjsltdxrjrjwsbgdtuwyqctqhvvtehnicjsrgmwaqjverinizarrbmwodjzcicvkvurmnysyaoqiurewaziucqpfztmwawrnutlnbsv...", : "bjzawhhwmrtkfycctuprenqebbyqrimdjkjhskbjygtoxckgoeagskizaphzrivszbirmyfedujewietcogvmrpsydejmukcdyopdzfptbizhitiytuxqhmjigocwuqhwaeuwnnmagttmrmjowmdkg...", : "htuqqymrozbmadyoozzespfdydyaytzgsmsmmnzdscwgubyzsiyihadxlartdcbcsnbnjyzuwndyiobxwtylpxuvotymicxxfjugqtoxxboluwyhpcbklybhdbzwpslejhcwlwlxxhvbcptepeipgf..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 830 { : "icxjhewvcdaxebhsbtabtwnwquhrguvpchswbszbhgktrivuhkywmfayxornoxewcesdepnedlchfyrrxrksoqypsdufgwebkcgjrqobwkkegafvykpnpfxirgjrymfpykhhwddwmwitqbysglfark...", : "whxjblvcgkojyxwnlsifnrfudxcfgvqzuqpnysteugwbhnaazuxxghcjkkvblcrpwdholzsdgclyqwxlswbfhcndaqexjfxxgfbyiiakdypklmkwhjcbnwvpieezcxuozikxpfzpuvfjnkqpesxyhw...", : "fduwydblzhlzbrbizqxcnbrbrkhslriivbvtzskuusesjwibdgblpoobpoqdhlfsohoubevapxezcxyswclyjsgwiqwiyqeivncvufvfetotbacwaaxnqbqjlcjtvzlcuscsbgrxzlxbihcrbdsbgz..." 
} Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 830 { : "iecqbmnshuausvzxndmknhgyxfbhvebneoidlstreobsfvmfvwdpwpelmrnyguhhluvxvckvdmdguhyelqknfentietnyvhwkwxznyawxsdwywihcijzaymyedytczfsgpnvgqmgazonojyzmmwyfs...", : "gcpiqhelitpyahwagyzkryipblvvrxynfnlrngdxizqvimjkwnjuhvhgsqqivvsajqyirtjkjhyjvqiqjnpjkncbgaosghkitmespsaqocqomylwrmnldemiewzkaerrgyuacokqwqibpbknqtraam...", : "rotlxhlzrksemlyimjglqrrzsajgikuijpahhloizwxafkgaahthndrproqijenqdqbxznzdddhejbuhxxuqofjglfnpbvuuiefzvustrrgnjnbnqjsgmnpdgqaphlkmqcvmpkjgmiieoxfwckxfav..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 915 { : "igvyvwdnegpzmypcgxssovpeesiialjnldsskeebhqcwhweszfaelobgtfesftzuqrgfkdtodtqdhvlccayohjnptmortcccosyplutmzopuaqrhyaqgfwgrquexdltylworeevcnswgajwfrrogmv...", : "vlzjrfsrmqvhparmsgmvhuvrbketwlhlhdqigkjzfizswjvmefznszirthwrrlgrtpkynxbmvhglvsyihqwwsurziipxrkgxplmdvjkfggwdssxugaacflytbqtxczfixajjprwibphzvenovotxfn...", : "evnicfwklidslyocqunbkauyohtqvouigdkrfyidrywbeyiupkmqclbizbstbdqxswxhiabutxkweyfnnmytaxrowmmsxbqndomupbmhaaiurtgahvlugcadiflmewhzsjykfaxplvtzaniekdzqde..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 906 { : "igzlvqznegyzermhxxpindqufdhpcrmvxorhuvigewsgayppmnmptqntprwhfivlvueemoghhckxfkwktfuztfgckgumtbveubdnydpidhyaoveiaflyycpdprnoldruxqmrbnvivtkmxtdhftjqty...", : "ujxcrbqxyrkqlnzvropuxjrekvzyfndzajhqnwjcmbjuawyumagngqxpujzgsalvrdfgurrzeuwomzydbbkcrmhfxxhmvvsuwhezbbgkjzuukczauvcassybxnyntauseptsucmowsgikduvmrqpvu...", : "nfumrvtbrbureijilabcamhtfcvbgjysylomnpjuitcrdnhqnfhyvupyertypfesujfdedvdmszuoaisgvbhhqvtjdowtfogjqgqfmgdicydtvkxkisxfpotazcbtiesclwoqvvscrgfnxsnvsicwl..." 
} Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 982 { : "ihbrxgxomzlymcxurqacuaennszmgtjflgmatxaxzrbxcmsyfsjgrahkqwipnswqlznetjumfvckjfjqwdzroqrkawnpguvzuglxwztyzscwyeefnnmjpzyjinyyuybgssbpnlofvqfrxmigvmyeil...", : "myhnnbipvmxmvakivkpjobhndmulfsqxtodqffjeieudiuehgfcwjawprasbmxpaffxhovayshoueihyybbiwaysxjievzolhaeuwlmkrhcqvyhcnrqlkudqgbrugxzsafrgnbvplwqpulnygvdtht...", : "ieuhwhqtmkbissxxobgzjusnrkrdrvyecdbqaxneqgzwfqwlupnodhqmzprdfjwuilqfufisqrdtpoqraafiqweuoulcmkdrsloqwqbxsqfqvyqpqumwbrcrurjensnbjrckydirjjeezudrnlrnmp..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 933 { : "ikhpipvckyqkbighehhfamyxawaarftzwviixxwzylfofcgnnarseakdsujpjispwkszjjsrcznsrnjvldttfjaxcrtuftotinadkpvkoexfnxrozhkbnfkmhyloogagfalsahxlwclqjoqbbqftxg...", : "dpvfpsjfewancgpcrejnsfberqzfcelnsxjgvfofihhpovhononvwpiddwopzntochikomqslpibcmeaijeclofgvmubxbgysffoqhhtzkvhkjblidbctpiiosylojyvqivivzzramvpdjjfjnakcx...", : "wrkekckehtoienjqjedrfyrcywdvyyvauloritacbfrqvyvjfmurrfhoddcbmzneutkffxnfcyslvdjklimfrtwmemeucosavoghnlnvrijzquxfggeeqskgzifgvdcukcqenvhnlxqkudbqkzzhww..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 821 { : "iluhoctbxqidkqqxvpoomoescsasszggxhttwocmpykcasptisgnqkdlpfifnxmvwhatvhqksfvotdjsezyzgorfwbrkovixtrtxobwaghnhapzjspxnxohvdqokgthfqrktccettdoxhkjrqyspek...", : "fissnpckrdwukdnkrbuqcukkphbkrfrujqekowyritcwrpeblxysgozsygzapmcbcdosnmqqqfnsdaywimxkclvjhbhlsshgahzaiuehqulnbyxyqpylvxujvbsrdcmcbksloldsqsdeggootbdelv...", : "sprblrfprswpmveomtmpqxannavfpzwwrkplvievpsgwznttneaqvcnmctyqilgdvfrqijxfxeixkkklddxeeioguzwfdhmoezngrzlcazgijlasfbntacrqoqzzvgnjltgrtyvvkzrquepzjmvpaj..." 
} Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 910 { : "ioyxzagrytequgsdnsigzzbejjhhcwwezjnkldqbfmiliglvznqwllwlmratpgdfoxcvbrwfrdmfunyqaowfrhbxgjpwqhkohytljhtlwhjtjjbhfucihgjhzvjrxyxyzyuaaxwbyeagiknjpdxszi...", : "kwhqwdleritrxjiitojttorlsxdhvajimjevdphxdwklbrwvmjojlcckfallatpdervnfapiixkonlujpmmtxqvduzsjyydwbwiwxzdnrbweuzsketdregfburmivbstqqqgkbvhnmvsvvmaeqhira...", : "ekispavjlufmomrjwkibcshuefhfeivytogotzgwxtaqaxvhhyrlndkswdojmukhzzrnktcodnfgdxfwysvkksbcoyaolwwwefazkvlxnoyrshpxbylwefkuzhndagjhlegoynibeuxiwrpvmgqzpl..." } Fri Feb 22 11:39:31.229 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 855 { : "ipagwgzuoxvogzektjaaqsfoaudrxtsjfrzybzmqgtroagvemdjqvsbkmwsodarnmuinciptvqhgshahxaaknvoeqxrapuqffmvoaqnmbooemqoxjywnultscqotdetpacjaahhbjbqhtxbewobtyf...", : "mkdoxnizrbqjxjoocbtpxelzuueozuiiuzozmnkvdapjlbeptkncwoiydvyvxlxxxgvgwgdegqjgcacmldojtcngwhxsncapcatebntzcrofmmlwsnfroaqxtxxhxvpsffilzwhmdwefddzrocbbed...", : "dvtyaqcaxptplhycaolcekimwkgrysgpnhoybawwwyaifgazofaqazumpexgljapjeddiftuttiekxcfyocuqzsdkwuwwyjahgfidrbuiaauxspnwmhudseqnbpfjjsjzpfhjmoikydvdwdebbbygd..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 825 { : "ipvkgqkzlecgluuitiddfjhlfxbcmwptdffwlyaignckhyayihgmxqkmofuwizhselxtdadpselrsgspfuyrgfptfczyyuwasdeuqiptzkxbuwexifjsxvljmtzcfexwedciwuqylhhegjcrrtqgem...", : "wzexevybbgmjxrmrfnxffipjpauqfszayynuvetaykcemckkthujijzlnxlirvulkoeugnmyvbbhawhimatljnbveyamtmcxmnbnlqyxpjnvbpefnzypcbsusklfafiwetotrdajrsgwhuykpwcchy...", : "tjwuugywwbwpbhltnwnyvvkodcuegwrxxeosqbikzlxezmgxxbpitvarhlcydynleanoxwghmgsaacxoxkychhghkforzkpbpdlcbzmdfdpemdpycqfjitocjwygwylqjqzufberintkhbcligzvkb..." 
} Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 881 { : "ismdgzvlkjxzgidffqdtjdqefaksfdihvwzuivucqhbgkffceyvzkooeasdyobmryhezdjdpbpwkgvyrrdmenxdwbcsmidcnfmvpmfxniascdxhcfwxrfrzbopzbbkivgirjpvlvemxavxqflcquoe...", : "nfcvxpbusrrdfwnfmqdmohphdvsgarsmmelauznlguweconkykcidjrudehtqsrldsrroiktbtlhkzbuzbswuzdgylgmftjfzwazrfqvrashcvjyzbbgqxlkocdtxigmjriayogbpzovlkzphyukej...", : "soqpzrslwyronjuaowsloffexnfsozovnzblrpvwxakvlqcshssqaderaafrjwskbcuzgvzxctgzminvxkulhhodcdfjnutbhsassobvmijdzxhcgdzfabvlpzteobxsynpwflzdyuqzphukzevabl..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 903 { : "ispabiktupqgmognjwijddveldtkeskgxxglyrmklirvqrlldbpatfzzxaczfzscryujtdesmtluwogdnymvtpyacvpvcfotcocaypdfzbytywmwxwalbokclwaqrafjnuiqzracuuqjwdyokorswf...", : "xzbdfoumhhyyyjvmooypmwtuegokkrkovgjsnscgcrbybeamaijzzuenfjaftmrxipeoooiwbrievxspdknheekieluvsmalllfqplajjrfdesmgfqhimoxudqzzulkqtaomacltypmwewuwdqxigt...", : "slxpjwxtkedtyslcbmkruahnywyertkzgicgcgkdllgxkgxbntjycqgidogcresmkcawvfwtazgnucokqjzgmocgtzteovaoyxilgytkjriognklxpznfvevgnkykhruyevrdghlcncxvigxeuimlb..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 876 { : "isvqnidzdeucxjhengrrpveandpssxvxezacanzrjarrblotqdhbmevjorgnsvnqivfrhuulwbigbizdgjrhhzomqnkuvyjveururgkxweplbwzsiycbvozbbkkaauzkhfrjhkfvzcmnzgiubrvqnn...", : "zqztzhpatonbvfzusbogdpawqpwsqjzodbpbimxepctgvvqxneppxtzxkxtpiklavmkqkggnxxhvjewpgkmeupowfwffvnvpmokdbfvafccztbefguamkalgzopeqhutmwdxqmvuzpvqtugkunpuqh...", : "qvtyqwqckxvnuzcesgomvlocsvyupcetnbfsdarjoncbbzzntnnqvkhbvlhgpbibbewvkliliiqjseyqvfgrilcfpwaauwmuewwvyrhxuygveoinssdqlorkmzqrtimtoeqkpdfcqjeccwmrasxtqo..." 
} Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 822 { : "iwadubmvmumawilyjohqmjiawceqeaeympndijiwdhyvzthglignuwhutjuboptimhzmuqskwhpwbgabzqybjpnzjjwgegacnqxtphbpontacmwemwgemfuvhuubklysfpuhyhkvpsxzysrvtjltla...", : "czzlsgoyqwkqmbuvgohdupkmmnrqxcancqrcjvykwrzzqmveexylrgklmexiivabsjugthxumgjalypsipovktbrnwihafrhbcahxicmlfsvqdfxeslsdsoijwicgkbzazkscjfkklqpdprtyporfn...", : "zjpaaoejfzlesaduhuqcacxbatggclkftelbnbxmbhtsgellfissueiwqowuwjcldqdwgvrlcyoyhqigotiukqlsxomjilelysnonvbwisyfcolldlqnuoueigyaxjvwbabpygqgqxuefgpaufnpib..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 897 { : "iwbynikxqoioksifndndjxczpvumptctdiinqqwdlihrvnokxwjuegyvxwlqhhksxbbiicauyjslohezlneozvhmqhmvsdgczswiiqbigbtpacsmwahlspufvgefyrdskjvrqdavgfxwwuqypnopmh...", : "gbfuplsnuvgthemmkdgeqbtvlmglnoeolvgtkwjwnzyzgwisdciapeewbpcjehjfcoagvvfmskhcwtqkpxidrktewsztwgfhmeiwmvkthlgbgftszdhsfqrmdjtquulvrvkghgctuslzqkxmgrsypp...", : "lojqxfwmwoxqopyqlpszbqepotnucywjdwvibdttbjvsvcwgagnvakmaofcrluqpqqxajcupoptnmmgdgigdyikhvbgydvbhvhtloahuijazgynryokugijcxgfhctegptbtwyryjlsmiojaagddzn..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 836 { : "iwvycxmzynmbqwipaqsxtvpghpfkzopeippmuomxuyjztwbacklfruupogsfvihqteoemqfdnbcpzfmlwzemeewcxlvyagztnbrjyqqyswfyftoqmnaduzotthsgsexrwfbsrzhgkumgzhxcisaxac...", : "alzgbzcxtreubhelgbclgoudpjsidraokrczpcaxpfhdphxdptdceyqjynslaghxwzabhtpmcphrsmiuynenmrlnuxyutjayeljxmyljtmuuqvxnkfrvvgjwslixiyysxujicwkgedceygmieitkji...", : "okconxvtfcrgztfyglobjtgvvkxgquzxoylfavhozsvcqkdatjsgnnpqvgqqybvumfbxajvzjdcnbgfgzmarpnrgppvhtrkgrzpkabocgkwzdpfwgvliukqpqgtjhidvlqlvxpsspdijircuwbumtv..." 
} Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 839 { : "ixutjjfwuesenczvtbprdafoqbxkmldwtlyyanjextxtvhzmxmhlgnvleomjdhjlxyhdeqneioafxakmssvixsytkynlmsgllxremgbjlhinxhxubipyntdnkffvyxdwjuzlgetixmabtjkkypymhf...", : "emsletivgornrtiugovlyvkqqqtectxbolphvkdnbgcnhezvjoifqszztyjnoxkruitcsjgotnzpwgduuiteakwzwaaboefkwvehvygtikymfwtdcayfoihlztrgscohzmtbwsdawtkpwkijeoxqgg...", : "fjcmhysgywoqellwaygowqnollfjdqonhbfoekcvofdbenjepevkkcddlseloocsivmjifkkrjsdtvyjejjemwsjuljouyoqlyypssnmsxcrqzhakghojrfpaiqalircvqlxwzpciqzmdmsdbuvgxy..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 824 { : "iyzvvoyrjtpvtpsallriowjhlhegpdqrxxnodbzavcfkuybkjjplxabadlkrtgthbkvlqyxngejgvovvlcgjqcnnvndgtncugvmfhxqusxokyrkdfczempjqfalaxneudoslsglynmfvreoqkwcbvr...", : "bcifddbjuvcnyywodccbhfsogregbheeumfoqugscclpphvkmnomavyicltpvgbnctovomzqdmtyunbwjbrtpsulqbdzpgoyffmjhijzwydjwjcbealtcuvxkyweaiihrfwpudsalorotdrlmbshgv...", : "hvyycqsmuxzguxtqnvogdlzphggwbtwwxwrrxvrjmleoczyjpjglyycmimduyhubjyaykkccucgjsrulthlygxbfcchrrrqyuznyuhmujmmzcvhjyntttcdonfncxcufwzypdxxxhghluaeblebubu..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 840 { : "jasrnnxbnqzuqvakbuhpoultavpoqmmwrmvqgzliqqesjbrrgdcxwvaydkfsplynmziglknnnsxdlyiiffuigcdrqymwayrcpbvjxjdqllsrmzhxpkpaztfyghdipozrjygiiqbphdnjdxtboufmke...", : "lhhxmemeugpilvuwyiqkousdaaheohfpxqyqrckrzplfoypsyvvnrauphsygwgmesidxwqvtndlnjcnlqwzwerfgwceeicndsunautcyepwsjagflivzexaspwrfgeihugutxuxxtpaxvtcgxxzbmg...", : "nokblxwlsithdshvslxlsyccwxvxbpvvompqvmkkvxddxnfrxykbwzehiagxgpnaikikmmnythlzdwbmbynwpydhcolkxkcdwvzsjuajxwiwawqsxnqlbymnxvlfauofngzkzqbccygnkzezjldzfl..." 
} Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 823 { : "jgxtrdiybqlodcxdgdzwluydarkzazmuworbyhngijrzjjfckgqyajtewonhjpgkjthtpwlpryfazcirjopvjdmlxobdddpqcwyathpftegrhwqhcnazvqhhcoshowejptrsdevlovaokncmqjwpnu...", : "fanjvxzbbexsxmajxbqeqvcoegerhaohpypymahexbvfaogvakfwjhaxgdnsclbeqacpcpecpptkabmnebtexpoiggfgknetugcvlxgosiexarekswwvnpvqckwpkzilhiwlglyllkxxeiripjowpk...", : "amaijhyratgkukksajbnrbtqepdbthdxmqtfyesiijyiaxgixjowwznjulgizuwwxllbkrlhxkwzcihwycexpokptenhairmhguhojsjhryagcgyyswnguzjzwptfeubiuzjkqbvvkpkykvoloasll..." } Fri Feb 22 11:39:31.230 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 867 { : "jgygkypnirohnxkcoiwzmlwisknvogsckpvxlvqkvzxbqidlbyxsaxvcotaeddbtnhbbfgantfkhokbzwrqgcolxbfspojpxxypiuzqxxmizstdlcwesipoaiaofkmucnpwklribvhhiuvjfodgloc...", : "yvdszrylwljfilhymgbddxdtbwvhxcppqkzjtcwwwkwkrzzchebebyakafoigaokqoqszvtqrzvfvfleiuqqhqynsyanypbmevasfsnkiuzwmzsxgqzwoqxzdhszkyugitpbjdhctlolurcllgxzyz...", : "oqwxjmzjsyunnpoumhuqaztyurxnxifoplmgmdksoymwhqvgujmsnovgnameubsalaxuicfkdkeyldklhlimlhymtmjdbsorlcqfupjriramfjuabhzngcumnajszuogkoldtmjblimewecyqxgkfs..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 839 { : "jhvexzfadaugizieepxmseftoptcjgxppqgesbijlrdwidhrhencszkkajiqzmwkrwhomvdqfwgvcwjoroxurqaeouxldiudupimovdgxmlneibtulbwbqdhmkzlwldattbimlyjqolkgntwvymrmh...", : "ronvlepdwiwviotrfnsegfcqquyjustagrwbcuyokocjvfuxxsqgobmjcvupllzisrwnhmpkqtvbawvpphsgqjmkrwcogjgmtfhrucrurllxxsiumqqjlntgnwpkwtfndnpmullvqyzfxabjwwsfjx...", : "rkayginkgyfvaubkgmkhasgnmmpcitrgzxnhukarjcftqktmtygbmvlewhwyrohatrcwcydanrnzizyowlixdbhcptrchkrnycnjyxvtrtuhffvtgnxeehawupsihkuqvxswbfpubtwojoevvfnmum..." 
} Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 904 { : "jipfrjwffekekgbnpfexcoxzwvokqjlxjvxeygnjzrymicsblfshhipqrygjslkfrkhdkzxsuakgysauijrhanhckyqffghclwnsugatwqbikgtjekntqkhxmoblhxvohwsxayoyvrnbzzmhltzxmm...", : "vlqkaujvvbwowtrrqlskeolwhrdtwyotyersnjlwkkpcgqslgobogapuyetphsilwmdigvncyrtmijcysczudbkanlkrvgbghjfnygqvsocafblwmfvhdovlmtouqvrbnagjkfpokwuybibfqabbnw...", : "mneqzubguyrafrldwwapkoadpvgudhumiepyqjibopuaqqmylurpvydgbcoasklqtqiivvpqbtkgdytyxabqaijrikvhpjdhgunkifxmbensefibbxoavtslyksxsljkylvcaoafxllzggcehmzlfm..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 837 { : "jjchrfyfmscmuxadrtyrjmphmyxpthbaimusklnwysjxpoqqlwfhkfooinapifohorhozaosdfdlnrrpauyudzngjvawagibjfpgsbdpagntzyqdtxdbrvdmekkfmkacforbnlqwrxpovtjuespvsl...", : "urogvhsmruxvkxvhhvalvdgoabcuoqbxrlwbwkamhlqjmcmrjwctzsgksjeoblgjeynaahdfjiblnvnnehkothpscncmfebxrwqsmeihkyoxtkkybdjredrzhwcbnvobpnbbclgdaijvykxzlxfdus...", : "sphgbnkaqtbzzgpkybssrokucgvfggifdfgizuopqpdzgpdikzjjrkqpcjlormaqqfqjirhyrgnutiajtqatscpwkmrbpprjdsjybadljfjzooisbntcrrrljxjlphsrapvccxkjyqgntmaodxdgqg..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 831 { : "jlfagmvsbsmdnbwobkkzhglrbdkkzybqmefmqyllvqjnyxhshkphllqstbrvexihzpcnvwgpqzjozxmtevktbakmiostnuhiaumseqlarkobjzotqodeczjxhnrstozhknhadxdssfhiybjrhgkvlo...", : "gqnfrnmmlvqxslzqozoyontazetujzteyztjvbxgvljzpjbqnuquwxjlhbsjlngcfsnchnkoqbmjgrllgkxqiarhohlreeqglmaeccnzozmnkfimqozpcvhjksgcvzroyzeanowmhkltcfzgiwxlrv...", : "pdzwwbvycuhcrpptorhejyhqnarhenovsrpxufbxpfnwjllixoilusmuqvvxljzlqtpffauvdfjrfxedgmngkfsrglevxdnzorpjqxtcchyyfnoexvgrlumugzhowscfwyawbzcjydgaymryhhzttl..." 
} Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 825 { : "jnidgntfgmgwnlfpsdclbkfzgynorrtojusrpiztjzledlbwjwbiutiwwmhuotsmqxasxataxqvxtxthbtokguqwndmwlnsvffpnzkhagxvjigjkfhubfhuvjduruxtmumllrgvjlvicxwzgyikrtx...", : "xpgbkidkihyvnqbkzgoxjnbzkiwjkhzmastoijwizyppqlwlpyybvkvxurrakvxbhzqwywbazzhwysgmurguotwwscreebmbnrltvjdflbmotexguhnwlmujikffvraklhsbdbyaodnpakvvzwrdyf...", : "rjirapbtasggqxiuaijyncarwvrdjakrueutqwzbcawduxbtpukrilwstobvbcwknwdeawrxipckayzfvfvctijynrbgxwwmcffaygrlcbnbfyhowcsugwavwcamsdbfpjkommlologfhsbznermmy..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 838 { : "joztrhirfiqaezhipazgtynketnmmhmraqszubvjdicpaxxljgdtvxsoatemobedkfgfivuabvlukksjjkqtrzexfcqbzvgqlxpjjiaajzbhbqceybvnnnsyzlxwfzrvygkbgskveauncfsbjtpayl...", : "ynlppiglfbkzvykhjxzczpbjvioqjenmfpwcdthpgvzvvdshkbhrvdtxpedzlbfulktrhsiitclfbjuqrflkowckukucmhgnxntsxnwqfpsrxyrkprhtzsktnyxfgcychdrxoqyhlirdpcxkzjqnqm...", : "zbjrmxckkghyoijbzqjatuplkdhpjgftfrlmeecqstsseepckjzgqwhgqpqvlfpywawjwdfjxgxsxssiazyorlprzijghgpksogfikkoihkmjkaimvocbqzsagffpwnedooiisofwcdcgzqsyzlifw..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 872 { : "jpnrfesrdfeidwxdyhngcfshbqiuqxxswaimubvelcvzrtouhqpanqnrmfieveqsagjigvtcalsipggtqlqjgfmpydkaajbwxyppjbzarbqcwxttimyyjeetgrbvaginitnkitcsctxqxrstsijgdh...", : "guebpzfbaykqaizsayzvifdykrormdhrjwsfsfaqpcsahoncsbjjwkxpasojzxwqbcybbfjdddchuehqilblxuuqavcffarqqrdaawsdughleskluywmdphjrjvupxnxmemoposqvwspsniyyriyup...", : "ezsuhfmhpiwdabtmzppaxxpoewydhaaydluzbupgnewypbwsuzhgwlapgdhwxptlhenbudxjwepgatwwphadzucybxlkeciqqwopppqyjexnkqpeslqxihajfyalwqwqvtwoprtgmxatekoscdifdq..." 
} Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 951 { : "jrumqbfbozvwztjheqangdkbwpodqigzejdwcvrbpmtjnyvrbheuvqlkddkkuocwcdzosqteuyeygzgrpduiuekbaanowubvipiajrjfwfknxcfjsclgltidkwiqnqqtkzgbvxxpxsygamxnpnrxxm...", : "kaqvlkknsyjtsakxgefxhnvkrxnmhnfnhqfmduxflzpmizbpizxthzjzaveutiekcsmeljvfmmludiopscmjbkoupdrtvllfcbdwptlryhnynslgffxdzdtamfklorcwmkpgzbeyvgxxfefbiaxaqy...", : "ugxdjsjdpeusobnytawqpqvmijhsdcvqwvxesawwtacajagagxjdyupurlaegzbhdqduhuogrzjcevlmkgqmaknprujanarkvwicvqojewlktpspnifhjwlfsworzulagzzcovgsfwviifadwsnbzj..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 850 { : "jucpmydmupnvztbwsoeobuufiytrbqvacpodxlrwbktyevkbkdanfxiakfjhxaacusoyqpdgpsusulaubbrlnjbvikwnxnjwcwsukkrvzuoeabxicxiiiysubfjchtuceqvwefjsmsmsfkxzndopcl...", : "vbiveflbehhwfvsnhlqrgjgzsqvgobyqmipwhtiertynpcmuqjbgsxkxqxcqfepyqnsslelyrpudphdaptndbtcqwjbbfhqwgcghknanknxqnnycfhptvwhoehljdxlaluspoqtpcjluhqnlrrpndx...", : "bzjfbowtuchfxlryjvudeyktmyndebtequeckxkxrxazrguajtkbncxsldzwzgktiqbhulmyuvmjvruiejtcfctkdbuwjelfosqzzhobuuzctnekpergyekocoadggueiwayutxmamihwuhxykjwqs..." } Fri Feb 22 11:39:31.231 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 915 { : "juyxnkiqtfkxlumkewgzqfyfgrhjuvjanlnutocfwruqjvxhzlbptkmaocuqixesrvwnpiwjnyylvponrjylvmgfnsgmdbapimuwukppjufbdtljshruhzyogwmotkcrafapqfuzdeycdvypvdvxhe...", : "nahbtrmwcgnziufodjcwmgvtxbtkkxzzxzpgjgnyjtxrclfnhzzbmzsmuwkgqlimuvwqdcrrecdkbgdafbnmzqgcnivhoobycdjknwfmfeplvdvndmedldponfdzjulpjeriaqfxuoejafzhuysklm...", : "slwwfivcisxyralscubjjvvrrukqsuilmlxodaucxbakrlpbhtrptdoyxeqnqqvmnlcfdhlshscvrgzszqsxpkurryzufkojtohoxcewevsfoicumoflvjywquxicfovvvwbltltdqakrbprwmyapb..." 
} Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 883 { : "kdghlhofndjkvjwmqvkfxwxfygxtulbnadpoiehwbvpkaiptjlcwdnwenqgsqnyhqjlmbtaebocjxbsroqpqpagqttzjmhdqydnjltbajgzwvsicwqhetpvitzlbrztzhrucocobhoszfqmyhlvmdv...", : "vzodezjdcfkbbjhuiiwvxpeuzzcpqwcbtnjobhdyioepwtjjsjwononuqbludyojtdcvaehntxqwgeqwgdtibhyssdfucnaidzbnrdhrrsibrpoczyzpncqxtbogxagxsluangimwjzfnqidmfftyu...", : "xzgpmbleouihzpqdclbuknjgypsebfhtcbewoynhtlbtunjxqjyhxvqqqpekzihjvgvfbaoyutlvfyexjmntlwiktrykokondvmjvrteijzpoakzfsgjmhowiqgidoxyuixcyderqiqgiuiovgemtu..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 836 { : "kfyadaeyirtuqqhckwvjelxznpberlyjzuitpvugevgkryjecnzljhkcwkqnithmaesrnvaxobhikmtygukcavnoirvrpespjpcfpfhydqgzetdxxsopsyabzdykekrjdclkikoqprllzurrehxtvt...", : "cdqbciktdhseudnqudgczjhsxmthxwnwsvyubwnpwoatecanvmlurwidrzxacwvudeekqgzmhtnjbvrliciycersarmffmclbaodmkieluyrjsjcmoylfumvbsmsvcingtyotlhnjbvslggalcdpji...", : "qfdiuiishrwywepuwujzhlvhmjbyrfbrdykcobysfuawhimtloeytqiwwgtgxtgvqonepufzvnwcnbvasvtfxyvosxdygilqfdvqujreaurqvtialxzmcabifxnewtdjdhvvvmimeyvrgsiupsxsmr..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 960 { : "kgnlcvfhfrpjyhidctjyhkwwcnapbqqbehgkmgqriliskjzinrgpnqlcvljjnkfjzbiuzugritrihsqdbqlzmuqawoiknzabbrdoekvqycwhbpgxoitdbcjtzmbkhmqqtoyqisjhvkztpydjwvxytn...", : "cgvtncbhipktnwemykmueqrypgfqmsporbliqinsdnjszbskypyanimhnskizndlacktqfzcpthokdffhjyikzlbzfvvnglhisprdvplhcxbekzyrtujobwyqrcjgfrixmrtezvhxntadctbaxotnt...", : "etjzwplmbwkqovxrychsbwggwvzwtzbxwdrvpjuaosjnmdbapqdwhdwuxbnmticvxumpplwjltrhwmvjxeadzpcivxvhwqgyrmbslvctgwdhwspsindorbditxfqogdysdntmzjxrgnwhnemmvnciw..." 
} Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 831 { : "khtexcqncsllgtvvyjajgshsgwhtgvwtflqrzzfeiomdzwnqkakjnfkzvhmmunloiidjpsnzwnfbirjzinonhaoazuyskvtzbcgaqlsladghyeyzpkmutlnjbrudrjsszuaxoujdfsfiepjwkdevvg...", : "arfbgjfezevftivwsqwkbytriataplqmsdfadmtcaizmrarsdqswelykqbhahlniblkjtpfevpupyguzdylqbohfnfrbiiyoradxxwxsrikqajhrmoxwmrlrcygcoguxqmychwxikzjleizfnogncs...", : "eosypmyhypfkknumjxwwndwqyqithbudujrkhxrxbrovpwyabguauwhkhbgqchyjkkzvewiccvgqxzjvpszvyhrcmfuvgzrtvkgkpbowtzjimhcpmjysqijukxqpufjkctxyoomibzdyfqtcicfvje..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 849 { : "khyhydxwcgsbfeaikmxjhmngjibzmqgzjdmatrdpvspywdmkobfubrdeatqbwxremjpnpwcibthyzkxcpbwpgoozmrjyqrfpgfzpfizvdekuapkinnihwkovcfhpanbfcnwktxdokhjpyjhbfctrhm...", : "ehrgolgqyrqympawfvphyhkoccmzygcvfmpdfxctaimnqaqhowqprtaxgwfkmaxnosjnalxppzvociwzifzcchdrbfapvgklmobhxpvmssvymxtoffnhjznnoolojjuecglshyeolghxzgopxwxdcz...", : "pjbpklqokdfvvcmiqbsjyuwbqckarvsuqokcyshpzuhncxdvhpmfukkpjreybjuzdnmiaknekxxtpvcimkqrbwzqnqornrytfoekfbtuqlneoogkrxvylqolzqfoztbashyliclqhtaqnnjgexfewb..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 853 { : "kjjqutczawrjinrbnozlymhermswnoecbsdhbogexrfxthmwprnaocspfwqtdqpcuqcixkhtmiinyeczqdmsscwhzxabtdmgzqgifuwjbzztbhlwtqltqbsglltllsmxlntcohejlhejcuzpjhcpjn...", : "dzqdmbtdorarbbxtqupwuljcurjfbdytolbaimtdrsvnddwzwnkonxwpzxsqqinhcoqjzdsetehztsbkswtyverusgzaysgrknhofjvlolykgdrztdmwllobjrepraloetvanfpnzddapmeatuihnp...", : "owxrzqcbnwbnllqzemafgjcmpexoputnqbgrmcmwuouyhvemukwktljumvfpddlxzetnwjdvechkpjiexxditmjecimramqhqssrulrwhoolkchlimfjjhpghkiwhbxfgipmxxlgfgrmsqyalvrtsh..." 
} Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 866 { : "kjwtxzikyjryaqujmxlqgkmvsywntottafmgocccouowhpdrpckfhrkzbhcdvvwuzzondqorpkzildxlmwsimdymrrjaocqriflzrdotpxilhwgybslrlakvmdgdxabbvgyngntvvngxynbfvtbius...", : "tlphfuntxewaruuxrllinbmehxvegaxcdwskurlkyzbtgdtmcljyftgvtftkdnmiuiwbiacjortwhhzyxcpjjwanmvdwqcarimiwpdgbghlzsywoclnxsynbrqdlfaklavszirouznzejuspdjzmes...", : "viaqfplmwutasrykujkbtikpjqlricbhjvbwdcrqbmcbndrecgyxjwdmujgnptzisznryujwrnraikstfwgadzxbhitvmgcbmlubhsqykzxkdvnhplodiagrzptkevwvexwqbpouvwukkfsetapunu..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 858 { : "kknqsmqyxxvltdjqtendahpmbuxjhxhufizrubxjguzcziktfuohuqfpflarklzlroeoyonexhjzbyafjteopgvzwvbyphmskhupypfslkcccltxlxnhkcktnczrruytexlecocynnajsajgsepvaz...", : "jwlbkejgbtaosupcqdgluhfprcwhjvfwpdyclwuuygvfsiaduhjohmtiwonexazhucpzhqravunkgxtyvtqtsqoqvzxnoixswinqolbccgpkrrobijunzrzbdwrxrjnjcyyhqioymyuwhsvrgmqrzr...", : "mcfqcxuttlgnnzjimqwltvwnuutbryigiawmohmrbcefhbomdkrikuycclrffczwsmvhopcnxvxppcmqictmlmcrajworovxsivmrumqhwvkvvmtvcyouawlsvesiqnvsiqdfdyhlgizdgltmnxacx..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 895 { : "klphcjpyshjtwzurjtenvpwpcspcafihwrlefjyhyytlvyawdfsfmyernpydaengluomptxfezdnmpaulwgpwontxcpllrpiodavcccksnjnzesyjhlyebtsvdmrjjiwysxbhxuehflgiupkhpwpmw...", : "mpvrbfspeenxnqddpthebpvdigzcyrypvzzgwhzwwdylhwlvylejfzusetlrjcrrvfqntlxuvpqbiohcyguaoaijegfavrycqegpmyttrqnjjbsrmfbykxazepevwfmmudgxtbjyjxkfvrizuybeux...", : "nrpxxcbhidhnthxcspzzqqswxiwixudzsutzbdkhgbntcgsfuuvgrhwndnhpaclaihfpaomdtufdufylnpwinokuoedlvjitlxpxjnhmjrtyukfvtuxfhlgayginiarmqlwyrurhdetgbqxvreaxbn..." 
} Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 821 { : "klvgathfgaxtkcragzjlewwvbejrdvwcryotksswkphmnsmqiumjwbfpyvtahitgopuyccnmnskuupsxkgowofipapfdybwiiydpwbzyhxkwcjelhqaghcdayigjxoavidutzwsyiibfaqxvtwzinm...", : "exutgdzobyrogxmkqtckciuhclhsphvksuwnzgpvthzzdokredhqgaymhbmovbhgzaimdwslhvwrvppzjqevfartxghrtmympfrnunqbkfatbiuyzslccfaolyzvmxwcnqnwxpexiqdoibkzderrtp...", : "wwmnbapqrnteixelkwwnsflsauobobjyxragnmecjlrnffndegifvpxgpguiuacgzametnlligaagffvquqgjhlsixhxfncpglypfvtqwqmtknkgbbtkovxicbcxbpqlzpfsxsocknlwfqoouxliek..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 845 { : "kniemgpqfxoahuguhstkwgbfbtuxeqduooylyxuycpgupulzggmamdespwdymprlfpccbojibbnyrpxenbctkimlivslgjwzyoneqbgoeolzcgmjiivhvqfpnooabkfjuwxxhfddysdplpllitwmwx...", : "pnmqjvjxgwzmlycqwxkscskicvlwgvnquvycxnpstqmcxvxickhnmsmvqfmmydcbaqvvvvmexxqjkhibrkkvpfczqmkzefszinydmajzkwxhjyqbwipdqyhjnbfztqayctgqzgcplcucaiujfbvbqn...", : "txjbeavtqbliwntlddcbwbtyetfsutlhlkfuvkaxycuhnltuyibocmyploinxacmzzzkjwwcopvqjjuzlxwgzxxktojpkqkraykopuykpnqwxhcutdpfklhqbffhltceeqxopexeyvxcxnyfxkxdst..." } Fri Feb 22 11:39:31.232 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 923 { : "kogpgsqpbpfcriddxmxawcbnstezgothhiunxrioqsydblczxxgegmdokkcmoemhcalfgbivqgwdwvaungbbugcbruqfxeyeqwkcyancssbuypxjytyogivvesqtvxphneojadudijcpmhuthrbdua...", : "ahbaeqpmxrhypbckjvvbokzwhigrcwdskvcqpjcdpavaqxiueqjxjonophvuholoqeafagzixlqsestpnbnkrozpsqvrieugbexkfmvfvpgipvibbbbignlwnryghjdgqdhaazfpfktjlhyydehldj...", : "orkfsadeodurnefqrsxlqfjzhjllhfedhqpwmguetyidojxdrizvlrdanqidcxobvraqkcenfnkpmgbwyhrlhrkeuzhqofpcykebkjayzlrvtnyonvmzwnsbtgpinwujwtqzbxrwlrwsquirtrcivn..." 
}
[... ~59 further identical Btree::insert "key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1" warnings from conn4 at 11:39:31 (key sizes 821-934) elided ...]
Fri Feb 22 11:39:31.238 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 871 { : "opofbgkltqnjyifdpxurqlygqurbjplvupscefrrlmnmypogwrxpmwahqfpuinoibzytsbbklcmozydzzrixihoqzdmqcsnolnfvbksdzrsepviehimtqtbbsvudtloalfnysqkvwqtsikiwocdcvu...", : "tncefzqbjzyvmuiynibzqtgpqwuiebopzqjnxpwjlqwmxdbgoejporyrvbpqejtzmpzpbspjqlvmodvkuofzexbsvozweoilaniklpzkzgoqanjkkqimlkugpqjqjoqqgoiqhyfdludldoqpkmwcur...", : "lxwwvxjfpxfysivravradbjtrnhaqrzyugagaphovpydjsgpkljzikkkatsozcyftdrspbwopknpjrupedbbczwkqrxqlnjwbozomaeqigpurskcajsnbrkvheuaffgjjmsybrlmdylvntepmoxifa...
} Fri Feb 22 11:39:31.238 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 892 { : "osuoiuapplfrsusgyejyutergxqyeaakqgoxrlrluymutugvrkqnfnvijaqunrzrrngagkbzkaqczxhubavvrfpetgxmgiynqnafsonmlghbniymvxxikkijspihmlxrebldwzfapyosoohdmtzbts...", : "ndunvfsrjghsccpvzdislxusrmohowrxrhyfnpkvbtwflxnhphzkkfikjbwyyhvhiyagqffouszklxlrrraywykmuuheftlasvzwobxxzlmzndvapazhbubmykqzfipbipwsjlrbheefvuojxwibzs...", : "yosmolokjpqrzymejotncqwteudnamkudrbpziursdpbiopieyywnayndrcnxmxpkardlgeoyklyigbmmazcimydhftafzaqrgoyzxokkqnaqfjpsjelslsdwafpkksmtxirxlvsdaqslbbybnkyeu..." } Fri Feb 22 11:39:31.238 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 1012 { : "ouhxpghkbojfkvujmoxacumndqaialrhcvqrlvjcjxhwoimlxvjrvbfwpawzcdvuaogeotwgbhxipcxknrzgolfcoqjfirgfxgiyuaitcjiffmtblxjfnieuafmfgfjpliyosiqzoaldtkewvoadms...", : "tjsooeggzbmdtqvsjpekzhbescnvraaukjggwcwwvbjwsmattguqqttujpnwngbwnrunpthymocmnekqxidrxfofifpxzudczrsaghaqzrwesyggtfaybohafjxlzyhlvuhxahcaniwqmfkrlefmez...", : "qjmdkvwkmfshvfhnyejejzlymfepgrrztrtesszgrhxwjcvhetshmuoabuuvfjfckfylqsnzfwgzotpfqipmgwnderfdjewywagoczffevxmypbovktqrpiaputhrjbnvccvzflvyefaapnzsphpnx..." } Fri Feb 22 11:39:31.238 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 847 { : "oulavaymrpobsghewdydnfwefmukaofuqiwhpsvxvnccdjwgsqlregeigodzcfxnnqjpkrctiwisprcneckpxtacjtejjbtizwldaydspntotpwwxsfqhnmjmlwevzuwvhjftfrzclsgyljtwxbnkd...", : "noubzsfapsxbhuusynyennbccvibavmyjgovlkyulijblhgkrgsigdfrnuwzbocbjvkokzkezzpfkwavzbjztjtmwbmtubgfmqdmaskmtaawhnuhsvpgyozoeyudmzrsoefrvdujcrpdonjjompcxo...", : "pvsozeyowwoffmhvtcouqlxjrnrljgmqhefvkvpkqtagpvvfkzyntimzcunwavlbznitbfwnwvifcypfzqfnvsmmlwgngbxfwxktfbqepmcndtawznwafjsqmwbbanrfucesgioojrecejjgmsurii..." 
} Fri Feb 22 11:39:31.238 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 905 { : "ouocvcvnmiutpvbjoreialrzfqjkdwsdgbwbuzklpjcukvsmsrmseowiaemmnevmcblxhqnxvqgvucatkaxpqwmkexcmqgeqlbkwmkepbwpojxbququueflbysfedihulqfhtphqxdysxhybvuxdvh...", : "bfuhbknvevpbfqomzfgumqffyivyxlglscwqigqitrtixwjtpadlimkcyzhsxyjytpftrhenhyobatifvfskuxoviglkppsuzlqwkycdtztcaqglplgtezyprwxetzurjexztwxagwxwbghezczgmc...", : "jzmjcxcgjgjvhnhysrwtkpyphfkuivhgvcopwwmjmgvqqmjcdliuwpbdorkgaemqobxxgqqtvvttxbblbzuzwarqhgemljawvgblshaemgosfmqlzgjytbrbgfgvenedldrgsiusxyyvnqcwxfjbzs..." } Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 876 { : "owgiiqbbqsevkbztwzwdbjgbeaypsytksdwezhbnpdxgquzdertoyuprjtvthxrvpluukcpzfkukoshqxjhhqebjtgohgvdavozjhnkrqdelvyrkmehfoeaxyexrlqzfrfyrsmmytxcimgzsfladla...", : "drifwrcmzauescdmcojqkvbuyhpkxluagrongqrlaoylbyouuyerqmgdophwiqpmpoywkeljgjwsfypcnppmrsphntwadmfswgznxgfqwlayqyllthtmpaipiywoxawfihmdkbtpsjdqycdtmkfpre...", : "uuqregydxynotilxvuxpoafxaykzedrnxhpafihqfauojdbcaacsgiegyqzzxriguxxqrxbcnxxuiixwrzxntonneoumstlulitzkfcwvtdhylfypwiipfqndwkydqkyqhyhrzlmboguxilgnxxese..." } Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 827 { : "owtqacqepgkohwatguoixleiiplyjwvwgmqeltlspxdtaxehtdcfllwykcwddvnhxglmpzeyhdinyciprcqhmfflzlqxadfxvbwuhpvcjhcczxscqclgvvpdjaahlcbtxztigzaovsujynhkssbrdz...", : "fsupbwpzpqynkurxxvjlfnxovkywdzqpxkzxrwmrkksscmzldqtmojowafrjpasysgxniitsumduovtsflcflcctirprvoefasyglsqjkazhvyvqwsdvnatogjehqxkvzgcntelavblplhdgrxderw...", : "ehgmhytfityftzcjdquraxovcwmyytejqlpxqpsdtcfouunaxyovxfxqrnokigupihcjivuzdqlbvbmpqxdceyoeyagaraqfnrzkdsdbjkrxelxnpoddkguqdgntnifaiwgvihelezrqouvyvowanh..." 
} Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 873 { : "pahwzytksyvqkgfyxjphhxgxqzgqrzzroocqogcygspqtisompjmjddvrmhinpxbajhmmtyppclgimoiiuggndicusozlcpewlpezcnokrsvjqbwosajwztrhlyqfkpbjkvpzunnpkvyzrxoormxht...", : "jvuahgvmdzxrhatvklnggefmuuylbcnltrlicxcaoipcqpnbvhhrxunmgzsjhqxiaezzxchydccktuqurwhnbdvmvfnjbodfxtwvdufpelnpkplwnpbqnwvncbeyaewctxtqblkrloflfnbljejrbi...", : "tstxzcmbpdvxxktudiqwbljejguqokkdxabqdgpvhsifrdzrxfgxjrkscvggzawmtenapasyijcvxxucejnxojiubknpdzablyrujulqvnpkwszbsungzwaxmfdjhsmrkmbxiyavfegtvociqzhlyp..." } Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 832 { : "pbmyqgwxzqgxymdymzhbsjdwjijtgztsujmpbmqzthwebvrcosycfcqnqnciouqavrjeqwrqtqyjcbnjczqqbmefnqcolmoqsfedykryixqcgazyhskgtyhyybdulxcyzpaelzatbuffomxqwjktsx...", : "crtlveqehtpyprwmccjxiwlaixnvbtltowzbjrssuucvnggcnrvpkuggaamwdfgweukcfiaopgrnpfhpdxotokoouzvesbmqybbmjvjzylqsoumahkwpcdfivmrjdqcpqgoveiqpalwulqkpgsrclv...", : "otgnnpiyeeuzeidxmdmjcjaqeyidgbcanqvwvvjptjbnpfocssyfwkdyarfqqtwsjsljjhpgmmpxywoxhpdzzjynkibgsuvdgxrhczrbfxteepvblmsagtxivornxjalqojersuxcmxmdrpavlozix..." } Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 939 { : "pefzraljiilkqtpwwukznquzrirweablefqrrufrgfggiepcjfyyzbseoacfzmbsnrxqaohlebffxwverzocdewaqilvnkomuhgpiccifxjgujfiodonazidageoczuljobyznmxrqpcemfxarhwit...", : "bodpuajwebrlkhvoukgltnzpcltsgzqcmidbabxcjbexnoebglvemvmqqbnobpiashjfrtuuwvjazdankpcwgbdqqywqfrovjfbfgjlmrzcjwyyzgblkqoltynjumuinvturklgwjwqamdjkjsocgy...", : "mxyfymcjevlbbhozywhwsuukksoewgohhbmadazztypjcnrinocbwkjeynodckvvvlcjhagllqwlrdviyubfcoztdgyoplqfpmroiigcjdhytovmtdacdaprpqddrcyscwaxtnrrrsrpvwwwljsmhc..." 
} Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 906 { : "pfzcfleavzpsqdakbkkhnhrzpnyrxeokkdwwmtvbjlmfmmtofseveotmmrqontnzabhznhscwrdfrjycnowsvdgnjeotrltizzmzcywsdkospmrasizekzcdwqinimuijqzxevdkmfzfwqxdhafytj...", : "gxtvuzurgsoorkpoipwdcjpukyrgxrjvvrwndoilyoazhaxicbvexqcqwnyeprnprtalhopsbvhrbsicsqkkgxmlsiuputtbhncygvxtqrkhzxskswmptwtsqcrdqqdodsximwajwkenpqfirosovx...", : "qwrtmdndnpfzfoxpzkxbrmazzafbhhhddymgmzedrueoplqmnzuuavwuyzatqkjwgvedsqjdlycpeaukqjrddshygmbdadafizciukjqoiotwtpzdkwjbohysqirfqrgxrsizwvczcyhflsxhutfie..." } Fri Feb 22 11:39:31.239 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 883 { : "pmobuyqhkpedzrbflpugzqehcjcpjcrvtjzdcijqowckpgoitibfstmwfnyinehpujgqbektfvyvyzdbjuhjwirzcmqqudteycbwieekwytysvitsyodvcgjrocubjfjnwwlzfjfvgioxryqxjsjlx...", : "wuypkbzeahlihmcklhmwoggdeihateixzsrywsdbwsxecxjxtnmisqooqbqajavjbbxouqovchlcaqzqnnhjpqnjnojuoqgipurooruwbevppmpetifqtwtdelcrpdwyaznxyfnvammyvvchhxdicl...", : "oylgecmziyknaxvosqjndsdzxphrehsvjzypzhlazrdvecmyqdgktwtmgkflnhkbgyxjijygzxrbjvlkfaqeyrrkgtxxtlsvboovjlyjibvvcqpvfbpjkxhldhkgmlkljktuhviaofubajozxpnzwl..." } Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 862 { : "prmfmnzkvwqidqqozaasrpczsbwakeoycnesgvzkgzoaunevvofmupuosbgoaiipazsdponuqolsrakeyrxriqjaclxgzqvxqvaofrdnfddkusdkgkpxkqpsjnqagcqnojbdocxhqsmfzbqbcgwayl...", : "yxxnwgqcoobboltlixdvmgpbftdwtgrrxkdzeafwfkdkavaisisylrwxfzqktrjmaecowcrueqahfilckklfyogpdqeqotbekrbzyxshgsklrtylpmaevzpblojpblonjefoccoumbwfsjzittdwri...", : "wiykpqbivtbmwmfsvciivvfousufkkwzgggmjfqzuoutlrcwtgyhfnavpqguciwbdubrxrpxshvqpfuxgdrppmgsdclxftysxobhvriyckfqzywfrrqsjnlwhlhurbiuuvsygalshytprueaemomjl..." 
} Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 862 { : "prwczcaveszjfdctjbpuzwnaclkzfdonsfujxpnwxvnnukyovlwrpsaxsdyxnimkyxnkczwyfciholedjxmrkysknklgmsusigkejiztdawheysfvuwsrkmvtlbbufwilrznoiphefyxkdktlrrpuq...", : "ybjtxwtnynzfghfgvioesdsvzhsdrjonkzusrqegwuvndnsmoyjqgteevwjgmwxdrhuwdlnwimfqqjuvkuajxrttfpxrmwcuryzogmbqjcqltwpkjsrnorajukxmaoyqlqyqvcqwgxpgilqjpjkssy...", : "nycpkieoinvsawnefxieygpbhcbbtcqlyuafqsbvqpczylfueosnabvxybrahpolfguzbqiycqofkaksobcqnpsaaladmxkqgdzghqcouofqnbhfhkylwdcfdvortvmabowipwyrusakavidmumrdu..." } Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 883 { : "ptvtmvztkocmzejrtifjjxejmgowtpjhnkoqisdjjxihekizchdcwohasexhbhzcysmttuhpbvxauodwrhipvdoribdvktlexuoxydjibalniufognxqvmqtbcouhlmwzuztylxpkixogtroefctkc...", : "klwtfjssdqxtxhuxahapbhdlswvfibhzqjqybkdhkfoacvpmnaguqqadukrqqeqfsplyrxrcmsoaaerfikfpddbqhuppqhoscsgjjbjgvoqeiffzpikekboludksztafrvzaldrohpsgqucjvdhydr...", : "hjqqkdlxrvxulwenwbyhqvbeayaverqupxmbtzryujbmolupcngwelkckrhtdywoftewutmptlscmedpsjzxceeafxexcqulhgzyvhpsmzbpguxsivddzhthlxhjwkeowjvoqlcyzldfjazylopkvh..." } Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 896 { : "pujvpazkwpqbvcwbbmavieetirkuqguopivopndbewvfxqgbdwsbieenpfpxeszbuylpavqngwcybdyiogrcpazvhjneceyilrpmjmlfdettnztqbnbduqcomyaigasdsciemwqlyxyowntqgzwvab...", : "wagcpyvpvdyrranorhkcwqdppdkjfuyntnkfysyfaizzliksfdwapkdkpvdksjtozbatknwkxpublmvpjqgafeyrbpotjbvnmmivgthqcjfozfpcijwgvvdrhcctyoetodromwozqlimhgbwomdzmg...", : "cshysuakljrnpvycxpjesklfcfuxthonmxdwpjqpadfiygnlgdjwisdnuklhmkohbvfpbwgtnpynubvxceylzebdlbsnhsocthcidhgbzgjwfiarmpdxmnwuqbarpojdqdlandlvsjavzkpwobczlw..." 
} Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 888 { : "pvqnhokltloqwzaibzxdpnxgmpaliujnupgtyljvgceyefazohfckeuynwtchkuioxptgcydeelgtcvvitmdtnlblankpeglqbdfdzkxomutpakjhlsrjtktwgpwltkjaeikdykwnunfiugelnwurh...", : "oywdmypgdsgnexybluuoqypbdezfninsyafdmbssqksfojpbphsudftchebbzuwitvhwbvgjvnaxdrsfisnjrlmgczavxudeukdnmxdowesgnynuzwdctwiisyvhnlshxinsnbklxydgpjptueegju...", : "kcretclrqwxorrhopgjfgzwvfhyphikcwqpawnoafpvspksqrsxdndmetnhxbstviuralyuixdaksrgnpdhuvkurxjkdemzzroshskqfsgpagqnjesrobvkxdkfibhjctngfiyuenijduplfgkxyqq..." } Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 859 { : "pxratybcogpxsrbsxdomteoygmrhhilyrwmicmjyslyqxaoqyonbyijywpzscdbhdjcigjwcerknfxstihlvzlyubmhwaavwwutapmzmfymiutpqjtvckgvmzsyhxrmunlfzxhicvgeoconaegsxfx...", : "nejdcqcsrentjsqetzpmbgqhfkwxgbtfbsazrlpoykkrpyrymgmkhnftjmxqwipseslreiyhedegmxgvftvbgmycapxbllrwstuuauidgcnggybxlksfpturblbvzhvdvackjsmpeadcvogdwxfixl...", : "ggleacbyildhzbublxydvcklpnbipnencqdvqxwwsovojxwtjxsmwgtfexkudjfdtrhvsziecakcppgpqnvwnvgxselvurbnngxsfmwzlpjmozfpmccrnfsgocxlsbatohmumywqkbtbqkbzacwave..." } Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 867 { : "pypahrogpsriuvnovionmmhiendqnleiltshxasnesriwwxidghepvjrsocylsgwjxipgucirohdjgnljgkzvpuwmvhgbslhyorxsbjggmbogpftgjorkomajziusdipxrwqmgpzlnnffsjkrnknkt...", : "pzlkqyjaahqjvafpvkvbaeomyedefbmmfxfjbfpgszsvchbbaxybnjxfgwxegxxrhealkttclusnacpjkoukibwirjbqjhrxicebwnnxdqepnudnaqbvvuefcczpbvrwtyrgvvqcmlkyrgdtqlauos...", : "kyxzolrtzxkjqgqcutnylhndcccxovmwbykiaueulibiumordnhzebjnilsmgymazkmuzakzwmqgfwcvowzfsrgsfykmxfduorfoqnhyedluorhwisykwzfpdodpgwzkvhdaiqvxziisqygtjkxnpr..." 
} Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 871 { : "pzcarffmvjngmskslnuqhvttzbleqzcqxwymbytrvpxcstswkottwtjjignkehgteapbamkcwbbykrccjcbscvwtaagnkaxkuhrcwiacmmwgvbmitgtzzpdezsitthqybdkcfymuccxvppbfwxjiyp...", : "ahbjmrkhrnnepheprqcisrcnlplaksqtpibcnqkkclimizwsfnerzrpdjwubfxvpclystrgpkhmtpnnbolmbjyqgfsixfuhjwtuwvhbbwlwyhmhcomjjsthhbzskycvcbkhvunzjkbwrpupluvuhfv...", : "yaktmmebwgzaolkaxzkbpwunxhxxajtmtdwfaonfptyvgrgotifitqisudsimpjggrqbpygrgapcghvdckmtjivuugktarcgbtnchjlrwmouhjoydvlepdqrxgooeoawyodfqiovpviwsubrdmdmkx..." } Fri Feb 22 11:39:31.240 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 822 { : "qblfhuyskqocxotnahfejrlhtlpactrxphzffynrjlcbdomcecseaybhjgzxzwkpuprxphnuksffngughkvjyqabcveowuoyqoiilwabiobjvelrhsrfjomsrtcrfilfoqifdghsuagdelgfzlxvwa...", : "rqcyfxqzreydygbmfnvxthiemvouxxmciepqhdyexywdnnbdkemojxgbonmsfjbzelippvhhiahcisobqsripggdhxeeeacizrpkxwmcrxcuwjvrjsqariahomwufzeevypeukvqxbcqnjspeundmm...", : "snhqrabibgxxfkakofzijrzqaafwvwvwsrlkejlnnllqvmljsbuyqynjfqdckgplyidcoshxvdjjewrbdiyimdkivvinswgmwnlxdyewlnmpvjjctzvanufzgfhlzwagyfxrrjrznqeothjflahjqz..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 942 { : "qggzriidtchzjpmwrrjhspfvgomdmstvzgwwmscykxqtxekjttybmcndgsatqehamigaeapxlkbwninhqnkmbqhpvitjykxkxzdqipaimfzwzifenkvacgtrlmyhwypbuovnwfstxvlukzztjwuyul...", : "jtnjfzoshwroanjbgqzbwntdadmggasfwoftnozzgvmpsvixwxbyiongwlbbuuigyrwbapguduwgucgjstdxtxlpththwwetrllayetgqjpdxdhqxslaevaexaeclpzjetqxaodgovddjurupcdpap...", : "xpfdhorgbrafiwajgppykmmpnfshthlmbtavfpqjgpsklpqeaeaggakvqsljhnlvzwenzaedvddeirwuziaiihpllqbfpubrcwwwqmgkqicrcfcvewsmtkspuhtboexpxicwoqfuntailgbtncykfr..." 
} Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 830 { : "qgtywbwdgwgohpkgwvbrgywytnrerierqfaodtcenfoddompmdnegtdghapagctkrabwvlsuvmcnppqaqjqemdxyryqrmezygcmezasueudeaomvnzexjucpeyrztsutkyfrpxviafhkuzgpvmdnly...", : "atfwxufnyxlyklaasvxgntqjrazlphcnjtbxdxiiumxcbpysgepljxqyopqlgirqgesdmpcxufifhxzconpelohprfnzymahtflfshshotgiimjansjpfptrztugavwdqkaxzcicldqqskkwtzwdin...", : "aqthflkuvvrbwxmhjkjksloqjhyvtczyowkxqucoyugxibvvhzxztuhejaytkylmfcddmxbbbbsrhzdrlzwdgoajrfzrzifgxouyyegzhaignclfsvxkfyidxyullbkkqsyiuszntiimfoymgbpfcx..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 872 { : "qjiygimwylhcwabqwimnewfsnzlvxndcdbewkeuagoklrihkqlrggnnhavbdywjcyleyjnczeprooaendjtnjnmbkgrpnrcdpfahyxwjqbtiqwexrehsqnjwvrmwqjzpwxgqsdbawjkhuyhkoxywap...", : "bdayuadzcuintucaopzrazbfgxyxspsfnpgmxegrzwsgtmaefkjqsaymguzeimweblbhhbrpsrdlamkjqnvwfhczpmmiloudpedmksycptzuehfrjfzggmtjsuecnjqcdwypflwhadddlprbuibwyj...", : "tbanyxhiarqulwhvecimlpibaqicgaiuukrgicgzrnarnwowiezuxxwudjulnfzakvkgyhomiooeztrddlblrjirskoyyftniaeakooefilmpwafaaqdpdxeahepexfknobouhdsreukvtlulyqehv..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 849 { : "qmrduzwmvhmfoataimvoirhkynxtkcwxcceqmlyzvqblzpuusxaxvprrkhyfrursnbbylfnyvnbrfpckvztpnddkvtyzbnkeepuoqmerdskqqmdnuwfkzzrcudaoywkhyvdlaojrqukqrcfnhkvnse...", : "wbuipkqgelacbsnqevqlwanekvmkmyyeghowbjuegfpsmfjoueqgszhmeybardmdgueajkagcvwkiackjyagttkdxbwdqvuznubpmtpisgalgaaiatmabpbkkcuiplrwnhewouqftjpochjnjwbnub...", : "clanicjikkszqzwqcqntnrlkdqosmccyhkkcocliargeqiwzylkfuwjhnaktheodyhafwhjkwmedgmpyadvkhjlylxnggdwtdwyhqpsxygndgaauyskixyseakyszxuwdilyatvbmzjudiqksnwhem..." 
} Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 875 { : "qpcrlxtdgbfpxydzovkmxayrptodnhvtslzortlialitrwuslpkuzcwikttcbqtyhhdjcobrtecpsfusijwrjelcguahtntiuhxewkluwibukxrcfyngnttjqzwyttcgxtjbrbuuxfsytpkoggxmla...", : "qomkejimrjnlvxkiybuxbrtfcjqblcyoeoidjvftheogoqcgjuftipsrvbrdztzvplmdqgspzaldqhlnfwtiylklatbqeheaeaiudgdyojpzfdfiqhsmutvsbmqipgkycibnogfczxknacohychelu...", : "zifjuiqnuyclvtbunlflljrjtegtjxqdykftauhpehztxnqzwxssnjzifrhalsygsiqbcmyxfysrozbjkvmkckbjnhatmkemlzyjszurlwhstdxasywfbpvddesreweviygcyxhxjmlvuadunowitc..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 833 { : "qpkyrolukviytmcfatcegazjbradmuecomzgtkgvdarhekcwkqnuioybthqcmlgchpmpfrgxuwnojcpnfdtbktsndobmcyhllijipwgxgvypblwajkfzertbdgsesgstlqxcwqhpiwskknjxxfxwvn...", : "sjbojchkbefrdlwumxewtnszarijcptkdnzdxnagfzkfzkdnoambjnpatbqrjxxgmqftqynenielvoypkdfgjugaeccsynaqqkvxawouywitcwpzwrqpxikxqriakhxfixhgwvujiodtncmdlhmfja...", : "mhyxbvtdsxxyeggoeoqrrbgjmuzutqvncbodgewvfioohjdhcujzsjlzqghyrjndssplofaugkebjzzrhgjrrbsickelgpadsrecgkqzwtfsjynxijcvrguxoibvvltwuihzioujkucltpaoylccpl..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 821 { : "qtktaxfjqpslhyxamfzkavltkpwgdftutwdpstakfktoztwickzbroepgxdypjalddmeyaecijqsnownlsgjwcbqlyildsycgdllbkvdvaxinonbmlqagpzatohlfmkdnzsubspwmotfexufdqsazn...", : "izushcywtxksijybgugmoaazcztmavedzezqjusfkwtdnmczrmijbvwgrpgdrxhnqwwkiqxwbybjhkxcwahzdgmlcserskeraghpqjoikbldbxomkxjazzabpanmusndrztagswzessluymexdjmhy...", : "jalydefdlvvcddrxjqtqyxblmaehwfttfvqdpgnoalgvwzfeuiiihbtbrufbrshojisjmcfajfoiamxmkrodocnjgyxnkvozrjbttgicljvdiufzuyxpbpcwdptxsmfriracbkfggwyffjbxrgqtvn..." 
} Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 823 { : "quhqoqknipotdxkdoshurpabsoqoieolrfxuszzlttnkiotnjzmfqvhmwlwcexqpwbiholkkuwfkjbgqbfulvdvohpekgjttjhdjtghkfbbslegnsqiurvjefyxbbzvkglwnotmdtghogygbkcuacg...", : "sfmxctfzdbluhkhskpihmmgpkargygewndxpekkmlakpasnssoipqwidfznhnymxmyeafdcmazhwytjrfeiwcvtdnmhovxmdbfbadikunvijbrdqqnsktbyouohtjckefnqkuybldrasjhzveudkku...", : "oyrvjojlflzouazpfvigqblffkevejapiilbzysttyuehplgzfddyggbotvymwfkpyvyxqaelauopqeneqvnqwacrqgipppkyrnfnatscrekpoldkyxrxdsowpfpqitbbmzqrgpvnadhsoakkiwwkc..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 848 { : "qumnjhgfqrvslillvvbopdhvzbzbfelacpgfwonpdnhojdznnmvpjodindwyyyyoletkaurnztvgskjmxiorpmkggrowqjobvntoggoadmcjdkqnogglwzxrbdljoxujdnqqiraiirekovvducrjcm...", : "nbzquvvtbxngrcdjbmqjrhptarrwcfdhezarsjkrwqfsnexkphxrectkfyvxqisgpikrixrsrqmyobiovstliapvphfbpewncdisxsfpdbdngknilqiloxicvfcbdnrmwdvfdfuvvothqzpdgpmrwr...", : "vhxtsgmxpjpyfqrzfuxfcwstypgeepnlenxsgdttrqscocqorgyavspqfnyjhkyetwqbnsgienxmfnteltafrzpicsmbqkvosblqgsjcgdffpkacfilvrpxlsgiamsikjppuuicvdhnwypupaznoow..." } Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 904 { : "qwaiaymvnbcwwssldtqtjxmrhwmzouhedweujnkjcmpcfldshkacgbrgdwsqpdhzuxojglvvgtrzjmratypdtdytmjojvbhcrcruwdxshiutymvirvojstyhaquuaawsxinmsnvuhlbmwqhwzmtegc...", : "nljtfgjfelddvncezccklqzsiofahpzskduppugzccrpfzdrwawjoauzemnoxpajdeeycnptviopplurnmrhvtskwtfrdahlummiwtkrlzmnaourxzqizptjxkswdnyddhbifrtmthnxsvtpqnjozc...", : "hzcmrrpmyydagxynclbnlucdivqeuyeihrewzjgbhmgftskrmoorohmqslbnaiymdnxpiskqtfjanxdwaketnjjyzgjaljryrioqaduozoicuhyzupgwnzkinvbasvlainmfbytybgeluoduggejmx..." 
} Fri Feb 22 11:39:31.241 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 852 { : "qwszplszgibqhepcvjktwobebkljqofdqawavbbcfmdletwqwphumzconbizyjmegytpjquumwqglrordhyymgufihngowgtlszobcwmrqmwkkabpjlvnbmdqpwifpoticbfjuwsoqqqdsxsootoyi...", : "vrvsinrazeyvcjhiokyxiosixegiveaxnbhetkwoucsmsplguplmfdtgukrmhedxwspwvdjbbxbpzkzsseigckxdoefgccwgofqjvxnnzxqwnwsevwptvejqqhbtvjiqxjnfzkhkhlevysfocmmhba...", : "tqofcmlcnxedcurvzvhkhvnktvwttggqnbutkmaimwteeadakrocohoqyycxqpwvbcggmwnnjaqwwfswitgahmgoewzgmreravatkalnkdznhiimhhipitsjfzbcrvgbapxdufqvzeslcdincsohec..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 824 { : "qxlqwhuzgyqmfxxekquwajdmebrrogxxpgdidobpthkevevqahjseqidvgnhjpbtapbpwhllwauwirawmpoztrpvqleumeghrqrowvjthmqhdwgdcjoasvwmcrvmyxhyggtmelggkjhkycictuyafg...", : "xpxsljyksukwzlavqghtonzcvualrapnnhgpgvpkxddsqxvbaxkyfhlicqixfdscpzsnurvrzmljubseshiiqvllpklwqtuhpzoetzjechkfpnbefxsvefwjxjijhbwqbbxpdvfujqyaqboabvaprp...", : "ebriwepqqgpghwgsqhfisfobxijigpqiovuoopfyrgoucyosgdimjodmljmlpgctiygsarxbugmyfnqegkmgclcnegtccvaymogzunavysfeynhrgqzaueocdjllxvmaiobbdqgiivoljbtycexvbl..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 895 { : "qyoenoxqghfiinvctwfgsjhndeczavsnhsvscfjafdmhzofgnsfejvbhigsybwdqrxxeyjevagbflkmuktkfchxtvjxpxvstbcoqmadiucdxhxejebzmnbvztcqdxipyuwkiquleufpdtofgswadvg...", : "sgzoflsnwswbprzmtclfetjadadlmpcgfozczptvfeqeyobdvxxlxtatuasvrummzddlhebqrmgstcltiuvdwubnukxbtuzlvvmihkrkxpnsjnjzmonkhsozsojdqckmjmohjzsujmtblmbsibveva...", : "rpkzmfyhtmepeythhmbnnlizlukmwjokqhhrrikxzefmifdgfiptutrjpqfknqonteirnqwtihzdzteomnvyluckkjinxmyblbbntsgspauiskxdaksrlnosqgzaiqmdexvwsvtfouxtptulxpbnln..." 
} Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 846 { : "rfacmfwfqqfsuhqxorysesnuvtnznyajrnewtuwqftbwmfefsakuezxvjkkejdvglygdrqpprcloulrzeuvtdxcltsohtqzngiyomoexwncfwtpoeronyypleztnxhudddfhpsljgryszdelkbhknv...", : "qhvqkyytgjmnwuraiabwkeqfeecpiopanipxekgbswowspmnsmvvvfhcwffciswotteyugfduesbxwuyrzvjbmfivnbbnrmpnxsisphmecheingrgojqdlxrbeqqugngqptwpiaghxdhchmncsgyil...", : "keuztejxufvaklegdwjiaepxzcimtdunuafynkwvblzmkgroamoumimgndemzprbflrtsxeipytvyxugrmyqtinyvkovqpnafilcmaikhvshyvipgyghilzkawduodzpodvauayultqggmusgojiqy..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 946 { : "rghvfqwrnvspgguyyzsvrmqziegaemmlwvkxwbnmoumntbaqvldmxvnqczgvheofhwueyymneclemkztonekytufkqtqifvxtvxotdnepvrobfbaqqilrycrtvkxytalniktqggcrfpoeahscklhwt...", : "guhnwjgwbxsxbiqcrgilbtrkrfzhheenannxjobiiptrkgagmsljpaaeszhidtjoumjrjxzhaxzmvkkialalvqtjrdfwjajjmnognwothxewzsexnlmcjwxwnkuzgkovpyiymtilzboivcsdxguvby...", : "cnpiifdxcoefmdvhdxjonxkdcgejobalhgtyndkmbgtlhdfytmcbxiqxxjclnmouyovmdfvdhcgnjciamyeydxxzjbedzeaywshwkksvvhqugkymohxsracvouwzclhqtmjhaqoltrsyviyyeckhlf..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 837 { : "rlfrfmvkvissjrgssxmjtamfqrcksqatromoqmvpvpffmchfgsxuswxwijjfkouhqeobdefbqbmuqnbrpubmlqseuxtakcorhwzlchqtbsryaiouehlasficygvbsltbudglpxeublwsmiliraolwu...", : "qksirkkrkflsdamwqpooldsyuvhdhyqykrxpoktivopjgnbwxhkzdxybqnpbadrbnnnulzzsvjtwpfedxemrtzrrcopxygueufxndvyqfctbtpffaaaolhmsylqjmmnkfvgxefxbwfmkbpyrxfgaed...", : "swlxscvjmzcwbprvchosmoqhrujixojnnhuqrbdxpnhgcjvllsmyzozssqmwsahrsbqxqcoerhnrakxqcnnoykjkrdllafaznszojagfugbyrtpjoswbdzwxibgaitqksritqwcdkbnhbnkoxaennn..." 
} Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 898 { : "rmagyusvvaubpbmydzyjrzhsnnymekiivnzopkqyvezgmrbkpbmlzbckallwctobwasobncrfapydtpcgrjnvirjicfjskropnsqzgumxwnwfnzdwskerwuyxsyhwedjwhgqnuqsusfavigihrfjja...", : "rkerlxxazxoimazpywyoisyhepytsuzbthngcufwwtqhsqmcfhqzvmqxclmenkhkyktxvtawaqabbpnmpzivxbrlmcfukfsmbeehuirmjemfgpmaznbgmvfqxuifggtdcfvbqwzocfpilxcflqwjzy...", : "ixnpsqmbsbylmlzevduqvpclsyjlcncouefcvcjwxlbmkplbuyffbhpcoygcgasxcnyapqmtgtxbwkeszbihqylbjhckgpzfgqokxcbxhticqorbidpfbahozcyzxdlwwbhxwmdqdylvtamyemyvxc..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 825 { : "rmwbwyrnggkpgetpigpwehrkprneuadigikkahcdlfmycvibngviqjlvcnbfpvhjewyqjfuuvsxpoqyrcqihqyquntrhlncksavmrotlsstiiugqdzocphdoljeddepiuduxzawobjnvhguslhqecn...", : "gpdgoocrmjpcmehewkcbjhmodegyfzztbgxeqjmlwjdzjseuirronnixwmqewhhmjguazhvzedvabiqylpkjbytabvlgdcrarcrclfkplvrxmjmfrufjrhtkhzcyffcookjtpqybxunigzxopbshsy...", : "bavoxjkkujvuxpgvqkrwybhaftfhvneavgkpnnqxfdydvsqfsbntfaucyjklbhcivqjvjlagvqbjwwdjvzrthduhajzvjlqaphvrvuamoyamyiftwrjapgqdbskpjxqmjefekluqwuxxipdqimfjqg..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 829 { : "rnjgwfhfybvxhiylkfqavdwflhsnedwcjrlmnqowmxgojmhpvesdsnhfwbwrsncezszpszvxbmtdkxgbntsivklxsjglggegudursgznuejkbkhnhauqvbkferyadzggbqlwocdlxqcdnlifahzhxnmqotsqk", : "xqhclgsywxspsbogqwraegyokcfmjnqjaiyqcxhxaoqfvuqttnizcrirezqckwcyysrpmwbtvlfsyaroqwddgncafdixftmeepenpegdwgapjjiboziqcuvngxzochfgsjgfhswgzzunglthckhvxl...", : "znvagnypnjzposcjvvohxsxuypjmfiklaxnrzinsqutcucuijjngcnkbyvzlqhwuuodyxkrbrvlhijqaynmhmnkmfxoabwprhvfntbygifenzffosjbetmgmctwdsrjpvjigdpjlrgikngqtfhywkf..." 
} Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 853 { : "rnlrikxbxmguzdtusfvpspsaglbxvwrpljruqbdxomfkjvfufqkhuhtbvwbwwjghkexghkssobjxqgfchefatpqfqiavsxxobdnuylkncuxryzhtliyrymfkfdyxrppxxcokshgcgtsnuubsefamed...", : "hxzuopxfddipspjnckdxjihjvidujkruitemdzmkdugkccbxyklfahpbrawfdpistyipiyvoztxlxyvdmjkmnyjzlisevriprnaczzhkdfytefxiimjsxtkpocmfbewxavhyelfdtrjmevmiqedzhz...", : "hzdnznwrnoazhgaruldnvyurxwyjnurdmuwowihzjsxuxckcrnosltpizbimbztrgsnscdpamsexcpcgqmydssmnaqsfdnrctgybimvqbkkqkemppgfeumfdhlatcicolubhzhjvmmcycjptoqftkg..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 853 { : "rnybqvfqhpkcriehbrvfgnxeeilffevwosjumlaohflhhtjknvylcatdqreutubzbuzeayecatyzmorfcculaojvxojlfcdpemossfbfhpgilutsfgzsmwqaqbnvhntxqsjxgbyjrkbnmhikzmlvjq...", : "pouympunjrnmjnsllqtqlrfcejeplzdjwzekqcpfamhzukxpikawjuaqrjdeeicowknygvsljjxulmdwxvndzvnnvgqrtktniwyblhgwxkylapujctxsiybyrmcvffejhcoweeuffnvaybreyaumcm...", : "shslgsdaehxikeciwwcklhaopqtgolbjhrnpzrffbiiaittnaonhiddnysswccnavrqukhmxmcrbwmztnoogycygbmtrtpdfgzsqatjzspymitbgzwjoejgylswwwtyatzjadeybmkgpyuuwfzeqvc..." } Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 883 { : "rovaswmstkxfojqpzppodqoeqaluilefqaisodahbdoxjcmqfhfchoptboaljbhyxxguqcapqunvmbksguhturyqrqmmeyimevsatdxqvjsheioxuaihdocujcnwguahebjheuicziozeehzmoizjt...", : "wazzkitqxxwgfmyoqnuwzgrlvhdrglbicdoiadlkoottcdqtnuyhvqgsyrqqwibqflvuqbxzvohvbmqreauqsndedbagdttjifllxqelptixsrntxmxlbkacuwjycfxmuvzjeeuhtfjklngqasshzk...", : "mceasjordrmaogywqcoslbbvqswzkppgfzffijallsbekdugavuadmytjqvusajahfnhmqxuuicnkqkrgxixerwpeihtcqtzfyndnfdulxuecsakfkyryjuphqgxwrlyhjijcqxrqrzdkywytpxgjv..." 
} Fri Feb 22 11:39:31.242 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 908 { : "rpfvcxmexwgdfuopkqmvzuhapmcxwyhazrnxlsvyioubdodxmjradhzioecqrovbkusssyellecicdifchfhxwslgeijktgwhlcldyjbokhmlxfedythrqlvvpsoeddtwhpdfrjmrxbtzuvqdwljdf...", : "umkbfounyefgrxyqowzxwqdbnuzuuduknhxvezyrevuydvznsgwrkiisyqapqoazwiiniufkflowzthwrrgqvbhfmrymnfvphkahwvzgjjwddjfnlvirmdbokoludtobezaqlwupzpvezjffwxzxhz...", : "blqzhkdswqethajaimgpghwqilxqvwgpupgskbboegyqhueoyohcfmsvvblvlqkhzzmfcjjhznaerzvwqwgrmxbyagictkspxudrsahbifhqqaqkbltgstvpwbsxoslzpvrtcrmugqiosobmwvfdxg..." } Fri Feb 22 11:39:31.243 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 836 { : "rpwwarczulkwlmyqmgtbgxlfbsfkxxgxksnnryncricxodborlecuobjgkzmlgbtkkkguceanzdxpmaxbzzydepgtyrflsakwubjvtcidysvcowvasszuxwleezewglkimcidpwlwicumjegiyavwv...", : "zihelullsqkvufanihgvmrqdrtfnfqgomdbulpuygvanhtqsgzwoktfdgyvitcgohlgddycqhunsxcrcgzeiajmewgsxqpowngreyrzyhltdhfrjcqxibntuhkhwluwpnlxkxxkndjmeworgshrkzn...", : "pshtujkabviuoilwniflixtzkhivdovsbwokdwqxejqjaejdvksgihyrkcifqxmcrwujrvlglcyuflbngcduzmdmrgxhghcdcsqemkrapowhtntjjyandwszkecqtuaewnjdbbhggdupocudqnknxc..." } Fri Feb 22 11:39:31.243 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 859 { : "rrfmqmuueqlijazhicvhvaysqnizpppfypqhvomdndaxeyeoopsgikbocwlrsllmzmubtlxxwygzesnsrxlyeiqwlkwgmyzaqjuaedehluawniolefjorppngibnbkftbjbiwibvaljtkkstbpycco...", : "xtjhutwhhbijutcazwfdedquldcaokvjkkjcwcjfqmekyxysmimuaxsnerxazbvnduqkogoyateqkxocuxveajvglwktnjrbzvfqmkkgwimxepbsogofjzybyiockfxjkvoillunfzycnultgjiane...", : "mfkffcijnctuurychypxyqcsdsnpcypbudotisuelfmqvznbdhdnyvunnnpwcdhlcmequjakvkjlfylvysolabvkjwshsmhtimxsajurnisierxtgynwburcshqvctaxtjilgjhuqksysordxmqmnx..." 
} Fri Feb 22 11:39:31.243 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 856 { : "rwjqeqjsomsuqinsqhnvgseubxhgvykcuvqfsmmgaznstvswfurpptgnhmpfusqynzskiddtiyextkimzhsbmtcabcgunfvlcxmhtfjbrgesdiwzstnoxlkfpnidmitswxwnlrqeinbcfhgcnxaerz...", : "zxadstnttnejsmsgpjwddgxqtolcvstqfasakscsecnovggwpzicxeutlgqhjsmewzfmcvxfgbffbwegfyhhfhhwhwochgpruhcghawzgtyveikiufiuzrdvpsjvhbjqahzqilkyojkmtcjczpadup...", : "uljvvhsxssjvhsyctpyuwuwvyobfnlqyworhatdtlehgzypjvfxxqbthrtmaavkjdojmcbfkrrrmwdzquumhfkjhqjuhtexplzsqoyfhbdatnhyhwecxzqxuhdfowyjbhdjlxjhyudetrngqvdnfpw..." }
Fri Feb 22 11:39:31.249 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 897 { : "winpfmhlllvxjctezjqdcheytyzrnmcctioizrtyhdwdmahvlyglzfihcwmnmnffhavnhifgcahquskemxlwhutwcjicszugwrdifcfhmnxtmnpqxrurftohippecrgypvvacrjnzlawdogffahdfp...", : "kwmlveuefkfnvktswbrnjbfnclcbsngfxdnrmcghqovanztvfqzkwudkxkvxegmsorcnmovbuevansvwgyhzaqqskmbiuinunqtzavltlltiwufqycuoolbpphlwsbptsxipihumxjldtpzxztbugj...", : "ynjoqqvwdwqqpylwxzdjxrzfpwdinyijrvhnaduepuxiikpnfyfagornmxkmykiyqzpkeficpgsvovlzzpezqcmcfgmupojdfxtglpyojxytkaxgzzdhnvjrwgoifnhfpxtclvcdxtbvbpvxtotaqf...
} Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 841 { : "wixyjbxghmbdhtlfcfipurcqwbyfcxvhadtpdwsksdfgfjhxkbcntmtpnvcoonbdwwnwmnyewjmcmcrtwqjjupukaixtwqrjkecijgehtejovwqdccbzgjmwjliisyyyvolzopiqljkxvarrxjbshq...", : "tawkstmwqtsyncvwergcwepvkxfpckvqrfsfpsftqpsuuckntdojvkcywfbympsoqkqjogldrceumsuftkfgjefnykhdrphieuudsrlojyhtcmpxuhdygztusdhhgqiburwnjrouhttemhqvdnuglt...", : "hiprerhnryicejhvtltxvwxqjqnykwlvgisweqwzaezslvuwiabzyskonksgtsgwtvtbklzcfwvxoasgzeggvsstdmoffglcjhbbtwxirpywqrnssiyxolfvpvytqkbvoclkoddnrxznfbufmxgxpu..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 904 { : "wlfpcborpasfahqgcpzpukzylzwaikbwrooibsfyfxcfknkjreuvfubiukeuagmhzqhqehsmdcnkyslawvizigsadwboxdpjcqmlliryondhlzlvrkuyofkhxxjgwmlytkjnwanhgimhwywsoxnknp...", : "tbrkhehhuxpnxlamgtqysiotnbdznzssxskwmaensnqjtnljxyjfkijkwcbjtbmavkmgdcjjhgxwvhwnjihtwrasvgerlyehglqnmyrpwrbryaougnprdzfjduxipiquvribhfwvrwgxixevfrboqx...", : "lmjklrliiwxmryhrpnkffvfrwdhtgjmcfxfdcbxdcphfxvyleuqimqfcvrnbtklorakseergzqzudwxhwfyidcjykqtvsvmzujgroxbuxlzsehfsiztgroysxprpevbrbsismxnpmwsnecurcoajwc..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 945 { : "wltwncouxfydphvunxycqqrbojikjxwktnixgaydjxzyohmpymgqdknxvqcufgqiqbkvwtfrjhlmtjcmxmbsdrudqpowmuvtuwqjfkodxxgxyfkksxamfolqmjzrhqhhrzugqzopamdepikrwxseov...", : "oblbxbaykbbwfmvijqbhkowasawhbjszunasyffghnkvzuhkgxqrlojgbsseoaaegiiintdmbchgaghogzkzpyrtecxbhseqncwmorgkbjnstjnxaijntikvyeexdapnjdaguilurvwilydrofkhks...", : "ihwjtaqeqvkkgsapenfxjvzpumgxwfacbsluztcpbdxxcptzdrttxxnkitdjzunwznmbpurqxyptanzputlgdcutbgarlhnzyadcmtqhgprypanpfxjlynumsoslffavealakfyhoxmvbourtsslyh..." 
} Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 870 { : "wnhasxkspqcplukqcwxkthokfuojklugixemtjkolkxnexteauumlqzsvmjeajqqlwdmzvdhbxxqdyvvqycctinyfvwvhegletardnlxgzzoudmyhknrofrnecgcjbnzloibgwkiudsharsulyjufd...", : "tqklckwttyszjsriuelvshlnlfiprpolmpouzptwyghfhueqywhztdfrrajylocdspwlsvjkunvhirntbyamsnadkszxoiurmizqbywfzhnohzrmbodlaopjrttcxgtxtlfkcjlixbfdetuuvwttmw...", : "lnomzksncpckevgnbhlgqslvlfvusdusnslpizjgqtphtwfswyosfjutyditsxcxohgmkpxczrslcwldirgvgfpcrmeikkhdewkcxbjbarsmaqutebmlkrehbyrkzoykgafeipiftuunajnliobbyr..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 997 { : "wnksmvsitzwfhorgpvltcfnjpaphdmcfpijgqskmkhsucwoosofogarkjnavyepatlmazsrbastoupaxftdguubefdmtafqlnfirfyinayzadrfbofkwkzxjknclqifxrzsuydwujuqpwrxbqylkro...", : "zhdeshiivvmpyukhnogwygceyrikubedctwkuduefvzlhwvbtbkrpofznfwdmfyjokjwxwdipgorhyaddgpixioikwixypbcdnuquuzclkifmiwliisfdpapgjaiyqjuhdrkhnqsrksjtncxogwfsn...", : "zrwbaphqebzizeprlygjcejpojgeeeslnocbeiswnraoxtbojhfuhebekyciphgmohyhpwzavkyynbwxlwontjgmwmgmwicpqjrjznjdlmdllzucifnqxwuwssorvpdwnmlvmkmupgvcxfvysdfesq..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 820 { : "wqqcwztnewwpgwipzosnymdntrcytcsqnuszksulfaabfrddqromesmqpwnohdzepcbpmmwstqozpvtpfqozdvvouivlbwgzfdckbavzjhwoiwfgovvqmbjvfbplbyntadogidhsijpkkooeflvouv...", : "nxaxybmqtynjqmuwtseyhwnidhgeaunihlbgztauzzdplmcaqhjvfjajwjwfcrhtdizjfpvegawhhiubzkgbprkgytvnaaqgazootusdfkaxfxofrzmugfbyjuuscnhhtuiweynfdkrlrqcclyiylu...", : "igoyiwfotbfbzlttxeidrpeduehgviaevgfzhsjmdmytojobvuagiyvpvrhqsirteyvnjukwnlhvmafadhlvucjwelkpbebyjuusdfnlegurzkkoetecmrunvcvamyfywpszyemusugvyomckvtoui..." 
} Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 937 { : "wqvnhyeasbvcufpdvickyxqahoycglgjbwhfbggsjxqmngzblrbnaszuqzbazjmyutfqcywthiuurgsycnenwggsntfyqaeaauitrnmgtouoghjddqkevbxjcaoprcgbjfubeomgqqktgsldkqzsix...", : "gzwugyjrqzklvkcbifersvdkiupjuipttrrtzjvrszkotxgpqsqahcdncrarrdydndoiuuchzkppgbzvqfblbimsfcaogusujlrsineopuzvukosqlabzdlrzqnstuuwewdxwplwxtkkhjnsahzzqy...", : "wvsdciaweqjblulucfkfgjnwksggjuecxjxgoohdfninpxdnajwievnkulmmtqcktcwqjzkejclybawcbnhwmckkgcunewjpfbimtrvppctsbihrzztddhvkiickzefjevudisfvxwbkfchirqdtmn..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 839 { : "wrjfgqxuwzzujxfwtwkoprqctsurlpeioefldybvjbcxmfahyzrguicdbwlekejilnjzlkqjywwobvawdgvnwwhjoimrbsbnqgcbnrfwzfpeqzppxmwpitfamqkcnushbiebvdimqgagxqfdktjxuk...", : "glvxojwoqmlfwbpvshqxsbyvaztchaqriuenbvpapjefvvtmfsmpirybtfdyyetctdtkxdgpksabztjyzdwqfhnwgnnnkhrawrpevtvpftxqfhrsurdaffttssopgbgovpicurpncueptlzkfmfjnc...", : "wjbjdbdxidkxqcbrzletxrmxljyaqdvcfhwseupoqqqwhtfvoiuhyokqaxltqwubvgogjqcjwcizkqktnozkzlezpoezyrbyqwozeqbzolsxzltfqfpdhjfobbgyrsiykmadmvzydgtrkvluvudynl..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 838 { : "wrjxoeuzowpxpxlnmcodkizgfwsxogxfoeeankjogtdzgedmdgowiwsqrmjdizttpapdrhcxomsavecuivittyyvdegqadfdtdbjyvshmergkithcvzwsjwymuaeiqtkjshukrtpshvxfzpkesaljv...", : "scrdzskpilguwcivtgcsaltvagnttonqnglkrcvoiqaymvunpnlzyjivwdkijhkdyqpeespzbgbvlyfnrvftyxmwauexlplmajiitgbijhmrnmqaewijdctocjdqginogtctlccmmysqmavoyluasqn", : "zzvpcwgpqaqylejpclipcvgackyststmeyzpffzdosvjwajfkbgkbscsjsghidhtoqmktzudzguktscebglrazjdwmnoojovpmnnxjkvhhwccamtemfvsspggvpbjktcbcljvdkuiopnygbmkkbgrm..." 
} Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 872 { : "wrxzvtlbvyrevpvmztovxdfbkykmmxypnooniovhvydbpdffejaivxadlyahnntfjbmpdlbjfvfdlzprfnutyjlqwrnlinnejdcyobxundnnzoygxypicngjquheritqeejkcvfuyphkeffwtpmllh...", : "euirrtgttqefoomdrpjlihdczrgmflyyxtsnnscgwhxdpdsuhknjkgemiohokwjthiupveskgoxbcbfueigkdcggfolblrgfvrhydnlgnisqsrbavmtoyyyayvivijohexxhtmfyofwauszncvefxe...", : "qpixymitlybsgmlxjkjuvfpkkvurysczbuafajrgpijhrkhimvyfchyqenbgxhehefxnzhmrwjtmkhvpdrnrwaroxxqsgxfrrghtztfvtudojkyemtilpzxzeqwctlhdhcknuoqozfjhdgznytqilf..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 863 { : "wsbqrqeeervhttrqrmizlupdkntcjfyybqkrrfaqdvhvofvrxlhgqskihvoggprqlaheqptglgxdyjpdtprfcvvbouqwoylhyauverdmziyokqsktbsoxvoofiwquykltwggvyodyjyfiztfyeutdy...", : "xqfbvrmranxpzgjzoqbcipcqvajxjbskwgndwjzztmsubfrnfixnjfweupbomwzplcfbmkxjcyhaarrxdjbhemlytoscdizzdfnzypdsqecaaxicqxzlmbplksltoeaoxuzyokjehnwkeyhejefzbd...", : "cyenponykkwbtturltsbhymxgohegwbtgjwtxsyeooohapmpxaeoogqfqrxuyvvhjgyjgqifrhdziqtqspddtvmhohtsqrsuktcllcldvnnklrrluapxhlzdndqwiugjhsplryjxbwjfqulobkgoga..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 837 { : "wsfzefjfmitezioxhcxnlpuvvhlbguwwyrjvadquqbscotrzyxslxivuprztiesdvypcvnprqrdhgpdxgfjrkptjgzckuiajxmoysbbvxsbfmzmyksdpntrrmbzcfpiheyczttninivqvidefaqcxi...", : "ofjeqmwwwqscuwfkmkxszyzttnmwlhmyitslulashqveoxbzscwgqpnqmitsovuqqduauhorxikesxsevepuqnaiciifookcecphzkwaudvfhsgwahmamvvfnijgdpywojwezpkeuizbrrtfgfmiza...", : "hdklyzmghkyzzjqqujdipmglohtgabdoagemqvktecnysjseommxgiukxicuoglcpxkgupxlcvbthhpnmypgujcxieupogskxztamlglebcevgivopqmbjuqlsivkiwpbfvrjsjdrdcrcqniydrgsl..." 
} Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 888 { : "wtmaxdaefxwibxngjawutlnwmqngssdcprukpqneiqxisrzwybydbmqwsvykmfjvyjepraclukbgebqjgegfvwpcswdhtxbmoqvonufknskrtpqcfcluhvcobpfugodlkfaxdxsqrwarqdsmwnfgjz...", : "yzlnbuldtetlneyqsnomgticlqzmnjnikykzeeuyhqcsxiavvyekfpwjalxjidvtagnuglbfrifcazhsiaofrehdifojmkoqgyqfgwqrhxjppomdfxhewolxpjnxgwlzngcqmnxrgjlendikdcnrbt...", : "qvdhymnxoqeyukiodjoexqxrsaykxrnufcwunaiqhwpaqegpacgnjilaaqfbdyxntacioknzkaarzepmavhhifqgbefqjaokhosphqfhceykrmlienmoxghzohwrsiteyuleqkwlroqrewessknksy..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 868 { : "wvlvyuiqfsjdmeqegsumkamsdxonbjkphppuilwyuukydhqousxacweovzqpxrrbywnvvffkogqdntourmsdriexhgtglfmtffhcnxxvpmrxuaxmgccvyimdxduzwgfmldnmyqmcsxqgibbmrnowbi...", : "xgnscuodrovseyobryilghhzcgaanrkcocvchtdbzeefpzaqghhzxiugweknsxaeopxcalrxlteonxkdkbooxkxyfmauvpekoytayngoslpxorkspfkznpuhdnidoqbvwtvhyjzogupqoauecymggt...", : "irdjrlnojhqymlagwcxxsbmjcwibfqehsshpepdgodevcotcqwsxbwvbtsrnjtafkeditmqzzatluesmmprmquunjfyxjvmyddvxjubhaxzeluztbvvwdvawrfcciapzoockisptwnggjionzarbnk..." } Fri Feb 22 11:39:31.250 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 824 { : "wzwpfkzmbvkaamwrvgrlfmrnjriruolvnsmiaofimucslhsgmevvqpeobggzwrybskkiprcmvyvkauqiavrnguasylruorzajtblhzwinzysyfgxeujmxzxltigjczdhxffzlbuinmgpmadbfjiine...", : "ivwmhumaqfcgrtvrcecaqnfqewsmlkfkedpkkcnselwcicivlxlrvjmuldogqkjqvwpaxltgdqlfvoqlrhdvalskzfziugypgnugfhwctjxrhkxbypdlqqivpxohjfkfufedtwbknqjsqvwxsqmrxt...", : "ihnhbqslnskgighhdvbgwctsilwpdktnhapyizkyywjhvnmgtgbkpmbkcjcoizhkubqrrowuinwzaurrdxfrquxbmrhncplpnbtceksnabfgbpyjapqexypmkldkyhmqiaatriyfmcmbujsnhjgyaq..." 
} Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 943 { : "xeawpqmpdshlactgexcmwhshoiarfhwlexdilemvrwedmmdlbihgitfbpvbxdraljxgggdcwwjewfmbhdjicmdskitukfnqtntgotrbmzxeqbtbauuxzbjixwnhvwxfyghmjtqyxzqptbuqmbpbaci...", : "unwxnzyxjervksftwutoitersrckiylgmyxvgfppljfhkpecqseeyzzfccmhdcvesflohashznclgsdnvycgrmepdblfigoddmhsjshuwsicbhrxgkknabqjgxnrpqmishiexranhmyhznysdkazko...", : "slovoyndwyxczoxhzyyesfcfutkcjqcixajnnajjoabrnhznyihstwtttqaheyyaggqreuearmfaaubgtdqklqtqfomurahewtwrynxhvlfxhfeokqvdseajesjiuvsonndyjhmbighdjliynncxwz..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 921 { : "xfkybznlxhnbfqldyyedtsfwwclbcmcwlupbxzoxihhmbozjaaoorwabqsrxsccfsvyiqtmxpnlrxtjuzukniuajfiualeknngiajczixtytcejlfqpuejruzqgjmiylywpfxchhupuotecplszphy...", : "wiyhdybdiioxcqiqqgslkpsmtxdppjmrhvdxctnpggxjegfgoffyczplpmmyaivdnjdishxxmgkthyfhrluspowzjecgvwyyriyyownikepidvwfjwznmydmfxnqfwgfmiuyfkijxabbkqltoqrbih...", : "bhakdczjkkfmtqnteydwkbzlmeirbfxpngtupbzrvfrcfktqndqvzzjidsftxatnbeughgkzuvwfnyeicmxxcvtqiwaqhxhiyrueabqjuvslznsxmlqqkmkienelaculuvoelvsoricoylyekutspv..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 832 { : "xggztpeeayozhgozcbgeakybwpgkfwjgawqcgfjrdmwfmflzqojqsozgijkrssdzfgnogonxwgelqcymtjtpirwoogdjjnxwefmbpruywmkpxizsxdoafmeelyoekxoobfkwexhyjhzdprnxbekkuu...", : "anvtxvehmokhjvfbliwtteueonmxurlhbbyhfjhndutzmmngyftcfwotppniuwovzoivjrkfmzjvhuqcpirewcmxjgxplbmvsrzfgsfmsftltoqvpygcmdqxupgghudirdgbrbgsmoohwdkfpgizyw...", : "cqqralxbajvbqwsuskcfdaqtcvwwnxzyrirumyxecdmjdibgwazswhvfswtttjccyimbnfbbpgoukupaqbeivvegswtugsrvgttmkydgvnshanmiasassesqrgpakrpgjyihfsqvmcsxltkektwoaa..." 
} Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 864 { : "xgskhamzusanuuhmatnqtsxyrenzntrnldrgpucqmbevcimisxiuaptpdacvmxzivdvgxjvrizkkvgwnayqnxjronogznazsqxavikqyrppqxbhfahfzxdecehgbirruiqxvrmqevsliugvlpzmpxb...", : "ornrnztytrvkjteyojkflevbjbradzlplcevzbcbemjlmvhdetaryhqjclzmzjlvuvdhikjrtzyrznrnyjdlusnvbqcehzevdwjybewuoilshyyugdiqcggmrccknbevtbbetsunihbrxioodrmyyt...", : "eqqynbbbwwadevljxfzgcqpwpgttqateworczqzimjhhnletzsgukprgppbsaghrbvqvzenrhbujijgauitoojteevemhthmlhmhdaugblurgzyxaaswugctsfdbflsrvhxynpcctrhxjwwzmogkas..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 829 { : "xhytmiublvodgwplxtsgcejmdoywyvlmcdgvlxojkdewzarmynkqmptaboojjmjrwlvoakdthxgkkaeoamdlocjhkzejqpifempphjpdypxcpyipsbzfydexhxnovpxckuronzsuuvmaqnxugtgnjq...", : "zmrfhubsabwlcprtqlbeyjmyvhuhlpdmfpygnnlidqthmhklpvfjwzlcflpchpcdxbvsfoynnioljrgtjwmjhjibpucnogcnkprmbglbbhwgxyceeylrogxwhsowmrocvehnxtpimxtckbvxzsdudf...", : "kqgooiiummtbjahxlcrdxrcdqcnuekavdzojcxpnvysetmfdtcdijmdiyaxajblultmhbecmvabzjkqhcbrdlmtfxpmxztzwdinpqvzgcvtienxcpnoulfmhxnntnszbsmunhjhfytkdfkowckprzd..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 967 { : "xintevtmhzcdatfhwijzmwwnxuulmfrpurbtjsuopovfrpfsyeukgnjcfvgrmkjueuxjlinzftzyugjgjraanpteidvcytrnnysnukiqqdhxdpkcmnoslaxefhnrvalebgvyecrlhhbizbfsolnbjg...", : "noigycijseuamrnzwyiyrjuiiiycynxbntonlepxnrvijnsiwqrbmqsdwiymglsxkhqqyosluniqqtkwiibfboeszitiioparfqnpsdytokfjkouwdmwfnihksxamgqfawldyvsustakhemsgorlkn...", : "efcwjsjdqrfpcvguiwsxearrktuajcvpxeqzzcpjqymqwjxakvouaovsqfbdfduqoaxynhioozkupceuyiqxbgtcdzayjavqgqjvzlxpydiecnrwryyhqszkplaajbgshhnnyobukznytugqkmhxpe..." 
} Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 833 { : "xlljxstipodyixdrylrwgjokpfrjoprhrjhozfrjdvdmvgsxehnzidyjhvdpcmxdogzfladzkdcnldmuodaqznjuqvdiplvtvpcbganhegzibppteufhrlnaqrqxsykjcjgfeducnjtjfpwwfjhpot...", : "vsvzdavsiobvohgixpizgdkkvbcwzjjxvuofwplmmsmymrukguazdwssxaoyyepvgbgpubeodiikzqgjddtsddpiilcapypxmzwwwqrtutlxasujvqsowqsivudwwgcaqgywglpoppycsjtagrrucw...", : "ofnlmqiwfbmbmaxyhofcpmbxkqvwokawcdvlvyrqbvglxgfyucpetjmpoorewwgnthjgmpljrndnjdapqloiufbtmqpezzidjqyeybdefgxtprunarpyklveerpwifqroltqyqirolusmsvwwgrmke..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 938 { : "xmgkmmblrvmsfksjfnwxnzrqjaqhmevddbeghfacedmywsopkljazddqyxrsxzdnnzcqwsnompgmstmjmjipdgieomonwuxxoucbwvcfueyuehbjjopwusncmauhnjuzpkwjjooilliqafzfywviiz...", : "cxuzviosobdtqplzfsezkmbtwpkbndjbkozlhjwzvhfdvwkdcyklopgfbyicxopfdgtbnxizuwmdrmlyfxymftiogtrevehqeaunrndsdkjckjsbokmxlbjstvghywxjtkwuuuabtyhrckawpswufn...", : "kyqabssvjwhthfntgwjupxymlgdstyhxctrnfnoltbefuvbgweuhcdbqfoyfqtcccdzthpehjnuzcbfurlrpljrdbtmmpxvyxxhxsvhyjkgtaqsucaampulumgnencvpubvgzulmnfycnbibgqhsuo..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 941 { : "xmxzrkzmbqkxlwdambpakojxqyrfnpsmglfogarkwzkfxozpkpjuohspgvzulcrxvwbdifigegwcugcvmrdkksvchoeiqpuuumqjxseowprvedsuasersbruhnqsnfqnrnizthamftuyyhpeurxngo...", : "ixyksdonsbaioijurteklqcntkfxegthasiftywkvnpexswqdyeplnivcznnrqzfyjohlexwlxlnccetkcnfpptcchxgdsdwjwbdtptesjgobvjklhwukmifeaqsicrtxjnhrngbjdvjntifpszrfh...", : "wacppvlnxjrpzlahwwnjqfvkvvyhbzrcjvrffukkhjqopkhrkasgbnrafzfrpbxmeajfecsikiubbzcamtkkjwhzaftbhrmremumlmlzxpxxmaaujlikparnvpaovglgxxuiatmycebocqpnvcnkxz..." 
} Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 863 { : "xqhcrjqyydaxlqyatcwergpstbeymilwuklgeavtqfmxrrycvkbdibzxxlgbzfceaaiiupijasrbwmjhujgkvziyumsgggcfojekxewnwefeefigwznftawgzlmvybpxaqtudshhqnjduyzedpgtuc...", : "dhfqmoawajrxayqpsylsxecizeqpdwqmuutzexgvwvgoxkdhcgreojuusjtbgnctpfnivrgucvdydiowdhociucwxtfujnjjnipcrasynpkkpukxhqbksefejzpecbdtomuwkeljzczmryouzkxlpi...", : "vuqdpljofosrzemhjvptqosnmwmdfdobozlixdjharkdwaqxrzildqxxzibndtuawqhqwymsrwwfmqqewyyzfypzneuxlldaysqgssdyxsjdlzlqusclldcmznmysjwtkficpznjswjzfntewhqson..." } Fri Feb 22 11:39:31.251 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 832 { : "xqimajxlahocohbpleueigdjihburnvuwgwbktgwuhjkwjkgzsfrinczrmmmyeexypfvrpeckeghynhfpwjmsevoqnwskexfajooanrwamcatztnorqyrxdhfsvxonkwpzljujnxmkegegrhrxrbrq...", : "isvgfjemwipzfvuxswbtukhlfasbnyjwmgmcbvmtbiqtswrjfxotrvkfonnysaglxbemtjcesnzvkcvuuhonmjlvhmwhfqefiaddifktpyclfftbxylvrzqzkmsvvvyixfwttbnuwupwylollaserg...", : "tldnxgfosuyhegfblvvkuohekgcyezgdilmmvkfwvjcwaefcllcsellfwmcraguyudaqctofajvwhzzmtyiyhvqeuquqzmpcjifnjhtldywtvxpvvmhhzwhjurnmklhdtnzclffesyhlohzuqzymzv..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 892 { : "xuwukulzfuenxjepjsscmqkmbinvtixcxaybudyhthrbvedvlunouphszsgllqaqemdgvymuhxmgfpogqnbacaloypfcarosqkbjjzvoxppkumlpxjhsyrvbnuetmyemfocvpacnpypewvaicmzgil...", : "ggvwxlhgisixpdwrlkgmxjnlwmqalfzdlcgzwbbypnfbowvqdpafeihrhisrjwfahqppypswwhfzhyujzmyamppytpooeqtwaonprhhifgqfocxskoruhxlkxtyxaccxkcucnmoibcupuwvehuzwba...", : "ofkbbdxteoomtryvavqyfuxbmznzrmxffbmmfovgaqnywideitwifijeaezzknqhkfzklytfujfwusosndqwqdvdzpynevkulpsvniszydnvvapeuoolznwxfqonljsooffnpevcthycoyvohmxqad..." 
} Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 851 { : "xwytbkcpowbrepfwkpioauzmnphcjtblonzfxkjqijuxqvbneehhppdpmcyfoodauspbwwsuxzzbwowcvhbwhhgesxaqpqgmjfokbtdjcrrvugrxyrngtpdffndkkusiowxdlqfadkarhadmmiwilw...", : "npatswvwgnzexyrttmgjrpfhcsrltgbhihokwmbebjqxibgdyeabucllfgabtythoazncawyowrxdmnfmmvvbujlfmkwsejrxuzwvveltwmnlhkqgsnexgvbudxmfsidojnuoruzcwulkwucvahdzg...", : "expblpnbyihsctnshrjavuyqxmsgmqguhnmehyvzftfcymnqmvcstmidbsvvhsvntwtajvnnzvlbozcwjalfpakgbausxunkchkltzpqozillomusnzggxknxjabfgjwgymydqnnmlhtytkznldahz..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 890 { : "xxyyedbohqdtxagvhdghzaqsewzhgbvvbhlldjlkjywdxajdnbhxpnqebnycraohfeosxuhtrotbueojzzwlavgbwvejvgsopqnadbducbffozpkvqmtnzctunyyvgphebbunrrfchgkkuetxbarka...", : "hhnemkxkgzboxpsfipkmmfzkbbrwyagsegqjvyodgjqbmzisgqirapzasgnabjtrbunxkgcoqftkowaftrzpbojqlmnhylanephssctwpgtekpdvslpcotjbocsrxxwodhlrdapnrifsieuznqtyjl...", : "hlzicuqxlqvrasjgqvspwjgfyajodkekquatymciwrwxnmrsnnudjthvfntsnchqdfdoqxzuxlyxrpqozxxjctyrjzgjcfbvudwemysiziquvzzmvehxbxduheqmgrfffpdmcspcinugkqjdliowhy..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 927 { : "xzjzreqemtzskwluqlrzwyqcrocrlhmvmdfhzwoldwczpmpmynhvwrciszerletkpmffwidwigepcorgomvzbmeeuvfgvdlptdwdwtqfvqysdsrrijfgehhprgwmmdcnedjfwvnphsvhiermahvcel...", : "ofknrbjcgqzpnprxlfstpkrkplgicjxkmjjdmvaqvvdptgzqkoniviiffbhejhltkmnfvqmhijzqiihcaaquehhzbitonjyuxozcwjlqjelaezaytjkbhpgmdjdvssgcfzjgwvrrfqwcpjmorosccj...", : "wzsvmbffuwxuoaicljwwfgwglexxfurimvkmageagtkrngmndmazuoycumlupgbjubigvyvpuzkjvgcqxfqnqbcevasjxphjuuviyoqtlsyzmyiopihgkrtowbmvfoweekszqlbwahpqqxgyvzbzye..." 
} Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 855 { : "yaklduxmitxrhueoeveyqmrcynzereeqiacifxxxqhbnmklekzdviscpjzbcdtmlzctqtbuefjtpzyzlpmrpqtxwyxgvnffubzvhdaeyhuctueksybdsrjlslsadwsevuppfijcigsysgtsyqtunev...", : "jxsgonmdcndqwgduuxkqumgwbvfzvmzkyngfdnpuqjhyvaxhwquwqflddrryrvlwxoowaqwcmfmbdyeuflubjsznyaqcxbrnynnidttzfzqclthozqeybnykbfkqxacsvnryyywgjkijxlurldhome...", : "dnvanreqbmyjiyjzmkgktmejteffztvsqmfginkgqvgeoeejgejgnzqzzpzzhcsetzfnqtofdguvutkcwtjurlffuhzimhesyhddofgwnqelvajdilvumoomyrphzybxstaenihslxpzwwlsmmbrql..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 869 { : "ybllrnifplemklgaafcxwbxenijtbxragzmdhjhkrihfttlqwaiddpsjuwlawhagiywkpzhsbyozafohnebvfmmeiqsahmehhvanisnnmjukvfortsitjxsjxmzlykatszlvftwbvfyjrbdivfdyre...", : "gsqurdiqazdbyjccftrvlqoyqwagjlyccckptlrnqmbxfishqcpimnayulkysxsnxrumvbilotzwrxrkxsebytucxdgyimrrummdakukfidpliuskcucxatpcmvtwekpkslchmfwkrvkrhouzazlir...", : "aoicmspqiojrfjunoiqhcukkuqnyvmjueamsbotamttzwkxetveexikbndtolruqfwcukbftelwoeqdmvzxudvmmhpkzojgfnuzhrxiyoogqkoxrlmvkdobxckwhekumepmwjcmaqzafhiuhfarcnk..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 947 { : "ycknnuqzvhcnwyykvnaamqewpjhbnlffxujxxphsvqcjzznondxayzgxcaxkkeaybvmawwxxbyhcnqdwbgsksrbiwhjyrxtxwpkinfqpniyeqgthvuwmrmgwjlemqrjdsjjzkpozfctxwplacmlgbr...", : "gbictjtnayoczxrxhleeoegebqnvueqbjcyjrovkyjqqwqmjpmmemsroqaspawosyqlraasyirhmmksbfmlcikijpzxmqxlavjrawsikzrnhwrkergyyraxqbwtpnaqkeeplfejjeknovpgizqeqcy...", : "izzpmghgeafkmarieuwmlapiwwghzztldojazbwnlizipsudpilmcdavxofzramslgpiepztdpdgkdqcsysmvwxurldtpaahtutywttrsrugqtivtjwyywhswutbiuyazxaitaaidnlivowlkmecpn..." 
} Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 850 { : "ygonemeetcznnuxucuwaupytpnpfpdzaqpbkmzfcliocfteczokdfzmduitptskmnwwsddfpogqjlxouicvyqqqksadobrezbxgofroejepkbzjuzcghjvjlphnkdutgreglkluffentlrevysphja...", : "qpcxdatomxafujmsabxfuukhsugkfmlnopperakhtdtuezpqxxxwiixpnakgikbsyabkshtnmgjvcojudhmdsfysqhlmyjvavcwfnxaxhhsauvytryvpwrkxcrocjuxebcaxpfsiqjivtytjiqsjyk...", : "mbnmueyktpgyzbwwelldywnzsikqdinbddmsaprxdzvpwyexnuvzqlbrbxsellbtptrclfzclzhlybiogynkwoedipnhvtraershhvwummgyhcqgmqtgmtjcqqrdndutvnqlynguucsxfeiizfgxiq..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 857 { : "yhbyulmopzqmzjmwfdgbobzxpzndmmbwkwedvwstwfoncpntgezkchqwiiwfrjfupzdypbilmuxqamaskupnapamlnlfyaminblkmmkfqylabsjvrrxolomhrpwzuovcfdgykrjlkkricybdcdmcyt...", : "dtsjhhlsxwovujsldxgvdeixkvgloxvfcvqxoqmpiphcwxtucfvqevbkabqeeethwrpdwiqwjgsufdxrmfrvnmdzqrecrbcyfpiwdpnsfxtodmsqoqomlnxaptgmcordztplaxfpfqgkpnfewcegyp...", : "zircneyuhbpqdtrsnmzextsyhrtdhkixeoipaztldfipboxfuhwhnyakqdcxoagawqgpzidlnlagymqgykfchftrrmvcypaeveslcyannwfueqohchnhbyxqjjgsonwjlplekvoswsxzpplaclxhcz..." } Fri Feb 22 11:39:31.252 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 913 { : "yhrwlnrlpjteevpsksvbmgxejfhrdqcgckovocqlxhztvsdjvkdliuvrvnljlgjablejupbuelveyzimfxwdjcwkqhzitvswcskeqcclzqzhuapdgeitlokelllvesdakuxwiliybvzuizrttipzym...", : "pojfendvoqozdshdmgpydjelctvyzuwejpydnbzwgwckhldxzqvtcorrmattoqbwkgfkmidnjtvmysmbntveqaiaufkaofpxdbpndmupkyuqutkcohafqsxlaregvaumxuladljiykvhyniynbownj...", : "lavxaqlhwuwbzkkzkeqidshekrsnpovkocyuebqgjtzlpofufddnrxjbibyhckacqcnwiugdqhfhgtcpupkwtyazgcwzcbqnemtkipbuvqfolybfwehthnkogtnettutfcgvhdrpwldjxuecxzfrnf..." 
} Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 903 { : "ykmudemtpixrcgyjhlmfhhieneesqsrqvisqolygsldjarinodvcagfcicqirdkickuqenvhoeptyahblpkzbneipivjfhxrbkmpzjiwgfazlmwqoeofjjubdrnwicczofuevlldhiicbxwdvixvvo...", : "smyrabubxdiztraiwdfieypsbbtuhvlgjsqnvbysxcgnuamvythfwqfuwmiwnyqdolmxctaafsofxusxmbozxmjjydnappvtdftelphtpbohilzpwjcsdnxqdjxurdicycjaxdgkutgmyitwjputvv...", : "ishnjroqabobdfkphstbpkirlmuppdspgpmvycfugyjkvpnxjkgevcklcvcpavfzxaxdtgqpbqogogfuvwnpqhoqzagxokrpzzwxhfbhxpctzqraouvwonvliuuggfssoassyykypvbfmcxgxmruvi..." } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 866 { : "ymbcjosymqofxycdxqmkpftqfujpdzkrkeplsdbtstuuneksdfyqmxpwobwepkgxaedldpfoknzlaxgiqphdpqhceydqvighsdntwvulpdoyxmaehhcvozxfkxtpkgrkdjbldaejxpgrxmyhjljzgs...", : "wdahoqouuuumogspucedqricyyismnerldfrbxuzvosidkupmppbpahsuzkpnvlpjzahhqfyaymcrsrbogenitwcqskksomvgeoorxouircirscqdtqahuieknqhgtakmojzslthmterxvupxchcdw...", : "miczjkjufikneysamyeoqkwuszcwlvwpaquvhalsuyiallzryqzshmeuegjbsbvhawakiszjsyiftzrjfvsehgvnsflvirptfnpjmzdefdoewgsvsiqebffuqmmixtvputjqlhohwuhejtolgkfgrt..." } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 833 { : "ymmtqbsswdnugazjmtutapmuakzalznkskyxdvkcybuwyfnswbsailtrbdugcpmeyggeucynjxoeabrenaxaoouvidivoxaalbbghizhwqzfdgvizhcydugszorjqmxhnehxqudfxqxsflgfnzobpv...", : "mieyhbhbjljylfsrqjxqgnqohkqfaogzvjsblogdfrvweobukzbzhvexhzuaqwaxysnhziaffgkwbyfhvsqxseesrpymkufdqsvoglecqdnshwjrgxcsdedaqujcgzlxptimdgfxkiguiftstxolob...", : "nseizjpjiartelqavqwyzjiqmbfpjcusqfpemgjxtmsesmrunamyfguzaxcqakyzjrjzidprrnovcrtbsphseyyqbvgeuvgqcawbjwuirkpprcwzqrokkjzvqzmkeesycydlvsstagxbdgbvombzuc..." 
} Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 989 { : "ypvitgzqczulvqiyksvetukdygnezezryoojtjvjhztitqfwavzkussppuexkzlbkjuzgyxjkenpjjulqyrcjjjhoqqunelvihvoohgauocertekptihwalaywnxwinsjijxeuuqkbxhtxjmwpupuj...", : "avlasxlzbsjqzekhcxbsqowhywhajksgfamrtxuhyjpmexicbgdfflxnpvdpoerhekokapzewogehklnipzaojfdpfcddlxnxmmhvlprarxggsjyfuoxzwgpzcjiozliymivobyvjrgsdfovokccyr...", : "poquemflmylgmvtvslbtruyyyuitokomwgcvaohondzcybjhdrgwgxfhrkrxfeuwcqatkuxiqtmuwiizegfhtqzcuwlgbdqqenocfkyayxekpcyqpkxjsjqiuiccqnkryopzrakdgsxquhnediydda..." } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 916 { : "yqwqmczsaqhohtlmthhbwfhnnvjmyqtryunbvecszijtqhjkzjiegdbrpmthgvkhtaubnlwacqbzyvhyralfyvahmlocwnvondcbcdtcelgqgfxadcrkhhxnodrrermmpwayuytpgousknlrvezlju...", : "yxpjmgluqhadimkpglrwshjplbcgonvnwbreanjfyzdoftvijvfhobkcnnixgvqitiygmafaraswoirldeunxsbfexueykizteqitesrrblwwaxsuvebqigvtfaaniojfuyqqraqwjxvmbyzsbazhj...", : "mtmcynoudctmvwrdrjatpfyvhsktlvsosrpplonooilowdqyhqqxjjiyfoxsgdwdkaujpnrllxhcyhczxnkgjygyaovnvzrfvjwcbmstokanrlbgqmwrttxfvuajsncevbyxoxgughvtsvlpcrvqxs..." } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 946 { : "yviriyrkdfclbnxzthefflossxbcezdsumutgiywfahpjlmzqowiklbeqxyqpmxtmanuxsshhhqavvgmrspstawwlaatlbsjdoagbdzumbzuooabplgykngqihbaokalchpqdifxymxagfxtgqqkzc...", : "yvtswpjgwfyhhosacylkwpxhsffjaldzsjwmzjyajisvtdysrimqfbhonafyzuxfuzmxkwdjjohxvnlhzdysqwsjwssogianqbmtrvkmiuynovnbenjrhffxnmhxugrqfobgoacohrlgahjhdkpakq...", : "opsglzwprofilqwnfpgvnadinvmxcqbxicfdqwiqgyowchufckegylmkybkdmgdwjgicvkvlgbacqhpyqfhybnozjvmdejfptadtqdqzwnnbvhgnzycjlmvoqbrhgudcsdqfkcmiopylbehznfckgb..." 
} Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 826 { : "ywbgktejjcywlnlbezgbrqjzfnsldowjnnyrraqmlcvfkydogsabhaabzujefnvipdcxonitulxbwlzaymbhdbvgjkaxkwvtgokuqrlxvpkdcyuzqcnbaxclqhpfsrgrkphyykfztzatwumbshehdd...", : "kwivkviryeshbztnnvplqyssihtdsqsmshlpaczeiqwicywqtzwruidbfnaejirdykgfyutqujkylaokgtttpvcltvmxqunzrtprfxtbenmavwmlwqldgitzrwyzzwogevxuhcepcdtcjnoqjaqylb...", : "qtjejhwxyrvqkdwsgfbjatyopyetpwuafqeohuqsyelitgudjdnhqlhqznirjmfyfxrfhhxzaxghdwbzyiwhavvekfhiskpsipakeutwmqaxtziynwjnlrmdjfmmsyxeywpesyyaabxhshwacznycmwswwqsn" } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 911 { : "ywdukqnfzkszywudpromfwawnupdljklmqhzzrdmnsoejzwzonibumrvdbcuxoxowvbieldmacwtghrclwdxerjdftxllfutbmgkavrcyyeadjbmfdbtcjlosanaslmixjfexwdbvdrpzvbwrgxtup...", : "jfpygoanckpyghygetclkxtoatogznenvlqjtmwytlelweqjdrlhprzqhwvxdjytqecsaigialpkopezvogwfmjnzoqvqmjjrnrnwihginobctyiykxxpckxtjuzfbwcnilsfwzitqadgxhrvyfqfo...", : "wmahdvxvydukwrbkjdwbrjdmtfulazpxuaugnvgcbzlaxgddbbdtsnpyyglwarahzcgxbgkogyxyplanwdebhyqzhfbtuvacmtqvwywcwdiwhuzzbmvqtmdezwbhjiwbafmvkuzcwdjspsinvnngpw..." } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 852 { : "ywyfzixmghoogkgeaftchuixwapqnjrlxzelfuelborwkkmzrjychktglcmluitqvctvndkkrdyxmjrmmwyfoxuutevcwczwvnspfyizivwbcgiilumxwnncfmjvntsjqfkmralurfijbiviazqcdq...", : "gdirtfiuaxgwqdgbawtzantwqrahwmpwlqppnshwxqunpjkrxbioryphhumzvcakursmhcaosemzqnagpwnecofwgqlsregonytrbhlabthtoxjxtqzdsqukfovkcxcrxktnodvwrsafunwgizzsmi...", : "hiopkuowgirgwhygguzbnzjenyrruhobthlbixeieojwegpaubacwrzucfclfqygguhpdlbmmcqfknqjyyfhzquxxtdoobvakbatwmltaowtqvoyqfcowehcpcgineqqgiuxmofwenqqaooqdqltme..." 
} Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 862 { : "yyxuxcxewyhutvwdcmbvfqyviehpvilqwcgqazywulcbiykxtqsnloxouzvmfzjbumrejuzwjccmkhsuvorqxbpgrbpxfbwgtcvbprzmexjsqvsxdsiryqoszibwrgxmlauqslccmrofxairkqqwfe...", : "mqwjdcdglcemzucvqkmwmessajkpcwudvjcmsbvxtmuvlljonzzhoecfuhhlstthsfqkfwbniswbnuobyxnnkbamkjjzircobcycdnuawqqmjdiadobxpesrgkbkotmihzqicqovndsgfkdproxruk...", : "bdcqmjodbxihzdgopkkdezxhyxiazzpehuqhetdkqzqrsdnstgaiugrzylxkurvewldbemwstghlibkbfqluwubdfyzzmwhhlnlhzpgrhjpztxkasdinrmgocqlbhypqxremnwdlfbwlmpildlhhzz..." } Fri Feb 22 11:39:31.253 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 849 { : "yzimjdnmqotwrztawyutfujwerikcylcmoqujmookspzklsbymmdcepnyaotejcznaomwifmzbkzgqsugyaonzozlcyudxbacjdyvipcmfygltbwmoxavvnnvglzfgovpyhvczfcfkehfzqttxruff...", : "ertscblbglxtekuwxorkxvlqdblcfyddcwojapvrgsdpjvwerlxtiigvyipufbmczfrhertayvloxskapjjxjqwxctcozcxjdvpwdpgaintqfkvjfqhiuaumluebmwrekehpuohccypatgkqwuknnq...", : "zqkxumxcipamgscnzdbbzqsxigqybnpqxjfoflqhnapgbmrimcdpmhalhhhcmfedphuconervswgaibjrcflikxjpwypccarzrcocvvsnjrugpudaburbjwvosjfqjkyzbbcrbjjyzszhjumaddiwi..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 869 { : "zbbrjdjofxnmbvwumqpuimmwenvjkdzbbarpkmzgisamvfyxhdsbfjtjzgtybdhkqeusoeuphmbffdfiumkozqicbmahcpvebpnbuxgptuznsttvsucvsrqfajxhrvfqlhnofvxndjvbiqsgqzvauj...", : "cjbimsdhcjjmohxdfpbdrtcbocyvqteduggunvhjcegxwcqthchjdhfpuriesfxemvpnbiwdkythigopgsqbmuwlvpwfqlrensmxubywjdgxzbavcfacvzijezzhjgpdqzmlirdppldeuxygfzpqfn...", : "jmohfrzeqdcolqptemowwidlqccbbwjwgelvmgreswamipjxxnnidwcolvsaezriawxxzggguxixmebtoikufrdrwzbtdoyjtbliuttmwddxtfecuttjmzcxygrbqdzgtannqdvcdenfruvtmpzscl..." 
} Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 821 { : "zceopnevqvwyxshzlrrkfygwafyitmynayxtbhlrrxepksuqteeheqkyacimkavcqpjsjuuznggqfgvgrehojlyrbbokpnxaqpbakvzjgjtvuzsqeamxmskqwwvkervmhznluehgfglckofeqkmedr...", : "ovpxpxlydcxedyzkwmersgsfmpzpnwmstxvdogucorbnrzloxfujjwlumqhhphjylcqafzvuaazxqgymshxqwlijudjogbgqezzbutmxviptyfhhidestdxftnmzerdylwkbafybjjnwblpcrvezxd...", : "zfjijjiioikqlgvcntydefxkrekqhrhuyliqvxifwaijjbijpxmbqfynykexmdhxlicbhdebahygntwyqlbsbybhzyzhpxlhqfvijwloucknpplwvxsiycmrallznrwqtukfhozzavufurmnmbgfey..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 865 { : "zcgaqgajtzwecpvjmyuqqkwxdubascccswyfcgxwjiigrsmdnchuvadczyinxkqhmbogktkotxxjjngttpunammrlnctagphxmvbdvwqtvtftxvsyrpeqeszcsplejofaxqjqfnrqkckmjttkuewfd...", : "nopjekarkdxglksholjetdtgiwmpwgofhqmfknsatxiuhtrzhgaqpjvtxpnywvakkgyitfmvhqxwzlxuzfpaviczgtdyrscujfaxmiiefrjfyqlggugrqilsfwmadadgatvispwvdrnailztdacbbq...", : "mddvawaimvfuaeamabckobtakazaucgquhutxostbhrrcvcztdxvanvgmoeawjctrmclfnhuxulixvbvcrdjwznqugkbnuahbknwlqbgnfsjntytarpbhojviwukzmsotjghmzrqhdmugiakgrulxj..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 833 { : "zcyxmmmlzgxwbzoevjgnrmysvhpgsgqrkohpopavohapuctdvyyqagtqrhsmwgvfdwnqfmkidpukhyaksyauqpukcmgnqnjyrrnmobonzeipvpxogmpbpmvgdtzwvsfkdlowtkfbzubzrirawhbzce...", : "merrpckphykouwdnvzrliklfazlvzmvhuscjqnlwpjxhsjctublfmvbdrzxoezuiqpgawwndbrenlfupcebrlypwmwvlltvtyhkgshgnrwjlonmysqkhoqvzbelxsbzpsexhkjqcsxsvfvawusfawc...", : "gvyxqoigrxdeeipqacowzirterrdrsvaaepbcfetaqttzbyecrdcuvxpemrnavmwwoygbyqqmntdxszntmeoywkvmufwndvtyajihqrvinczrmgtbegnvrcqspbzwqapftqduxhupvicehkrurimai..." 
} Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 937 { : "zdbjkqltpiuwsphpafghblkaayahwflonxjlntsdfhodoscjqagiqyatodldmckwtippiadehcptwvyaxfifmjnianmcwvcbcwyevvmvcfitxfictxakxmunvonrblraailqqmvqafcdnxanekugsy...", : "needsfyrvuanqqzpqlklehznlatznnuxsmluqddaisrwzfrtdjktctfszqdbflpohroghdparbyfsdlyegsnymobunnuvcytlfowevppfplsvcmvbrfnxzwutywjoodylviufnqfjdqluniqbnrmtm...", : "vehyhugmboetsetsfbuzdsozrwodqaevbluiyfnrcljxsghlfqdqorpcalgdbwdevxppoonrvttbzscpgaazfwdtkgigutmqrnykdlhhdlchspqviboiikubrowuutsuawssaoddwdxbqappjbohoq..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 936 { : "zefezrvnktelxkisrqkrcpygklsopbkcmqpzgxmfjrfafewiyjveumgflxszwqtgxtdfgxuoesbfzkidujwtnhlzwxxepqjrwiyahkvnzdkvqgstvenrlghknzlooenjcqduleexjrbajxsndrbmqq...", : "tcjewxyqwsdatehdnvyreyofcgxnnwdlttlzhmhwgqcfkqqmnwujflrvfpkvpdbldbydrzqalqpdovtpyrvszrbnyiemsgiljxchyaaxgsmxkrderadaqvjodrjtugnzyxwlmksdokfypjlptgoywd...", : "czpmlmzzzutemxwtpeaajhatljbnhvushamkgiajtyecmfokvekpyqvehnuppfijxfbxnawhnhuoewondzeydidvswdraopqyzajbacnvsqhthqpjrgziinuolvpxdsvaaqhwtijyiafrqmsarbtdk..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 830 { : "zeqirlsheieicfqibtkamvhricvingbouadqhpzvyoarvfvirdvkmglsqubwonmsovuoidgmksxehsptmsitbzkviibaksyposxlvbkhmanfdgaavlufgsdpxfnmfkgrfoxeqssgimptvrqqheylrn...", : "mpwnwtcctvegaqeltjjshjrvwnalsswghkqnkzmvducunbaclmuanzwlmqxavmfbixamyfepsrkuwhjntvitycrohraayqsvgoqpqdhadpfeyarluziewhrqrkrvoktmqbmezgxkqazwvvjckxmpwz...", : "imdanjczqavjpillnqpqqiolpcmnonwfffdvttsvajhzozvfiqdneiabocnetkplcxhhddsitcbouvkijoheerbsrrumlovvbturnlycbpyfrxbbruekiddduwmphhnjihqghhqahpnccwzcmyszvg..." 
} Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 823 { : "zflibovzoutlaspbtdaofpblapkpzsiwcloivhynnlpardglltqqtqkexdkgoghbdsozlemvvtbcnuyjejgtucmdrzxxwqjrdqdrwvpxgmebaubjcpacbguythyhrctfktyksdvyrdyyklkyptbldq...", : "sbqrtycaejnqftsxjtpohhcbyizvfxyjlkvbajaasmqzpuqvngyuvnffjpbuicozbelsmwkiefowvoehnvshxqhlimsqlsybcioburvtdbrbxuuftgotsomopxpsrplbumlauhtbsgvpdkuldbmirj...", : "qgcpyeankpgnuvfiiubdkhsswonuyddkoyslrdmpfstsseykawyotpwkronehxpzhplhepgtazbxtwbkxgnjveacgsgtwobacmbxschhnskhfuaosziwabuqcohxubapdoyiwevmeuukbynxburwfh..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 921 { : "zmsjxryaecbwwfvewihqhfslmlzrfubdkgcxyuuxjconlqdtnugszfudtfhudlhjmduneabsrpueztoplfwmcuybupypkyfewhjnuricpznbuaeascyzykthyvkoylkyxosrfganfkxykdcnvrnrvs...", : "dskltvmnnzodaldkwfffbpzurlqdkhqnpvywfnaqetcsmpnqwylrpniwtlpgmfgljprjogaoamwxzipcgxqodmlofmrxqcmrrjfdwkdwedpxgvzqvhervgqxetxljibatdwtmjhathpfmknbgyhikv...", : "vhbyfznycccjnjgtnawpgxzphkrukfpdrtpxgmkdsxdhfcotkgyeffhcxowxbbvhrztbuivwjpbtuholsbkgurjuedbctxpplydnomsydhvevyaxupllggnxjzhzbelzbcmmfrrvmsqguahxipjroq..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 893 { : "zohzeszfitqhoiuxujppjylncnuehytuftqtafdnvguovszrvtchnajctncrsryfdfogphooijtuyuujzzxurglpnyzlmtukckwaffrbergwlvtvdmfagpcxnuxfrsjzkpdkmrczkomhclogzotqam...", : "cwijgqslzhictilghmblwjzephtufxzzqcmygbimdoxbxbgvwrlrrlvoqmijgaaqtwzjkrpawzrheebserjcsqlzaunxcivexmpowmxqqalerdijkurnsuapmgthvoegyiffwbhgjekbmcesfnlpif...", : "oftvccyjnlcbzpguawechkdrhmtelhebpamhhfszcppzuradercbllycejebrorwegewzsfqsktxbcxdgzmkworfoyblgbfyekizutdwhejlmeuxzqhuyxckxkcsbqhrktfkyolxuvyqdhuheeadpu..." 
} Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 877 { : "zplpfjbejykufbvbfvdqpbdizrcknzuaxckevhiweccvaqllckfvyefpeptbtniqrbjhapyhvyhsprvpxtrglwttbdhujsnqohoglrdmbhcpllmgonjogajusncnxytdgfrztbuldsdbdjsysaqvpw...", : "higeebooiggbpwljotkxbpsdtlnhvjxbaixuyrxvfnrpllxtbjuzfpjrnyoptxrsmxyrvnzuuvcdekllxpkqrnvxpupiwhprgpunyigllyejzpdfyiuhpengdqajaxcunphloxzrckyjhdwxrdytho...", : "ghrefszqgctpbplsdulamneisbdfuzygkcebxjbxxdckwxkpbnsoukkvkqyabjarqbrpaajqxdkybliubuykwgdlvarjimuyofyutebfjsegdsyfnrbljijfhhlsegfdhrylfxbkwqaiqvrbfidlwl..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 897 { : "zqdzjymjfpgkjdknshlnxkzjjzbqupxrwcnhuastijswneuumyzatewhcjaaygxhqkpwizipttnihkoppqszcccgyhbzccvwihkqeproatzeqsxxukeymwkippqxdjpeidathntzxzrxjwolbuapjf...", : "svoaibuigvbhionxkvbxatnykmsfvhzgeynwswlieskazblvlqmngsruanctuqeaqyhsfqqvfrgkuzchzbzushapxgjadqvhlmiaheqguwbkgbnnmooztwdiefwhudrlcxhwcodcrierokezpnhyjh...", : "mhunbrtjmyxzguvozdjfzjpmyvctljwpzexjamgtnpqvvoyshlbqgrhvepwlcipkofhkaifdwxulhudcrckldpwxzehcegtrcpsxmbsutqlwuvdpkgnegtjznvdooqpullmuixwbgofwxkrhtdzfra..." } Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 930 { : "zqegfcgaomerkbyjkkswjiinaghrdwobcfovetpmizztmbwkkrswmvqkghxwwlhuurocenihewhpmcqmziezqcchmmfkskmiloudabevauahjynvsfzdsxfcmjsrffvwvygoreokpijjfqnrsxgqse...", : "jpoaucqbtyjyjudvnzoxwatcyfcolpfszyktasvzzfbwtnmouyrsqppwerrtvceowlurcfkrphlhvzlztwjbiiuvkqkoixtdplwvestefbiohqgewzpgtzdynmfbrsebjzhuspckafsmvjbgbfobtp...", : "zjhvyzhkowmphoqeheldzpizxcvzxdkyztetinbcqwkbdzswruposamkblfozzcaogoccizbetszzraamfwsbxlmwdzjjbesnqsedcrkjmcepmycdkapgyzqrxrgqpshwcwnckgzupdqkejfjxwlkt..." 
} Fri Feb 22 11:39:31.254 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 853 { : "zrphvfjfxzjcgmxulknpzxxstqyjhekfpvttbuhvjycloiqtxxmjfkuwpoegyohxynhaugaedzdgcioosgshtvgytiaicckxdgjlzlxcsjhlmbekhholqfsfiebrwbrlyimuqrgjdeteilvlsrsixw...", : "umxtclgsnfubfrwuacflhqdchcdyiklsgzfeepronnhtwaxzsndhvsfgehscctfxxrjeernwfmcvacyrevmbpdzhdpqesoncyoyvdwdnejpknmqrdpgwzkysuniusdpiotjbrcgvavaorjwijqfpuv...", : "hkiwwppcokecmmbsiqbyoscalugzkknvdtonhtcufxcfmpywmbkgbwhqpedpxvxgfdxayhpqqlmccoffrnqxgjytbdbtwqbwartuxefpoazjydouqvglzpzggbbphmablwoiydqjzgdsmaggigtmbv..." } Fri Feb 22 11:39:31.255 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 821 { : "ztprhiocwcvaezagfrcigzhmpuytlxjcqeegdkezntteghwuwxkhoxyszygyivjflbhshdjqdvtnhsfrpqhdxkautzzsuaaievogzdgtgkwxdappnlybqcbdktkmiqtlhfyfqyxrfypgxujvlffkfl...", : "mygagirlrepmhvbudcmhmpwqgmukokdrjrkmexigxgqkisdtojltqgrxyjnufwzznyivwfzwdhudqcrzofhtuwybyyajdsnvazrvrvksdrnqtrpbnvaklvmoyboblvdkobsxfwijouwsmmwurnfnly...", : "uwpqyvqluejelzyaoknqhrvznnzhmtxywgghlrdapclexafqxaqgvftjvzktshklmocapzdnkgdjclqxnsyxbzpvhybjhnxrjhtrbhoanczhqtpkcvtshcwipjqmydrryqeosjgkvaxcdqgusmqewy..." } Fri Feb 22 11:39:31.255 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 911 { : "zxpnckorhwjsidecgeqqtwlpzellaraoaaqgvotmyyqjkqveqezubntnvtqwfhdicmiwtnkdamtftvnmcpktocfonxtzmmnjwausjhybszvkdaoqaryiduwhjfktldtwiezwwifukybsbuvixiquje...", : "nsephvpvimkhktuwbbnoivsjgmujyobmhzirsvvqqmrtdynunxmlmksrypyebrvxaihewsqtumrejotdjutdwmuchfhzkirpsqeclpotkfcnulxilaglyibribrdsacbfrsbnzhjwjepjvobownzku...", : "ttayuplbacxkkxwmzoyetaumiududoqwyhklpbheimqvmcdlbsurwrznegnupfajixoarmlfvsoaakcsonkfmxvsveycngbczevzyurmcggedymtgxgjvsvlyyioxnrdgmrexifkvjextgneausiuq..." 
} Fri Feb 22 11:39:31.255 [conn4] test.system.indexes Btree::insert: key too large to index, skipping test.test_index_check10.$a_1_b_-1_c_-1 898 { : "zyxgboniingttyykflvflvbdqizeusjdwsmzfyvbdfzdohpdlmbrsvpntelsaecqpiyyocfbghgzkpbsxvsipgvnnuwqdxclwazlsalpdrgdoykjqvgmuyadiusiaklzbiusnngnypsonfuykfoogr...", : "egsjjrtacfqdqqjtmwrgmtacgkvuuovvxdqqhhhgurzdqqwekirvssntlwrtnvaettrejahifndkdmiyjzilncmjdehnbkcjehshzbjngrolylzvmbnmwukxlqszetpjzbshodaogxxowftajwlaoj...", : "jvlvomdinurqvcbchdbtsfzamostsxhzzsymbmqwhfqqtypnouegcyjbzwrekawtqpqxjknbohvanwxehzyxiikavpwuillgxniceacemiznhjzobcowugmdtyxzxfqijcsnpixiifnfszaukzcskr..." } Fri Feb 22 11:39:31.257 [conn4] warning: not all entries were added to the index, probably some keys were too large Fri Feb 22 11:39:31.262 [conn4] build index done. scanned 10000 total records. 0.108 secs Fri Feb 22 11:39:31.263 [conn4] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:109077 109ms Fri Feb 22 11:39:31.263 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:31.263 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:31.263 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:31.357 [conn4] test.test_index_check10 ERROR: key too large len:894 max:819 894 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:31.739 [conn4] test.test_index_check10 ERROR: key too large len:829 max:819 829 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:31.818 [conn4] test.test_index_check10 ERROR: key too large len:846 max:819 846 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:32.390 [conn4] test.test_index_check10 ERROR: key too large len:832 max:819 832 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:32.748 [conn4] test.test_index_check10 ERROR: key too large len:826 max:819 826 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:32.810 [conn4] test.test_index_check10 ERROR: key too large len:832 max:819 832 
test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:32.929 [conn4] test.test_index_check10 ERROR: key too large len:844 max:819 844 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:33.093 [conn4] test.test_index_check10 ERROR: key too large len:843 max:819 843 test.test_index_check10.$a_1_b_-1_c_-1 1260 Fri Feb 22 11:39:33.257 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:33.257 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:33.257 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:33.458 [conn4] test.test_index_check10 ERROR: key too large len:847 max:819 847 test.test_index_check10.$a_1_b_-1_c_-1 1446 Fri Feb 22 11:39:33.539 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:33.539 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:33.539 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:33.749 [conn4] test.test_index_check10 ERROR: key too large len:894 max:819 894 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:33.799 [conn4] test.test_index_check10 ERROR: key too large len:822 max:819 822 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:34.858 [conn4] test.test_index_check10 ERROR: key too large len:921 max:819 921 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:35.630 [conn4] test.test_index_check10 ERROR: key too large len:862 max:819 862 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:35.660 [conn4] test.test_index_check10 ERROR: key too large len:874 max:819 874 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:36.539 [conn4] test.test_index_check10 ERROR: key too large len:861 max:819 861 test.test_index_check10.$a_1_b_-1_c_-1 3192 Fri Feb 22 11:39:36.587 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:36.587 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:36.588 [conn4] validating index 1: 
test.test_index_check10.$a_1_b_-1_c_-1 3845 Fri Feb 22 11:39:37.801 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:37.801 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:37.801 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:38.851 [conn4] test.test_index_check10 ERROR: key too large len:939 max:819 939 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:39.270 [conn4] test.test_index_check10 ERROR: key too large len:835 max:819 835 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:39.314 [conn4] test.test_index_check10 ERROR: key too large len:897 max:819 897 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:39.848 [conn4] test.test_index_check10 ERROR: key too large len:907 max:819 907 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:40.322 [conn4] test.test_index_check10 ERROR: key too large len:851 max:819 851 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:40.387 [conn4] test.test_index_check10 ERROR: key too large len:841 max:819 841 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:41.021 [conn4] test.test_index_check10 ERROR: key too large len:870 max:819 870 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:41.580 [conn4] test.test_index_check10 ERROR: key too large len:952 max:819 952 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:41.600 [conn4] test.test_index_check10 ERROR: key too large len:868 max:819 868 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:41.916 [conn4] test.test_index_check10 ERROR: key too large len:879 max:819 879 test.test_index_check10.$a_1_b_-1_c_-1 6270 Fri Feb 22 11:39:41.991 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:41.991 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:41.992 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:42.128 [conn4] test.test_index_check10 ERROR: key too large len:826 max:819 826 
test.test_index_check10.$a_1_b_-1_c_-1 6694 Fri Feb 22 11:39:42.500 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:42.500 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:42.500 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:42.547 [conn4] test.test_index_check10 ERROR: key too large len:849 max:819 849 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:42.800 [conn4] test.test_index_check10 ERROR: key too large len:826 max:819 826 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:43.106 [conn4] test.test_index_check10 ERROR: key too large len:842 max:819 842 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:43.325 [conn4] test.test_index_check10 ERROR: key too large len:927 max:819 927 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:43.502 [conn4] test.test_index_check10 ERROR: key too large len:886 max:819 886 test.test_index_check10.$a_1_b_-1_c_-1 7389 Fri Feb 22 11:39:43.516 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:43.516 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:43.517 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 7442 Fri Feb 22 11:39:43.636 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:43.636 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:43.636 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:43.787 [conn4] test.test_index_check10 ERROR: key too large len:868 max:819 868 test.test_index_check10.$a_1_b_-1_c_-1 7698 Fri Feb 22 11:39:44.093 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:44.093 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:44.093 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:44.357 [conn4] test.test_index_check10 ERROR: key too large len:869 max:819 869 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 
11:39:45.282 [conn4] test.test_index_check10 ERROR: key too large len:909 max:819 909 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:45.508 [conn4] test.test_index_check10 ERROR: key too large len:837 max:819 837 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:45.741 [conn4] test.test_index_check10 ERROR: key too large len:836 max:819 836 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:46.254 [conn4] test.test_index_check10 ERROR: key too large len:822 max:819 822 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:46.332 [conn4] test.test_index_check10 ERROR: key too large len:866 max:819 866 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:46.701 [conn4] test.test_index_check10 ERROR: key too large len:822 max:819 822 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:46.874 [conn4] test.test_index_check10 ERROR: key too large len:841 max:819 841 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:47.438 [conn4] test.test_index_check10 ERROR: key too large len:870 max:819 870 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:47.608 [conn4] test.test_index_check10 ERROR: key too large len:833 max:819 833 test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:47.734 [conn4] test.test_index_check10 ERROR: key too large len:846 max:819 846 test.test_index_check10.$a_1_b_-1_c_-1 9967 Fri Feb 22 11:39:47.772 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:47.772 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:47.773 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:47.838 [conn4] CMD: validate test.test_index_check10 Fri Feb 22 11:39:47.838 [conn4] validating index 0: test.test_index_check10.$_id_ Fri Feb 22 11:39:47.838 [conn4] validating index 1: test.test_index_check10.$a_1_b_-1_c_-1 Fri Feb 22 11:39:48.506 [conn4] end connection 127.0.0.1:60518 (0 connections now open) 2.9060 minutes Fri Feb 22 11:39:48.533 [initandlisten] connection 
accepted from 127.0.0.1:60315 #5 (1 connection now open) Fri Feb 22 11:39:48.534 [conn5] end connection 127.0.0.1:60315 (0 connections now open) ******************************************* Test : index_check9.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_check9.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_check9.js";TestData.testFile = "index_check9.js";TestData.testName = "index_check9";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:39:48 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:39:48.703 [initandlisten] connection accepted from 127.0.0.1:54800 #6 (1 connection now open) null setting random seed: 1361533188712 Fri Feb 22 11:39:48.718 [conn6] CMD: drop test.test_index_check9 Fri Feb 22 11:39:48.719 [conn6] build index test.test_index_check9 { _id: 1 } Fri Feb 22 11:39:48.723 [conn6] build index done. scanned 0 total records. 0.003 secs Fri Feb 22 11:39:48.723 [conn6] info: creating collection test.test_index_check9 on add index Fri Feb 22 11:39:48.723 [conn6] build index test.test_index_check9 { a: 1.0 } Fri Feb 22 11:39:48.725 [conn6] build index done. scanned 0 total records. 
0.001 secs 4344 Fri Feb 22 11:39:49.069 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:49.069 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:49.069 [conn6] validating index 1: test.test_index_check9.$a_1 6846 Fri Feb 22 11:39:49.619 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:49.619 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:49.619 [conn6] validating index 1: test.test_index_check9.$a_1 7085 Fri Feb 22 11:39:49.652 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:49.652 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:49.652 [conn6] validating index 1: test.test_index_check9.$a_1 7340 Fri Feb 22 11:39:49.791 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:49.791 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:49.791 [conn6] validating index 1: test.test_index_check9.$a_1 7585 Fri Feb 22 11:39:49.822 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:49.822 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:49.822 [conn6] validating index 1: test.test_index_check9.$a_1 7659 Fri Feb 22 11:39:50.835 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:50.835 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:50.835 [conn6] validating index 1: test.test_index_check9.$a_1 7975 Fri Feb 22 11:39:51.446 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:51.446 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:51.446 [conn6] validating index 1: test.test_index_check9.$a_1 8380 Fri Feb 22 11:39:51.479 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:51.479 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:51.479 [conn6] validating index 1: test.test_index_check9.$a_1 8515 Fri Feb 22 11:39:52.481 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:52.481 [conn6] 
validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:52.481 [conn6] validating index 1: test.test_index_check9.$a_1 8730 Fri Feb 22 11:39:52.631 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:52.631 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:52.631 [conn6] validating index 1: test.test_index_check9.$a_1 9408 Fri Feb 22 11:39:53.856 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:53.856 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:53.856 [conn6] validating index 1: test.test_index_check9.$a_1 706 Fri Feb 22 11:39:54.074 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:54.075 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:54.075 [conn6] validating index 1: test.test_index_check9.$a_1 2947 Fri Feb 22 11:39:54.256 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:54.256 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:54.256 [conn6] validating index 1: test.test_index_check9.$a_1 3822 Fri Feb 22 11:39:54.330 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:54.330 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:54.330 [conn6] validating index 1: test.test_index_check9.$a_1 3832 Fri Feb 22 11:39:54.341 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:54.342 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:54.342 [conn6] validating index 1: test.test_index_check9.$a_1 3854 Fri Feb 22 11:39:55.266 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:55.267 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:55.267 [conn6] validating index 1: test.test_index_check9.$a_1 4191 Fri Feb 22 11:39:55.299 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:55.299 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:55.299 [conn6] validating index 1: 
test.test_index_check9.$a_1 6060 Fri Feb 22 11:39:55.473 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:55.473 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:55.474 [conn6] validating index 1: test.test_index_check9.$a_1 6285 Fri Feb 22 11:39:56.447 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:56.448 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:56.448 [conn6] validating index 1: test.test_index_check9.$a_1 6529 Fri Feb 22 11:39:57.513 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:57.514 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:57.514 [conn6] validating index 1: test.test_index_check9.$a_1 7846 Fri Feb 22 11:39:58.642 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:58.643 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:58.643 [conn6] validating index 1: test.test_index_check9.$a_1 8260 Fri Feb 22 11:39:59.806 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:59.807 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:59.807 [conn6] validating index 1: test.test_index_check9.$a_1 9693 Fri Feb 22 11:39:59.976 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:39:59.977 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:39:59.977 [conn6] validating index 1: test.test_index_check9.$a_1 13119 Fri Feb 22 11:40:00.329 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:00.330 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:00.330 [conn6] validating index 1: test.test_index_check9.$a_1 14001 Fri Feb 22 11:40:01.109 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:01.110 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:01.110 [conn6] validating index 1: test.test_index_check9.$a_1 14170 Fri Feb 22 11:40:02.256 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 
11:40:02.257 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:02.257 [conn6] validating index 1: test.test_index_check9.$a_1 14298 Fri Feb 22 11:40:03.250 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:03.251 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:03.252 [conn6] validating index 1: test.test_index_check9.$a_1 14499 Fri Feb 22 11:40:04.338 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:04.340 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:04.340 [conn6] validating index 1: test.test_index_check9.$a_1 18254 Fri Feb 22 11:40:04.755 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:04.757 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:04.757 [conn6] validating index 1: test.test_index_check9.$a_1 19282 Fri Feb 22 11:40:05.489 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:05.490 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:05.490 [conn6] validating index 1: test.test_index_check9.$a_1 22110 Fri Feb 22 11:40:05.684 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:05.685 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:05.685 [conn6] validating index 1: test.test_index_check9.$a_1 23548 Fri Feb 22 11:40:05.799 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:05.799 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:05.800 [conn6] validating index 1: test.test_index_check9.$a_1 27188 Fri Feb 22 11:40:06.290 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:06.291 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:06.291 [conn6] validating index 1: test.test_index_check9.$a_1 27192 Fri Feb 22 11:40:06.305 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:06.306 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:06.307 [conn6] 
validating index 1: test.test_index_check9.$a_1 27950 Fri Feb 22 11:40:07.486 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:07.487 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:07.487 [conn6] validating index 1: test.test_index_check9.$a_1 28415 Fri Feb 22 11:40:08.301 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:08.303 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:08.303 [conn6] validating index 1: test.test_index_check9.$a_1 29639 Fri Feb 22 11:40:09.571 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:09.572 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:09.572 [conn6] validating index 1: test.test_index_check9.$a_1 29693 Fri Feb 22 11:40:09.585 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:09.586 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:09.586 [conn6] validating index 1: test.test_index_check9.$a_1 35671 Fri Feb 22 11:40:10.842 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:10.843 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:10.843 [conn6] validating index 1: test.test_index_check9.$a_1 36238 Fri Feb 22 11:40:11.495 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:11.496 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:11.496 [conn6] validating index 1: test.test_index_check9.$a_1 37314 Fri Feb 22 11:40:12.190 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:12.190 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:12.191 [conn6] validating index 1: test.test_index_check9.$a_1 37762 Fri Feb 22 11:40:13.324 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:40:13.325 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:40:13.325 [conn6] validating index 1: test.test_index_check9.$a_1 38011 Fri Feb 22 11:40:13.349 [conn6] CMD: validate 
test.test_index_check9
Fri Feb 22 11:40:13.349 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:13.349 [conn6] validating index 1: test.test_index_check9.$a_1
39287
Fri Feb 22 11:40:13.439 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:13.440 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:13.440 [conn6] validating index 1: test.test_index_check9.$a_1
40640
Fri Feb 22 11:40:13.533 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:13.534 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:13.534 [conn6] validating index 1: test.test_index_check9.$a_1
40968
Fri Feb 22 11:40:14.820 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:14.821 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:14.822 [conn6] validating index 1: test.test_index_check9.$a_1
42168
Fri Feb 22 11:40:14.905 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:14.906 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:14.906 [conn6] validating index 1: test.test_index_check9.$a_1
42646
Fri Feb 22 11:40:16.063 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:16.064 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:16.065 [conn6] validating index 1: test.test_index_check9.$a_1
43146
Fri Feb 22 11:40:17.302 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:17.303 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:17.304 [conn6] validating index 1: test.test_index_check9.$a_1
45686
Fri Feb 22 11:40:18.717 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:18.718 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:18.718 [conn6] validating index 1: test.test_index_check9.$a_1
46673
Fri Feb 22 11:40:18.982 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:18.984 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:18.984 [conn6] validating index 1: test.test_index_check9.$a_1
48147
Fri Feb 22 11:40:19.113 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:19.114 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:19.114 [conn6] validating index 1: test.test_index_check9.$a_1
48155
Fri Feb 22 11:40:19.130 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:19.130 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:19.131 [conn6] validating index 1: test.test_index_check9.$a_1
49555
Fri Feb 22 11:40:19.274 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:19.275 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:19.275 [conn6] validating index 1: test.test_index_check9.$a_1
50057
Fri Feb 22 11:40:19.340 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:19.341 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:19.341 [conn6] validating index 1: test.test_index_check9.$a_1
50500
Fri Feb 22 11:40:19.745 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:19.747 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:19.747 [conn6] validating index 1: test.test_index_check9.$a_1
50829
Fri Feb 22 11:40:19.791 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:19.792 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:19.792 [conn6] validating index 1: test.test_index_check9.$a_1
53756
Fri Feb 22 11:40:21.463 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:21.464 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:21.465 [conn6] validating index 1: test.test_index_check9.$a_1
53780
Fri Feb 22 11:40:21.476 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:21.477 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:21.477 [conn6] validating index 1: test.test_index_check9.$a_1
54536
Fri Feb 22 11:40:21.538 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:21.539 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:21.539 [conn6] validating index 1: test.test_index_check9.$a_1
55613
Fri Feb 22 11:40:22.415 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:22.416 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:22.416 [conn6] validating index 1: test.test_index_check9.$a_1
56906
Fri Feb 22 11:40:23.662 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:23.663 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:23.663 [conn6] validating index 1: test.test_index_check9.$a_1
57042
Fri Feb 22 11:40:25.202 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:25.203 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:25.203 [conn6] validating index 1: test.test_index_check9.$a_1
58478
Fri Feb 22 11:40:25.344 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:25.345 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:25.345 [conn6] validating index 1: test.test_index_check9.$a_1
59358
Fri Feb 22 11:40:25.436 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:25.437 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:25.437 [conn6] validating index 1: test.test_index_check9.$a_1
59614
Fri Feb 22 11:40:25.473 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:25.474 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:25.475 [conn6] validating index 1: test.test_index_check9.$a_1
60154
Fri Feb 22 11:40:25.532 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:25.533 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:25.533 [conn6] validating index 1: test.test_index_check9.$a_1
61621
Fri Feb 22 11:40:26.483 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:26.484 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:26.484 [conn6] validating index 1: test.test_index_check9.$a_1
63118
Fri Feb 22 11:40:26.603 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:26.604 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:26.604 [conn6] validating index 1: test.test_index_check9.$a_1
63296
Fri Feb 22 11:40:27.273 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:27.275 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:27.275 [conn6] validating index 1: test.test_index_check9.$a_1
66897
Fri Feb 22 11:40:27.610 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:27.611 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:27.611 [conn6] validating index 1: test.test_index_check9.$a_1
68396
Fri Feb 22 11:40:27.776 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:27.777 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:27.777 [conn6] validating index 1: test.test_index_check9.$a_1
68408
Fri Feb 22 11:40:28.050 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:28.051 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:28.051 [conn6] validating index 1: test.test_index_check9.$a_1
74683
Fri Feb 22 11:40:28.696 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:28.697 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:28.697 [conn6] validating index 1: test.test_index_check9.$a_1
Fri Feb 22 11:40:29.070 [conn6] query test.test_index_check9 query: { query: { a: { $gt: "", $lte: "xepqv" } }, orderby: { a: 1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:11701 scanAndOrder:1 keyUpdates:0 locks(micros) r:135848 nreturned:10461 reslen:199528 135ms
75424
Fri Feb 22 11:40:30.453 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:30.454 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:30.455 [conn6] validating index 1: test.test_index_check9.$a_1
75780
Fri Feb 22 11:40:31.534 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:31.535 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:31.535 [conn6] validating index 1: test.test_index_check9.$a_1
76922
Fri Feb 22 11:40:31.797 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:31.799 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:31.799 [conn6] validating index 1: test.test_index_check9.$a_1
77144
Fri Feb 22 11:40:32.070 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:32.071 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:32.072 [conn6] validating index 1: test.test_index_check9.$a_1
Fri Feb 22 11:40:32.355 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: 1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:11860 scanAndOrder:1 keyUpdates:0 numYields: 1 locks(micros) r:145916 nreturned:11860 reslen:226292 103ms
77925
Fri Feb 22 11:40:33.881 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:33.883 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:33.883 [conn6] validating index 1: test.test_index_check9.$a_1
78803
Fri Feb 22 11:40:34.002 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:34.003 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:34.004 [conn6] validating index 1: test.test_index_check9.$a_1
79296
Fri Feb 22 11:40:34.072 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:34.074 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:34.074 [conn6] validating index 1: test.test_index_check9.$a_1
Fri Feb 22 11:40:34.390 [conn6] query test.test_index_check9 query: { query: { a: { $gte: "", $lt: "x" } }, orderby: { a: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:11998 scanAndOrder:1 keyUpdates:0 numYields: 1 locks(micros) r:187954 nreturned:10654 reslen:203240 113ms
82924
Fri Feb 22 11:40:35.799 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:35.801 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:35.801 [conn6] validating index 1: test.test_index_check9.$a_1
83410
Fri Feb 22 11:40:35.862 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:35.863 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:35.863 [conn6] validating index 1: test.test_index_check9.$a_1
Fri Feb 22 11:40:36.157 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: 1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:12247 scanAndOrder:1 keyUpdates:0 locks(micros) r:104306 nreturned:12247 reslen:233744 104ms
83991
Fri Feb 22 11:40:37.671 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.672 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.672 [conn6] validating index 1: test.test_index_check9.$a_1
85027
Fri Feb 22 11:40:37.750 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.750 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.751 [conn6] validating index 1: test.test_index_check9.$a_1
85279
Fri Feb 22 11:40:37.776 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.777 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.777 [conn6] validating index 1: test.test_index_check9.$a_1
85309
Fri Feb 22 11:40:37.780 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.780 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.780 [conn6] validating index 1: test.test_index_check9.$a_1
85810
Fri Feb 22 11:40:37.824 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.824 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.825 [conn6] validating index 1: test.test_index_check9.$a_1
86344
Fri Feb 22 11:40:37.871 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.872 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.872 [conn6] validating index 1: test.test_index_check9.$a_1
86455
Fri Feb 22 11:40:37.891 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:37.892 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:37.892 [conn6] validating index 1: test.test_index_check9.$a_1
86465
Fri Feb 22 11:40:39.416 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:39.417 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:39.417 [conn6] validating index 1: test.test_index_check9.$a_1
87161
Fri Feb 22 11:40:40.701 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:40.702 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:40.702 [conn6] validating index 1: test.test_index_check9.$a_1
88219
Fri Feb 22 11:40:42.157 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:42.158 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:42.159 [conn6] validating index 1: test.test_index_check9.$a_1
89241
Fri Feb 22 11:40:42.722 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:42.723 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:42.723 [conn6] validating index 1: test.test_index_check9.$a_1
90112
Fri Feb 22 11:40:44.174 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:44.175 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:44.175 [conn6] validating index 1: test.test_index_check9.$a_1
92053
Fri Feb 22 11:40:44.810 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:44.811 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:44.811 [conn6] validating index 1: test.test_index_check9.$a_1
93656
Fri Feb 22 11:40:44.942 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:44.943 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:44.943 [conn6] validating index 1: test.test_index_check9.$a_1
94259
Fri Feb 22 11:40:44.998 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:44.999 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:44.999 [conn6] validating index 1: test.test_index_check9.$a_1
94560
Fri Feb 22 11:40:45.037 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:45.038 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:45.038 [conn6] validating index 1: test.test_index_check9.$a_1
94679
Fri Feb 22 11:40:45.058 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:45.059 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:45.059 [conn6] validating index 1: test.test_index_check9.$a_1
97390
Fri Feb 22 11:40:46.316 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:46.317 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:46.317 [conn6] validating index 1: test.test_index_check9.$a_1
97396
Fri Feb 22 11:40:46.331 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:46.332 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:46.332 [conn6] validating index 1: test.test_index_check9.$a_1
97803
Fri Feb 22 11:40:47.813 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:47.814 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:47.814 [conn6] validating index 1: test.test_index_check9.$a_1
99325
Fri Feb 22 11:40:47.926 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:47.927 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:47.927 [conn6] validating index 1: test.test_index_check9.$a_1
Fri Feb 22 11:40:47.981 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:47.982 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:47.982 [conn6] validating index 1: test.test_index_check9.$a_1
Fri Feb 22 11:40:49.417 [conn6] CMD: drop test.test_index_check9
Fri Feb 22 11:40:49.421 [conn6] build index test.test_index_check9 { _id: 1 }
Fri Feb 22 11:40:49.422 [conn6] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:40:49.422 [conn6] info: creating collection test.test_index_check9 on add index
Fri Feb 22 11:40:49.422 [conn6] build index test.test_index_check9 { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 }
Fri Feb 22 11:40:49.423 [conn6] build index done. scanned 0 total records. 0 secs
385
Fri Feb 22 11:40:49.457 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.457 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.457 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
1385
Fri Feb 22 11:40:49.549 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.549 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.549 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
2576
Fri Feb 22 11:40:49.646 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.646 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.646 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
2701
Fri Feb 22 11:40:49.677 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.677 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.677 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
3445
Fri Feb 22 11:40:49.788 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.788 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.788 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
4055
Fri Feb 22 11:40:49.865 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.865 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.865 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
4763
Fri Feb 22 11:40:49.952 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:49.952 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:49.952 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
6828
Fri Feb 22 11:40:50.196 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:50.196 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:50.196 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
8017
Fri Feb 22 11:40:50.340 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:50.340 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:50.340 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
9208
Fri Feb 22 11:40:50.655 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:50.655 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:50.656 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
315
Fri Feb 22 11:40:50.808 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:50.808 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:50.808 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
481
Fri Feb 22 11:40:51.202 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:51.202 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:51.202 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
1200
Fri Feb 22 11:40:51.327 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:51.327 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:51.327 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
2303
Fri Feb 22 11:40:51.566 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:51.566 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:51.566 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
3225
Fri Feb 22 11:40:51.742 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:51.742 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:51.742 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
3353
Fri Feb 22 11:40:51.856 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:51.856 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:51.856 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
3731
Fri Feb 22 11:40:52.083 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.083 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.083 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
3824
Fri Feb 22 11:40:52.119 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.119 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.119 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
4494
Fri Feb 22 11:40:52.316 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.316 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.316 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
5173
Fri Feb 22 11:40:52.521 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.521 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.521 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
6378
Fri Feb 22 11:40:52.745 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.745 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.745 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
6436
Fri Feb 22 11:40:52.775 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.775 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.775 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
6496
Fri Feb 22 11:40:52.913 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.913 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.913 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
6729
Fri Feb 22 11:40:52.966 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:52.966 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:52.966 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
7943
Fri Feb 22 11:40:53.301 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:53.301 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:53.302 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
8238
Fri Feb 22 11:40:53.347 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:53.348 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:53.348 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
10280
Fri Feb 22 11:40:53.873 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:53.873 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:53.873 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
10333
Fri Feb 22 11:40:54.152 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:54.152 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:54.152 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
10473
Fri Feb 22 11:40:54.177 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:54.177 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:54.177 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
10791
Fri Feb 22 11:40:54.347 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:54.347 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:54.347 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
10804
Fri Feb 22 11:40:54.389 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:54.389 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:54.389 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
11159
Fri Feb 22 11:40:54.490 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:54.490 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:54.490 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
13609
Fri Feb 22 11:40:54.854 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:54.854 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:54.854 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
14175
Fri Feb 22 11:40:55.007 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:55.007 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:55.008 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
15847
Fri Feb 22 11:40:55.376 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:55.376 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:55.376 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
16672
Fri Feb 22 11:40:55.711 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:55.711 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:55.711 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
16739
Fri Feb 22 11:40:55.741 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:55.741 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:55.741 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
16837
Fri Feb 22 11:40:55.772 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:55.772 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:55.772 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
17077
Fri Feb 22 11:40:56.027 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:56.027 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:56.027 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
17561
Fri Feb 22 11:40:56.150 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:56.150 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:56.151 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
19840
Fri Feb 22 11:40:56.527 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:56.527 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:56.527 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
19895
Fri Feb 22 11:40:56.559 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:56.559 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:56.560 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
21064
Fri Feb 22 11:40:56.758 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:56.758 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:56.758 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
22719
Fri Feb 22 11:40:57.006 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:57.007 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:57.007 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
23072
Fri Feb 22 11:40:57.080 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:57.081 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:57.081 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
23121
Fri Feb 22 11:40:57.134 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:57.134 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:57.134 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
23407
Fri Feb 22 11:40:57.194 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:57.194 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:57.195 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
26467
Fri Feb 22 11:40:57.689 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:57.690 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:57.690 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
28077
Fri Feb 22 11:40:58.113 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.113 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.113 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
28130
Fri Feb 22 11:40:58.139 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.139 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.139 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
28928
Fri Feb 22 11:40:58.304 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.304 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.304 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
29580
Fri Feb 22 11:40:58.410 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.410 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.410 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
30353
Fri Feb 22 11:40:58.696 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.696 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.696 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
31437
Fri Feb 22 11:40:58.884 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.884 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.884 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
31462
Fri Feb 22 11:40:58.907 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.907 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.907 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
31652
Fri Feb 22 11:40:58.956 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:58.956 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:58.956 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
33520
Fri Feb 22 11:40:59.244 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:59.244 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:59.244 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
34416
Fri Feb 22 11:40:59.377 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:59.377 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:59.377 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
34931
Fri Feb 22 11:40:59.472 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:59.472 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:59.472 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
37800
Fri Feb 22 11:40:59.842 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:59.843 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:59.843 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
38317
Fri Feb 22 11:40:59.906 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:40:59.906 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:40:59.906 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
45013
Fri Feb 22 11:41:00.885 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:00.885 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:00.885 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
45536
Fri Feb 22 11:41:01.029 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.029 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.029 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
46669
Fri Feb 22 11:41:01.200 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.200 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.200 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
47144
Fri Feb 22 11:41:01.294 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.294 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.294 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
48303
Fri Feb 22 11:41:01.538 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.538 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.538 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
48926
Fri Feb 22 11:41:01.682 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.682 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.682 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
49808
Fri Feb 22 11:41:01.807 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.807 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.808 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
50466
Fri Feb 22 11:41:01.920 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:01.921 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:01.921 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:02.164 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: -1.0, b: 1.0, c: 1.0, d: 1.0, e: 1.0 }, $hint: { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:14461 scanAndOrder:1 keyUpdates:0 locks(micros) r:242235 nreturned:14461 reslen:892330 242ms
Fri Feb 22 11:41:02.407 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: -1.0, b: 1.0, c: 1.0, d: 1.0, e: 1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:14461 scanAndOrder:1 keyUpdates:0 locks(micros) r:144496 nreturned:1 reslen:396 144ms
Fri Feb 22 11:41:02.551 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: -1.0, b: 1.0, c: 1.0, d: 1.0, e: 1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:14461 scanAndOrder:1 keyUpdates:0 locks(micros) r:143504 nreturned:14461 reslen:892330 143ms
50991
Fri Feb 22 11:41:05.187 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:05.187 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:05.187 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
52190
Fri Feb 22 11:41:05.492 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:05.492 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:05.493 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
52398
Fri Feb 22 11:41:05.660 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:05.660 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:05.661 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
52777
Fri Feb 22 11:41:05.725 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:05.725 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:05.725 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
53979
Fri Feb 22 11:41:05.873 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:05.873 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:05.873 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
55005
Fri Feb 22 11:41:05.986 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:05.986 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:05.986 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
58059
Fri Feb 22 11:41:06.336 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:06.336 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:06.336 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
58655
Fri Feb 22 11:41:06.410 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:06.410 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:06.410 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
59782
Fri Feb 22 11:41:06.712 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:06.712 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:06.713 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
59991
Fri Feb 22 11:41:06.738 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:06.738 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:06.738 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
62554
Fri Feb 22 11:41:07.029 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.029 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.029 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
63154
Fri Feb 22 11:41:07.138 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.138 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.138 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
63676
Fri Feb 22 11:41:07.233 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.233 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.233 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
63724
Fri Feb 22 11:41:07.322 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.322 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.322 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
65664
Fri Feb 22 11:41:07.563 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.563 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.563 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
65869
Fri Feb 22 11:41:07.626 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.626 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.627 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
67904
Fri Feb 22 11:41:07.852 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.852 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.852 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
68955
Fri Feb 22 11:41:07.979 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:07.979 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:07.979 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
69866
Fri Feb 22 11:41:08.090 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:08.090 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:08.090 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
73928
Fri Feb 22 11:41:08.633 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:08.633 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22
11:41:08.633 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
74402
Fri Feb 22 11:41:08.743 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:08.743 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:08.743 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:08.928 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 0.0, 2.0, 4.0, 7.0, 7.0, 3.0, 8.0, 6.0, 0.0, 9.0, 0.0, 6.0, 4.0 ] }, d: 6.0, e: { $gt: 0.0, $lt: 2.0 } }, orderby: { a: 1.0, b: 1.0, c: 1.0, d: 1.0, e: 1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:21682 scanAndOrder:1 keyUpdates:0 locks(micros) r:101381 nreturned:1 reslen:1456 101ms
75011
Fri Feb 22 11:41:09.187 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:09.187 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:09.187 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:09.328 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 0.0, 6.0, 2.0, 3.0, 0.0, 4.0, 7.0, 0.0, 4.0, 1.0, 4.0, 5.0, 6.0, 6.0 ] }, b: { $gt: "b", $lt: "o" } }, orderby: { a: -1.0, b: -1.0, c: -1.0, d: 1.0, e: 1.0 }, $hint: { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:6240 scanAndOrder:1 keyUpdates:0 locks(micros) r:140355 nreturned:6226 reslen:386754 140ms
Fri Feb 22 11:41:09.532 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 0.0, 6.0, 2.0, 3.0, 0.0, 4.0, 7.0, 0.0, 4.0, 1.0, 4.0, 5.0, 6.0, 6.0 ] }, b: { $gt: "b", $lt: "o" } }, orderby: { a: -1.0, b: -1.0, c: -1.0, d: 1.0, e: 1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:6341 scanAndOrder:1 keyUpdates:0 locks(micros) r:141992 nreturned:1 reslen:1504 141ms
Fri Feb 22 11:41:09.658 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 0.0, 6.0, 2.0, 3.0, 0.0, 4.0, 7.0, 0.0, 4.0, 1.0, 4.0, 5.0, 6.0, 6.0 ] }, b: { $gt: "b", $lt: "o" } }, orderby: { a: -1.0, b: -1.0, c: -1.0, d: 1.0, e: 1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:16708 scanAndOrder:1 keyUpdates:0 numYields: 1 locks(micros) r:231591 nreturned:6226 reslen:386754 126ms
75046
Fri Feb 22 11:41:10.959 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:10.960 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:10.960 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
75486
Fri Feb 22 11:41:11.024 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.024 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.024 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
77366
Fri Feb 22 11:41:11.242 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.242 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.242 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
78291
Fri Feb 22 11:41:11.389 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.389 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.389 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
79340
Fri Feb 22 11:41:11.512 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.512 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.513 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
80284
Fri Feb 22 11:41:11.730 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.730 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.730 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
80512
Fri Feb 22 11:41:11.816 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.816 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.816 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
81584
Fri Feb 22 11:41:11.937 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:11.937 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:11.937 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
84271
Fri Feb 22 11:41:12.244 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:12.244 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:12.244 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
84979
Fri Feb 22 11:41:12.342 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:12.342 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:12.342 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
89853
Fri Feb 22 11:41:13.064 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:13.064 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:13.064 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:13.177 [conn6] query test.test_index_check9 query: { query: { b: { $gte: "", $lt: "vfpzhnqc" }, e: { $gt: 5.0, $lte: 6.0 } }, orderby: { a: -1.0, b: 1.0, c: 1.0, d: 1.0, e: 1.0 }, $hint: { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:14625 scanAndOrder:1 keyUpdates:0 locks(micros) r:112932 nreturned:1567 reslen:96639 112ms
90026
Fri Feb 22 11:41:13.702 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:13.702 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:13.702 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
90954
Fri Feb 22 11:41:13.881 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:13.882 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:13.882 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
90999
Fri Feb 22 11:41:13.892 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:13.892 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:13.892 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
93647
Fri Feb 22 11:41:14.377 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:14.377 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:14.377 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
94676
Fri Feb 22 11:41:14.664 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:14.664 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:14.664 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
95370
Fri Feb 22 11:41:14.849 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:14.849 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:14.850 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:14.999 [conn6] query test.test_index_check9 query: { query: { b: { $gt: "", $lt: "zxqj" }, c: { $in: [ 4.0, 9.0, 0.0, 6.0 ] }, d: { $gt: 0.0, $lt: 5.0 } }, orderby: { a: -1.0, b: 1.0, c: -1.0, d: 1.0, e: 1.0 }, $hint: { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:16746 scanAndOrder:1 keyUpdates:0 locks(micros) r:148881 nreturned:2888 reslen:179237 148ms
95664
Fri Feb 22 11:41:15.933 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:15.933 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:15.933 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
95743
Fri Feb 22 11:41:16.017 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:16.017 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:16.017 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
95955
Fri Feb 22 11:41:16.097 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:16.097 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:16.097 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:16.273 [conn6] query test.test_index_check9 query: { query: { a: { $gt: 0.0, $lte: 8.0 }, c: { $gte: 1.0, $lt: 9.0 }, d: { $in: [ 6.0, 4.0, 7.0, 2.0, 2.0, 1.0, 8.0, 0.0, 8.0, 3.0, 3.0, 5.0, 3.0 ] }, e: { $in: [ 9.0, 8.0, 8.0, 4.0, 0.0 ] } }, orderby: { a: 1.0, b: -1.0, c: 1.0, d: -1.0, e: -1.0 }, $hint: { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:14607 scanAndOrder:1 keyUpdates:0 locks(micros) r:174879 nreturned:4239 reslen:262015 174ms
Fri Feb 22 11:41:16.485 [conn6] query test.test_index_check9 query: { query: { a: { $gt: 0.0, $lte: 8.0 }, c: { $gte: 1.0, $lt: 9.0 }, d: { $in: [ 6.0, 4.0, 7.0, 2.0, 2.0, 1.0, 8.0, 0.0, 8.0, 3.0, 3.0, 5.0, 3.0 ] }, e: { $in: [ 9.0, 8.0, 8.0, 4.0, 0.0 ] } }, orderby: { a: 1.0, b: -1.0, c: 1.0, d: -1.0, e: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:14918 scanAndOrder:1 keyUpdates:0 locks(micros) r:171165 nreturned:1 reslen:1640 171ms
Fri Feb 22 11:41:16.600 [conn6] query test.test_index_check9 query: { query: { a: { $gt: 0.0, $lte: 8.0 }, c: { $gte: 1.0, $lt: 9.0 }, d: { $in: [ 6.0, 4.0, 7.0, 2.0, 2.0, 1.0, 8.0, 0.0, 8.0, 3.0, 3.0, 5.0, 3.0 ] }, e: { $in: [ 9.0, 8.0, 8.0, 4.0, 0.0 ] } }, orderby: { a: 1.0, b: -1.0, c: 1.0, d: -1.0, e: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:18609 scanAndOrder:1 keyUpdates:0 locks(micros) r:114886 nreturned:4239 reslen:262015 114ms
Fri Feb 22 11:41:16.745 [conn6] command test.$cmd command: { count: "test_index_check9", query: { a: { $gt: 0.0, $lte: 8.0 }, c: { $gte: 1.0, $lt: 9.0 }, d: { $in: [ 6.0, 4.0, 7.0, 2.0, 2.0, 1.0, 8.0, 0.0, 8.0, 3.0, 3.0, 5.0, 3.0 ] }, e: { $in: [ 9.0, 8.0, 8.0, 4.0, 0.0 ] } }, fields: {} } ntoreturn:1 keyUpdates:0 locks(micros) r:102653 reslen:48 102ms
96918
Fri Feb 22 11:41:17.741 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:17.741 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:17.741 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
97371
Fri Feb 22 11:41:17.996 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:17.996 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:17.996 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
98788
Fri Feb 22 11:41:18.143 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:18.143 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:18.143 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:18.282 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 6.0, 6.0, 6.0, 0.0, 0.0, 0.0, 1.0, 5.0, 9.0, 1.0, 8.0, 5.0, 5.0, 5.0 ] }, e: { $in: [ 5.0, 1.0, 7.0, 8.0, 5.0, 1.0, 1.0, 8.0, 0.0, 2.0, 2.0 ] } }, orderby: { a: -1.0, b: 1.0, c: -1.0, d: -1.0, e: -1.0 }, $hint: { a: -1.0, b: -1.0, c: 1.0, d: 1.0, e: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:11219 scanAndOrder:1 keyUpdates:0 locks(micros) r:138316 nreturned:6769 reslen:418440 138ms
Fri Feb 22 11:41:18.473 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 6.0, 6.0, 6.0, 0.0, 0.0, 0.0, 1.0, 5.0, 9.0, 1.0, 8.0, 5.0, 5.0, 5.0 ] }, e: { $in: [ 5.0, 1.0, 7.0, 8.0, 5.0, 1.0, 1.0, 8.0, 0.0, 2.0, 2.0 ] } }, orderby: { a: -1.0, b: 1.0, c: -1.0, d: -1.0, e: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:11370 scanAndOrder:1 keyUpdates:0 locks(micros) r:141802 nreturned:1 reslen:1692 141ms
Fri Feb 22 11:41:18.577 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 6.0, 6.0, 6.0, 0.0, 0.0, 0.0, 1.0, 5.0, 9.0, 1.0, 8.0, 5.0, 5.0, 5.0 ] }, e: { $in: [ 5.0, 1.0, 7.0, 8.0, 5.0, 1.0, 1.0, 8.0, 0.0, 2.0, 2.0 ] } }, orderby: { a: -1.0, b: 1.0, c: -1.0, d: -1.0, e: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:18865 scanAndOrder:1 keyUpdates:0 locks(micros) r:103020 nreturned:6769 reslen:418440 103ms
99501
Fri Feb 22 11:41:19.861 [conn6] CMD: validate
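The `scanAndOrder:1` on the hinted queries above is expected: the hinted index is `{ a: -1, b: -1, c: 1, d: 1, e: 1 }` while the requested sorts flip only some of those directions, and a B-tree index can supply a sort only in its own key order or its exact reverse. A rough Python sketch of that rule follows; it is a deliberate simplification of the real query planner (which also considers equality predicates and index prefixes), for illustration only:

```python
def index_serves_sort(index, sort):
    """Return True if an index, given as (field, direction) pairs,
    can provide the requested sort without an in-memory sort.
    Only the exact key order or its full reverse qualifies; any
    other direction mix forces scanAndOrder, as in the log above."""
    if len(sort) > len(index):
        return False
    prefix = index[:len(sort)]
    if [f for f, _ in prefix] != [f for f, _ in sort]:
        return False
    pairs = [(d1, d2) for (_, d1), (_, d2) in zip(prefix, sort)]
    same = all(d1 == d2 for d1, d2 in pairs)
    reverse = all(d1 == -d2 for d1, d2 in pairs)
    return same or reverse

idx = [('a', -1), ('b', -1), ('c', 1), ('d', 1), ('e', 1)]
# Mixed directions, like the sorts hinted above: in-memory sort needed.
print(index_serves_sort(idx, [('a', -1), ('b', 1), ('c', 1), ('d', 1), ('e', 1)]))
# Full reverse of the key pattern: the index can be walked backwards.
print(index_serves_sort(idx, [('a', 1), ('b', 1), ('c', -1), ('d', -1), ('e', -1)]))
```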
test.test_index_check9
Fri Feb 22 11:41:19.861 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:19.861 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
99931
Fri Feb 22 11:41:19.921 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:19.921 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:19.921 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:20.058 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:20.058 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:20.059 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_1_d_1_e_1
Fri Feb 22 11:41:20.076 [conn6] CMD: drop test.test_index_check9
Fri Feb 22 11:41:20.079 [conn6] build index test.test_index_check9 { _id: 1 }
Fri Feb 22 11:41:20.080 [conn6] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:41:20.080 [conn6] info: creating collection test.test_index_check9 on add index
Fri Feb 22 11:41:20.080 [conn6] build index test.test_index_check9 { a: -1.0, b: -1.0, c: -1.0 }
Fri Feb 22 11:41:20.081 [conn6] build index done. scanned 0 total records.
0.001 secs 57 Fri Feb 22 11:41:20.085 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:20.085 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:20.085 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 1197 Fri Feb 22 11:41:20.174 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:20.174 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:20.174 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 1790 Fri Feb 22 11:41:20.456 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:20.456 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:20.456 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 2119 Fri Feb 22 11:41:20.507 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:20.507 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:20.507 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 2597 Fri Feb 22 11:41:20.634 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:20.634 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:20.634 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 4688 Fri Feb 22 11:41:20.956 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:20.956 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:20.956 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 5136 Fri Feb 22 11:41:21.252 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:21.252 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:21.252 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 5798 Fri Feb 22 11:41:21.344 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:21.344 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:21.344 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 6242 Fri Feb 22 
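The index names in the validate lines above (`$a_-1_b_-1_c_1_d_1_e_1` before the rebuild, `$a_-1_b_-1_c_-1` after, following the `namespace.$` prefix) come from MongoDB's default index-naming convention: key fields and their directions joined with underscores. A one-function sketch of that convention:

```python
def default_index_name(key_spec):
    """Build MongoDB's default index name from (field, direction)
    pairs, e.g. [('a', -1), ('b', -1), ('c', -1)] -> 'a_-1_b_-1_c_-1'."""
    return '_'.join('%s_%d' % (field, direction) for field, direction in key_spec)

print(default_index_name([('a', -1), ('b', -1), ('c', -1)]))
print(default_index_name([('a', -1), ('b', -1), ('c', 1), ('d', 1), ('e', 1)]))
```

This is why the name in the log changes as soon as the test drops the collection and rebuilds it with the three-field key `{ a: -1.0, b: -1.0, c: -1.0 }`.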
11:41:21.390 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:21.390 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:21.390 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
7137
Fri Feb 22 11:41:21.511 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:21.512 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:21.512 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
Fri Feb 22 11:41:21.631 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: -1.0, b: 1.0, c: -1.0 }, $hint: { a: -1.0, b: -1.0, c: -1.0 } } ntoreturn:0 ntoskip:0 nscanned:7138 scanAndOrder:1 keyUpdates:0 locks(micros) r:118784 nreturned:7138 reslen:271264 118ms
7622
Fri Feb 22 11:41:22.896 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:22.896 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:22.897 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
8620
Fri Feb 22 11:41:22.967 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:22.967 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:22.967 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
9811
Fri Feb 22 11:41:23.566 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:23.566 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:23.567 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
Fri Feb 22 11:41:23.858 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: -1.0, b: -1.0, c: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:9812 scanAndOrder:1 keyUpdates:0 locks(micros) r:129566 nreturned:9812 reslen:372876 129ms
9911
Fri Feb 22 11:41:25.395 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:41:25.395 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:41:25.395 [conn6] validating index 1:
test.test_index_check9.$a_-1_b_-1_c_-1 2962 Fri Feb 22 11:41:26.340 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:26.343 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:26.343 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 4355 Fri Feb 22 11:41:26.702 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:26.705 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:26.705 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 5069 Fri Feb 22 11:41:26.824 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:26.828 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:26.828 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 6770 Fri Feb 22 11:41:27.046 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.050 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.050 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 6841 Fri Feb 22 11:41:27.062 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.065 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.065 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 8033 Fri Feb 22 11:41:27.203 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.206 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.206 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 8736 Fri Feb 22 11:41:27.286 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.289 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.289 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 10308 Fri Feb 22 11:41:27.477 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.481 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.481 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 11213 Fri Feb 22 11:41:27.586 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.589 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.589 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 11628 Fri Feb 22 11:41:27.638 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.641 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.641 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 12406 Fri Feb 22 11:41:27.722 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.725 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.725 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 13313 Fri Feb 22 11:41:27.821 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.824 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.824 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 14060 Fri Feb 22 11:41:27.910 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.913 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.913 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 14693 Fri Feb 22 11:41:27.981 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:27.984 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:27.984 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 16103 Fri Feb 22 11:41:28.136 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.139 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.139 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 17196 Fri Feb 22 11:41:28.249 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.252 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.252 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 19107 Fri Feb 22 11:41:28.447 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.450 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.450 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 19762 Fri Feb 22 11:41:28.519 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.522 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.522 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 20636 Fri Feb 22 11:41:28.625 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.628 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.628 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 20788 Fri Feb 22 11:41:28.646 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.649 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.649 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 23067 Fri Feb 22 11:41:28.887 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.890 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.890 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 24012 Fri Feb 22 11:41:28.990 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:28.993 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:28.993 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 24836 Fri Feb 22 11:41:29.081 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.084 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.084 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 25247 Fri Feb 22 11:41:29.132 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.135 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.135 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 25854 Fri Feb 22 11:41:29.215 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.218 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.218 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 26839 Fri Feb 22 11:41:29.345 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.348 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.348 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 28390 Fri Feb 22 11:41:29.536 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.540 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.540 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 29737 Fri Feb 22 11:41:29.704 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.707 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.707 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 29881 Fri Feb 22 11:41:29.727 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.729 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.729 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 29918 Fri Feb 22 11:41:29.736 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.739 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.739 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 31869 Fri Feb 22 11:41:29.978 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:29.982 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:29.982 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 32342 Fri Feb 22 11:41:30.041 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.045 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.045 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 32366 Fri Feb 22 11:41:30.051 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.054 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.054 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 32664 Fri Feb 22 11:41:30.096 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.099 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.099 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 33928 Fri Feb 22 11:41:30.252 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.255 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.256 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 35852 Fri Feb 22 11:41:30.501 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.504 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.504 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 36238 Fri Feb 22 11:41:30.556 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.559 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.559 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 37632 Fri Feb 22 11:41:30.692 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.694 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.694 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 38612 Fri Feb 22 11:41:30.772 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.774 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.774 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 40498 Fri Feb 22 11:41:30.920 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.922 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.922 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 41107 Fri Feb 22 11:41:30.975 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:30.977 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:30.977 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 42301 Fri Feb 22 11:41:31.072 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.074 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.074 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 46636 Fri Feb 22 11:41:31.428 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.431 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.431 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 46947 Fri Feb 22 11:41:31.460 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.463 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.463 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 48906 Fri Feb 22 11:41:31.631 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.633 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.633 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 50601 Fri Feb 22 11:41:31.773 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.775 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.775 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 51873 Fri Feb 22 11:41:31.875 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.878 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.878 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 52739 Fri Feb 22 11:41:31.946 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:31.948 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:31.948 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 53542 Fri Feb 22 11:41:32.015 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.017 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.017 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 54091 Fri Feb 22 11:41:32.062 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.065 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.065 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 55213 Fri Feb 22 11:41:32.167 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.170 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.170 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 55471 Fri Feb 22 11:41:32.196 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.199 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.199 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 55475 Fri Feb 22 11:41:32.206 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.208 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.208 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 55708 Fri Feb 22 11:41:32.231 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.234 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.234 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 56114 Fri Feb 22 11:41:32.274 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.277 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.277 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 57220 Fri Feb 22 11:41:32.383 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.385 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.385 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 58496 Fri Feb 22 11:41:32.512 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.516 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.516 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 58684 Fri Feb 22 11:41:32.536 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:32.539 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:32.539 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 63623 Fri Feb 22 11:41:33.120 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.124 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.124 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 64215 Fri Feb 22 11:41:33.172 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.174 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.174 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 64760 Fri Feb 22 11:41:33.226 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.228 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.228 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 66387 Fri Feb 22 11:41:33.357 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.359 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.359 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 66803 Fri Feb 22 11:41:33.394 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.396 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.396 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 67191 Fri Feb 22 11:41:33.429 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.431 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.431 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 67728 Fri Feb 22 11:41:33.476 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.478 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.478 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 68232 Fri Feb 22 11:41:33.525 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.527 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.527 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 69804 Fri Feb 22 11:41:33.650 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.653 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.653 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 69874 Fri Feb 22 11:41:33.661 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.663 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.663 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 71267 Fri Feb 22 11:41:33.779 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.781 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.781 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 71638 Fri Feb 22 11:41:33.813 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.815 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.815 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 72237 Fri Feb 22 11:41:33.867 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.869 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.869 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 72364 Fri Feb 22 11:41:33.881 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.883 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.883 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 73063 Fri Feb 22 11:41:33.942 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:33.944 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:33.944 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 77809 Fri Feb 22 11:41:34.405 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:34.408 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:34.408 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 79162 Fri Feb 22 11:41:34.540 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:34.543 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:34.543 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 81563 Fri Feb 22 11:41:34.839 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:34.842 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:34.842 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 86326 Fri Feb 22 11:41:35.443 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:35.447 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:35.447 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 86352 Fri Feb 22 11:41:35.455 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:35.458 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:35.458 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 86761 Fri Feb 22 11:41:35.509 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:35.512 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:35.512 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 87629 Fri Feb 22 11:41:35.621 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:35.625 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:35.625 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1 89471 Fri Feb 22 11:41:35.846 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:35.850 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:35.850 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 90120 Fri Feb 22 11:41:35.931 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:35.933 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:35.934 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 90653 Fri Feb 22 11:41:36.007 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.011 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.011 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 91719 Fri Feb 22 11:41:36.156 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.160 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.160 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 93041 Fri Feb 22 11:41:36.318 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.321 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.321 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 94060 Fri Feb 22 11:41:36.472 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.475 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.475 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 96400 Fri Feb 22 11:41:36.751 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.755 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.755 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1 97132 Fri Feb 22 11:41:36.848 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.851 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.851 [conn6] validating index 1: 
test.test_index_check9.$a_-1_b_-1_c_-1
97298 Fri Feb 22 11:41:36.876 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.879 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.879 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
97959 Fri Feb 22 11:41:36.963 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:36.966 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:36.966 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
98769 Fri Feb 22 11:41:37.069 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:37.073 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:37.073 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
98927 Fri Feb 22 11:41:37.097 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:37.101 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:37.101 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
99866 Fri Feb 22 11:41:37.220 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:37.224 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:37.224 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
Fri Feb 22 11:41:37.240 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:37.243 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:37.243 [conn6] validating index 1: test.test_index_check9.$a_-1_b_-1_c_-1
Fri Feb 22 11:41:37.247 [conn6] CMD: drop test.test_index_check9
Fri Feb 22 11:41:37.250 [conn6] build index test.test_index_check9 { _id: 1 }
Fri Feb 22 11:41:37.251 [conn6] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:41:37.251 [conn6] info: creating collection test.test_index_check9 on add index
Fri Feb 22 11:41:37.251 [conn6] build index test.test_index_check9 { a: -1.0, b: 1.0, c: 1.0 }
Fri Feb 22 11:41:37.252 [conn6] build index done. scanned 0 total records. 0.001 secs
43 Fri Feb 22 11:41:37.257 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:37.257 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:37.257 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
2409 Fri Feb 22 11:41:37.477 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:37.477 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:37.477 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
2850 Fri Feb 22 11:41:38.012 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:38.012 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:38.012 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
4352 Fri Feb 22 11:41:38.206 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:38.206 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:38.206 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
5817 Fri Feb 22 11:41:38.359 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:38.359 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:38.359 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
6067 Fri Feb 22 11:41:38.391 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:38.391 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:38.391 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
6719 Fri Feb 22 11:41:38.726 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:38.726 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:38.727 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:41:38.840 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: 1.0, b: -1.0, c: 1.0 }, $hint: { a: -1.0, b: 1.0, c: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:6720 scanAndOrder:1 keyUpdates:0 locks(micros) r:112939 nreturned:6720 reslen:265598 112ms
7665 Fri Feb 22 11:41:40.074 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:40.074 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:40.074 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
9823 Fri Feb 22 11:41:40.251 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:40.251 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:40.252 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:41:40.355 [conn6] query test.test_index_check9 query: { query: {}, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $hint: { a: -1.0, b: 1.0, c: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:9824 scanAndOrder:1 keyUpdates:0 locks(micros) r:103200 nreturned:9824 reslen:388280 103ms
2156 Fri Feb 22 11:41:42.144 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:42.144 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:42.144 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
2514 Fri Feb 22 11:41:42.313 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:42.314 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:42.314 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
5189 Fri Feb 22 11:41:42.591 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:42.591 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:42.591 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
5251 Fri Feb 22 11:41:42.606 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:42.607 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:42.607 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
6341 Fri Feb 22 11:41:42.840 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:42.840 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:42.840 [conn6] validating index 1:
test.test_index_check9.$a_-1_b_1_c_1 6719 Fri Feb 22 11:41:42.887 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:42.887 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:42.887 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 8764 Fri Feb 22 11:41:43.098 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:43.099 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:43.099 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 9401 Fri Feb 22 11:41:43.388 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:43.389 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:43.389 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 9821 Fri Feb 22 11:41:43.439 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:43.439 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:43.439 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 11946 Fri Feb 22 11:41:43.659 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:43.660 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:43.660 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 12431 Fri Feb 22 11:41:43.724 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:43.724 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:43.724 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 13789 Fri Feb 22 11:41:43.951 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:43.952 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:43.952 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 14375 Fri Feb 22 11:41:44.056 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.057 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.057 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 14832 
Fri Feb 22 11:41:44.162 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.163 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.163 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 15336 Fri Feb 22 11:41:44.235 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.235 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.235 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 16079 Fri Feb 22 11:41:44.344 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.344 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.344 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 18572 Fri Feb 22 11:41:44.757 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.757 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.758 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 19207 Fri Feb 22 11:41:44.861 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.861 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.861 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 19604 Fri Feb 22 11:41:44.940 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:44.940 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:44.940 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 19827 Fri Feb 22 11:41:45.101 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:45.101 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:45.102 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 21658 Fri Feb 22 11:41:45.377 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:45.377 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:45.377 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 22134 Fri Feb 22 11:41:45.454 [conn6] CMD: 
validate test.test_index_check9 Fri Feb 22 11:41:45.455 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:45.455 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
22191 Fri Feb 22 11:41:45.479 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:45.480 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:45.480 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
23734 Fri Feb 22 11:41:45.695 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:45.695 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:45.695 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
26278 Fri Feb 22 11:41:46.042 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:46.043 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:46.043 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
27698 Fri Feb 22 11:41:46.229 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:46.230 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:46.230 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
30757 Fri Feb 22 11:41:46.609 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:46.609 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:46.609 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
31619 Fri Feb 22 11:41:46.717 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:46.717 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:46.717 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
32864 Fri Feb 22 11:41:46.869 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:46.869 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:46.869 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
33799 Fri Feb 22 11:41:47.007 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:47.008 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:47.008 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:41:47.157 [conn6] query test.test_index_check9 query: { query: { b: { $gt: "bxqxrdf", $lt: "xqvekgyb" } }, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $hint: { a: -1.0, b: 1.0, c: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:9409 scanAndOrder:1 keyUpdates:0 numYields: 1 locks(micros) r:164516 nreturned:9389 reslen:378261 148ms
Fri Feb 22 11:41:47.363 [conn6] query test.test_index_check9 query: { query: { b: { $gt: "bxqxrdf", $lt: "xqvekgyb" } }, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:11205 scanAndOrder:1 keyUpdates:0 locks(micros) r:128250 nreturned:1 reslen:396 128ms
Fri Feb 22 11:41:47.487 [conn6] query test.test_index_check9 query: { query: { b: { $gt: "bxqxrdf", $lt: "xqvekgyb" } }, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:11205 scanAndOrder:1 keyUpdates:0 locks(micros) r:122801 nreturned:9389 reslen:378261 122ms
34227 Fri Feb 22 11:41:48.911 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:48.911 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:48.911 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
34766 Fri Feb 22 11:41:49.016 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:49.016 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:49.016 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
36495 Fri Feb 22 11:41:49.247 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:49.247 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:49.247 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
37258 Fri Feb 22 11:41:49.354 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:49.354 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22
11:41:49.354 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 38319 Fri Feb 22 11:41:49.518 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:49.518 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:49.518 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 39656 Fri Feb 22 11:41:50.037 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:50.037 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:50.037 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 41394 Fri Feb 22 11:41:50.248 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:50.249 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:50.249 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 42240 Fri Feb 22 11:41:50.385 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:50.386 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:50.386 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 44378 Fri Feb 22 11:41:50.681 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:50.681 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:50.681 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 45820 Fri Feb 22 11:41:50.917 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:50.917 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:50.917 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 46742 Fri Feb 22 11:41:51.113 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:51.113 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:51.113 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 48925 Fri Feb 22 11:41:51.386 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:51.386 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:51.387 [conn6] validating index 
1: test.test_index_check9.$a_-1_b_1_c_1 50420 Fri Feb 22 11:41:51.718 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:51.719 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:51.719 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 50494 Fri Feb 22 11:41:51.745 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:51.745 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:51.745 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 53411 Fri Feb 22 11:41:52.185 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:52.185 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:52.185 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 53458 Fri Feb 22 11:41:52.207 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:52.207 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:52.207 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 54149 Fri Feb 22 11:41:52.459 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:52.459 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:52.459 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 56113 Fri Feb 22 11:41:52.764 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:52.764 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:52.764 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 56543 Fri Feb 22 11:41:52.985 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:52.986 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:52.986 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 57001 Fri Feb 22 11:41:53.064 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:53.064 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:53.064 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 
57597 Fri Feb 22 11:41:53.163 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:53.163 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:53.163 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 58333 Fri Feb 22 11:41:53.282 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:53.282 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:53.282 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 59052 Fri Feb 22 11:41:53.392 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:53.392 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:53.392 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 59058 Fri Feb 22 11:41:53.503 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:53.503 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:53.503 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 62164 Fri Feb 22 11:41:53.912 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:53.912 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:53.913 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 62242 Fri Feb 22 11:41:54.204 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:54.204 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:54.204 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 63189 Fri Feb 22 11:41:54.389 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:54.389 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:54.390 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 65312 Fri Feb 22 11:41:54.722 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:54.722 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:54.722 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 65785 Fri Feb 22 11:41:54.805 [conn6] 
CMD: validate test.test_index_check9 Fri Feb 22 11:41:54.805 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:54.806 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:41:54.990 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 7.0, 5.0, 4.0, 9.0, 4.0, 4.0, 8.0, 4.0, 6.0, 6.0, 3.0, 6.0 ] } }, orderby: { a: 1.0, b: -1.0, c: 1.0 }, $hint: { a: -1.0, b: 1.0, c: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:9466 scanAndOrder:1 keyUpdates:0 locks(micros) r:184084 nreturned:9466 reslen:382464 184ms
Fri Feb 22 11:41:55.273 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 7.0, 5.0, 4.0, 9.0, 4.0, 4.0, 8.0, 4.0, 6.0, 6.0, 3.0, 6.0 ] } }, orderby: { a: 1.0, b: -1.0, c: 1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:9567 scanAndOrder:1 keyUpdates:0 locks(micros) r:190186 nreturned:1 reslen:1226 190ms
Fri Feb 22 11:41:55.423 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 7.0, 5.0, 4.0, 9.0, 4.0, 4.0, 8.0, 4.0, 6.0, 6.0, 3.0, 6.0 ] } }, orderby: { a: 1.0, b: -1.0, c: 1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:13581 scanAndOrder:1 keyUpdates:0 locks(micros) r:149333 nreturned:9466 reslen:382464 149ms
66665 Fri Feb 22 11:41:57.251 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:57.251 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:57.252 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
66817 Fri Feb 22 11:41:57.749 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:57.749 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:57.749 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
67165 Fri Feb 22 11:41:58.689 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:58.689 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:58.690 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
67258 Fri Feb 22 11:41:59.836 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:59.836 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:59.836 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
67561 Fri Feb 22 11:41:59.911 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:41:59.911 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:41:59.911 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:42:00.068 [conn6] query test.test_index_check9 query: { query: { b: { $gte: "armce", $lte: "yxqwaa" } }, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $hint: { a: -1.0, b: 1.0, c: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:12775 scanAndOrder:1 keyUpdates:0 locks(micros) r:156126 nreturned:12755 reslen:515399 156ms
Fri Feb 22 11:42:00.287 [conn6] query test.test_index_check9 query: { query: { b: { $gte: "armce", $lte: "yxqwaa" } }, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:13686 scanAndOrder:1 keyUpdates:0 locks(micros) r:128970 nreturned:1 reslen:396 128ms
Fri Feb 22 11:42:00.425 [conn6] query test.test_index_check9 query: { query: { b: { $gte: "armce", $lte: "yxqwaa" } }, orderby: { a: 1.0, b: 1.0, c: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:13686 scanAndOrder:1 keyUpdates:0 locks(micros) r:137484 nreturned:12755 reslen:515399 137ms
67587 Fri Feb 22 11:42:02.072 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:02.072 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:02.073 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
67726 Fri Feb 22 11:42:02.215 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:02.216 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:02.216 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
69801 Fri Feb 22 11:42:02.401 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:02.401 [conn6] validating index 0:
test.test_index_check9.$_id_ Fri Feb 22 11:42:02.401 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 70978 Fri Feb 22 11:42:02.520 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:02.520 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:02.520 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 71517 Fri Feb 22 11:42:02.581 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:02.581 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:02.581 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 73265 Fri Feb 22 11:42:03.218 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:03.218 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:03.219 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 74194 Fri Feb 22 11:42:03.364 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:03.365 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:03.365 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 75082 Fri Feb 22 11:42:03.512 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:03.512 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:03.512 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 75892 Fri Feb 22 11:42:03.615 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:03.615 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:03.615 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 76028 Fri Feb 22 11:42:03.661 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:03.661 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:03.661 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 76500 Fri Feb 22 11:42:03.719 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:03.719 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 
11:42:03.719 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
76527
Fri Feb 22 11:42:04.025 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:04.025 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:04.025 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
78982
Fri Feb 22 11:42:05.388 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:05.388 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:05.388 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
79732
Fri Feb 22 11:42:05.513 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:05.513 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:05.514 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:42:05.652 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 8.0, 8.0, 5.0, 8.0, 2.0, 9.0, 0.0 ] } }, orderby: { a: -1.0, b: 1.0, c: -1.0 }, $hint: { a: -1.0, b: 1.0, c: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:7375 scanAndOrder:1 keyUpdates:0 locks(micros) r:137396 nreturned:7372 reslen:298061 137ms
Fri Feb 22 11:42:05.865 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 8.0, 8.0, 5.0, 8.0, 2.0, 9.0, 0.0 ] } }, orderby: { a: -1.0, b: 1.0, c: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:7476 scanAndOrder:1 keyUpdates:0 numYields: 1 locks(micros) r:197640 nreturned:1 reslen:1106 142ms
Fri Feb 22 11:42:05.974 [conn6] query test.test_index_check9 query: { query: { a: { $in: [ 8.0, 8.0, 5.0, 8.0, 2.0, 9.0, 0.0 ] } }, orderby: { a: -1.0, b: 1.0, c: -1.0 }, $hint: { $natural: 1.0 } } ntoreturn:0 ntoskip:0 nscanned:14637 scanAndOrder:1 keyUpdates:0 locks(micros) r:108054 nreturned:7372 reslen:298061 108ms
82826
Fri Feb 22 11:42:07.380 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:07.381 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:07.381 [conn6] validating index 1:
test.test_index_check9.$a_-1_b_1_c_1 84171 Fri Feb 22 11:42:07.576 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:07.576 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:07.576 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 84272 Fri Feb 22 11:42:07.615 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:07.615 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:07.616 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 84633 Fri Feb 22 11:42:07.681 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:07.681 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:07.682 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 85393 Fri Feb 22 11:42:08.221 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:08.221 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:08.221 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 87251 Fri Feb 22 11:42:08.688 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:08.688 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:08.688 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 88251 Fri Feb 22 11:42:08.845 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:08.845 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:08.845 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 90556 Fri Feb 22 11:42:09.224 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:09.225 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:09.225 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 91345 Fri Feb 22 11:42:09.406 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:09.406 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:09.406 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 
91519 Fri Feb 22 11:42:09.452 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:09.452 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:09.452 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 92094 Fri Feb 22 11:42:09.547 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:09.547 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:09.547 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 92564 Fri Feb 22 11:42:09.925 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:09.925 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:09.925 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 94223 Fri Feb 22 11:42:10.170 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:10.170 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:10.170 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 94604 Fri Feb 22 11:42:10.287 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:10.287 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:10.287 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 94724 Fri Feb 22 11:42:10.321 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:10.322 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:10.322 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 96516 Fri Feb 22 11:42:10.535 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:10.535 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:10.535 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 96994 Fri Feb 22 11:42:10.786 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:10.787 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:10.787 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1 97087 Fri Feb 22 11:42:10.889 [conn6] 
CMD: validate test.test_index_check9
Fri Feb 22 11:42:10.889 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:10.889 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:42:11.261 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:11.262 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:11.262 [conn6] validating index 1: test.test_index_check9.$a_-1_b_1_c_1
Fri Feb 22 11:42:11.289 [conn6] CMD: drop test.test_index_check9
Fri Feb 22 11:42:11.292 [conn6] build index test.test_index_check9 { _id: 1 }
Fri Feb 22 11:42:11.293 [conn6] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 11:42:11.293 [conn6] info: creating collection test.test_index_check9 on add index
Fri Feb 22 11:42:11.293 [conn6] build index test.test_index_check9 { a: 1.0, b: -1.0, c: -1.0, d: -1.0 }
Fri Feb 22 11:42:11.295 [conn6] build index done. scanned 0 total records. 0.001 secs
213
Fri Feb 22 11:42:11.327 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:11.327 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:11.327 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
245
Fri Feb 22 11:42:11.334 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:11.334 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:11.334 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
354
Fri Feb 22 11:42:11.355 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:11.355 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:11.355 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
640
Fri Feb 22 11:42:11.398 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:11.398 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:11.398 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
3126
Fri Feb 22 11:42:11.747 [conn6] CMD:
validate test.test_index_check9 Fri Feb 22 11:42:11.747 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:11.747 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 4078 Fri Feb 22 11:42:11.884 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:11.884 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:11.884 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 4903 Fri Feb 22 11:42:11.998 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:11.998 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:11.998 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 5958 Fri Feb 22 11:42:12.150 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:12.150 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:12.151 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 6512 Fri Feb 22 11:42:12.234 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:12.234 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:12.234 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 7062 Fri Feb 22 11:42:12.320 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:12.321 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:12.321 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 7668 Fri Feb 22 11:42:12.404 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:12.404 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:12.404 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 9721 Fri Feb 22 11:42:12.773 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:12.773 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:12.773 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 1538 Fri Feb 22 11:42:13.043 [conn6] CMD: 
validate test.test_index_check9 Fri Feb 22 11:42:13.043 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.043 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 1671 Fri Feb 22 11:42:13.078 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:13.078 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.079 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 3746 Fri Feb 22 11:42:13.290 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:13.290 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.290 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 4324 Fri Feb 22 11:42:13.394 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:13.394 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.394 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 6219 Fri Feb 22 11:42:13.578 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:13.578 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.578 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 6585 Fri Feb 22 11:42:13.620 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:13.620 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.620 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 7044 Fri Feb 22 11:42:13.834 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:13.834 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:13.834 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 9907 Fri Feb 22 11:42:14.098 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.098 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.098 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 10104 Fri Feb 22 11:42:14.125 [conn6] 
CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.125 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.125 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 11366 Fri Feb 22 11:42:14.308 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.308 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.308 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 11533 Fri Feb 22 11:42:14.343 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.343 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.343 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 13099 Fri Feb 22 11:42:14.582 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.582 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.582 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 14848 Fri Feb 22 11:42:14.760 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.760 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.760 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 15827 Fri Feb 22 11:42:14.862 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.863 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.863 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 16413 Fri Feb 22 11:42:14.980 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:14.980 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:14.981 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 16983 Fri Feb 22 11:42:15.046 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.046 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.046 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 19707 Fri Feb 22 11:42:15.303 
[conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.303 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.303 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 20368 Fri Feb 22 11:42:15.376 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.376 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.376 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 20921 Fri Feb 22 11:42:15.435 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.435 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.435 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 22316 Fri Feb 22 11:42:15.569 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.569 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.569 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 24122 Fri Feb 22 11:42:15.751 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.751 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.751 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 24441 Fri Feb 22 11:42:15.833 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.833 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.833 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 24714 Fri Feb 22 11:42:15.869 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:15.869 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:15.869 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 26700 Fri Feb 22 11:42:16.057 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:16.057 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:16.057 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 27415 Fri Feb 22 
11:42:16.151 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:16.151 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:16.151 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 28224 Fri Feb 22 11:42:16.416 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:16.416 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:16.416 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 30895 Fri Feb 22 11:42:16.665 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:16.665 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:16.665 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 32910 Fri Feb 22 11:42:16.872 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:16.872 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:16.873 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 36455 Fri Feb 22 11:42:17.346 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:17.346 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:17.346 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 37039 Fri Feb 22 11:42:17.447 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:17.447 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:17.448 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 37930 Fri Feb 22 11:42:17.586 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:17.586 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:17.587 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 38076 Fri Feb 22 11:42:17.668 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:17.668 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:17.668 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 38143 
Fri Feb 22 11:42:17.899 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:17.899 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:17.899 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 41396 Fri Feb 22 11:42:18.271 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:18.271 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:18.271 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 44729 Fri Feb 22 11:42:18.593 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:18.593 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:18.593 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 45098 Fri Feb 22 11:42:18.637 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:18.637 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:18.637 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 46704 Fri Feb 22 11:42:18.793 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:18.793 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:18.793 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 46854 Fri Feb 22 11:42:18.819 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:18.819 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:18.819 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 47108 Fri Feb 22 11:42:18.914 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:18.914 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:18.914 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 48860 Fri Feb 22 11:42:19.217 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:19.218 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:19.218 [conn6] validating index 1: 
test.test_index_check9.$a_1_b_-1_c_-1_d_-1 49282 Fri Feb 22 11:42:19.272 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:19.272 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:19.272 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 50039 Fri Feb 22 11:42:19.443 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:19.443 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:19.443 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 50166 Fri Feb 22 11:42:19.470 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:19.471 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:19.471 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 50646 Fri Feb 22 11:42:20.628 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:20.628 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:20.628 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 54827 Fri Feb 22 11:42:21.360 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:21.360 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:21.360 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 56228 Fri Feb 22 11:42:21.872 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:21.872 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:21.873 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 56955 Fri Feb 22 11:42:22.207 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:22.207 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:22.207 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 58018 Fri Feb 22 11:42:22.375 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:22.375 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:22.375 [conn6] validating 
index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 58540 Fri Feb 22 11:42:22.598 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:22.598 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:22.598 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 59581 Fri Feb 22 11:42:22.723 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:22.723 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:22.723 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 60959 Fri Feb 22 11:42:22.856 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:22.856 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:22.856 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 61628 Fri Feb 22 11:42:22.978 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:22.978 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:22.978 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 63807 Fri Feb 22 11:42:23.226 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:23.227 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:23.227 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 64113 Fri Feb 22 11:42:23.291 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:23.291 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:23.291 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 64289 Fri Feb 22 11:42:23.491 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:23.491 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:23.491 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 65850 Fri Feb 22 11:42:23.740 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:23.740 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:23.741 [conn6] 
validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 65860 Fri Feb 22 11:42:23.907 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:23.907 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:23.907 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 66025 Fri Feb 22 11:42:23.952 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:23.952 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:23.952 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 68238 Fri Feb 22 11:42:24.360 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:24.360 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:24.361 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 68711 Fri Feb 22 11:42:24.469 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:24.469 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:24.469 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 70679 Fri Feb 22 11:42:24.679 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:24.679 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:24.679 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 71517 Fri Feb 22 11:42:24.944 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:24.944 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:24.944 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 71742 Fri Feb 22 11:42:25.250 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:25.250 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:25.250 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 72695 Fri Feb 22 11:42:25.401 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:25.401 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:25.401 
[conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 73834 Fri Feb 22 11:42:25.812 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:25.812 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:25.813 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 74641 Fri Feb 22 11:42:25.927 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:25.927 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:25.927 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 75232 Fri Feb 22 11:42:26.036 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.036 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.037 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 75342 Fri Feb 22 11:42:26.075 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.075 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.076 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 75487 Fri Feb 22 11:42:26.097 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.097 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.097 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 76970 Fri Feb 22 11:42:26.319 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.320 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.320 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 77198 Fri Feb 22 11:42:26.466 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.466 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.466 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 77584 Fri Feb 22 11:42:26.541 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.541 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 
11:42:26.541 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 78397 Fri Feb 22 11:42:26.713 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.713 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.713 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 78887 Fri Feb 22 11:42:26.820 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:26.820 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:26.820 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 81800 Fri Feb 22 11:42:27.208 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:27.208 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:27.208 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 83331 Fri Feb 22 11:42:27.359 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:27.359 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:27.359 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 83470 Fri Feb 22 11:42:27.419 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:27.419 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:27.419 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 84445 Fri Feb 22 11:42:27.566 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:27.566 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:27.566 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 86047 Fri Feb 22 11:42:27.748 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:27.749 [conn6] validating index 0: test.test_index_check9.$_id_ Fri Feb 22 11:42:27.749 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1 89165 Fri Feb 22 11:42:28.352 [conn6] CMD: validate test.test_index_check9 Fri Feb 22 11:42:28.353 [conn6] validating index 0: test.test_index_check9.$_id_ 
Fri Feb 22 11:42:28.353 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
90182
Fri Feb 22 11:42:28.525 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:28.526 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:28.526 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
90639
Fri Feb 22 11:42:28.631 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:28.631 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:28.631 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
92293
Fri Feb 22 11:42:28.894 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:28.894 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:28.894 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
92528
Fri Feb 22 11:42:28.934 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:28.934 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:28.935 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
92921
Fri Feb 22 11:42:28.984 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:28.984 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:28.984 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
94264
Fri Feb 22 11:42:29.802 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:29.803 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:29.803 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
94616
Fri Feb 22 11:42:29.914 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:29.914 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:29.914 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
96186
Fri Feb 22 11:42:30.160 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.160 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.160 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
97286
Fri Feb 22 11:42:30.353 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.353 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.353 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
97701
Fri Feb 22 11:42:30.550 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.550 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.551 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
97704
Fri Feb 22 11:42:30.576 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.576 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.576 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
98310
Fri Feb 22 11:42:30.714 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.714 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.715 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
98347
Fri Feb 22 11:42:30.803 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.803 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.803 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
98775
Fri Feb 22 11:42:30.890 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:30.890 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:30.890 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
Fri Feb 22 11:42:31.080 [conn6] CMD: validate test.test_index_check9
Fri Feb 22 11:42:31.080 [conn6] validating index 0: test.test_index_check9.$_id_
Fri Feb 22 11:42:31.080 [conn6] validating index 1: test.test_index_check9.$a_1_b_-1_c_-1_d_-1
Fri Feb 22 11:42:31.124 [conn6] end connection 127.0.0.1:54800 (0 connections now open)
2.7102 minutes
Fri Feb 22 11:42:31.147 [initandlisten] connection accepted from 127.0.0.1:54646 #7 (1 connection now open)
Fri Feb 22 11:42:31.148 [conn7] end connection 127.0.0.1:54646 (0 connections now open)
*******************************************
Test : index_hammer1.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_hammer1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_hammer1.js";TestData.testFile = "index_hammer1.js";TestData.testName = "index_hammer1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:42:31 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:42:31.293 [initandlisten] connection accepted from 127.0.0.1:61838 #8 (1 connection now open)
null
Fri Feb 22 11:42:31.304 [conn8] CMD: drop test.index_hammer1
Fri Feb 22 11:42:31.305 [conn8] build index test.index_hammer1 { _id: 1 }
Fri Feb 22 11:42:31.308 [conn8] build index done. scanned 0 total records. 0.003 secs
Fri Feb 22 11:42:31.846 [initandlisten] connection accepted from 127.0.0.1:56794 #9 (2 connections now open)
Fri Feb 22 11:42:31.846 [conn9] end connection 127.0.0.1:56794 (1 connection now open)
Fri Feb 22 11:42:31.846 [initandlisten] connection accepted from 127.0.0.1:57335 #10 (2 connections now open)
Fri Feb 22 11:42:31.847 [initandlisten] connection accepted from 127.0.0.1:57651 #11 (3 connections now open)
Fri Feb 22 11:42:31.847 [initandlisten] connection accepted from 127.0.0.1:36384 #12 (4 connections now open)
Fri Feb 22 11:42:31.847 [initandlisten] connection accepted from 127.0.0.1:40421 #13 (5 connections now open)
Fri Feb 22 11:42:31.847 [initandlisten] connection accepted from 127.0.0.1:62179 #14 (6 connections now open)
Fri Feb 22 11:42:31.951 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:31.987 [conn11] build index done. scanned 10000 total records. 0.036 secs
Fri Feb 22 11:42:32.107 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1451975930107791 ntoreturn:0 keyUpdates:0 locks(micros) r:111078 nreturned:4898 reslen:215532 111ms
Fri Feb 22 11:42:32.117 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1452017935499114 ntoreturn:0 keyUpdates:0 locks(micros) r:116233 nreturned:4898 reslen:215532 116ms
Fri Feb 22 11:42:32.125 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1452018447585256 ntoreturn:0 keyUpdates:0 locks(micros) r:117714 nreturned:4898 reslen:215532 117ms
Fri Feb 22 11:42:32.227 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1452490570729146 ntoreturn:0 keyUpdates:0 locks(micros) r:115391 nreturned:4898 reslen:215532 115ms
Fri Feb 22 11:42:32.237 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1452533841924613 ntoreturn:0 keyUpdates:0 locks(micros) r:115864 nreturned:4898 reslen:215532 115ms
Fri Feb 22 11:42:32.244 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1452575208535838 ntoreturn:0 keyUpdates:0 locks(micros) r:114866 nreturned:4898 reslen:215532 114ms
Fri Feb 22 11:42:32.349 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1453004790374479 ntoreturn:0 keyUpdates:0 locks(micros) r:117612 nreturned:4898 reslen:215532 117ms
Fri Feb 22 11:42:32.359 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1453047571465911 ntoreturn:0 keyUpdates:0 locks(micros) r:117364 nreturned:4898 reslen:215532 117ms
Fri Feb 22 11:42:32.366 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1453090924746585 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:228937 nreturned:4898 reslen:215532 117ms
Fri Feb 22 11:42:32.470 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1453520370182801 ntoreturn:0 keyUpdates:0 numYields: 2 locks(micros) r:127473 nreturned:4898 reslen:215532 116ms
Fri Feb 22 11:42:32.479 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1453563754872211 ntoreturn:0 keyUpdates:0 numYields: 2 locks(micros) r:220205 nreturned:4898 reslen:215532 114ms
Fri Feb 22 11:42:32.488 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1453607469049979 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:214281 nreturned:4898 reslen:215532 116ms
Fri Feb 22 11:42:32.592 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454036586107313 ntoreturn:0 keyUpdates:0 locks(micros) r:116718 nreturned:4898 reslen:215532 116ms
Fri Feb 22 11:42:32.601 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454078716006462 ntoreturn:0 keyUpdates:0 locks(micros) r:117363 nreturned:4898 reslen:215532 117ms
Fri Feb 22 11:42:32.610 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454121386649012 ntoreturn:0 keyUpdates:0 locks(micros) r:117221 nreturned:4898 reslen:215532 117ms
Fri Feb 22 11:42:32.620 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:32.660 [conn11] build index done. scanned 10000 total records. 0.039 secs
Fri Feb 22 11:42:32.665 [conn13] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454379534989603 ntoreturn:0 keyUpdates:0 numYields: 2 locks(micros) r:134780 nreturned:4898 reslen:215532 109ms
Fri Feb 22 11:42:32.728 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454551803453154 ntoreturn:0 keyUpdates:0 numYields: 2 locks(micros) r:121287 nreturned:4898 reslen:215532 131ms
Fri Feb 22 11:42:32.735 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454593774838219 ntoreturn:0 keyUpdates:0 numYields: 2 locks(micros) r:109861 nreturned:4898 reslen:215532 129ms
Fri Feb 22 11:42:32.742 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1454637584639387 ntoreturn:0 keyUpdates:0 numYields: 2 locks(micros) r:98127 nreturned:4898 reslen:215532 127ms
Fri Feb 22 11:42:33.215 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1456741746253608 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:181967 nreturned:4898 reslen:215532 108ms
Fri Feb 22 11:42:33.215 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1456742501294467 ntoreturn:0 keyUpdates:0 locks(micros) r:107616 nreturned:4898 reslen:215532 107ms
Fri Feb 22 11:42:33.219 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1456784517299099 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:179372 nreturned:4898 reslen:215532 108ms
Fri Feb 22 11:42:33.330 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:33.331 [conn12] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1457215364260577 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:219506 nreturned:4736 reslen:208404 110ms
Fri Feb 22 11:42:33.331 [conn10] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1457257021384199 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:211340 nreturned:4608 reslen:202772 106ms
Fri Feb 22 11:42:33.331 [conn14] getmore test.index_hammer1 query: { x: { $gt: 5000.0 }, y: { $gt: 5000.0 } } cursorid:1457215648394561 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:221275 nreturned:4736 reslen:208404 111ms
Fri Feb 22 11:42:33.332 [conn13] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:33.332 [conn13] dropIndexes: x_1 not found
Fri Feb 22 11:42:33.332 dropIndex failed: { ok: 0.0, errmsg: "index not found" }
Fri Feb 22 11:42:33.347 [conn13] end connection 127.0.0.1:40421 (5 connections now open)
Fri Feb 22 11:42:33.689 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:33.690 [conn12] ClientCursor::find(): cursor not found in map '1459104573300947' (ok after a drop)
Fri Feb 22 11:42:33.710 [conn12] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:33.710 [conn12] dropIndexes: x_1 not found
Fri Feb 22 11:42:33.710 dropIndex failed: { ok: 0.0, errmsg: "index not found" }
Fri Feb 22 11:42:33.710 [conn12] end connection 127.0.0.1:36384 (4 connections now open)
Fri Feb 22 11:42:33.720 [conn10] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:33.720 [conn10] dropIndexes: x_1 not found
Fri Feb 22 11:42:33.720 dropIndex failed: { ok: 0.0, errmsg: "index not found" }
Fri Feb 22 11:42:33.720 [conn10] end connection 127.0.0.1:57335 (3 connections now open)
Fri Feb 22 11:42:33.730 [conn14] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:33.730 [conn14] dropIndexes: x_1 not found
Fri Feb 22 11:42:33.730 dropIndex failed: { ok: 0.0, errmsg: "index not found" }
Fri Feb 22 11:42:33.730 [conn14] end connection 127.0.0.1:62179 (2 connections now open)
Fri Feb 22 11:42:33.886 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:33.921 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:34.094 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:34.131 [conn11] build index done. scanned 10000 total records. 0.037 secs
Fri Feb 22 11:42:34.302 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:34.470 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:34.669 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:34.703 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:34.872 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:34.907 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:35.082 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:35.254 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:35.451 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:35.485 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:35.654 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:35.689 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:35.858 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:36.026 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:36.235 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:36.270 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:36.445 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:36.480 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:36.649 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:36.825 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:37.026 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:37.059 [conn11] build index done. scanned 10000 total records. 0.032 secs
Fri Feb 22 11:42:37.238 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:37.278 [conn11] build index done. scanned 10000 total records. 0.04 secs
Fri Feb 22 11:42:37.470 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:37.652 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:37.850 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:37.883 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:38.051 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:38.085 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:38.262 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:38.444 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:38.656 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:38.691 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:38.859 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:38.895 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:39.065 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:39.236 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:39.434 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:39.468 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:39.643 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:39.678 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:39.854 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:40.023 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:40.226 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:40.261 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:40.436 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:40.473 [conn11] build index done. scanned 10000 total records. 0.036 secs
Fri Feb 22 11:42:40.653 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:40.828 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:41.026 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:41.060 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:41.225 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:41.259 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:41.425 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:41.591 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:41.823 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:41.864 [conn11] build index done. scanned 10000 total records. 0.04 secs
Fri Feb 22 11:42:42.064 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:42.105 [conn11] build index done. scanned 10000 total records. 0.041 secs
Fri Feb 22 11:42:42.332 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:42.561 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:42.776 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:42.810 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:42.985 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:43.020 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:43.191 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:43.359 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:43.558 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:43.592 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:43.760 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:43.795 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:43.964 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:44.139 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:44.337 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:44.371 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:44.540 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:44.575 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:44.745 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:44.919 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:45.118 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:45.168 [conn11] build index done. scanned 10000 total records. 0.049 secs
Fri Feb 22 11:42:45.432 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:45.487 [conn11] build index done. scanned 10000 total records. 0.054 secs
Fri Feb 22 11:42:45.751 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:46.012 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:46.252 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:46.288 [conn11] build index done. scanned 10000 total records. 0.036 secs
Fri Feb 22 11:42:46.458 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:46.493 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:46.679 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:46.847 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:47.047 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:47.084 [conn11] build index done. scanned 10000 total records. 0.036 secs
Fri Feb 22 11:42:47.252 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:47.287 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:47.465 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:47.635 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:47.847 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:47.881 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:48.049 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:48.084 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:48.254 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:48.423 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:48.629 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:48.663 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:48.833 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:48.868 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:49.039 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:49.257 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:49.569 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:49.623 [conn11] build index done. scanned 10000 total records. 0.053 secs
Fri Feb 22 11:42:49.894 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:49.950 [conn11] build index done. scanned 10000 total records. 0.055 secs
Fri Feb 22 11:42:50.188 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:50.357 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:50.555 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:50.589 [conn11] build index done. scanned 10000 total records. 0.033 secs
Fri Feb 22 11:42:50.760 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:50.797 [conn11] build index done. scanned 10000 total records. 0.036 secs
Fri Feb 22 11:42:50.968 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:51.137 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:51.334 [conn11] build index test.index_hammer1 { x: 1.0 }
Fri Feb 22 11:42:51.369 [conn11] build index done. scanned 10000 total records. 0.034 secs
Fri Feb 22 11:42:51.537 [conn11] build index test.index_hammer1 { y: 1.0 }
Fri Feb 22 11:42:51.572 [conn11] build index done. scanned 10000 total records. 0.035 secs
Fri Feb 22 11:42:51.741 [conn11] CMD: dropIndexes test.index_hammer1
Fri Feb 22 11:42:51.856 [conn11] end connection 127.0.0.1:57651 (1 connection now open)
Fri Feb 22 11:42:51.856 [initandlisten] connection accepted from 127.0.0.1:54158 #15 (2 connections now open)
Fri Feb 22 11:42:51.856 [conn15] end connection 127.0.0.1:54158 (1 connection now open)
{ "note" : "values per second", "errCount" : NumberLong(0), "trapped" : "error: not implemented", "queryLatencyAverageMicros" : 23360.814335060448, "insert" : 2.7, "query" : 57.9, "update" : 0, "delete" : 0, "getmore" : 57.8, "command" : 3.1 }
Fri Feb 22 11:42:51.869 [conn8] end connection 127.0.0.1:61838 (0 connections now open)
20.7409 seconds
Fri Feb 22 11:42:51.890 [initandlisten] connection accepted from 127.0.0.1:33355 #16 (1 connection now open)
Fri Feb 22 11:42:51.891 [conn16] end connection 127.0.0.1:33355 (0 connections now open)
*******************************************
Test : index_killop.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_killop.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_killop.js";TestData.testFile = "index_killop.js";TestData.testName = "index_killop";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:42:51 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:42:52.048 [initandlisten] connection accepted from 127.0.0.1:58005 #17 (1 connection now open)
null
Fri Feb 22 11:42:52.053 [conn17] CMD: drop test.jstests_slownightly_index_killop
Fri Feb 22 11:42:52.054 [conn17] build index test.jstests_slownightly_index_killop { _id: 1 }
Fri Feb 22 11:42:52.054 [conn17] build index done. scanned 0 total records. 0 secs
Fri Feb 22 11:43:13.515 [FileAllocator] allocating new datafile /data/db/sconsTests/test.2, filling with zeroes...
Fri Feb 22 11:43:13.515 [FileAllocator] done allocating datafile /data/db/sconsTests/test.2, size: 256MB, took 0 secs
Fri Feb 22 11:43:45.079 [initandlisten] connection accepted from 127.0.0.1:40736 #18 (2 connections now open)
Fri Feb 22 11:43:45.080 [conn18] build index test.jstests_slownightly_index_killop { a: 1.0 }
Fri Feb 22 11:43:45.081 [conn17] going to kill op: op: 1665466.0
Fri Feb 22 11:43:45.284 [initandlisten] connection accepted from 127.0.0.1:35956 #19 (3 connections now open)
Fri Feb 22 11:43:45.285 [conn19] build index test.jstests_slownightly_index_killop { a: 1.0 } background
Fri Feb 22 11:43:45.285 [conn17] going to kill op: op: 1665470.0
Fri Feb 22 11:43:45.499 [conn17] end connection 127.0.0.1:58005 (2 connections now open)
Fri Feb 22 11:43:45.499 [conn18] end connection 127.0.0.1:40736 (2 connections now open)
Fri Feb 22 11:43:45.499 [conn19] end connection 127.0.0.1:35956 (2 connections now open)
53.6280 seconds
Fri Feb 22 11:43:45.520 [initandlisten] connection accepted from 127.0.0.1:59242 #20 (1 connection now open)
Fri Feb 22 11:43:45.521 [conn20] end connection 127.0.0.1:59242 (0 connections now open)
*******************************************
Test : index_multi.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js";TestData.testFile = "index_multi.js";TestData.testName = "index_multi";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:43:45 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:43:45.675 [initandlisten] connection accepted from 127.0.0.1:56795 #21 (1 connection now open)
null
setting random seed: 1361533425682
Populate the collection with random data
inserted 0
Fri Feb 22 11:43:45.686 [conn21] build index test.index_multi { _id: 1 }
Fri Feb 22 11:43:45.688 [conn21] build index done. scanned 0 total records. 0.001 secs
inserted 1000
inserted 2000
inserted 3000
inserted 4000
inserted 5000
inserted 6000
inserted 7000
inserted 8000
inserted 9000
Create 3 triple indexes
Fri Feb 22 11:43:54.115 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field90" : 1, "field91" : 1, "field92" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.118 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field91" : 1, "field92" : 1, "field93" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.121 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field92" : 1, "field93" : 1, "field94" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Create 30 compound indexes
Fri Feb 22 11:43:54.123 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field30" : 1, "field31" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.127 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field32" : 1, "field33" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.130 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field34" : 1, "field35" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.133 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field36" : 1, "field37" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28610| MongoDB shell version: 2.4.0-rc1-pre-
sh28611| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.137 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field38" : 1, "field39" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28612| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.140 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field40" : 1, "field41" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28613| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.142 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field42" : 1, "field43" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.146 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field44" : 1, "field45" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28614| MongoDB shell version: 2.4.0-rc1-pre-
sh28616| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.149 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field46" : 1, "field47" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28618| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.153 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field48" : 1,
"field49" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28619| MongoDB shell version: 2.4.0-rc1-pre- sh28617| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.158 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field50" : 1, "field51" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.162 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field52" : 1, "field53" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28620| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.166 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field54" : 1, "field55" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28621| 
MongoDB shell version: 2.4.0-rc1-pre- sh28623| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.172 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field56" : 1, "field57" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28622| MongoDB shell version: 2.4.0-rc1-pre- sh28625| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.177 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field58" : 1, "field59" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28626| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.182 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field60" : 1, "field61" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.189 shell: 
started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field62" : 1, "field63" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.198 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field64" : 1, "field65" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.199 [initandlisten] connection accepted from 127.0.0.1:34462 #22 (2 connections now open) sh28611| connecting to: 127.0.0.1:27999/admin sh28628| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.201 [initandlisten] connection accepted from 127.0.0.1:44540 #23 (3 connections now open) Fri Feb 22 11:43:54.202 [initandlisten] connection accepted from 127.0.0.1:34139 #24 (4 connections now open) Fri Feb 22 11:43:54.203 [conn22] build index test.index_multi { field91: 1.0, field92: 1.0, field93: 1.0 } background sh28613| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.210 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : 
"index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field66" : 1, "field67" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.214 [conn24] build index test.index_multi { field92: 1.0, field93: 1.0, field94: 1.0 } background sh28630| MongoDB shell version: 2.4.0-rc1-pre- sh28627| MongoDB shell version: 2.4.0-rc1-pre- sh28629| MongoDB shell version: 2.4.0-rc1-pre- sh28612| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.216 [initandlisten] connection accepted from 127.0.0.1:40571 #25 (5 connections now open) Fri Feb 22 11:43:54.219 [initandlisten] connection accepted from 127.0.0.1:36553 #26 (6 connections now open) Fri Feb 22 11:43:54.220 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field68" : 1, "field69" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28631| MongoDB shell version: 2.4.0-rc1-pre- sh28618| connecting to: 127.0.0.1:27999/admin sh28619| connecting to: 127.0.0.1:27999/admin sh28614| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.222 [initandlisten] connection accepted from 127.0.0.1:59923 #27 (7 connections now open) Fri Feb 22 11:43:54.224 [initandlisten] connection accepted from 127.0.0.1:39802 #28 (8 connections now open) Fri Feb 22 11:43:54.224 [initandlisten] connection accepted from 127.0.0.1:45623 #29 (9 connections now open) sh28616| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.225 
shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field70" : 1, "field71" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.227 [initandlisten] connection accepted from 127.0.0.1:42960 #30 (10 connections now open) sh28610| connecting to: 127.0.0.1:27999/admin sh28632| MongoDB shell version: 2.4.0-rc1-pre- sh28620| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.230 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field72" : 1, "field73" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.235 [conn30] build index test.index_multi { field42: 1.0, field43: 1.0 } background sh28633| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.235 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field74" 
: 1, "field75" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28621| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.236 [initandlisten] connection accepted from 127.0.0.1:42612 #31 (11 connections now open) Fri Feb 22 11:43:54.239 [conn31] build index test.index_multi { field44: 1.0, field45: 1.0 } background Fri Feb 22 11:43:54.240 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field76" : 1, "field77" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28634| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.244 [initandlisten] connection accepted from 127.0.0.1:62880 #32 (12 connections now open) sh28617| connecting to: 127.0.0.1:27999/admin sh28626| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.244 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field78" : 1, "field79" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.245 [initandlisten] connection accepted from 127.0.0.1:55338 #33 (13 connections now open) sh28635| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.251 [initandlisten] connection 
accepted from 127.0.0.1:47308 #34 (14 connections now open) sh28623| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.254 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field80" : 1, "field81" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.254 [conn34] build index test.index_multi { field48: 1.0, field49: 1.0 } background sh28637| MongoDB shell version: 2.4.0-rc1-pre- sh28636| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.257 [initandlisten] connection accepted from 127.0.0.1:65093 #35 (15 connections now open) sh28629| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.260 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field82" : 1, "field83" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.261 [conn35] build index test.index_multi { field58: 1.0, field59: 1.0 } background sh28622| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.261 [initandlisten] connection accepted from 127.0.0.1:33894 #36 (16 connections now open) Fri Feb 22 11:43:54.264 [initandlisten] connection accepted from 127.0.0.1:49465 #37 (17 connections now open) sh28630| 
connecting to: 127.0.0.1:27999/admin sh28639| MongoDB shell version: 2.4.0-rc1-pre- sh28638| MongoDB shell version: 2.4.0-rc1-pre- sh28625| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.265 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field84" : 1, "field85" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.266 [initandlisten] connection accepted from 127.0.0.1:54608 #38 (18 connections now open) Fri Feb 22 11:43:54.268 [conn37] build index test.index_multi { field60: 1.0, field61: 1.0 } background sh28628| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.271 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field86" : 1, "field87" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.271 [initandlisten] connection accepted from 127.0.0.1:53471 #39 (19 connections now open) Fri Feb 22 11:43:54.275 [initandlisten] connection accepted from 127.0.0.1:47833 #40 (20 connections now open) sh28627| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.276 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { 
"testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field88" : 1, "field89" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.276 [initandlisten] connection accepted from 127.0.0.1:49877 #41 (21 connections now open) sh28631| connecting to: 127.0.0.1:27999/admin Create 30 indexes Fri Feb 22 11:43:54.277 [conn39] build index test.index_multi { field56: 1.0, field57: 1.0 } background sh28644| MongoDB shell version: 2.4.0-rc1-pre- sh28640| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.281 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field0" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28646| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.283 [conn40] build index test.index_multi { field54: 1.0, field55: 1.0 } background Fri Feb 22 11:43:54.286 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ 
"field1" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.287 [conn41] build index test.index_multi { field62: 1.0, field63: 1.0 } background Fri Feb 22 11:43:54.290 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field2" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28647| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.295 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field3" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.301 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field4" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.305 [initandlisten] connection 
accepted from 127.0.0.1:49093 #42 (22 connections now open) sh28650| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.306 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field5" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28635| connecting to: 127.0.0.1:27999/admin sh28634| connecting to: 127.0.0.1:27999/admin sh28648| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.307 [initandlisten] connection accepted from 127.0.0.1:58716 #43 (23 connections now open) Fri Feb 22 11:43:54.308 [initandlisten] connection accepted from 127.0.0.1:36125 #44 (24 connections now open) Fri Feb 22 11:43:54.311 [conn43] build index test.index_multi { field70: 1.0, field71: 1.0 } background Fri Feb 22 11:43:54.314 [initandlisten] connection accepted from 127.0.0.1:39901 #45 (25 connections now open) sh28637| connecting to: 127.0.0.1:27999/admin sh28632| connecting to: 127.0.0.1:27999/admin sh28652| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.315 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field6" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28649| MongoDB shell version: 2.4.0-rc1-pre- Fri 
Feb 22 11:43:54.318 [initandlisten] connection accepted from 127.0.0.1:40000 #46 (26 connections now open) Fri Feb 22 11:43:54.318 [conn45] build index test.index_multi { field64: 1.0, field65: 1.0 } background sh28633| connecting to: 127.0.0.1:27999/admin sh28655| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.321 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field7" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28654| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.326 [conn46] build index test.index_multi { field66: 1.0, field67: 1.0 } background Fri Feb 22 11:43:54.327 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field8" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.333 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = 
db.getSiblingDB('test');db.index_multi.createIndex({ "field9" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.334 [initandlisten] connection accepted from 127.0.0.1:48103 #47 (27 connections now open) sh28636| connecting to: 127.0.0.1:27999/admin sh28656| MongoDB shell version: 2.4.0-rc1-pre- sh28657| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.340 [initandlisten] connection accepted from 127.0.0.1:47703 #48 (28 connections now open) Fri Feb 22 11:43:54.341 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field10" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.342 [initandlisten] connection accepted from 127.0.0.1:46411 #49 (29 connections now open) Fri Feb 22 11:43:54.343 [initandlisten] connection accepted from 127.0.0.1:47056 #50 (30 connections now open) sh28646| connecting to: 127.0.0.1:27999/admin sh28644| connecting to: 127.0.0.1:27999/admin sh28639| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.346 [conn49] build index test.index_multi { field82: 1.0, field83: 1.0 } background sh28658| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.349 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = 
db.getSiblingDB('test');db.index_multi.createIndex({ "field11" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28659| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.352 [conn50] build index test.index_multi { field78: 1.0, field79: 1.0 } background Fri Feb 22 11:43:54.353 [initandlisten] connection accepted from 127.0.0.1:34248 #51 (31 connections now open) Fri Feb 22 11:43:54.354 [initandlisten] connection accepted from 127.0.0.1:36006 #52 (32 connections now open) sh28638| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.355 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field12" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin sh28647| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.358 [conn52] build index test.index_multi { field76: 1.0, field77: 1.0 } background sh28660| MongoDB shell version: 2.4.0-rc1-pre- sh28661| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.362 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field13" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.369 [initandlisten] 
connection accepted from 127.0.0.1:46529 #53 (33 connections now open) Fri Feb 22 11:43:54.370 [initandlisten] connection accepted from 127.0.0.1:52623 #54 (34 connections now open) sh28640| connecting to: 127.0.0.1:27999/admin sh28652| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.373 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field14" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.373 [initandlisten] connection accepted from 127.0.0.1:39685 #55 (35 connections now open) sh28650| connecting to: 127.0.0.1:27999/admin sh28662| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 11:43:54.378 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field15" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin Fri Feb 22 11:43:54.380 [initandlisten] connection accepted from 127.0.0.1:51794 #56 (36 connections now open) Fri Feb 22 11:43:54.382 [conn55] build index test.index_multi { field2: 1.0 } background sh28655| connecting to: 127.0.0.1:27999/admin Fri Feb 22 11:43:54.384 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : 
"/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field16" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28663| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.388 [initandlisten] connection accepted from 127.0.0.1:60673 #57 (37 connections now open)
Fri Feb 22 11:43:54.390 [initandlisten] connection accepted from 127.0.0.1:41057 #58 (38 connections now open)
sh28657| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.392 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field17" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28665| MongoDB shell version: 2.4.0-rc1-pre-
sh28648| connecting to: 127.0.0.1:27999/admin
sh28664| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.396 [initandlisten] connection accepted from 127.0.0.1:44698 #59 (39 connections now open)
sh28649| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.399 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field18" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.400 [conn59] build index test.index_multi { field0: 1.0 } background
Fri Feb 22 11:43:54.405 [initandlisten] connection accepted from 127.0.0.1:34676 #60 (40 connections now open)
sh28666| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.406 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field19" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28659| connecting to: 127.0.0.1:27999/admin
sh28667| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.410 [conn60] build index test.index_multi { field8: 1.0 } background
Fri Feb 22 11:43:54.411 [initandlisten] connection accepted from 127.0.0.1:60190 #61 (41 connections now open)
sh28654| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.412 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field20" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.417 [conn61] build index test.index_multi { field3: 1.0 } background
Fri Feb 22 11:43:54.421 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field21" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28669| MongoDB shell version: 2.4.0-rc1-pre-
sh28668| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.424 [initandlisten] connection accepted from 127.0.0.1:34154 #62 (42 connections now open)
Fri Feb 22 11:43:54.428 [initandlisten] connection accepted from 127.0.0.1:40839 #63 (43 connections now open)
sh28660| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.429 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field22" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.431 [conn62] build index test.index_multi { field9: 1.0 } background
sh28656| connecting to: 127.0.0.1:27999/admin
sh28670| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.437 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field23" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28671| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.442 [initandlisten] connection accepted from 127.0.0.1:65225 #64 (44 connections now open)
Fri Feb 22 11:43:54.443 [initandlisten] connection accepted from 127.0.0.1:46369 #65 (45 connections now open)
Fri Feb 22 11:43:54.447 [initandlisten] connection accepted from 127.0.0.1:54782 #66 (46 connections now open)
Fri Feb 22 11:43:54.448 [conn65] build index test.index_multi { field14: 1.0 } background
sh28658| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.449 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field24" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28672| MongoDB shell version: 2.4.0-rc1-pre-
sh28661| connecting to: 127.0.0.1:27999/admin
sh28665| connecting to: 127.0.0.1:27999/admin
sh28673| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.456 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field25" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.461 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field26" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.461 [initandlisten] connection accepted from 127.0.0.1:48527 #67 (47 connections now open)
Fri Feb 22 11:43:54.461 [initandlisten] connection accepted from 127.0.0.1:63928 #68 (48 connections now open)
sh28667| connecting to: 127.0.0.1:27999/admin
sh28669| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.465 [conn68] build index test.index_multi { field18: 1.0 } background
Fri Feb 22 11:43:54.466 [initandlisten] connection accepted from 127.0.0.1:60918 #69 (49 connections now open)
Fri Feb 22 11:43:54.467 [initandlisten] connection accepted from 127.0.0.1:34550 #70 (50 connections now open)
Fri Feb 22 11:43:54.467 [initandlisten] connection accepted from 127.0.0.1:64402 #71 (51 connections now open)
sh28674| MongoDB shell version: 2.4.0-rc1-pre-
sh28662| connecting to: 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.469 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field27" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Fri Feb 22 11:43:54.469 [initandlisten] connection accepted from 127.0.0.1:47966 #72 (52 connections now open)
sh28666| connecting to: 127.0.0.1:27999/admin
sh28663| connecting to: 127.0.0.1:27999/admin
sh28664| connecting to: 127.0.0.1:27999/admin
sh28675| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.470 [conn70] build index test.index_multi { field15: 1.0 } background
Fri Feb 22 11:43:54.475 [conn72] build index test.index_multi { field13: 1.0 } background
Fri Feb 22 11:43:54.475 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field28" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
sh28677| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 11:43:54.482 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_multi.js", "testFile" : "index_multi.js", "testName" : "index_multi", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.index_multi.createIndex({ "field29" : 1 }, {background:true});db.results.insert(db.runCommand({getlasterror:1})); 127.0.0.1:27999/admin
Do some sets and unsets
Fri Feb 22 11:43:54.496 [initandlisten] connection accepted from 127.0.0.1:57799 #73 (53 connections now open)
Fri Feb 22 11:43:54.499 [conn73] build index test.index_multi { field17: 1.0 } background
Fri Feb 22 11:43:54.512 [initandlisten] connection accepted from 127.0.0.1:52672 #74 (54 connections now open)
Fri Feb 22 11:43:54.517 [conn74] build index test.index_multi { field20: 1.0 } background
Fri Feb 22 11:43:54.520 [initandlisten] connection accepted from 127.0.0.1:60890 #75 (55 connections now open)
Fri Feb 22 11:43:54.521 [initandlisten] connection accepted from 127.0.0.1:34392 #76 (56 connections now open)
Fri Feb 22 11:43:54.541 [initandlisten] connection accepted from 127.0.0.1:45520 #77 (57 connections now open)
Fri Feb 22 11:43:54.541 [conn76] build index test.index_multi { field19: 1.0 } background
Fri Feb 22 11:43:54.542 [initandlisten] connection accepted from 127.0.0.1:33356 #78 (58 connections now open)
Fri Feb 22 11:43:54.545 [conn77] build index test.index_multi { field21: 1.0 } background
Fri Feb 22 11:43:54.546 [initandlisten] connection accepted from 127.0.0.1:52532 #79 (59 connections now open)
Fri Feb 22 11:43:54.547 [initandlisten] connection accepted from 127.0.0.1:64592 #80 (60 connections now open)
Fri Feb 22 11:43:54.551 [conn80] build index test.index_multi { field27: 1.0 } background
Fri Feb 22 11:43:54.554 [initandlisten] connection accepted from 127.0.0.1:44842 #81 (61 connections now open)
Fri Feb 22 11:43:54.554 [initandlisten] connection accepted from 127.0.0.1:34546 #82 (62 connections now open)
Fri Feb 22 11:43:54.557 [conn82] build index test.index_multi { field23: 1.0 } background
Fri Feb 22 11:43:54.568 [initandlisten] connection accepted from 127.0.0.1:50311 #83 (63 connections now open)
Fri Feb 22 11:43:54.570 [conn83] build index test.index_multi { field25: 1.0 } background
Fri Feb 22 11:43:54.576 [initandlisten] connection accepted from 127.0.0.1:58168 #84 (64 connections now open)
Fri Feb 22 11:43:54.607 [conn84] build index test.index_multi { field29: 1.0 } background
Fri Feb 22 11:43:54.768 [conn81] build index test.index_multi { field26: 1.0 } background
Fri Feb 22 11:43:54.940 [conn79] build index test.index_multi { field22: 1.0 } background
Fri Feb 22 11:43:55.278 [conn75] build index test.index_multi { field24: 1.0 } background
Fri Feb 22 11:43:55.480 [conn69] build index test.index_multi { field12: 1.0 } background
Fri Feb 22 11:43:55.662 [conn71] build index test.index_multi { field11: 1.0 } background
Fri Feb 22 11:43:55.820 [conn67] build index test.index_multi { field16: 1.0 } background
Fri Feb 22 11:43:55.970 [conn66] build index test.index_multi { field10: 1.0 } background
Fri Feb 22 11:43:56.161 [conn63] build index test.index_multi { field5: 1.0 } background
Fri Feb 22 11:43:56.662 [conn57] build index test.index_multi { field6: 1.0 } background
Fri Feb 22 11:43:56.755 [conn56] build index test.index_multi { field4: 1.0 } background
Fri Feb 22 11:43:56.933 [conn54] build index test.index_multi { field80: 1.0, field81: 1.0 } background
Fri Feb 22 11:43:57.006 [conn65] Background Index Build Progress: 7900/10000 79%
Fri Feb 22 11:43:57.039 [conn53] build index test.index_multi { field1: 1.0 } background
Fri Feb 22 11:43:57.137 [conn45] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:57.181 [conn51] build index test.index_multi { field86: 1.0, field87: 1.0 } background
Fri Feb 22 11:43:57.263 [conn48] build index test.index_multi { field84: 1.0, field85: 1.0 } background
Fri Feb 22 11:43:57.388 [conn65] build index done. scanned 10000 total records. 2.939 secs
Fri Feb 22 11:43:57.388 [conn65] switching indexes at position 22 and 1
Fri Feb 22 11:43:57.388 [conn65] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:687612 2939ms
Fri Feb 22 11:43:57.395 [conn65] build index test.results { _id: 1 }
Fri Feb 22 11:43:57.399 [conn65] build index done. scanned 0 total records. 0.004 secs
Fri Feb 22 11:43:57.400 [conn65] end connection 127.0.0.1:46369 (63 connections now open)
Fri Feb 22 11:43:57.417 [conn46] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:43:57.418 [conn47] build index test.index_multi { field72: 1.0, field73: 1.0 } background
Fri Feb 22 11:43:57.473 [conn37] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:57.661 [conn40] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:57.664 [conn44] build index test.index_multi { field74: 1.0, field75: 1.0 } background
Fri Feb 22 11:43:57.757 [conn42] build index test.index_multi { field68: 1.0, field69: 1.0 } background
Fri Feb 22 11:43:57.945 [conn39] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:57.952 [conn35] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:43:58.027 [conn34] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:43:58.100 [conn38] build index test.index_multi { field50: 1.0, field51: 1.0 } background
Fri Feb 22 11:43:58.324 [conn36] build index test.index_multi { field46: 1.0, field47: 1.0 } background
Fri Feb 22 11:43:58.458 [conn24] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:58.466 [conn31] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:43:58.547 [conn33] build index test.index_multi { field36: 1.0, field37: 1.0 } background
Fri Feb 22 11:43:58.730 [conn32] build index test.index_multi { field52: 1.0, field53: 1.0 } background
Fri Feb 22 11:43:58.745 [conn22] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:58.774 [conn29] build index test.index_multi { field34: 1.0, field35: 1.0 } background
Fri Feb 22 11:43:59.049 [conn28] build index test.index_multi { field90: 1.0, field91: 1.0, field92: 1.0 } background
Fri Feb 22 11:43:59.192 [conn27] build index test.index_multi { field32: 1.0, field33: 1.0 } background
Fri Feb 22 11:43:59.317 [conn26] build index test.index_multi { field40: 1.0, field41: 1.0 } background
Fri Feb 22 11:43:59.439 [conn25] build index test.index_multi { field38: 1.0, field39: 1.0 } background
Fri Feb 22 11:43:59.589 [conn23] build index test.index_multi { field30: 1.0, field31: 1.0 } background
Fri Feb 22 11:43:59.597 [conn30] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:43:59.600 [conn41] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:43:59.924 [conn43] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:43:59.928 [conn58] build index test.index_multi { field88: 1.0, field89: 1.0 } background
Fri Feb 22 11:44:00.034 [conn64] build index test.index_multi { field7: 1.0 } background
Fri Feb 22 11:44:00.187 [conn72] Background Index Build Progress: 200/10000 2%
Fri Feb 22 11:44:00.223 [conn78] build index test.index_multi { field28: 1.0 } background
Fri Feb 22 11:44:00.414 [conn83] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:44:00.497 [conn84] Background Index Build Progress: 700/10000 7%
Fri Feb 22 11:44:00.584 [conn82] Background Index Build Progress: 1300/10000 13%
Fri Feb 22 11:44:00.659 [conn80] Background Index Build Progress: 2100/10000 21%
Fri Feb 22 11:44:00.833 [conn74] Background Index Build Progress: 800/10000 8%
Fri Feb 22 11:44:00.924 [conn76] Background Index Build Progress: 1800/10000 18%
Fri Feb 22 11:44:01.013 [conn68] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:01.028 [conn61] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:01.101 [conn79] Background Index Build Progress: 2200/10000 22%
Fri Feb 22 11:44:01.191 [conn81] Background Index Build Progress: 4000/10000 40%
Fri Feb 22 11:44:01.272 [conn70] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:01.362 [conn73] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:01.509 [conn77] Background Index Build Progress: 7100/10000 71%
Fri Feb 22 11:44:01.539 [conn75] Background Index Build Progress: 3100/10000 31%
Fri Feb 22 11:44:01.631 [conn67] Background Index Build Progress: 400/10000 4%
Fri Feb 22 11:44:01.692 [conn60] Background Index Build Progress: 900/10000 9%
Fri Feb 22 11:44:01.781 [conn69] Background Index Build Progress: 3300/10000 33%
Fri Feb 22 11:44:01.846 [conn66] Background Index Build Progress: 1200/10000 12%
Fri Feb 22 11:44:01.933 [conn63] Background Index Build Progress: 600/10000 6%
Fri Feb 22 11:44:02.002 [conn52] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:02.087 [conn59] Background Index Build Progress: 800/10000 8%
Fri Feb 22 11:44:02.174 [conn62] Background Index Build Progress: 2400/10000 24%
Fri Feb 22 11:44:02.236 [conn71] Background Index Build Progress: 5600/10000 56%
Fri Feb 22 11:44:02.385 [conn50] Background Index Build Progress: 1300/10000 13%
Fri Feb 22 11:44:02.408 [conn55] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:02.477 [conn49] Background Index Build Progress: 3100/10000 31%
Fri Feb 22 11:44:02.496 [conn56] Background Index Build Progress: 900/10000 9%
Fri Feb 22 11:44:02.639 [conn57] Background Index Build Progress: 2500/10000 25%
Fri Feb 22 11:44:02.715 [conn45] Background Index Build Progress: 400/10000 4%
Fri Feb 22 11:44:02.816 [conn53] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:02.905 [conn51] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:03.096 [conn71] build index done. scanned 10000 total records. 7.433 secs
Fri Feb 22 11:44:03.096 [conn71] switching indexes at position 38 and 2
Fri Feb 22 11:44:03.096 [conn71] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:985694 7434ms
Fri Feb 22 11:44:03.098 [conn54] Background Index Build Progress: 2700/10000 27%
Fri Feb 22 11:44:03.120 [conn71] end connection 127.0.0.1:64402 (62 connections now open)
Fri Feb 22 11:44:03.129 [conn46] Background Index Build Progress: 1200/10000 12%
Fri Feb 22 11:44:03.143 [conn47] Background Index Build Progress: 1300/10000 13%
Fri Feb 22 11:44:03.304 [conn37] Background Index Build Progress: 1800/10000 18%
Fri Feb 22 11:44:03.319 [conn40] Background Index Build Progress: 1700/10000 17%
Fri Feb 22 11:44:03.378 [conn44] Background Index Build Progress: 1600/10000 16%
Fri Feb 22 11:44:03.462 [conn39] Background Index Build Progress: 800/10000 8%
Fri Feb 22 11:44:03.486 [conn35] Background Index Build Progress: 1200/10000 12%
Fri Feb 22 11:44:03.531 [conn48] Background Index Build Progress: 5300/10000 53%
Fri Feb 22 11:44:03.714 [conn42] Background Index Build Progress: 3300/10000 33%
Fri Feb 22 11:44:03.791 [conn38] Background Index Build Progress: 2000/10000 20%
Fri Feb 22 11:44:03.859 [conn34] Background Index Build Progress: 3300/10000 33%
Fri Feb 22 11:44:03.895 [conn24] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:03.935 [conn33] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:04.017 [conn31] Background Index Build Progress: 2700/10000 27%
Fri Feb 22 11:44:04.053 [conn29] Background Index Build Progress: 900/10000 9%
Fri Feb 22 11:44:04.197 [conn32] Background Index Build Progress: 2000/10000 20%
Fri Feb 22 11:44:04.309 [conn36] Background Index Build Progress: 3900/10000 39%
Fri Feb 22 11:44:04.570 [conn27] Background Index Build Progress: 700/10000 7%
Fri Feb 22 11:44:04.583 [conn22] Background Index Build Progress: 3900/10000 39%
Fri Feb 22 11:44:04.827 [conn26] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:04.949 [conn41] Background Index Build Progress: 600/10000 6%
Fri Feb 22 11:44:05.062 [conn23] Background Index Build Progress: 300/10000 3%
Fri Feb 22 11:44:05.243 [conn25] Background Index Build Progress: 2400/10000 24%
Fri Feb 22 11:44:05.387 [conn58] Background Index Build Progress: 400/10000 4%
Fri Feb 22 11:44:05.543 [conn64] Background Index Build Progress: 700/10000 7%
Fri Feb 22 11:44:05.668 [conn72] Background Index Build Progress: 800/10000 8%
Fri Feb 22 11:44:05.690 [conn28] Background Index Build Progress: 5900/10000 59%
Fri Feb 22 11:44:05.793 [conn78] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:05.885 [conn83] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:05.968 [conn43] Background Index Build Progress: 3100/10000 31%
Fri Feb 22 11:44:06.030 [conn30] Background Index Build Progress: 7200/10000 72%
Fri Feb 22 11:44:06.047 [conn82] Background Index Build Progress: 2700/10000 27%
Fri Feb 22 11:44:06.165 [conn84] Background Index Build Progress: 3400/10000 34%
Fri Feb 22 11:44:06.245 [conn80] Background Index Build Progress: 4500/10000 45%
Fri Feb 22 11:44:06.323 [conn68] Background Index Build Progress: 1700/10000 17%
Fri Feb 22 11:44:06.385 [conn74] Background Index Build Progress: 3100/10000 31%
Fri Feb 22 11:44:06.477 [conn81] Background Index Build Progress: 4400/10000 44%
Fri Feb 22 11:44:06.531 [conn70] Background Index Build Progress: 1600/10000 16%
Fri Feb 22 11:44:06.535 [conn61] Background Index Build Progress: 4400/10000 44%
Fri Feb 22 11:44:06.603 [conn77] Background Index Build Progress: 7200/10000 72%
Fri Feb 22 11:44:06.726 [conn73] Background Index Build Progress: 3000/10000 30%
Fri Feb 22 11:44:06.785 [conn76] Background Index Build Progress: 6800/10000 68%
Fri Feb 22 11:44:06.868 [conn60] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:06.932 [conn79] Background Index Build Progress: 6600/10000 66%
Fri Feb 22 11:44:07.009 [conn75] Background Index Build Progress: 5600/10000 56%
Fri Feb 22 11:44:07.064 [conn77] build index done. scanned 10000 total records. 12.518 secs
Fri Feb 22 11:44:07.064 [conn77] switching indexes at position 29 and 3
Fri Feb 22 11:44:07.064 [conn77] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:853943 12519ms
Fri Feb 22 11:44:07.076 [conn77] end connection 127.0.0.1:45520 (61 connections now open)
Fri Feb 22 11:44:07.083 [conn66] Background Index Build Progress: 2100/10000 21%
Fri Feb 22 11:44:07.101 [conn52] Background Index Build Progress: 1700/10000 17%
Fri Feb 22 11:44:07.174 [conn69] Background Index Build Progress: 5800/10000 58%
Fri Feb 22 11:44:07.241 [conn59] Background Index Build Progress: 1800/10000 18%
Fri Feb 22 11:44:07.260 [conn62] Background Index Build Progress: 3500/10000 35%
Fri Feb 22 11:44:07.362 [FileAllocator] allocating new datafile /data/db/sconsTests/test.3, filling with zeroes...
Fri Feb 22 11:44:07.362 [FileAllocator] done allocating datafile /data/db/sconsTests/test.3, size: 512MB, took 0 secs
Fri Feb 22 11:44:07.379 [conn50] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:07.395 [conn67] Background Index Build Progress: 5900/10000 59%
Fri Feb 22 11:44:07.512 [conn79] build index done. scanned 10000 total records. 12.571 secs
Fri Feb 22 11:44:07.512 [conn79] switching indexes at position 35 and 4
Fri Feb 22 11:44:07.512 [conn79] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:789245 12572ms
Fri Feb 22 11:44:07.523 [conn63] Background Index Build Progress: 4700/10000 47%
Fri Feb 22 11:44:07.585 [conn49] Background Index Build Progress: 4100/10000 41%
Fri Feb 22 11:44:07.674 [conn57] Background Index Build Progress: 2900/10000 29%
Fri Feb 22 11:44:07.773 [conn45] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:07.833 [conn55] Background Index Build Progress: 4500/10000 45%
Fri Feb 22 11:44:07.914 [conn51] Background Index Build Progress: 1700/10000 17%
Fri Feb 22 11:44:07.988 [conn54] Background Index Build Progress: 2900/10000 29%
Fri Feb 22 11:44:08.076 [conn56] Background Index Build Progress: 4800/10000 48%
Fri Feb 22 11:44:08.163 [conn53] Background Index Build Progress: 4300/10000 43%
Fri Feb 22 11:44:08.244 [conn46] Background Index Build Progress: 3000/10000 30%
Fri Feb 22 11:44:08.264 [conn62] build index done. scanned 10000 total records. 13.832 secs
Fri Feb 22 11:44:08.264 [conn62] switching indexes at position 21 and 5
Fri Feb 22 11:44:08.264 [conn62] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:623980 13832ms
Fri Feb 22 11:44:08.268 [conn44] Background Index Build Progress: 2100/10000 21%
Fri Feb 22 11:44:08.321 [conn62] end connection 127.0.0.1:34154 (60 connections now open)
Fri Feb 22 11:44:08.322 [conn39] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:08.431 [conn48] Background Index Build Progress: 5700/10000 57%
Fri Feb 22 11:44:08.603 [conn47] Background Index Build Progress: 4500/10000 45%
Fri Feb 22 11:44:08.736 [conn42] Background Index Build Progress: 3800/10000 38%
Fri Feb 22 11:44:08.863 [conn38] Background Index Build Progress: 2500/10000 25%
Fri Feb 22 11:44:09.010 [conn24] Background Index Build Progress: 1700/10000 17%
Fri Feb 22 11:44:09.073 [conn33] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:09.298 [conn35] Background Index Build Progress: 4700/10000 47%
Fri Feb 22 11:44:09.412 [conn34] Background Index Build Progress: 5400/10000 54%
Fri Feb 22 11:44:09.435 [conn29] Background Index Build Progress: 1800/10000 18%
Fri Feb 22 11:44:09.541 [conn31] Background Index Build Progress: 4400/10000 44%
Fri Feb 22 11:44:09.605 [conn37] Background Index Build Progress: 8500/10000 85%
Fri Feb 22 11:44:09.693 [conn32] Background Index Build Progress: 3600/10000 36%
Fri Feb 22 11:44:09.709 [conn40] Background Index Build Progress: 9400/10000 94%
Fri Feb 22 11:44:09.791 [conn40] build index done. scanned 10000 total records. 15.508 secs
Fri Feb 22 11:44:09.791 [conn40] switching indexes at position 9 and 6
Fri Feb 22 11:44:09.791 [conn40] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:579603 15508ms
Fri Feb 22 11:44:09.803 [conn27] Background Index Build Progress: 2000/10000 20%
Fri Feb 22 11:44:09.819 [conn40] end connection 127.0.0.1:47833 (59 connections now open)
Fri Feb 22 11:44:09.825 [conn36] Background Index Build Progress: 6600/10000 66%
Fri Feb 22 11:44:09.954 [conn23] Background Index Build Progress: 1100/10000 11%
Fri Feb 22 11:44:09.963 [conn41] Background Index Build Progress: 1200/10000 12%
Fri Feb 22 11:44:10.087 [conn26] Background Index Build Progress: 3900/10000 39%
Fri Feb 22 11:44:10.171 [conn25] Background Index Build Progress: 3800/10000 38%
Fri Feb 22 11:44:10.236 [conn35] build index done. scanned 10000 total records. 15.975 secs
Fri Feb 22 11:44:10.236 [conn35] switching indexes at position 9 and 7
Fri Feb 22 11:44:10.236 [conn35] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:537486 15975ms
Fri Feb 22 11:44:10.242 [conn58] Background Index Build Progress: 1800/10000 18%
Fri Feb 22 11:44:10.440 [conn28] Background Index Build Progress: 6300/10000 63%
Fri Feb 22 11:44:10.467 [conn78] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:10.687 [conn22] Background Index Build Progress: 9900/10000 99%
Fri Feb 22 11:44:10.706 [conn22] build index done. scanned 10000 total records. 16.502 secs
Fri Feb 22 11:44:10.706 [conn22] switching indexes at position 22 and 8
Fri Feb 22 11:44:10.706 [conn22] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:1215766 16503ms
Fri Feb 22 11:44:10.708 [conn64] Background Index Build Progress: 3500/10000 35%
Fri Feb 22 11:44:10.718 [conn22] end connection 127.0.0.1:34462 (58 connections now open)
Fri Feb 22 11:44:10.782 [conn30] Background Index Build Progress: 7700/10000 77%
Fri Feb 22 11:44:10.813 [conn32] build index done. scanned 10000 total records. 12.082 secs
Fri Feb 22 11:44:10.813 [conn32] switching indexes at position 54 and 9
Fri Feb 22 11:44:10.813 [conn32] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:997581 12082ms
Fri Feb 22 11:44:10.816 [conn43] Background Index Build Progress: 4400/10000 44%
Fri Feb 22 11:44:10.824 [conn84] Background Index Build Progress: 3900/10000 39%
Fri Feb 22 11:44:10.825 [conn32] end connection 127.0.0.1:62880 (57 connections now open)
Fri Feb 22 11:44:10.864 [conn80] Background Index Build Progress: 5000/10000 50%
Fri Feb 22 11:44:10.987 [conn83] Background Index Build Progress: 4900/10000 49%
Fri Feb 22 11:44:11.012 [conn74] Background Index Build Progress: 3600/10000 36%
Fri Feb 22 11:44:11.138 [conn72] Background Index Build Progress: 6600/10000 66%
Fri Feb 22 11:44:11.208 [conn61] Background Index Build Progress: 4900/10000 49%
Fri Feb 22 11:44:11.270 [conn68] Background Index Build Progress: 4000/10000 40%
Fri Feb 22 11:44:11.288 [conn81] Background Index Build Progress: 6300/10000 63%
Fri Feb 22 11:44:11.401 [conn73] Background Index Build Progress: 3500/10000 35%
Fri Feb 22 11:44:11.417 [conn70] Background Index Build Progress: 4100/10000 41%
Fri Feb 22 11:44:11.454 [conn60] Background Index Build Progress: 2000/10000 20%
Fri Feb 22 11:44:11.459 [conn76] Background Index Build Progress: 8500/10000 85%
Fri Feb 22 11:44:11.500 [conn82] Background Index Build Progress: 9800/10000 98%
Fri Feb 22 11:44:11.542 [conn82] build index done. scanned 10000 total records. 16.984 secs
Fri Feb 22 11:44:11.542 [conn82] switching indexes at position 31 and 10
Fri Feb 22 11:44:11.542 [conn82] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:872101 16985ms
Fri Feb 22 11:44:11.555 [conn82] end connection 127.0.0.1:34546 (56 connections now open)
Fri Feb 22 11:44:11.569 [conn75] Background Index Build Progress: 6500/10000 65%
Fri Feb 22 11:44:11.638 [conn69] Background Index Build Progress: 6300/10000 63%
Fri Feb 22 11:44:11.660 [conn59] Background Index Build Progress: 2200/10000 22%
Fri Feb 22 11:44:11.693 [conn66] Background Index Build Progress: 4100/10000 41%
Fri Feb 22 11:44:11.704 [conn76] build index done. scanned 10000 total records. 17.162 secs
Fri Feb 22 11:44:11.704 [conn76] switching indexes at position 28 and 11
Fri Feb 22 11:44:11.704 [conn76] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:643648 17162ms
Fri Feb 22 11:44:11.716 [conn76] end connection 127.0.0.1:34392 (55 connections now open)
Fri Feb 22 11:44:11.728 [conn50] Background Index Build Progress: 2000/10000 20%
Fri Feb 22 11:44:11.760 [conn79] end connection 127.0.0.1:52532 (54 connections now open)
Fri Feb 22 11:44:11.777 [conn63] Background Index Build Progress: 5200/10000 52%
Fri Feb 22 11:44:11.845 [conn49] Background Index Build Progress: 4700/10000 47%
Fri Feb 22 11:44:11.907 [conn67] Background Index Build Progress: 8400/10000 84%
Fri Feb 22 11:44:11.959 [conn52] Background Index Build Progress: 7200/10000 72%
Fri Feb 22 11:44:12.004 [conn55] Background Index Build Progress: 5000/10000 50%
Fri Feb 22 11:44:12.066 [conn57] Background Index Build Progress: 4900/10000 49%
Fri Feb 22 11:44:12.132 [conn69] build index done. scanned 10000 total records. 16.652 secs
Fri Feb 22 11:44:12.132 [conn69] switching indexes at position 37 and 12
Fri Feb 22 11:44:12.132 [conn69] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:756139 16652ms
Fri Feb 22 11:44:12.170 [conn54] Background Index Build Progress: 3300/10000 33%
Fri Feb 22 11:44:12.221 [conn67] build index done. scanned 10000 total records. 16.401 secs
Fri Feb 22 11:44:12.221 [conn67] switching indexes at position 39 and 13
Fri Feb 22 11:44:12.222 [conn67] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:945131 16401ms
Fri Feb 22 11:44:12.222 [conn69] end connection 127.0.0.1:60918 (53 connections now open)
Fri Feb 22 11:44:12.224 [conn56] Background Index Build Progress: 5300/10000 53%
Fri Feb 22 11:44:12.245 [conn67] end connection 127.0.0.1:48527 (52 connections now open)
Fri Feb 22 11:44:12.246 [conn51] Background Index Build Progress: 3600/10000 36%
Fri Feb 22 11:44:12.413 [conn53] Background Index Build Progress: 5000/10000 50%
Fri Feb 22 11:44:12.559 [conn39] Background Index Build Progress: 1700/10000 17%
Fri Feb 22 11:44:12.573 [conn46] Background Index Build Progress: 4700/10000 47%
Fri Feb 22 11:44:12.749 [conn45] Background Index Build Progress: 6300/10000 63%
Fri Feb 22 11:44:12.847 [conn48] Background Index Build Progress: 7300/10000 73%
Fri Feb 22 11:44:12.970 [conn44] Background Index Build Progress: 5400/10000 54%
Fri Feb 22 11:44:13.104 [conn38] Background Index Build Progress: 3100/10000 31%
Fri Feb 22 11:44:13.120 [conn33] Background Index Build Progress: 1800/10000 18%
Fri Feb 22 11:44:13.364 [conn42] Background Index Build Progress: 6500/10000 65%
Fri Feb 22 11:44:13.470 [conn24] Background Index Build Progress: 3400/10000 34%
Fri Feb 22 11:44:13.599 [conn47] Background Index Build Progress: 8900/10000 89%
Fri Feb 22 11:44:13.725 [conn29] Background Index Build Progress: 2600/10000 26%
Fri Feb 22 11:44:13.806 [conn47] build index done. scanned 10000 total records. 16.387 secs
Fri Feb 22 11:44:13.806 [conn47] switching indexes at position 48 and 14
Fri Feb 22 11:44:13.806 [conn47] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:1177561 16387ms
Fri Feb 22 11:44:13.834 [conn47] end connection 127.0.0.1:48103 (51 connections now open)
Fri Feb 22 11:44:13.842 [conn37] Background Index Build Progress: 9000/10000 90%
Fri Feb 22 11:44:13.854 [conn31] Background Index Build Progress: 5600/10000 56%
Fri Feb 22 11:44:13.987 [conn34] Background Index Build Progress: 8200/10000 82%
Fri Feb 22 11:44:14.038 [conn37] build index done. scanned 10000 total records. 19.77 secs
Fri Feb 22 11:44:14.038 [conn37] switching indexes at position 54 and 15
Fri Feb 22 11:44:14.038 [conn37] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:899233 19770ms
Fri Feb 22 11:44:14.041 [conn36] Background Index Build Progress: 7200/10000 72%
Fri Feb 22 11:44:14.073 [conn37] end connection 127.0.0.1:49465 (50 connections now open)
Fri Feb 22 11:44:14.109 [conn23] Background Index Build Progress: 1500/10000 15%
Fri Feb 22 11:44:14.136 [conn27] Background Index Build Progress: 3800/10000 38%
Fri Feb 22 11:44:14.288 [conn26] Background Index Build Progress: 4400/10000 44%
Fri Feb 22 11:44:14.403 [conn35] end connection 127.0.0.1:65093 (49 connections now open)
Fri Feb 22 11:44:14.406 [conn41] Background Index Build Progress: 3800/10000 38%
Fri Feb 22 11:44:14.426 [conn25] Background Index Build Progress: 5300/10000 53%
Fri Feb 22 11:44:14.498 [conn78] Background Index Build Progress: 2000/10000 20%
Fri Feb 22 11:44:14.606 [conn58] Background Index Build Progress: 3900/10000 39%
Fri Feb 22 11:44:14.718 [conn64] Background Index Build Progress: 4000/10000 40%
Fri Feb 22 11:44:14.786 [conn84] Background Index Build Progress: 4300/10000 43%
Fri Feb 22 11:44:14.871 [conn28] Background Index Build Progress: 9100/10000 91%
Fri Feb 22 11:44:15.008 [conn43] Background Index Build Progress: 5800/10000 58%
Fri 
Feb 22 11:44:15.121 [conn28] build index done. scanned 10000 total records. 16.07 secs Fri Feb 22 11:44:15.121 [conn28] switching indexes at position 56 and 16 Fri Feb 22 11:44:15.121 [conn28] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 88 locks(micros) w:1557451 16071ms Fri Feb 22 11:44:15.123 [conn30] Background Index Build Progress: 9800/10000 98% Fri Feb 22 11:44:15.128 [conn83] Background Index Build Progress: 6500/10000 65% Fri Feb 22 11:44:15.132 [conn28] end connection 127.0.0.1:39802 (48 connections now open) Fri Feb 22 11:44:15.145 [conn61] Background Index Build Progress: 5400/10000 54% Fri Feb 22 11:44:15.155 [conn30] build index done. scanned 10000 total records. 20.92 secs Fri Feb 22 11:44:15.155 [conn30] switching indexes at position 29 and 17 Fri Feb 22 11:44:15.155 [conn30] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:510218 20920ms Fri Feb 22 11:44:15.178 [conn68] Background Index Build Progress: 4500/10000 45% Fri Feb 22 11:44:15.205 [conn74] Background Index Build Progress: 6500/10000 65% Fri Feb 22 11:44:15.275 [conn80] Background Index Build Progress: 8800/10000 88% Fri Feb 22 11:44:15.335 [conn73] Background Index Build Progress: 4400/10000 44% Fri Feb 22 11:44:15.339 [conn81] Background Index Build Progress: 8400/10000 84% Fri Feb 22 11:44:15.341 [conn26] build index done. scanned 10000 total records. 16.023 secs Fri Feb 22 11:44:15.341 [conn26] switching indexes at position 58 and 18 Fri Feb 22 11:44:15.341 [conn26] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 82 locks(micros) w:1231272 16023ms Fri Feb 22 11:44:15.343 [conn70] Background Index Build Progress: 5600/10000 56% Fri Feb 22 11:44:15.352 [conn26] end connection 127.0.0.1:36553 (47 connections now open) Fri Feb 22 11:44:15.361 [conn72] Background Index Build Progress: 10000/10000 100% Fri Feb 22 11:44:15.361 [conn72] build index done. scanned 10000 total records. 
20.885 secs Fri Feb 22 11:44:15.361 [conn72] switching indexes at position 25 and 19 Fri Feb 22 11:44:15.361 [conn72] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:683425 20885ms Fri Feb 22 11:44:15.372 [conn66] Background Index Build Progress: 4400/10000 44% Fri Feb 22 11:44:15.372 [conn72] end connection 127.0.0.1:47966 (46 connections now open) Fri Feb 22 11:44:15.413 [conn60] Background Index Build Progress: 4500/10000 45% Fri Feb 22 11:44:15.483 [conn63] Background Index Build Progress: 5700/10000 57% Fri Feb 22 11:44:15.519 [conn49] Background Index Build Progress: 5000/10000 50% Fri Feb 22 11:44:15.575 [conn75] Background Index Build Progress: 9000/10000 90% Fri Feb 22 11:44:15.612 [conn50] Background Index Build Progress: 4000/10000 40% Fri Feb 22 11:44:15.706 [conn55] Background Index Build Progress: 5600/10000 56% Fri Feb 22 11:44:15.725 [conn59] Background Index Build Progress: 5800/10000 58% Fri Feb 22 11:44:15.736 [conn61] build index done. scanned 10000 total records. 21.319 secs Fri Feb 22 11:44:15.737 [conn61] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:475280 21320ms Fri Feb 22 11:44:15.767 [conn61] end connection 127.0.0.1:60190 (45 connections now open) Fri Feb 22 11:44:15.770 [conn57] Background Index Build Progress: 5700/10000 57% Fri Feb 22 11:44:15.859 [conn52] Background Index Build Progress: 9700/10000 97% Fri Feb 22 11:44:15.931 [conn52] build index done. scanned 10000 total records. 
21.572 secs Fri Feb 22 11:44:15.931 [conn52] switching indexes at position 56 and 21 Fri Feb 22 11:44:15.931 [conn52] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:593767 21572ms Fri Feb 22 11:44:15.940 [conn56] Background Index Build Progress: 6600/10000 66% Fri Feb 22 11:44:15.943 [conn52] end connection 127.0.0.1:36006 (44 connections now open) Fri Feb 22 11:44:15.952 [conn54] Background Index Build Progress: 5300/10000 53% Fri Feb 22 11:44:15.994 [conn53] Background Index Build Progress: 6500/10000 65% Fri Feb 22 11:44:16.035 [conn63] build index done. scanned 10000 total records. 19.873 secs Fri Feb 22 11:44:16.035 [conn63] switching indexes at position 41 and 22 Fri Feb 22 11:44:16.035 [conn63] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:567279 19873ms Fri Feb 22 11:44:16.049 [conn51] Background Index Build Progress: 6300/10000 63% Fri Feb 22 11:44:16.080 [conn39] Background Index Build Progress: 3600/10000 36% Fri Feb 22 11:44:16.112 [conn73] build index done. scanned 10000 total records. 21.613 secs Fri Feb 22 11:44:16.112 [conn73] switching indexes at position 26 and 23 Fri Feb 22 11:44:16.112 [conn73] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:488823 21613ms Fri Feb 22 11:44:16.124 [conn73] end connection 127.0.0.1:57799 (43 connections now open) Fri Feb 22 11:44:16.230 [conn45] Background Index Build Progress: 7700/10000 77% Fri Feb 22 11:44:16.303 [conn48] Background Index Build Progress: 8800/10000 88% Fri Feb 22 11:44:16.548 [conn33] Background Index Build Progress: 2400/10000 24% Fri Feb 22 11:44:16.587 [conn48] build index done. scanned 10000 total records. 
19.324 secs Fri Feb 22 11:44:16.587 [conn48] switching indexes at position 47 and 24 Fri Feb 22 11:44:16.588 [conn48] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:1062021 19324ms Fri Feb 22 11:44:16.588 [conn44] Background Index Build Progress: 7300/10000 73% Fri Feb 22 11:44:16.619 [conn48] end connection 127.0.0.1:47703 (42 connections now open) Fri Feb 22 11:44:16.620 [conn46] Background Index Build Progress: 9100/10000 91% Fri Feb 22 11:44:16.749 [conn42] Background Index Build Progress: 7700/10000 77% Fri Feb 22 11:44:16.773 [conn46] build index done. scanned 10000 total records. 22.446 secs Fri Feb 22 11:44:16.773 [conn46] switching indexes at position 39 and 25 Fri Feb 22 11:44:16.773 [conn46] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:853463 22447ms Fri Feb 22 11:44:16.790 [conn38] Background Index Build Progress: 6300/10000 63% Fri Feb 22 11:44:16.856 [conn46] end connection 127.0.0.1:40000 (41 connections now open) Fri Feb 22 11:44:16.858 [conn24] Background Index Build Progress: 5400/10000 54% Fri Feb 22 11:44:16.999 [conn31] Background Index Build Progress: 6200/10000 62% Fri Feb 22 11:44:17.081 [conn36] Background Index Build Progress: 7600/10000 76% Fri Feb 22 11:44:17.107 [conn34] Background Index Build Progress: 9500/10000 95% Fri Feb 22 11:44:17.134 [conn42] build index done. scanned 10000 total records. 19.377 secs Fri Feb 22 11:44:17.134 [conn42] switching indexes at position 50 and 26 Fri Feb 22 11:44:17.134 [conn42] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:1128129 19377ms Fri Feb 22 11:44:17.150 [conn42] end connection 127.0.0.1:49093 (40 connections now open) Fri Feb 22 11:44:17.154 [conn27] Background Index Build Progress: 4800/10000 48% Fri Feb 22 11:44:17.189 [conn29] Background Index Build Progress: 6500/10000 65% Fri Feb 22 11:44:17.234 [conn34] build index done. scanned 10000 total records. 
22.979 secs Fri Feb 22 11:44:17.234 [conn34] switching indexes at position 56 and 27 Fri Feb 22 11:44:17.234 [conn34] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:645421 22980ms Fri Feb 22 11:44:17.264 [conn34] end connection 127.0.0.1:47308 (39 connections now open) Fri Feb 22 11:44:17.268 [conn25] Background Index Build Progress: 5700/10000 57% Fri Feb 22 11:44:17.279 [conn23] Background Index Build Progress: 4300/10000 43% Fri Feb 22 11:44:17.541 [conn58] Background Index Build Progress: 4400/10000 44% Fri Feb 22 11:44:17.644 [conn64] Background Index Build Progress: 4400/10000 44% Fri Feb 22 11:44:17.649 [conn41] Background Index Build Progress: 7000/10000 70% Fri Feb 22 11:44:17.728 [conn78] Background Index Build Progress: 4300/10000 43% Fri Feb 22 11:44:17.886 [conn84] Background Index Build Progress: 6100/10000 61% Fri Feb 22 11:44:17.887 [conn30] end connection 127.0.0.1:42960 (38 connections now open) Fri Feb 22 11:44:18.011 [conn74] Background Index Build Progress: 7500/10000 75% Fri Feb 22 11:44:18.155 [conn80] Background Index Build Progress: 9100/10000 91% Fri Feb 22 11:44:18.173 [conn70] Background Index Build Progress: 5800/10000 58% Fri Feb 22 11:44:18.224 [conn81] Background Index Build Progress: 9000/10000 90% Fri Feb 22 11:44:18.227 [conn83] Background Index Build Progress: 9000/10000 90% Fri Feb 22 11:44:18.247 [conn43] Background Index Build Progress: 9400/10000 94% Fri Feb 22 11:44:18.277 [conn80] build index done. scanned 10000 total records. 23.726 secs Fri Feb 22 11:44:18.277 [conn80] switching indexes at position 30 and 28 Fri Feb 22 11:44:18.277 [conn80] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:730034 23726ms Fri Feb 22 11:44:18.307 [conn60] Background Index Build Progress: 5600/10000 56% Fri Feb 22 11:44:18.347 [conn75] Background Index Build Progress: 9300/10000 93% Fri Feb 22 11:44:18.348 [conn83] build index done. scanned 10000 total records. 
23.777 secs Fri Feb 22 11:44:18.348 [conn83] switching indexes at position 32 and 29 Fri Feb 22 11:44:18.348 [conn83] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:496717 23777ms Fri Feb 22 11:44:18.354 [conn81] build index done. scanned 10000 total records. 23.585 secs Fri Feb 22 11:44:18.354 [conn81] switching indexes at position 34 and 30 Fri Feb 22 11:44:18.354 [conn81] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:637902 23586ms Fri Feb 22 11:44:18.355 [conn43] build index done. scanned 10000 total records. 24.043 secs Fri Feb 22 11:44:18.355 [conn43] switching indexes at position 34 and 31 Fri Feb 22 11:44:18.355 [conn43] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:769900 24043ms Fri Feb 22 11:44:18.358 [conn49] Background Index Build Progress: 5900/10000 59% Fri Feb 22 11:44:18.360 [conn83] end connection 127.0.0.1:50311 (37 connections now open) Fri Feb 22 11:44:18.366 [conn81] end connection 127.0.0.1:44842 (36 connections now open) Fri Feb 22 11:44:18.367 [conn43] end connection 127.0.0.1:58716 (35 connections now open) Fri Feb 22 11:44:18.374 [conn55] Background Index Build Progress: 5700/10000 57% Fri Feb 22 11:44:18.381 [conn66] Background Index Build Progress: 7000/10000 70% Fri Feb 22 11:44:18.417 [conn57] Background Index Build Progress: 5900/10000 59% Fri Feb 22 11:44:18.438 [conn50] Background Index Build Progress: 5400/10000 54% Fri Feb 22 11:44:18.456 [conn75] build index done. scanned 10000 total records. 
23.178 secs Fri Feb 22 11:44:18.456 [conn75] switching indexes at position 36 and 32 Fri Feb 22 11:44:18.456 [conn75] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:873642 23178ms Fri Feb 22 11:44:18.469 [conn75] end connection 127.0.0.1:60890 (34 connections now open) Fri Feb 22 11:44:18.496 [conn56] Background Index Build Progress: 6800/10000 68% Fri Feb 22 11:44:18.566 [conn63] end connection 127.0.0.1:40839 (33 connections now open) Fri Feb 22 11:44:18.614 [conn59] Background Index Build Progress: 8500/10000 85% Fri Feb 22 11:44:18.737 [conn78] build index done. scanned 10000 total records. 18.514 secs Fri Feb 22 11:44:18.737 [conn78] switching indexes at position 63 and 33 Fri Feb 22 11:44:18.738 [conn78] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:1103708 18514ms Fri Feb 22 11:44:18.758 [conn78] end connection 127.0.0.1:33356 (32 connections now open) Fri Feb 22 11:44:18.758 [conn53] Background Index Build Progress: 9100/10000 91% Fri Feb 22 11:44:18.837 [conn59] build index done. scanned 10000 total records. 24.436 secs Fri Feb 22 11:44:18.837 [conn59] switching indexes at position 58 and 34 Fri Feb 22 11:44:18.838 [conn59] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:616275 24437ms Fri Feb 22 11:44:18.879 [conn59] end connection 127.0.0.1:44698 (31 connections now open) Fri Feb 22 11:44:18.887 [conn54] Background Index Build Progress: 9700/10000 97% Fri Feb 22 11:44:18.939 [conn54] build index done. scanned 10000 total records. 
22.006 secs Fri Feb 22 11:44:18.939 [conn54] switching indexes at position 44 and 35 Fri Feb 22 11:44:18.939 [conn54] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 81 locks(micros) w:1366068 22006ms Fri Feb 22 11:44:18.963 [conn54] end connection 127.0.0.1:52623 (30 connections now open) Fri Feb 22 11:44:19.003 [conn33] Background Index Build Progress: 3700/10000 37% Fri Feb 22 11:44:19.008 [conn44] Background Index Build Progress: 9100/10000 91% Fri Feb 22 11:44:19.017 [conn38] Background Index Build Progress: 7500/10000 75% Fri Feb 22 11:44:19.100 [conn24] Background Index Build Progress: 6800/10000 68% Fri Feb 22 11:44:19.140 [conn44] build index done. scanned 10000 total records. 21.475 secs Fri Feb 22 11:44:19.140 [conn44] switching indexes at position 49 and 36 Fri Feb 22 11:44:19.140 [conn44] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:1188682 21476ms Fri Feb 22 11:44:19.152 [conn44] end connection 127.0.0.1:36125 (29 connections now open) Fri Feb 22 11:44:19.269 [conn31] Background Index Build Progress: 8100/10000 81% Fri Feb 22 11:44:19.887 [conn68] Background Index Build Progress: 5300/10000 53% Fri Feb 22 11:44:19.925 [conn25] build index done. scanned 10000 total records. 20.485 secs Fri Feb 22 11:44:19.925 [conn25] switching indexes at position 59 and 37 Fri Feb 22 11:44:19.925 [conn25] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:1224290 20486ms Fri Feb 22 11:44:19.948 [conn25] end connection 127.0.0.1:40571 (28 connections now open) Fri Feb 22 11:44:20.004 [conn23] Background Index Build Progress: 8900/10000 89% Fri Feb 22 11:44:20.019 [conn84] Background Index Build Progress: 7600/10000 76% Fri Feb 22 11:44:20.162 [conn58] Background Index Build Progress: 9100/10000 91% Fri Feb 22 11:44:20.230 [conn80] end connection 127.0.0.1:64592 (27 connections now open) Fri Feb 22 11:44:20.268 [conn74] build index done. scanned 10000 total records. 
25.751 secs Fri Feb 22 11:44:20.268 [conn74] switching indexes at position 56 and 38 Fri Feb 22 11:44:20.268 [conn74] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:949400 25751ms Fri Feb 22 11:44:20.283 [conn74] end connection 127.0.0.1:52672 (26 connections now open) Fri Feb 22 11:44:20.316 [conn58] build index done. scanned 10000 total records. 20.387 secs Fri Feb 22 11:44:20.316 [conn58] switching indexes at position 61 and 39 Fri Feb 22 11:44:20.316 [conn58] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:1050927 20388ms Fri Feb 22 11:44:20.328 [conn58] end connection 127.0.0.1:41057 (25 connections now open) Fri Feb 22 11:44:20.348 [conn39] Background Index Build Progress: 4100/10000 41% Fri Feb 22 11:44:20.506 [conn45] Background Index Build Progress: 9300/10000 93% Fri Feb 22 11:44:20.544 [conn53] build index done. scanned 10000 total records. 23.505 secs Fri Feb 22 11:44:20.544 [conn53] switching indexes at position 45 and 40 Fri Feb 22 11:44:20.544 [conn53] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:796707 23505ms Fri Feb 22 11:44:20.551 [conn51] Background Index Build Progress: 9800/10000 98% Fri Feb 22 11:44:20.556 [conn53] end connection 127.0.0.1:46529 (24 connections now open) Fri Feb 22 11:44:20.613 [conn66] build index done. scanned 10000 total records. 24.642 secs Fri Feb 22 11:44:20.613 [conn66] switching indexes at position 45 and 41 Fri Feb 22 11:44:20.613 [conn66] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:723423 24643ms Fri Feb 22 11:44:20.625 [conn66] end connection 127.0.0.1:54782 (23 connections now open) Fri Feb 22 11:44:20.625 [conn51] build index done. scanned 10000 total records. 
23.444 secs Fri Feb 22 11:44:20.625 [conn51] switching indexes at position 46 and 42 Fri Feb 22 11:44:20.625 [conn51] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:982553 23444ms Fri Feb 22 11:44:20.633 [conn56] build index done. scanned 10000 total records. 23.877 secs Fri Feb 22 11:44:20.633 [conn56] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:767345 23878ms Fri Feb 22 11:44:20.637 [conn51] end connection 127.0.0.1:34248 (22 connections now open) Fri Feb 22 11:44:20.639 [conn45] build index done. scanned 10000 total records. 26.32 secs Fri Feb 22 11:44:20.639 [conn45] switching indexes at position 59 and 44 Fri Feb 22 11:44:20.639 [conn45] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:819790 26321ms Fri Feb 22 11:44:20.643 [conn36] Background Index Build Progress: 8900/10000 89% Fri Feb 22 11:44:20.645 [conn56] end connection 127.0.0.1:51794 (21 connections now open) Fri Feb 22 11:44:20.655 [conn45] end connection 127.0.0.1:39901 (20 connections now open) Fri Feb 22 11:44:20.774 [conn50] build index done. scanned 10000 total records. 26.421 secs Fri Feb 22 11:44:20.774 [conn50] switching indexes at position 54 and 45 Fri Feb 22 11:44:20.774 [conn50] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:663726 26421ms Fri Feb 22 11:44:20.814 [conn31] build index done. scanned 10000 total records. 26.574 secs Fri Feb 22 11:44:20.814 [conn31] switching indexes at position 59 and 46 Fri Feb 22 11:44:20.814 [conn31] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 82 locks(micros) w:673564 26575ms Fri Feb 22 11:44:20.818 [conn27] Background Index Build Progress: 8200/10000 82% Fri Feb 22 11:44:20.825 [conn36] build index done. scanned 10000 total records. 
22.5 secs Fri Feb 22 11:44:20.825 [conn36] switching indexes at position 52 and 47 Fri Feb 22 11:44:20.825 [conn36] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:1248597 22500ms Fri Feb 22 11:44:20.826 [conn31] end connection 127.0.0.1:42612 (19 connections now open) Fri Feb 22 11:44:20.839 [conn36] end connection 127.0.0.1:33894 (18 connections now open) Fri Feb 22 11:44:20.841 [conn41] Background Index Build Progress: 8600/10000 86% Fri Feb 22 11:44:20.856 [conn29] Background Index Build Progress: 10000/10000 100% Fri Feb 22 11:44:20.857 [conn29] build index done. scanned 10000 total records. 22.082 secs Fri Feb 22 11:44:20.857 [conn29] switching indexes at position 55 and 48 Fri Feb 22 11:44:20.857 [conn29] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:1125066 22083ms Fri Feb 22 11:44:20.868 [conn64] Background Index Build Progress: 6600/10000 66% Fri Feb 22 11:44:20.869 [conn29] end connection 127.0.0.1:45623 (17 connections now open) Fri Feb 22 11:44:20.888 [conn24] build index done. scanned 10000 total records. 26.674 secs Fri Feb 22 11:44:20.888 [conn24] switching indexes at position 56 and 49 Fri Feb 22 11:44:20.888 [conn24] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:1198882 26674ms Fri Feb 22 11:44:20.901 [conn24] end connection 127.0.0.1:34139 (16 connections now open) Fri Feb 22 11:44:20.971 [conn23] build index done. scanned 10000 total records. 
21.381 secs Fri Feb 22 11:44:20.971 [conn23] switching indexes at position 60 and 50 Fri Feb 22 11:44:20.971 [conn23] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 81 locks(micros) w:1182846 21381ms Fri Feb 22 11:44:20.983 [conn23] end connection 127.0.0.1:44540 (15 connections now open) Fri Feb 22 11:44:21.003 [conn49] Background Index Build Progress: 7000/10000 70% Fri Feb 22 11:44:21.008 [conn70] Background Index Build Progress: 8500/10000 85% Fri Feb 22 11:44:21.067 [conn84] build index done. scanned 10000 total records. 26.459 secs Fri Feb 22 11:44:21.067 [conn84] switching indexes at position 63 and 51 Fri Feb 22 11:44:21.067 [conn84] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:682641 26460ms Fri Feb 22 11:44:21.103 [conn84] end connection 127.0.0.1:58168 (14 connections now open) Fri Feb 22 11:44:21.105 [conn55] Background Index Build Progress: 8100/10000 81% Fri Feb 22 11:44:21.120 [conn60] Background Index Build Progress: 7700/10000 77% Fri Feb 22 11:44:21.188 [conn57] Background Index Build Progress: 9100/10000 91% Fri Feb 22 11:44:21.331 [conn50] end connection 127.0.0.1:47056 (13 connections now open) Fri Feb 22 11:44:21.342 [conn57] build index done. scanned 10000 total records. 24.679 secs Fri Feb 22 11:44:21.342 [conn57] switching indexes at position 59 and 52 Fri Feb 22 11:44:21.342 [conn57] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:748944 24680ms Fri Feb 22 11:44:21.342 [conn55] build index done. scanned 10000 total records. 
26.96 secs Fri Feb 22 11:44:21.342 [conn55] switching indexes at position 56 and 53 Fri Feb 22 11:44:21.342 [conn55] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:413701 26960ms Fri Feb 22 11:44:21.354 [conn57] end connection 127.0.0.1:60673 (12 connections now open) Fri Feb 22 11:44:21.354 [conn55] end connection 127.0.0.1:39685 (11 connections now open) Fri Feb 22 11:44:21.471 [conn49] build index done. scanned 10000 total records. 27.123 secs Fri Feb 22 11:44:21.471 [conn49] switching indexes at position 55 and 54 Fri Feb 22 11:44:21.471 [conn49] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:741352 27124ms Fri Feb 22 11:44:21.483 [conn49] end connection 127.0.0.1:46411 (10 connections now open) Fri Feb 22 11:44:21.613 [conn38] build index done. scanned 10000 total records. 23.512 secs Fri Feb 22 11:44:21.613 [conn38] switching indexes at position 63 and 55 Fri Feb 22 11:44:21.613 [conn38] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 81 locks(micros) w:1367736 23512ms Fri Feb 22 11:44:21.616 [conn41] build index done. scanned 10000 total records. 27.328 secs Fri Feb 22 11:44:21.616 [conn41] switching indexes at position 58 and 56 Fri Feb 22 11:44:21.616 [conn41] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 80 locks(micros) w:617417 27329ms Fri Feb 22 11:44:21.625 [conn38] end connection 127.0.0.1:54608 (9 connections now open) Fri Feb 22 11:44:21.629 [conn41] end connection 127.0.0.1:49877 (8 connections now open) Fri Feb 22 11:44:21.638 [conn70] build index done. scanned 10000 total records. 27.166 secs Fri Feb 22 11:44:21.638 [conn70] switching indexes at position 59 and 57 Fri Feb 22 11:44:21.638 [conn70] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:481543 27167ms Fri Feb 22 11:44:21.650 [conn70] end connection 127.0.0.1:34550 (7 connections now open) Fri Feb 22 11:44:21.656 [conn27] build index done. 
scanned 10000 total records. 22.464 secs Fri Feb 22 11:44:21.656 [conn27] switching indexes at position 59 and 58 Fri Feb 22 11:44:21.656 [conn27] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 81 locks(micros) w:925575 22464ms Fri Feb 22 11:44:21.668 [conn27] end connection 127.0.0.1:59923 (6 connections now open) Fri Feb 22 11:44:21.718 [conn68] build index done. scanned 10000 total records. 27.252 secs Fri Feb 22 11:44:21.718 [conn68] switching indexes at position 60 and 59 Fri Feb 22 11:44:21.718 [conn68] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:514889 27252ms Fri Feb 22 11:44:21.730 [conn68] end connection 127.0.0.1:63928 (5 connections now open) Fri Feb 22 11:44:21.789 [conn33] build index done. scanned 10000 total records. 23.242 secs Fri Feb 22 11:44:21.790 [conn33] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 81 locks(micros) w:839796 23242ms Fri Feb 22 11:44:21.793 [conn60] build index done. scanned 10000 total records. 27.383 secs Fri Feb 22 11:44:21.793 [conn60] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 78 locks(micros) w:406120 27383ms Fri Feb 22 11:44:21.801 [conn33] end connection 127.0.0.1:55338 (4 connections now open) Fri Feb 22 11:44:21.806 [conn60] end connection 127.0.0.1:34676 (3 connections now open) Fri Feb 22 11:44:21.910 [conn39] build index done. scanned 10000 total records. 27.631 secs Fri Feb 22 11:44:21.910 [conn39] switching indexes at position 63 and 62 Fri Feb 22 11:44:21.910 [conn39] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 83 locks(micros) w:722608 27632ms Fri Feb 22 11:44:21.922 [conn39] end connection 127.0.0.1:53471 (2 connections now open) Fri Feb 22 11:44:21.986 [conn64] build index done. scanned 10000 total records. 
21.951 secs
Fri Feb 22 11:44:21.986 [conn64] insert test.system.indexes ninserted:1 keyUpdates:0 numYields: 79 locks(micros) w:784372 21952ms
Fri Feb 22 11:44:21.999 [conn64] end connection 127.0.0.1:65225 (1 connection now open)
sh28676| MongoDB shell version: 2.4.0-rc1-pre-
sh28678| MongoDB shell version: 2.4.0-rc1-pre-
sh28679| MongoDB shell version: 2.4.0-rc1-pre-
sh28668| connecting to: 127.0.0.1:27999/admin
sh28680| MongoDB shell version: 2.4.0-rc1-pre-
sh28671| connecting to: 127.0.0.1:27999/admin
sh28675| connecting to: 127.0.0.1:27999/admin
sh28670| connecting to: 127.0.0.1:27999/admin
sh28672| connecting to: 127.0.0.1:27999/admin
sh28679| connecting to: 127.0.0.1:27999/admin
sh28673| connecting to: 127.0.0.1:27999/admin
sh28678| connecting to: 127.0.0.1:27999/admin
sh28677| connecting to: 127.0.0.1:27999/admin
sh28674| connecting to: 127.0.0.1:27999/admin
sh28676| connecting to: 127.0.0.1:27999/admin
sh28680| connecting to: 127.0.0.1:27999/admin
[
  { "_id" : ObjectId("512759fd0ab21d8dbc26c258"), "n" : 0, "connectionId" : 65, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a03e070692f322cd7d7"), "n" : 0, "connectionId" : 71, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a07ed5ccb6931921715"), "n" : 0, "connectionId" : 77, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0848f805f60be00be9"), "n" : 0, "connectionId" : 62, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a09c0aeae2be5d7de90"), "n" : 0, "connectionId" : 40, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0a1adb8317b98128d4"), "n" : 0, "connectionId" : 22, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0a6ace55d1e9d0f167"), "n" : 0, "connectionId" : 32, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0b141b3dde763705a8"), "n" : 0, "connectionId" : 82, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0b4b177337d413618d"), "n" : 0, "connectionId" : 76, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a07a2645859aefc01b0"), "n" : 0, "connectionId" : 79, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0c4c89cd2a2ab97f9a"), "n" : 0, "connectionId" : 69, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0c0eae8e4f06e2ec43"), "n" : 0, "connectionId" : 67, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0d602ee101ca58c68e"), "n" : 0, "connectionId" : 47, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0ef607204a15f835d9"), "n" : 0, "connectionId" : 37, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0ae22caa56a197eec3"), "n" : 0, "connectionId" : 35, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0fa919b8decfec4621"), "n" : 0, "connectionId" : 28, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0f8da8cd49860cce88"), "n" : 0, "connectionId" : 26, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0f4b8e8c5a431b0a82"), "n" : 0, "connectionId" : 72, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0fb5c51d592e172b33"), "n" : 0, "connectionId" : 61, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0f5f3a0857dd53f6e0"), "n" : 0, "connectionId" : 52, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a10d2a0701e60434105"), "n" : 0, "connectionId" : 73, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a1047c56c0cb32a3c11"), "n" : 0, "connectionId" : 48, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a108de07db103398c9d"), "n" : 0, "connectionId" : 46, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a113d2310c031c520ad"), "n" : 0, "connectionId" : 42, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a111c4c7bd9c363a1a1"), "n" : 0, "connectionId" : 34, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a0f03716d095da3aa6a"), "n" : 0, "connectionId" : 30, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a12085d37fd7198c635"), "n" : 0, "connectionId" : 83, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a121b88f69db2f82b94"), "n" : 0, "connectionId" : 81, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a12ef5b7611e1012385"), "n" : 0, "connectionId" : 43, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a124e96e6a914540105"), "n" : 0, "connectionId" : 75, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a10d317b7c3f907a044"), "n" : 0, "connectionId" : 63, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a12f99d87ca714451bc"), "n" : 0, "connectionId" : 78, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a12eb30f7db2cb64194"), "n" : 0, "connectionId" : 59, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a124d5a9f9a696913f3"), "n" : 0, "connectionId" : 54, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a1390e672a38fab6905"), "n" : 0, "connectionId" : 44, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a13edb86b1ab9b11532"), "n" : 0, "connectionId" : 25, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a12ff588784386ba8a7"), "n" : 0, "connectionId" : 80, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a14162e58aa501acc72"), "n" : 0, "connectionId" : 74, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a141eea7fc43a28a93b"), "n" : 0, "connectionId" : 58, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a143e46bf7d03c7f513"), "n" : 0, "connectionId" : 53, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a146d98c6a541486a0e"), "n" : 0, "connectionId" : 66, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a1491708d6661bf664d"), "n" : 0, "connectionId" : 51, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a142fe3eb37d4477afa"), "n" : 0, "connectionId" : 56, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a14d920d9b9478b681e"), "n" : 0, "connectionId" : 45, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a14198ed746492339b5"), "n" : 0, "connectionId" : 31, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a148c2ede3bd319e249"), "n" : 0, "connectionId" : 36, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a14fc200da6ebbadfd4"), "n" : 0, "connectionId" : 29, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a14275a4aa8604c1f5d"), "n" : 0, "connectionId" : 24, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a1468f20592aa7aa564"), "n" : 0, "connectionId" : 23, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15909ef20438dcacff"), "n" : 0, "connectionId" : 84, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a148337d03265e9241b"), "n" : 0, "connectionId" : 50, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15fb4a71762045c7d0"), "n" : 0, "connectionId" : 55, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a1587cb14790d9bfa5e"), "n" : 0, "connectionId" : 57, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15a94526b6478b6184"), "n" : 0, "connectionId" : 49, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15307c8ea4e206ae0b"), "n" : 0, "connectionId" : 38, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a159573b190ed3b8291"), "n" : 0, "connectionId" : 41, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15472127f5aaae71fd"), "n" : 0, "connectionId" : 70, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15044182cb6303f671"), "n" : 0, "connectionId" : 27, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15d8360e9d5c3cdadb"), "n" : 0, "connectionId" : 68, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a157e5979114b4c5030"), "n" : 0, "connectionId" : 33, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a158bf64660c636dbcf"), "n" : 0, "connectionId" : 60, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a1523dd13164fc63894"), "n" : 0, "connectionId" : 39, "err" : null, "ok" : 1 },
  { "_id" : ObjectId("51275a15c40919bac336a9a0"), "n" : 0, "connectionId" : 64, "err" : null, "ok" : 1 }
]
[
  { "v" : 1, "key" : { "_id" : 1 }, "ns" : "test.index_multi", "name" : "_id_" },
  { "v" : 1, "key" : { "field91" : 1, "field92" : 1, "field93" : 1 }, "ns" : "test.index_multi", "name" : "field91_1_field92_1_field93_1", "background" : true },
  { "v" : 1, "key" : { "field92" : 1, "field93" : 1, "field94" : 1 }, "ns" : "test.index_multi", "name" : "field92_1_field93_1_field94_1", "background" : true },
  { "v" : 1, "key" : { "field42" : 1, "field43"
: 1 }, "ns" : "test.index_multi", "name" : "field42_1_field43_1", "background" : true }, { "v" : 1, "key" : { "field44" : 1, "field45" : 1 }, "ns" : "test.index_multi", "name" : "field44_1_field45_1", "background" : true }, { "v" : 1, "key" : { "field2" : 1 }, "ns" : "test.index_multi", "name" : "field2_1", "background" : true }, { "v" : 1, "key" : { "field48" : 1, "field49" : 1 }, "ns" : "test.index_multi", "name" : "field48_1_field49_1", "background" : true }, { "v" : 1, "key" : { "field58" : 1, "field59" : 1 }, "ns" : "test.index_multi", "name" : "field58_1_field59_1", "background" : true }, { "v" : 1, "key" : { "field60" : 1, "field61" : 1 }, "ns" : "test.index_multi", "name" : "field60_1_field61_1", "background" : true }, { "v" : 1, "key" : { "field56" : 1, "field57" : 1 }, "ns" : "test.index_multi", "name" : "field56_1_field57_1", "background" : true }, { "v" : 1, "key" : { "field54" : 1, "field55" : 1 }, "ns" : "test.index_multi", "name" : "field54_1_field55_1", "background" : true }, { "v" : 1, "key" : { "field62" : 1, "field63" : 1 }, "ns" : "test.index_multi", "name" : "field62_1_field63_1", "background" : true }, { "v" : 1, "key" : { "field70" : 1, "field71" : 1 }, "ns" : "test.index_multi", "name" : "field70_1_field71_1", "background" : true }, { "v" : 1, "key" : { "field64" : 1, "field65" : 1 }, "ns" : "test.index_multi", "name" : "field64_1_field65_1", "background" : true }, { "v" : 1, "key" : { "field66" : 1, "field67" : 1 }, "ns" : "test.index_multi", "name" : "field66_1_field67_1", "background" : true }, { "v" : 1, "key" : { "field82" : 1, "field83" : 1 }, "ns" : "test.index_multi", "name" : "field82_1_field83_1", "background" : true }, { "v" : 1, "key" : { "field78" : 1, "field79" : 1 }, "ns" : "test.index_multi", "name" : "field78_1_field79_1", "background" : true }, { "v" : 1, "key" : { "field76" : 1, "field77" : 1 }, "ns" : "test.index_multi", "name" : "field76_1_field77_1", "background" : true }, { "v" : 1, "key" : { "field0" : 1 }, "ns" : 
"test.index_multi", "name" : "field0_1", "background" : true }, { "v" : 1, "key" : { "field8" : 1 }, "ns" : "test.index_multi", "name" : "field8_1", "background" : true }, { "v" : 1, "key" : { "field3" : 1 }, "ns" : "test.index_multi", "name" : "field3_1", "background" : true }, { "v" : 1, "key" : { "field9" : 1 }, "ns" : "test.index_multi", "name" : "field9_1", "background" : true }, { "v" : 1, "key" : { "field14" : 1 }, "ns" : "test.index_multi", "name" : "field14_1", "background" : true }, { "v" : 1, "key" : { "field18" : 1 }, "ns" : "test.index_multi", "name" : "field18_1", "background" : true }, { "v" : 1, "key" : { "field15" : 1 }, "ns" : "test.index_multi", "name" : "field15_1", "background" : true }, { "v" : 1, "key" : { "field13" : 1 }, "ns" : "test.index_multi", "name" : "field13_1", "background" : true }, { "v" : 1, "key" : { "field17" : 1 }, "ns" : "test.index_multi", "name" : "field17_1", "background" : true }, { "v" : 1, "key" : { "field20" : 1 }, "ns" : "test.index_multi", "name" : "field20_1", "background" : true }, { "v" : 1, "key" : { "field19" : 1 }, "ns" : "test.index_multi", "name" : "field19_1", "background" : true }, { "v" : 1, "key" : { "field21" : 1 }, "ns" : "test.index_multi", "name" : "field21_1", "background" : true }, { "v" : 1, "key" : { "field27" : 1 }, "ns" : "test.index_multi", "name" : "field27_1", "background" : true }, { "v" : 1, "key" : { "field23" : 1 }, "ns" : "test.index_multi", "name" : "field23_1", "background" : true }, { "v" : 1, "key" : { "field25" : 1 }, "ns" : "test.index_multi", "name" : "field25_1", "background" : true }, { "v" : 1, "key" : { "field29" : 1 }, "ns" : "test.index_multi", "name" : "field29_1", "background" : true }, { "v" : 1, "key" : { "field26" : 1 }, "ns" : "test.index_multi", "name" : "field26_1", "background" : true }, { "v" : 1, "key" : { "field22" : 1 }, "ns" : "test.index_multi", "name" : "field22_1", "background" : true }, { "v" : 1, "key" : { "field24" : 1 }, "ns" : "test.index_multi", "name" 
: "field24_1", "background" : true }, { "v" : 1, "key" : { "field12" : 1 }, "ns" : "test.index_multi", "name" : "field12_1", "background" : true }, { "v" : 1, "key" : { "field11" : 1 }, "ns" : "test.index_multi", "name" : "field11_1", "background" : true }, { "v" : 1, "key" : { "field16" : 1 }, "ns" : "test.index_multi", "name" : "field16_1", "background" : true }, { "v" : 1, "key" : { "field10" : 1 }, "ns" : "test.index_multi", "name" : "field10_1", "background" : true }, { "v" : 1, "key" : { "field5" : 1 }, "ns" : "test.index_multi", "name" : "field5_1", "background" : true }, { "v" : 1, "key" : { "field6" : 1 }, "ns" : "test.index_multi", "name" : "field6_1", "background" : true }, { "v" : 1, "key" : { "field4" : 1 }, "ns" : "test.index_multi", "name" : "field4_1", "background" : true }, { "v" : 1, "key" : { "field80" : 1, "field81" : 1 }, "ns" : "test.index_multi", "name" : "field80_1_field81_1", "background" : true }, { "v" : 1, "key" : { "field1" : 1 }, "ns" : "test.index_multi", "name" : "field1_1", "background" : true }, { "v" : 1, "key" : { "field86" : 1, "field87" : 1 }, "ns" : "test.index_multi", "name" : "field86_1_field87_1", "background" : true }, { "v" : 1, "key" : { "field84" : 1, "field85" : 1 }, "ns" : "test.index_multi", "name" : "field84_1_field85_1", "background" : true }, { "v" : 1, "key" : { "field72" : 1, "field73" : 1 }, "ns" : "test.index_multi", "name" : "field72_1_field73_1", "background" : true }, { "v" : 1, "key" : { "field74" : 1, "field75" : 1 }, "ns" : "test.index_multi", "name" : "field74_1_field75_1", "background" : true }, { "v" : 1, "key" : { "field68" : 1, "field69" : 1 }, "ns" : "test.index_multi", "name" : "field68_1_field69_1", "background" : true }, { "v" : 1, "key" : { "field50" : 1, "field51" : 1 }, "ns" : "test.index_multi", "name" : "field50_1_field51_1", "background" : true }, { "v" : 1, "key" : { "field46" : 1, "field47" : 1 }, "ns" : "test.index_multi", "name" : "field46_1_field47_1", "background" : true }, { "v" : 
1, "key" : { "field36" : 1, "field37" : 1 }, "ns" : "test.index_multi", "name" : "field36_1_field37_1", "background" : true }, { "v" : 1, "key" : { "field52" : 1, "field53" : 1 }, "ns" : "test.index_multi", "name" : "field52_1_field53_1", "background" : true }, { "v" : 1, "key" : { "field34" : 1, "field35" : 1 }, "ns" : "test.index_multi", "name" : "field34_1_field35_1", "background" : true }, { "v" : 1, "key" : { "field90" : 1, "field91" : 1, "field92" : 1 }, "ns" : "test.index_multi", "name" : "field90_1_field91_1_field92_1", "background" : true }, { "v" : 1, "key" : { "field32" : 1, "field33" : 1 }, "ns" : "test.index_multi", "name" : "field32_1_field33_1", "background" : true }, { "v" : 1, "key" : { "field40" : 1, "field41" : 1 }, "ns" : "test.index_multi", "name" : "field40_1_field41_1", "background" : true }, { "v" : 1, "key" : { "field38" : 1, "field39" : 1 }, "ns" : "test.index_multi", "name" : "field38_1_field39_1", "background" : true }, { "v" : 1, "key" : { "field30" : 1, "field31" : 1 }, "ns" : "test.index_multi", "name" : "field30_1_field31_1", "background" : true }, { "v" : 1, "key" : { "field88" : 1, "field89" : 1 }, "ns" : "test.index_multi", "name" : "field88_1_field89_1", "background" : true }, { "v" : 1, "key" : { "field7" : 1 }, "ns" : "test.index_multi", "name" : "field7_1", "background" : true }, { "v" : 1, "key" : { "field28" : 1 }, "ns" : "test.index_multi", "name" : "field28_1", "background" : true } ] Make sure we end up with 64 indexes trying to hint on { "field90" : 1, "field91" : 1, "field92" : 1 } { "cursor" : "BtreeCursor field90_1_field91_1_field92_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 28072, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 28072, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 83, "indexBounds" : { "field90" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field91" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], 
"field92" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field91" : 1, "field92" : 1, "field93" : 1 } { "cursor" : "BtreeCursor field91_1_field92_1_field93_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 1, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field91" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field92" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field93" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field92" : 1, "field93" : 1, "field94" : 1 } { "cursor" : "BtreeCursor field92_1_field93_1_field94_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field92" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field93" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field94" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field30" : 1, "field31" : 1 } { "cursor" : "BtreeCursor field30_1_field31_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27933, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27933, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 68, "indexBounds" : { "field30" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field31" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field32" : 1, "field33" : 1 } { "cursor" : "BtreeCursor 
field32_1_field33_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field32" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field33" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field34" : 1, "field35" : 1 } { "cursor" : "BtreeCursor field34_1_field35_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field34" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field35" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field36" : 1, "field37" : 1 } { "cursor" : "BtreeCursor field36_1_field37_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field36" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field37" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field38" : 1, "field39" : 1 } { "cursor" : "BtreeCursor field38_1_field39_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 1, "nChunkSkips" : 0, "millis" : 29, "indexBounds" : { "field38" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field39" : [ [ { "$minElement" : 
1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field40" : 1, "field41" : 1 } { "cursor" : "BtreeCursor field40_1_field41_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 28031, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 28031, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 72, "indexBounds" : { "field40" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field41" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field42" : 1, "field43" : 1 } { "cursor" : "BtreeCursor field42_1_field43_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field42" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field43" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field44" : 1, "field45" : 1 } { "cursor" : "BtreeCursor field44_1_field45_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field44" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field45" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field46" : 1, "field47" : 1 } { "cursor" : "BtreeCursor field46_1_field47_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, 
"nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field46" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field47" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field48" : 1, "field49" : 1 } { "cursor" : "BtreeCursor field48_1_field49_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field48" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field49" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field50" : 1, "field51" : 1 } { "cursor" : "BtreeCursor field50_1_field51_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27928, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27928, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 70, "indexBounds" : { "field50" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field51" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field52" : 1, "field53" : 1 } { "cursor" : "BtreeCursor field52_1_field53_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 25, "indexBounds" : { "field52" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field53" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field54" : 1, "field55" : 1 } { "cursor" : "BtreeCursor field54_1_field55_1", "isMultiKey" : false, "n" : 
10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 25, "indexBounds" : { "field54" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field55" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field56" : 1, "field57" : 1 } { "cursor" : "BtreeCursor field56_1_field57_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 26, "indexBounds" : { "field56" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field57" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field58" : 1, "field59" : 1 } { "cursor" : "BtreeCursor field58_1_field59_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 25, "indexBounds" : { "field58" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field59" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field60" : 1, "field61" : 1 } { "cursor" : "BtreeCursor field60_1_field61_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27916, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27916, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 66, "indexBounds" : { "field60" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field61" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : 
"bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field62" : 1, "field63" : 1 } { "cursor" : "BtreeCursor field62_1_field63_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field62" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field63" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field64" : 1, "field65" : 1 } { "cursor" : "BtreeCursor field64_1_field65_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field64" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field65" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field66" : 1, "field67" : 1 } { "cursor" : "BtreeCursor field66_1_field67_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field66" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field67" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field68" : 1, "field69" : 1 } { "cursor" : "BtreeCursor field68_1_field69_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, 
"indexBounds" : { "field68" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field69" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field70" : 1, "field71" : 1 } { "cursor" : "BtreeCursor field70_1_field71_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27942, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27942, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 65, "indexBounds" : { "field70" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field71" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field72" : 1, "field73" : 1 } { "cursor" : "BtreeCursor field72_1_field73_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field72" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field73" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field74" : 1, "field75" : 1 } { "cursor" : "BtreeCursor field74_1_field75_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field74" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field75" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field76" : 1, "field77" : 1 } { "cursor" : "BtreeCursor field76_1_field77_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 
10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field76" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field77" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field78" : 1, "field79" : 1 } { "cursor" : "BtreeCursor field78_1_field79_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field78" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field79" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field80" : 1, "field81" : 1 } { "cursor" : "BtreeCursor field80_1_field81_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27968, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27968, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 61, "indexBounds" : { "field80" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field81" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field82" : 1, "field83" : 1 } { "cursor" : "BtreeCursor field82_1_field83_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field82" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field83" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint 
on { "field84" : 1, "field85" : 1 } { "cursor" : "BtreeCursor field84_1_field85_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field84" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field85" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field86" : 1, "field87" : 1 } { "cursor" : "BtreeCursor field86_1_field87_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field86" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field87" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field88" : 1, "field89" : 1 } { "cursor" : "BtreeCursor field88_1_field89_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field88" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ], "field89" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field0" : 1 } { "cursor" : "BtreeCursor field0_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27945, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27945, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 60, "indexBounds" : { "field0" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, 
"server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field1" : 1 } { "cursor" : "BtreeCursor field1_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 21, "indexBounds" : { "field1" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field2" : 1 } { "cursor" : "BtreeCursor field2_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 1, "nChunkSkips" : 0, "millis" : 50, "indexBounds" : { "field2" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field3" : 1 } { "cursor" : "BtreeCursor field3_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 30, "indexBounds" : { "field3" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field4" : 1 } { "cursor" : "BtreeCursor field4_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field4" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field5" : 1 } { "cursor" : "BtreeCursor field5_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, 
"nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field5" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field6" : 1 } { "cursor" : "BtreeCursor field6_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field6" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field7" : 1 } { "cursor" : "BtreeCursor field7_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 24, "indexBounds" : { "field7" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field8" : 1 } { "cursor" : "BtreeCursor field8_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 21, "indexBounds" : { "field8" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field9" : 1 } { "cursor" : "BtreeCursor field9_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 21, "indexBounds" : { "field9" : [ [ { "$minElement" : 1 }, { 
"$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field10" : 1 } { "cursor" : "BtreeCursor field10_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 28040, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 28040, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 61, "indexBounds" : { "field10" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field11" : 1 } { "cursor" : "BtreeCursor field11_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field11" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field12" : 1 } { "cursor" : "BtreeCursor field12_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field12" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field13" : 1 } { "cursor" : "BtreeCursor field13_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 21, "indexBounds" : { "field13" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field14" : 1 } { "cursor" : "BtreeCursor field14_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" 
: 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field14" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field15" : 1 } { "cursor" : "BtreeCursor field15_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 21, "indexBounds" : { "field15" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field16" : 1 } { "cursor" : "BtreeCursor field16_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field16" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field17" : 1 } { "cursor" : "BtreeCursor field17_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field17" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field18" : 1 } { "cursor" : "BtreeCursor field18_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { 
"field18" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field19" : 1 } { "cursor" : "BtreeCursor field19_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 23, "indexBounds" : { "field19" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field20" : 1 } { "cursor" : "BtreeCursor field20_1", "isMultiKey" : true, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 27911, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 27911, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 65, "indexBounds" : { "field20" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field21" : 1 } { "cursor" : "BtreeCursor field21_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field21" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field22" : 1 } { "cursor" : "BtreeCursor field22_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field22" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field23" : 1 } { "cursor" : "BtreeCursor field23_1", "isMultiKey" 
: false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 22, "indexBounds" : { "field23" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field24" : 1 } { "cursor" : "BtreeCursor field24_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 17, "indexBounds" : { "field24" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field25" : 1 } { "cursor" : "BtreeCursor field25_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 16, "indexBounds" : { "field25" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field26" : 1 } { "cursor" : "BtreeCursor field26_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 16, "indexBounds" : { "field26" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field27" : 1 } { "cursor" : "BtreeCursor field27_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, 
"nChunkSkips" : 0, "millis" : 16, "indexBounds" : { "field27" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field28" : 1 } { "cursor" : "BtreeCursor field28_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 16, "indexBounds" : { "field28" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } trying to hint on { "field29" : 1 } { "cursor" : "BtreeCursor field29_1", "isMultiKey" : false, "n" : 10000, "nscannedObjects" : 10000, "nscanned" : 10000, "nscannedObjectsAllPlans" : 10000, "nscannedAllPlans" : 10000, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 16, "indexBounds" : { "field29" : [ [ { "$minElement" : 1 }, { "$maxElement" : 1 } ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:27999" } SUCCESS! Fri Feb 22 11:44:26.742 [conn21] end connection 127.0.0.1:56795 (0 connections now open) 41.2386 seconds Fri Feb 22 11:44:26.761 [initandlisten] connection accepted from 127.0.0.1:56171 #85 (1 connection now open) Fri Feb 22 11:44:26.761 [conn85] end connection 127.0.0.1:56171 (0 connections now open) ******************************************* Test : index_retry.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_retry.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/index_retry.js";TestData.testFile = "index_retry.js";TestData.testName = "index_retry";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:44:26 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:44:26.899 [initandlisten] connection accepted from 127.0.0.1:40271 #86 (1 connection now open) null Fri Feb 22 11:44:26.911 [conn86] end connection 127.0.0.1:40271 (0 connections now open) 163.8041 ms Fri Feb 22 11:44:26.926 [initandlisten] connection accepted from 127.0.0.1:64143 #87 (1 connection now open) Fri Feb 22 11:44:26.927 [conn87] end connection 127.0.0.1:64143 (0 connections now open) ******************************************* Test : large_chunk.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/large_chunk.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/large_chunk.js";TestData.testFile = "large_chunk.js";TestData.testName = "large_chunk";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:44:26 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:44:27.071 [initandlisten] connection accepted from 127.0.0.1:45328 #88 (1 connection now open) null Resetting db path '/data/db/large_chunk0' Fri Feb 22 11:44:27.220 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/large_chunk0 --setParameter enableTestCommands=1 m30000| Fri Feb 22 11:44:27.314 [initandlisten] MongoDB starting : pid=28725 port=30000 dbpath=/data/db/large_chunk0 64-bit host=bs-smartos-x86-64-1.10gen.cc m30000| Fri Feb 22 11:44:27.314 [initandlisten] m30000| Fri Feb 22 11:44:27.314 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30000| Fri Feb 22 11:44:27.314 [initandlisten] ** uses to detect impending page faults. 
m30000| Fri Feb 22 11:44:27.314 [initandlisten] ** This may result in slower performance for certain use cases m30000| Fri Feb 22 11:44:27.314 [initandlisten] m30000| Fri Feb 22 11:44:27.314 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30000| Fri Feb 22 11:44:27.314 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30000| Fri Feb 22 11:44:27.314 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30000| Fri Feb 22 11:44:27.314 [initandlisten] allocator: system m30000| Fri Feb 22 11:44:27.314 [initandlisten] options: { dbpath: "/data/db/large_chunk0", port: 30000, setParameter: [ "enableTestCommands=1" ] } m30000| Fri Feb 22 11:44:27.314 [initandlisten] journal dir=/data/db/large_chunk0/journal m30000| Fri Feb 22 11:44:27.315 [initandlisten] recover : no journal files present, no recovery needed m30000| Fri Feb 22 11:44:27.329 [FileAllocator] allocating new datafile /data/db/large_chunk0/local.ns, filling with zeroes... m30000| Fri Feb 22 11:44:27.329 [FileAllocator] creating directory /data/db/large_chunk0/_tmp m30000| Fri Feb 22 11:44:27.329 [FileAllocator] done allocating datafile /data/db/large_chunk0/local.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 11:44:27.329 [FileAllocator] allocating new datafile /data/db/large_chunk0/local.0, filling with zeroes... 
m30000| Fri Feb 22 11:44:27.329 [FileAllocator] done allocating datafile /data/db/large_chunk0/local.0, size: 64MB, took 0 secs m30000| Fri Feb 22 11:44:27.333 [websvr] admin web console waiting for connections on port 31000 m30000| Fri Feb 22 11:44:27.333 [initandlisten] waiting for connections on port 30000 m30000| Fri Feb 22 11:44:27.422 [initandlisten] connection accepted from 127.0.0.1:47988 #1 (1 connection now open) Resetting db path '/data/db/large_chunk1' Fri Feb 22 11:44:27.425 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/large_chunk1 --setParameter enableTestCommands=1 m30001| Fri Feb 22 11:44:27.493 [initandlisten] MongoDB starting : pid=28726 port=30001 dbpath=/data/db/large_chunk1 64-bit host=bs-smartos-x86-64-1.10gen.cc m30001| Fri Feb 22 11:44:27.494 [initandlisten] m30001| Fri Feb 22 11:44:27.494 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30001| Fri Feb 22 11:44:27.494 [initandlisten] ** uses to detect impending page faults. 
m30001| Fri Feb 22 11:44:27.494 [initandlisten] ** This may result in slower performance for certain use cases m30001| Fri Feb 22 11:44:27.494 [initandlisten] m30001| Fri Feb 22 11:44:27.494 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30001| Fri Feb 22 11:44:27.494 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30001| Fri Feb 22 11:44:27.494 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30001| Fri Feb 22 11:44:27.494 [initandlisten] allocator: system m30001| Fri Feb 22 11:44:27.494 [initandlisten] options: { dbpath: "/data/db/large_chunk1", port: 30001, setParameter: [ "enableTestCommands=1" ] } m30001| Fri Feb 22 11:44:27.494 [initandlisten] journal dir=/data/db/large_chunk1/journal m30001| Fri Feb 22 11:44:27.494 [initandlisten] recover : no journal files present, no recovery needed m30001| Fri Feb 22 11:44:27.523 [FileAllocator] allocating new datafile /data/db/large_chunk1/local.ns, filling with zeroes... m30001| Fri Feb 22 11:44:27.523 [FileAllocator] creating directory /data/db/large_chunk1/_tmp m30001| Fri Feb 22 11:44:27.523 [FileAllocator] done allocating datafile /data/db/large_chunk1/local.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:44:27.523 [FileAllocator] allocating new datafile /data/db/large_chunk1/local.0, filling with zeroes... 
m30001| Fri Feb 22 11:44:27.524 [FileAllocator] done allocating datafile /data/db/large_chunk1/local.0, size: 64MB, took 0 secs m30001| Fri Feb 22 11:44:27.526 [initandlisten] waiting for connections on port 30001 m30001| Fri Feb 22 11:44:27.526 [websvr] admin web console waiting for connections on port 31001 m30001| Fri Feb 22 11:44:27.626 [initandlisten] connection accepted from 127.0.0.1:42270 #1 (1 connection now open) "localhost:30000" m30000| Fri Feb 22 11:44:27.627 [initandlisten] connection accepted from 127.0.0.1:49130 #2 (2 connections now open) ShardingTest large_chunk : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] } Fri Feb 22 11:44:27.634 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -vv --chunkSize 1024 --setParameter enableTestCommands=1 m30999| Fri Feb 22 11:44:27.664 warning: running with 1 config server should be done only for testing purposes and is not recommended for production m30999| Fri Feb 22 11:44:27.665 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=28727 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage) m30999| Fri Feb 22 11:44:27.665 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30999| Fri Feb 22 11:44:27.665 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30999| Fri Feb 22 11:44:27.665 [mongosMain] options: { chunkSize: 1024, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], vv: true } m30999| Fri Feb 22 11:44:27.665 [mongosMain] config string : localhost:30000 m30999| Fri Feb 22 11:44:27.665 [mongosMain] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:44:27.666 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.666 [mongosMain] connected connection! 
m30000| Fri Feb 22 11:44:27.666 [initandlisten] connection accepted from 127.0.0.1:40415 #3 (3 connections now open) m30999| Fri Feb 22 11:44:27.667 BackgroundJob starting: CheckConfigServers m30999| Fri Feb 22 11:44:27.667 [mongosMain] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:44:27.667 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.667 [mongosMain] connected connection! m30000| Fri Feb 22 11:44:27.667 [initandlisten] connection accepted from 127.0.0.1:60065 #4 (4 connections now open) m30000| Fri Feb 22 11:44:27.668 [conn4] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:44:27.681 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 11:44:27.681 [mongosMain] skew from remote server localhost:30000 found: 0 m30999| Fri Feb 22 11:44:27.681 [mongosMain] skew from remote server localhost:30000 found: 0 m30999| Fri Feb 22 11:44:27.682 [mongosMain] skew from remote server localhost:30000 found: -1 m30999| Fri Feb 22 11:44:27.682 [mongosMain] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds. 
m30999| Fri Feb 22 11:44:27.682 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838 )
m30999| Fri Feb 22 11:44:27.682 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:44:27.682 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 11:44:27.682 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838:mongosMain:5758",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:44:27 2013" },
m30999|   "why" : "upgrading config database to new format v4",
m30999|   "ts" : { "$oid" : "51275a1b4bf5e77a20960cf2" } }
m30999| { "_id" : "configUpgrade",
m30999|   "state" : 0 }
m30000| Fri Feb 22 11:44:27.682 [FileAllocator] allocating new datafile /data/db/large_chunk0/config.ns, filling with zeroes...
m30000| Fri Feb 22 11:44:27.683 [FileAllocator] done allocating datafile /data/db/large_chunk0/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:44:27.683 [FileAllocator] allocating new datafile /data/db/large_chunk0/config.0, filling with zeroes...
m30000| Fri Feb 22 11:44:27.683 [FileAllocator] done allocating datafile /data/db/large_chunk0/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:44:27.683 [FileAllocator] allocating new datafile /data/db/large_chunk0/config.1, filling with zeroes... 
m30000| Fri Feb 22 11:44:27.684 [FileAllocator] done allocating datafile /data/db/large_chunk0/config.1, size: 128MB, took 0 secs m30000| Fri Feb 22 11:44:27.688 [conn3] build index config.lockpings { _id: 1 } m30000| Fri Feb 22 11:44:27.690 [conn3] build index done. scanned 0 total records. 0.002 secs m30000| Fri Feb 22 11:44:27.691 [conn4] build index config.locks { _id: 1 } m30000| Fri Feb 22 11:44:27.691 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:44:27.692 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:44:27 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838', sleeping for 30000ms m30000| Fri Feb 22 11:44:27.692 [conn3] build index config.lockpings { ping: new Date(1) } m30000| Fri Feb 22 11:44:27.693 [conn3] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 11:44:27.694 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838' acquired, ts : 51275a1b4bf5e77a20960cf2 m30999| Fri Feb 22 11:44:27.696 [mongosMain] starting upgrade of config server from v0 to v4 m30999| Fri Feb 22 11:44:27.696 [mongosMain] starting next upgrade step from v0 to v4 m30999| Fri Feb 22 11:44:27.697 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:27-51275a1b4bf5e77a20960cf3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533467696), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } } m30000| Fri Feb 22 11:44:27.697 [conn4] build index config.changelog { _id: 1 } m30000| Fri Feb 22 11:44:27.697 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:44:27.698 [mongosMain] writing initial config version at v4 m30000| Fri Feb 22 11:44:27.698 [conn4] build index config.version { _id: 1 } m30000| Fri Feb 22 11:44:27.699 [conn4] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 11:44:27.699 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:27-51275a1b4bf5e77a20960cf5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533467699), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } } m30999| Fri Feb 22 11:44:27.699 [mongosMain] upgrade of config server to v4 successful m30999| Fri Feb 22 11:44:27.700 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838' unlocked. m30000| Fri Feb 22 11:44:27.701 [conn3] build index config.settings { _id: 1 } m30999| Fri Feb 22 11:44:27.702 [websvr] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 11:44:27.702 BackgroundJob starting: Balancer m30999| Fri Feb 22 11:44:27.702 [Balancer] about to contact config servers and shards m30999| Fri Feb 22 11:44:27.702 BackgroundJob starting: cursorTimeout m30999| Fri Feb 22 11:44:27.702 BackgroundJob starting: PeriodicTask::Runner m30999| Fri Feb 22 11:44:27.702 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 11:44:27.702 [websvr] admin web console waiting for connections on port 31999 m30000| Fri Feb 22 11:44:27.702 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 11:44:27.703 [mongosMain] waiting for connections on port 30999 m30000| Fri Feb 22 11:44:27.703 [conn3] build index config.chunks { _id: 1 } m30000| Fri Feb 22 11:44:27.705 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 11:44:27.705 [conn3] info: creating collection config.chunks on add index m30000| Fri Feb 22 11:44:27.705 [conn3] build index config.chunks { ns: 1, min: 1 } m30000| Fri Feb 22 11:44:27.706 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:44:27.706 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } m30000| Fri Feb 22 11:44:27.706 [conn3] build index done. 
scanned 0 total records. 0 secs m30000| Fri Feb 22 11:44:27.706 [conn3] build index config.chunks { ns: 1, lastmod: 1 } m30000| Fri Feb 22 11:44:27.707 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:44:27.708 [conn3] build index config.shards { _id: 1 } m30000| Fri Feb 22 11:44:27.709 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:44:27.709 [conn3] info: creating collection config.shards on add index m30000| Fri Feb 22 11:44:27.709 [conn3] build index config.shards { host: 1 } m30000| Fri Feb 22 11:44:27.710 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 11:44:27.710 [Balancer] config servers and shards contacted successfully m30999| Fri Feb 22 11:44:27.710 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:44:27 m30999| Fri Feb 22 11:44:27.710 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 11:44:27.710 [Balancer] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:44:27.711 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 11:44:27.711 [conn3] build index config.mongos { _id: 1 } m30999| Fri Feb 22 11:44:27.711 [Balancer] connected connection! m30000| Fri Feb 22 11:44:27.711 [initandlisten] connection accepted from 127.0.0.1:51072 #5 (5 connections now open) m30000| Fri Feb 22 11:44:27.712 [conn3] build index done. scanned 0 total records. 
0 secs
m30999| Fri Feb 22 11:44:27.712 [Balancer] Refreshing MaxChunkSize: 1024
m30999| Fri Feb 22 11:44:27.712 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838 )
m30999| Fri Feb 22 11:44:27.712 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 11:44:27.712 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:44:27 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51275a1b4bf5e77a20960cf7" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0 }
m30999| Fri Feb 22 11:44:27.713 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838' acquired, ts : 51275a1b4bf5e77a20960cf7
m30999| Fri Feb 22 11:44:27.713 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:44:27.713 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:44:27.713 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:44:27.713 [Balancer] no collections to balance
m30999| Fri Feb 22 11:44:27.713 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:44:27.713 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:44:27.713 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533467:16838' unlocked. 
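The balancer lines above show the acquire step of the distributed lock: mongos attempts to replace a config.locks document whose state is 0 with one carrying state 1 plus who/why/ts, and only proceeds if that swap succeeds. A rough in-memory sketch of that test-and-set pattern (this is an illustration of the protocol visible in the log, not MongoDB's implementation; all class and method names are invented):

```python
import threading
import uuid

class DistributedLockTable:
    """Toy stand-in for the config.locks collection."""

    def __init__(self):
        self._docs = {}
        self._mutex = threading.Lock()  # models the server making the swap atomic

    def try_acquire(self, name, who, why):
        """Flip state 0 -> 1 atomically; return a lock ts on success, None if held."""
        with self._mutex:
            doc = self._docs.setdefault(name, {"_id": name, "state": 0})
            if doc["state"] != 0:
                return None  # another process holds the lock
            doc.update(state=1, who=who, why=why, ts=uuid.uuid4().hex)
            return doc["ts"]

    def unlock(self, name, ts):
        """Release only if we still hold it (ts must match)."""
        with self._mutex:
            doc = self._docs.get(name)
            if doc and doc.get("ts") == ts:
                doc["state"] = 0

locks = DistributedLockTable()
ts = locks.try_acquire("balancer", "host:30999:Balancer", "doing balance round")
assert ts is not None                                  # acquired, like the log
assert locks.try_acquire("balancer", "other", "doing balance round") is None
locks.unlock("balancer", ts)                           # "... unlocked."
```

The lock-timeout and LockPinger heartbeat seen in the log exist so a crashed holder's lock can eventually be force-taken; this sketch omits that.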
m30999| Fri Feb 22 11:44:27.835 [mongosMain] connection accepted from 127.0.0.1:58505 #1 (1 connection now open) ShardingTest undefined going to add shard : localhost:30000 m30999| Fri Feb 22 11:44:27.837 [conn1] couldn't find database [admin] in config db m30000| Fri Feb 22 11:44:27.838 [conn3] build index config.databases { _id: 1 } m30000| Fri Feb 22 11:44:27.839 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:44:27.839 [conn1] put [admin] on: config:localhost:30000 m30999| Fri Feb 22 11:44:27.841 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" } { "shardAdded" : "shard0000", "ok" : 1 } ShardingTest undefined going to add shard : localhost:30001 m30999| Fri Feb 22 11:44:27.842 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:44:27.842 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.842 [conn1] connected connection! m30001| Fri Feb 22 11:44:27.842 [initandlisten] connection accepted from 127.0.0.1:55659 #2 (2 connections now open) m30999| Fri Feb 22 11:44:27.844 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" } { "shardAdded" : "shard0001", "ok" : 1 } m30999| Fri Feb 22 11:44:27.845 [conn1] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:44:27.845 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.845 [conn1] connected connection! 
m30000| Fri Feb 22 11:44:27.845 [initandlisten] connection accepted from 127.0.0.1:63378 #6 (6 connections now open) m30999| Fri Feb 22 11:44:27.845 [conn1] creating WriteBackListener for: localhost:30000 serverID: 51275a1b4bf5e77a20960cf6 m30999| Fri Feb 22 11:44:27.845 [conn1] initializing shard connection to localhost:30000 m30999| Fri Feb 22 11:44:27.845 BackgroundJob starting: WriteBackListener-localhost:30000 m30999| Fri Feb 22 11:44:27.845 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('51275a1b4bf5e77a20960cf6'), authoritative: true } m30999| Fri Feb 22 11:44:27.845 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:44:27.846 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.846 [conn1] connected connection! m30001| Fri Feb 22 11:44:27.846 [initandlisten] connection accepted from 127.0.0.1:39849 #3 (3 connections now open) m30999| Fri Feb 22 11:44:27.846 [conn1] creating WriteBackListener for: localhost:30001 serverID: 51275a1b4bf5e77a20960cf6 m30999| Fri Feb 22 11:44:27.846 [conn1] initializing shard connection to localhost:30001 m30999| Fri Feb 22 11:44:27.846 BackgroundJob starting: WriteBackListener-localhost:30001 m30999| Fri Feb 22 11:44:27.846 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('51275a1b4bf5e77a20960cf6'), authoritative: true } m30999| Fri Feb 22 11:44:27.846 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.settings", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:44:27.846 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:44:27.846 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 
11:44:27.846 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:44:27.846 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:44:27.846 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:44:27.846 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "chunksize", value: 1024 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "_id" : "chunksize", "value" : 1024 } { "_id" : "balancer", "stopped" : true } m30999| Fri Feb 22 11:44:27.847 [conn1] couldn't find database [test] in config db m30999| Fri Feb 22 11:44:27.848 [conn1] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:44:27.848 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.848 [conn1] connected connection! m30000| Fri Feb 22 11:44:27.848 [initandlisten] connection accepted from 127.0.0.1:46337 #7 (7 connections now open) m30999| Fri Feb 22 11:44:27.848 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:44:27.849 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:44:27.849 [conn1] connected connection! 
m30001| Fri Feb 22 11:44:27.849 [initandlisten] connection accepted from 127.0.0.1:35501 #4 (4 connections now open)
m30999| Fri Feb 22 11:44:27.849 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 11:44:27.849 [conn1] put [test] on: shard0001:localhost:30001
m30001| Fri Feb 22 11:44:27.850 [FileAllocator] allocating new datafile /data/db/large_chunk1/test.ns, filling with zeroes...
m30001| Fri Feb 22 11:44:27.850 [FileAllocator] done allocating datafile /data/db/large_chunk1/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:44:27.850 [FileAllocator] allocating new datafile /data/db/large_chunk1/test.0, filling with zeroes...
m30001| Fri Feb 22 11:44:27.850 [FileAllocator] done allocating datafile /data/db/large_chunk1/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:44:27.850 [FileAllocator] allocating new datafile /data/db/large_chunk1/test.1, filling with zeroes...
m30001| Fri Feb 22 11:44:27.851 [FileAllocator] done allocating datafile /data/db/large_chunk1/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:44:27.854 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 11:44:27.856 [conn3] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 11:44:28.642 [FileAllocator] allocating new datafile /data/db/large_chunk1/test.2, filling with zeroes...
m30001| Fri Feb 22 11:44:28.642 [FileAllocator] done allocating datafile /data/db/large_chunk1/test.2, size: 256MB, took 0 secs
m30001| Fri Feb 22 11:44:30.462 [FileAllocator] allocating new datafile /data/db/large_chunk1/test.3, filling with zeroes...
m30001| Fri Feb 22 11:44:30.463 [FileAllocator] done allocating datafile /data/db/large_chunk1/test.3, size: 512MB, took 0 secs
m30999| Fri Feb 22 11:44:33.714 [Balancer] Refreshing MaxChunkSize: 1024
m30999| Fri Feb 22 11:44:33.714 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 11:44:34.740 [FileAllocator] allocating new datafile /data/db/large_chunk1/test.4, filling with zeroes...
m30001| Fri Feb 22 11:44:34.740 [FileAllocator] done allocating datafile /data/db/large_chunk1/test.4, size: 1024MB, took 0 secs
m30999| Fri Feb 22 11:44:34.833 [conn1] enabling sharding on: test
m30999| Fri Feb 22 11:44:34.835 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:44:34.835 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:44:34.835 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 51275a224bf5e77a20960cf8
m30999| Fri Feb 22 11:44:34.836 [conn1] major version query from 0|0||51275a224bf5e77a20960cf8 and over 0 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 0|0 } } ] }
m30999| Fri Feb 22 11:44:34.836 [conn1] loaded 1 chunks into new chunk manager for test.foo with version 1|0||51275a224bf5e77a20960cf8
m30999| Fri Feb 22 11:44:34.836 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||51275a224bf5e77a20960cf8 based on: (empty)
m30000| Fri Feb 22 11:44:34.837 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:44:34.840 [conn3] build index done. scanned 0 total records. 0.002 secs
m30999| Fri Feb 22 11:44:34.840 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275a224bf5e77a20960cf8 manager: 0x1183c70
m30999| Fri Feb 22 11:44:34.840 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275a224bf5e77a20960cf8'), serverID: ObjectId('51275a1b4bf5e77a20960cf6'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f890 2
m30999| Fri Feb 22 11:44:34.841 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 11:44:34.841 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275a224bf5e77a20960cf8 manager: 0x1183c70
m30999| Fri Feb 22 11:44:34.841 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275a224bf5e77a20960cf8'), serverID: ObjectId('51275a1b4bf5e77a20960cf6'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117f890 2
m30001| Fri Feb 22 11:44:34.841 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 11:44:34.841 [initandlisten] connection accepted from 127.0.0.1:34497 #8 (8 connections now open)
m30999| Fri Feb 22 11:44:34.842 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: {} }, fields: {} } and CInfo { v_ns: "config.chunks", filter: {} }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 1.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "test" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:44:34.843 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.844 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Checkpoint 1a
m30999| Fri Feb 22 11:44:34.845 [conn1] CMD: movechunk: { movechunk: "test.foo", find: { _id: 1.0 }, to: "localhost:30000", maxChunkSizeBytes: 209715200.0 }
m30999| Fri Feb 22 11:44:34.845 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:44:34.846 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 209715200, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30000| Fri Feb 22 11:44:34.846 [initandlisten] connection accepted from 127.0.0.1:35679 #9 (9 connections now open)
m30001| Fri Feb 22 11:44:34.847 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361533474:4411 (sleeping for 30000ms)
m30001| Fri Feb 22 11:44:34.848 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361533474:4411' acquired, ts : 51275a22d3fea2f11001b5a2
m30001| Fri Feb 22 11:44:34.848 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:34-51275a22d3fea2f11001b5a3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:35501", time: new Date(1361533474848), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:44:34.849 [conn4] moveChunk request accepted at version 1|0||51275a224bf5e77a20960cf8
m30001| Fri Feb 22 11:44:34.898 [conn4] warning: can't move chunk of size (approximately) 427843728 because maximum size allowed to move is 209715200 ns: test.foo { _id: MinKey } -> { _id: MaxKey }
m30001| Fri Feb 22 11:44:34.898 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:44:34.898 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:44:34.901 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361533474:4411' unlocked.
m30001| Fri Feb 22 11:44:34.901 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:34-51275a22d3fea2f11001b5a4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:35501", time: new Date(1361533474901), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 3, note: "aborted" } }
m30999| Fri Feb 22 11:44:34.901 [conn1] moveChunk result: { chunkTooBig: true, estimatedChunkSize: 427843728, ok: 0.0, errmsg: "chunk too big to move" }
checkpoint 1b
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:44:34.902 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('51275a224bf5e77a20960cf8'), ns: "test.foo", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:44:34.903 [conn1] CMD: movechunk: { movechunk: "test.foo", find: { _id: 1.0 }, to: "localhost:30000" }
m30999| Fri Feb 22 11:44:34.903 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:44:34.903 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 1073741824, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Fri Feb 22 11:44:34.904 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361533474:4411' acquired, ts : 51275a22d3fea2f11001b5a5
m30001| Fri Feb 22 11:44:34.904 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:34-51275a22d3fea2f11001b5a6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:35501", time: new Date(1361533474904), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:44:34.904 [conn4] moveChunk request accepted at version 1|0||51275a224bf5e77a20960cf8
m30001| Fri Feb 22 11:44:34.957 [conn4] moveChunk number of documents: 41847
m30000| Fri Feb 22 11:44:34.957 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: MaxKey } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:44:34.958 [initandlisten] connection accepted from 127.0.0.1:61925 #5 (5 connections now open)
m30000| Fri Feb 22 11:44:34.959 [FileAllocator] allocating new datafile /data/db/large_chunk0/test.ns, filling with zeroes...
m30000| Fri Feb 22 11:44:34.959 [FileAllocator] done allocating datafile /data/db/large_chunk0/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:44:34.959 [FileAllocator] allocating new datafile /data/db/large_chunk0/test.0, filling with zeroes...
m30000| Fri Feb 22 11:44:34.959 [FileAllocator] done allocating datafile /data/db/large_chunk0/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:44:34.960 [FileAllocator] allocating new datafile /data/db/large_chunk0/test.1, filling with zeroes...
m30000| Fri Feb 22 11:44:34.960 [FileAllocator] done allocating datafile /data/db/large_chunk0/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:44:34.963 [migrateThread] build index test.foo { _id: 1 }
m30000| Fri Feb 22 11:44:34.964 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:44:34.964 [migrateThread] info: creating collection test.foo on add index
m30001| Fri Feb 22 11:44:34.968 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:34.978 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:34.988 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:34.998 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:35.014 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:35.047 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:35.111 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:35.239 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1290, clonedBytes: 12963210, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:35.495 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2942, clonedBytes: 29564158, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:44:35.719 [FileAllocator] allocating new datafile /data/db/large_chunk0/test.2, filling with zeroes...
m30000| Fri Feb 22 11:44:35.719 [FileAllocator] done allocating datafile /data/db/large_chunk0/test.2, size: 256MB, took 0 secs
m30001| Fri Feb 22 11:44:36.007 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6672, clonedBytes: 67046928, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:37.032 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 14324, clonedBytes: 143941876, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:44:37.198 [FileAllocator] allocating new datafile /data/db/large_chunk0/test.3, filling with zeroes...
m30000| Fri Feb 22 11:44:37.198 [FileAllocator] done allocating datafile /data/db/large_chunk0/test.3, size: 512MB, took 0 secs
m30001| Fri Feb 22 11:44:38.056 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 20774, clonedBytes: 208757926, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:39.080 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 27959, clonedBytes: 280959991, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:44:39.715 [Balancer] Refreshing MaxChunkSize: 1024
m30999| Fri Feb 22 11:44:39.715 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 11:44:40.105 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 36185, clonedBytes: 363623065, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:44:40.735 [FileAllocator] allocating new datafile /data/db/large_chunk0/test.4, filling with zeroes...
m30000| Fri Feb 22 11:44:40.735 [FileAllocator] done allocating datafile /data/db/large_chunk0/test.4, size: 1024MB, took 0 secs
m30000| Fri Feb 22 11:44:40.796 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:44:40.796 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: MaxKey }
m30000| Fri Feb 22 11:44:40.831 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: MaxKey }
m30001| Fri Feb 22 11:44:41.129 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 41847, clonedBytes: 420520503, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:44:41.129 [conn4] moveChunk setting version to: 2|0||51275a224bf5e77a20960cf8
m30000| Fri Feb 22 11:44:41.129 [initandlisten] connection accepted from 127.0.0.1:46813 #10 (10 connections now open)
m30000| Fri Feb 22 11:44:41.129 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 11:44:41.134 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: MaxKey }
m30000| Fri Feb 22 11:44:41.134 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: MaxKey }
m30000| Fri Feb 22 11:44:41.134 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:41-51275a29dc35fdc25e177e85", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533481134), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 5832, step4 of 5: 0, step5 of 5: 337 } }
m30000| Fri Feb 22 11:44:41.134 [initandlisten] connection accepted from 127.0.0.1:41861 #11 (11 connections now open)
m30001| Fri Feb 22 11:44:41.139 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 41847, clonedBytes: 420520503, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:44:41.140 [conn4] moveChunk moved last chunk out for collection 'test.foo'
m30000| Fri Feb 22 11:44:41.140 [initandlisten] connection accepted from 127.0.0.1:34283 #12 (12 connections now open)
m30001| Fri Feb 22 11:44:41.141 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:41-51275a29d3fea2f11001b5a7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:35501", time: new Date(1361533481140), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:44:41.141 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:44:41.141 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:44:41.141 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:44:41.141 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:44:41.141 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:44:41.141 [cleanupOldData-51275a29d3fea2f11001b5a8] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: MaxKey }, # cursors remaining: 0
m30001| Fri Feb 22 11:44:41.141 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361533474:4411' unlocked.
m30001| Fri Feb 22 11:44:41.141 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:41-51275a29d3fea2f11001b5a9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:35501", time: new Date(1361533481141), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 53, step4 of 6: 6171, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:44:41.141 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 1073741824, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:20 r:52615 w:7 reslen:37 6238ms m30999| Fri Feb 22 11:44:41.141 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:44:41.142 [conn1] loading chunk manager for collection test.foo using old chunk manager w/ version 1|0||51275a224bf5e77a20960cf8 and 1 chunks m30999| Fri Feb 22 11:44:41.142 [conn1] major version query from 1|0||51275a224bf5e77a20960cf8 and over 1 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 1000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 1000|0 } } ] } m30999| Fri Feb 22 11:44:41.142 [conn1] loaded 1 chunks into new chunk manager for test.foo with version 2|0||51275a224bf5e77a20960cf8 m30999| Fri Feb 22 11:44:41.142 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|0||51275a224bf5e77a20960cf8 based on: 1|0||51275a224bf5e77a20960cf8 m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ 
config:localhost:30000] m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:44:41.143 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('51275a224bf5e77a20960cf8'), ns: "test.foo", min: { _id: MinKey }, max: { _id: MaxKey }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.changelog", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] initialized query (lazily) on shard 
config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:44:41.144 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:27-51275a1b4bf5e77a20960cf3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533467696), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:27-51275a1b4bf5e77a20960cf3", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "N/A", "time" : ISODate("2013-02-22T11:44:27.696Z"), "what" : "starting upgrade of config database", "ns" : "config.version", "details" : { "from" : 0, "to" : 4 } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:27-51275a1b4bf5e77a20960cf5", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "N/A", "time" : ISODate("2013-02-22T11:44:27.699Z"), "what" : "finished upgrade of config database", "ns" : "config.version", "details" : { "from" : 0, "to" : 4 } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:34-51275a22d3fea2f11001b5a3", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:35501", "time" : ISODate("2013-02-22T11:44:34.848Z"), "what" : 
"moveChunk.start", "ns" : "test.foo", "details" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "from" : "shard0001", "to" : "shard0000" } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:34-51275a22d3fea2f11001b5a4", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:35501", "time" : ISODate("2013-02-22T11:44:34.901Z"), "what" : "moveChunk.from", "ns" : "test.foo", "details" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "step1 of 6" : 0, "step2 of 6" : 3, "note" : "aborted" } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:34-51275a22d3fea2f11001b5a6", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:35501", "time" : ISODate("2013-02-22T11:44:34.904Z"), "what" : "moveChunk.start", "ns" : "test.foo", "details" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "from" : "shard0001", "to" : "shard0000" } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:41-51275a29dc35fdc25e177e85", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : ":27017", "time" : ISODate("2013-02-22T11:44:41.134Z"), "what" : "moveChunk.to", "ns" : "test.foo", "details" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "step1 of 5" : 6, "step2 of 5" : 0, "step3 of 5" : 5832, "step4 of 5" : 0, "step5 of 5" : 337 } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:41-51275a29d3fea2f11001b5a7", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:35501", "time" : ISODate("2013-02-22T11:44:41.140Z"), "what" : "moveChunk.commit", "ns" : "test.foo", "details" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "from" : "shard0001", "to" : "shard0000" } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:44:41-51275a29d3fea2f11001b5a9", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:35501", "time" : 
ISODate("2013-02-22T11:44:41.141Z"), "what" : "moveChunk.from", "ns" : "test.foo", "details" : { "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "step1 of 6" : 0, "step2 of 6" : 1, "step3 of 6" : 53, "step4 of 6" : 6171, "step5 of 6" : 11, "step6 of 6" : 0 } } m30999| Fri Feb 22 11:44:41.152 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 11:44:41.161 [cleanupOldData-51275a29d3fea2f11001b5a8] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: MaxKey } m30001| Fri Feb 22 11:44:41.161 [cleanupOldData-51275a29d3fea2f11001b5a8] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: MaxKey } m30001| Fri Feb 22 11:44:41.170 [conn4] end connection 127.0.0.1:35501 (4 connections now open) m30000| Fri Feb 22 11:44:41.171 [conn7] end connection 127.0.0.1:46337 (11 connections now open) m30000| Fri Feb 22 11:44:41.171 [conn6] end connection 127.0.0.1:63378 (11 connections now open) m30001| Fri Feb 22 11:44:41.171 [conn3] end connection 127.0.0.1:39849 (4 connections now open) m30000| Fri Feb 22 11:44:41.171 [conn3] end connection 127.0.0.1:40415 (11 connections now open) m30000| Fri Feb 22 11:44:41.171 [conn5] end connection 127.0.0.1:51072 (11 connections now open) Fri Feb 22 11:44:42.152 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 11:44:42.152 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 11:44:42.152 [interruptThread] now exiting m30000| Fri Feb 22 11:44:42.152 dbexit: m30000| Fri Feb 22 11:44:42.152 [interruptThread] shutdown: going to close listening sockets... 
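The moveChunk.from changelog entry above breaks the donor side of the migration into six timed steps (values in milliseconds). A minimal sketch of summing those step timings in plain JavaScript; `totalMoveChunkMillis` is a hypothetical helper, and the field values are copied from the entry above:

```javascript
// Sum the "stepN of M" fields of a moveChunk changelog "details" document.
// (hypothetical helper; not part of MongoDB itself)
function totalMoveChunkMillis(details) {
  return Object.keys(details)
    .filter(function (k) { return /^step\d+ of \d+$/.test(k); })
    .reduce(function (sum, k) { return sum + details[k]; }, 0);
}

// Step values copied from the "moveChunk.from" entry logged above.
var fromDetails = {
  "step1 of 6": 0, "step2 of 6": 1, "step3 of 6": 53,
  "step4 of 6": 6171, "step5 of 6": 11, "step6 of 6": 0
};

console.log(totalMoveChunkMillis(fromDetails)); // 0+1+53+6171+11+0 = 6236
```

Step 4 (the catch-up/commit wait) dominates at 6171 ms, consistent with the ~6 s gap between the moveChunk.start and moveChunk.commit timestamps above.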
m30000| Fri Feb 22 11:44:42.152 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 11:44:42.152 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 11:44:42.152 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 11:44:42.152 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 11:44:42.152 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 11:44:42.152 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 11:44:42.153 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 11:44:42.153 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 11:44:42.153 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 11:44:42.153 [conn1] end connection 127.0.0.1:47988 (7 connections now open) m30000| Fri Feb 22 11:44:42.153 [conn2] end connection 127.0.0.1:49130 (7 connections now open) m30000| Fri Feb 22 11:44:42.153 [conn8] end connection 127.0.0.1:34497 (7 connections now open) m30000| Fri Feb 22 11:44:42.153 [conn9] end connection 127.0.0.1:35679 (7 connections now open) m30000| Fri Feb 22 11:44:42.153 [conn10] end connection 127.0.0.1:46813 (7 connections now open) m30001| Fri Feb 22 11:44:42.153 [conn5] end connection 127.0.0.1:61925 (2 connections now open) m30000| Fri Feb 22 11:44:42.153 [conn12] end connection 127.0.0.1:34283 (7 connections now open) m30000| Fri Feb 22 11:44:42.153 [conn11] end connection 127.0.0.1:41861 (5 connections now open) m30000| Fri Feb 22 11:44:43.236 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 11:44:43.273 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 11:44:43.273 [interruptThread] journalCleanup... 
m30000| Fri Feb 22 11:44:43.273 [interruptThread] removeJournalFiles m30000| Fri Feb 22 11:44:43.274 dbexit: really exiting now Fri Feb 22 11:44:44.152 shell: stopped mongo program on port 30000 m30001| Fri Feb 22 11:44:44.152 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:44:44.152 [interruptThread] now exiting m30001| Fri Feb 22 11:44:44.152 dbexit: m30001| Fri Feb 22 11:44:44.153 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 11:44:44.153 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 11:44:44.153 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 11:44:44.153 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 11:44:44.153 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:44:44.153 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:44:44.153 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 11:44:44.153 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:44:44.153 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:44:44.153 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 11:44:44.153 [conn1] end connection 127.0.0.1:42270 (1 connection now open) m30001| Fri Feb 22 11:44:45.364 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:44:45.497 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:44:45.497 [interruptThread] journalCleanup... 
m30001| Fri Feb 22 11:44:45.497 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:44:45.497 dbexit: really exiting now Fri Feb 22 11:44:46.153 shell: stopped mongo program on port 30001 *** ShardingTest large_chunk completed successfully in 19.138 seconds *** Fri Feb 22 11:44:46.367 [conn88] end connection 127.0.0.1:45328 (0 connections now open) 19.4593 seconds Fri Feb 22 11:44:46.388 [initandlisten] connection accepted from 127.0.0.1:36841 #89 (1 connection now open) Fri Feb 22 11:44:46.389 [conn89] end connection 127.0.0.1:36841 (0 connections now open) ******************************************* Test : logpath.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/logpath.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/logpath.js";TestData.testFile = "logpath.js";TestData.testName = "logpath";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:44:46 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:44:46.562 [initandlisten] connection accepted from 127.0.0.1:57862 #90 (1 connection now open) null ------ Creating directories ------ Cleaning up old files ------ Start mongod with logpath set to new file Resetting db path '/data/db/logpath' Fri Feb 22 11:44:46.575 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31000 --dbpath /data/db/logpath --logpath /data/db/logpathfiles/logdir/logpath_token1 --setParameter enableTestCommands=1 m31000| all output going to: /data/db/logpathfiles/logdir/logpath_token1 Fri Feb 22 11:44:47.779 shell: stopped mongo program on port 31000 ------ Start 
mongod with logpath set to existing file Resetting db path '/data/db/logpath' Fri Feb 22 11:44:47.783 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31001 --dbpath /data/db/logpath --logpath /data/db/logpathfiles/logdir/logpath_token1 --setParameter enableTestCommands=1 m31001| all output going to: /data/db/logpathfiles/logdir/logpath_token1 m31001| log file [/data/db/logpathfiles/logdir/logpath_token1] exists; copied to temporary file [/data/db/logpathfiles/logdir/logpath_token1.2013-02-22T11-44-47] Fri Feb 22 11:44:48.986 shell: stopped mongo program on port 31001 Fri Feb 22 11:44:48.988 [conn90] end connection 127.0.0.1:57862 (0 connections now open) 2614.7010 ms Fri Feb 22 11:44:49.005 [initandlisten] connection accepted from 127.0.0.1:35645 #91 (1 connection now open) Fri Feb 22 11:44:49.005 [conn91] end connection 127.0.0.1:35645 (0 connections now open) ******************************************* Test : memory.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/memory.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/memory.js";TestData.testFile = "memory.js";TestData.testName = "memory";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:44:49 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:44:49.199 [initandlisten] connection accepted from 127.0.0.1:50495 #92 (1 connection now open) null Processing memoryTest0 Processing memoryTest1000 Processing memoryTest2000 Processing memoryTest3000 Processing memoryTest4000 Processing memoryTest5000 Processing memoryTest6000 Processing 
memoryTest7000 Processing memoryTest8000 Fri Feb 22 11:47:09.767 [conn92] command test.$cmd command: { $eval: function (col) { for (var i = 0; i < 100; ++i) {db[col + "_" + i].find();} }, args: [ "memoryTest8256" ] } ntoreturn:1 keyUpdates:0 locks(micros) W:131491 reslen:45 131ms Fri Feb 22 11:47:17.205 [conn92] command test.$cmd command: { $eval: function (col) { for (var i = 0; i < 100; ++i) {db[col + "_" + i].find();} }, args: [ "memoryTest8676" ] } ntoreturn:1 keyUpdates:0 locks(micros) W:124354 reslen:45 124ms Processing memoryTest9000 Fri Feb 22 11:47:43.079 [conn92] build index test.system.js { _id: 1 } Fri Feb 22 11:47:43.082 [conn92] build index done. scanned 0 total records. 0.002 secs Fri Feb 22 11:47:45.372 [conn92] JavaScript execution failed -- v8 is out of memory Fri Feb 22 11:47:45.450 [conn92] Clearing all idle JS contexts due to out of memory Fri Feb 22 11:47:45.450 [conn92] command test.$cmd command: { $eval: "f1(100000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:2366896 reslen:121 2366ms Fri Feb 22 11:47:47.761 [conn92] JavaScript execution failed -- v8 is out of memory Fri Feb 22 11:47:47.815 [conn92] Clearing all idle JS contexts due to out of memory Fri Feb 22 11:47:47.815 [conn92] command test.$cmd command: { $eval: "f1(1000000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:2317642 reslen:121 2317ms Fri Feb 22 11:47:48.946 [conn92] dbeval slow, time: 1102ms test Fri Feb 22 11:47:48.946 [conn92] f1(1000000) Fri Feb 22 11:47:48.946 [conn92] command test.$cmd command: { $eval: "f1(1000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1130097 reslen:45 1130ms Fri Feb 22 11:47:50.382 [conn92] dbeval slow, time: 1434ms test Fri Feb 22 11:47:50.382 [conn92] f1(1000000) Fri Feb 22 11:47:50.382 [conn92] command test.$cmd command: { $eval: "f1(1000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1435149 reslen:45 1435ms Fri Feb 22 11:47:51.555 [conn92] dbeval slow, time: 1173ms test Fri Feb 22 11:47:51.556 [conn92] f1(1000000) Fri Feb 22 
11:47:51.556 [conn92] command test.$cmd command: { $eval: "f1(1000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1173284 reslen:45 1173ms Fri Feb 22 11:47:53.470 [conn92] JavaScript execution failed -- v8 is out of memory Fri Feb 22 11:47:53.528 [conn92] Clearing all idle JS contexts due to out of memory Fri Feb 22 11:47:53.528 [conn92] command test.$cmd command: { $eval: "f1(100000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1958170 reslen:121 1971ms Fri Feb 22 11:47:54.588 [conn92] dbeval slow, time: 1027ms test Fri Feb 22 11:47:54.588 [conn92] f1(1000000) Fri Feb 22 11:47:54.588 [conn92] command test.$cmd command: { $eval: "f1(1000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1028020 reslen:45 1028ms Fri Feb 22 11:47:55.674 [conn92] dbeval slow, time: 1085ms test Fri Feb 22 11:47:55.674 [conn92] f1(1000000) Fri Feb 22 11:47:55.674 [conn92] command test.$cmd command: { $eval: "f1(1000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1085767 reslen:45 1085ms Fri Feb 22 11:47:56.727 [conn92] dbeval slow, time: 1052ms test Fri Feb 22 11:47:56.727 [conn92] f1(1000000) Fri Feb 22 11:47:56.727 [conn92] command test.$cmd command: { $eval: "f1(1000000)" } ntoreturn:1 keyUpdates:0 locks(micros) W:1053004 reslen:45 1053ms Fri Feb 22 11:47:56.739 [conn92] CMD: drop test.memoryTest Fri Feb 22 11:47:56.740 [conn92] build index test.memoryTest { _id: 1 } Fri Feb 22 11:47:56.746 [conn92] build index done. scanned 0 total records. 
0.005 secs Fri Feb 22 11:47:56.908 [conn92] query test.memoryTest query: { $where: "var arr = []; for (var i = 0; i < 1000000; ++i) {arr.push(0);}" } ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:161708 nreturned:0 reslen:20 161ms Fri Feb 22 11:47:57.418 [conn92] JavaScript execution failed -- v8 is out of memory Fri Feb 22 11:47:57.441 [conn92] Clearing all idle JS contexts due to out of memory Fri Feb 22 11:47:57.492 [conn92] assertion 16722 JavaScript execution failed -- v8 is out of memory ns:test.memoryTest query:{ $where: "var arr = []; for (var i = 0; i < 1000000000; ++i) {arr.push(0);}" } Fri Feb 22 11:47:57.492 [conn92] ntoskip:0 ntoreturn:-1 Fri Feb 22 11:47:57.492 [conn92] problem detected during query over test.memoryTest : { $err: "JavaScript execution failed -- v8 is out of memory", code: 16722 } Fri Feb 22 11:47:57.492 [conn92] query test.memoryTest query: { $where: "var arr = []; for (var i = 0; i < 1000000000; ++i) {arr.push(0);}" } ntoreturn:1 keyUpdates:0 exception: JavaScript execution failed -- v8 is out of memory code:16722 locks(micros) r:583728 reslen:96 583ms Fri Feb 22 11:47:57.669 [conn92] query test.memoryTest query: { $where: "var arr = []; for (var i = 0; i < 1000000; ++i) {arr.push(0);}" } ntoreturn:1 ntoskip:0 nscanned:1 keyUpdates:0 locks(micros) r:175587 nreturned:0 reslen:20 175ms Fri Feb 22 11:47:57.694 [conn92] end connection 127.0.0.1:50495 (0 connections now open) 3.1452 minutes Fri Feb 22 11:47:57.720 [initandlisten] connection accepted from 127.0.0.1:60269 #93 (1 connection now open) Fri Feb 22 11:47:57.721 [conn93] end connection 127.0.0.1:60269 (0 connections now open) ******************************************* Test : moveprimary-replset.js ... 
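The memory.js run above issues `$where` queries whose predicate is a JavaScript string evaluated once per scanned document; the 10^9-iteration variant is what exhausts the V8 heap and produces the "v8 is out of memory" assertions. A rough plain-JavaScript stand-in for that per-document evaluation; `whereFilter` is a hypothetical helper, not the server's implementation:

```javascript
// Evaluate a $where-style predicate string against each document,
// with the document bound as "this" (hypothetical stand-in).
function whereFilter(docs, predicateSrc) {
  var predicate = new Function("return (" + predicateSrc + ");");
  return docs.filter(function (doc) { return predicate.call(doc); });
}

var docs = [{ a: 1 }, { a: 2 }, { a: 3 }];
console.log(whereFilter(docs, "this.a > 1").length); // 2
```

Because the predicate runs inside the JS engine for every scanned document, an unbounded allocation in the predicate body (like the `arr.push(0)` loop in the query above) can run the engine out of memory even though the collection itself is tiny.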
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/moveprimary-replset.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/moveprimary-replset.js";TestData.testFile = "moveprimary-replset.js";TestData.testName = "moveprimary-replset";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:47:57 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:47:57.932 [initandlisten] connection accepted from 127.0.0.1:49550 #94 (1 connection now open)
null
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31000 ] 31000 number
{
  "useHostName" : true,
  "oplogSize" : 40,
  "keyFile" : undefined,
  "port" : 31000,
  "noprealloc" : "",
  "smallfiles" : "",
  "rest" : "",
  "replSet" : "repset1",
  "dbpath" : "$set-$node",
  "restart" : undefined,
  "pathOpts" : {
    "node" : 0,
    "set" : "repset1"
  }
}
ReplSetTest Starting....
Resetting db path '/data/db/repset1-0' Fri Feb 22 11:47:57.968 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet repset1 --dbpath /data/db/repset1-0 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Fri Feb 22 11:47:58.116 [initandlisten] MongoDB starting : pid=103 port=31000 dbpath=/data/db/repset1-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 11:47:58.116 [initandlisten] m31000| Fri Feb 22 11:47:58.116 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 11:47:58.116 [initandlisten] ** uses to detect impending page faults. m31000| Fri Feb 22 11:47:58.116 [initandlisten] ** This may result in slower performance for certain use cases m31000| Fri Feb 22 11:47:58.116 [initandlisten] m31000| Fri Feb 22 11:47:58.117 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31000| Fri Feb 22 11:47:58.117 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31000| Fri Feb 22 11:47:58.117 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31000| Fri Feb 22 11:47:58.117 [initandlisten] allocator: system m31000| Fri Feb 22 11:47:58.117 [initandlisten] options: { dbpath: "/data/db/repset1-0", noprealloc: true, oplogSize: 40, port: 31000, replSet: "repset1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Fri Feb 22 11:47:58.117 [initandlisten] journal dir=/data/db/repset1-0/journal m31000| Fri Feb 22 11:47:58.118 [initandlisten] recover : no journal files present, no recovery needed m31000| Fri Feb 22 11:47:58.147 [FileAllocator] allocating new datafile /data/db/repset1-0/local.ns, filling with zeroes... 
m31000| Fri Feb 22 11:47:58.147 [FileAllocator] creating directory /data/db/repset1-0/_tmp m31000| Fri Feb 22 11:47:58.147 [FileAllocator] done allocating datafile /data/db/repset1-0/local.ns, size: 16MB, took 0 secs m31000| Fri Feb 22 11:47:58.148 [FileAllocator] allocating new datafile /data/db/repset1-0/local.0, filling with zeroes... m31000| Fri Feb 22 11:47:58.148 [FileAllocator] done allocating datafile /data/db/repset1-0/local.0, size: 16MB, took 0 secs m31000| Fri Feb 22 11:47:58.155 [websvr] admin web console waiting for connections on port 32000 m31000| Fri Feb 22 11:47:58.155 [initandlisten] waiting for connections on port 31000 m31000| Fri Feb 22 11:47:58.157 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31000| Fri Feb 22 11:47:58.157 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31000| Fri Feb 22 11:47:58.172 [initandlisten] connection accepted from 127.0.0.1:40940 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31000 ] { "replSetInitiate" : { "_id" : "repset1", "members" : [ { "_id" : 0, "host" : "127.0.0.1:31000" } ] } } m31000| Fri Feb 22 11:47:58.186 [conn1] replSet replSetInitiate admin command received from client m31000| Fri Feb 22 11:47:58.186 [conn1] replSet replSetInitiate config object parses ok, 1 members specified m31000| Fri Feb 22 11:47:58.187 [initandlisten] connection accepted from 127.0.0.1:63232 #2 (2 connections now open) m31000| Fri Feb 22 11:47:58.187 [conn1] replSet replSetInitiate all members seem up m31000| Fri Feb 22 11:47:58.187 [conn1] ****** m31000| Fri Feb 22 11:47:58.187 [conn1] creating replication oplog of size: 40MB... m31000| Fri Feb 22 11:47:58.188 [FileAllocator] allocating new datafile /data/db/repset1-0/local.1, filling with zeroes... 
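The `replSetInitiate` document above seeds a one-member set; further down in this log the test grows it with `replSetReconfig` by appending to `members` and bumping `version`. A sketch of that config evolution with plain objects and a hypothetical `addMember` helper, no live mongod assumed:

```javascript
// Config sent with replSetInitiate above.
var initiateConfig = {
  _id: "repset1",
  members: [{ _id: 0, host: "127.0.0.1:31000" }]
};

// Build the next config document: append a member with a fresh _id and
// increment the version (an initiated config starts at version 1).
// (hypothetical helper mirroring the replSetReconfig document in this log)
function addMember(config, host) {
  var members = config.members.concat([{ _id: config.members.length, host: host }]);
  return { _id: config._id, members: members, version: (config.version || 1) + 1 };
}

var reconfig = addMember(initiateConfig, "127.0.0.1:31001");
// reconfig now matches the { replSetReconfig: ... } document logged below:
// two members, version 2.
console.log(JSON.stringify(reconfig));
```

The version bump is what lets the existing member recognize the change as an "additive change to configuration", as the m31000 log line notes after the reconfig.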
m31000| Fri Feb 22 11:47:58.188 [FileAllocator] done allocating datafile /data/db/repset1-0/local.1, size: 64MB, took 0 secs m31000| Fri Feb 22 11:47:58.197 [conn1] ****** m31000| Fri Feb 22 11:47:58.197 [conn1] replSet info saving a newer config version to local.system.replset m31000| Fri Feb 22 11:47:58.202 [conn1] replSet saveConfigLocally done m31000| Fri Feb 22 11:47:58.202 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31000| Fri Feb 22 11:47:58.219 [conn2] end connection 127.0.0.1:63232 (1 connection now open) m31000| Fri Feb 22 11:48:08.157 [rsStart] replSet I am 127.0.0.1:31000 m31000| Fri Feb 22 11:48:08.157 [rsStart] replSet STARTUP2 m31000| Fri Feb 22 11:48:09.158 [rsSync] replSet SECONDARY m31000| Fri Feb 22 11:48:09.158 [rsMgr] replSet info electSelf 0 m31000| Fri Feb 22 11:48:10.158 [rsMgr] replSet PRIMARY ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361533678000, "i" : 11 } ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361533678000, "i" : 11 } ReplSetTest awaitReplication: finished: all 0 secondaries synced at timestamp { "t" : 1361533678000, "i" : 11 } m31000| Fri Feb 22 11:48:10.213 [FileAllocator] allocating new datafile /data/db/repset1-0/test.ns, filling with zeroes... m31000| Fri Feb 22 11:48:10.213 [FileAllocator] done allocating datafile /data/db/repset1-0/test.ns, size: 16MB, took 0 secs m31000| Fri Feb 22 11:48:10.213 [FileAllocator] allocating new datafile /data/db/repset1-0/test.0, filling with zeroes... m31000| Fri Feb 22 11:48:10.214 [FileAllocator] done allocating datafile /data/db/repset1-0/test.0, size: 16MB, took 0 secs m31000| Fri Feb 22 11:48:10.217 [conn1] build index test.foo { _id: 1 } m31000| Fri Feb 22 11:48:10.219 [conn1] build index done. scanned 0 total records. 
0.001 secs ReplSetTest Next port: 31001 [ 31000, 31001 ] [ connection to bs-smartos-x86-64-1.10gen.cc:31000 ] ReplSetTest nextId:1 ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31000, 31001 ] 31001 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31001, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "repset1", "dbpath" : "$set-$node", "restart" : undefined, "pathOpts" : { "node" : 1, "set" : "repset1" } } ReplSetTest Starting.... Resetting db path '/data/db/repset1-1' Fri Feb 22 11:48:10.750 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet repset1 --dbpath /data/db/repset1-1 --setParameter enableTestCommands=1 m31001| note: noprealloc may hurt performance in many applications m31001| Fri Feb 22 11:48:10.841 [initandlisten] MongoDB starting : pid=161 port=31001 dbpath=/data/db/repset1-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31001| Fri Feb 22 11:48:10.841 [initandlisten] m31001| Fri Feb 22 11:48:10.841 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31001| Fri Feb 22 11:48:10.841 [initandlisten] ** uses to detect impending page faults. 
m31001| Fri Feb 22 11:48:10.841 [initandlisten] ** This may result in slower performance for certain use cases m31001| Fri Feb 22 11:48:10.841 [initandlisten] m31001| Fri Feb 22 11:48:10.841 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31001| Fri Feb 22 11:48:10.841 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31001| Fri Feb 22 11:48:10.841 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31001| Fri Feb 22 11:48:10.841 [initandlisten] allocator: system m31001| Fri Feb 22 11:48:10.841 [initandlisten] options: { dbpath: "/data/db/repset1-1", noprealloc: true, oplogSize: 40, port: 31001, replSet: "repset1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31001| Fri Feb 22 11:48:10.841 [initandlisten] journal dir=/data/db/repset1-1/journal m31001| Fri Feb 22 11:48:10.842 [initandlisten] recover : no journal files present, no recovery needed m31001| Fri Feb 22 11:48:10.856 [FileAllocator] allocating new datafile /data/db/repset1-1/local.ns, filling with zeroes... m31001| Fri Feb 22 11:48:10.856 [FileAllocator] creating directory /data/db/repset1-1/_tmp m31001| Fri Feb 22 11:48:10.856 [FileAllocator] done allocating datafile /data/db/repset1-1/local.ns, size: 16MB, took 0 secs m31001| Fri Feb 22 11:48:10.856 [FileAllocator] allocating new datafile /data/db/repset1-1/local.0, filling with zeroes... 
m31001| Fri Feb 22 11:48:10.856 [FileAllocator] done allocating datafile /data/db/repset1-1/local.0, size: 16MB, took 0 secs m31001| Fri Feb 22 11:48:10.860 [initandlisten] waiting for connections on port 31001 m31001| Fri Feb 22 11:48:10.860 [websvr] admin web console waiting for connections on port 32001 m31001| Fri Feb 22 11:48:10.863 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31001| Fri Feb 22 11:48:10.863 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31001| Fri Feb 22 11:48:10.952 [initandlisten] connection accepted from 127.0.0.1:46730 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001 ] { "replSetReconfig" : { "_id" : "repset1", "members" : [ { "_id" : 0, "host" : "127.0.0.1:31000" }, { "_id" : 1, "host" : "127.0.0.1:31001" } ], "version" : 2 } } m31000| Fri Feb 22 11:48:10.954 [conn1] replSet replSetReconfig config object parses ok, 2 members specified m31001| Fri Feb 22 11:48:10.954 [initandlisten] connection accepted from 127.0.0.1:62260 #2 (2 connections now open) m31000| Fri Feb 22 11:48:10.955 [conn1] replSet replSetReconfig [2] m31000| Fri Feb 22 11:48:10.955 [conn1] replSet info saving a newer config version to local.system.replset m31000| Fri Feb 22 11:48:10.965 [conn1] replSet saveConfigLocally done m31000| Fri Feb 22 11:48:10.965 [conn1] replSet info : additive change to configuration m31000| Fri Feb 22 11:48:10.965 [conn1] replSet replSetReconfig new config saved locally m31000| Fri Feb 22 11:48:10.966 [rsHealthPoll] replSet member 127.0.0.1:31001 is up { "ok" : 1 } m31000| Fri Feb 22 11:48:10.966 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31001| Fri Feb 22 11:48:20.863 [rsStart] trying to contact 127.0.0.1:31000 m31000| Fri Feb 22 11:48:20.864 [initandlisten] connection accepted from 
127.0.0.1:48826 #3 (2 connections now open) m31001| Fri Feb 22 11:48:20.864 [initandlisten] connection accepted from 127.0.0.1:59186 #3 (3 connections now open) m31001| Fri Feb 22 11:48:20.865 [rsStart] replSet I am 127.0.0.1:31001 m31001| Fri Feb 22 11:48:20.865 [rsStart] replSet got config version 2 from a remote, saving locally m31001| Fri Feb 22 11:48:20.865 [rsStart] replSet info saving a newer config version to local.system.replset m31001| Fri Feb 22 11:48:20.868 [rsStart] replSet saveConfigLocally done m31001| Fri Feb 22 11:48:20.871 [rsStart] replSet STARTUP2 m31001| Fri Feb 22 11:48:20.871 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote m31001| Fri Feb 22 11:48:20.871 [rsSync] ****** m31001| Fri Feb 22 11:48:20.871 [rsSync] creating replication oplog of size: 40MB... m31001| Fri Feb 22 11:48:20.871 [FileAllocator] allocating new datafile /data/db/repset1-1/local.1, filling with zeroes... m31001| Fri Feb 22 11:48:20.871 [FileAllocator] done allocating datafile /data/db/repset1-1/local.1, size: 64MB, took 0 secs m31001| Fri Feb 22 11:48:20.878 [conn3] end connection 127.0.0.1:59186 (2 connections now open) m31001| Fri Feb 22 11:48:20.888 [rsSync] ****** m31001| Fri Feb 22 11:48:20.888 [rsSync] replSet initial sync pending m31001| Fri Feb 22 11:48:20.888 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31000| Fri Feb 22 11:48:20.967 [rsHealthPoll] replset info 127.0.0.1:31001 thinks that we are down m31000| Fri Feb 22 11:48:20.967 [rsHealthPoll] replSet member 127.0.0.1:31001 is now in state STARTUP2 m31001| Fri Feb 22 11:48:22.865 [rsHealthPoll] replSet member 127.0.0.1:31000 is up m31001| Fri Feb 22 11:48:22.865 [rsHealthPoll] replSet member 127.0.0.1:31000 is now in state PRIMARY m31001| Fri Feb 22 11:48:36.888 [rsSync] replSet initial sync pending m31001| Fri Feb 22 11:48:36.888 [rsSync] replSet syncing to: 127.0.0.1:31000 m31000| Fri Feb 22 11:48:36.889 
[initandlisten] connection accepted from 127.0.0.1:47099 #4 (3 connections now open) m31001| Fri Feb 22 11:48:36.898 [rsSync] build index local.me { _id: 1 } m31001| Fri Feb 22 11:48:36.902 [rsSync] build index done. scanned 0 total records. 0.004 secs m31001| Fri Feb 22 11:48:36.904 [rsSync] build index local.replset.minvalid { _id: 1 } m31001| Fri Feb 22 11:48:36.906 [rsSync] build index done. scanned 0 total records. 0.001 secs m31001| Fri Feb 22 11:48:36.906 [rsSync] replSet initial sync drop all databases m31001| Fri Feb 22 11:48:36.906 [rsSync] dropAllDatabasesExceptLocal 1 m31001| Fri Feb 22 11:48:36.906 [rsSync] replSet initial sync clone all databases m31001| Fri Feb 22 11:48:36.907 [rsSync] replSet initial sync cloning db: test m31000| Fri Feb 22 11:48:36.917 [initandlisten] connection accepted from 127.0.0.1:49719 #5 (4 connections now open) m31001| Fri Feb 22 11:48:36.918 [FileAllocator] allocating new datafile /data/db/repset1-1/test.ns, filling with zeroes... m31001| Fri Feb 22 11:48:36.918 [FileAllocator] done allocating datafile /data/db/repset1-1/test.ns, size: 16MB, took 0 secs m31001| Fri Feb 22 11:48:36.919 [FileAllocator] allocating new datafile /data/db/repset1-1/test.0, filling with zeroes... m31001| Fri Feb 22 11:48:36.919 [FileAllocator] done allocating datafile /data/db/repset1-1/test.0, size: 16MB, took 0 secs m31001| Fri Feb 22 11:48:37.130 [rsSync] build index test.foo { _id: 1 } m31001| Fri Feb 22 11:48:37.177 [rsSync] fastBuildIndex dupsToDrop:0 m31001| Fri Feb 22 11:48:37.178 [rsSync] build index done. scanned 10000 total records. 
0.047 secs m31001| Fri Feb 22 11:48:37.178 [rsSync] replSet initial sync data copy, starting syncup m31001| Fri Feb 22 11:48:37.178 [rsSync] oplog sync 1 of 3 m31001| Fri Feb 22 11:48:37.179 [rsSync] oplog sync 2 of 3 m31001| Fri Feb 22 11:48:37.179 [rsSync] replSet initial sync building indexes m31001| Fri Feb 22 11:48:37.179 [rsSync] replSet initial sync cloning indexes for : test m31001| Fri Feb 22 11:48:37.189 [rsSync] oplog sync 3 of 3 m31000| Fri Feb 22 11:48:37.189 [conn5] end connection 127.0.0.1:49719 (3 connections now open) m31001| Fri Feb 22 11:48:37.189 [rsSync] replSet initial sync finishing up m31001| Fri Feb 22 11:48:37.207 [rsSync] replSet set minValid=51275afa:2711 m31001| Fri Feb 22 11:48:37.223 [rsSync] replSet RECOVERING m31001| Fri Feb 22 11:48:37.223 [rsSync] replSet initial sync done m31000| Fri Feb 22 11:48:37.223 [conn4] end connection 127.0.0.1:47099 (2 connections now open) m31001| Fri Feb 22 11:48:37.872 [rsBackgroundSync] replSet syncing to: 127.0.0.1:31000 m31000| Fri Feb 22 11:48:37.872 [initandlisten] connection accepted from 127.0.0.1:60918 #6 (3 connections now open) m31001| Fri Feb 22 11:48:38.223 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.1:31000 m31000| Fri Feb 22 11:48:38.224 [initandlisten] connection accepted from 127.0.0.1:63573 #7 (4 connections now open) m31001| Fri Feb 22 11:48:38.969 [conn2] end connection 127.0.0.1:62260 (1 connection now open) m31001| Fri Feb 22 11:48:38.969 [initandlisten] connection accepted from 127.0.0.1:41788 #4 (2 connections now open) m31000| Fri Feb 22 11:48:38.970 [rsHealthPoll] replSet member 127.0.0.1:31001 is now in state RECOVERING m31001| Fri Feb 22 11:48:39.224 [rsSync] replSet SECONDARY m31000| Fri Feb 22 11:48:39.232 [slaveTracking] build index local.slaves { _id: 1 } m31000| Fri Feb 22 11:48:39.235 [slaveTracking] build index done. scanned 0 total records. 
0.002 secs ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361533690000, "i" : 10001 } ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361533690000, "i" : 10001 } ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31001 ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31001, is synced ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp { "t" : 1361533690000, "i" : 10001 } Fri Feb 22 11:48:39.383 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31002 --dbpath /data/db/config31002 --noprealloc --smallfiles --oplogSize 8 --nohttpinterface --rest --setParameter enableTestCommands=1 m31002| note: noprealloc may hurt performance in many applications m31002| Fri Feb 22 11:48:39.472 [initandlisten] MongoDB starting : pid=270 port=31002 dbpath=/data/db/config31002 64-bit host=bs-smartos-x86-64-1.10gen.cc m31002| Fri Feb 22 11:48:39.472 [initandlisten] m31002| Fri Feb 22 11:48:39.472 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31002| Fri Feb 22 11:48:39.472 [initandlisten] ** uses to detect impending page faults. 
m31002| Fri Feb 22 11:48:39.472 [initandlisten] ** This may result in slower performance for certain use cases
m31002| Fri Feb 22 11:48:39.472 [initandlisten]
m31002| Fri Feb 22 11:48:39.472 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31002| Fri Feb 22 11:48:39.472 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31002| Fri Feb 22 11:48:39.472 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31002| Fri Feb 22 11:48:39.472 [initandlisten] allocator: system
m31002| Fri Feb 22 11:48:39.472 [initandlisten] options: { dbpath: "/data/db/config31002", nohttpinterface: true, noprealloc: true, oplogSize: 8, port: 31002, rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31002| Fri Feb 22 11:48:39.473 [initandlisten] journal dir=/data/db/config31002/journal
m31002| Fri Feb 22 11:48:39.473 [initandlisten] recover : no journal files present, no recovery needed
m31002| Fri Feb 22 11:48:39.489 [FileAllocator] allocating new datafile /data/db/config31002/local.ns, filling with zeroes...
m31002| Fri Feb 22 11:48:39.489 [FileAllocator] creating directory /data/db/config31002/_tmp
m31002| Fri Feb 22 11:48:39.490 [FileAllocator] done allocating datafile /data/db/config31002/local.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:48:39.490 [FileAllocator] allocating new datafile /data/db/config31002/local.0, filling with zeroes...
m31002| Fri Feb 22 11:48:39.490 [FileAllocator] done allocating datafile /data/db/config31002/local.0, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:48:39.493 [initandlisten] waiting for connections on port 31002
m31002| Fri Feb 22 11:48:39.585 [initandlisten] connection accepted from 127.0.0.1:43950 #1 (1 connection now open)
Fri Feb 22 11:48:39.588 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31003 --dbpath /data/db/config31003 --noprealloc --smallfiles --oplogSize 8 --nohttpinterface --rest --setParameter enableTestCommands=1
m31003| note: noprealloc may hurt performance in many applications
m31003| Fri Feb 22 11:48:39.677 [initandlisten] MongoDB starting : pid=271 port=31003 dbpath=/data/db/config31003 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31003| Fri Feb 22 11:48:39.677 [initandlisten]
m31003| Fri Feb 22 11:48:39.677 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31003| Fri Feb 22 11:48:39.677 [initandlisten] ** uses to detect impending page faults.
m31003| Fri Feb 22 11:48:39.677 [initandlisten] ** This may result in slower performance for certain use cases
m31003| Fri Feb 22 11:48:39.677 [initandlisten]
m31003| Fri Feb 22 11:48:39.677 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31003| Fri Feb 22 11:48:39.677 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31003| Fri Feb 22 11:48:39.677 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31003| Fri Feb 22 11:48:39.677 [initandlisten] allocator: system
m31003| Fri Feb 22 11:48:39.677 [initandlisten] options: { dbpath: "/data/db/config31003", nohttpinterface: true, noprealloc: true, oplogSize: 8, port: 31003, rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31003| Fri Feb 22 11:48:39.678 [initandlisten] journal dir=/data/db/config31003/journal
m31003| Fri Feb 22 11:48:39.678 [initandlisten] recover : no journal files present, no recovery needed
m31003| Fri Feb 22 11:48:39.692 [FileAllocator] allocating new datafile /data/db/config31003/local.ns, filling with zeroes...
m31003| Fri Feb 22 11:48:39.692 [FileAllocator] creating directory /data/db/config31003/_tmp
m31003| Fri Feb 22 11:48:39.693 [FileAllocator] done allocating datafile /data/db/config31003/local.ns, size: 16MB, took 0 secs
m31003| Fri Feb 22 11:48:39.693 [FileAllocator] allocating new datafile /data/db/config31003/local.0, filling with zeroes...
m31003| Fri Feb 22 11:48:39.693 [FileAllocator] done allocating datafile /data/db/config31003/local.0, size: 16MB, took 0 secs
m31003| Fri Feb 22 11:48:39.696 [initandlisten] waiting for connections on port 31003
m31003| Fri Feb 22 11:48:39.789 [initandlisten] connection accepted from 127.0.0.1:34767 #1 (1 connection now open)
Fri Feb 22 11:48:39.793 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 31004 --dbpath /data/db/config31004 --noprealloc --smallfiles --oplogSize 8 --nohttpinterface --rest --setParameter enableTestCommands=1
m31004| note: noprealloc may hurt performance in many applications
m31004| Fri Feb 22 11:48:39.888 [initandlisten] MongoDB starting : pid=272 port=31004 dbpath=/data/db/config31004 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31004| Fri Feb 22 11:48:39.888 [initandlisten]
m31004| Fri Feb 22 11:48:39.888 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31004| Fri Feb 22 11:48:39.888 [initandlisten] ** uses to detect impending page faults.
m31004| Fri Feb 22 11:48:39.888 [initandlisten] ** This may result in slower performance for certain use cases
m31004| Fri Feb 22 11:48:39.888 [initandlisten]
m31004| Fri Feb 22 11:48:39.888 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31004| Fri Feb 22 11:48:39.888 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31004| Fri Feb 22 11:48:39.888 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31004| Fri Feb 22 11:48:39.888 [initandlisten] allocator: system
m31004| Fri Feb 22 11:48:39.888 [initandlisten] options: { dbpath: "/data/db/config31004", nohttpinterface: true, noprealloc: true, oplogSize: 8, port: 31004, rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31004| Fri Feb 22 11:48:39.889 [initandlisten] journal dir=/data/db/config31004/journal
m31004| Fri Feb 22 11:48:39.889 [initandlisten] recover : no journal files present, no recovery needed
m31004| Fri Feb 22 11:48:39.905 [FileAllocator] allocating new datafile /data/db/config31004/local.ns, filling with zeroes...
m31004| Fri Feb 22 11:48:39.905 [FileAllocator] creating directory /data/db/config31004/_tmp
m31004| Fri Feb 22 11:48:39.905 [FileAllocator] done allocating datafile /data/db/config31004/local.ns, size: 16MB, took 0 secs
m31004| Fri Feb 22 11:48:39.905 [FileAllocator] allocating new datafile /data/db/config31004/local.0, filling with zeroes...
m31004| Fri Feb 22 11:48:39.905 [FileAllocator] done allocating datafile /data/db/config31004/local.0, size: 16MB, took 0 secs
m31004| Fri Feb 22 11:48:39.909 [initandlisten] waiting for connections on port 31004
m31004| Fri Feb 22 11:48:39.994 [initandlisten] connection accepted from 127.0.0.1:36017 #1 (1 connection now open)
Fri Feb 22 11:48:39.999 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 31005 --configdb 127.0.0.1:31002,127.0.0.1:31003,127.0.0.1:31004 --chunkSize 1 --setParameter enableTestCommands=1
m31005| Fri Feb 22 11:48:40.018 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=274 port=31005 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m31005| Fri Feb 22 11:48:40.018 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31005| Fri Feb 22 11:48:40.018 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31005| Fri Feb 22 11:48:40.018 [mongosMain] options: { chunkSize: 1, configdb: "127.0.0.1:31002,127.0.0.1:31003,127.0.0.1:31004", port: 31005, setParameter: [ "enableTestCommands=1" ] }
m31002| Fri Feb 22 11:48:40.019 [initandlisten] connection accepted from 127.0.0.1:59475 #2 (2 connections now open)
m31003| Fri Feb 22 11:48:40.020 [initandlisten] connection accepted from 127.0.0.1:37883 #2 (2 connections now open)
m31004| Fri Feb 22 11:48:40.021 [initandlisten] connection accepted from 127.0.0.1:38927 #2 (2 connections now open)
m31005| Fri Feb 22 11:48:40.022 [mongosMain] SyncClusterConnection connecting to [127.0.0.1:31002]
m31002| Fri Feb 22 11:48:40.022 [initandlisten] connection accepted from 127.0.0.1:50990 #3 (3 connections now open)
m31005| Fri Feb 22 11:48:40.022 [mongosMain] SyncClusterConnection connecting to [127.0.0.1:31003]
m31003| Fri Feb 22 11:48:40.022 [initandlisten] connection accepted from 127.0.0.1:58796 #3 (3 connections now open)
m31005| Fri Feb 22 11:48:40.022 [mongosMain] SyncClusterConnection connecting to [127.0.0.1:31004]
m31004| Fri Feb 22 11:48:40.023 [initandlisten] connection accepted from 127.0.0.1:53295 #3 (3 connections now open)
m31002| Fri Feb 22 11:48:40.023 [conn3] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.027 [conn3] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.031 [conn3] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:40.035 [mongosMain] scoped connection to 127.0.0.1:31002,127.0.0.1:31003,127.0.0.1:31004 not being returned to the pool
m31005| Fri Feb 22 11:48:40.035 [mongosMain] SyncClusterConnection connecting to [127.0.0.1:31002]
m31002| Fri Feb 22 11:48:40.036 [initandlisten] connection accepted from 127.0.0.1:42898 #4 (4 connections now open)
m31005| Fri Feb 22 11:48:40.036 [mongosMain] SyncClusterConnection connecting to [127.0.0.1:31003]
m31005| Fri Feb 22 11:48:40.036 [mongosMain] SyncClusterConnection connecting to [127.0.0.1:31004]
m31003| Fri Feb 22 11:48:40.036 [initandlisten] connection accepted from 127.0.0.1:43169 #4 (4 connections now open)
m31004| Fri Feb 22 11:48:40.036 [initandlisten] connection accepted from 127.0.0.1:40852 #4 (4 connections now open)
m31005| Fri Feb 22 11:48:40.037 [LockPinger] creating distributed lock ping thread for 127.0.0.1:31002,127.0.0.1:31003,127.0.0.1:31004 and process bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838 (sleeping for 30000ms)
m31005| Fri Feb 22 11:48:40.037 [LockPinger] SyncClusterConnection connecting to [127.0.0.1:31002]
m31002| Fri Feb 22 11:48:40.037 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:40.038 [LockPinger] SyncClusterConnection connecting to [127.0.0.1:31003]
m31002| Fri Feb 22 11:48:40.038 [initandlisten] connection accepted from 127.0.0.1:48175 #5 (5 connections now open)
m31003| Fri Feb 22 11:48:40.038 [initandlisten] connection accepted from 127.0.0.1:40056 #5 (5 connections now open)
m31005| Fri Feb 22 11:48:40.038 [LockPinger] SyncClusterConnection connecting to [127.0.0.1:31004]
m31004| Fri Feb 22 11:48:40.038 [initandlisten] connection accepted from 127.0.0.1:49593 #5 (5 connections now open)
m31002| Fri Feb 22 11:48:40.039 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.041 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.042 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.044 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.045 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.047 [FileAllocator] allocating new datafile /data/db/config31003/config.ns, filling with zeroes...
m31002| Fri Feb 22 11:48:40.047 [FileAllocator] allocating new datafile /data/db/config31002/config.ns, filling with zeroes...
m31003| Fri Feb 22 11:48:40.047 [FileAllocator] done allocating datafile /data/db/config31003/config.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:48:40.047 [FileAllocator] done allocating datafile /data/db/config31002/config.ns, size: 16MB, took 0 secs
m31003| Fri Feb 22 11:48:40.047 [FileAllocator] allocating new datafile /data/db/config31003/config.0, filling with zeroes...
m31002| Fri Feb 22 11:48:40.047 [FileAllocator] allocating new datafile /data/db/config31002/config.0, filling with zeroes...
m31003| Fri Feb 22 11:48:40.047 [FileAllocator] done allocating datafile /data/db/config31003/config.0, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:48:40.047 [FileAllocator] done allocating datafile /data/db/config31002/config.0, size: 16MB, took 0 secs
m31003| Fri Feb 22 11:48:40.050 [conn4] build index config.locks { _id: 1 }
m31002| Fri Feb 22 11:48:40.051 [conn4] build index config.locks { _id: 1 }
m31003| Fri Feb 22 11:48:40.051 [conn4] build index done. scanned 0 total records. 0 secs
m31002| Fri Feb 22 11:48:40.051 [conn4] build index done. scanned 0 total records. 0 secs
m31003| Fri Feb 22 11:48:40.053 [conn5] build index config.lockpings { _id: 1 }
m31002| Fri Feb 22 11:48:40.053 [conn5] build index config.lockpings { _id: 1 }
m31003| Fri Feb 22 11:48:40.053 [conn3] end connection 127.0.0.1:58796 (4 connections now open)
m31003| Fri Feb 22 11:48:40.054 [conn5] build index done. scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 11:48:40.054 [conn5] build index done. scanned 0 total records. 0.001 secs
m31004| Fri Feb 22 11:48:40.054 [conn3] end connection 127.0.0.1:53295 (4 connections now open)
m31002| Fri Feb 22 11:48:40.056 [conn3] end connection 127.0.0.1:50990 (4 connections now open)
m31004| Fri Feb 22 11:48:40.057 [FileAllocator] allocating new datafile /data/db/config31004/config.ns, filling with zeroes...
m31004| Fri Feb 22 11:48:40.057 [FileAllocator] done allocating datafile /data/db/config31004/config.ns, size: 16MB, took 0 secs
m31004| Fri Feb 22 11:48:40.057 [FileAllocator] allocating new datafile /data/db/config31004/config.0, filling with zeroes...
m31004| Fri Feb 22 11:48:40.057 [FileAllocator] done allocating datafile /data/db/config31004/config.0, size: 16MB, took 0 secs
m31004| Fri Feb 22 11:48:40.060 [conn4] build index config.locks { _id: 1 }
m31004| Fri Feb 22 11:48:40.061 [conn4] build index done. scanned 0 total records. 0 secs
m31004| Fri Feb 22 11:48:40.062 [conn5] build index config.lockpings { _id: 1 }
m31004| Fri Feb 22 11:48:40.064 [conn5] build index done. scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 11:48:40.110 [conn5] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:40.110 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.121 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.121 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.132 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.134 [conn5] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:40.212 [conn5] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:40.213 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.223 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.223 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.234 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.234 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.247 [conn5] build index config.lockpings { ping: new Date(1) }
m31003| Fri Feb 22 11:48:40.247 [conn5] build index config.lockpings { ping: new Date(1) }
m31002| Fri Feb 22 11:48:40.247 [conn5] build index config.lockpings { ping: new Date(1) }
m31003| Fri Feb 22 11:48:40.249 [conn5] build index done. scanned 1 total records. 0.001 secs
m31002| Fri Feb 22 11:48:40.249 [conn5] build index done. scanned 1 total records. 0.001 secs
m31004| Fri Feb 22 11:48:40.249 [conn5] build index done. scanned 1 total records. 0.001 secs
m31005| Fri Feb 22 11:48:40.315 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b18f31988f751311785
m31005| Fri Feb 22 11:48:40.324 [mongosMain] starting upgrade of config server from v0 to v4
m31005| Fri Feb 22 11:48:40.324 [mongosMain] starting next upgrade step from v0 to v4
m31005| Fri Feb 22 11:48:40.324 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:48:40-51275b18f31988f751311786", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533720324), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m31002| Fri Feb 22 11:48:40.324 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.333 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.342 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:40.350 [conn4] build index config.changelog { _id: 1 }
m31002| Fri Feb 22 11:48:40.350 [conn4] build index done. scanned 0 total records. 0 secs
m31003| Fri Feb 22 11:48:40.354 [conn4] build index config.changelog { _id: 1 }
m31003| Fri Feb 22 11:48:40.355 [conn4] build index done. scanned 0 total records. 0 secs
m31004| Fri Feb 22 11:48:40.358 [conn4] build index config.changelog { _id: 1 }
m31004| Fri Feb 22 11:48:40.359 [conn4] build index done. scanned 0 total records. 0 secs
m31002| Fri Feb 22 11:48:40.418 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.427 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.437 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:40.486 [mongosMain] writing initial config version at v4
m31002| Fri Feb 22 11:48:40.486 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.495 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.504 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.510 [conn4] build index config.version { _id: 1 }
m31003| Fri Feb 22 11:48:40.510 [conn4] build index config.version { _id: 1 }
m31002| Fri Feb 22 11:48:40.511 [conn4] build index config.version { _id: 1 }
m31004| Fri Feb 22 11:48:40.511 [conn4] build index done. scanned 0 total records. 0 secs
m31003| Fri Feb 22 11:48:40.512 [conn4] build index done. scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 11:48:40.512 [conn4] build index done. scanned 0 total records. 0.001 secs
m31005| Fri Feb 22 11:48:40.556 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:48:40-51275b18f31988f751311788", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533720556), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m31002| Fri Feb 22 11:48:40.556 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.566 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.575 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:40.623 [mongosMain] upgrade of config server to v4 successful
m31002| Fri Feb 22 11:48:40.623 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.633 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.642 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:40.692 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
m31002| Fri Feb 22 11:48:40.695 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.702 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.709 [conn5] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:40.720 [conn5] build index config.settings { _id: 1 }
m31003| Fri Feb 22 11:48:40.724 [conn5] build index config.settings { _id: 1 }
m31004| Fri Feb 22 11:48:40.724 [conn5] build index config.settings { _id: 1 }
m31002| Fri Feb 22 11:48:40.724 [conn5] build index done. scanned 0 total records. 0.004 secs
m31003| Fri Feb 22 11:48:40.728 [conn5] build index done. scanned 0 total records. 0.003 secs
m31004| Fri Feb 22 11:48:40.728 [conn5] build index done. scanned 0 total records. 0.004 secs
m31002| Fri Feb 22 11:48:40.794 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.802 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.809 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.822 [conn5] build index config.chunks { _id: 1 }
m31002| Fri Feb 22 11:48:40.822 [conn5] build index config.chunks { _id: 1 }
m31004| Fri Feb 22 11:48:40.823 [conn5] build index config.chunks { _id: 1 }
m31003| Fri Feb 22 11:48:40.827 [conn5] build index done. scanned 0 total records. 0.004 secs
m31002| Fri Feb 22 11:48:40.827 [conn5] build index done. scanned 0 total records. 0.004 secs
m31002| Fri Feb 22 11:48:40.827 [conn5] info: creating collection config.chunks on add index
m31003| Fri Feb 22 11:48:40.827 [conn5] info: creating collection config.chunks on add index
m31002| Fri Feb 22 11:48:40.827 [conn5] build index config.chunks { ns: 1, min: 1 }
m31003| Fri Feb 22 11:48:40.827 [conn5] build index config.chunks { ns: 1, min: 1 }
m31004| Fri Feb 22 11:48:40.827 [conn5] build index done. scanned 0 total records. 0.004 secs
m31004| Fri Feb 22 11:48:40.827 [conn5] info: creating collection config.chunks on add index
m31004| Fri Feb 22 11:48:40.827 [conn5] build index config.chunks { ns: 1, min: 1 }
m31003| Fri Feb 22 11:48:40.829 [conn5] build index done. scanned 0 total records. 0.002 secs
m31002| Fri Feb 22 11:48:40.829 [conn5] build index done. scanned 0 total records. 0.002 secs
m31004| Fri Feb 22 11:48:40.829 [conn5] build index done. scanned 0 total records. 0.002 secs
m31002| Fri Feb 22 11:48:40.897 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:40.905 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:40.914 [conn5] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:40.927 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m31003| Fri Feb 22 11:48:40.927 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m31004| Fri Feb 22 11:48:40.927 [conn5] build index config.chunks { ns: 1, shard: 1, min: 1 }
m31002| Fri Feb 22 11:48:40.928 [conn5] build index done. scanned 0 total records. 0.001 secs
m31003| Fri Feb 22 11:48:40.929 [conn5] build index done. scanned 0 total records. 0.002 secs
m31004| Fri Feb 22 11:48:40.929 [conn5] build index done. scanned 0 total records. 0.002 secs
m31000| Fri Feb 22 11:48:40.971 [rsHealthPoll] replSet member 127.0.0.1:31001 is now in state SECONDARY
m31002| Fri Feb 22 11:48:41.000 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.009 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.017 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.029 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m31003| Fri Feb 22 11:48:41.029 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m31002| Fri Feb 22 11:48:41.029 [conn5] build index config.chunks { ns: 1, lastmod: 1 }
m31004| Fri Feb 22 11:48:41.032 [conn5] build index done. scanned 0 total records. 0.002 secs
m31003| Fri Feb 22 11:48:41.032 [conn5] build index done. scanned 0 total records. 0.002 secs
m31002| Fri Feb 22 11:48:41.032 [conn5] build index done. scanned 0 total records. 0.003 secs
m31002| Fri Feb 22 11:48:41.103 [conn5] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.111 [conn5] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.119 [conn5] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:41.131 [conn5] build index config.shards { _id: 1 }
m31003| Fri Feb 22 11:48:41.133 [conn5] build index config.shards { _id: 1 }
m31002| Fri Feb 22 11:48:41.133 [conn5] build index done. scanned 0 total records. 0.002 secs
m31002| Fri Feb 22 11:48:41.133 [conn5] info: creating collection config.shards on add index
m31004| Fri Feb 22 11:48:41.133 [conn5] build index config.shards { _id: 1 }
m31002| Fri Feb 22 11:48:41.133 [conn5] build index config.shards { host: 1 }
m31003| Fri Feb 22 11:48:41.136 [conn5] build index done. scanned 0 total records. 0.003 secs
m31003| Fri Feb 22 11:48:41.137 [conn5] info: creating collection config.shards on add index
m31003| Fri Feb 22 11:48:41.137 [conn5] build index config.shards { host: 1 }
m31004| Fri Feb 22 11:48:41.137 [conn5] build index done. scanned 0 total records. 0.003 secs
m31004| Fri Feb 22 11:48:41.137 [conn5] info: creating collection config.shards on add index
m31004| Fri Feb 22 11:48:41.137 [conn5] build index config.shards { host: 1 }
m31002| Fri Feb 22 11:48:41.137 [conn5] build index done. scanned 0 total records. 0.004 secs
m31003| Fri Feb 22 11:48:41.140 [conn5] build index done. scanned 0 total records. 0.003 secs
m31004| Fri Feb 22 11:48:41.141 [conn5] build index done. scanned 0 total records. 0.003 secs
m31005| Fri Feb 22 11:48:41.207 [Balancer] about to contact config servers and shards
m31005| Fri Feb 22 11:48:41.207 [websvr] admin web console waiting for connections on port 32005
m31005| Fri Feb 22 11:48:41.207 [Balancer] config servers and shards contacted successfully
m31005| Fri Feb 22 11:48:41.207 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:31005 started at Feb 22 11:48:41
m31005| Fri Feb 22 11:48:41.207 [mongosMain] waiting for connections on port 31005
m31005| Fri Feb 22 11:48:41.207 [Balancer] SyncClusterConnection connecting to [127.0.0.1:31002]
m31005| Fri Feb 22 11:48:41.207 [Balancer] SyncClusterConnection connecting to [127.0.0.1:31003]
m31002| Fri Feb 22 11:48:41.207 [initandlisten] connection accepted from 127.0.0.1:57523 #6 (5 connections now open)
m31005| Fri Feb 22 11:48:41.208 [Balancer] SyncClusterConnection connecting to [127.0.0.1:31004]
m31003| Fri Feb 22 11:48:41.208 [initandlisten] connection accepted from 127.0.0.1:45573 #6 (5 connections now open)
m31004| Fri Feb 22 11:48:41.208 [conn5] build index config.mongos { _id: 1 }
m31003| Fri Feb 22 11:48:41.208 [conn5] build index config.mongos { _id: 1 }
m31004| Fri Feb 22 11:48:41.208 [initandlisten] connection accepted from 127.0.0.1:51617 #6 (5 connections now open)
m31002| Fri Feb 22 11:48:41.208 [conn5] build index config.mongos { _id: 1 }
m31004| Fri Feb 22 11:48:41.210 [conn5] build index done. scanned 0 total records. 0.002 secs
m31003| Fri Feb 22 11:48:41.211 [conn5] build index done. scanned 0 total records. 0.002 secs
m31002| Fri Feb 22 11:48:41.211 [conn5] build index done. scanned 0 total records. 0.002 secs
m31002| Fri Feb 22 11:48:41.212 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.221 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.230 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:41.308 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.315 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.324 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:41.377 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.384 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.393 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:41.403 [mongosMain] connection accepted from 127.0.0.1:44026 #1 (1 connection now open)
m31005| Fri Feb 22 11:48:41.405 [conn1] couldn't find database [admin] in config db
m31002| Fri Feb 22 11:48:41.406 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.414 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.422 [conn6] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:41.423 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b19f31988f75131178a
m31002| Fri Feb 22 11:48:41.423 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:41.431 [conn6] build index config.databases { _id: 1 }
m31003| Fri Feb 22 11:48:41.432 [conn6] build index config.databases { _id: 1 }
m31004| Fri Feb 22 11:48:41.432 [conn6] build index config.databases { _id: 1 }
m31002| Fri Feb 22 11:48:41.433 [conn6] build index done. scanned 0 total records. 0.001 secs
m31003| Fri Feb 22 11:48:41.433 [conn6] build index done. scanned 0 total records. 0.001 secs
m31004| Fri Feb 22 11:48:41.434 [conn6] build index done. scanned 0 total records. 0.001 secs
m31003| Fri Feb 22 11:48:41.434 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.444 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:41.479 [conn1] put [admin] on: config:127.0.0.1:31002,127.0.0.1:31003,127.0.0.1:31004
m31005| Fri Feb 22 11:48:41.479 [conn1] starting new replica set monitor for replica set repset1 with seed of 127.0.0.1:31000,127.0.0.1:31001
m31005| Fri Feb 22 11:48:41.479 [conn1] successfully connected to seed 127.0.0.1:31000 for replica set repset1
m31000| Fri Feb 22 11:48:41.479 [initandlisten] connection accepted from 127.0.0.1:55585 #8 (5 connections now open)
m31005| Fri Feb 22 11:48:41.480 [conn1] changing hosts to { 0: "127.0.0.1:31000", 1: "127.0.0.1:31001" } from repset1/
m31005| Fri Feb 22 11:48:41.480 [conn1] trying to add new host 127.0.0.1:31000 to replica set repset1
m31000| Fri Feb 22 11:48:41.480 [initandlisten] connection accepted from 127.0.0.1:38304 #9 (6 connections now open)
m31005| Fri Feb 22 11:48:41.480 [conn1] successfully connected to new host 127.0.0.1:31000 in replica set repset1
m31005| Fri Feb 22 11:48:41.480 [conn1] trying to add new host 127.0.0.1:31001 to replica set repset1
m31005| Fri Feb 22 11:48:41.480 [conn1] successfully connected to new host 127.0.0.1:31001 in replica set repset1
m31001| Fri Feb 22 11:48:41.480 [initandlisten] connection accepted from 127.0.0.1:42341 #5 (3 connections now open)
m31000| Fri Feb 22 11:48:41.480 [initandlisten] connection accepted from 127.0.0.1:58786 #10 (7 connections now open)
m31000| Fri Feb 22 11:48:41.481 [conn8] end connection 127.0.0.1:55585 (6 connections now open)
m31005| Fri Feb 22 11:48:41.481 [conn1] Primary for replica set repset1 changed to 127.0.0.1:31000
m31001| Fri Feb 22 11:48:41.481 [initandlisten] connection accepted from 127.0.0.1:56392 #6 (4 connections now open)
m31005| Fri Feb 22 11:48:41.482 [conn1] replica set monitor for replica set repset1 started, address is repset1/127.0.0.1:31000,127.0.0.1:31001
m31005| Fri Feb 22 11:48:41.482 [ReplicaSetMonitorWatcher] starting
m31000| Fri Feb 22 11:48:41.482 [initandlisten] connection accepted from 127.0.0.1:40922 #11 (7 connections now open)
m31005| Fri Feb 22 11:48:41.483 [conn1] going to add shard: { _id: "repset1", host: "repset1/127.0.0.1:31000,127.0.0.1:31001" }
m31002| Fri Feb 22 11:48:41.483 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.492 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.500 [conn6] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:41.500 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
m31005| Fri Feb 22 11:48:41.547 [conn1] couldn't find database [test] in config db
m31002| Fri Feb 22 11:48:41.548 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:41.556 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:41.564 [conn6] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:41.616 [conn1] put [test] on: repset1:repset1/127.0.0.1:31000,127.0.0.1:31001
m31005| Fri Feb 22 11:48:41.617 [conn1] creating WriteBackListener for: 127.0.0.1:31000 serverID: 51275b19f31988f751311789
m31005| Fri Feb 22 11:48:41.617 [conn1] creating WriteBackListener for: 127.0.0.1:31001 serverID: 51275b19f31988f751311789
m31000| Fri Feb 22 11:48:41.617 [initandlisten] connection accepted from 127.0.0.1:35448 #12 (8 connections now open)
m31000| Fri Feb 22 11:48:41.617 [initandlisten] connection accepted from 127.0.0.1:36757 #13 (9 connections now open)
m31001| Fri Feb 22 11:48:41.617 [initandlisten] connection accepted from 127.0.0.1:39662 #7 (5 connections now open)
m31000| Fri Feb 22 11:48:42.880 [conn12] update test.foo update: { $set: { y: "hello" } } nscanned:19999 nmoved:9999 nupdated:10000 keyUpdates:0 numYields: 16 locks(micros) w:2336060 1262ms
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31006, 31007 ] 31006 number
{
  "useHostName" : true,
  "oplogSize" : 40,
  "keyFile" : undefined,
  "port" : 31006,
  "noprealloc" : "",
  "smallfiles" : "",
  "rest" : "",
  "replSet" : "repset2",
  "dbpath" : "$set-$node",
  "restart" : undefined,
  "pathOpts" : {
    "node" : 0,
    "set" : "repset2"
  }
}
ReplSetTest Starting....
Resetting db path '/data/db/repset2-0'
Fri Feb 22 11:48:42.896 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31006 --noprealloc --smallfiles --rest --replSet repset2 --dbpath /data/db/repset2-0 --setParameter enableTestCommands=1
m31006| note: noprealloc may hurt performance in many applications
m31006| Fri Feb 22 11:48:42.985 [initandlisten] MongoDB starting : pid=281 port=31006 dbpath=/data/db/repset2-0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31006| Fri Feb 22 11:48:42.985 [initandlisten]
m31006| Fri Feb 22 11:48:42.985 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31006| Fri Feb 22 11:48:42.985 [initandlisten] ** uses to detect impending page faults.
m31006| Fri Feb 22 11:48:42.985 [initandlisten] ** This may result in slower performance for certain use cases
m31006| Fri Feb 22 11:48:42.985 [initandlisten]
m31006| Fri Feb 22 11:48:42.985 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31006| Fri Feb 22 11:48:42.985 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31006| Fri Feb 22 11:48:42.985 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31006| Fri Feb 22 11:48:42.985 [initandlisten] allocator: system
m31006| Fri Feb 22 11:48:42.985 [initandlisten] options: { dbpath: "/data/db/repset2-0", noprealloc: true, oplogSize: 40, port: 31006, replSet: "repset2", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31006| Fri Feb 22 11:48:42.986 [initandlisten] journal dir=/data/db/repset2-0/journal
m31006| Fri Feb 22 11:48:42.986 [initandlisten] recover : no journal files present, no recovery needed
m31006| Fri Feb 22 11:48:43.002 [FileAllocator] allocating new datafile /data/db/repset2-0/local.ns, filling with zeroes...
m31006| Fri Feb 22 11:48:43.002 [FileAllocator] creating directory /data/db/repset2-0/_tmp
m31006| Fri Feb 22 11:48:43.002 [FileAllocator] done allocating datafile /data/db/repset2-0/local.ns, size: 16MB, took 0 secs
m31006| Fri Feb 22 11:48:43.002 [FileAllocator] allocating new datafile /data/db/repset2-0/local.0, filling with zeroes...
m31006| Fri Feb 22 11:48:43.002 [FileAllocator] done allocating datafile /data/db/repset2-0/local.0, size: 16MB, took 0 secs
m31006| Fri Feb 22 11:48:43.006 [websvr] admin web console waiting for connections on port 32006
m31006| Fri Feb 22 11:48:43.006 [initandlisten] waiting for connections on port 31006
m31006| Fri Feb 22 11:48:43.009 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31006| Fri Feb 22 11:48:43.009 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31006| Fri Feb 22 11:48:43.098 [initandlisten] connection accepted from 127.0.0.1:34547 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31006 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31006, 31007 ] 31007 number
{
  "useHostName" : true,
  "oplogSize" : 40,
  "keyFile" : undefined,
  "port" : 31007,
  "noprealloc" : "",
  "smallfiles" : "",
  "rest" : "",
  "replSet" : "repset2",
  "dbpath" : "$set-$node",
  "restart" : undefined,
  "pathOpts" : {
    "node" : 1,
    "set" : "repset2"
  }
}
ReplSetTest Starting....
Resetting db path '/data/db/repset2-1'
Fri Feb 22 11:48:43.102 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31007 --noprealloc --smallfiles --rest --replSet repset2 --dbpath /data/db/repset2-1 --setParameter enableTestCommands=1
m31007| note: noprealloc may hurt performance in many applications
m31007| Fri Feb 22 11:48:43.201 [initandlisten] MongoDB starting : pid=282 port=31007 dbpath=/data/db/repset2-1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31007| Fri Feb 22 11:48:43.202 [initandlisten]
m31007| Fri Feb 22 11:48:43.202 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31007| Fri Feb 22 11:48:43.202 [initandlisten] ** uses to detect impending page faults.
m31007| Fri Feb 22 11:48:43.202 [initandlisten] ** This may result in slower performance for certain use cases
m31007| Fri Feb 22 11:48:43.202 [initandlisten]
m31007| Fri Feb 22 11:48:43.202 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31007| Fri Feb 22 11:48:43.202 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31007| Fri Feb 22 11:48:43.202 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31007| Fri Feb 22 11:48:43.202 [initandlisten] allocator: system
m31007| Fri Feb 22 11:48:43.202 [initandlisten] options: { dbpath: "/data/db/repset2-1", noprealloc: true, oplogSize: 40, port: 31007, replSet: "repset2", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31007| Fri Feb 22 11:48:43.202 [initandlisten] journal dir=/data/db/repset2-1/journal
m31007| Fri Feb 22 11:48:43.202 [initandlisten] recover : no journal files present, no recovery needed
m31007| Fri Feb 22 11:48:43.216 [FileAllocator] allocating new datafile /data/db/repset2-1/local.ns, filling with zeroes...
m31007| Fri Feb 22 11:48:43.216 [FileAllocator] creating directory /data/db/repset2-1/_tmp
m31007| Fri Feb 22 11:48:43.216 [FileAllocator] done allocating datafile /data/db/repset2-1/local.ns, size: 16MB, took 0 secs
m31007| Fri Feb 22 11:48:43.216 [FileAllocator] allocating new datafile /data/db/repset2-1/local.0, filling with zeroes...
m31007| Fri Feb 22 11:48:43.216 [FileAllocator] done allocating datafile /data/db/repset2-1/local.0, size: 16MB, took 0 secs
m31007| Fri Feb 22 11:48:43.220 [initandlisten] waiting for connections on port 31007
m31007| Fri Feb 22 11:48:43.220 [websvr] admin web console waiting for connections on port 32007
m31007| Fri Feb 22 11:48:43.222 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31007| Fri Feb 22 11:48:43.222 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31007| Fri Feb 22 11:48:43.303 [initandlisten] connection accepted from 127.0.0.1:59525 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31006, connection to bs-smartos-x86-64-1.10gen.cc:31007 ]
{ "replSetInitiate" : { "_id" : "repset2", "members" : [ { "_id" : 0, "host" : "127.0.0.1:31006" }, { "_id" : 1, "host" : "127.0.0.1:31007" } ] } }
m31006| Fri Feb 22 11:48:43.308 [conn1] replSet replSetInitiate admin command received from client
m31006| Fri Feb 22 11:48:43.309 [conn1] replSet replSetInitiate config object parses ok, 2 members specified
m31006| Fri Feb 22 11:48:43.309 [initandlisten] connection accepted from 127.0.0.1:64388 #2 (2 connections now open)
m31007| Fri Feb 22 11:48:43.310 [initandlisten] connection accepted from 127.0.0.1:41272 #2 (2 connections now open)
m31006| Fri Feb 22 11:48:43.311 [conn1] replSet replSetInitiate all members seem up
m31006| Fri Feb 22 11:48:43.311 [conn1] ******
m31006| Fri Feb 22 11:48:43.311 [conn1] creating replication oplog of size: 40MB...
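The document the shell passes to replSetInitiate (shown verbatim in the log above) has a fixed shape: a set name and a list of members with sequential `_id` values. As a minimal sketch (the helper name is hypothetical, not part of the test harness), the same document can be built programmatically:

```python
def repl_set_initiate_doc(set_name, hosts):
    """Build a replSetInitiate command document like the one in the log.

    Hypothetical helper: member _id values are assigned by list position,
    matching how ReplSetTest numbers its nodes.
    """
    return {
        "replSetInitiate": {
            "_id": set_name,
            "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
        }
    }

doc = repl_set_initiate_doc("repset2", ["127.0.0.1:31006", "127.0.0.1:31007"])
```

Running this command against any one member (here m31006) is enough; the other member picks the config up over the wire, as the later "got config version 1 from a remote, saving locally" line shows.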
m31006| Fri Feb 22 11:48:43.311 [FileAllocator] allocating new datafile /data/db/repset2-0/local.1, filling with zeroes...
m31006| Fri Feb 22 11:48:43.311 [FileAllocator] done allocating datafile /data/db/repset2-0/local.1, size: 64MB, took 0 secs
m31006| Fri Feb 22 11:48:43.326 [conn1] ******
m31006| Fri Feb 22 11:48:43.326 [conn1] replSet info saving a newer config version to local.system.replset
m31006| Fri Feb 22 11:48:43.332 [conn2] end connection 127.0.0.1:64388 (1 connection now open)
m31006| Fri Feb 22 11:48:43.344 [conn1] replSet saveConfigLocally done
m31006| Fri Feb 22 11:48:43.344 [conn1] replSet replSetInitiate config now saved locally.  Should come online in about a minute.
{ "info" : "Config now saved locally.  Should come online in about a minute.", "ok" : 1 }
m31002| Fri Feb 22 11:48:47.501 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:47.510 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:47.518 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:47.573 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:47.580 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:47.588 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:47.641 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b1ff31988f75131178b
m31002| Fri Feb 22 11:48:47.642 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:47.650 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:47.658 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:47.709 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
m31000| Fri Feb 22 11:48:50.868 [conn3] end connection 127.0.0.1:48826 (8 connections now open)
m31000| Fri Feb 22 11:48:50.868 [initandlisten] connection accepted from 127.0.0.1:45004 #14 (9 connections now open)
m31006| Fri Feb 22 11:48:53.009 [rsStart] replSet I am 127.0.0.1:31006
m31006| Fri Feb 22 11:48:53.009 [rsStart] replSet STARTUP2
m31006| Fri Feb 22 11:48:53.009 [rsHealthPoll] replSet member 127.0.0.1:31007 is up
m31006| Fri Feb 22 11:48:53.010 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31007| Fri Feb 22 11:48:53.223 [rsStart] trying to contact 127.0.0.1:31006
m31006| Fri Feb 22 11:48:53.223 [initandlisten] connection accepted from 127.0.0.1:35546 #3 (2 connections now open)
m31007| Fri Feb 22 11:48:53.224 [initandlisten] connection accepted from 127.0.0.1:57268 #3 (3 connections now open)
m31007| Fri Feb 22 11:48:53.224 [rsStart] replSet I am 127.0.0.1:31007
m31007| Fri Feb 22 11:48:53.224 [rsStart] replSet got config version 1 from a remote, saving locally
m31007| Fri Feb 22 11:48:53.224 [rsStart] replSet info saving a newer config version to local.system.replset
m31007| Fri Feb 22 11:48:53.228 [rsStart] replSet saveConfigLocally done
m31007| Fri Feb 22 11:48:53.228 [rsStart] replSet STARTUP2
m31007| Fri Feb 22 11:48:53.228 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
m31007| Fri Feb 22 11:48:53.228 [rsSync] ******
m31007| Fri Feb 22 11:48:53.228 [rsSync] creating replication oplog of size: 40MB...
m31007| Fri Feb 22 11:48:53.229 [FileAllocator] allocating new datafile /data/db/repset2-1/local.1, filling with zeroes...
m31007| Fri Feb 22 11:48:53.229 [FileAllocator] done allocating datafile /data/db/repset2-1/local.1, size: 64MB, took 0 secs
m31007| Fri Feb 22 11:48:53.238 [conn3] end connection 127.0.0.1:57268 (2 connections now open)
m31007| Fri Feb 22 11:48:53.240 [rsSync] ******
m31007| Fri Feb 22 11:48:53.240 [rsSync] replSet initial sync pending
m31007| Fri Feb 22 11:48:53.240 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31002| Fri Feb 22 11:48:53.711 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:53.719 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:53.728 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:48:53.802 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:53.810 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:53.819 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:53.871 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b25f31988f75131178c
m31002| Fri Feb 22 11:48:53.871 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:53.879 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:53.887 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:48:53.939 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
m31006| Fri Feb 22 11:48:54.010 [rsSync] replSet SECONDARY
m31006| Fri Feb 22 11:48:55.010 [rsHealthPoll] replset info 127.0.0.1:31007 thinks that we are down
m31006| Fri Feb 22 11:48:55.010 [rsHealthPoll] replSet member 127.0.0.1:31007 is now in state STARTUP2
m31006| Fri Feb 22 11:48:55.010 [rsMgr] not electing self, 127.0.0.1:31007 would veto with 'I don't think 127.0.0.1:31006 is electable'
m31007| Fri Feb 22 11:48:55.225 [rsHealthPoll] replSet member 127.0.0.1:31006 is up
m31007| Fri Feb 22 11:48:55.225 [rsHealthPoll] replSet member 127.0.0.1:31006 is now in state SECONDARY
m31002| Fri Feb 22 11:48:59.940 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:48:59.950 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:48:59.960 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:49:00.032 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:00.041 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:00.050 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:00.100 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b2bf31988f75131178d
m31002| Fri Feb 22 11:49:00.101 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:00.108 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:00.117 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:00.168 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
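Both nodes log "total number of votes is even - add arbiter or give one member an extra vote" because election requires a strict majority of the set's votes. A minimal sketch of that arithmetic (assuming one vote per member, the default):

```python
def majority(total_votes):
    # Strict majority: more than half of all configured votes.
    return total_votes // 2 + 1

# With 2 voting members, a majority is 2, so the set cannot elect a
# primary if either member is down; adding an arbiter raises the
# total to 3, where a majority of 2 tolerates one failure.
two_member = majority(2)
with_arbiter = majority(3)
```

This is why the 2-member repset2 can only come online here while both m31006 and m31007 are up, and why m31007's veto stalls the election until it has seen m31006 as SECONDARY.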
m31006| Fri Feb 22 11:49:01.011 [rsMgr] replSet info electSelf 0
m31007| Fri Feb 22 11:49:01.011 [conn2] replSet RECOVERING
m31007| Fri Feb 22 11:49:01.011 [conn2] replSet info voting yea for 127.0.0.1:31006 (0)
m31006| Fri Feb 22 11:49:02.010 [rsMgr] replSet PRIMARY
m31006| Fri Feb 22 11:49:03.011 [rsHealthPoll] replSet member 127.0.0.1:31007 is now in state RECOVERING
m31007| Fri Feb 22 11:49:03.226 [rsHealthPoll] replSet member 127.0.0.1:31006 is now in state PRIMARY
m31002| Fri Feb 22 11:49:06.170 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:06.178 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:06.186 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:49:06.261 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:06.268 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:06.277 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:06.329 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b32f31988f75131178e
m31002| Fri Feb 22 11:49:06.329 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:06.337 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:06.345 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:06.397 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
m31001| Fri Feb 22 11:49:08.974 [conn4] end connection 127.0.0.1:41788 (4 connections now open)
m31001| Fri Feb 22 11:49:08.975 [initandlisten] connection accepted from 127.0.0.1:62619 #8 (5 connections now open)
m31007| Fri Feb 22 11:49:09.240 [rsSync] replSet initial sync pending
m31007| Fri Feb 22 11:49:09.240 [rsSync] replSet syncing to: 127.0.0.1:31006
m31006| Fri Feb 22 11:49:09.241 [initandlisten] connection accepted from 127.0.0.1:45649 #4 (3 connections now open)
m31007| Fri Feb 22 11:49:09.248 [rsSync] build index local.me { _id: 1 }
m31007| Fri Feb 22 11:49:09.252 [rsSync] build index done. scanned 0 total records. 0.003 secs
m31007| Fri Feb 22 11:49:09.254 [rsSync] build index local.replset.minvalid { _id: 1 }
m31007| Fri Feb 22 11:49:09.255 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31007| Fri Feb 22 11:49:09.255 [rsSync] replSet initial sync drop all databases
m31007| Fri Feb 22 11:49:09.255 [rsSync] dropAllDatabasesExceptLocal 1
m31007| Fri Feb 22 11:49:09.255 [rsSync] replSet initial sync clone all databases
m31007| Fri Feb 22 11:49:09.256 [rsSync] replSet initial sync data copy, starting syncup
m31007| Fri Feb 22 11:49:09.256 [rsSync] oplog sync 1 of 3
m31007| Fri Feb 22 11:49:09.256 [rsSync] oplog sync 2 of 3
m31007| Fri Feb 22 11:49:09.256 [rsSync] replSet initial sync building indexes
m31007| Fri Feb 22 11:49:09.256 [rsSync] oplog sync 3 of 3
m31007| Fri Feb 22 11:49:09.256 [rsSync] replSet initial sync finishing up
m31007| Fri Feb 22 11:49:09.262 [rsSync] replSet set minValid=51275b1b:1
m31007| Fri Feb 22 11:49:09.266 [rsSync] replSet initial sync done
m31006| Fri Feb 22 11:49:09.266 [conn4] end connection 127.0.0.1:45649 (2 connections now open)
m31007| Fri Feb 22 11:49:10.229 [rsBackgroundSync] replSet syncing to: 127.0.0.1:31006
m31006| Fri Feb 22 11:49:10.229 [initandlisten] connection accepted from 127.0.0.1:51575 #5 (3 connections now open)
m31007| Fri Feb 22 11:49:10.266 [rsSyncNotifier] replset setting oplog notifier to 127.0.0.1:31006
m31006| Fri Feb 22 11:49:10.266 [initandlisten] connection accepted from 127.0.0.1:40007 #6 (4 connections now open)
m31002| Fri Feb 22 11:49:10.315 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:10.325 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:10.334 [conn6] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:49:10.381 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:10.390 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:10.399 [conn6] CMD fsync: sync:1 lock:0
m31007| Fri Feb 22 11:49:11.267 [rsSync] replSet SECONDARY
m31006| Fri Feb 22 11:49:11.278 [slaveTracking] build index local.slaves { _id: 1 }
m31006| Fri Feb 22 11:49:11.281 [slaveTracking] build index done. scanned 0 total records. 0.002 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31006, is { "t" : 1361533723000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361533723000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31007
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31007, is synced
ReplSetTest awaitReplication: finished: all 1 secondaries synced at timestamp { "t" : 1361533723000, "i" : 1 }
m31005| Fri Feb 22 11:49:11.420 [conn1] starting new replica set monitor for replica set repset2 with seed of 127.0.0.1:31006,127.0.0.1:31007
m31005| Fri Feb 22 11:49:11.420 [conn1] successfully connected to seed 127.0.0.1:31006 for replica set repset2
m31006| Fri Feb 22 11:49:11.420 [initandlisten] connection accepted from 127.0.0.1:41312 #7 (5 connections now open)
m31005| Fri Feb 22 11:49:11.420 [conn1] changing hosts to { 0: "127.0.0.1:31006", 1: "127.0.0.1:31007" } from repset2/
m31005| Fri Feb 22 11:49:11.420 [conn1] trying to add new host 127.0.0.1:31006 to replica set repset2
m31006| Fri Feb 22 11:49:11.421 [initandlisten] connection accepted from 127.0.0.1:47470 #8 (6 connections now open)
m31005| Fri Feb 22 11:49:11.421 [conn1] successfully connected to new host 127.0.0.1:31006 in replica set repset2
m31005| Fri Feb 22 11:49:11.421 [conn1] trying to add new host 127.0.0.1:31007 to replica set repset2
m31005| Fri Feb 22 11:49:11.421 [conn1] successfully connected to new host 127.0.0.1:31007 in replica set repset2
m31007| Fri Feb 22 11:49:11.421 [initandlisten] connection accepted from 127.0.0.1:60064 #4 (3 connections now open)
m31006| Fri Feb 22 11:49:11.421 [initandlisten] connection accepted from 127.0.0.1:56863 #9 (7 connections now open)
m31006| Fri Feb 22 11:49:11.421 [conn7] end connection 127.0.0.1:41312 (6 connections now open)
m31005| Fri Feb 22 11:49:11.421 [conn1] Primary for replica set repset2 changed to 127.0.0.1:31006
m31007| Fri Feb 22 11:49:11.422 [initandlisten] connection accepted from 127.0.0.1:39638 #5 (4 connections now open)
m31005| Fri Feb 22 11:49:11.422 [conn1] replica set monitor for replica set repset2 started, address is repset2/127.0.0.1:31006,127.0.0.1:31007
m31006| Fri Feb 22 11:49:11.422 [initandlisten] connection accepted from 127.0.0.1:34073 #10 (7 connections now open)
m31005| Fri Feb 22 11:49:11.423 [conn1] going to add shard: { _id: "repset2", host: "repset2/127.0.0.1:31006,127.0.0.1:31007" }
m31002| Fri Feb 22 11:49:11.423 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:11.432 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:11.442 [conn6] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:11.507 [conn1] creating WriteBackListener for: 127.0.0.1:31006 serverID: 51275b19f31988f751311789
m31005| Fri Feb 22 11:49:11.507 [conn1] creating WriteBackListener for: 127.0.0.1:31007 serverID: 51275b19f31988f751311789
m31006| Fri Feb 22 11:49:11.507 [initandlisten] connection accepted from 127.0.0.1:62249 #11 (8 connections now open)
m31006| Fri Feb 22 11:49:11.507 [initandlisten] connection accepted from 127.0.0.1:52339 #12 (9 connections now open)
m31007| Fri Feb 22 11:49:11.507 [initandlisten] connection accepted from 127.0.0.1:34689 #6 (5 connections now open)
m31005| Fri Feb 22 11:49:11.507 [conn1] SyncClusterConnection connecting to [127.0.0.1:31002]
m31005| Fri Feb 22 11:49:11.508 [conn1] SyncClusterConnection connecting to [127.0.0.1:31003]
m31002| Fri Feb 22 11:49:11.508 [initandlisten] connection accepted from 127.0.0.1:58080 #7 (6 connections now open)
m31005| Fri Feb 22 11:49:11.508 [conn1] SyncClusterConnection connecting to [127.0.0.1:31004]
m31003| Fri Feb 22 11:49:11.508 [initandlisten] connection accepted from 127.0.0.1:35516 #7 (6 connections now open)
m31004| Fri Feb 22 11:49:11.508 [initandlisten] connection accepted from 127.0.0.1:43263 #7 (6 connections now open)
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275b18f31988f751311787") }
  shards:
	{ "_id" : "repset1", "host" : "repset1/127.0.0.1:31000,127.0.0.1:31001" }
	{ "_id" : "repset2", "host" : "repset2/127.0.0.1:31006,127.0.0.1:31007" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "test", "partitioned" : false, "primary" : "repset1" }
{ "databases" : [ { "name" : "local", "sizeOnDisk" : 100663296, "empty" : false } ], "totalSize" : 100663296, "ok" : 1 }
m31005| Fri Feb 22 11:49:11.514 [conn1] Moving test primary from: repset1:repset1/127.0.0.1:31000,127.0.0.1:31001 to: repset2:repset2/127.0.0.1:31006,127.0.0.1:31007
m31002| Fri Feb 22 11:49:11.514 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:11.525 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:11.536 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:49:11.607 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:11.617 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:11.627 [conn4] CMD fsync: sync:1 lock:0
m31002| Fri Feb 22 11:49:11.676 [conn4] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:11.686 [conn4] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:11.697 [conn4] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:11.744 [conn1] distributed lock 'test-movePrimary/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' acquired, ts : 51275b37f31988f75131178f
m31006| Fri Feb 22 11:49:11.745 [conn10] starting new replica set monitor for replica set repset1 with seed of 127.0.0.1:31000,127.0.0.1:31001
m31006| Fri Feb 22 11:49:11.745 [conn10] successfully connected to seed 127.0.0.1:31000 for replica set repset1
m31000| Fri Feb 22 11:49:11.745 [initandlisten] connection accepted from 127.0.0.1:51202 #15 (10 connections now open)
m31006| Fri Feb 22 11:49:11.745 [conn10] changing hosts to { 0: "127.0.0.1:31000", 1: "127.0.0.1:31001" } from repset1/
m31006| Fri Feb 22 11:49:11.745 [conn10] trying to add new host 127.0.0.1:31000 to replica set repset1
m31000| Fri Feb 22 11:49:11.746 [initandlisten] connection accepted from 127.0.0.1:37777 #16 (11 connections now open)
m31006| Fri Feb 22 11:49:11.746 [conn10] successfully connected to new host 127.0.0.1:31000 in replica set repset1
m31006| Fri Feb 22 11:49:11.746 [conn10] trying to add new host 127.0.0.1:31001 to replica set repset1
m31006| Fri Feb 22 11:49:11.746 [conn10] successfully connected to new host 127.0.0.1:31001 in replica set repset1
m31001| Fri Feb 22 11:49:11.746 [initandlisten] connection accepted from 127.0.0.1:60614 #9 (6 connections now open)
m31000| Fri Feb 22 11:49:11.746 [initandlisten] connection accepted from 127.0.0.1:55954 #17 (12 connections now open)
m31000| Fri Feb 22 11:49:11.747 [conn15] end connection 127.0.0.1:51202 (11 connections now open)
m31006| Fri Feb 22 11:49:11.747 [conn10] Primary for replica set repset1 changed to 127.0.0.1:31000
m31001| Fri Feb 22 11:49:11.747 [initandlisten] connection accepted from 127.0.0.1:62224 #10 (7 connections now open)
m31006| Fri Feb 22 11:49:11.748 [conn10] replica set monitor for replica set repset1 started, address is repset1/127.0.0.1:31000,127.0.0.1:31001
m31006| Fri Feb 22 11:49:11.748 [ReplicaSetMonitorWatcher] starting
m31000| Fri Feb 22 11:49:11.748 [initandlisten] connection accepted from 127.0.0.1:59767 #18 (12 connections now open)
m31006| Fri Feb 22 11:49:11.749 [FileAllocator] allocating new datafile /data/db/repset2-0/test.ns, filling with zeroes...
m31006| Fri Feb 22 11:49:11.749 [FileAllocator] done allocating datafile /data/db/repset2-0/test.ns, size: 16MB, took 0 secs
m31006| Fri Feb 22 11:49:11.749 [FileAllocator] allocating new datafile /data/db/repset2-0/test.0, filling with zeroes...
m31006| Fri Feb 22 11:49:11.750 [FileAllocator] done allocating datafile /data/db/repset2-0/test.0, size: 16MB, took 0 secs
m31007| Fri Feb 22 11:49:11.753 [FileAllocator] allocating new datafile /data/db/repset2-1/test.ns, filling with zeroes...
m31007| Fri Feb 22 11:49:11.754 [FileAllocator] done allocating datafile /data/db/repset2-1/test.ns, size: 16MB, took 0 secs
m31007| Fri Feb 22 11:49:11.754 [FileAllocator] allocating new datafile /data/db/repset2-1/test.0, filling with zeroes...
m31007| Fri Feb 22 11:49:11.754 [FileAllocator] done allocating datafile /data/db/repset2-1/test.0, size: 16MB, took 0 secs
m31007| Fri Feb 22 11:49:11.758 [repl writer worker 1] build index test.foo { _id: 1 }
m31007| Fri Feb 22 11:49:11.760 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31006| Fri Feb 22 11:49:12.191 [conn10] build index test.foo { _id: 1 }
m31006| Fri Feb 22 11:49:12.253 [conn10] fastBuildIndex dupsToDrop:0
m31006| Fri Feb 22 11:49:12.255 [conn10] build index done. scanned 10000 total records. 0.063 secs
m31006| Fri Feb 22 11:49:12.287 [conn10] command test.$cmd command: { clone: "repset1/127.0.0.1:31000,127.0.0.1:31001", collsToIgnore: {} } ntoreturn:1 keyUpdates:0 numYields: 82 locks(micros) W:392817 w:154847 reslen:71 542ms
m31000| Fri Feb 22 11:49:12.287 [conn18] end connection 127.0.0.1:59767 (11 connections now open)
m31002| Fri Feb 22 11:49:12.287 [conn6] CMD fsync: sync:1 lock:0
m31003| Fri Feb 22 11:49:12.298 [conn6] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:12.308 [conn6] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:12.357 [conn1] movePrimary dropping database on repset1/127.0.0.1:31000,127.0.0.1:31001, no sharded collections in test
m31000| Fri Feb 22 11:49:12.358 [conn11] dropDatabase test starting
m31000| Fri Feb 22 11:49:12.372 [conn11] removeJournalFiles
m31000| Fri Feb 22 11:49:12.375 [conn11] dropDatabase test finished
m31002| Fri Feb 22 11:49:12.376 [conn4] CMD fsync: sync:1 lock:0
m31001| Fri Feb 22 11:49:12.380 [repl writer worker 1] dropDatabase test starting
m31003| Fri Feb 22 11:49:12.387 [conn4] CMD fsync: sync:1 lock:0
m31001| Fri Feb 22 11:49:12.399 [repl writer worker 1] removeJournalFiles
m31005| Fri Feb 22 11:49:12.399 [Balancer] SyncClusterConnection connecting to [127.0.0.1:31002]
m31005| Fri Feb 22 11:49:12.399 [Balancer] SyncClusterConnection connecting to [127.0.0.1:31003]
m31002| Fri Feb 22 11:49:12.399 [initandlisten] connection accepted from 127.0.0.1:52378 #8 (7 connections now open)
m31005| Fri Feb 22 11:49:12.400 [Balancer] SyncClusterConnection connecting to [127.0.0.1:31004]
m31003| Fri Feb 22 11:49:12.400 [initandlisten] connection accepted from 127.0.0.1:42968 #8 (7 connections now open)
m31004| Fri Feb 22 11:49:12.400 [initandlisten] connection accepted from 127.0.0.1:56775 #8 (7 connections now open)
m31002| Fri Feb 22 11:49:12.400 [conn8] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:12.401 [conn4] CMD fsync: sync:1 lock:0
m31001| Fri Feb 22 11:49:12.402 [repl writer worker 1] dropDatabase test finished
m31003| Fri Feb 22 11:49:12.412 [conn8] CMD fsync: sync:1 lock:0
m31004| Fri Feb 22 11:49:12.422 [conn8] CMD fsync: sync:1 lock:0
m31005| Fri Feb 22 11:49:12.460 [conn1] distributed lock 'test-movePrimary/bs-smartos-x86-64-1.10gen.cc:31005:1361533720:16838' unlocked.
{ "primary " : "repset2:repset2/127.0.0.1:31006,127.0.0.1:31007", "ok" : 1 }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275b18f31988f751311787") }
  shards:
	{ "_id" : "repset1", "host" : "repset1/127.0.0.1:31000,127.0.0.1:31001" }
	{ "_id" : "repset2", "host" : "repset2/127.0.0.1:31006,127.0.0.1:31007" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "test", "partitioned" : false, "primary" : "repset2" }
{ "databases" : [ { "name" : "local", "sizeOnDisk" : 100663296, "empty" : false }, { "name" : "test", "sizeOnDisk" : 33554432, "empty" : false } ], "totalSize" : 134217728, "ok" : 1 }
m31005| Fri Feb 22 11:49:12.466 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31002| Fri Feb 22 11:49:12.466 [conn2] end connection 127.0.0.1:59475 (6 connections now open)
m31003| Fri Feb 22 11:49:12.467 [conn2] end connection 127.0.0.1:37883 (6 connections now open)
m31003| Fri Feb 22 11:49:12.467 [conn4] end connection 127.0.0.1:43169 (6 connections now open)
m31004| Fri Feb 22 11:49:12.467 [conn2] end connection 127.0.0.1:38927 (6 connections now open)
m31004| Fri Feb 22 11:49:12.467 [conn4] end connection 127.0.0.1:40852 (6 connections now open)
m31003| Fri Feb 22 11:49:12.467 [conn5] end connection 127.0.0.1:40056 (6 connections now open)
m31004| Fri Feb 22 11:49:12.467 [conn5] end connection 127.0.0.1:49593 (6 connections now open)
m31002| Fri Feb 22 11:49:12.467 [conn4] end connection 127.0.0.1:42898 (6 connections now open)
m31004| Fri Feb 22 11:49:12.467 [conn6] end connection 127.0.0.1:51617 (4 connections now open)
m31002| Fri Feb 22 11:49:12.467 [conn5] end connection 127.0.0.1:48175 (5 connections now open)
m31003| Fri Feb 22 11:49:12.467 [conn6] end connection 127.0.0.1:45573 (3 connections now open)
m31002| Fri Feb 22 11:49:12.467 [conn6] end connection 127.0.0.1:57523 (5 connections now open)
m31001| Fri Feb 22 11:49:12.467 [conn6] end connection 127.0.0.1:56392 (6 connections now open)
m31000| Fri Feb 22 11:49:12.467 [conn9] end connection 127.0.0.1:38304 (10 connections now open)
m31000| Fri Feb 22 11:49:12.467 [conn11] end connection 127.0.0.1:40922 (10 connections now open)
m31000| Fri Feb 22 11:49:12.467 [conn10] end connection 127.0.0.1:58786 (10 connections now open)
m31000| Fri Feb 22 11:49:12.467 [conn12] end connection 127.0.0.1:35448 (9 connections now open)
m31006| Fri Feb 22 11:49:12.467 [conn9] end connection 127.0.0.1:56863 (8 connections now open)
m31006| Fri Feb 22 11:49:12.467 [conn10] end connection 127.0.0.1:34073 (8 connections now open)
m31006| Fri Feb 22 11:49:12.467 [conn8] end connection 127.0.0.1:47470 (8 connections now open)
m31006| Fri Feb 22 11:49:12.467 [conn11] end connection 127.0.0.1:62249 (8 connections now open)
m31003| Fri Feb 22 11:49:12.467 [conn7] end connection 127.0.0.1:35516 (2 connections now open)
m31001| Fri Feb 22 11:49:12.467 [conn5] end connection 127.0.0.1:42341 (6 connections now open)
m31004| Fri Feb 22 11:49:12.467 [conn8] end connection 127.0.0.1:56775 (2 connections now open)
m31002| Fri Feb 22 11:49:12.467 [conn8] end connection 127.0.0.1:52378 (2 connections now open)
m31007| Fri Feb 22 11:49:12.467 [conn5] end connection 127.0.0.1:39638 (4 connections now open)
m31004| Fri Feb 22 11:49:12.467 [conn7] end connection 127.0.0.1:43263 (1 connection now open)
m31002| Fri Feb 22 11:49:12.467 [conn7] end connection 127.0.0.1:58080 (1 connection now open)
m31007| Fri Feb 22 11:49:12.467 [conn4] end connection 127.0.0.1:60064 (4 connections now open)
m31003| Fri Feb 22 11:49:12.489 [conn8] end connection 127.0.0.1:42968 (1 connection now open)
m31006| Fri Feb 22 11:49:13.013 [rsHealthPoll] replSet member 127.0.0.1:31007 is now in state SECONDARY
Fri Feb 22 11:49:13.466 shell: stopped mongo program on port 31005
Stopping mongod on port 31002
m31002| Fri Feb 22 11:49:13.466 got signal 15 (Terminated), will terminate after current cmd ends
m31002| Fri Feb 22 11:49:13.467 [interruptThread] now exiting
m31002| Fri Feb 22 11:49:13.467 dbexit:
m31002| Fri Feb 22 11:49:13.467 [interruptThread] shutdown: going to close listening sockets...
m31002| Fri Feb 22 11:49:13.467 [interruptThread] closing listening socket: 18
m31002| Fri Feb 22 11:49:13.467 [interruptThread] closing listening socket: 19
m31002| Fri Feb 22 11:49:13.467 [interruptThread] removing socket file: /tmp/mongodb-31002.sock
m31002| Fri Feb 22 11:49:13.467 [interruptThread] shutdown: going to flush diaglog...
m31002| Fri Feb 22 11:49:13.467 [interruptThread] shutdown: going to close sockets...
m31002| Fri Feb 22 11:49:13.467 [interruptThread] shutdown: waiting for fs preallocator...
m31002| Fri Feb 22 11:49:13.467 [interruptThread] shutdown: lock for final commit...
m31002| Fri Feb 22 11:49:13.467 [interruptThread] shutdown: final commit...
m31002| Fri Feb 22 11:49:13.467 [conn1] end connection 127.0.0.1:43950 (0 connections now open)
m31002| Fri Feb 22 11:49:13.475 [interruptThread] shutdown: closing all files...
m31002| Fri Feb 22 11:49:13.475 [interruptThread] closeAllFiles() finished
m31002| Fri Feb 22 11:49:13.475 [interruptThread] journalCleanup...
m31002| Fri Feb 22 11:49:13.475 [interruptThread] removeJournalFiles m31002| Fri Feb 22 11:49:13.476 dbexit: really exiting now Fri Feb 22 11:49:14.466 shell: stopped mongo program on port 31002 Stopping mongod on port 31003 m31003| Fri Feb 22 11:49:14.470 got signal 15 (Terminated), will terminate after current cmd ends m31003| Fri Feb 22 11:49:14.470 [interruptThread] now exiting m31003| Fri Feb 22 11:49:14.470 dbexit: m31003| Fri Feb 22 11:49:14.470 [interruptThread] shutdown: going to close listening sockets... m31003| Fri Feb 22 11:49:14.470 [interruptThread] closing listening socket: 21 m31003| Fri Feb 22 11:49:14.470 [interruptThread] closing listening socket: 22 m31003| Fri Feb 22 11:49:14.470 [interruptThread] removing socket file: /tmp/mongodb-31003.sock m31003| Fri Feb 22 11:49:14.470 [interruptThread] shutdown: going to flush diaglog... m31003| Fri Feb 22 11:49:14.470 [interruptThread] shutdown: going to close sockets... m31003| Fri Feb 22 11:49:14.470 [interruptThread] shutdown: waiting for fs preallocator... m31003| Fri Feb 22 11:49:14.470 [interruptThread] shutdown: lock for final commit... m31003| Fri Feb 22 11:49:14.470 [interruptThread] shutdown: final commit... m31003| Fri Feb 22 11:49:14.471 [conn1] end connection 127.0.0.1:34767 (0 connections now open) m31003| Fri Feb 22 11:49:14.479 [interruptThread] shutdown: closing all files... m31003| Fri Feb 22 11:49:14.479 [interruptThread] closeAllFiles() finished m31003| Fri Feb 22 11:49:14.479 [interruptThread] journalCleanup... 
m31003| Fri Feb 22 11:49:14.479 [interruptThread] removeJournalFiles
m31003| Fri Feb 22 11:49:14.480 dbexit: really exiting now
Fri Feb 22 11:49:15.470 shell: stopped mongo program on port 31003
Stopping mongod on port 31004
m31004| Fri Feb 22 11:49:15.474 got signal 15 (Terminated), will terminate after current cmd ends
m31004| Fri Feb 22 11:49:15.474 [interruptThread] now exiting
m31004| Fri Feb 22 11:49:15.474 dbexit:
m31004| Fri Feb 22 11:49:15.474 [interruptThread] shutdown: going to close listening sockets...
m31004| Fri Feb 22 11:49:15.474 [interruptThread] closing listening socket: 24
m31004| Fri Feb 22 11:49:15.474 [interruptThread] closing listening socket: 25
m31004| Fri Feb 22 11:49:15.474 [interruptThread] removing socket file: /tmp/mongodb-31004.sock
m31004| Fri Feb 22 11:49:15.474 [interruptThread] shutdown: going to flush diaglog...
m31004| Fri Feb 22 11:49:15.474 [interruptThread] shutdown: going to close sockets...
m31004| Fri Feb 22 11:49:15.474 [interruptThread] shutdown: waiting for fs preallocator...
m31004| Fri Feb 22 11:49:15.474 [interruptThread] shutdown: lock for final commit...
m31004| Fri Feb 22 11:49:15.474 [interruptThread] shutdown: final commit...
m31004| Fri Feb 22 11:49:15.474 [conn1] end connection 127.0.0.1:36017 (0 connections now open)
m31004| Fri Feb 22 11:49:15.484 [interruptThread] shutdown: closing all files...
m31004| Fri Feb 22 11:49:15.485 [interruptThread] closeAllFiles() finished
m31004| Fri Feb 22 11:49:15.485 [interruptThread] journalCleanup...
m31004| Fri Feb 22 11:49:15.485 [interruptThread] removeJournalFiles
m31004| Fri Feb 22 11:49:15.485 dbexit: really exiting now
Fri Feb 22 11:49:16.474 shell: stopped mongo program on port 31004
ReplSetTest n: 0 ports: [ 31006, 31007 ] 31006 number
ReplSetTest stop *** Shutting down mongod in port 31006 ***
m31006| Fri Feb 22 11:49:16.479 got signal 15 (Terminated), will terminate after current cmd ends
m31006| Fri Feb 22 11:49:16.479 [interruptThread] now exiting
m31006| Fri Feb 22 11:49:16.479 dbexit:
m31006| Fri Feb 22 11:49:16.479 [interruptThread] shutdown: going to close listening sockets...
m31006| Fri Feb 22 11:49:16.479 [interruptThread] closing listening socket: 30
m31006| Fri Feb 22 11:49:16.479 [interruptThread] closing listening socket: 31
m31006| Fri Feb 22 11:49:16.479 [interruptThread] closing listening socket: 32
m31006| Fri Feb 22 11:49:16.479 [interruptThread] removing socket file: /tmp/mongodb-31006.sock
m31006| Fri Feb 22 11:49:16.479 [interruptThread] shutdown: going to flush diaglog...
m31006| Fri Feb 22 11:49:16.479 [interruptThread] shutdown: going to close sockets...
m31006| Fri Feb 22 11:49:16.479 [interruptThread] shutdown: waiting for fs preallocator...
m31006| Fri Feb 22 11:49:16.479 [interruptThread] shutdown: lock for final commit...
m31006| Fri Feb 22 11:49:16.479 [interruptThread] shutdown: final commit...
m31006| Fri Feb 22 11:49:16.479 [conn1] end connection 127.0.0.1:34547 (4 connections now open)
m31006| Fri Feb 22 11:49:16.479 [conn3] end connection 127.0.0.1:35546 (4 connections now open)
m31007| Fri Feb 22 11:49:16.479 [conn2] end connection 127.0.0.1:41272 (2 connections now open)
m31000| Fri Feb 22 11:49:16.479 [conn16] end connection 127.0.0.1:37777 (6 connections now open)
m31000| Fri Feb 22 11:49:16.480 [conn17] end connection 127.0.0.1:55954 (6 connections now open)
m31001| Fri Feb 22 11:49:16.479 [conn9] end connection 127.0.0.1:60614 (4 connections now open)
m31001| Fri Feb 22 11:49:16.480 [conn10] end connection 127.0.0.1:62224 (4 connections now open)
m31007| Fri Feb 22 11:49:16.480 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: 127.0.0.1:31006
m31007| Fri Feb 22 11:49:16.480 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.1:31006
m31006| Fri Feb 22 11:49:16.497 [interruptThread] shutdown: closing all files...
m31006| Fri Feb 22 11:49:16.498 [interruptThread] closeAllFiles() finished
m31006| Fri Feb 22 11:49:16.498 [interruptThread] journalCleanup...
m31006| Fri Feb 22 11:49:16.498 [interruptThread] removeJournalFiles
m31006| Fri Feb 22 11:49:16.498 dbexit: really exiting now
m31007| Fri Feb 22 11:49:17.228 [rsHealthPoll] DBClientCursor::init call() failed
m31007| Fri Feb 22 11:49:17.228 [rsHealthPoll] replset info 127.0.0.1:31006 heartbeat failed, retrying
m31007| Fri Feb 22 11:49:17.228 [rsHealthPoll] replSet info 127.0.0.1:31006 is down (or slow to respond):
m31007| Fri Feb 22 11:49:17.229 [rsHealthPoll] replSet member 127.0.0.1:31006 is now in state DOWN
m31007| Fri Feb 22 11:49:17.229 [rsMgr] replSet can't see a majority, will not try to elect self
Fri Feb 22 11:49:17.479 shell: stopped mongo program on port 31006
ReplSetTest n: 1 ports: [ 31006, 31007 ] 31007 number
ReplSetTest stop *** Shutting down mongod in port 31007 ***
m31007| Fri Feb 22 11:49:17.480 got signal 15 (Terminated), will terminate after current cmd ends
m31007| Fri Feb 22 11:49:17.480 [interruptThread] now exiting
m31007| Fri Feb 22 11:49:17.480 dbexit:
m31007| Fri Feb 22 11:49:17.480 [interruptThread] shutdown: going to close listening sockets...
m31007| Fri Feb 22 11:49:17.480 [interruptThread] closing listening socket: 33
m31007| Fri Feb 22 11:49:17.480 [interruptThread] closing listening socket: 34
m31007| Fri Feb 22 11:49:17.480 [interruptThread] closing listening socket: 35
m31007| Fri Feb 22 11:49:17.480 [interruptThread] removing socket file: /tmp/mongodb-31007.sock
m31007| Fri Feb 22 11:49:17.480 [interruptThread] shutdown: going to flush diaglog...
m31007| Fri Feb 22 11:49:17.480 [interruptThread] shutdown: going to close sockets...
m31007| Fri Feb 22 11:49:17.480 [interruptThread] shutdown: waiting for fs preallocator...
m31007| Fri Feb 22 11:49:17.480 [interruptThread] shutdown: lock for final commit...
m31007| Fri Feb 22 11:49:17.480 [interruptThread] shutdown: final commit...
m31007| Fri Feb 22 11:49:17.480 [conn1] end connection 127.0.0.1:59525 (1 connection now open)
m31007| Fri Feb 22 11:49:17.496 [interruptThread] shutdown: closing all files...
m31007| Fri Feb 22 11:49:17.497 [interruptThread] closeAllFiles() finished
m31007| Fri Feb 22 11:49:17.497 [interruptThread] journalCleanup...
m31007| Fri Feb 22 11:49:17.497 [interruptThread] removeJournalFiles
m31007| Fri Feb 22 11:49:17.498 dbexit: really exiting now
Fri Feb 22 11:49:18.480 shell: stopped mongo program on port 31007
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
ReplSetTest n: 0 ports: [ 31000, 31001 ] 31000 number
ReplSetTest stop *** Shutting down mongod in port 31000 ***
m31000| Fri Feb 22 11:49:18.485 got signal 15 (Terminated), will terminate after current cmd ends
m31000| Fri Feb 22 11:49:18.485 [interruptThread] now exiting
m31000| Fri Feb 22 11:49:18.486 dbexit:
m31000| Fri Feb 22 11:49:18.486 [interruptThread] shutdown: going to close listening sockets...
m31000| Fri Feb 22 11:49:18.486 [interruptThread] closing listening socket: 12
m31000| Fri Feb 22 11:49:18.486 [interruptThread] closing listening socket: 13
m31000| Fri Feb 22 11:49:18.486 [interruptThread] closing listening socket: 14
m31000| Fri Feb 22 11:49:18.486 [interruptThread] removing socket file: /tmp/mongodb-31000.sock
m31000| Fri Feb 22 11:49:18.486 [interruptThread] shutdown: going to flush diaglog...
m31000| Fri Feb 22 11:49:18.486 [interruptThread] shutdown: going to close sockets...
m31000| Fri Feb 22 11:49:18.486 [interruptThread] shutdown: waiting for fs preallocator...
m31000| Fri Feb 22 11:49:18.486 [interruptThread] shutdown: lock for final commit...
m31000| Fri Feb 22 11:49:18.486 [interruptThread] shutdown: final commit...
m31000| Fri Feb 22 11:49:18.486 [conn1] end connection 127.0.0.1:40940 (4 connections now open)
m31001| Fri Feb 22 11:49:18.486 [conn8] end connection 127.0.0.1:62619 (2 connections now open)
m31000| Fri Feb 22 11:49:18.486 [conn14] end connection 127.0.0.1:45004 (4 connections now open)
m31000| Fri Feb 22 11:49:18.486 [conn7] end connection 127.0.0.1:63573 (3 connections now open)
m31001| Fri Feb 22 11:49:18.486 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: 127.0.0.1:31000
m31000| Fri Feb 22 11:49:18.492 [interruptThread] shutdown: closing all files...
m31000| Fri Feb 22 11:49:18.492 [interruptThread] closeAllFiles() finished
m31000| Fri Feb 22 11:49:18.492 [interruptThread] journalCleanup...
m31000| Fri Feb 22 11:49:18.492 [interruptThread] removeJournalFiles
m31000| Fri Feb 22 11:49:18.493 dbexit: really exiting now
m31001| Fri Feb 22 11:49:18.873 [rsHealthPoll] DBClientCursor::init call() failed
m31001| Fri Feb 22 11:49:18.873 [rsHealthPoll] replset info 127.0.0.1:31000 heartbeat failed, retrying
m31001| Fri Feb 22 11:49:18.873 [rsHealthPoll] replSet info 127.0.0.1:31000 is down (or slow to respond):
m31001| Fri Feb 22 11:49:18.873 [rsHealthPoll] replSet member 127.0.0.1:31000 is now in state DOWN
m31001| Fri Feb 22 11:49:18.873 [rsMgr] replSet can't see a majority, will not try to elect self
Fri Feb 22 11:49:19.486 shell: stopped mongo program on port 31000
ReplSetTest n: 1 ports: [ 31000, 31001 ] 31001 number
ReplSetTest stop *** Shutting down mongod in port 31001 ***
m31001| Fri Feb 22 11:49:19.486 got signal 15 (Terminated), will terminate after current cmd ends
m31001| Fri Feb 22 11:49:19.486 [interruptThread] now exiting
m31001| Fri Feb 22 11:49:19.486 dbexit:
m31001| Fri Feb 22 11:49:19.486 [interruptThread] shutdown: going to close listening sockets...
m31001| Fri Feb 22 11:49:19.486 [interruptThread] closing listening socket: 15
m31001| Fri Feb 22 11:49:19.486 [interruptThread] closing listening socket: 16
m31001| Fri Feb 22 11:49:19.486 [interruptThread] closing listening socket: 17
m31001| Fri Feb 22 11:49:19.486 [interruptThread] removing socket file: /tmp/mongodb-31001.sock
m31001| Fri Feb 22 11:49:19.486 [interruptThread] shutdown: going to flush diaglog...
m31001| Fri Feb 22 11:49:19.486 [interruptThread] shutdown: going to close sockets...
m31001| Fri Feb 22 11:49:19.486 [interruptThread] shutdown: waiting for fs preallocator...
m31001| Fri Feb 22 11:49:19.486 [interruptThread] shutdown: lock for final commit...
m31001| Fri Feb 22 11:49:19.486 [interruptThread] shutdown: final commit...
m31001| Fri Feb 22 11:49:19.486 [conn1] end connection 127.0.0.1:46730 (1 connection now open)
m31001| Fri Feb 22 11:49:19.501 [interruptThread] shutdown: closing all files...
m31001| Fri Feb 22 11:49:19.501 [interruptThread] closeAllFiles() finished
m31001| Fri Feb 22 11:49:19.501 [interruptThread] journalCleanup...
m31001| Fri Feb 22 11:49:19.501 [interruptThread] removeJournalFiles
m31001| Fri Feb 22 11:49:19.501 dbexit: really exiting now
Fri Feb 22 11:49:20.486 shell: stopped mongo program on port 31001
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
moveprimary-replset.js SUCCESS
Fri Feb 22 11:49:20.503 [conn94] end connection 127.0.0.1:49550 (0 connections now open)
1.3800 minutes
Fri Feb 22 11:49:20.525 [initandlisten] connection accepted from 127.0.0.1:48907 #95 (1 connection now open)
Fri Feb 22 11:49:20.526 [conn95] end connection 127.0.0.1:48907 (0 connections now open)
*******************************************
Test : mr_noscripting.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/mr_noscripting.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/mr_noscripting.js";TestData.testFile = "mr_noscripting.js";TestData.testName = "mr_noscripting";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:49:20 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:49:20.696 [initandlisten] connection accepted from 127.0.0.1:39218 #96 (1 connection now open)
null
Resetting db path '/data/db/mongod-27000'
Fri Feb 22 11:49:20.713 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --noscripting --port 27000 --dbpath /data/db/mongod-27000 --setParameter enableTestCommands=1
Fri Feb 22 11:49:21.962 [conn96] end connection 127.0.0.1:39218 (0 connections now open)
1454.1750 ms
Fri Feb 22 11:49:21.981 [initandlisten] connection accepted from 127.0.0.1:53844 #97 (1 connection now open)
Fri Feb 22 11:49:21.982 [conn97] end connection 127.0.0.1:53844 (0 connections now open)
*******************************************
Test : mr_shard_version.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/mr_shard_version.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/mr_shard_version.js";TestData.testFile = "mr_shard_version.js";TestData.testName = "mr_shard_version";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:49:21 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:49:22.135 [initandlisten] connection accepted from 127.0.0.1:64512 #98 (1 connection now open)
null
Resetting db path '/data/db/test0'
Fri Feb 22 11:49:22.151 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/test0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 11:49:22.239 [initandlisten] MongoDB starting : pid=346 port=30000 dbpath=/data/db/test0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 11:49:22.239 [initandlisten]
m30000| Fri Feb 22 11:49:22.239 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 11:49:22.239 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 11:49:22.239 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 11:49:22.239 [initandlisten]
m30000| Fri Feb 22 11:49:22.239 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 11:49:22.239 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 11:49:22.239 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 11:49:22.239 [initandlisten] allocator: system
m30000| Fri Feb 22 11:49:22.239 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 11:49:22.240 [initandlisten] journal dir=/data/db/test0/journal
m30000| Fri Feb 22 11:49:22.240 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 11:49:22.254 [FileAllocator] allocating new datafile /data/db/test0/local.ns, filling with zeroes...
m30000| Fri Feb 22 11:49:22.254 [FileAllocator] creating directory /data/db/test0/_tmp
m30000| Fri Feb 22 11:49:22.254 [FileAllocator] done allocating datafile /data/db/test0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:49:22.254 [FileAllocator] allocating new datafile /data/db/test0/local.0, filling with zeroes...
m30000| Fri Feb 22 11:49:22.255 [FileAllocator] done allocating datafile /data/db/test0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:49:22.258 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 11:49:22.258 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 11:49:22.353 [initandlisten] connection accepted from 127.0.0.1:37036 #1 (1 connection now open)
Resetting db path '/data/db/test1'
Fri Feb 22 11:49:22.357 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/test1 --setParameter enableTestCommands=1
m30001| Fri Feb 22 11:49:22.443 [initandlisten] MongoDB starting : pid=347 port=30001 dbpath=/data/db/test1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 11:49:22.444 [initandlisten]
m30001| Fri Feb 22 11:49:22.444 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 11:49:22.444 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 11:49:22.444 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 11:49:22.444 [initandlisten]
m30001| Fri Feb 22 11:49:22.444 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 11:49:22.444 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 11:49:22.444 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 11:49:22.444 [initandlisten] allocator: system
m30001| Fri Feb 22 11:49:22.444 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 11:49:22.444 [initandlisten] journal dir=/data/db/test1/journal
m30001| Fri Feb 22 11:49:22.444 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 11:49:22.460 [FileAllocator] allocating new datafile /data/db/test1/local.ns, filling with zeroes...
m30001| Fri Feb 22 11:49:22.460 [FileAllocator] creating directory /data/db/test1/_tmp
m30001| Fri Feb 22 11:49:22.460 [FileAllocator] done allocating datafile /data/db/test1/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:49:22.460 [FileAllocator] allocating new datafile /data/db/test1/local.0, filling with zeroes...
m30001| Fri Feb 22 11:49:22.460 [FileAllocator] done allocating datafile /data/db/test1/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:49:22.463 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 11:49:22.463 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 11:49:22.558 [initandlisten] connection accepted from 127.0.0.1:55334 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 11:49:22.559 [initandlisten] connection accepted from 127.0.0.1:41515 #2 (2 connections now open)
ShardingTest test : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 11:49:22.567 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -vv --chunkSize 50 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:49:22.584 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:49:22.585 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=348 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:49:22.585 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:49:22.585 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:49:22.585 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], vv: true }
m30999| Fri Feb 22 11:49:22.585 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 11:49:22.585 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:49:22.586 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:49:22.586 [mongosMain] connected connection!
m30000| Fri Feb 22 11:49:22.586 [initandlisten] connection accepted from 127.0.0.1:64529 #3 (3 connections now open)
m30999| Fri Feb 22 11:49:22.587 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 11:49:22.587 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:49:22.587 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:49:22.587 [initandlisten] connection accepted from 127.0.0.1:46498 #4 (4 connections now open)
m30999| Fri Feb 22 11:49:22.587 [mongosMain] connected connection!
m30000| Fri Feb 22 11:49:22.588 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:49:22.595 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:49:22.596 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 11:49:22.596 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 11:49:22.596 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 11:49:22.596 [mongosMain] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
m30999| Fri Feb 22 11:49:22.596 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838 )
m30999| Fri Feb 22 11:49:22.596 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 11:49:22.596 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30000| Fri Feb 22 11:49:22.597 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes...
m30999| Fri Feb 22 11:49:22.597 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:49:22 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "51275b42d33e7c60dead152e" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m30000| Fri Feb 22 11:49:22.597 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:49:22.597 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes...
m30000| Fri Feb 22 11:49:22.597 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:49:22.598 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes...
m30000| Fri Feb 22 11:49:22.598 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:49:22.601 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 11:49:22.602 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:49:22.603 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 11:49:22.603 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:49:22.604 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:49:22 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838', sleeping for 30000ms
m30000| Fri Feb 22 11:49:22.604 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 11:49:22.605 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:49:22.605 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838' acquired, ts : 51275b42d33e7c60dead152e
m30999| Fri Feb 22 11:49:22.608 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:49:22.608 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:49:22.608 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:49:22-51275b42d33e7c60dead152f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533762608), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 11:49:22.608 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 11:49:22.609 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:49:22.609 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 11:49:22.609 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 11:49:22.610 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:49:22.610 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:49:22-51275b42d33e7c60dead1531", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533762610), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:49:22.610 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:49:22.611 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838' unlocked.
m30000| Fri Feb 22 11:49:22.612 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 11:49:22.613 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:49:22.613 BackgroundJob starting: Balancer
m30999| Fri Feb 22 11:49:22.613 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 11:49:22.613 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:49:22.613 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 11:49:22.613 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 11:49:22.613 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 11:49:22.613 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:49:22.613 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 11:49:22.614 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 11:49:22.615 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:49:22.615 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 11:49:22.615 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 11:49:22.616 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:49:22.616 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 11:49:22.617 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:49:22.617 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 11:49:22.617 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:49:22.618 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 11:49:22.619 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 11:49:22.619 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 11:49:22.619 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 11:49:22.620 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:49:22.620 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:49:22.620 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:49:22
m30999| Fri Feb 22 11:49:22.620 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 11:49:22.620 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:49:22.620 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 11:49:22.621 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 11:49:22.621 [Balancer] connected connection!
m30000| Fri Feb 22 11:49:22.621 [initandlisten] connection accepted from 127.0.0.1:41399 #5 (5 connections now open)
m30000| Fri Feb 22 11:49:22.622 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:49:22.622 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:22.622 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838 )
m30999| Fri Feb 22 11:49:22.622 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 11:49:22.622 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838",
m30999| "when" : { "$date" : "Fri Feb 22 11:49:22 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51275b42d33e7c60dead1533" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 11:49:22.623 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838' acquired, ts : 51275b42d33e7c60dead1533
m30999| Fri Feb 22 11:49:22.623 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:49:22.623 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:49:22.623 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:49:22.623 [Balancer] no collections to balance
m30999| Fri Feb 22 11:49:22.623 [Balancer] no need to move any chunk
m30999| Fri Feb 22 11:49:22.623 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:49:22.623 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838' unlocked.
m30999| Fri Feb 22 11:49:22.768 [mongosMain] connection accepted from 127.0.0.1:61159 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 11:49:22.771 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 11:49:22.772 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 11:49:22.773 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:49:22.773 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 11:49:22.774 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 11:49:22.776 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:49:22.776 BackgroundJob starting: ConnectBG
m30001| Fri Feb 22 11:49:22.776 [initandlisten] connection accepted from 127.0.0.1:47673 #2 (2 connections now open)
m30999| Fri Feb 22 11:49:22.777 [conn1] connected connection!
m30999| Fri Feb 22 11:49:22.778 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 11:49:22.779 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:49:22.779 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:49:22.779 [conn1] connected connection!
m30000| Fri Feb 22 11:49:22.779 [initandlisten] connection accepted from 127.0.0.1:38027 #6 (6 connections now open)
m30999| Fri Feb 22 11:49:22.779 [conn1] creating WriteBackListener for: localhost:30000 serverID: 51275b42d33e7c60dead1532
m30999| Fri Feb 22 11:49:22.779 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 11:49:22.779 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 11:49:22.779 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('51275b42d33e7c60dead1532'), authoritative: true }
m30999| Fri Feb 22 11:49:22.780 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:49:22.780 BackgroundJob starting: ConnectBG
m30001| Fri Feb 22 11:49:22.780 [initandlisten] connection accepted from 127.0.0.1:46470 #3 (3 connections now open)
m30999| Fri Feb 22 11:49:22.780 [conn1] connected connection!
m30999| Fri Feb 22 11:49:22.780 [conn1] creating WriteBackListener for: localhost:30001 serverID: 51275b42d33e7c60dead1532
m30999| Fri Feb 22 11:49:22.780 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 11:49:22.780 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Fri Feb 22 11:49:22.780 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('51275b42d33e7c60dead1532'), authoritative: true }
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.mongos", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:22.781 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "bs-smartos-x86-64-1.10gen.cc:30999", mongoVersion: "2.4.0-rc1-pre-", ping: new Date(1361533762623), up: 0, waiting: true }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Waiting for active hosts...
Waiting for the balancer lock...
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.locks", n2skip: 0, n2return: -1, options: 0, query: { _id: "balancer" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:22.782 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "balancer", process: "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838", state: 0, ts: ObjectId('51275b42d33e7c60dead1533'), when: new Date(1361533762622), who: "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838:Balancer:10113", why: "doing balance round" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
Waiting again for active hosts after balancer is off...
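The `config.locks` document the test waits on above carries a `state` field alongside `process`, `who`, and `why`. A small sketch of how a client might interpret it, assuming (from this log, where the test proceeds once it sees `state: 0`) that 0 means the lock is currently free:

```python
# Interpret a config.locks document like the balancer lock in the log above.
# Assumption (inferred from the log, not an authoritative field reference):
# state 0 means the distributed lock is free, non-zero means held/pending.
balancer_lock = {
    "_id": "balancer",
    "process": "bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838",
    "state": 0,
    "why": "doing balance round",
}

def lock_is_free(doc):
    """Return True when the lock document reports state 0 (free)."""
    return doc.get("state", 0) == 0

free = lock_is_free(balancer_lock)
```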
m30999| Fri Feb 22 11:49:22.783 [conn1] couldn't find database [mr_shard_version] in config db
m30999| Fri Feb 22 11:49:22.783 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 11:49:22.783 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:49:22.784 [conn1] connected connection!
m30000| Fri Feb 22 11:49:22.784 [initandlisten] connection accepted from 127.0.0.1:57355 #7 (7 connections now open)
m30999| Fri Feb 22 11:49:22.784 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 11:49:22.784 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:49:22.784 [conn1] connected connection!
m30001| Fri Feb 22 11:49:22.784 [initandlisten] connection accepted from 127.0.0.1:59854 #4 (4 connections now open)
m30999| Fri Feb 22 11:49:22.785 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 11:49:22.785 [conn1] put [mr_shard_version] on: shard0001:localhost:30001
m30001| Fri Feb 22 11:49:22.785 [FileAllocator] allocating new datafile /data/db/test1/mr_shard_version.ns, filling with zeroes...
m30001| Fri Feb 22 11:49:22.786 [FileAllocator] done allocating datafile /data/db/test1/mr_shard_version.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 11:49:22.786 [FileAllocator] allocating new datafile /data/db/test1/mr_shard_version.0, filling with zeroes...
m30001| Fri Feb 22 11:49:22.786 [FileAllocator] done allocating datafile /data/db/test1/mr_shard_version.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 11:49:22.786 [FileAllocator] allocating new datafile /data/db/test1/mr_shard_version.1, filling with zeroes...
m30001| Fri Feb 22 11:49:22.786 [FileAllocator] done allocating datafile /data/db/test1/mr_shard_version.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 11:49:22.790 [conn3] build index mr_shard_version.coll { _id: 1 }
m30001| Fri Feb 22 11:49:22.791 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:49:28.624 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:28.624 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:49:34.625 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:34.625 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:49:40.626 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:40.626 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:49:46.627 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:46.627 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 11:49:47.103 [FileAllocator] allocating new datafile /data/db/test1/mr_shard_version.2, filling with zeroes...
m30001| Fri Feb 22 11:49:47.103 [FileAllocator] done allocating datafile /data/db/test1/mr_shard_version.2, size: 256MB, took 0 secs
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.coll", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ shard0001:localhost:30001]
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:49.447 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "shard0001:localhost:30001", cursor: { _id: 0.0, key: "0", value: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:49:52.605 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:49:52 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838', sleeping for 30000ms
m30999| Fri Feb 22 11:49:52.628 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:52.628 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:49:54.648 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.coll", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:54.648 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ shard0001:localhost:30001]
m30999| Fri Feb 22 11:49:54.648 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:54.648 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:54.648 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:54.648 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:54.649 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "shard0001:localhost:30001", cursor: { _id: 0.0, key: "0", value: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:49:58.628 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:49:58.628 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 11:49:59.280 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "coll", query: {} }, fields: {} } and CInfo { v_ns: "mr_shard_version.coll", filter: {} }
m30999| Fri Feb 22 11:49:59.280 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ shard0001:localhost:30001]
m30999| Fri Feb 22 11:49:59.280 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.280 [conn1] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.280 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:59.280 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "shard0001:localhost:30001", cursor: { n: 500000.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: -1, options: 0, query: { _id: "mr_shard_version" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:59.281 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.282 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:49:59.283 [conn1] enabling sharding on: mr_shard_version
m30999| Fri Feb 22 11:49:59.284 [conn1] CMD: shardcollection: { shardcollection: "mr_shard_version.coll", key: { _id: 1.0 } }
m30999| Fri Feb 22 11:49:59.284 [conn1] enable sharding on: mr_shard_version.coll with shard key: { _id: 1.0 }
m30999| Fri Feb 22 11:49:59.285 [conn1] going to create 1 chunk(s) for: mr_shard_version.coll using new epoch 51275b67d33e7c60dead1534
m30999| Fri Feb 22 11:49:59.286 [conn1] major version query from 0|0||51275b67d33e7c60dead1534 and over 0 shards is { ns: "mr_shard_version.coll", $or: [ { lastmod: { $gte: Timestamp 0|0 } } ] }
m30999| Fri Feb 22 11:49:59.286 [conn1] loaded 1 chunks into new chunk manager for mr_shard_version.coll with version 1|0||51275b67d33e7c60dead1534
m30999| Fri Feb 22 11:49:59.286 [conn1] ChunkManager: time to load chunks for mr_shard_version.coll: 0ms sequenceNumber: 2 version: 1|0||51275b67d33e7c60dead1534 based on: (empty)
m30000| Fri Feb 22 11:49:59.287 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 11:49:59.291 [conn3] build index done. scanned 0 total records. 0.003 secs
m30999| Fri Feb 22 11:49:59.291 [conn1] have to set shard version for conn: localhost:30001 ns:mr_shard_version.coll my last seq: 0 current: 2 version: 1|0||51275b67d33e7c60dead1534 manager: 0x1183e40
m30999| Fri Feb 22 11:49:59.291 [conn1] setShardVersion shard0001 localhost:30001 mr_shard_version.coll { setShardVersion: "mr_shard_version.coll", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275b67d33e7c60dead1534'), serverID: ObjectId('51275b42d33e7c60dead1532'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f560 2
m30999| Fri Feb 22 11:49:59.292 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mr_shard_version.coll", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'mr_shard_version.coll'" }
m30999| Fri Feb 22 11:49:59.292 [conn1] have to set shard version for conn: localhost:30001 ns:mr_shard_version.coll my last seq: 0 current: 2 version: 1|0||51275b67d33e7c60dead1534 manager: 0x1183e40
m30999| Fri Feb 22 11:49:59.292 [conn1] setShardVersion shard0001 localhost:30001 mr_shard_version.coll { setShardVersion: "mr_shard_version.coll", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275b67d33e7c60dead1534'), serverID: ObjectId('51275b42d33e7c60dead1532'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117f560 2
m30001| Fri Feb 22 11:49:59.292 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 11:49:59.293 [initandlisten] connection accepted from 127.0.0.1:52142 #8 (8 connections now open)
m30999| Fri Feb 22 11:49:59.293 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:49:59.294 [conn1] splitting: mr_shard_version.coll shard: ns:mr_shard_version.collshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
m30001| Fri Feb 22 11:49:59.295 [conn4] received splitChunk request: { splitChunk: "mr_shard_version.coll", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 250000.0 } ], shardId: "mr_shard_version.coll-_id_MinKey", configdb: "localhost:30000" }
m30000| Fri Feb 22 11:49:59.295 [initandlisten] connection accepted from 127.0.0.1:34668 #9 (9 connections now open)
m30001| Fri Feb 22 11:49:59.296 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667 (sleeping for 30000ms)
m30001| Fri Feb 22 11:49:59.297 [conn4] distributed lock 'mr_shard_version.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667' acquired, ts : 51275b6768faedda20dee02c
m30001| Fri Feb 22 11:49:59.298 [conn4] splitChunk accepted at version 1|0||51275b67d33e7c60dead1534
m30001| Fri Feb 22 11:49:59.298 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:49:59-51275b6768faedda20dee02d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533799298), what: "split", ns: "mr_shard_version.coll", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 250000.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275b67d33e7c60dead1534') }, right: { min: { _id: 250000.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('51275b67d33e7c60dead1534') } } }
m30001| Fri Feb 22 11:49:59.299 [conn4] distributed lock 'mr_shard_version.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667' unlocked.
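The failed-then-retried setShardVersion handshake in the entries above follows a visible pattern: the first call, sent without `authoritative`, is rejected with `need_authoritative: true` and `errmsg: "first time for collection ..."` because the shard has no chunk manager yet, and mongos immediately retries with `authoritative: true`, which succeeds. A toy simulation of that retry loop; the `FakeShard` class and method names are invented for illustration and are not the real MongoDB client or server API:

```python
# Toy model of the setShardVersion handshake seen in the log: first attempt
# fails with need_authoritative, the retry with authoritative=True succeeds.
# Shard-side state is simulated; nothing here calls real MongoDB code.
class FakeShard:
    def __init__(self):
        self.known_collections = set()

    def set_shard_version(self, ns, authoritative=False):
        if ns not in self.known_collections:
            if not authoritative:
                return {"ok": 0.0, "need_authoritative": True,
                        "errmsg": "first time for collection '%s'" % ns}
            # Mirrors "no current chunk manager found ... will initialize".
            self.known_collections.add(ns)
        return {"ok": 1.0}

def set_version_with_retry(shard, ns):
    """Retry once with authoritative=True when the shard asks for it."""
    res = shard.set_shard_version(ns)
    if not res["ok"] and res.get("need_authoritative"):
        res = shard.set_shard_version(ns, authoritative=True)
    return res

result = set_version_with_retry(FakeShard(), "mr_shard_version.coll")
```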
m30999| Fri Feb 22 11:49:59.299 [conn1] loading chunk manager for collection mr_shard_version.coll using old chunk manager w/ version 1|0||51275b67d33e7c60dead1534 and 1 chunks
m30999| Fri Feb 22 11:49:59.299 [conn1] major version query from 1|0||51275b67d33e7c60dead1534 and over 1 shards is { ns: "mr_shard_version.coll", $or: [ { lastmod: { $gte: Timestamp 1000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 1000|0 } } ] }
m30999| Fri Feb 22 11:49:59.300 [conn1] loaded 2 chunks into new chunk manager for mr_shard_version.coll with version 1|2||51275b67d33e7c60dead1534
m30999| Fri Feb 22 11:49:59.300 [conn1] ChunkManager: time to load chunks for mr_shard_version.coll: 0ms sequenceNumber: 3 version: 1|2||51275b67d33e7c60dead1534 based on: 1|0||51275b67d33e7c60dead1534
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: -1, options: 0, query: { _id: "mr_shard_version" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.301 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "mr_shard_version", partitioned: true, primary: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: -1, options: 0, query: { _id: "shard0001" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:49:59.302 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:49:59.303 [conn1] CMD: movechunk: { movechunk: "mr_shard_version.coll", find: { _id: 250000.0 }, to: "localhost:30000" }
m30999| Fri Feb 22 11:49:59.303 [conn1] moving chunk ns: mr_shard_version.coll moving ( ns:mr_shard_version.collshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 250000.0 }max: { _id: MaxKey }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:49:59.303 [conn4] received moveChunk request: { moveChunk: "mr_shard_version.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 250000.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "mr_shard_version.coll-_id_250000.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Fri Feb 22 11:49:59.304 [conn4] distributed lock 'mr_shard_version.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667' acquired, ts : 51275b6768faedda20dee02e
m30001| Fri Feb 22 11:49:59.304 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:49:59-51275b6768faedda20dee02f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533799304), what: "moveChunk.start", ns: "mr_shard_version.coll", details: { min: { _id: 250000.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:49:59.304 [conn4] moveChunk request accepted at version 1|2||51275b67d33e7c60dead1534
m30001| Fri Feb 22 11:49:59.747 [conn4] moveChunk number of documents: 250000
m30000| Fri Feb 22 11:49:59.748 [migrateThread] starting receiving-end of migration of chunk { _id: 250000.0 } -> { _id: MaxKey } for collection mr_shard_version.coll from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:49:59.748 [initandlisten] connection accepted from 127.0.0.1:44939 #5 (5 connections now open)
m30000| Fri Feb 22 11:49:59.749 [FileAllocator] allocating new datafile /data/db/test0/mr_shard_version.ns, filling with zeroes...
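The chunk versions printed throughout these entries read `major|minor||epoch`. The split recorded earlier turns version `1|0` into siblings `1|1` (left) and `1|2` (right) under the same epoch, and the moveChunk now starting will, on commit, bump the major version: `2|0` for the migrated chunk and `2|1` for the donor's remaining chunk. A small sketch of that bookkeeping, inferred from this log rather than taken from MongoDB source:

```python
# Chunk-version bookkeeping as observed in the log (assumption: splits bump
# the minor component per new chunk; a committed migration bumps the major
# component). Versions are (major, minor, epoch) tuples.
EPOCH = "51275b67d33e7c60dead1534"

def split(version):
    """Split one chunk into two; each child gets a fresh minor version."""
    major, minor, epoch = version
    return [(major, minor + 1, epoch), (major, minor + 2, epoch)]

def migrate(collection_version):
    """Commit a migration: bump major; donor's remaining chunk gets minor 1."""
    major, _minor, epoch = collection_version
    moved = (major + 1, 0, epoch)       # chunk now on recipient shard
    donor_self = (major + 1, 1, epoch)  # donor's remaining chunk
    return moved, donor_self

left, right = split((1, 0, EPOCH))   # 1|0 -> 1|1 and 1|2
moved, donor = migrate(right)        # -> 2|0 and 2|1
```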
m30000| Fri Feb 22 11:49:59.749 [FileAllocator] done allocating datafile /data/db/test0/mr_shard_version.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:49:59.749 [FileAllocator] allocating new datafile /data/db/test0/mr_shard_version.0, filling with zeroes...
m30000| Fri Feb 22 11:49:59.750 [FileAllocator] done allocating datafile /data/db/test0/mr_shard_version.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:49:59.750 [FileAllocator] allocating new datafile /data/db/test0/mr_shard_version.1, filling with zeroes...
m30000| Fri Feb 22 11:49:59.750 [FileAllocator] done allocating datafile /data/db/test0/mr_shard_version.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:49:59.754 [migrateThread] build index mr_shard_version.coll { _id: 1 }
m30000| Fri Feb 22 11:49:59.755 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:49:59.755 [migrateThread] info: creating collection mr_shard_version.coll on add index
m30001| Fri Feb 22 11:49:59.758 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:49:59.768 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:49:59.778 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:49:59.789 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:49:59.805 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:49:59.837 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:49:59.902 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:00.030 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:00.286 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:00.294 [conn5] command admin.$cmd command: { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:463415 reslen:13361441 538ms
m30001| Fri Feb 22 11:50:00.798 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 10012, clonedBytes: 459430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:01.823 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 31568, clonedBytes: 1448608, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:02.847 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 54362, clonedBytes: 2494602, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:03.871 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 80328, clonedBytes: 3686178, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:50:04.629 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:50:04.629 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 11:50:04.896 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 107015, clonedBytes: 4910895, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:05.920 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 131369, clonedBytes: 6028454, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:06.944 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 153551, clonedBytes: 7046406, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:07.968 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 175753, clonedBytes: 8065278, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:08.993 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 197001, clonedBytes: 9040374, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:10.017 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 216241, clonedBytes: 9923216, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 11:50:10.630 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:50:10.630 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 11:50:11.041 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 237466, clonedBytes: 10897256, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 11:50:11.627 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:50:11.627 [migrateThread] migrate commit succeeded flushing to secondaries for 'mr_shard_version.coll' { _id: 250000.0 } -> { _id: MaxKey }
m30000| Fri Feb 22 11:50:11.631 [migrateThread] migrate commit flushed to journal for 'mr_shard_version.coll' { _id: 250000.0 } -> { _id: MaxKey }
m30001| Fri Feb 22 11:50:12.065 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 250000, clonedBytes: 11472500, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:50:12.065 [conn4] moveChunk setting version to: 2|0||51275b67d33e7c60dead1534
m30000| Fri Feb 22 11:50:12.066 [initandlisten] connection accepted from 127.0.0.1:53352 #10 (10 connections now open)
m30000| Fri Feb 22 11:50:12.066 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 11:50:12.067 [migrateThread] migrate commit succeeded flushing to secondaries for 'mr_shard_version.coll' { _id: 250000.0 } -> { _id: MaxKey }
m30000| Fri Feb 22 11:50:12.067 [migrateThread] migrate commit flushed to journal for 'mr_shard_version.coll' { _id: 250000.0 } -> { _id: MaxKey }
m30000| Fri Feb 22 11:50:12.067 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:12-51275b74dc5301d37b2aa1dc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533812067), what: "moveChunk.to", ns: "mr_shard_version.coll", details: { min: { _id: 250000.0 }, max: { _id: MaxKey }, step1 of 5: 7, step2 of 5: 0, step3 of 5: 11871, step4 of 5: 0, step5 of 5: 439 } }
m30000| Fri Feb 22 11:50:12.067 [initandlisten] connection accepted from 127.0.0.1:39008 #11 (11 connections now open)
m30001| Fri Feb 22 11:50:12.076 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: 250000.0 }, max: { _id: MaxKey }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 250000, clonedBytes: 11472500, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:50:12.076 [conn4] moveChunk updating self version to: 2|1||51275b67d33e7c60dead1534 through { _id: MinKey } -> { _id: 250000.0 } for collection 'mr_shard_version.coll'
m30000| Fri Feb 22 11:50:12.076 [initandlisten] connection accepted from 127.0.0.1:45374 #12 (12 connections now open)
m30001| Fri Feb 22 11:50:12.077 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:12-51275b7468faedda20dee030", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533812077), what: "moveChunk.commit", ns: "mr_shard_version.coll", details: { min: { _id: 250000.0 }, max: { _id: MaxKey }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:50:12.077 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:50:12.077 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:50:12.077 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:50:12.077 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:50:12.077 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:50:12.077 [cleanupOldData-51275b7468faedda20dee031] (start) waiting to cleanup mr_shard_version.coll from { _id: 250000.0 } -> { _id: MaxKey }, # cursors remaining: 0
m30001| Fri Feb 22 11:50:12.078 [conn4] distributed lock 'mr_shard_version.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667' unlocked.
m30001| Fri Feb 22 11:50:12.078 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:12-51275b7468faedda20dee032", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533812078), what: "moveChunk.from", ns: "mr_shard_version.coll", details: { min: { _id: 250000.0 }, max: { _id: MaxKey }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 442, step4 of 6: 12317, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 11:50:12.078 [conn4] command admin.$cmd command: { moveChunk: "mr_shard_version.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 250000.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 52428800, shardId: "mr_shard_version.coll-_id_250000.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:32 r:592327 w:11 reslen:37 12774ms m30999| Fri Feb 22 11:50:12.078 [conn1] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:50:12.078 [conn1] loading chunk manager for collection mr_shard_version.coll using old chunk manager w/ version 1|2||51275b67d33e7c60dead1534 and 2 chunks m30999| Fri Feb 22 11:50:12.078 [conn1] major version query from 1|2||51275b67d33e7c60dead1534 and over 1 shards is { ns: "mr_shard_version.coll", $or: [ { lastmod: { $gte: Timestamp 1000|2 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 1000|2 } } ] } m30999| Fri Feb 22 11:50:12.078 [conn1] loaded 2 chunks into new chunk manager for mr_shard_version.coll with version 2|1||51275b67d33e7c60dead1534 m30999| Fri Feb 22 11:50:12.078 [conn1] ChunkManager: time to load chunks for mr_shard_version.coll: 0ms sequenceNumber: 4 version: 2|1||51275b67d33e7c60dead1534 based on: 1|2||51275b67d33e7c60dead1534 { "millis" : 12775, "ok" : 1 } m30999| Fri Feb 22 11:50:12.080 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: 
{} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:12.080 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:50:12.080 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3, minCompatibleVersion: 3, currentVersion: 4, clusterId: ObjectId('51275b42d33e7c60dead1530') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.version", n2skip: 0, n2return: -1, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.081 
[conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.081 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: 1, version: 3, minCompatibleVersion: 3, currentVersion: 4, clusterId: ObjectId('51275b42d33e7c60dead1530') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:12.082 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:12.082 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:50:12.082 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.082 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.082 [conn1] [pcursor] finishing over 1 shards m30999| Fri 
Feb 22 11:50:12.082 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.082 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:12.083 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.databases", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { name: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:12.083 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:50:12.083 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.083 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.083 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:50:12.083 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.084 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", 
vinfo: "config:localhost:30000", cursor: { _id: "admin", partitioned: false, primary: "config" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.collections", n2skip: 0, n2return: 0, options: 0, query: { query: { _id: /^mr_shard_version\./ }, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.085 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "mr_shard_version.coll", lastmod: new Date(1361533799), dropped: false, key: { _id: 1.0 }, unique: false, lastmodEpoch: ObjectId('51275b67d33e7c60dead1534') }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30001| Fri Feb 22 11:50:12.097 [cleanupOldData-51275b7468faedda20dee031] waiting to remove documents 
for mr_shard_version.coll from { _id: 250000.0 } -> { _id: MaxKey } m30001| Fri Feb 22 11:50:12.097 [cleanupOldData-51275b7468faedda20dee031] moveChunk starting delete for: mr_shard_version.coll from { _id: 250000.0 } -> { _id: MaxKey } m30999| Fri Feb 22 11:50:12.130 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "mr_shard_version.coll" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:12.130 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 11:50:12.130 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.130 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.130 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:50:12.130 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.131 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "mr_shard_version.coll-_id_MinKey", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51275b67d33e7c60dead1534'), ns: "mr_shard_version.coll", min: { _id: MinKey }, max: { _id: 250000.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 
m30999| Fri Feb 22 11:50:12.132 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.tags", n2skip: 0, n2return: 0, options: 0, query: { query: { ns: "mr_shard_version.coll" }, orderby: { min: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:50:12.132 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 11:50:12.132 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:50:12.132 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:12.132 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:50:12.132 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:12.133 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("51275b42d33e7c60dead1530") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "mr_shard_version", "partitioned" : true, "primary" : "shard0001" }
		mr_shard_version.coll
			shard key: { "_id" : 1 }
			chunks:
				shard0001	1
				shard0000	1
			{ "_id" : { "$minKey" : 1 } } -->> { "_id" : 250000 } on : shard0001 { "t" : 2000, "i" : 1 }
			{ "_id" : 250000 } -->> { "_id" : { "$maxKey" : 1 } } on : shard0000 { "t" : 2000, "i" : 0 }
---- Collection now initialized with keys and values... ----
---- Starting migrations... ----
m30999| Fri Feb 22 11:50:12.134 [mongosMain] connection accepted from 127.0.0.1:36114 #2 (2 connections now open)
---- Starting m/r... ----
---- Output coll : mr_shard_version.mrOutput ----
m30999| Fri Feb 22 11:50:12.135 [mongosMain] connection accepted from 127.0.0.1:61377 #3 (3 connections now open)
m30999| Fri Feb 22 11:50:12.135 [conn3] CMD: movechunk: { moveChunk: "mr_shard_version.coll", find: { _id: 0.0 }, to: "shard0000" }
m30999| Fri Feb 22 11:50:12.135 [conn3] moving chunk ns: mr_shard_version.coll moving ( ns:mr_shard_version.collshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: MinKey }max: { _id: 250000.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30999| Fri Feb 22 11:50:12.135 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.$cmd", n2skip: 0, n2return: 1, options: 0, query: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){
m30999| var total = 0
m30999| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533812_0", shardedFirstPass: true }, fields: {} } and CInfo { v_ns: "mr_shard_version.coll", filter: {} }
m30001| Fri Feb 22 11:50:12.136 [conn4] received moveChunk request: { moveChunk: "mr_shard_version.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 250000.0 }, maxChunkSizeBytes: 52428800, shardId: "mr_shard_version.coll-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30999| Fri Feb 22 11:50:12.136 [conn1] [pcursor] initializing over 2 shards
required by [mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534] m30999| Fri Feb 22 11:50:12.136 [conn1] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.136 [conn1] have to set shard version for conn: localhost:30000 ns:mr_shard_version.coll my last seq: 0 current: 4 version: 2|0||51275b67d33e7c60dead1534 manager: 0x1183e40 m30999| Fri Feb 22 11:50:12.136 [conn1] setShardVersion shard0000 localhost:30000 mr_shard_version.coll { setShardVersion: "mr_shard_version.coll", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('51275b67d33e7c60dead1534'), serverID: ObjectId('51275b42d33e7c60dead1532'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f1e0 4 m30999| Fri Feb 22 11:50:12.136 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "mr_shard_version.coll", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'mr_shard_version.coll'" } m30999| Fri Feb 22 11:50:12.136 [conn1] have to set shard version for conn: localhost:30000 ns:mr_shard_version.coll my last seq: 0 current: 4 version: 2|0||51275b67d33e7c60dead1534 manager: 0x1183e40 m30999| Fri Feb 22 11:50:12.137 [conn1] setShardVersion shard0000 localhost:30000 mr_shard_version.coll { setShardVersion: "mr_shard_version.coll", configdb: "localhost:30000", version: Timestamp 2000|0, versionEpoch: ObjectId('51275b67d33e7c60dead1534'), serverID: ObjectId('51275b42d33e7c60dead1532'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f1e0 4 m30001| Fri Feb 22 11:50:12.137 [conn4] distributed lock 'mr_shard_version.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667' acquired, ts : 51275b7468faedda20dee033 m30000| Fri Feb 22 11:50:12.137 [conn6] no current chunk manager found for this shard, will initialize m30001| Fri 
Feb 22 11:50:12.137 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:12-51275b7468faedda20dee034", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533812137), what: "moveChunk.start", ns: "mr_shard_version.coll", details: { min: { _id: MinKey }, max: { _id: 250000.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 11:50:12.138 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] needed to set remote version on connection to value compatible with [mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534] m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.138 [conn1] have to set shard version for conn: localhost:30001 ns:mr_shard_version.coll my last seq: 2 current: 4 version: 2|1||51275b67d33e7c60dead1534 manager: 0x1183e40 m30999| Fri Feb 22 11:50:12.138 [conn1] setShardVersion shard0001 localhost:30001 mr_shard_version.coll { setShardVersion: "mr_shard_version.coll", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('51275b67d33e7c60dead1534'), serverID: ObjectId('51275b42d33e7c60dead1532'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f560 4 m30001| Fri Feb 22 11:50:12.138 [conn4] moveChunk request accepted at version 2|1||51275b67d33e7c60dead1534 m30999| Fri Feb 22 11:50:12.138 [conn1] 
setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51275b67d33e7c60dead1534'), ok: 1.0 } m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] needed to set remote version on connection to value compatible with [mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534] m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] finishing over 2 shards m30999| Fri Feb 22 11:50:12.138 [conn1] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:12.148 [conn2] SocketException: remote: 127.0.0.1:36114 error: 9001 socket exception [0] server [127.0.0.1:36114] m30999| Fri Feb 22 11:50:12.148 [conn2] end connection 127.0.0.1:36114 (2 connections now open) m30000| Fri Feb 22 11:50:12.168 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0 m30000| Fri Feb 22 11:50:12.169 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0_inc m30000| Fri Feb 22 11:50:12.170 [conn6] build index mr_shard_version.tmp.mr.coll_0_inc { 0: 1 } m30001| Fri Feb 22 11:50:12.172 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0 m30000| Fri Feb 22 11:50:12.173 [conn6] build index done. scanned 0 total records. 
0.002 secs m30000| Fri Feb 22 11:50:12.173 [conn6] build index mr_shard_version.tmp.mr.coll_0 { _id: 1 } m30001| Fri Feb 22 11:50:12.174 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0_inc m30001| Fri Feb 22 11:50:12.175 [conn3] build index mr_shard_version.tmp.mr.coll_0_inc { 0: 1 } m30000| Fri Feb 22 11:50:12.177 [conn6] build index done. scanned 0 total records. 0.003 secs m30001| Fri Feb 22 11:50:12.178 [conn3] build index done. scanned 0 total records. 0.002 secs m30001| Fri Feb 22 11:50:12.178 [conn3] build index mr_shard_version.tmp.mr.coll_0 { _id: 1 } m30001| Fri Feb 22 11:50:12.182 [conn3] build index done. scanned 0 total records. 0.003 secs m30001| Fri Feb 22 11:50:13.074 [conn4] moveChunk number of documents: 250000 m30000| Fri Feb 22 11:50:13.077 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 250000.0 } for collection mr_shard_version.coll from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 11:50:13.087 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.097 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.107 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.118 [conn4] moveChunk data transfer progress: { 
active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.134 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.166 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.230 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.359 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.615 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:13.899 [conn5] command admin.$cmd command: { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 locks(micros) 
r:635929 reslen:13361441 813ms m30001| Fri Feb 22 11:50:14.127 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1074, clonedBytes: 49210, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:15.000 [conn6] M/R: (1/3) Emit Progress: 80700/250000 32% m30001| Fri Feb 22 11:50:15.001 [conn3] M/R: (1/3) Emit Progress: 68800/499451 13% m30001| Fri Feb 22 11:50:15.152 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4471, clonedBytes: 205116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:16.176 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8899, clonedBytes: 408364, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:50:16.631 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:16.631 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:50:17.200 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 13799, clonedBytes: 633214, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:18.001 [conn3] M/R: (1/3) Emit Progress: 167200/499451 33% m30000| Fri Feb 22 11:50:18.002 [conn6] M/R: (1/3) Emit Progress: 179500/250000 71% m30001| Fri Feb 22 11:50:18.225 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: 
"localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 16779, clonedBytes: 769964, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:19.249 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 18922, clonedBytes: 868322, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:20.273 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 23166, clonedBytes: 1062996, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:20.471 [conn6] CMD: drop mr_shard_version.tmp.mrs.coll_1361533812_0 m30000| Fri Feb 22 11:50:20.477 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0 m30000| Fri Feb 22 11:50:20.477 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0 m30000| Fri Feb 22 11:50:20.478 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0_inc m30000| Fri Feb 22 11:50:20.482 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0 m30000| Fri Feb 22 11:50:20.482 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_0_inc m30000| Fri Feb 22 11:50:20.483 [conn6] command mr_shard_version.$cmd command: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30000| var total = 0 m30000| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533812_0", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 3501 locks(micros) W:9289 r:13392140 w:43254 reslen:149 8345ms m30999| Fri Feb 22 11:50:20.483 [conn1] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", 
cursor: { result: "tmp.mrs.coll_1361533812_0", timeMillis: 8339, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:20.483 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30001| Fri Feb 22 11:50:20.964 [conn3] CMD: drop mr_shard_version.tmp.mrs.coll_1361533812_0 m30001| Fri Feb 22 11:50:20.973 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0 m30001| Fri Feb 22 11:50:20.973 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0 m30001| Fri Feb 22 11:50:20.973 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0_inc m30001| Fri Feb 22 11:50:20.980 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0 m30001| Fri Feb 22 11:50:20.980 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_0_inc m30001| Fri Feb 22 11:50:20.980 [conn3] command mr_shard_version.$cmd command: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30001| var total = 0 m30001| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533812_0", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 3501 locks(micros) W:13597 r:14281889 w:37576 reslen:149 8842ms m30999| Fri Feb 22 11:50:20.980 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: { result: "tmp.mrs.coll_1361533812_0", timeMillis: 8835, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:20.980 [conn1] MR with single shard output, 
NS=mr_shard_version.mrOutput primary=shard0001:localhost:30001 m30001| Fri Feb 22 11:50:20.981 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_1 m30001| Fri Feb 22 11:50:20.981 [conn3] build index mr_shard_version.tmp.mr.coll_1 { _id: 1 } m30001| Fri Feb 22 11:50:20.981 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 11:50:20.984 [conn3] ChunkManager: time to load chunks for mr_shard_version.coll: 0ms sequenceNumber: 2 version: 2|1||51275b67d33e7c60dead1534 based on: (empty) m30000| Fri Feb 22 11:50:20.984 [initandlisten] connection accepted from 127.0.0.1:60501 #13 (13 connections now open) m30001| Fri Feb 22 11:50:20.984 [initandlisten] connection accepted from 127.0.0.1:48046 #6 (6 connections now open) m30001| Fri Feb 22 11:50:20.994 [initandlisten] connection accepted from 127.0.0.1:60951 #7 (7 connections now open) m30001| Fri Feb 22 11:50:21.063 [conn3] CMD: drop mr_shard_version.mrOutput m30001| Fri Feb 22 11:50:21.068 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_1 m30001| Fri Feb 22 11:50:21.069 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_1 m30001| Fri Feb 22 11:50:21.069 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_1 m30000| Fri Feb 22 11:50:21.069 [conn7] CMD: drop mr_shard_version.tmp.mrs.coll_1361533812_0 m30999| Fri Feb 22 11:50:21.073 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:50:21.073 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:50:21.073 [conn1] connected connection! 
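[editor's note] The mapreduce command logged above maps each document with emit( this.key, this.value ) and reduces by summing, and the per-shard counts (input: 250000, emit: 250000, output: 1000) together with the result rows printed below (e.g. { "_id" : "1", "value" : 500 }) are consistent with 500 documents per key, each carrying its key as its value. A minimal standalone sketch of that same sum-reduce, in plain JavaScript rather than the mongo shell (the mapReduce helper and the synthetic data shape are assumptions modeled on the visible output, not MongoDB's implementation):

```javascript
// Hypothetical in-memory mapReduce helper mirroring the shell's emit/reduce
// contract: map is invoked with `this` bound to each document and is handed
// an emit(key, value) callback; reduce folds each key's values to one value.
function mapReduce(docs, map, reduce) {
  const groups = new Map();
  // "map" phase: every emit(key, value) appends to that key's group
  for (const doc of docs) {
    map.call(doc, (key, value) => {
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key).push(value);
    });
  }
  // "reduce" phase: collapse each group into one output document
  const result = [];
  for (const [key, values] of groups) {
    result.push({ _id: String(key), value: reduce(key, values) });
  }
  return result;
}

// Assumed data shape: 500 docs per key, each doc's value equal to its key
// (inferred from output rows like { _id: "1", value: 500 }).
const docs = [];
for (let key = 0; key < 3; key++)
  for (let i = 0; i < 500; i++) docs.push({ key: key, value: key });

const result = mapReduce(
  docs,
  function (emit) { emit(this.key, this.value); },
  function (k, values) {
    let total = 0;
    for (let i = 0; i < values.length; i++) total += values[i];
    return total;
  }
);
// result: [ { _id: "0", value: 0 }, { _id: "1", value: 500 }, { _id: "2", value: 1000 } ]
```

In the sharded run logged here this happens twice: each shard runs the first pass into tmp.mrs.coll_1361533812_0, then mongos merges the per-shard results onto the output shard (shard0001).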
m30001| Fri Feb 22 11:50:21.073 [initandlisten] connection accepted from 127.0.0.1:58706 #8 (8 connections now open)
m30001| Fri Feb 22 11:50:21.073 [conn8] CMD: drop mr_shard_version.tmp.mrs.coll_1361533812_0
m30999| Fri Feb 22 11:50:21.076 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.mrOutput", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:50:21.076 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ shard0001:localhost:30001]
m30999| Fri Feb 22 11:50:21.076 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:50:21.076 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:21.076 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:50:21.076 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:21.077 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "shard0001:localhost:30001", cursor: { _id: "0", value: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[ { "_id" : "0", "value" : 0 }, { "_id" : "1", "value" : 500 }, { "_id" : "10", "value" : 5000 }, { "_id" : "100", "value" : 50000 }, { "_id" : "101", "value" : 50500 }, { "_id" : "102", "value" : 51000 }, { "_id"
: "103", "value" : 51500 }, { "_id" : "104", "value" : 52000 }, { "_id" : "105", "value" : 52500 }, { "_id" : "106", "value" : 53000 }, { "_id" : "107", "value" : 53500 }, { "_id" : "108", "value" : 54000 }, { "_id" : "109", "value" : 54500 }, { "_id" : "11", "value" : 5500 }, { "_id" : "110", "value" : 55000 }, { "_id" : "111", "value" : 55500 }, { "_id" : "112", "value" : 56000 }, { "_id" : "113", "value" : 56500 }, { "_id" : "114", "value" : 57000 }, { "_id" : "115", "value" : 57500 }, { "_id" : "116", "value" : 58000 }, { "_id" : "117", "value" : 58500 }, { "_id" : "118", "value" : 59000 }, { "_id" : "119", "value" : 59500 }, { "_id" : "12", "value" : 6000 }, { "_id" : "120", "value" : 60000 }, { "_id" : "121", "value" : 60500 }, { "_id" : "122", "value" : 61000 }, { "_id" : "123", "value" : 61500 }, { "_id" : "124", "value" : 62000 }, { "_id" : "125", "value" : 62500 }, { "_id" : "126", "value" : 63000 }, { "_id" : "127", "value" : 63500 }, { "_id" : "128", "value" : 64000 }, { "_id" : "129", "value" : 64500 }, { "_id" : "13", "value" : 6500 }, { "_id" : "130", "value" : 65000 }, { "_id" : "131", "value" : 65500 }, { "_id" : "132", "value" : 66000 }, { "_id" : "133", "value" : 66500 }, { "_id" : "134", "value" : 67000 }, { "_id" : "135", "value" : 67500 }, { "_id" : "136", "value" : 68000 }, { "_id" : "137", "value" : 68500 }, { "_id" : "138", "value" : 69000 }, { "_id" : "139", "value" : 69500 }, { "_id" : "14", "value" : 7000 }, { "_id" : "140", "value" : 70000 }, { "_id" : "141", "value" : 70500 }, { "_id" : "142", "value" : 71000 }, { "_id" : "143", "value" : 71500 }, { "_id" : "144", "value" : 72000 }, { "_id" : "145", "value" : 72500 }, { "_id" : "146", "value" : 73000 }, { "_id" : "147", "value" : 73500 }, { "_id" : "148", "value" : 74000 }, { "_id" : "149", "value" : 74500 }, { "_id" : "15", "value" : 7500 }, { "_id" : "150", "value" : 75000 }, { "_id" : "151", "value" : 75500 }, { "_id" : "152", "value" : 76000 }, { "_id" : "153", "value" : 76500 }, { 
"_id" : "154", "value" : 77000 }, { "_id" : "155", "value" : 77500 }, { "_id" : "156", "value" : 78000 }, { "_id" : "157", "value" : 78500 }, { "_id" : "158", "value" : 79000 }, { "_id" : "159", "value" : 79500 }, { "_id" : "16", "value" : 8000 }, { "_id" : "160", "value" : 80000 }, { "_id" : "161", "value" : 80500 }, { "_id" : "162", "value" : 81000 }, { "_id" : "163", "value" : 81500 }, { "_id" : "164", "value" : 82000 }, { "_id" : "165", "value" : 82500 }, { "_id" : "166", "value" : 83000 }, { "_id" : "167", "value" : 83500 }, { "_id" : "168", "value" : 84000 }, { "_id" : "169", "value" : 84500 }, { "_id" : "17", "value" : 8500 }, { "_id" : "170", "value" : 85000 }, { "_id" : "171", "value" : 85500 }, { "_id" : "172", "value" : 86000 }, { "_id" : "173", "value" : 86500 }, { "_id" : "174", "value" : 87000 }, { "_id" : "175", "value" : 87500 }, { "_id" : "176", "value" : 88000 }, { "_id" : "177", "value" : 88500 }, { "_id" : "178", "value" : 89000 }, { "_id" : "179", "value" : 89500 }, { "_id" : "18", "value" : 9000 }, { "_id" : "180", "value" : 90000 }, { "_id" : "181", "value" : 90500 }, { "_id" : "182", "value" : 91000 }, { "_id" : "183", "value" : 91500 }, { "_id" : "184", "value" : 92000 }, { "_id" : "185", "value" : 92500 }, { "_id" : "186", "value" : 93000 }, { "_id" : "187", "value" : 93500 }, { "_id" : "188", "value" : 94000 }, { "_id" : "189", "value" : 94500 }, { "_id" : "19", "value" : 9500 }, { "_id" : "190", "value" : 95000 }, { "_id" : "191", "value" : 95500 }, { "_id" : "192", "value" : 96000 }, { "_id" : "193", "value" : 96500 }, { "_id" : "194", "value" : 97000 }, { "_id" : "195", "value" : 97500 }, { "_id" : "196", "value" : 98000 }, { "_id" : "197", "value" : 98500 }, { "_id" : "198", "value" : 99000 }, { "_id" : "199", "value" : 99500 }, { "_id" : "2", "value" : 1000 }, { "_id" : "20", "value" : 10000 }, { "_id" : "200", "value" : 100000 }, { "_id" : "201", "value" : 100500 }, { "_id" : "202", "value" : 101000 }, { "_id" : "203", "value" : 
101500 }, { "_id" : "204", "value" : 102000 }, { "_id" : "205", "value" : 102500 }, { "_id" : "206", "value" : 103000 }, { "_id" : "207", "value" : 103500 }, { "_id" : "208", "value" : 104000 }, { "_id" : "209", "value" : 104500 }, { "_id" : "21", "value" : 10500 }, { "_id" : "210", "value" : 105000 }, { "_id" : "211", "value" : 105500 }, { "_id" : "212", "value" : 106000 }, { "_id" : "213", "value" : 106500 }, { "_id" : "214", "value" : 107000 }, { "_id" : "215", "value" : 107500 }, { "_id" : "216", "value" : 108000 }, { "_id" : "217", "value" : 108500 }, { "_id" : "218", "value" : 109000 }, { "_id" : "219", "value" : 109500 }, { "_id" : "22", "value" : 11000 }, { "_id" : "220", "value" : 110000 }, { "_id" : "221", "value" : 110500 }, { "_id" : "222", "value" : 111000 }, { "_id" : "223", "value" : 111500 }, { "_id" : "224", "value" : 112000 }, { "_id" : "225", "value" : 112500 }, { "_id" : "226", "value" : 113000 }, { "_id" : "227", "value" : 113500 }, { "_id" : "228", "value" : 114000 }, { "_id" : "229", "value" : 114500 }, { "_id" : "23", "value" : 11500 }, { "_id" : "230", "value" : 115000 }, { "_id" : "231", "value" : 115500 }, { "_id" : "232", "value" : 116000 }, { "_id" : "233", "value" : 116500 }, { "_id" : "234", "value" : 117000 }, { "_id" : "235", "value" : 117500 }, { "_id" : "236", "value" : 118000 }, { "_id" : "237", "value" : 118500 }, { "_id" : "238", "value" : 119000 }, { "_id" : "239", "value" : 119500 }, { "_id" : "24", "value" : 12000 }, { "_id" : "240", "value" : 120000 }, { "_id" : "241", "value" : 120500 }, { "_id" : "242", "value" : 121000 }, { "_id" : "243", "value" : 121500 }, { "_id" : "244", "value" : 122000 }, { "_id" : "245", "value" : 122500 }, { "_id" : "246", "value" : 123000 }, { "_id" : "247", "value" : 123500 }, { "_id" : "248", "value" : 124000 }, { "_id" : "249", "value" : 124500 }, { "_id" : "25", "value" : 12500 }, { "_id" : "250", "value" : 125000 }, { "_id" : "251", "value" : 125500 }, { "_id" : "252", "value" : 126000 }, { 
"_id" : "253", "value" : 126500 }, { "_id" : "254", "value" : 127000 }, { "_id" : "255", "value" : 127500 }, { "_id" : "256", "value" : 128000 }, { "_id" : "257", "value" : 128500 }, { "_id" : "258", "value" : 129000 }, { "_id" : "259", "value" : 129500 }, { "_id" : "26", "value" : 13000 }, { "_id" : "260", "value" : 130000 }, { "_id" : "261", "value" : 130500 }, { "_id" : "262", "value" : 131000 }, { "_id" : "263", "value" : 131500 }, { "_id" : "264", "value" : 132000 }, { "_id" : "265", "value" : 132500 }, { "_id" : "266", "value" : 133000 }, { "_id" : "267", "value" : 133500 }, { "_id" : "268", "value" : 134000 }, { "_id" : "269", "value" : 134500 }, { "_id" : "27", "value" : 13500 }, { "_id" : "270", "value" : 135000 }, { "_id" : "271", "value" : 135500 }, { "_id" : "272", "value" : 136000 }, { "_id" : "273", "value" : 136500 }, { "_id" : "274", "value" : 137000 }, { "_id" : "275", "value" : 137500 }, { "_id" : "276", "value" : 138000 }, { "_id" : "277", "value" : 138500 }, { "_id" : "278", "value" : 139000 }, { "_id" : "279", "value" : 139500 }, { "_id" : "28", "value" : 14000 }, { "_id" : "280", "value" : 140000 }, { "_id" : "281", "value" : 140500 }, { "_id" : "282", "value" : 141000 }, { "_id" : "283", "value" : 141500 }, { "_id" : "284", "value" : 142000 }, { "_id" : "285", "value" : 142500 }, { "_id" : "286", "value" : 143000 }, { "_id" : "287", "value" : 143500 }, { "_id" : "288", "value" : 144000 }, { "_id" : "289", "value" : 144500 }, { "_id" : "29", "value" : 14500 }, { "_id" : "290", "value" : 145000 }, { "_id" : "291", "value" : 145500 }, { "_id" : "292", "value" : 146000 }, { "_id" : "293", "value" : 146500 }, { "_id" : "294", "value" : 147000 }, { "_id" : "295", "value" : 147500 }, { "_id" : "296", "value" : 148000 }, { "_id" : "297", "value" : 148500 }, { "_id" : "298", "value" : 149000 }, { "_id" : "299", "value" : 149500 }, { "_id" : "3", "value" : 1500 }, { "_id" : "30", "value" : 15000 }, { "_id" : "300", "value" : 150000 }, { "_id" : "301", 
"value" : 150500 }, { "_id" : "302", "value" : 151000 }, { "_id" : "303", "value" : 151500 }, { "_id" : "304", "value" : 152000 }, { "_id" : "305", "value" : 152500 }, { "_id" : "306", "value" : 153000 }, { "_id" : "307", "value" : 153500 }, { "_id" : "308", "value" : 154000 }, { "_id" : "309", "value" : 154500 }, { "_id" : "31", "value" : 15500 }, { "_id" : "310", "value" : 155000 }, { "_id" : "311", "value" : 155500 }, { "_id" : "312", "value" : 156000 }, { "_id" : "313", "value" : 156500 }, { "_id" : "314", "value" : 157000 }, { "_id" : "315", "value" : 157500 }, { "_id" : "316", "value" : 158000 }, { "_id" : "317", "value" : 158500 }, { "_id" : "318", "value" : 159000 }, { "_id" : "319", "value" : 159500 }, { "_id" : "32", "value" : 16000 }, { "_id" : "320", "value" : 160000 }, { "_id" : "321", "value" : 160500 }, { "_id" : "322", "value" : 161000 }, { "_id" : "323", "value" : 161500 }, { "_id" : "324", "value" : 162000 }, { "_id" : "325", "value" : 162500 }, { "_id" : "326", "value" : 163000 }, { "_id" : "327", "value" : 163500 }, { "_id" : "328", "value" : 164000 }, { "_id" : "329", "value" : 164500 }, { "_id" : "33", "value" : 16500 }, { "_id" : "330", "value" : 165000 }, { "_id" : "331", "value" : 165500 }, { "_id" : "332", "value" : 166000 }, { "_id" : "333", "value" : 166500 }, { "_id" : "334", "value" : 167000 }, { "_id" : "335", "value" : 167500 }, { "_id" : "336", "value" : 168000 }, { "_id" : "337", "value" : 168500 }, { "_id" : "338", "value" : 169000 }, { "_id" : "339", "value" : 169500 }, { "_id" : "34", "value" : 17000 }, { "_id" : "340", "value" : 170000 }, { "_id" : "341", "value" : 170500 }, { "_id" : "342", "value" : 171000 }, { "_id" : "343", "value" : 171500 }, { "_id" : "344", "value" : 172000 }, { "_id" : "345", "value" : 172500 }, { "_id" : "346", "value" : 173000 }, { "_id" : "347", "value" : 173500 }, { "_id" : "348", "value" : 174000 }, { "_id" : "349", "value" : 174500 }, { "_id" : "35", "value" : 17500 }, { "_id" : "350", "value" : 
175000 }, { "_id" : "351", "value" : 175500 }, { "_id" : "352", "value" : 176000 }, { "_id" : "353", "value" : 176500 }, { "_id" : "354", "value" : 177000 }, { "_id" : "355", "value" : 177500 }, { "_id" : "356", "value" : 178000 }, { "_id" : "357", "value" : 178500 }, { "_id" : "358", "value" : 179000 }, { "_id" : "359", "value" : 179500 }, { "_id" : "36", "value" : 18000 }, { "_id" : "360", "value" : 180000 }, { "_id" : "361", "value" : 180500 }, { "_id" : "362", "value" : 181000 }, { "_id" : "363", "value" : 181500 }, { "_id" : "364", "value" : 182000 }, { "_id" : "365", "value" : 182500 }, { "_id" : "366", "value" : 183000 }, { "_id" : "367", "value" : 183500 }, { "_id" : "368", "value" : 184000 }, { "_id" : "369", "value" : 184500 }, { "_id" : "37", "value" : 18500 }, { "_id" : "370", "value" : 185000 }, { "_id" : "371", "value" : 185500 }, { "_id" : "372", "value" : 186000 }, { "_id" : "373", "value" : 186500 }, { "_id" : "374", "value" : 187000 }, { "_id" : "375", "value" : 187500 }, { "_id" : "376", "value" : 188000 }, { "_id" : "377", "value" : 188500 }, { "_id" : "378", "value" : 189000 }, { "_id" : "379", "value" : 189500 }, { "_id" : "38", "value" : 19000 }, { "_id" : "380", "value" : 190000 }, { "_id" : "381", "value" : 190500 }, { "_id" : "382", "value" : 191000 }, { "_id" : "383", "value" : 191500 }, { "_id" : "384", "value" : 192000 }, { "_id" : "385", "value" : 192500 }, { "_id" : "386", "value" : 193000 }, { "_id" : "387", "value" : 193500 }, { "_id" : "388", "value" : 194000 }, { "_id" : "389", "value" : 194500 }, { "_id" : "39", "value" : 19500 }, { "_id" : "390", "value" : 195000 }, { "_id" : "391", "value" : 195500 }, { "_id" : "392", "value" : 196000 }, { "_id" : "393", "value" : 196500 }, { "_id" : "394", "value" : 197000 }, { "_id" : "395", "value" : 197500 }, { "_id" : "396", "value" : 198000 }, { "_id" : "397", "value" : 198500 }, { "_id" : "398", "value" : 199000 }, { "_id" : "399", "value" : 199500 }, { "_id" : "4", "value" : 2000 }, { 
"_id" : "40", "value" : 20000 }, { "_id" : "400", "value" : 200000 }, { "_id" : "401", "value" : 200500 }, { "_id" : "402", "value" : 201000 }, { "_id" : "403", "value" : 201500 }, { "_id" : "404", "value" : 202000 }, { "_id" : "405", "value" : 202500 }, { "_id" : "406", "value" : 203000 }, { "_id" : "407", "value" : 203500 }, { "_id" : "408", "value" : 204000 }, { "_id" : "409", "value" : 204500 }, { "_id" : "41", "value" : 20500 }, { "_id" : "410", "value" : 205000 }, { "_id" : "411", "value" : 205500 }, { "_id" : "412", "value" : 206000 }, { "_id" : "413", "value" : 206500 }, { "_id" : "414", "value" : 207000 }, { "_id" : "415", "value" : 207500 }, { "_id" : "416", "value" : 208000 }, { "_id" : "417", "value" : 208500 }, { "_id" : "418", "value" : 209000 }, { "_id" : "419", "value" : 209500 }, { "_id" : "42", "value" : 21000 }, { "_id" : "420", "value" : 210000 }, { "_id" : "421", "value" : 210500 }, { "_id" : "422", "value" : 211000 }, { "_id" : "423", "value" : 211500 }, { "_id" : "424", "value" : 212000 }, { "_id" : "425", "value" : 212500 }, { "_id" : "426", "value" : 213000 }, { "_id" : "427", "value" : 213500 }, { "_id" : "428", "value" : 214000 }, { "_id" : "429", "value" : 214500 }, { "_id" : "43", "value" : 21500 }, { "_id" : "430", "value" : 215000 }, { "_id" : "431", "value" : 215500 }, { "_id" : "432", "value" : 216000 }, { "_id" : "433", "value" : 216500 }, { "_id" : "434", "value" : 217000 }, { "_id" : "435", "value" : 217500 }, { "_id" : "436", "value" : 218000 }, { "_id" : "437", "value" : 218500 }, { "_id" : "438", "value" : 219000 }, { "_id" : "439", "value" : 219500 }, { "_id" : "44", "value" : 22000 }, { "_id" : "440", "value" : 220000 }, { "_id" : "441", "value" : 220500 }, { "_id" : "442", "value" : 221000 }, { "_id" : "443", "value" : 221500 }, { "_id" : "444", "value" : 222000 }, { "_id" : "445", "value" : 222500 }, { "_id" : "446", "value" : 223000 }, { "_id" : "447", "value" : 223500 }, { "_id" : "448", "value" : 224000 }, { "_id" : 
"449", "value" : 224500 }, { "_id" : "45", "value" : 22500 }, { "_id" : "450", "value" : 225000 }, { "_id" : "451", "value" : 225500 }, { "_id" : "452", "value" : 226000 }, { "_id" : "453", "value" : 226500 }, { "_id" : "454", "value" : 227000 }, { "_id" : "455", "value" : 227500 }, { "_id" : "456", "value" : 228000 }, { "_id" : "457", "value" : 228500 }, { "_id" : "458", "value" : 229000 }, { "_id" : "459", "value" : 229500 }, { "_id" : "46", "value" : 23000 }, { "_id" : "460", "value" : 230000 }, { "_id" : "461", "value" : 230500 }, { "_id" : "462", "value" : 231000 }, { "_id" : "463", "value" : 231500 }, { "_id" : "464", "value" : 232000 }, { "_id" : "465", "value" : 232500 }, { "_id" : "466", "value" : 233000 }, { "_id" : "467", "value" : 233500 }, { "_id" : "468", "value" : 234000 }, { "_id" : "469", "value" : 234500 }, { "_id" : "47", "value" : 23500 }, { "_id" : "470", "value" : 235000 }, { "_id" : "471", "value" : 235500 }, { "_id" : "472", "value" : 236000 }, { "_id" : "473", "value" : 236500 }, { "_id" : "474", "value" : 237000 }, { "_id" : "475", "value" : 237500 }, { "_id" : "476", "value" : 238000 }, { "_id" : "477", "value" : 238500 }, { "_id" : "478", "value" : 239000 }, { "_id" : "479", "value" : 239500 }, { "_id" : "48", "value" : 24000 }, { "_id" : "480", "value" : 240000 }, { "_id" : "481", "value" : 240500 }, { "_id" : "482", "value" : 241000 }, { "_id" : "483", "value" : 241500 }, { "_id" : "484", "value" : 242000 }, { "_id" : "485", "value" : 242500 }, { "_id" : "486", "value" : 243000 }, { "_id" : "487", "value" : 243500 }, { "_id" : "488", "value" : 244000 }, { "_id" : "489", "value" : 244500 }, { "_id" : "49", "value" : 24500 }, { "_id" : "490", "value" : 245000 }, { "_id" : "491", "value" : 245500 }, { "_id" : "492", "value" : 246000 }, { "_id" : "493", "value" : 246500 }, { "_id" : "494", "value" : 247000 }, { "_id" : "495", "value" : 247500 }, { "_id" : "496", "value" : 248000 }, { "_id" : "497", "value" : 248500 }, { "_id" : "498", 
"value" : 249000 }, { "_id" : "499", "value" : 249500 }, { "_id" : "5", "value" : 2500 }, { "_id" : "50", "value" : 25000 }, { "_id" : "500", "value" : 250000 }, { "_id" : "501", "value" : 250500 }, { "_id" : "502", "value" : 251000 }, { "_id" : "503", "value" : 251500 }, { "_id" : "504", "value" : 252000 }, { "_id" : "505", "value" : 252500 }, { "_id" : "506", "value" : 253000 }, { "_id" : "507", "value" : 253500 }, { "_id" : "508", "value" : 254000 }, { "_id" : "509", "value" : 254500 }, { "_id" : "51", "value" : 25500 }, { "_id" : "510", "value" : 255000 }, { "_id" : "511", "value" : 255500 }, { "_id" : "512", "value" : 256000 }, { "_id" : "513", "value" : 256500 }, { "_id" : "514", "value" : 257000 }, { "_id" : "515", "value" : 257500 }, { "_id" : "516", "value" : 258000 }, { "_id" : "517", "value" : 258500 }, { "_id" : "518", "value" : 259000 }, { "_id" : "519", "value" : 259500 }, { "_id" : "52", "value" : 26000 }, { "_id" : "520", "value" : 260000 }, { "_id" : "521", "value" : 260500 }, { "_id" : "522", "value" : 261000 }, { "_id" : "523", "value" : 261500 }, { "_id" : "524", "value" : 262000 }, { "_id" : "525", "value" : 262500 }, { "_id" : "526", "value" : 263000 }, { "_id" : "527", "value" : 263500 }, { "_id" : "528", "value" : 264000 }, { "_id" : "529", "value" : 264500 }, { "_id" : "53", "value" : 26500 }, { "_id" : "530", "value" : 265000 }, { "_id" : "531", "value" : 265500 }, { "_id" : "532", "value" : 266000 }, { "_id" : "533", "value" : 266500 }, { "_id" : "534", "value" : 267000 }, { "_id" : "535", "value" : 267500 }, { "_id" : "536", "value" : 268000 }, { "_id" : "537", "value" : 268500 }, { "_id" : "538", "value" : 269000 }, { "_id" : "539", "value" : 269500 }, { "_id" : "54", "value" : 27000 }, { "_id" : "540", "value" : 270000 }, { "_id" : "541", "value" : 270500 }, { "_id" : "542", "value" : 271000 }, { "_id" : "543", "value" : 271500 }, { "_id" : "544", "value" : 272000 }, { "_id" : "545", "value" : 272500 }, { "_id" : "546", "value" : 
273000 }, { "_id" : "547", "value" : 273500 }, { "_id" : "548", "value" : 274000 }, { "_id" : "549", "value" : 274500 }, { "_id" : "55", "value" : 27500 }, { "_id" : "550", "value" : 275000 }, { "_id" : "551", "value" : 275500 }, { "_id" : "552", "value" : 276000 }, { "_id" : "553", "value" : 276500 }, { "_id" : "554", "value" : 277000 }, { "_id" : "555", "value" : 277500 }, { "_id" : "556", "value" : 278000 }, { "_id" : "557", "value" : 278500 }, { "_id" : "558", "value" : 279000 }, { "_id" : "559", "value" : 279500 }, { "_id" : "56", "value" : 28000 }, { "_id" : "560", "value" : 280000 }, { "_id" : "561", "value" : 280500 }, { "_id" : "562", "value" : 281000 }, { "_id" : "563", "value" : 281500 }, { "_id" : "564", "value" : 282000 }, { "_id" : "565", "value" : 282500 }, { "_id" : "566", "value" : 283000 }, { "_id" : "567", "value" : 283500 }, { "_id" : "568", "value" : 284000 }, { "_id" : "569", "value" : 284500 }, { "_id" : "57", "value" : 28500 }, { "_id" : "570", "value" : 285000 }, { "_id" : "571", "value" : 285500 }, { "_id" : "572", "value" : 286000 }, { "_id" : "573", "value" : 286500 }, { "_id" : "574", "value" : 287000 }, { "_id" : "575", "value" : 287500 }, { "_id" : "576", "value" : 288000 }, { "_id" : "577", "value" : 288500 }, { "_id" : "578", "value" : 289000 }, { "_id" : "579", "value" : 289500 }, { "_id" : "58", "value" : 29000 }, { "_id" : "580", "value" : 290000 }, { "_id" : "581", "value" : 290500 }, { "_id" : "582", "value" : 291000 }, { "_id" : "583", "value" : 291500 }, { "_id" : "584", "value" : 292000 }, { "_id" : "585", "value" : 292500 }, { "_id" : "586", "value" : 293000 }, { "_id" : "587", "value" : 293500 }, { "_id" : "588", "value" : 294000 }, { "_id" : "589", "value" : 294500 }, { "_id" : "59", "value" : 29500 }, { "_id" : "590", "value" : 295000 }, { "_id" : "591", "value" : 295500 }, { "_id" : "592", "value" : 296000 }, { "_id" : "593", "value" : 296500 }, { "_id" : "594", "value" : 297000 }, { "_id" : "595", "value" : 297500 }, { 
"_id" : "596", "value" : 298000 }, { "_id" : "597", "value" : 298500 }, { "_id" : "598", "value" : 299000 }, { "_id" : "599", "value" : 299500 }, { "_id" : "6", "value" : 3000 }, { "_id" : "60", "value" : 30000 }, { "_id" : "600", "value" : 300000 }, { "_id" : "601", "value" : 300500 }, { "_id" : "602", "value" : 301000 }, { "_id" : "603", "value" : 301500 }, { "_id" : "604", "value" : 302000 }, { "_id" : "605", "value" : 302500 }, { "_id" : "606", "value" : 303000 }, { "_id" : "607", "value" : 303500 }, { "_id" : "608", "value" : 304000 }, { "_id" : "609", "value" : 304500 }, { "_id" : "61", "value" : 30500 }, { "_id" : "610", "value" : 305000 }, { "_id" : "611", "value" : 305500 }, { "_id" : "612", "value" : 306000 }, { "_id" : "613", "value" : 306500 }, { "_id" : "614", "value" : 307000 }, { "_id" : "615", "value" : 307500 }, { "_id" : "616", "value" : 308000 }, { "_id" : "617", "value" : 308500 }, { "_id" : "618", "value" : 309000 }, { "_id" : "619", "value" : 309500 }, { "_id" : "62", "value" : 31000 }, { "_id" : "620", "value" : 310000 }, { "_id" : "621", "value" : 310500 }, { "_id" : "622", "value" : 311000 }, { "_id" : "623", "value" : 311500 }, { "_id" : "624", "value" : 312000 }, { "_id" : "625", "value" : 312500 }, { "_id" : "626", "value" : 313000 }, { "_id" : "627", "value" : 313500 }, { "_id" : "628", "value" : 314000 }, { "_id" : "629", "value" : 314500 }, { "_id" : "63", "value" : 31500 }, { "_id" : "630", "value" : 315000 }, { "_id" : "631", "value" : 315500 }, { "_id" : "632", "value" : 316000 }, { "_id" : "633", "value" : 316500 }, { "_id" : "634", "value" : 317000 }, { "_id" : "635", "value" : 317500 }, { "_id" : "636", "value" : 318000 }, { "_id" : "637", "value" : 318500 }, { "_id" : "638", "value" : 319000 }, { "_id" : "639", "value" : 319500 }, { "_id" : "64", "value" : 32000 }, { "_id" : "640", "value" : 320000 }, { "_id" : "641", "value" : 320500 }, { "_id" : "642", "value" : 321000 }, { "_id" : "643", "value" : 321500 }, { "_id" : "644", 
"value" : 322000 }, { "_id" : "645", "value" : 322500 }, { "_id" : "646", "value" : 323000 }, { "_id" : "647", "value" : 323500 }, { "_id" : "648", "value" : 324000 }, { "_id" : "649", "value" : 324500 }, { "_id" : "65", "value" : 32500 }, { "_id" : "650", "value" : 325000 }, { "_id" : "651", "value" : 325500 }, { "_id" : "652", "value" : 326000 }, { "_id" : "653", "value" : 326500 }, { "_id" : "654", "value" : 327000 }, { "_id" : "655", "value" : 327500 }, { "_id" : "656", "value" : 328000 }, { "_id" : "657", "value" : 328500 }, { "_id" : "658", "value" : 329000 }, { "_id" : "659", "value" : 329500 }, { "_id" : "66", "value" : 33000 }, { "_id" : "660", "value" : 330000 }, { "_id" : "661", "value" : 330500 }, { "_id" : "662", "value" : 331000 }, { "_id" : "663", "value" : 331500 }, { "_id" : "664", "value" : 332000 }, { "_id" : "665", "value" : 332500 }, { "_id" : "666", "value" : 333000 }, { "_id" : "667", "value" : 333500 }, { "_id" : "668", "value" : 334000 }, { "_id" : "669", "value" : 334500 }, { "_id" : "67", "value" : 33500 }, { "_id" : "670", "value" : 335000 }, { "_id" : "671", "value" : 335500 }, { "_id" : "672", "value" : 336000 }, { "_id" : "673", "value" : 336500 }, { "_id" : "674", "value" : 337000 }, { "_id" : "675", "value" : 337500 }, { "_id" : "676", "value" : 338000 }, { "_id" : "677", "value" : 338500 }, { "_id" : "678", "value" : 339000 }, { "_id" : "679", "value" : 339500 }, { "_id" : "68", "value" : 34000 }, { "_id" : "680", "value" : 340000 }, { "_id" : "681", "value" : 340500 }, { "_id" : "682", "value" : 341000 }, { "_id" : "683", "value" : 341500 }, { "_id" : "684", "value" : 342000 }, { "_id" : "685", "value" : 342500 }, { "_id" : "686", "value" : 343000 }, { "_id" : "687", "value" : 343500 }, { "_id" : "688", "value" : 344000 }, { "_id" : "689", "value" : 344500 }, { "_id" : "69", "value" : 34500 }, { "_id" : "690", "value" : 345000 }, { "_id" : "691", "value" : 345500 }, { "_id" : "692", "value" : 346000 }, { "_id" : "693", "value" : 
346500 }, { "_id" : "694", "value" : 347000 }, { "_id" : "695", "value" : 347500 }, { "_id" : "696", "value" : 348000 }, { "_id" : "697", "value" : 348500 }, { "_id" : "698", "value" : 349000 }, { "_id" : "699", "value" : 349500 }, { "_id" : "7", "value" : 3500 }, { "_id" : "70", "value" : 35000 }, { "_id" : "700", "value" : 350000 }, { "_id" : "701", "value" : 350500 }, { "_id" : "702", "value" : 351000 }, { "_id" : "703", "value" : 351500 }, { "_id" : "704", "value" : 352000 }, { "_id" : "705", "value" : 352500 }, { "_id" : "706", "value" : 353000 }, { "_id" : "707", "value" : 353500 }, { "_id" : "708", "value" : 354000 }, { "_id" : "709", "value" : 354500 }, { "_id" : "71", "value" : 35500 }, { "_id" : "710", "value" : 355000 }, { "_id" : "711", "value" : 355500 }, { "_id" : "712", "value" : 356000 }, { "_id" : "713", "value" : 356500 }, { "_id" : "714", "value" : 357000 }, { "_id" : "715", "value" : 357500 }, { "_id" : "716", "value" : 358000 }, { "_id" : "717", "value" : 358500 }, { "_id" : "718", "value" : 359000 }, { "_id" : "719", "value" : 359500 }, { "_id" : "72", "value" : 36000 }, { "_id" : "720", "value" : 360000 }, { "_id" : "721", "value" : 360500 }, { "_id" : "722", "value" : 361000 }, { "_id" : "723", "value" : 361500 }, { "_id" : "724", "value" : 362000 }, { "_id" : "725", "value" : 362500 }, { "_id" : "726", "value" : 363000 }, { "_id" : "727", "value" : 363500 }, { "_id" : "728", "value" : 364000 }, { "_id" : "729", "value" : 364500 }, { "_id" : "73", "value" : 36500 }, { "_id" : "730", "value" : 365000 }, { "_id" : "731", "value" : 365500 }, { "_id" : "732", "value" : 366000 }, { "_id" : "733", "value" : 366500 }, { "_id" : "734", "value" : 367000 }, { "_id" : "735", "value" : 367500 }, { "_id" : "736", "value" : 368000 }, { "_id" : "737", "value" : 368500 }, { "_id" : "738", "value" : 369000 }, { "_id" : "739", "value" : 369500 }, { "_id" : "74", "value" : 37000 }, { "_id" : "740", "value" : 370000 }, { "_id" : "741", "value" : 370500 }, { 
"_id" : "742", "value" : 371000 }, { "_id" : "743", "value" : 371500 }, { "_id" : "744", "value" : 372000 }, { "_id" : "745", "value" : 372500 }, { "_id" : "746", "value" : 373000 }, { "_id" : "747", "value" : 373500 }, { "_id" : "748", "value" : 374000 }, { "_id" : "749", "value" : 374500 }, { "_id" : "75", "value" : 37500 }, { "_id" : "750", "value" : 375000 }, { "_id" : "751", "value" : 375500 }, { "_id" : "752", "value" : 376000 }, { "_id" : "753", "value" : 376500 }, { "_id" : "754", "value" : 377000 }, { "_id" : "755", "value" : 377500 }, { "_id" : "756", "value" : 378000 }, { "_id" : "757", "value" : 378500 }, { "_id" : "758", "value" : 379000 }, { "_id" : "759", "value" : 379500 }, { "_id" : "76", "value" : 38000 }, { "_id" : "760", "value" : 380000 }, { "_id" : "761", "value" : 380500 }, { "_id" : "762", "value" : 381000 }, { "_id" : "763", "value" : 381500 }, { "_id" : "764", "value" : 382000 }, { "_id" : "765", "value" : 382500 }, { "_id" : "766", "value" : 383000 }, { "_id" : "767", "value" : 383500 }, { "_id" : "768", "value" : 384000 }, { "_id" : "769", "value" : 384500 }, { "_id" : "77", "value" : 38500 }, { "_id" : "770", "value" : 385000 }, { "_id" : "771", "value" : 385500 }, { "_id" : "772", "value" : 386000 }, { "_id" : "773", "value" : 386500 }, { "_id" : "774", "value" : 387000 }, { "_id" : "775", "value" : 387500 }, { "_id" : "776", "value" : 388000 }, { "_id" : "777", "value" : 388500 }, { "_id" : "778", "value" : 389000 }, { "_id" : "779", "value" : 389500 }, { "_id" : "78", "value" : 39000 }, { "_id" : "780", "value" : 390000 }, { "_id" : "781", "value" : 390500 }, { "_id" : "782", "value" : 391000 }, { "_id" : "783", "value" : 391500 }, { "_id" : "784", "value" : 392000 }, { "_id" : "785", "value" : 392500 }, { "_id" : "786", "value" : 393000 }, { "_id" : "787", "value" : 393500 }, { "_id" : "788", "value" : 394000 }, { "_id" : "789", "value" : 394500 }, { "_id" : "79", "value" : 39500 }, { "_id" : "790", "value" : 395000 }, { "_id" : 
"791", "value" : 395500 }, { "_id" : "792", "value" : 396000 }, { "_id" : "793", "value" : 396500 }, { "_id" : "794", "value" : 397000 }, { "_id" : "795", "value" : 397500 }, { "_id" : "796", "value" : 398000 }, { "_id" : "797", "value" : 398500 }, { "_id" : "798", "value" : 399000 }, { "_id" : "799", "value" : 399500 }, { "_id" : "8", "value" : 4000 }, { "_id" : "80", "value" : 40000 }, { "_id" : "800", "value" : 400000 }, { "_id" : "801", "value" : 400500 }, { "_id" : "802", "value" : 401000 }, { "_id" : "803", "value" : 401500 }, { "_id" : "804", "value" : 402000 }, { "_id" : "805", "value" : 402500 }, { "_id" : "806", "value" : 403000 }, { "_id" : "807", "value" : 403500 }, { "_id" : "808", "value" : 404000 }, { "_id" : "809", "value" : 404500 }, { "_id" : "81", "value" : 40500 }, { "_id" : "810", "value" : 405000 }, { "_id" : "811", "value" : 405500 }, { "_id" : "812", "value" : 406000 }, { "_id" : "813", "value" : 406500 }, { "_id" : "814", "value" : 407000 }, { "_id" : "815", "value" : 407500 }, { "_id" : "816", "value" : 408000 }, { "_id" : "817", "value" : 408500 }, { "_id" : "818", "value" : 409000 }, { "_id" : "819", "value" : 409500 }, { "_id" : "82", "value" : 41000 }, { "_id" : "820", "value" : 410000 }, { "_id" : "821", "value" : 410500 }, { "_id" : "822", "value" : 411000 }, { "_id" : "823", "value" : 411500 }, { "_id" : "824", "value" : 412000 }, { "_id" : "825", "value" : 412500 }, { "_id" : "826", "value" : 413000 }, { "_id" : "827", "value" : 413500 }, { "_id" : "828", "value" : 414000 }, { "_id" : "829", "value" : 414500 }, { "_id" : "83", "value" : 41500 }, { "_id" : "830", "value" : 415000 }, { "_id" : "831", "value" : 415500 }, { "_id" : "832", "value" : 416000 }, { "_id" : "833", "value" : 416500 }, { "_id" : "834", "value" : 417000 }, { "_id" : "835", "value" : 417500 }, { "_id" : "836", "value" : 418000 }, { "_id" : "837", "value" : 418500 }, { "_id" : "838", "value" : 419000 }, { "_id" : "839", "value" : 419500 }, { "_id" : "84", "value" 
: 42000 }, { "_id" : "840", "value" : 420000 }, { "_id" : "841", "value" : 420500 }, { "_id" : "842", "value" : 421000 }, { "_id" : "843", "value" : 421500 }, { "_id" : "844", "value" : 422000 }, { "_id" : "845", "value" : 422500 }, { "_id" : "846", "value" : 423000 }, { "_id" : "847", "value" : 423500 }, { "_id" : "848", "value" : 424000 }, { "_id" : "849", "value" : 424500 }, { "_id" : "85", "value" : 42500 }, { "_id" : "850", "value" : 425000 }, { "_id" : "851", "value" : 425500 }, { "_id" : "852", "value" : 426000 }, { "_id" : "853", "value" : 426500 }, { "_id" : "854", "value" : 427000 }, { "_id" : "855", "value" : 427500 }, { "_id" : "856", "value" : 428000 }, { "_id" : "857", "value" : 428500 }, { "_id" : "858", "value" : 429000 }, { "_id" : "859", "value" : 429500 }, { "_id" : "86", "value" : 43000 }, { "_id" : "860", "value" : 430000 }, { "_id" : "861", "value" : 430500 }, { "_id" : "862", "value" : 431000 }, { "_id" : "863", "value" : 431500 }, { "_id" : "864", "value" : 432000 }, { "_id" : "865", "value" : 432500 }, { "_id" : "866", "value" : 433000 }, { "_id" : "867", "value" : 433500 }, { "_id" : "868", "value" : 434000 }, { "_id" : "869", "value" : 434500 }, { "_id" : "87", "value" : 43500 }, { "_id" : "870", "value" : 435000 }, { "_id" : "871", "value" : 435500 }, { "_id" : "872", "value" : 436000 }, { "_id" : "873", "value" : 436500 }, { "_id" : "874", "value" : 437000 }, { "_id" : "875", "value" : 437500 }, { "_id" : "876", "value" : 438000 }, { "_id" : "877", "value" : 438500 }, { "_id" : "878", "value" : 439000 }, { "_id" : "879", "value" : 439500 }, { "_id" : "88", "value" : 44000 }, { "_id" : "880", "value" : 440000 }, { "_id" : "881", "value" : 440500 }, { "_id" : "882", "value" : 441000 }, { "_id" : "883", "value" : 441500 }, { "_id" : "884", "value" : 442000 }, { "_id" : "885", "value" : 442500 }, { "_id" : "886", "value" : 443000 }, { "_id" : "887", "value" : 443500 }, { "_id" : "888", "value" : 444000 }, { "_id" : "889", "value" : 444500 
}, { "_id" : "89", "value" : 44500 }, { "_id" : "890", "value" : 445000 }, { "_id" : "891", "value" : 445500 }, { "_id" : "892", "value" : 446000 }, { "_id" : "893", "value" : 446500 }, { "_id" : "894", "value" : 447000 }, { "_id" : "895", "value" : 447500 }, { "_id" : "896", "value" : 448000 }, { "_id" : "897", "value" : 448500 }, { "_id" : "898", "value" : 449000 }, { "_id" : "899", "value" : 449500 }, { "_id" : "9", "value" : 4500 }, { "_id" : "90", "value" : 45000 }, { "_id" : "900", "value" : 450000 }, { "_id" : "901", "value" : 450500 }, { "_id" : "902", "value" : 451000 }, { "_id" : "903", "value" : 451500 }, { "_id" : "904", "value" : 452000 }, { "_id" : "905", "value" : 452500 }, { "_id" : "906", "value" : 453000 }, { "_id" : "907", "value" : 453500 }, { "_id" : "908", "value" : 454000 }, { "_id" : "909", "value" : 454500 }, { "_id" : "91", "value" : 45500 }, { "_id" : "910", "value" : 455000 }, { "_id" : "911", "value" : 455500 }, { "_id" : "912", "value" : 456000 }, { "_id" : "913", "value" : 456500 }, { "_id" : "914", "value" : 457000 }, { "_id" : "915", "value" : 457500 }, { "_id" : "916", "value" : 458000 }, { "_id" : "917", "value" : 458500 }, { "_id" : "918", "value" : 459000 }, { "_id" : "919", "value" : 459500 }, { "_id" : "92", "value" : 46000 }, { "_id" : "920", "value" : 460000 }, { "_id" : "921", "value" : 460500 }, { "_id" : "922", "value" : 461000 }, { "_id" : "923", "value" : 461500 }, { "_id" : "924", "value" : 462000 }, { "_id" : "925", "value" : 462500 }, { "_id" : "926", "value" : 463000 }, { "_id" : "927", "value" : 463500 }, { "_id" : "928", "value" : 464000 }, { "_id" : "929", "value" : 464500 }, { "_id" : "93", "value" : 46500 }, { "_id" : "930", "value" : 465000 }, { "_id" : "931", "value" : 465500 }, { "_id" : "932", "value" : 466000 }, { "_id" : "933", "value" : 466500 }, { "_id" : "934", "value" : 467000 }, { "_id" : "935", "value" : 467500 }, { "_id" : "936", "value" : 468000 }, { "_id" : "937", "value" : 468500 }, { "_id" : 
"938", "value" : 469000 }, { "_id" : "939", "value" : 469500 }, { "_id" : "94", "value" : 47000 }, { "_id" : "940", "value" : 470000 }, { "_id" : "941", "value" : 470500 }, { "_id" : "942", "value" : 471000 }, { "_id" : "943", "value" : 471500 }, { "_id" : "944", "value" : 472000 }, { "_id" : "945", "value" : 472500 }, { "_id" : "946", "value" : 473000 }, { "_id" : "947", "value" : 473500 }, { "_id" : "948", "value" : 474000 }, { "_id" : "949", "value" : 474500 }, { "_id" : "95", "value" : 47500 }, { "_id" : "950", "value" : 475000 }, { "_id" : "951", "value" : 475500 }, { "_id" : "952", "value" : 476000 }, { "_id" : "953", "value" : 476500 }, { "_id" : "954", "value" : 477000 }, { "_id" : "955", "value" : 477500 }, { "_id" : "956", "value" : 478000 }, { "_id" : "957", "value" : 478500 }, { "_id" : "958", "value" : 479000 }, { "_id" : "959", "value" : 479500 }, { "_id" : "96", "value" : 48000 }, { "_id" : "960", "value" : 480000 }, { "_id" : "961", "value" : 480500 }, { "_id" : "962", "value" : 481000 }, { "_id" : "963", "value" : 481500 }, { "_id" : "964", "value" : 482000 }, { "_id" : "965", "value" : 482500 }, { "_id" : "966", "value" : 483000 }, { "_id" : "967", "value" : 483500 }, { "_id" : "968", "value" : 484000 }, { "_id" : "969", "value" : 484500 }, { "_id" : "97", "value" : 48500 }, { "_id" : "970", "value" : 485000 }, { "_id" : "971", "value" : 485500 }, { "_id" : "972", "value" : 486000 }, { "_id" : "973", "value" : 486500 }, { "_id" : "974", "value" : 487000 }, { "_id" : "975", "value" : 487500 }, { "_id" : "976", "value" : 488000 }, { "_id" : "977", "value" : 488500 }, { "_id" : "978", "value" : 489000 }, { "_id" : "979", "value" : 489500 }, { "_id" : "98", "value" : 49000 }, { "_id" : "980", "value" : 490000 }, { "_id" : "981", "value" : 490500 }, { "_id" : "982", "value" : 491000 }, { "_id" : "983", "value" : 491500 }, { "_id" : "984", "value" : 492000 }, { "_id" : "985", "value" : 492500 }, { "_id" : "986", "value" : 493000 }, { "_id" : "987", 
"value" : 493500 }, { "_id" : "988", "value" : 494000 }, { "_id" : "989", "value" : 494500 }, { "_id" : "99", "value" : 49500 }, { "_id" : "990", "value" : 495000 }, { "_id" : "991", "value" : 495500 }, { "_id" : "992", "value" : 496000 }, { "_id" : "993", "value" : 496500 }, { "_id" : "994", "value" : 497000 }, { "_id" : "995", "value" : 497500 }, { "_id" : "996", "value" : 498000 }, { "_id" : "997", "value" : 498500 }, { "_id" : "998", "value" : 499000 }, { "_id" : "999", "value" : 499500 } ] m30999| Fri Feb 22 11:50:21.144 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.$cmd", n2skip: 0, n2return: 1, options: 0, query: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30999| var total = 0 m30999| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533821_1", shardedFirstPass: true }, fields: {} } and CInfo { v_ns: "mr_shard_version.coll", filter: {} } m30999| Fri Feb 22 11:50:21.144 [conn1] [pcursor] initializing over 2 shards required by [mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534] m30999| Fri Feb 22 11:50:21.144 [conn1] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:21.144 [conn1] [pcursor] initialized command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:21.144 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:21.144 [conn1] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { 
conn: "localhost:30001", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:21.145 [conn1] [pcursor] finishing over 2 shards m30999| Fri Feb 22 11:50:21.145 [conn1] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30001| Fri Feb 22 11:50:21.145 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2 m30000| Fri Feb 22 11:50:21.146 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1 m30000| Fri Feb 22 11:50:21.146 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1_inc m30000| Fri Feb 22 11:50:21.146 [conn6] build index mr_shard_version.tmp.mr.coll_1_inc { 0: 1 } m30001| Fri Feb 22 11:50:21.147 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2_inc m30001| Fri Feb 22 11:50:21.147 [conn3] build index mr_shard_version.tmp.mr.coll_2_inc { 0: 1 } m30000| Fri Feb 22 11:50:21.147 [conn6] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 11:50:21.148 [conn6] build index mr_shard_version.tmp.mr.coll_1 { _id: 1 } m30000| Fri Feb 22 11:50:21.148 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 11:50:21.150 [conn3] build index done. scanned 0 total records. 0.002 secs m30001| Fri Feb 22 11:50:21.151 [conn3] build index mr_shard_version.tmp.mr.coll_2 { _id: 1 } m30001| Fri Feb 22 11:50:21.153 [conn3] build index done. scanned 0 total records. 
0.002 secs m30001| Fri Feb 22 11:50:21.298 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 44231, clonedBytes: 2029676, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:22.322 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 46477, clonedBytes: 2132772, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:50:22.613 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:50:22 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533762:16838', sleeping for 30000ms m30999| Fri Feb 22 11:50:22.632 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:22.632 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:50:23.346 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 49464, clonedBytes: 2269844, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:24.001 [conn3] M/R: (1/3) Emit Progress: 73200/481056 15% m30000| Fri Feb 22 11:50:24.018 [conn6] M/R: (1/3) Emit Progress: 71700/294192 24% m30001| Fri Feb 22 11:50:24.371 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 51463, clonedBytes: 2361578, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:25.395 [conn4] moveChunk data transfer progress: { 
active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 54298, clonedBytes: 2491658, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:26.419 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 57376, clonedBytes: 2632916, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:27.000 [conn6] M/R: (1/3) Emit Progress: 161200/294192 54% m30001| Fri Feb 22 11:50:27.001 [conn3] M/R: (1/3) Emit Progress: 165400/481056 34% m30001| Fri Feb 22 11:50:27.444 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 61199, clonedBytes: 2808334, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:28.468 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 64149, clonedBytes: 2943704, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:50:28.633 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:28.633 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:50:29.492 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 67340, clonedBytes: 3090160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:30.000 [conn6] M/R: (1/3) Emit Progress: 
246900/294192 83% m30001| Fri Feb 22 11:50:30.391 [conn3] CMD: drop mr_shard_version.tmp.mrs.coll_1361533821_1 m30001| Fri Feb 22 11:50:30.393 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2 m30001| Fri Feb 22 11:50:30.393 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2 m30001| Fri Feb 22 11:50:30.393 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2_inc m30001| Fri Feb 22 11:50:30.395 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2 m30001| Fri Feb 22 11:50:30.395 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_2_inc m30001| Fri Feb 22 11:50:30.395 [conn3] command mr_shard_version.$cmd command: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30001| var total = 0 m30001| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533821_1", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 3501 locks(micros) W:2339 r:14995617 w:30198 reslen:149 9250ms m30000| Fri Feb 22 11:50:30.408 [conn6] CMD: drop mr_shard_version.tmp.mrs.coll_1361533821_1 m30000| Fri Feb 22 11:50:30.412 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1 m30000| Fri Feb 22 11:50:30.412 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1 m30000| Fri Feb 22 11:50:30.412 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1_inc m30000| Fri Feb 22 11:50:30.415 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1 m30000| Fri Feb 22 11:50:30.415 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_1_inc m30000| Fri Feb 22 11:50:30.415 [conn6] command mr_shard_version.$cmd command: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30000| var total = 0 m30000| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533821_1", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 3501 locks(micros) W:4759 r:14934148 w:35541 reslen:149 9270ms m30999| Fri Feb 22 11:50:30.415 [conn1] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: 
"mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: { result: "tmp.mrs.coll_1361533821_1", timeMillis: 9267, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:30.415 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.415 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: { result: "tmp.mrs.coll_1361533821_1", timeMillis: 9248, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:50:30.415 [conn1] MR with single shard output, NS=mr_shard_version.mrOutput primary=shard0001:localhost:30001 m30001| Fri Feb 22 11:50:30.416 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_3 m30001| Fri Feb 22 11:50:30.416 [conn3] build index mr_shard_version.tmp.mr.coll_3 { _id: 1 } m30001| Fri Feb 22 11:50:30.417 [conn3] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 11:50:30.486 [conn3] CMD: drop mr_shard_version.mrOutput m30001| Fri Feb 22 11:50:30.490 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_3 m30001| Fri Feb 22 11:50:30.491 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_3 m30001| Fri Feb 22 11:50:30.491 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_3 m30000| Fri Feb 22 11:50:30.492 [conn7] CMD: drop mr_shard_version.tmp.mrs.coll_1361533821_1 m30001| Fri Feb 22 11:50:30.497 [conn8] CMD: drop mr_shard_version.tmp.mrs.coll_1361533821_1 m30999| Fri Feb 22 11:50:30.498 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.mrOutput", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:50:30.498 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ shard0001:localhost:30001] m30999| Fri Feb 22 11:50:30.498 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.498 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.498 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:50:30.498 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.499 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "shard0001:localhost:30001", cursor: { _id: "0", value: 
0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30001| Fri Feb 22 11:50:30.517 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 73952, clonedBytes: 3393652, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 [ { "_id" : "0", "value" : 0 }, { "_id" : "1", "value" : 500 }, { "_id" : "10", "value" : 5000 }, { "_id" : "100", "value" : 50000 }, { "_id" : "101", "value" : 50500 }, { "_id" : "102", "value" : 51000 }, { "_id" : "103", "value" : 51500 }, { "_id" : "104", "value" : 52000 }, { "_id" : "105", "value" : 52500 }, { "_id" : "106", "value" : 53000 }, { "_id" : "107", "value" : 53500 }, { "_id" : "108", "value" : 54000 }, { "_id" : "109", "value" : 54500 }, { "_id" : "11", "value" : 5500 }, { "_id" : "110", "value" : 55000 }, { "_id" : "111", "value" : 55500 }, { "_id" : "112", "value" : 56000 }, { "_id" : "113", "value" : 56500 }, { "_id" : "114", "value" : 57000 }, { "_id" : "115", "value" : 57500 }, { "_id" : "116", "value" : 58000 }, { "_id" : "117", "value" : 58500 }, { "_id" : "118", "value" : 59000 }, { "_id" : "119", "value" : 59500 }, { "_id" : "12", "value" : 6000 }, { "_id" : "120", "value" : 60000 }, { "_id" : "121", "value" : 60500 }, { "_id" : "122", "value" : 61000 }, { "_id" : "123", "value" : 61500 }, { "_id" : "124", "value" : 62000 }, { "_id" : "125", "value" : 62500 }, { "_id" : "126", "value" : 63000 }, { "_id" : "127", "value" : 63500 }, { "_id" : "128", "value" : 64000 }, { "_id" : "129", "value" : 64500 }, { "_id" : "13", "value" : 6500 }, { "_id" : "130", "value" : 65000 }, { "_id" : "131", "value" : 65500 }, { "_id" : "132", "value" : 66000 }, { "_id" : "133", "value" : 66500 }, { "_id" : "134", "value" : 67000 }, { "_id" : "135", "value" : 67500 }, { "_id" : "136", "value" : 68000 }, { "_id" : "137", "value" : 68500 }, { 
"_id" : "138", "value" : 69000 }, { "_id" : "139", "value" : 69500 }, { "_id" : "14", "value" : 7000 }, { "_id" : "140", "value" : 70000 }, { "_id" : "141", "value" : 70500 }, { "_id" : "142", "value" : 71000 }, { "_id" : "143", "value" : 71500 }, { "_id" : "144", "value" : 72000 }, { "_id" : "145", "value" : 72500 }, { "_id" : "146", "value" : 73000 }, { "_id" : "147", "value" : 73500 }, { "_id" : "148", "value" : 74000 }, { "_id" : "149", "value" : 74500 }, { "_id" : "15", "value" : 7500 }, { "_id" : "150", "value" : 75000 }, { "_id" : "151", "value" : 75500 }, { "_id" : "152", "value" : 76000 }, { "_id" : "153", "value" : 76500 }, { "_id" : "154", "value" : 77000 }, { "_id" : "155", "value" : 77500 }, { "_id" : "156", "value" : 78000 }, { "_id" : "157", "value" : 78500 }, { "_id" : "158", "value" : 79000 }, { "_id" : "159", "value" : 79500 }, { "_id" : "16", "value" : 8000 }, { "_id" : "160", "value" : 80000 }, { "_id" : "161", "value" : 80500 }, { "_id" : "162", "value" : 81000 }, { "_id" : "163", "value" : 81500 }, { "_id" : "164", "value" : 82000 }, { "_id" : "165", "value" : 82500 }, { "_id" : "166", "value" : 83000 }, { "_id" : "167", "value" : 83500 }, { "_id" : "168", "value" : 84000 }, { "_id" : "169", "value" : 84500 }, { "_id" : "17", "value" : 8500 }, { "_id" : "170", "value" : 85000 }, { "_id" : "171", "value" : 85500 }, { "_id" : "172", "value" : 86000 }, { "_id" : "173", "value" : 86500 }, { "_id" : "174", "value" : 87000 }, { "_id" : "175", "value" : 87500 }, { "_id" : "176", "value" : 88000 }, { "_id" : "177", "value" : 88500 }, { "_id" : "178", "value" : 89000 }, { "_id" : "179", "value" : 89500 }, { "_id" : "18", "value" : 9000 }, { "_id" : "180", "value" : 90000 }, { "_id" : "181", "value" : 90500 }, { "_id" : "182", "value" : 91000 }, { "_id" : "183", "value" : 91500 }, { "_id" : "184", "value" : 92000 }, { "_id" : "185", "value" : 92500 }, { "_id" : "186", "value" : 93000 }, { "_id" : "187", "value" : 93500 }, { "_id" : "188", "value" : 
94000 }, { "_id" : "189", "value" : 94500 }, { "_id" : "19", "value" : 9500 }, { "_id" : "190", "value" : 95000 }, { "_id" : "191", "value" : 95500 }, { "_id" : "192", "value" : 96000 }, { "_id" : "193", "value" : 96500 }, { "_id" : "194", "value" : 97000 }, { "_id" : "195", "value" : 97500 }, { "_id" : "196", "value" : 98000 }, { "_id" : "197", "value" : 98500 }, { "_id" : "198", "value" : 99000 }, { "_id" : "199", "value" : 99500 }, { "_id" : "2", "value" : 1000 }, { "_id" : "20", "value" : 10000 }, { "_id" : "200", "value" : 100000 }, { "_id" : "201", "value" : 100500 }, { "_id" : "202", "value" : 101000 }, { "_id" : "203", "value" : 101500 }, { "_id" : "204", "value" : 102000 }, { "_id" : "205", "value" : 102500 }, { "_id" : "206", "value" : 103000 }, { "_id" : "207", "value" : 103500 }, { "_id" : "208", "value" : 104000 }, { "_id" : "209", "value" : 104500 }, { "_id" : "21", "value" : 10500 }, { "_id" : "210", "value" : 105000 }, { "_id" : "211", "value" : 105500 }, { "_id" : "212", "value" : 106000 }, { "_id" : "213", "value" : 106500 }, { "_id" : "214", "value" : 107000 }, { "_id" : "215", "value" : 107500 }, { "_id" : "216", "value" : 108000 }, { "_id" : "217", "value" : 108500 }, { "_id" : "218", "value" : 109000 }, { "_id" : "219", "value" : 109500 }, { "_id" : "22", "value" : 11000 }, { "_id" : "220", "value" : 110000 }, { "_id" : "221", "value" : 110500 }, { "_id" : "222", "value" : 111000 }, { "_id" : "223", "value" : 111500 }, { "_id" : "224", "value" : 112000 }, { "_id" : "225", "value" : 112500 }, { "_id" : "226", "value" : 113000 }, { "_id" : "227", "value" : 113500 }, { "_id" : "228", "value" : 114000 }, { "_id" : "229", "value" : 114500 }, { "_id" : "23", "value" : 11500 }, { "_id" : "230", "value" : 115000 }, { "_id" : "231", "value" : 115500 }, { "_id" : "232", "value" : 116000 }, { "_id" : "233", "value" : 116500 }, { "_id" : "234", "value" : 117000 }, { "_id" : "235", "value" : 117500 }, { "_id" : "236", "value" : 118000 }, { "_id" : "237", 
"value" : 118500 }, { "_id" : "238", "value" : 119000 }, { "_id" : "239", "value" : 119500 }, { "_id" : "24", "value" : 12000 }, { "_id" : "240", "value" : 120000 }, { "_id" : "241", "value" : 120500 }, { "_id" : "242", "value" : 121000 }, { "_id" : "243", "value" : 121500 }, { "_id" : "244", "value" : 122000 }, { "_id" : "245", "value" : 122500 }, { "_id" : "246", "value" : 123000 }, { "_id" : "247", "value" : 123500 }, { "_id" : "248", "value" : 124000 }, { "_id" : "249", "value" : 124500 }, { "_id" : "25", "value" : 12500 }, { "_id" : "250", "value" : 125000 }, { "_id" : "251", "value" : 125500 }, { "_id" : "252", "value" : 126000 }, { "_id" : "253", "value" : 126500 }, { "_id" : "254", "value" : 127000 }, { "_id" : "255", "value" : 127500 }, { "_id" : "256", "value" : 128000 }, { "_id" : "257", "value" : 128500 }, { "_id" : "258", "value" : 129000 }, { "_id" : "259", "value" : 129500 }, { "_id" : "26", "value" : 13000 }, { "_id" : "260", "value" : 130000 }, { "_id" : "261", "value" : 130500 }, { "_id" : "262", "value" : 131000 }, { "_id" : "263", "value" : 131500 }, { "_id" : "264", "value" : 132000 }, { "_id" : "265", "value" : 132500 }, { "_id" : "266", "value" : 133000 }, { "_id" : "267", "value" : 133500 }, { "_id" : "268", "value" : 134000 }, { "_id" : "269", "value" : 134500 }, { "_id" : "27", "value" : 13500 }, { "_id" : "270", "value" : 135000 }, { "_id" : "271", "value" : 135500 }, { "_id" : "272", "value" : 136000 }, { "_id" : "273", "value" : 136500 }, { "_id" : "274", "value" : 137000 }, { "_id" : "275", "value" : 137500 }, { "_id" : "276", "value" : 138000 }, { "_id" : "277", "value" : 138500 }, { "_id" : "278", "value" : 139000 }, { "_id" : "279", "value" : 139500 }, { "_id" : "28", "value" : 14000 }, { "_id" : "280", "value" : 140000 }, { "_id" : "281", "value" : 140500 }, { "_id" : "282", "value" : 141000 }, { "_id" : "283", "value" : 141500 }, { "_id" : "284", "value" : 142000 }, { "_id" : "285", "value" : 142500 }, { "_id" : "286", "value" : 
143000 }, { "_id" : "287", "value" : 143500 }, { "_id" : "288", "value" : 144000 }, { "_id" : "289", "value" : 144500 }, { "_id" : "29", "value" : 14500 }, { "_id" : "290", "value" : 145000 }, { "_id" : "291", "value" : 145500 }, { "_id" : "292", "value" : 146000 }, { "_id" : "293", "value" : 146500 }, { "_id" : "294", "value" : 147000 }, { "_id" : "295", "value" : 147500 }, { "_id" : "296", "value" : 148000 }, { "_id" : "297", "value" : 148500 }, { "_id" : "298", "value" : 149000 }, { "_id" : "299", "value" : 149500 }, { "_id" : "3", "value" : 1500 }, { "_id" : "30", "value" : 15000 }, { "_id" : "300", "value" : 150000 }, { "_id" : "301", "value" : 150500 }, { "_id" : "302", "value" : 151000 }, { "_id" : "303", "value" : 151500 }, { "_id" : "304", "value" : 152000 }, { "_id" : "305", "value" : 152500 }, { "_id" : "306", "value" : 153000 }, { "_id" : "307", "value" : 153500 }, { "_id" : "308", "value" : 154000 }, { "_id" : "309", "value" : 154500 }, { "_id" : "31", "value" : 15500 }, { "_id" : "310", "value" : 155000 }, { "_id" : "311", "value" : 155500 }, { "_id" : "312", "value" : 156000 }, { "_id" : "313", "value" : 156500 }, { "_id" : "314", "value" : 157000 }, { "_id" : "315", "value" : 157500 }, { "_id" : "316", "value" : 158000 }, { "_id" : "317", "value" : 158500 }, { "_id" : "318", "value" : 159000 }, { "_id" : "319", "value" : 159500 }, { "_id" : "32", "value" : 16000 }, { "_id" : "320", "value" : 160000 }, { "_id" : "321", "value" : 160500 }, { "_id" : "322", "value" : 161000 }, { "_id" : "323", "value" : 161500 }, { "_id" : "324", "value" : 162000 }, { "_id" : "325", "value" : 162500 }, { "_id" : "326", "value" : 163000 }, { "_id" : "327", "value" : 163500 }, { "_id" : "328", "value" : 164000 }, { "_id" : "329", "value" : 164500 }, { "_id" : "33", "value" : 16500 }, { "_id" : "330", "value" : 165000 }, { "_id" : "331", "value" : 165500 }, { "_id" : "332", "value" : 166000 }, { "_id" : "333", "value" : 166500 }, { "_id" : "334", "value" : 167000 }, { 
"_id" : "335", "value" : 167500 }, { "_id" : "336", "value" : 168000 }, { "_id" : "337", "value" : 168500 }, { "_id" : "338", "value" : 169000 }, { "_id" : "339", "value" : 169500 }, { "_id" : "34", "value" : 17000 }, { "_id" : "340", "value" : 170000 }, { "_id" : "341", "value" : 170500 }, { "_id" : "342", "value" : 171000 }, { "_id" : "343", "value" : 171500 }, { "_id" : "344", "value" : 172000 }, { "_id" : "345", "value" : 172500 }, { "_id" : "346", "value" : 173000 }, { "_id" : "347", "value" : 173500 }, { "_id" : "348", "value" : 174000 }, { "_id" : "349", "value" : 174500 }, { "_id" : "35", "value" : 17500 }, { "_id" : "350", "value" : 175000 }, { "_id" : "351", "value" : 175500 }, { "_id" : "352", "value" : 176000 }, { "_id" : "353", "value" : 176500 }, { "_id" : "354", "value" : 177000 }, { "_id" : "355", "value" : 177500 }, { "_id" : "356", "value" : 178000 }, { "_id" : "357", "value" : 178500 }, { "_id" : "358", "value" : 179000 }, { "_id" : "359", "value" : 179500 }, { "_id" : "36", "value" : 18000 }, { "_id" : "360", "value" : 180000 }, { "_id" : "361", "value" : 180500 }, { "_id" : "362", "value" : 181000 }, { "_id" : "363", "value" : 181500 }, { "_id" : "364", "value" : 182000 }, { "_id" : "365", "value" : 182500 }, { "_id" : "366", "value" : 183000 }, { "_id" : "367", "value" : 183500 }, { "_id" : "368", "value" : 184000 }, { "_id" : "369", "value" : 184500 }, { "_id" : "37", "value" : 18500 }, { "_id" : "370", "value" : 185000 }, { "_id" : "371", "value" : 185500 }, { "_id" : "372", "value" : 186000 }, { "_id" : "373", "value" : 186500 }, { "_id" : "374", "value" : 187000 }, { "_id" : "375", "value" : 187500 }, { "_id" : "376", "value" : 188000 }, { "_id" : "377", "value" : 188500 }, { "_id" : "378", "value" : 189000 }, { "_id" : "379", "value" : 189500 }, { "_id" : "38", "value" : 19000 }, { "_id" : "380", "value" : 190000 }, { "_id" : "381", "value" : 190500 }, { "_id" : "382", "value" : 191000 }, { "_id" : "383", "value" : 191500 }, { "_id" : 
"384", "value" : 192000 }, { "_id" : "385", "value" : 192500 }, { "_id" : "386", "value" : 193000 }, { "_id" : "387", "value" : 193500 }, { "_id" : "388", "value" : 194000 }, { "_id" : "389", "value" : 194500 }, { "_id" : "39", "value" : 19500 }, { "_id" : "390", "value" : 195000 }, { "_id" : "391", "value" : 195500 }, { "_id" : "392", "value" : 196000 }, { "_id" : "393", "value" : 196500 }, { "_id" : "394", "value" : 197000 }, { "_id" : "395", "value" : 197500 }, { "_id" : "396", "value" : 198000 }, { "_id" : "397", "value" : 198500 }, { "_id" : "398", "value" : 199000 }, { "_id" : "399", "value" : 199500 }, { "_id" : "4", "value" : 2000 }, { "_id" : "40", "value" : 20000 }, { "_id" : "400", "value" : 200000 }, { "_id" : "401", "value" : 200500 }, { "_id" : "402", "value" : 201000 }, { "_id" : "403", "value" : 201500 }, { "_id" : "404", "value" : 202000 }, { "_id" : "405", "value" : 202500 }, { "_id" : "406", "value" : 203000 }, { "_id" : "407", "value" : 203500 }, { "_id" : "408", "value" : 204000 }, { "_id" : "409", "value" : 204500 }, { "_id" : "41", "value" : 20500 }, { "_id" : "410", "value" : 205000 }, { "_id" : "411", "value" : 205500 }, { "_id" : "412", "value" : 206000 }, { "_id" : "413", "value" : 206500 }, { "_id" : "414", "value" : 207000 }, { "_id" : "415", "value" : 207500 }, { "_id" : "416", "value" : 208000 }, { "_id" : "417", "value" : 208500 }, { "_id" : "418", "value" : 209000 }, { "_id" : "419", "value" : 209500 }, { "_id" : "42", "value" : 21000 }, { "_id" : "420", "value" : 210000 }, { "_id" : "421", "value" : 210500 }, { "_id" : "422", "value" : 211000 }, { "_id" : "423", "value" : 211500 }, { "_id" : "424", "value" : 212000 }, { "_id" : "425", "value" : 212500 }, { "_id" : "426", "value" : 213000 }, { "_id" : "427", "value" : 213500 }, { "_id" : "428", "value" : 214000 }, { "_id" : "429", "value" : 214500 }, { "_id" : "43", "value" : 21500 }, { "_id" : "430", "value" : 215000 }, { "_id" : "431", "value" : 215500 }, { "_id" : "432", "value" 
: 216000 }, { "_id" : "433", "value" : 216500 }, { "_id" : "434", "value" : 217000 }, { "_id" : "435", "value" : 217500 }, { "_id" : "436", "value" : 218000 }, { "_id" : "437", "value" : 218500 }, { "_id" : "438", "value" : 219000 }, { "_id" : "439", "value" : 219500 }, { "_id" : "44", "value" : 22000 }, { "_id" : "440", "value" : 220000 }, { "_id" : "441", "value" : 220500 }, { "_id" : "442", "value" : 221000 }, { "_id" : "443", "value" : 221500 }, { "_id" : "444", "value" : 222000 }, { "_id" : "445", "value" : 222500 }, { "_id" : "446", "value" : 223000 }, { "_id" : "447", "value" : 223500 }, { "_id" : "448", "value" : 224000 }, { "_id" : "449", "value" : 224500 }, { "_id" : "45", "value" : 22500 }, { "_id" : "450", "value" : 225000 }, { "_id" : "451", "value" : 225500 }, { "_id" : "452", "value" : 226000 }, { "_id" : "453", "value" : 226500 }, { "_id" : "454", "value" : 227000 }, { "_id" : "455", "value" : 227500 }, { "_id" : "456", "value" : 228000 }, { "_id" : "457", "value" : 228500 }, { "_id" : "458", "value" : 229000 }, { "_id" : "459", "value" : 229500 }, { "_id" : "46", "value" : 23000 }, { "_id" : "460", "value" : 230000 }, { "_id" : "461", "value" : 230500 }, { "_id" : "462", "value" : 231000 }, { "_id" : "463", "value" : 231500 }, { "_id" : "464", "value" : 232000 }, { "_id" : "465", "value" : 232500 }, { "_id" : "466", "value" : 233000 }, { "_id" : "467", "value" : 233500 }, { "_id" : "468", "value" : 234000 }, { "_id" : "469", "value" : 234500 }, { "_id" : "47", "value" : 23500 }, { "_id" : "470", "value" : 235000 }, { "_id" : "471", "value" : 235500 }, { "_id" : "472", "value" : 236000 }, { "_id" : "473", "value" : 236500 }, { "_id" : "474", "value" : 237000 }, { "_id" : "475", "value" : 237500 }, { "_id" : "476", "value" : 238000 }, { "_id" : "477", "value" : 238500 }, { "_id" : "478", "value" : 239000 }, { "_id" : "479", "value" : 239500 }, { "_id" : "48", "value" : 24000 }, { "_id" : "480", "value" : 240000 }, { "_id" : "481", "value" : 240500 }, 
{ "_id" : "482", "value" : 241000 }, { "_id" : "483", "value" : 241500 }, { "_id" : "484", "value" : 242000 }, { "_id" : "485", "value" : 242500 }, { "_id" : "486", "value" : 243000 }, { "_id" : "487", "value" : 243500 }, { "_id" : "488", "value" : 244000 }, { "_id" : "489", "value" : 244500 }, { "_id" : "49", "value" : 24500 }, { "_id" : "490", "value" : 245000 }, { "_id" : "491", "value" : 245500 }, { "_id" : "492", "value" : 246000 }, { "_id" : "493", "value" : 246500 }, { "_id" : "494", "value" : 247000 }, { "_id" : "495", "value" : 247500 }, { "_id" : "496", "value" : 248000 }, { "_id" : "497", "value" : 248500 }, { "_id" : "498", "value" : 249000 }, { "_id" : "499", "value" : 249500 }, { "_id" : "5", "value" : 2500 }, { "_id" : "50", "value" : 25000 }, { "_id" : "500", "value" : 250000 }, { "_id" : "501", "value" : 250500 }, { "_id" : "502", "value" : 251000 }, { "_id" : "503", "value" : 251500 }, { "_id" : "504", "value" : 252000 }, { "_id" : "505", "value" : 252500 }, { "_id" : "506", "value" : 253000 }, { "_id" : "507", "value" : 253500 }, { "_id" : "508", "value" : 254000 }, { "_id" : "509", "value" : 254500 }, { "_id" : "51", "value" : 25500 }, { "_id" : "510", "value" : 255000 }, { "_id" : "511", "value" : 255500 }, { "_id" : "512", "value" : 256000 }, { "_id" : "513", "value" : 256500 }, { "_id" : "514", "value" : 257000 }, { "_id" : "515", "value" : 257500 }, { "_id" : "516", "value" : 258000 }, { "_id" : "517", "value" : 258500 }, { "_id" : "518", "value" : 259000 }, { "_id" : "519", "value" : 259500 }, { "_id" : "52", "value" : 26000 }, { "_id" : "520", "value" : 260000 }, { "_id" : "521", "value" : 260500 }, { "_id" : "522", "value" : 261000 }, { "_id" : "523", "value" : 261500 }, { "_id" : "524", "value" : 262000 }, { "_id" : "525", "value" : 262500 }, { "_id" : "526", "value" : 263000 }, { "_id" : "527", "value" : 263500 }, { "_id" : "528", "value" : 264000 }, { "_id" : "529", "value" : 264500 }, { "_id" : "53", "value" : 26500 }, { "_id" : 
"530", "value" : 265000 }, { "_id" : "531", "value" : 265500 }, { "_id" : "532", "value" : 266000 }, { "_id" : "533", "value" : 266500 }, { "_id" : "534", "value" : 267000 }, { "_id" : "535", "value" : 267500 }, { "_id" : "536", "value" : 268000 }, { "_id" : "537", "value" : 268500 }, { "_id" : "538", "value" : 269000 }, { "_id" : "539", "value" : 269500 }, { "_id" : "54", "value" : 27000 }, { "_id" : "540", "value" : 270000 }, { "_id" : "541", "value" : 270500 }, { "_id" : "542", "value" : 271000 }, { "_id" : "543", "value" : 271500 }, { "_id" : "544", "value" : 272000 }, { "_id" : "545", "value" : 272500 }, { "_id" : "546", "value" : 273000 }, { "_id" : "547", "value" : 273500 }, { "_id" : "548", "value" : 274000 }, { "_id" : "549", "value" : 274500 }, { "_id" : "55", "value" : 27500 }, { "_id" : "550", "value" : 275000 }, { "_id" : "551", "value" : 275500 }, { "_id" : "552", "value" : 276000 }, { "_id" : "553", "value" : 276500 }, { "_id" : "554", "value" : 277000 }, { "_id" : "555", "value" : 277500 }, { "_id" : "556", "value" : 278000 }, { "_id" : "557", "value" : 278500 }, { "_id" : "558", "value" : 279000 }, { "_id" : "559", "value" : 279500 }, { "_id" : "56", "value" : 28000 }, { "_id" : "560", "value" : 280000 }, { "_id" : "561", "value" : 280500 }, { "_id" : "562", "value" : 281000 }, { "_id" : "563", "value" : 281500 }, { "_id" : "564", "value" : 282000 }, { "_id" : "565", "value" : 282500 }, { "_id" : "566", "value" : 283000 }, { "_id" : "567", "value" : 283500 }, { "_id" : "568", "value" : 284000 }, { "_id" : "569", "value" : 284500 }, { "_id" : "57", "value" : 28500 }, { "_id" : "570", "value" : 285000 }, { "_id" : "571", "value" : 285500 }, { "_id" : "572", "value" : 286000 }, { "_id" : "573", "value" : 286500 }, { "_id" : "574", "value" : 287000 }, { "_id" : "575", "value" : 287500 }, { "_id" : "576", "value" : 288000 }, { "_id" : "577", "value" : 288500 }, { "_id" : "578", "value" : 289000 }, { "_id" : "579", "value" : 289500 }, { "_id" : "58", 
"value" : 29000 }, { "_id" : "580", "value" : 290000 }, { "_id" : "581", "value" : 290500 }, { "_id" : "582", "value" : 291000 }, { "_id" : "583", "value" : 291500 }, { "_id" : "584", "value" : 292000 }, { "_id" : "585", "value" : 292500 }, { "_id" : "586", "value" : 293000 }, { "_id" : "587", "value" : 293500 }, { "_id" : "588", "value" : 294000 }, { "_id" : "589", "value" : 294500 }, { "_id" : "59", "value" : 29500 }, { "_id" : "590", "value" : 295000 }, { "_id" : "591", "value" : 295500 }, { "_id" : "592", "value" : 296000 }, { "_id" : "593", "value" : 296500 }, { "_id" : "594", "value" : 297000 }, { "_id" : "595", "value" : 297500 }, { "_id" : "596", "value" : 298000 }, { "_id" : "597", "value" : 298500 }, { "_id" : "598", "value" : 299000 }, { "_id" : "599", "value" : 299500 }, { "_id" : "6", "value" : 3000 }, { "_id" : "60", "value" : 30000 }, { "_id" : "600", "value" : 300000 }, { "_id" : "601", "value" : 300500 }, { "_id" : "602", "value" : 301000 }, { "_id" : "603", "value" : 301500 }, { "_id" : "604", "value" : 302000 }, { "_id" : "605", "value" : 302500 }, { "_id" : "606", "value" : 303000 }, { "_id" : "607", "value" : 303500 }, { "_id" : "608", "value" : 304000 }, { "_id" : "609", "value" : 304500 }, { "_id" : "61", "value" : 30500 }, { "_id" : "610", "value" : 305000 }, { "_id" : "611", "value" : 305500 }, { "_id" : "612", "value" : 306000 }, { "_id" : "613", "value" : 306500 }, { "_id" : "614", "value" : 307000 }, { "_id" : "615", "value" : 307500 }, { "_id" : "616", "value" : 308000 }, { "_id" : "617", "value" : 308500 }, { "_id" : "618", "value" : 309000 }, { "_id" : "619", "value" : 309500 }, { "_id" : "62", "value" : 31000 }, { "_id" : "620", "value" : 310000 }, { "_id" : "621", "value" : 310500 }, { "_id" : "622", "value" : 311000 }, { "_id" : "623", "value" : 311500 }, { "_id" : "624", "value" : 312000 }, { "_id" : "625", "value" : 312500 }, { "_id" : "626", "value" : 313000 }, { "_id" : "627", "value" : 313500 }, { "_id" : "628", "value" : 
314000 }, { "_id" : "629", "value" : 314500 }, { "_id" : "63", "value" : 31500 }, { "_id" : "630", "value" : 315000 }, { "_id" : "631", "value" : 315500 }, { "_id" : "632", "value" : 316000 }, { "_id" : "633", "value" : 316500 }, { "_id" : "634", "value" : 317000 }, { "_id" : "635", "value" : 317500 }, { "_id" : "636", "value" : 318000 }, { "_id" : "637", "value" : 318500 }, { "_id" : "638", "value" : 319000 }, { "_id" : "639", "value" : 319500 }, { "_id" : "64", "value" : 32000 }, { "_id" : "640", "value" : 320000 }, { "_id" : "641", "value" : 320500 }, { "_id" : "642", "value" : 321000 }, { "_id" : "643", "value" : 321500 }, { "_id" : "644", "value" : 322000 }, { "_id" : "645", "value" : 322500 }, { "_id" : "646", "value" : 323000 }, { "_id" : "647", "value" : 323500 }, { "_id" : "648", "value" : 324000 }, { "_id" : "649", "value" : 324500 }, { "_id" : "65", "value" : 32500 }, { "_id" : "650", "value" : 325000 }, { "_id" : "651", "value" : 325500 }, { "_id" : "652", "value" : 326000 }, { "_id" : "653", "value" : 326500 }, { "_id" : "654", "value" : 327000 }, { "_id" : "655", "value" : 327500 }, { "_id" : "656", "value" : 328000 }, { "_id" : "657", "value" : 328500 }, { "_id" : "658", "value" : 329000 }, { "_id" : "659", "value" : 329500 }, { "_id" : "66", "value" : 33000 }, { "_id" : "660", "value" : 330000 }, { "_id" : "661", "value" : 330500 }, { "_id" : "662", "value" : 331000 }, { "_id" : "663", "value" : 331500 }, { "_id" : "664", "value" : 332000 }, { "_id" : "665", "value" : 332500 }, { "_id" : "666", "value" : 333000 }, { "_id" : "667", "value" : 333500 }, { "_id" : "668", "value" : 334000 }, { "_id" : "669", "value" : 334500 }, { "_id" : "67", "value" : 33500 }, { "_id" : "670", "value" : 335000 }, { "_id" : "671", "value" : 335500 }, { "_id" : "672", "value" : 336000 }, { "_id" : "673", "value" : 336500 }, { "_id" : "674", "value" : 337000 }, { "_id" : "675", "value" : 337500 }, { "_id" : "676", "value" : 338000 }, { "_id" : "677", "value" : 338500 }, { 
"_id" : "678", "value" : 339000 }, { "_id" : "679", "value" : 339500 }, { "_id" : "68", "value" : 34000 }, { "_id" : "680", "value" : 340000 }, { "_id" : "681", "value" : 340500 }, { "_id" : "682", "value" : 341000 }, { "_id" : "683", "value" : 341500 }, { "_id" : "684", "value" : 342000 }, { "_id" : "685", "value" : 342500 }, { "_id" : "686", "value" : 343000 }, { "_id" : "687", "value" : 343500 }, { "_id" : "688", "value" : 344000 }, { "_id" : "689", "value" : 344500 }, { "_id" : "69", "value" : 34500 }, { "_id" : "690", "value" : 345000 }, { "_id" : "691", "value" : 345500 }, { "_id" : "692", "value" : 346000 }, { "_id" : "693", "value" : 346500 }, { "_id" : "694", "value" : 347000 }, { "_id" : "695", "value" : 347500 }, { "_id" : "696", "value" : 348000 }, { "_id" : "697", "value" : 348500 }, { "_id" : "698", "value" : 349000 }, { "_id" : "699", "value" : 349500 }, { "_id" : "7", "value" : 3500 }, { "_id" : "70", "value" : 35000 }, { "_id" : "700", "value" : 350000 }, { "_id" : "701", "value" : 350500 }, { "_id" : "702", "value" : 351000 }, { "_id" : "703", "value" : 351500 }, { "_id" : "704", "value" : 352000 }, { "_id" : "705", "value" : 352500 }, { "_id" : "706", "value" : 353000 }, { "_id" : "707", "value" : 353500 }, { "_id" : "708", "value" : 354000 }, { "_id" : "709", "value" : 354500 }, { "_id" : "71", "value" : 35500 }, { "_id" : "710", "value" : 355000 }, { "_id" : "711", "value" : 355500 }, { "_id" : "712", "value" : 356000 }, { "_id" : "713", "value" : 356500 }, { "_id" : "714", "value" : 357000 }, { "_id" : "715", "value" : 357500 }, { "_id" : "716", "value" : 358000 }, { "_id" : "717", "value" : 358500 }, { "_id" : "718", "value" : 359000 }, { "_id" : "719", "value" : 359500 }, { "_id" : "72", "value" : 36000 }, { "_id" : "720", "value" : 360000 }, { "_id" : "721", "value" : 360500 }, { "_id" : "722", "value" : 361000 }, { "_id" : "723", "value" : 361500 }, { "_id" : "724", "value" : 362000 }, { "_id" : "725", "value" : 362500 }, { "_id" : "726", 
"value" : 363000 }, { "_id" : "727", "value" : 363500 }, { "_id" : "728", "value" : 364000 }, { "_id" : "729", "value" : 364500 }, { "_id" : "73", "value" : 36500 }, { "_id" : "730", "value" : 365000 }, { "_id" : "731", "value" : 365500 }, { "_id" : "732", "value" : 366000 }, { "_id" : "733", "value" : 366500 }, { "_id" : "734", "value" : 367000 }, { "_id" : "735", "value" : 367500 }, { "_id" : "736", "value" : 368000 }, { "_id" : "737", "value" : 368500 }, { "_id" : "738", "value" : 369000 }, { "_id" : "739", "value" : 369500 }, { "_id" : "74", "value" : 37000 }, { "_id" : "740", "value" : 370000 }, { "_id" : "741", "value" : 370500 }, { "_id" : "742", "value" : 371000 }, { "_id" : "743", "value" : 371500 }, { "_id" : "744", "value" : 372000 }, { "_id" : "745", "value" : 372500 }, { "_id" : "746", "value" : 373000 }, { "_id" : "747", "value" : 373500 }, { "_id" : "748", "value" : 374000 }, { "_id" : "749", "value" : 374500 }, { "_id" : "75", "value" : 37500 }, { "_id" : "750", "value" : 375000 }, { "_id" : "751", "value" : 375500 }, { "_id" : "752", "value" : 376000 }, { "_id" : "753", "value" : 376500 }, { "_id" : "754", "value" : 377000 }, { "_id" : "755", "value" : 377500 }, { "_id" : "756", "value" : 378000 }, { "_id" : "757", "value" : 378500 }, { "_id" : "758", "value" : 379000 }, { "_id" : "759", "value" : 379500 }, { "_id" : "76", "value" : 38000 }, { "_id" : "760", "value" : 380000 }, { "_id" : "761", "value" : 380500 }, { "_id" : "762", "value" : 381000 }, { "_id" : "763", "value" : 381500 }, { "_id" : "764", "value" : 382000 }, { "_id" : "765", "value" : 382500 }, { "_id" : "766", "value" : 383000 }, { "_id" : "767", "value" : 383500 }, { "_id" : "768", "value" : 384000 }, { "_id" : "769", "value" : 384500 }, { "_id" : "77", "value" : 38500 }, { "_id" : "770", "value" : 385000 }, { "_id" : "771", "value" : 385500 }, { "_id" : "772", "value" : 386000 }, { "_id" : "773", "value" : 386500 }, { "_id" : "774", "value" : 387000 }, { "_id" : "775", "value" : 
387500 }, { "_id" : "776", "value" : 388000 }, { "_id" : "777", "value" : 388500 }, { "_id" : "778", "value" : 389000 }, { "_id" : "779", "value" : 389500 }, { "_id" : "78", "value" : 39000 }, { "_id" : "780", "value" : 390000 }, { "_id" : "781", "value" : 390500 }, { "_id" : "782", "value" : 391000 }, { "_id" : "783", "value" : 391500 }, { "_id" : "784", "value" : 392000 }, { "_id" : "785", "value" : 392500 }, { "_id" : "786", "value" : 393000 }, { "_id" : "787", "value" : 393500 }, { "_id" : "788", "value" : 394000 }, { "_id" : "789", "value" : 394500 }, { "_id" : "79", "value" : 39500 }, { "_id" : "790", "value" : 395000 }, { "_id" : "791", "value" : 395500 }, { "_id" : "792", "value" : 396000 }, { "_id" : "793", "value" : 396500 }, { "_id" : "794", "value" : 397000 }, { "_id" : "795", "value" : 397500 }, { "_id" : "796", "value" : 398000 }, { "_id" : "797", "value" : 398500 }, { "_id" : "798", "value" : 399000 }, { "_id" : "799", "value" : 399500 }, { "_id" : "8", "value" : 4000 }, { "_id" : "80", "value" : 40000 }, { "_id" : "800", "value" : 400000 }, { "_id" : "801", "value" : 400500 }, { "_id" : "802", "value" : 401000 }, { "_id" : "803", "value" : 401500 }, { "_id" : "804", "value" : 402000 }, { "_id" : "805", "value" : 402500 }, { "_id" : "806", "value" : 403000 }, { "_id" : "807", "value" : 403500 }, { "_id" : "808", "value" : 404000 }, { "_id" : "809", "value" : 404500 }, { "_id" : "81", "value" : 40500 }, { "_id" : "810", "value" : 405000 }, { "_id" : "811", "value" : 405500 }, { "_id" : "812", "value" : 406000 }, { "_id" : "813", "value" : 406500 }, { "_id" : "814", "value" : 407000 }, { "_id" : "815", "value" : 407500 }, { "_id" : "816", "value" : 408000 }, { "_id" : "817", "value" : 408500 }, { "_id" : "818", "value" : 409000 }, { "_id" : "819", "value" : 409500 }, { "_id" : "82", "value" : 41000 }, { "_id" : "820", "value" : 410000 }, { "_id" : "821", "value" : 410500 }, { "_id" : "822", "value" : 411000 }, { "_id" : "823", "value" : 411500 }, { 
"_id" : "824", "value" : 412000 }, { "_id" : "825", "value" : 412500 }, { "_id" : "826", "value" : 413000 }, { "_id" : "827", "value" : 413500 }, { "_id" : "828", "value" : 414000 }, { "_id" : "829", "value" : 414500 }, { "_id" : "83", "value" : 41500 }, { "_id" : "830", "value" : 415000 }, { "_id" : "831", "value" : 415500 }, { "_id" : "832", "value" : 416000 }, { "_id" : "833", "value" : 416500 }, { "_id" : "834", "value" : 417000 }, { "_id" : "835", "value" : 417500 }, { "_id" : "836", "value" : 418000 }, { "_id" : "837", "value" : 418500 }, { "_id" : "838", "value" : 419000 }, { "_id" : "839", "value" : 419500 }, { "_id" : "84", "value" : 42000 }, { "_id" : "840", "value" : 420000 }, { "_id" : "841", "value" : 420500 }, { "_id" : "842", "value" : 421000 }, { "_id" : "843", "value" : 421500 }, { "_id" : "844", "value" : 422000 }, { "_id" : "845", "value" : 422500 }, { "_id" : "846", "value" : 423000 }, { "_id" : "847", "value" : 423500 }, { "_id" : "848", "value" : 424000 }, { "_id" : "849", "value" : 424500 }, { "_id" : "85", "value" : 42500 }, { "_id" : "850", "value" : 425000 }, { "_id" : "851", "value" : 425500 }, { "_id" : "852", "value" : 426000 }, { "_id" : "853", "value" : 426500 }, { "_id" : "854", "value" : 427000 }, { "_id" : "855", "value" : 427500 }, { "_id" : "856", "value" : 428000 }, { "_id" : "857", "value" : 428500 }, { "_id" : "858", "value" : 429000 }, { "_id" : "859", "value" : 429500 }, { "_id" : "86", "value" : 43000 }, { "_id" : "860", "value" : 430000 }, { "_id" : "861", "value" : 430500 }, { "_id" : "862", "value" : 431000 }, { "_id" : "863", "value" : 431500 }, { "_id" : "864", "value" : 432000 }, { "_id" : "865", "value" : 432500 }, { "_id" : "866", "value" : 433000 }, { "_id" : "867", "value" : 433500 }, { "_id" : "868", "value" : 434000 }, { "_id" : "869", "value" : 434500 }, { "_id" : "87", "value" : 43500 }, { "_id" : "870", "value" : 435000 }, { "_id" : "871", "value" : 435500 }, { "_id" : "872", "value" : 436000 }, { "_id" : 
"873", "value" : 436500 }, { "_id" : "874", "value" : 437000 }, { "_id" : "875", "value" : 437500 }, { "_id" : "876", "value" : 438000 }, { "_id" : "877", "value" : 438500 }, { "_id" : "878", "value" : 439000 }, { "_id" : "879", "value" : 439500 }, { "_id" : "88", "value" : 44000 }, { "_id" : "880", "value" : 440000 }, { "_id" : "881", "value" : 440500 }, { "_id" : "882", "value" : 441000 }, { "_id" : "883", "value" : 441500 }, { "_id" : "884", "value" : 442000 }, { "_id" : "885", "value" : 442500 }, { "_id" : "886", "value" : 443000 }, { "_id" : "887", "value" : 443500 }, { "_id" : "888", "value" : 444000 }, { "_id" : "889", "value" : 444500 }, { "_id" : "89", "value" : 44500 }, { "_id" : "890", "value" : 445000 }, { "_id" : "891", "value" : 445500 }, { "_id" : "892", "value" : 446000 }, { "_id" : "893", "value" : 446500 }, { "_id" : "894", "value" : 447000 }, { "_id" : "895", "value" : 447500 }, { "_id" : "896", "value" : 448000 }, { "_id" : "897", "value" : 448500 }, { "_id" : "898", "value" : 449000 }, { "_id" : "899", "value" : 449500 }, { "_id" : "9", "value" : 4500 }, { "_id" : "90", "value" : 45000 }, { "_id" : "900", "value" : 450000 }, { "_id" : "901", "value" : 450500 }, { "_id" : "902", "value" : 451000 }, { "_id" : "903", "value" : 451500 }, { "_id" : "904", "value" : 452000 }, { "_id" : "905", "value" : 452500 }, { "_id" : "906", "value" : 453000 }, { "_id" : "907", "value" : 453500 }, { "_id" : "908", "value" : 454000 }, { "_id" : "909", "value" : 454500 }, { "_id" : "91", "value" : 45500 }, { "_id" : "910", "value" : 455000 }, { "_id" : "911", "value" : 455500 }, { "_id" : "912", "value" : 456000 }, { "_id" : "913", "value" : 456500 }, { "_id" : "914", "value" : 457000 }, { "_id" : "915", "value" : 457500 }, { "_id" : "916", "value" : 458000 }, { "_id" : "917", "value" : 458500 }, { "_id" : "918", "value" : 459000 }, { "_id" : "919", "value" : 459500 }, { "_id" : "92", "value" : 46000 }, { "_id" : "920", "value" : 460000 }, { "_id" : "921", "value" 
: 460500 }, { "_id" : "922", "value" : 461000 }, { "_id" : "923", "value" : 461500 }, { "_id" : "924", "value" : 462000 }, { "_id" : "925", "value" : 462500 }, { "_id" : "926", "value" : 463000 }, { "_id" : "927", "value" : 463500 }, { "_id" : "928", "value" : 464000 }, { "_id" : "929", "value" : 464500 }, { "_id" : "93", "value" : 46500 }, { "_id" : "930", "value" : 465000 }, { "_id" : "931", "value" : 465500 }, { "_id" : "932", "value" : 466000 }, { "_id" : "933", "value" : 466500 }, { "_id" : "934", "value" : 467000 }, { "_id" : "935", "value" : 467500 }, { "_id" : "936", "value" : 468000 }, { "_id" : "937", "value" : 468500 }, { "_id" : "938", "value" : 469000 }, { "_id" : "939", "value" : 469500 }, { "_id" : "94", "value" : 47000 }, { "_id" : "940", "value" : 470000 }, { "_id" : "941", "value" : 470500 }, { "_id" : "942", "value" : 471000 }, { "_id" : "943", "value" : 471500 }, { "_id" : "944", "value" : 472000 }, { "_id" : "945", "value" : 472500 }, { "_id" : "946", "value" : 473000 }, { "_id" : "947", "value" : 473500 }, { "_id" : "948", "value" : 474000 }, { "_id" : "949", "value" : 474500 }, { "_id" : "95", "value" : 47500 }, { "_id" : "950", "value" : 475000 }, { "_id" : "951", "value" : 475500 }, { "_id" : "952", "value" : 476000 }, { "_id" : "953", "value" : 476500 }, { "_id" : "954", "value" : 477000 }, { "_id" : "955", "value" : 477500 }, { "_id" : "956", "value" : 478000 }, { "_id" : "957", "value" : 478500 }, { "_id" : "958", "value" : 479000 }, { "_id" : "959", "value" : 479500 }, { "_id" : "96", "value" : 48000 }, { "_id" : "960", "value" : 480000 }, { "_id" : "961", "value" : 480500 }, { "_id" : "962", "value" : 481000 }, { "_id" : "963", "value" : 481500 }, { "_id" : "964", "value" : 482000 }, { "_id" : "965", "value" : 482500 }, { "_id" : "966", "value" : 483000 }, { "_id" : "967", "value" : 483500 }, { "_id" : "968", "value" : 484000 }, { "_id" : "969", "value" : 484500 }, { "_id" : "97", "value" : 48500 }, { "_id" : "970", "value" : 485000 }, 
{ "_id" : "971", "value" : 485500 }, { "_id" : "972", "value" : 486000 }, { "_id" : "973", "value" : 486500 }, { "_id" : "974", "value" : 487000 }, { "_id" : "975", "value" : 487500 }, { "_id" : "976", "value" : 488000 }, { "_id" : "977", "value" : 488500 }, { "_id" : "978", "value" : 489000 }, { "_id" : "979", "value" : 489500 }, { "_id" : "98", "value" : 49000 }, { "_id" : "980", "value" : 490000 }, { "_id" : "981", "value" : 490500 }, { "_id" : "982", "value" : 491000 }, { "_id" : "983", "value" : 491500 }, { "_id" : "984", "value" : 492000 }, { "_id" : "985", "value" : 492500 }, { "_id" : "986", "value" : 493000 }, { "_id" : "987", "value" : 493500 }, { "_id" : "988", "value" : 494000 }, { "_id" : "989", "value" : 494500 }, { "_id" : "99", "value" : 49500 }, { "_id" : "990", "value" : 495000 }, { "_id" : "991", "value" : 495500 }, { "_id" : "992", "value" : 496000 }, { "_id" : "993", "value" : 496500 }, { "_id" : "994", "value" : 497000 }, { "_id" : "995", "value" : 497500 }, { "_id" : "996", "value" : 498000 }, { "_id" : "997", "value" : 498500 }, { "_id" : "998", "value" : 499000 }, { "_id" : "999", "value" : 499500 } ] m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.$cmd", n2skip: 0, n2return: 1, options: 0, query: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30999| var total = 0 m30999| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533830_2", shardedFirstPass: true }, fields: {} } and CInfo { v_ns: "mr_shard_version.coll", filter: {} } m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] initializing over 2 shards required by [mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534] m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] initializing on shard shard0000:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] initialized 
command (lazily) on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] initialized command (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] finishing over 2 shards m30999| Fri Feb 22 11:50:30.574 [conn1] [pcursor] finishing on shard shard0000:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30001| Fri Feb 22 11:50:30.574 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4 m30001| Fri Feb 22 11:50:30.574 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4_inc m30000| Fri Feb 22 11:50:30.575 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2 m30001| Fri Feb 22 11:50:30.575 [conn3] build index mr_shard_version.tmp.mr.coll_4_inc { 0: 1 } m30000| Fri Feb 22 11:50:30.575 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2_inc m30000| Fri Feb 22 11:50:30.575 [conn6] build index mr_shard_version.tmp.mr.coll_2_inc { 0: 1 } m30001| Fri Feb 22 11:50:30.575 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 11:50:30.575 [conn3] build index mr_shard_version.tmp.mr.coll_4 { _id: 1 } m30000| Fri Feb 22 11:50:30.576 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:50:30.576 [conn6] build index mr_shard_version.tmp.mr.coll_2 { _id: 1 } m30000| Fri Feb 22 11:50:30.576 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 11:50:30.577 [conn3] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 11:50:31.541 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 77902, clonedBytes: 3574912, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:32.565 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 80157, clonedBytes: 3678312, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:33.004 [conn3] M/R: (1/3) Emit Progress: 65600/460618 14% m30000| Fri Feb 22 11:50:33.042 [conn6] M/R: (1/3) Emit Progress: 61600/325898 18% m30001| Fri Feb 22 11:50:33.589 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 82850, clonedBytes: 3801970, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:34.614 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 85554, clonedBytes: 3926024, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 
11:50:34.634 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:34.635 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:50:35.638 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 86983, clonedBytes: 3991648, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:36.002 [conn3] M/R: (1/3) Emit Progress: 145200/460618 31% m30000| Fri Feb 22 11:50:36.002 [conn6] M/R: (1/3) Emit Progress: 140600/325898 43% m30001| Fri Feb 22 11:50:36.662 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 90073, clonedBytes: 4133375, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:37.687 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 92166, clonedBytes: 4229406, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:38.711 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 95048, clonedBytes: 4361700, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:39.002 [conn6] M/R: (1/3) Emit Progress: 212500/325898 65% m30001| Fri Feb 22 11:50:39.031 [conn3] M/R: (1/3) Emit Progress: 223200/460618 48% m30001| Fri Feb 22 11:50:39.735 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 
250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 97051, clonedBytes: 4453615, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:40.626 [conn3] CMD: drop mr_shard_version.tmp.mrs.coll_1361533830_2 m30001| Fri Feb 22 11:50:40.631 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4 m30001| Fri Feb 22 11:50:40.631 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4 m30001| Fri Feb 22 11:50:40.631 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4_inc m30001| Fri Feb 22 11:50:40.632 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4 m30001| Fri Feb 22 11:50:40.633 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_4_inc m30001| Fri Feb 22 11:50:40.633 [conn3] command mr_shard_version.$cmd command: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){ m30001| var total = 0 m30001| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533830_2", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 3501 locks(micros) W:5038 r:16314192 w:45025 reslen:149 10059ms m30999| Fri Feb 22 11:50:40.635 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:40.635 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:50:40.760 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 99057, clonedBytes: 4545665, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:41.041 [conn6] CMD: drop mr_shard_version.tmp.mrs.coll_1361533830_2 m30000| Fri Feb 22 11:50:41.044 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2 m30000| Fri Feb 22 11:50:41.044 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2 m30000| Fri Feb 22 11:50:41.044 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2_inc m30000| Fri Feb 22 11:50:41.047 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2 m30000| Fri 
Feb 22 11:50:41.047 [conn6] CMD: drop mr_shard_version.tmp.mr.coll_2_inc
m30000| Fri Feb 22 11:50:41.048 [conn6] command mr_shard_version.$cmd command: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){
m30000| var total = 0
m30000| for( var i = 0; i < value..., out: "tmp.mrs.coll_1361533830_2", shardedFirstPass: true } ntoreturn:1 keyUpdates:0 numYields: 3501 locks(micros) W:3767 r:17022155 w:49361 reslen:149 10473ms
m30999| Fri Feb 22 11:50:41.048 [conn1] [pcursor] finished on shard shard0000:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: { result: "tmp.mrs.coll_1361533830_2", timeMillis: 10469, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:50:41.048 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:41.048 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "mr_shard_version.coll @ 2|1||51275b67d33e7c60dead1534", cursor: { result: "tmp.mrs.coll_1361533830_2", timeMillis: 10056, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:50:41.048 [conn1] MR with single shard output, NS=mr_shard_version.mrOutput primary=shard0001:localhost:30001
m30001| Fri Feb 22 11:50:41.049 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_5
m30001| Fri Feb 22 11:50:41.049 [conn3] build index mr_shard_version.tmp.mr.coll_5 { _id: 1 }
m30001| Fri Feb 22 11:50:41.050 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 11:50:41.149 [conn3] CMD: drop mr_shard_version.mrOutput
m30001| Fri Feb 22 11:50:41.152 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_5
m30001| Fri Feb 22 11:50:41.152 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_5
m30001| Fri Feb 22 11:50:41.152 [conn3] CMD: drop mr_shard_version.tmp.mr.coll_5
m30001| Fri Feb 22 11:50:41.152 [conn3] command mr_shard_version.$cmd command: { mapreduce.shardedfinish: { mapreduce: "coll", map: function (){ emit( this.key, this.value ) }, reduce: function (k, values){
m30001| var total = 0
m30001| for( var i = 0; i < value..., out: { replace: "mrOutput" } }, inputDB: "mr_shard_version", shardedOutputCollection: "tmp.mrs.coll_1361533830_2", shards: { localhost:30000: { result: "tmp.mrs.coll_1361533830_2", timeMillis: 10469, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 }, localhost:30001: { result: "tmp.mrs.coll_1361533830_2", timeMillis: 10056, counts: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, ok: 1.0 } }, shardCounts: { localhost:30000: { input: 250000, emit: 250000, reduce: 25000, output: 1000 }, localhost:30001: { input: 250000, emit: 250000, reduce: 25000, output: 1000 } }, counts: { emit: 500000, input: 500000, output: 2000, reduce: 50000 } } ntoreturn:1 keyUpdates:0 locks(micros) W:2917 w:32723 reslen:156 104ms
m30000| Fri Feb 22 11:50:41.152 [conn7] CMD: drop mr_shard_version.tmp.mrs.coll_1361533830_2
m30001| Fri Feb 22 11:50:41.153 [conn8] CMD: drop mr_shard_version.tmp.mrs.coll_1361533830_2
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] creating pcursor over QSpec { ns: "mr_shard_version.mrOutput", n2skip: 0, n2return: 0, options: 0, query: { query: {}, orderby: { _id: 1.0 } }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ shard0001:localhost:30001]
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] initializing on shard shard0001:localhost:30001, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] initialized query (lazily) on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] finishing on shard shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:50:41.155 [conn1] [pcursor] finished on shard shard0001:localhost:30001, current connection state is { state: { conn: "(done)", vinfo: "shard0001:localhost:30001", cursor: { _id: "0", value: 0.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
[ { "_id" : "0", "value" : 0 }, { "_id" : "1", "value" : 500 }, { "_id" : "10", "value" : 5000 }, { "_id" : "100", "value" : 50000 }, { "_id" : "101", "value" : 50500 }, { "_id" : "102", "value" : 51000 }, { "_id" : "103", "value" : 51500 }, { "_id" : "104", "value" : 52000 }, { "_id" : "105", "value" : 52500 }, { "_id" : "106", "value" : 53000 }, { "_id" : "107", "value" : 53500 }, { "_id" : "108", "value" : 54000 }, { "_id" : "109", "value" : 54500 }, { "_id" : "11", "value" : 5500 }, { "_id" : "110", "value" : 55000 }, { "_id" : "111", "value" : 55500 }, { "_id" : "112", "value" : 56000 }, { "_id" : "113", "value" : 56500 }, { "_id" : "114", "value" : 57000 }, { "_id" : "115", "value" : 57500 }, { "_id" : "116", "value" : 58000 }, { "_id" : "117", "value" : 58500 }, { "_id" :
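[editor's note: the log truncates the test's reduce function at "for( var i = 0; i < value...". The sketch below is a hedged reconstruction, not the verbatim test source: `mapFn`/`reduceFn` and the harness are hypothetical names, and the loop body is an assumption consistent with the reported counts (500,000 inputs over 1,000 keys, and the final output showing value = key x 500, e.g. { "_id" : "7", "value" : 3500 }).]

```javascript
// Hedged reconstruction of the map/reduce pair from the mr_shard_version test.
// In the mongo shell, map runs with `this` bound to the document; here we pass
// doc/emit explicitly so the sketch runs standalone under Node.
function mapFn(doc, emit) {
  emit(doc.key, doc.value); // matches the logged map: emit( this.key, this.value )
}

function reduceFn(k, values) {
  var total = 0;
  // Assumed loop body (the log cuts off after "for( var i = 0; i < value..."):
  // a plain per-key sum, which reproduces the observed output values.
  for (var i = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

// Tiny local harness: 500 docs with key "7" and value 7 reduce to 3500,
// matching the output document { "_id" : "7", "value" : 3500 }.
var emitted = [];
for (var n = 0; n < 500; n++) {
  mapFn({ key: "7", value: 7 }, function (k, v) { emitted.push(v); });
}
console.log(reduceFn("7", emitted)); // 3500
```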
"118", "value" : 59000 }, { "_id" : "119", "value" : 59500 }, { "_id" : "12", "value" : 6000 }, { "_id" : "120", "value" : 60000 }, { "_id" : "121", "value" : 60500 }, { "_id" : "122", "value" : 61000 }, { "_id" : "123", "value" : 61500 }, { "_id" : "124", "value" : 62000 }, { "_id" : "125", "value" : 62500 }, { "_id" : "126", "value" : 63000 }, { "_id" : "127", "value" : 63500 }, { "_id" : "128", "value" : 64000 }, { "_id" : "129", "value" : 64500 }, { "_id" : "13", "value" : 6500 }, { "_id" : "130", "value" : 65000 }, { "_id" : "131", "value" : 65500 }, { "_id" : "132", "value" : 66000 }, { "_id" : "133", "value" : 66500 }, { "_id" : "134", "value" : 67000 }, { "_id" : "135", "value" : 67500 }, { "_id" : "136", "value" : 68000 }, { "_id" : "137", "value" : 68500 }, { "_id" : "138", "value" : 69000 }, { "_id" : "139", "value" : 69500 }, { "_id" : "14", "value" : 7000 }, { "_id" : "140", "value" : 70000 }, { "_id" : "141", "value" : 70500 }, { "_id" : "142", "value" : 71000 }, { "_id" : "143", "value" : 71500 }, { "_id" : "144", "value" : 72000 }, { "_id" : "145", "value" : 72500 }, { "_id" : "146", "value" : 73000 }, { "_id" : "147", "value" : 73500 }, { "_id" : "148", "value" : 74000 }, { "_id" : "149", "value" : 74500 }, { "_id" : "15", "value" : 7500 }, { "_id" : "150", "value" : 75000 }, { "_id" : "151", "value" : 75500 }, { "_id" : "152", "value" : 76000 }, { "_id" : "153", "value" : 76500 }, { "_id" : "154", "value" : 77000 }, { "_id" : "155", "value" : 77500 }, { "_id" : "156", "value" : 78000 }, { "_id" : "157", "value" : 78500 }, { "_id" : "158", "value" : 79000 }, { "_id" : "159", "value" : 79500 }, { "_id" : "16", "value" : 8000 }, { "_id" : "160", "value" : 80000 }, { "_id" : "161", "value" : 80500 }, { "_id" : "162", "value" : 81000 }, { "_id" : "163", "value" : 81500 }, { "_id" : "164", "value" : 82000 }, { "_id" : "165", "value" : 82500 }, { "_id" : "166", "value" : 83000 }, { "_id" : "167", "value" : 83500 }, { "_id" : "168", "value" : 84000 }, { 
"_id" : "169", "value" : 84500 }, { "_id" : "17", "value" : 8500 }, { "_id" : "170", "value" : 85000 }, { "_id" : "171", "value" : 85500 }, { "_id" : "172", "value" : 86000 }, { "_id" : "173", "value" : 86500 }, { "_id" : "174", "value" : 87000 }, { "_id" : "175", "value" : 87500 }, { "_id" : "176", "value" : 88000 }, { "_id" : "177", "value" : 88500 }, { "_id" : "178", "value" : 89000 }, { "_id" : "179", "value" : 89500 }, { "_id" : "18", "value" : 9000 }, { "_id" : "180", "value" : 90000 }, { "_id" : "181", "value" : 90500 }, { "_id" : "182", "value" : 91000 }, { "_id" : "183", "value" : 91500 }, { "_id" : "184", "value" : 92000 }, { "_id" : "185", "value" : 92500 }, { "_id" : "186", "value" : 93000 }, { "_id" : "187", "value" : 93500 }, { "_id" : "188", "value" : 94000 }, { "_id" : "189", "value" : 94500 }, { "_id" : "19", "value" : 9500 }, { "_id" : "190", "value" : 95000 }, { "_id" : "191", "value" : 95500 }, { "_id" : "192", "value" : 96000 }, { "_id" : "193", "value" : 96500 }, { "_id" : "194", "value" : 97000 }, { "_id" : "195", "value" : 97500 }, { "_id" : "196", "value" : 98000 }, { "_id" : "197", "value" : 98500 }, { "_id" : "198", "value" : 99000 }, { "_id" : "199", "value" : 99500 }, { "_id" : "2", "value" : 1000 }, { "_id" : "20", "value" : 10000 }, { "_id" : "200", "value" : 100000 }, { "_id" : "201", "value" : 100500 }, { "_id" : "202", "value" : 101000 }, { "_id" : "203", "value" : 101500 }, { "_id" : "204", "value" : 102000 }, { "_id" : "205", "value" : 102500 }, { "_id" : "206", "value" : 103000 }, { "_id" : "207", "value" : 103500 }, { "_id" : "208", "value" : 104000 }, { "_id" : "209", "value" : 104500 }, { "_id" : "21", "value" : 10500 }, { "_id" : "210", "value" : 105000 }, { "_id" : "211", "value" : 105500 }, { "_id" : "212", "value" : 106000 }, { "_id" : "213", "value" : 106500 }, { "_id" : "214", "value" : 107000 }, { "_id" : "215", "value" : 107500 }, { "_id" : "216", "value" : 108000 }, { "_id" : "217", "value" : 108500 }, { "_id" : 
"218", "value" : 109000 }, { "_id" : "219", "value" : 109500 }, { "_id" : "22", "value" : 11000 }, { "_id" : "220", "value" : 110000 }, { "_id" : "221", "value" : 110500 }, { "_id" : "222", "value" : 111000 }, { "_id" : "223", "value" : 111500 }, { "_id" : "224", "value" : 112000 }, { "_id" : "225", "value" : 112500 }, { "_id" : "226", "value" : 113000 }, { "_id" : "227", "value" : 113500 }, { "_id" : "228", "value" : 114000 }, { "_id" : "229", "value" : 114500 }, { "_id" : "23", "value" : 11500 }, { "_id" : "230", "value" : 115000 }, { "_id" : "231", "value" : 115500 }, { "_id" : "232", "value" : 116000 }, { "_id" : "233", "value" : 116500 }, { "_id" : "234", "value" : 117000 }, { "_id" : "235", "value" : 117500 }, { "_id" : "236", "value" : 118000 }, { "_id" : "237", "value" : 118500 }, { "_id" : "238", "value" : 119000 }, { "_id" : "239", "value" : 119500 }, { "_id" : "24", "value" : 12000 }, { "_id" : "240", "value" : 120000 }, { "_id" : "241", "value" : 120500 }, { "_id" : "242", "value" : 121000 }, { "_id" : "243", "value" : 121500 }, { "_id" : "244", "value" : 122000 }, { "_id" : "245", "value" : 122500 }, { "_id" : "246", "value" : 123000 }, { "_id" : "247", "value" : 123500 }, { "_id" : "248", "value" : 124000 }, { "_id" : "249", "value" : 124500 }, { "_id" : "25", "value" : 12500 }, { "_id" : "250", "value" : 125000 }, { "_id" : "251", "value" : 125500 }, { "_id" : "252", "value" : 126000 }, { "_id" : "253", "value" : 126500 }, { "_id" : "254", "value" : 127000 }, { "_id" : "255", "value" : 127500 }, { "_id" : "256", "value" : 128000 }, { "_id" : "257", "value" : 128500 }, { "_id" : "258", "value" : 129000 }, { "_id" : "259", "value" : 129500 }, { "_id" : "26", "value" : 13000 }, { "_id" : "260", "value" : 130000 }, { "_id" : "261", "value" : 130500 }, { "_id" : "262", "value" : 131000 }, { "_id" : "263", "value" : 131500 }, { "_id" : "264", "value" : 132000 }, { "_id" : "265", "value" : 132500 }, { "_id" : "266", "value" : 133000 }, { "_id" : "267", 
"value" : 133500 }, { "_id" : "268", "value" : 134000 }, { "_id" : "269", "value" : 134500 }, { "_id" : "27", "value" : 13500 }, { "_id" : "270", "value" : 135000 }, { "_id" : "271", "value" : 135500 }, { "_id" : "272", "value" : 136000 }, { "_id" : "273", "value" : 136500 }, { "_id" : "274", "value" : 137000 }, { "_id" : "275", "value" : 137500 }, { "_id" : "276", "value" : 138000 }, { "_id" : "277", "value" : 138500 }, { "_id" : "278", "value" : 139000 }, { "_id" : "279", "value" : 139500 }, { "_id" : "28", "value" : 14000 }, { "_id" : "280", "value" : 140000 }, { "_id" : "281", "value" : 140500 }, { "_id" : "282", "value" : 141000 }, { "_id" : "283", "value" : 141500 }, { "_id" : "284", "value" : 142000 }, { "_id" : "285", "value" : 142500 }, { "_id" : "286", "value" : 143000 }, { "_id" : "287", "value" : 143500 }, { "_id" : "288", "value" : 144000 }, { "_id" : "289", "value" : 144500 }, { "_id" : "29", "value" : 14500 }, { "_id" : "290", "value" : 145000 }, { "_id" : "291", "value" : 145500 }, { "_id" : "292", "value" : 146000 }, { "_id" : "293", "value" : 146500 }, { "_id" : "294", "value" : 147000 }, { "_id" : "295", "value" : 147500 }, { "_id" : "296", "value" : 148000 }, { "_id" : "297", "value" : 148500 }, { "_id" : "298", "value" : 149000 }, { "_id" : "299", "value" : 149500 }, { "_id" : "3", "value" : 1500 }, { "_id" : "30", "value" : 15000 }, { "_id" : "300", "value" : 150000 }, { "_id" : "301", "value" : 150500 }, { "_id" : "302", "value" : 151000 }, { "_id" : "303", "value" : 151500 }, { "_id" : "304", "value" : 152000 }, { "_id" : "305", "value" : 152500 }, { "_id" : "306", "value" : 153000 }, { "_id" : "307", "value" : 153500 }, { "_id" : "308", "value" : 154000 }, { "_id" : "309", "value" : 154500 }, { "_id" : "31", "value" : 15500 }, { "_id" : "310", "value" : 155000 }, { "_id" : "311", "value" : 155500 }, { "_id" : "312", "value" : 156000 }, { "_id" : "313", "value" : 156500 }, { "_id" : "314", "value" : 157000 }, { "_id" : "315", "value" : 
157500 }, { "_id" : "316", "value" : 158000 }, { "_id" : "317", "value" : 158500 }, { "_id" : "318", "value" : 159000 }, { "_id" : "319", "value" : 159500 }, { "_id" : "32", "value" : 16000 }, { "_id" : "320", "value" : 160000 }, { "_id" : "321", "value" : 160500 }, { "_id" : "322", "value" : 161000 }, { "_id" : "323", "value" : 161500 }, { "_id" : "324", "value" : 162000 }, { "_id" : "325", "value" : 162500 }, { "_id" : "326", "value" : 163000 }, { "_id" : "327", "value" : 163500 }, { "_id" : "328", "value" : 164000 }, { "_id" : "329", "value" : 164500 }, { "_id" : "33", "value" : 16500 }, { "_id" : "330", "value" : 165000 }, { "_id" : "331", "value" : 165500 }, { "_id" : "332", "value" : 166000 }, { "_id" : "333", "value" : 166500 }, { "_id" : "334", "value" : 167000 }, { "_id" : "335", "value" : 167500 }, { "_id" : "336", "value" : 168000 }, { "_id" : "337", "value" : 168500 }, { "_id" : "338", "value" : 169000 }, { "_id" : "339", "value" : 169500 }, { "_id" : "34", "value" : 17000 }, { "_id" : "340", "value" : 170000 }, { "_id" : "341", "value" : 170500 }, { "_id" : "342", "value" : 171000 }, { "_id" : "343", "value" : 171500 }, { "_id" : "344", "value" : 172000 }, { "_id" : "345", "value" : 172500 }, { "_id" : "346", "value" : 173000 }, { "_id" : "347", "value" : 173500 }, { "_id" : "348", "value" : 174000 }, { "_id" : "349", "value" : 174500 }, { "_id" : "35", "value" : 17500 }, { "_id" : "350", "value" : 175000 }, { "_id" : "351", "value" : 175500 }, { "_id" : "352", "value" : 176000 }, { "_id" : "353", "value" : 176500 }, { "_id" : "354", "value" : 177000 }, { "_id" : "355", "value" : 177500 }, { "_id" : "356", "value" : 178000 }, { "_id" : "357", "value" : 178500 }, { "_id" : "358", "value" : 179000 }, { "_id" : "359", "value" : 179500 }, { "_id" : "36", "value" : 18000 }, { "_id" : "360", "value" : 180000 }, { "_id" : "361", "value" : 180500 }, { "_id" : "362", "value" : 181000 }, { "_id" : "363", "value" : 181500 }, { "_id" : "364", "value" : 182000 }, { 
"_id" : "365", "value" : 182500 }, { "_id" : "366", "value" : 183000 }, { "_id" : "367", "value" : 183500 }, { "_id" : "368", "value" : 184000 }, { "_id" : "369", "value" : 184500 }, { "_id" : "37", "value" : 18500 }, { "_id" : "370", "value" : 185000 }, { "_id" : "371", "value" : 185500 }, { "_id" : "372", "value" : 186000 }, { "_id" : "373", "value" : 186500 }, { "_id" : "374", "value" : 187000 }, { "_id" : "375", "value" : 187500 }, { "_id" : "376", "value" : 188000 }, { "_id" : "377", "value" : 188500 }, { "_id" : "378", "value" : 189000 }, { "_id" : "379", "value" : 189500 }, { "_id" : "38", "value" : 19000 }, { "_id" : "380", "value" : 190000 }, { "_id" : "381", "value" : 190500 }, { "_id" : "382", "value" : 191000 }, { "_id" : "383", "value" : 191500 }, { "_id" : "384", "value" : 192000 }, { "_id" : "385", "value" : 192500 }, { "_id" : "386", "value" : 193000 }, { "_id" : "387", "value" : 193500 }, { "_id" : "388", "value" : 194000 }, { "_id" : "389", "value" : 194500 }, { "_id" : "39", "value" : 19500 }, { "_id" : "390", "value" : 195000 }, { "_id" : "391", "value" : 195500 }, { "_id" : "392", "value" : 196000 }, { "_id" : "393", "value" : 196500 }, { "_id" : "394", "value" : 197000 }, { "_id" : "395", "value" : 197500 }, { "_id" : "396", "value" : 198000 }, { "_id" : "397", "value" : 198500 }, { "_id" : "398", "value" : 199000 }, { "_id" : "399", "value" : 199500 }, { "_id" : "4", "value" : 2000 }, { "_id" : "40", "value" : 20000 }, { "_id" : "400", "value" : 200000 }, { "_id" : "401", "value" : 200500 }, { "_id" : "402", "value" : 201000 }, { "_id" : "403", "value" : 201500 }, { "_id" : "404", "value" : 202000 }, { "_id" : "405", "value" : 202500 }, { "_id" : "406", "value" : 203000 }, { "_id" : "407", "value" : 203500 }, { "_id" : "408", "value" : 204000 }, { "_id" : "409", "value" : 204500 }, { "_id" : "41", "value" : 20500 }, { "_id" : "410", "value" : 205000 }, { "_id" : "411", "value" : 205500 }, { "_id" : "412", "value" : 206000 }, { "_id" : "413", 
"value" : 206500 }, { "_id" : "414", "value" : 207000 }, { "_id" : "415", "value" : 207500 }, { "_id" : "416", "value" : 208000 }, { "_id" : "417", "value" : 208500 }, { "_id" : "418", "value" : 209000 }, { "_id" : "419", "value" : 209500 }, { "_id" : "42", "value" : 21000 }, { "_id" : "420", "value" : 210000 }, { "_id" : "421", "value" : 210500 }, { "_id" : "422", "value" : 211000 }, { "_id" : "423", "value" : 211500 }, { "_id" : "424", "value" : 212000 }, { "_id" : "425", "value" : 212500 }, { "_id" : "426", "value" : 213000 }, { "_id" : "427", "value" : 213500 }, { "_id" : "428", "value" : 214000 }, { "_id" : "429", "value" : 214500 }, { "_id" : "43", "value" : 21500 }, { "_id" : "430", "value" : 215000 }, { "_id" : "431", "value" : 215500 }, { "_id" : "432", "value" : 216000 }, { "_id" : "433", "value" : 216500 }, { "_id" : "434", "value" : 217000 }, { "_id" : "435", "value" : 217500 }, { "_id" : "436", "value" : 218000 }, { "_id" : "437", "value" : 218500 }, { "_id" : "438", "value" : 219000 }, { "_id" : "439", "value" : 219500 }, { "_id" : "44", "value" : 22000 }, { "_id" : "440", "value" : 220000 }, { "_id" : "441", "value" : 220500 }, { "_id" : "442", "value" : 221000 }, { "_id" : "443", "value" : 221500 }, { "_id" : "444", "value" : 222000 }, { "_id" : "445", "value" : 222500 }, { "_id" : "446", "value" : 223000 }, { "_id" : "447", "value" : 223500 }, { "_id" : "448", "value" : 224000 }, { "_id" : "449", "value" : 224500 }, { "_id" : "45", "value" : 22500 }, { "_id" : "450", "value" : 225000 }, { "_id" : "451", "value" : 225500 }, { "_id" : "452", "value" : 226000 }, { "_id" : "453", "value" : 226500 }, { "_id" : "454", "value" : 227000 }, { "_id" : "455", "value" : 227500 }, { "_id" : "456", "value" : 228000 }, { "_id" : "457", "value" : 228500 }, { "_id" : "458", "value" : 229000 }, { "_id" : "459", "value" : 229500 }, { "_id" : "46", "value" : 23000 }, { "_id" : "460", "value" : 230000 }, { "_id" : "461", "value" : 230500 }, { "_id" : "462", "value" : 
231000 }, { "_id" : "463", "value" : 231500 }, { "_id" : "464", "value" : 232000 }, { "_id" : "465", "value" : 232500 }, { "_id" : "466", "value" : 233000 }, { "_id" : "467", "value" : 233500 }, { "_id" : "468", "value" : 234000 }, { "_id" : "469", "value" : 234500 }, { "_id" : "47", "value" : 23500 }, { "_id" : "470", "value" : 235000 }, { "_id" : "471", "value" : 235500 }, { "_id" : "472", "value" : 236000 }, { "_id" : "473", "value" : 236500 }, { "_id" : "474", "value" : 237000 }, { "_id" : "475", "value" : 237500 }, { "_id" : "476", "value" : 238000 }, { "_id" : "477", "value" : 238500 }, { "_id" : "478", "value" : 239000 }, { "_id" : "479", "value" : 239500 }, { "_id" : "48", "value" : 24000 }, { "_id" : "480", "value" : 240000 }, { "_id" : "481", "value" : 240500 }, { "_id" : "482", "value" : 241000 }, { "_id" : "483", "value" : 241500 }, { "_id" : "484", "value" : 242000 }, { "_id" : "485", "value" : 242500 }, { "_id" : "486", "value" : 243000 }, { "_id" : "487", "value" : 243500 }, { "_id" : "488", "value" : 244000 }, { "_id" : "489", "value" : 244500 }, { "_id" : "49", "value" : 24500 }, { "_id" : "490", "value" : 245000 }, { "_id" : "491", "value" : 245500 }, { "_id" : "492", "value" : 246000 }, { "_id" : "493", "value" : 246500 }, { "_id" : "494", "value" : 247000 }, { "_id" : "495", "value" : 247500 }, { "_id" : "496", "value" : 248000 }, { "_id" : "497", "value" : 248500 }, { "_id" : "498", "value" : 249000 }, { "_id" : "499", "value" : 249500 }, { "_id" : "5", "value" : 2500 }, { "_id" : "50", "value" : 25000 }, { "_id" : "500", "value" : 250000 }, { "_id" : "501", "value" : 250500 }, { "_id" : "502", "value" : 251000 }, { "_id" : "503", "value" : 251500 }, { "_id" : "504", "value" : 252000 }, { "_id" : "505", "value" : 252500 }, { "_id" : "506", "value" : 253000 }, { "_id" : "507", "value" : 253500 }, { "_id" : "508", "value" : 254000 }, { "_id" : "509", "value" : 254500 }, { "_id" : "51", "value" : 25500 }, { "_id" : "510", "value" : 255000 }, { 
"_id" : "511", "value" : 255500 }, { "_id" : "512", "value" : 256000 }, { "_id" : "513", "value" : 256500 }, { "_id" : "514", "value" : 257000 }, { "_id" : "515", "value" : 257500 }, { "_id" : "516", "value" : 258000 }, { "_id" : "517", "value" : 258500 }, { "_id" : "518", "value" : 259000 }, { "_id" : "519", "value" : 259500 }, { "_id" : "52", "value" : 26000 }, { "_id" : "520", "value" : 260000 }, { "_id" : "521", "value" : 260500 }, { "_id" : "522", "value" : 261000 }, { "_id" : "523", "value" : 261500 }, { "_id" : "524", "value" : 262000 }, { "_id" : "525", "value" : 262500 }, { "_id" : "526", "value" : 263000 }, { "_id" : "527", "value" : 263500 }, { "_id" : "528", "value" : 264000 }, { "_id" : "529", "value" : 264500 }, { "_id" : "53", "value" : 26500 }, { "_id" : "530", "value" : 265000 }, { "_id" : "531", "value" : 265500 }, { "_id" : "532", "value" : 266000 }, { "_id" : "533", "value" : 266500 }, { "_id" : "534", "value" : 267000 }, { "_id" : "535", "value" : 267500 }, { "_id" : "536", "value" : 268000 }, { "_id" : "537", "value" : 268500 }, { "_id" : "538", "value" : 269000 }, { "_id" : "539", "value" : 269500 }, { "_id" : "54", "value" : 27000 }, { "_id" : "540", "value" : 270000 }, { "_id" : "541", "value" : 270500 }, { "_id" : "542", "value" : 271000 }, { "_id" : "543", "value" : 271500 }, { "_id" : "544", "value" : 272000 }, { "_id" : "545", "value" : 272500 }, { "_id" : "546", "value" : 273000 }, { "_id" : "547", "value" : 273500 }, { "_id" : "548", "value" : 274000 }, { "_id" : "549", "value" : 274500 }, { "_id" : "55", "value" : 27500 }, { "_id" : "550", "value" : 275000 }, { "_id" : "551", "value" : 275500 }, { "_id" : "552", "value" : 276000 }, { "_id" : "553", "value" : 276500 }, { "_id" : "554", "value" : 277000 }, { "_id" : "555", "value" : 277500 }, { "_id" : "556", "value" : 278000 }, { "_id" : "557", "value" : 278500 }, { "_id" : "558", "value" : 279000 }, { "_id" : "559", "value" : 279500 }, { "_id" : "56", "value" : 28000 }, { "_id" : 
"560", "value" : 280000 }, { "_id" : "561", "value" : 280500 }, { "_id" : "562", "value" : 281000 }, { "_id" : "563", "value" : 281500 }, { "_id" : "564", "value" : 282000 }, { "_id" : "565", "value" : 282500 }, { "_id" : "566", "value" : 283000 }, { "_id" : "567", "value" : 283500 }, { "_id" : "568", "value" : 284000 }, { "_id" : "569", "value" : 284500 }, { "_id" : "57", "value" : 28500 }, { "_id" : "570", "value" : 285000 }, { "_id" : "571", "value" : 285500 }, { "_id" : "572", "value" : 286000 }, { "_id" : "573", "value" : 286500 }, { "_id" : "574", "value" : 287000 }, { "_id" : "575", "value" : 287500 }, { "_id" : "576", "value" : 288000 }, { "_id" : "577", "value" : 288500 }, { "_id" : "578", "value" : 289000 }, { "_id" : "579", "value" : 289500 }, { "_id" : "58", "value" : 29000 }, { "_id" : "580", "value" : 290000 }, { "_id" : "581", "value" : 290500 }, { "_id" : "582", "value" : 291000 }, { "_id" : "583", "value" : 291500 }, { "_id" : "584", "value" : 292000 }, { "_id" : "585", "value" : 292500 }, { "_id" : "586", "value" : 293000 }, { "_id" : "587", "value" : 293500 }, { "_id" : "588", "value" : 294000 }, { "_id" : "589", "value" : 294500 }, { "_id" : "59", "value" : 29500 }, { "_id" : "590", "value" : 295000 }, { "_id" : "591", "value" : 295500 }, { "_id" : "592", "value" : 296000 }, { "_id" : "593", "value" : 296500 }, { "_id" : "594", "value" : 297000 }, { "_id" : "595", "value" : 297500 }, { "_id" : "596", "value" : 298000 }, { "_id" : "597", "value" : 298500 }, { "_id" : "598", "value" : 299000 }, { "_id" : "599", "value" : 299500 }, { "_id" : "6", "value" : 3000 }, { "_id" : "60", "value" : 30000 }, { "_id" : "600", "value" : 300000 }, { "_id" : "601", "value" : 300500 }, { "_id" : "602", "value" : 301000 }, { "_id" : "603", "value" : 301500 }, { "_id" : "604", "value" : 302000 }, { "_id" : "605", "value" : 302500 }, { "_id" : "606", "value" : 303000 }, { "_id" : "607", "value" : 303500 }, { "_id" : "608", "value" : 304000 }, { "_id" : "609", 
"value" : 304500 }, { "_id" : "61", "value" : 30500 }, { "_id" : "610", "value" : 305000 }, { "_id" : "611", "value" : 305500 }, { "_id" : "612", "value" : 306000 }, { "_id" : "613", "value" : 306500 }, { "_id" : "614", "value" : 307000 }, { "_id" : "615", "value" : 307500 }, { "_id" : "616", "value" : 308000 }, { "_id" : "617", "value" : 308500 }, { "_id" : "618", "value" : 309000 }, { "_id" : "619", "value" : 309500 }, { "_id" : "62", "value" : 31000 }, { "_id" : "620", "value" : 310000 }, { "_id" : "621", "value" : 310500 }, { "_id" : "622", "value" : 311000 }, { "_id" : "623", "value" : 311500 }, { "_id" : "624", "value" : 312000 }, { "_id" : "625", "value" : 312500 }, { "_id" : "626", "value" : 313000 }, { "_id" : "627", "value" : 313500 }, { "_id" : "628", "value" : 314000 }, { "_id" : "629", "value" : 314500 }, { "_id" : "63", "value" : 31500 }, { "_id" : "630", "value" : 315000 }, { "_id" : "631", "value" : 315500 }, { "_id" : "632", "value" : 316000 }, { "_id" : "633", "value" : 316500 }, { "_id" : "634", "value" : 317000 }, { "_id" : "635", "value" : 317500 }, { "_id" : "636", "value" : 318000 }, { "_id" : "637", "value" : 318500 }, { "_id" : "638", "value" : 319000 }, { "_id" : "639", "value" : 319500 }, { "_id" : "64", "value" : 32000 }, { "_id" : "640", "value" : 320000 }, { "_id" : "641", "value" : 320500 }, { "_id" : "642", "value" : 321000 }, { "_id" : "643", "value" : 321500 }, { "_id" : "644", "value" : 322000 }, { "_id" : "645", "value" : 322500 }, { "_id" : "646", "value" : 323000 }, { "_id" : "647", "value" : 323500 }, { "_id" : "648", "value" : 324000 }, { "_id" : "649", "value" : 324500 }, { "_id" : "65", "value" : 32500 }, { "_id" : "650", "value" : 325000 }, { "_id" : "651", "value" : 325500 }, { "_id" : "652", "value" : 326000 }, { "_id" : "653", "value" : 326500 }, { "_id" : "654", "value" : 327000 }, { "_id" : "655", "value" : 327500 }, { "_id" : "656", "value" : 328000 }, { "_id" : "657", "value" : 328500 }, { "_id" : "658", "value" : 
329000 }, { "_id" : "659", "value" : 329500 }, { "_id" : "66", "value" : 33000 }, { "_id" : "660", "value" : 330000 }, { "_id" : "661", "value" : 330500 }, { "_id" : "662", "value" : 331000 }, { "_id" : "663", "value" : 331500 }, { "_id" : "664", "value" : 332000 }, { "_id" : "665", "value" : 332500 }, { "_id" : "666", "value" : 333000 }, { "_id" : "667", "value" : 333500 }, { "_id" : "668", "value" : 334000 }, { "_id" : "669", "value" : 334500 }, { "_id" : "67", "value" : 33500 }, { "_id" : "670", "value" : 335000 }, { "_id" : "671", "value" : 335500 }, { "_id" : "672", "value" : 336000 }, { "_id" : "673", "value" : 336500 }, { "_id" : "674", "value" : 337000 }, { "_id" : "675", "value" : 337500 }, { "_id" : "676", "value" : 338000 }, { "_id" : "677", "value" : 338500 }, { "_id" : "678", "value" : 339000 }, { "_id" : "679", "value" : 339500 }, { "_id" : "68", "value" : 34000 }, { "_id" : "680", "value" : 340000 }, { "_id" : "681", "value" : 340500 }, { "_id" : "682", "value" : 341000 }, { "_id" : "683", "value" : 341500 }, { "_id" : "684", "value" : 342000 }, { "_id" : "685", "value" : 342500 }, { "_id" : "686", "value" : 343000 }, { "_id" : "687", "value" : 343500 }, { "_id" : "688", "value" : 344000 }, { "_id" : "689", "value" : 344500 }, { "_id" : "69", "value" : 34500 }, { "_id" : "690", "value" : 345000 }, { "_id" : "691", "value" : 345500 }, { "_id" : "692", "value" : 346000 }, { "_id" : "693", "value" : 346500 }, { "_id" : "694", "value" : 347000 }, { "_id" : "695", "value" : 347500 }, { "_id" : "696", "value" : 348000 }, { "_id" : "697", "value" : 348500 }, { "_id" : "698", "value" : 349000 }, { "_id" : "699", "value" : 349500 }, { "_id" : "7", "value" : 3500 }, { "_id" : "70", "value" : 35000 }, { "_id" : "700", "value" : 350000 }, { "_id" : "701", "value" : 350500 }, { "_id" : "702", "value" : 351000 }, { "_id" : "703", "value" : 351500 }, { "_id" : "704", "value" : 352000 }, { "_id" : "705", "value" : 352500 }, { "_id" : "706", "value" : 353000 }, { 
"_id" : "707", "value" : 353500 }, { "_id" : "708", "value" : 354000 }, { "_id" : "709", "value" : 354500 }, { "_id" : "71", "value" : 35500 }, { "_id" : "710", "value" : 355000 }, { "_id" : "711", "value" : 355500 }, { "_id" : "712", "value" : 356000 }, { "_id" : "713", "value" : 356500 }, { "_id" : "714", "value" : 357000 }, { "_id" : "715", "value" : 357500 }, { "_id" : "716", "value" : 358000 }, { "_id" : "717", "value" : 358500 }, { "_id" : "718", "value" : 359000 }, { "_id" : "719", "value" : 359500 }, { "_id" : "72", "value" : 36000 }, { "_id" : "720", "value" : 360000 }, { "_id" : "721", "value" : 360500 }, { "_id" : "722", "value" : 361000 }, { "_id" : "723", "value" : 361500 }, { "_id" : "724", "value" : 362000 }, { "_id" : "725", "value" : 362500 }, { "_id" : "726", "value" : 363000 }, { "_id" : "727", "value" : 363500 }, { "_id" : "728", "value" : 364000 }, { "_id" : "729", "value" : 364500 }, { "_id" : "73", "value" : 36500 }, { "_id" : "730", "value" : 365000 }, { "_id" : "731", "value" : 365500 }, { "_id" : "732", "value" : 366000 }, { "_id" : "733", "value" : 366500 }, { "_id" : "734", "value" : 367000 }, { "_id" : "735", "value" : 367500 }, { "_id" : "736", "value" : 368000 }, { "_id" : "737", "value" : 368500 }, { "_id" : "738", "value" : 369000 }, { "_id" : "739", "value" : 369500 }, { "_id" : "74", "value" : 37000 }, { "_id" : "740", "value" : 370000 }, { "_id" : "741", "value" : 370500 }, { "_id" : "742", "value" : 371000 }, { "_id" : "743", "value" : 371500 }, { "_id" : "744", "value" : 372000 }, { "_id" : "745", "value" : 372500 }, { "_id" : "746", "value" : 373000 }, { "_id" : "747", "value" : 373500 }, { "_id" : "748", "value" : 374000 }, { "_id" : "749", "value" : 374500 }, { "_id" : "75", "value" : 37500 }, { "_id" : "750", "value" : 375000 }, { "_id" : "751", "value" : 375500 }, { "_id" : "752", "value" : 376000 }, { "_id" : "753", "value" : 376500 }, { "_id" : "754", "value" : 377000 }, { "_id" : "755", "value" : 377500 }, { "_id" : 
"756", "value" : 378000 }, { "_id" : "757", "value" : 378500 }, { "_id" : "758", "value" : 379000 }, { "_id" : "759", "value" : 379500 }, { "_id" : "76", "value" : 38000 }, { "_id" : "760", "value" : 380000 }, { "_id" : "761", "value" : 380500 }, { "_id" : "762", "value" : 381000 }, { "_id" : "763", "value" : 381500 }, { "_id" : "764", "value" : 382000 }, { "_id" : "765", "value" : 382500 }, { "_id" : "766", "value" : 383000 }, { "_id" : "767", "value" : 383500 }, { "_id" : "768", "value" : 384000 }, { "_id" : "769", "value" : 384500 }, { "_id" : "77", "value" : 38500 }, { "_id" : "770", "value" : 385000 }, { "_id" : "771", "value" : 385500 }, { "_id" : "772", "value" : 386000 }, { "_id" : "773", "value" : 386500 }, { "_id" : "774", "value" : 387000 }, { "_id" : "775", "value" : 387500 }, { "_id" : "776", "value" : 388000 }, { "_id" : "777", "value" : 388500 }, { "_id" : "778", "value" : 389000 }, { "_id" : "779", "value" : 389500 }, { "_id" : "78", "value" : 39000 }, { "_id" : "780", "value" : 390000 }, { "_id" : "781", "value" : 390500 }, { "_id" : "782", "value" : 391000 }, { "_id" : "783", "value" : 391500 }, { "_id" : "784", "value" : 392000 }, { "_id" : "785", "value" : 392500 }, { "_id" : "786", "value" : 393000 }, { "_id" : "787", "value" : 393500 }, { "_id" : "788", "value" : 394000 }, { "_id" : "789", "value" : 394500 }, { "_id" : "79", "value" : 39500 }, { "_id" : "790", "value" : 395000 }, { "_id" : "791", "value" : 395500 }, { "_id" : "792", "value" : 396000 }, { "_id" : "793", "value" : 396500 }, { "_id" : "794", "value" : 397000 }, { "_id" : "795", "value" : 397500 }, { "_id" : "796", "value" : 398000 }, { "_id" : "797", "value" : 398500 }, { "_id" : "798", "value" : 399000 }, { "_id" : "799", "value" : 399500 }, { "_id" : "8", "value" : 4000 }, { "_id" : "80", "value" : 40000 }, { "_id" : "800", "value" : 400000 }, { "_id" : "801", "value" : 400500 }, { "_id" : "802", "value" : 401000 }, { "_id" : "803", "value" : 401500 }, { "_id" : "804", "value" 
: 402000 }, { "_id" : "805", "value" : 402500 }, { "_id" : "806", "value" : 403000 }, { "_id" : "807", "value" : 403500 }, { "_id" : "808", "value" : 404000 }, { "_id" : "809", "value" : 404500 }, { "_id" : "81", "value" : 40500 }, { "_id" : "810", "value" : 405000 }, { "_id" : "811", "value" : 405500 }, { "_id" : "812", "value" : 406000 }, { "_id" : "813", "value" : 406500 }, { "_id" : "814", "value" : 407000 }, { "_id" : "815", "value" : 407500 }, { "_id" : "816", "value" : 408000 }, { "_id" : "817", "value" : 408500 }, { "_id" : "818", "value" : 409000 }, { "_id" : "819", "value" : 409500 }, { "_id" : "82", "value" : 41000 }, { "_id" : "820", "value" : 410000 }, { "_id" : "821", "value" : 410500 }, { "_id" : "822", "value" : 411000 }, { "_id" : "823", "value" : 411500 }, { "_id" : "824", "value" : 412000 }, { "_id" : "825", "value" : 412500 }, { "_id" : "826", "value" : 413000 }, { "_id" : "827", "value" : 413500 }, { "_id" : "828", "value" : 414000 }, { "_id" : "829", "value" : 414500 }, { "_id" : "83", "value" : 41500 }, { "_id" : "830", "value" : 415000 }, { "_id" : "831", "value" : 415500 }, { "_id" : "832", "value" : 416000 }, { "_id" : "833", "value" : 416500 }, { "_id" : "834", "value" : 417000 }, { "_id" : "835", "value" : 417500 }, { "_id" : "836", "value" : 418000 }, { "_id" : "837", "value" : 418500 }, { "_id" : "838", "value" : 419000 }, { "_id" : "839", "value" : 419500 }, { "_id" : "84", "value" : 42000 }, { "_id" : "840", "value" : 420000 }, { "_id" : "841", "value" : 420500 }, { "_id" : "842", "value" : 421000 }, { "_id" : "843", "value" : 421500 }, { "_id" : "844", "value" : 422000 }, { "_id" : "845", "value" : 422500 }, { "_id" : "846", "value" : 423000 }, { "_id" : "847", "value" : 423500 }, { "_id" : "848", "value" : 424000 }, { "_id" : "849", "value" : 424500 }, { "_id" : "85", "value" : 42500 }, { "_id" : "850", "value" : 425000 }, { "_id" : "851", "value" : 425500 }, { "_id" : "852", "value" : 426000 }, { "_id" : "853", "value" : 426500 }, 
{ "_id" : "854", "value" : 427000 }, { "_id" : "855", "value" : 427500 }, { "_id" : "856", "value" : 428000 }, { "_id" : "857", "value" : 428500 }, { "_id" : "858", "value" : 429000 }, { "_id" : "859", "value" : 429500 }, { "_id" : "86", "value" : 43000 }, { "_id" : "860", "value" : 430000 }, { "_id" : "861", "value" : 430500 }, { "_id" : "862", "value" : 431000 }, { "_id" : "863", "value" : 431500 }, { "_id" : "864", "value" : 432000 }, { "_id" : "865", "value" : 432500 }, { "_id" : "866", "value" : 433000 }, { "_id" : "867", "value" : 433500 }, { "_id" : "868", "value" : 434000 }, { "_id" : "869", "value" : 434500 }, { "_id" : "87", "value" : 43500 }, { "_id" : "870", "value" : 435000 }, { "_id" : "871", "value" : 435500 }, { "_id" : "872", "value" : 436000 }, { "_id" : "873", "value" : 436500 }, { "_id" : "874", "value" : 437000 }, { "_id" : "875", "value" : 437500 }, { "_id" : "876", "value" : 438000 }, { "_id" : "877", "value" : 438500 }, { "_id" : "878", "value" : 439000 }, { "_id" : "879", "value" : 439500 }, { "_id" : "88", "value" : 44000 }, { "_id" : "880", "value" : 440000 }, { "_id" : "881", "value" : 440500 }, { "_id" : "882", "value" : 441000 }, { "_id" : "883", "value" : 441500 }, { "_id" : "884", "value" : 442000 }, { "_id" : "885", "value" : 442500 }, { "_id" : "886", "value" : 443000 }, { "_id" : "887", "value" : 443500 }, { "_id" : "888", "value" : 444000 }, { "_id" : "889", "value" : 444500 }, { "_id" : "89", "value" : 44500 }, { "_id" : "890", "value" : 445000 }, { "_id" : "891", "value" : 445500 }, { "_id" : "892", "value" : 446000 }, { "_id" : "893", "value" : 446500 }, { "_id" : "894", "value" : 447000 }, { "_id" : "895", "value" : 447500 }, { "_id" : "896", "value" : 448000 }, { "_id" : "897", "value" : 448500 }, { "_id" : "898", "value" : 449000 }, { "_id" : "899", "value" : 449500 }, { "_id" : "9", "value" : 4500 }, { "_id" : "90", "value" : 45000 }, { "_id" : "900", "value" : 450000 }, { "_id" : "901", "value" : 450500 }, { "_id" : 
"902", "value" : 451000 }, { "_id" : "903", "value" : 451500 }, { "_id" : "904", "value" : 452000 }, { "_id" : "905", "value" : 452500 }, { "_id" : "906", "value" : 453000 }, { "_id" : "907", "value" : 453500 }, { "_id" : "908", "value" : 454000 }, { "_id" : "909", "value" : 454500 }, { "_id" : "91", "value" : 45500 }, { "_id" : "910", "value" : 455000 }, { "_id" : "911", "value" : 455500 }, { "_id" : "912", "value" : 456000 }, { "_id" : "913", "value" : 456500 }, { "_id" : "914", "value" : 457000 }, { "_id" : "915", "value" : 457500 }, { "_id" : "916", "value" : 458000 }, { "_id" : "917", "value" : 458500 }, { "_id" : "918", "value" : 459000 }, { "_id" : "919", "value" : 459500 }, { "_id" : "92", "value" : 46000 }, { "_id" : "920", "value" : 460000 }, { "_id" : "921", "value" : 460500 }, { "_id" : "922", "value" : 461000 }, { "_id" : "923", "value" : 461500 }, { "_id" : "924", "value" : 462000 }, { "_id" : "925", "value" : 462500 }, { "_id" : "926", "value" : 463000 }, { "_id" : "927", "value" : 463500 }, { "_id" : "928", "value" : 464000 }, { "_id" : "929", "value" : 464500 }, { "_id" : "93", "value" : 46500 }, { "_id" : "930", "value" : 465000 }, { "_id" : "931", "value" : 465500 }, { "_id" : "932", "value" : 466000 }, { "_id" : "933", "value" : 466500 }, { "_id" : "934", "value" : 467000 }, { "_id" : "935", "value" : 467500 }, { "_id" : "936", "value" : 468000 }, { "_id" : "937", "value" : 468500 }, { "_id" : "938", "value" : 469000 }, { "_id" : "939", "value" : 469500 }, { "_id" : "94", "value" : 47000 }, { "_id" : "940", "value" : 470000 }, { "_id" : "941", "value" : 470500 }, { "_id" : "942", "value" : 471000 }, { "_id" : "943", "value" : 471500 }, { "_id" : "944", "value" : 472000 }, { "_id" : "945", "value" : 472500 }, { "_id" : "946", "value" : 473000 }, { "_id" : "947", "value" : 473500 }, { "_id" : "948", "value" : 474000 }, { "_id" : "949", "value" : 474500 }, { "_id" : "95", "value" : 47500 }, { "_id" : "950", "value" : 475000 }, { "_id" : "951", 
"value" : 475500 }, { "_id" : "952", "value" : 476000 }, { "_id" : "953", "value" : 476500 }, { "_id" : "954", "value" : 477000 }, { "_id" : "955", "value" : 477500 }, { "_id" : "956", "value" : 478000 }, { "_id" : "957", "value" : 478500 }, { "_id" : "958", "value" : 479000 }, { "_id" : "959", "value" : 479500 }, { "_id" : "96", "value" : 48000 }, { "_id" : "960", "value" : 480000 }, { "_id" : "961", "value" : 480500 }, { "_id" : "962", "value" : 481000 }, { "_id" : "963", "value" : 481500 }, { "_id" : "964", "value" : 482000 }, { "_id" : "965", "value" : 482500 }, { "_id" : "966", "value" : 483000 }, { "_id" : "967", "value" : 483500 }, { "_id" : "968", "value" : 484000 }, { "_id" : "969", "value" : 484500 }, { "_id" : "97", "value" : 48500 }, { "_id" : "970", "value" : 485000 }, { "_id" : "971", "value" : 485500 }, { "_id" : "972", "value" : 486000 }, { "_id" : "973", "value" : 486500 }, { "_id" : "974", "value" : 487000 }, { "_id" : "975", "value" : 487500 }, { "_id" : "976", "value" : 488000 }, { "_id" : "977", "value" : 488500 }, { "_id" : "978", "value" : 489000 }, { "_id" : "979", "value" : 489500 }, { "_id" : "98", "value" : 49000 }, { "_id" : "980", "value" : 490000 }, { "_id" : "981", "value" : 490500 }, { "_id" : "982", "value" : 491000 }, { "_id" : "983", "value" : 491500 }, { "_id" : "984", "value" : 492000 }, { "_id" : "985", "value" : 492500 }, { "_id" : "986", "value" : 493000 }, { "_id" : "987", "value" : 493500 }, { "_id" : "988", "value" : 494000 }, { "_id" : "989", "value" : 494500 }, { "_id" : "99", "value" : 49500 }, { "_id" : "990", "value" : 495000 }, { "_id" : "991", "value" : 495500 }, { "_id" : "992", "value" : 496000 }, { "_id" : "993", "value" : 496500 }, { "_id" : "994", "value" : 497000 }, { "_id" : "995", "value" : 497500 }, { "_id" : "996", "value" : 498000 }, { "_id" : "997", "value" : 498500 }, { "_id" : "998", "value" : 499000 }, { "_id" : "999", "value" : 499500 } ] ---- Finishing parallel migrations... 
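The result dump above follows a simple pattern: each document's `value` is the numeric `_id` times 500, and the documents are ordered lexicographically by their *string* `_id`, which is why `"81"` appears between `"809"` and `"810"`. A minimal plain-Node.js sketch (illustrative only, no MongoDB required) reproducing that ordering:

```javascript
// Reproduce the result pattern seen in the dump above: string _id keys,
// value = Number(_id) * 500, sorted lexicographically by _id.
const docs = [];
for (let i = 0; i < 1000; i++) {
  docs.push({ _id: String(i), value: i * 500 });
}

// Lexicographic string comparison, matching the ordering in the output.
docs.sort((a, b) => (a._id < b._id ? -1 : a._id > b._id ? 1 : 0));

// "809" < "81" < "810" as strings, so the short key interleaves.
const ids = docs.map(d => d._id);
const at = ids.indexOf("809");
console.log(ids.slice(at, at + 3)); // [ '809', '81', '810' ]
```

This is only a model of the output shape; the actual values come from the `mr_shard_version` map-reduce running while chunks migrate.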
---- m30001| Fri Feb 22 11:50:41.784 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 117043, clonedBytes: 5371055, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:42.808 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 137494, clonedBytes: 6309544, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:43.833 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 159215, clonedBytes: 7306290, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:44.857 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 179190, clonedBytes: 8222940, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:45.881 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 197840, clonedBytes: 9078860, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30999| Fri Feb 22 11:50:46.636 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:46.636 [Balancer] skipping balancing round because balancing is disabled m30001| Fri Feb 22 11:50:46.905 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { 
_id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 218181, clonedBytes: 10012236, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:47.930 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 239795, clonedBytes: 11004170, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 11:50:48.410 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:50:48.410 [migrateThread] migrate commit succeeded flushing to secondaries for 'mr_shard_version.coll' { _id: MinKey } -> { _id: 250000.0 } m30000| Fri Feb 22 11:50:48.412 [migrateThread] migrate commit flushed to journal for 'mr_shard_version.coll' { _id: MinKey } -> { _id: 250000.0 } m30001| Fri Feb 22 11:50:48.954 [conn4] moveChunk data transfer progress: { active: true, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 250000, clonedBytes: 11472500, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:50:48.954 [conn4] moveChunk setting version to: 3|0||51275b67d33e7c60dead1534 m30000| Fri Feb 22 11:50:48.954 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:50:48.963 [migrateThread] migrate commit succeeded flushing to secondaries for 'mr_shard_version.coll' { _id: MinKey } -> { _id: 250000.0 } m30000| Fri Feb 22 11:50:48.963 [migrateThread] migrate commit flushed to journal for 'mr_shard_version.coll' { _id: MinKey } -> { _id: 250000.0 } m30000| Fri Feb 22 11:50:48.963 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:48-51275b98dc5301d37b2aa1dd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361533848963), what: "moveChunk.to", ns: "mr_shard_version.coll", details: { min: { _id: MinKey }, max: { _id: 250000.0 }, step1 of 5: 5, step2 of 5: 2, step3 of 5: 35323, step4 of 5: 0, step5 of 5: 553 } } m30001| Fri Feb 22 11:50:48.964 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "mr_shard_version.coll", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 250000.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 250000, clonedBytes: 11472500, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:50:48.964 [conn4] moveChunk moved last chunk out for collection 'mr_shard_version.coll' m30001| Fri Feb 22 11:50:48.965 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:48-51275b9868faedda20dee035", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533848965), what: "moveChunk.commit", ns: "mr_shard_version.coll", details: { min: { _id: MinKey }, max: { _id: 250000.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:50:48.965 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:50:48.965 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:50:48.965 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:50:48.965 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:50:48.965 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:50:48.965 [cleanupOldData-51275b9868faedda20dee036] (start) waiting to cleanup mr_shard_version.coll from { _id: MinKey } -> { _id: 250000.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:50:48.966 [conn4] distributed lock 'mr_shard_version.coll/bs-smartos-x86-64-1.10gen.cc:30001:1361533799:29667' unlocked. 
m30001| Fri Feb 22 11:50:48.966 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:48-51275b9868faedda20dee037", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59854", time: new Date(1361533848966), what: "moveChunk.from", ns: "mr_shard_version.coll", details: { min: { _id: MinKey }, max: { _id: 250000.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 938, step4 of 6: 35877, step5 of 6: 10, step6 of 6: 0 } } m30001| Fri Feb 22 11:50:48.966 [conn4] command admin.$cmd command: { moveChunk: "mr_shard_version.coll", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 250000.0 }, maxChunkSizeBytes: 52428800, shardId: "mr_shard_version.coll-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 94 locks(micros) W:17 r:1621370 w:10 reslen:37 36830ms m30999| Fri Feb 22 11:50:48.966 [conn3] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:50:48.966 [conn3] loading chunk manager for collection mr_shard_version.coll using old chunk manager w/ version 2|1||51275b67d33e7c60dead1534 and 2 chunks m30999| Fri Feb 22 11:50:48.966 [conn3] major version query from 2|1||51275b67d33e7c60dead1534 and over 2 shards is { ns: "mr_shard_version.coll", $or: [ { lastmod: { $gte: Timestamp 2000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 2000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 2000|1 } } ] } m30999| Fri Feb 22 11:50:48.967 [conn3] loaded 1 chunks into new chunk manager for mr_shard_version.coll with version 3|0||51275b67d33e7c60dead1534 m30999| Fri Feb 22 11:50:48.967 [conn3] ChunkManager: time to load chunks for mr_shard_version.coll: 0ms sequenceNumber: 5 version: 3|0||51275b67d33e7c60dead1534 based on: 2|1||51275b67d33e7c60dead1534 m30999| Fri Feb 22 11:50:48.967 [conn3] SocketException: remote: 127.0.0.1:61377 error: 9001 socket exception [0] server 
[127.0.0.1:61377] m30999| Fri Feb 22 11:50:48.967 [conn3] end connection 127.0.0.1:61377 (1 connection now open) m30999| Fri Feb 22 11:50:48.967 [mongosMain] connection accepted from 127.0.0.1:56375 #4 (2 connections now open) m30999| Fri Feb 22 11:50:48.968 [conn4] SocketException: remote: 127.0.0.1:56375 error: 9001 socket exception [0] server [127.0.0.1:56375] m30999| Fri Feb 22 11:50:48.968 [conn4] end connection 127.0.0.1:56375 (1 connection now open) { "note" : "values per second", "errCount" : NumberLong(0), "trapped" : "error: not implemented", "insert" : 0, "query" : 3, "update" : 0, "delete" : 0, "getmore" : 3, "command" : 6 } m30999| Fri Feb 22 11:50:48.969 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 11:50:48.981 [conn8] end connection 127.0.0.1:58706 (7 connections now open) m30001| Fri Feb 22 11:50:48.981 [conn3] end connection 127.0.0.1:46470 (7 connections now open) m30001| Fri Feb 22 11:50:48.981 [conn4] end connection 127.0.0.1:59854 (7 connections now open) m30000| Fri Feb 22 11:50:48.982 [conn3] end connection 127.0.0.1:64529 (12 connections now open) m30000| Fri Feb 22 11:50:48.982 [conn5] end connection 127.0.0.1:41399 (12 connections now open) m30000| Fri Feb 22 11:50:48.982 [conn7] end connection 127.0.0.1:57355 (12 connections now open) m30000| Fri Feb 22 11:50:48.982 [conn6] end connection 127.0.0.1:38027 (12 connections now open) m30001| Fri Feb 22 11:50:48.985 [cleanupOldData-51275b9868faedda20dee036] waiting to remove documents for mr_shard_version.coll from { _id: MinKey } -> { _id: 250000.0 } Fri Feb 22 11:50:49.969 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 11:50:49.969 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 11:50:49.969 [interruptThread] now exiting m30000| Fri Feb 22 11:50:49.969 dbexit: m30000| Fri Feb 22 11:50:49.969 [interruptThread] shutdown: going to close listening sockets... 
m30000| Fri Feb 22 11:50:49.969 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 11:50:49.969 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 11:50:49.969 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 11:50:49.969 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 11:50:49.969 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 11:50:49.969 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 11:50:49.969 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 11:50:49.969 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 11:50:49.969 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 11:50:49.970 [conn1] end connection 127.0.0.1:37036 (8 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn2] end connection 127.0.0.1:41515 (8 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn8] end connection 127.0.0.1:52142 (8 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn12] end connection 127.0.0.1:45374 (8 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn9] end connection 127.0.0.1:34668 (8 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn11] end connection 127.0.0.1:39008 (8 connections now open) m30001| Fri Feb 22 11:50:49.970 [conn5] end connection 127.0.0.1:44939 (4 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn10] end connection 127.0.0.1:53352 (8 connections now open) m30000| Fri Feb 22 11:50:49.970 [conn13] end connection 127.0.0.1:60501 (8 connections now open) m30000| Fri Feb 22 11:50:50.023 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 11:50:50.035 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 11:50:50.035 [interruptThread] journalCleanup... 
m30000| Fri Feb 22 11:50:50.035 [interruptThread] removeJournalFiles m30000| Fri Feb 22 11:50:50.036 dbexit: really exiting now Fri Feb 22 11:50:50.969 shell: stopped mongo program on port 30000 m30001| Fri Feb 22 11:50:50.969 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:50:50.969 [interruptThread] now exiting m30001| Fri Feb 22 11:50:50.969 dbexit: m30001| Fri Feb 22 11:50:50.970 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 11:50:50.970 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 11:50:50.970 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 11:50:50.970 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 11:50:50.970 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:50:50.970 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:50:50.970 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 11:50:50.970 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:50:50.970 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:50:50.970 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 11:50:50.970 [conn1] end connection 127.0.0.1:55334 (3 connections now open) m30001| Fri Feb 22 11:50:50.970 [conn6] end connection 127.0.0.1:48046 (3 connections now open) m30001| Fri Feb 22 11:50:50.970 [conn7] end connection 127.0.0.1:60951 (3 connections now open) m30001| Fri Feb 22 11:50:51.008 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:50:51.027 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:50:51.027 [interruptThread] journalCleanup... 
m30001| Fri Feb 22 11:50:51.027 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:50:51.028 dbexit: really exiting now Fri Feb 22 11:50:51.970 shell: stopped mongo program on port 30001 *** ShardingTest test completed successfully in 89.86 seconds *** Fri Feb 22 11:50:52.023 [conn98] end connection 127.0.0.1:64512 (0 connections now open) 1.5011 minutes Fri Feb 22 11:50:52.051 [initandlisten] connection accepted from 127.0.0.1:35938 #99 (1 connection now open) Fri Feb 22 11:50:52.052 [conn99] end connection 127.0.0.1:35938 (0 connections now open) ******************************************* Test : newcollection2.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/newcollection2.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/newcollection2.js";TestData.testFile = "newcollection2.js";TestData.testName = "newcollection2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:50:52 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:50:52.180 [initandlisten] connection accepted from 127.0.0.1:53998 #100 (1 connection now open) null startMongod WARNING DELETES DATA DIRECTORY THIS IS FOR TESTING ONLY Fri Feb 22 11:50:52.192 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --noprealloc --smallfiles --port 31000 --dbpath /data/db/jstests_disk_newcollection2 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Fri Feb 22 11:50:52.268 [initandlisten] MongoDB starting : pid=1507 port=31000 dbpath=/data/db/jstests_disk_newcollection2 64-bit 
host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 11:50:52.268 [initandlisten] m31000| Fri Feb 22 11:50:52.268 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 11:50:52.268 [initandlisten] ** uses to detect impending page faults. m31000| Fri Feb 22 11:50:52.268 [initandlisten] ** This may result in slower performance for certain use cases m31000| Fri Feb 22 11:50:52.268 [initandlisten] m31000| Fri Feb 22 11:50:52.268 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31000| Fri Feb 22 11:50:52.268 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31000| Fri Feb 22 11:50:52.268 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31000| Fri Feb 22 11:50:52.268 [initandlisten] allocator: system m31000| Fri Feb 22 11:50:52.268 [initandlisten] options: { dbpath: "/data/db/jstests_disk_newcollection2", noprealloc: true, port: 31000, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Fri Feb 22 11:50:52.269 [initandlisten] journal dir=/data/db/jstests_disk_newcollection2/journal m31000| Fri Feb 22 11:50:52.269 [initandlisten] recover : no journal files present, no recovery needed m31000| Fri Feb 22 11:50:52.282 [FileAllocator] allocating new datafile /data/db/jstests_disk_newcollection2/local.ns, filling with zeroes... m31000| Fri Feb 22 11:50:52.282 [FileAllocator] creating directory /data/db/jstests_disk_newcollection2/_tmp m31000| Fri Feb 22 11:50:52.282 [FileAllocator] done allocating datafile /data/db/jstests_disk_newcollection2/local.ns, size: 16MB, took 0 secs m31000| Fri Feb 22 11:50:52.282 [FileAllocator] allocating new datafile /data/db/jstests_disk_newcollection2/local.0, filling with zeroes... 
m31000| Fri Feb 22 11:50:52.282 [FileAllocator] done allocating datafile /data/db/jstests_disk_newcollection2/local.0, size: 16MB, took 0 secs m31000| Fri Feb 22 11:50:52.285 [websvr] admin web console waiting for connections on port 32000 m31000| Fri Feb 22 11:50:52.285 [initandlisten] waiting for connections on port 31000 m31000| Fri Feb 22 11:50:52.395 [initandlisten] connection accepted from 127.0.0.1:60249 #1 (1 connection now open) m31000| Fri Feb 22 11:50:52.396 [FileAllocator] allocating new datafile /data/db/jstests_disk_newcollection2/test.ns, filling with zeroes... m31000| Fri Feb 22 11:50:52.397 [FileAllocator] done allocating datafile /data/db/jstests_disk_newcollection2/test.ns, size: 16MB, took 0 secs m31000| Fri Feb 22 11:50:52.397 [FileAllocator] allocating new datafile /data/db/jstests_disk_newcollection2/test.0, filling with zeroes... m31000| Fri Feb 22 11:50:52.397 [FileAllocator] done allocating datafile /data/db/jstests_disk_newcollection2/test.0, size: 511MB, took 0 secs m31000| Fri Feb 22 11:50:52.399 [FileAllocator] allocating new datafile /data/db/jstests_disk_newcollection2/test.1, filling with zeroes... m31000| Fri Feb 22 11:50:52.399 [FileAllocator] done allocating datafile /data/db/jstests_disk_newcollection2/test.1, size: 511MB, took 0 secs m31000| Fri Feb 22 11:50:52.401 [conn1] build index test.jstests_disk_newcollection2 { _id: 1 } m31000| Fri Feb 22 11:50:52.402 [conn1] build index done. scanned 0 total records. 
0.001 secs m31000| Fri Feb 22 11:50:52.403 [conn1] CMD: validate test.jstests_disk_newcollection2 m31000| Fri Feb 22 11:50:52.403 [conn1] validating index 0: test.jstests_disk_newcollection2.$_id_ { "ns" : "test.jstests_disk_newcollection2", "firstExtent" : "0:2000 ns:test.jstests_disk_newcollection2", "lastExtent" : "1:2000 ns:test.jstests_disk_newcollection2", "extentCount" : 2, "datasize" : 0, "nrecords" : 0, "lastExtentSize" : 4096, "padding" : 1, "firstExtentDetails" : { "loc" : "0:2000", "xnext" : "1:2000", "xprev" : "null", "nsdiag" : "test.jstests_disk_newcollection2", "size" : 536600560, "firstRecord" : "null", "lastRecord" : "null" }, "lastExtentDetails" : { "loc" : "1:2000", "xnext" : "null", "xprev" : "0:2000", "nsdiag" : "test.jstests_disk_newcollection2", "size" : 4096, "firstRecord" : "null", "lastRecord" : "null" }, "deletedCount" : 2, "deletedSize" : 536604304, "nIndexes" : 1, "keysPerIndex" : { "test.jstests_disk_newcollection2.$_id_" : 0 }, "valid" : true, "errors" : [ ], "warning" : "Some checks omitted for speed. use {full:true} option to do more thorough scan.", "ok" : 1 } Fri Feb 22 11:50:53.408 [conn100] end connection 127.0.0.1:53998 (0 connections now open) 1371.2111 ms Fri Feb 22 11:50:53.424 [initandlisten] connection accepted from 127.0.0.1:44895 #101 (1 connection now open) Fri Feb 22 11:50:53.425 [conn101] end connection 127.0.0.1:44895 (0 connections now open) ******************************************* Test : no_balance_collection.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/no_balance_collection.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/no_balance_collection.js";TestData.testFile = "no_balance_collection.js";TestData.testName = "no_balance_collection";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:50:53 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:50:53.575 [initandlisten] connection accepted from 127.0.0.1:57192 #102 (1 connection now open) null Resetting db path '/data/db/test0' Fri Feb 22 11:50:53.587 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/test0 --setParameter enableTestCommands=1 m30000| Fri Feb 22 11:50:53.682 [initandlisten] MongoDB starting : pid=1510 port=30000 dbpath=/data/db/test0 64-bit host=bs-smartos-x86-64-1.10gen.cc m30000| Fri Feb 22 11:50:53.683 [initandlisten] m30000| Fri Feb 22 11:50:53.683 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30000| Fri Feb 22 11:50:53.683 [initandlisten] ** uses to detect impending page faults. 
m30000| Fri Feb 22 11:50:53.683 [initandlisten] ** This may result in slower performance for certain use cases m30000| Fri Feb 22 11:50:53.683 [initandlisten] m30000| Fri Feb 22 11:50:53.683 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30000| Fri Feb 22 11:50:53.683 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30000| Fri Feb 22 11:50:53.683 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30000| Fri Feb 22 11:50:53.683 [initandlisten] allocator: system m30000| Fri Feb 22 11:50:53.683 [initandlisten] options: { dbpath: "/data/db/test0", port: 30000, setParameter: [ "enableTestCommands=1" ] } m30000| Fri Feb 22 11:50:53.683 [initandlisten] journal dir=/data/db/test0/journal m30000| Fri Feb 22 11:50:53.684 [initandlisten] recover : no journal files present, no recovery needed m30000| Fri Feb 22 11:50:53.698 [FileAllocator] allocating new datafile /data/db/test0/local.ns, filling with zeroes... m30000| Fri Feb 22 11:50:53.698 [FileAllocator] creating directory /data/db/test0/_tmp m30000| Fri Feb 22 11:50:53.698 [FileAllocator] done allocating datafile /data/db/test0/local.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 11:50:53.699 [FileAllocator] allocating new datafile /data/db/test0/local.0, filling with zeroes... 
m30000| Fri Feb 22 11:50:53.699 [FileAllocator] done allocating datafile /data/db/test0/local.0, size: 64MB, took 0 secs m30000| Fri Feb 22 11:50:53.702 [initandlisten] waiting for connections on port 30000 m30000| Fri Feb 22 11:50:53.702 [websvr] admin web console waiting for connections on port 31000 m30000| Fri Feb 22 11:50:53.790 [initandlisten] connection accepted from 127.0.0.1:61428 #1 (1 connection now open) Resetting db path '/data/db/test1' Fri Feb 22 11:50:53.793 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/test1 --setParameter enableTestCommands=1 m30001| Fri Feb 22 11:50:53.881 [initandlisten] MongoDB starting : pid=1516 port=30001 dbpath=/data/db/test1 64-bit host=bs-smartos-x86-64-1.10gen.cc m30001| Fri Feb 22 11:50:53.881 [initandlisten] m30001| Fri Feb 22 11:50:53.881 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30001| Fri Feb 22 11:50:53.881 [initandlisten] ** uses to detect impending page faults. 
m30001| Fri Feb 22 11:50:53.881 [initandlisten] ** This may result in slower performance for certain use cases m30001| Fri Feb 22 11:50:53.881 [initandlisten] m30001| Fri Feb 22 11:50:53.881 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30001| Fri Feb 22 11:50:53.881 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30001| Fri Feb 22 11:50:53.881 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30001| Fri Feb 22 11:50:53.881 [initandlisten] allocator: system m30001| Fri Feb 22 11:50:53.881 [initandlisten] options: { dbpath: "/data/db/test1", port: 30001, setParameter: [ "enableTestCommands=1" ] } m30001| Fri Feb 22 11:50:53.882 [initandlisten] journal dir=/data/db/test1/journal m30001| Fri Feb 22 11:50:53.882 [initandlisten] recover : no journal files present, no recovery needed m30001| Fri Feb 22 11:50:53.896 [FileAllocator] allocating new datafile /data/db/test1/local.ns, filling with zeroes... m30001| Fri Feb 22 11:50:53.896 [FileAllocator] creating directory /data/db/test1/_tmp m30001| Fri Feb 22 11:50:53.896 [FileAllocator] done allocating datafile /data/db/test1/local.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:50:53.896 [FileAllocator] allocating new datafile /data/db/test1/local.0, filling with zeroes... 
m30001| Fri Feb 22 11:50:53.897 [FileAllocator] done allocating datafile /data/db/test1/local.0, size: 64MB, took 0 secs m30001| Fri Feb 22 11:50:53.900 [initandlisten] waiting for connections on port 30001 m30001| Fri Feb 22 11:50:53.900 [websvr] admin web console waiting for connections on port 31001 m30001| Fri Feb 22 11:50:53.995 [initandlisten] connection accepted from 127.0.0.1:52676 #1 (1 connection now open) "localhost:30000" m30000| Fri Feb 22 11:50:53.996 [initandlisten] connection accepted from 127.0.0.1:46218 #2 (2 connections now open) ShardingTest test : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] } Fri Feb 22 11:50:54.005 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 50 --setParameter enableTestCommands=1 m30999| Fri Feb 22 11:50:54.030 warning: running with 1 config server should be done only for testing purposes and is not recommended for production m30999| Fri Feb 22 11:50:54.031 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=1518 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage) m30999| Fri Feb 22 11:50:54.031 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30999| Fri Feb 22 11:50:54.031 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30999| Fri Feb 22 11:50:54.031 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true } m30999| Fri Feb 22 11:50:54.031 [mongosMain] config string : localhost:30000 m30999| Fri Feb 22 11:50:54.031 [mongosMain] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:50:54.033 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 11:50:54.033 [initandlisten] connection accepted from 127.0.0.1:57186 #3 (3 connections now open) m30999| Fri Feb 22 11:50:54.033 
[mongosMain] connected connection! m30999| Fri Feb 22 11:50:54.033 BackgroundJob starting: CheckConfigServers m30999| Fri Feb 22 11:50:54.034 [mongosMain] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:50:54.034 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 11:50:54.034 [initandlisten] connection accepted from 127.0.0.1:62953 #4 (4 connections now open) m30999| Fri Feb 22 11:50:54.034 [mongosMain] connected connection! m30000| Fri Feb 22 11:50:54.035 [conn4] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 11:50:54.042 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 11:50:54.043 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:50:54.043 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 (sleeping for 30000ms) m30999| Fri Feb 22 11:50:54.043 [mongosMain] inserting initial doc in config.locks for lock configUpgrade m30999| Fri Feb 22 11:50:54.043 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:mongosMain:5758", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:50:54 2013" }, m30999| "why" : "upgrading config database to new format v4", m30999| "ts" : { "$oid" : "51275b9e08efcab5a82bb5fe" } } m30999| { "_id" : "configUpgrade", m30999| "state" : 0 } m30000| Fri Feb 22 11:50:54.044 [FileAllocator] allocating new datafile /data/db/test0/config.ns, filling with zeroes... 
m30000| Fri Feb 22 11:50:54.044 [FileAllocator] done allocating datafile /data/db/test0/config.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 11:50:54.044 [FileAllocator] allocating new datafile /data/db/test0/config.0, filling with zeroes... m30000| Fri Feb 22 11:50:54.044 [FileAllocator] done allocating datafile /data/db/test0/config.0, size: 64MB, took 0 secs m30000| Fri Feb 22 11:50:54.045 [FileAllocator] allocating new datafile /data/db/test0/config.1, filling with zeroes... m30000| Fri Feb 22 11:50:54.045 [FileAllocator] done allocating datafile /data/db/test0/config.1, size: 128MB, took 0 secs m30000| Fri Feb 22 11:50:54.048 [conn3] build index config.lockpings { _id: 1 } m30000| Fri Feb 22 11:50:54.049 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 11:50:54.050 [conn4] build index config.locks { _id: 1 } m30000| Fri Feb 22 11:50:54.051 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:50:54.052 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:50:54 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838', sleeping for 30000ms m30000| Fri Feb 22 11:50:54.052 [conn3] build index config.lockpings { ping: new Date(1) } m30000| Fri Feb 22 11:50:54.052 [conn3] build index done. scanned 1 total records. 
0 secs m30999| Fri Feb 22 11:50:54.053 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275b9e08efcab5a82bb5fe m30999| Fri Feb 22 11:50:54.055 [mongosMain] starting upgrade of config server from v0 to v4 m30999| Fri Feb 22 11:50:54.055 [mongosMain] starting next upgrade step from v0 to v4 m30999| Fri Feb 22 11:50:54.055 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9e08efcab5a82bb5ff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533854055), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } } m30000| Fri Feb 22 11:50:54.056 [conn4] build index config.changelog { _id: 1 } m30000| Fri Feb 22 11:50:54.056 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:50:54.056 [mongosMain] writing initial config version at v4 m30000| Fri Feb 22 11:50:54.057 [conn4] build index config.version { _id: 1 } m30000| Fri Feb 22 11:50:54.057 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:50:54.058 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9e08efcab5a82bb601", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361533854058), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } } m30999| Fri Feb 22 11:50:54.058 [mongosMain] upgrade of config server to v4 successful m30999| Fri Feb 22 11:50:54.058 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. 
m30000| Fri Feb 22 11:50:54.060 [conn3] build index config.settings { _id: 1 } m30999| Fri Feb 22 11:50:54.061 [websvr] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 11:50:54.061 BackgroundJob starting: Balancer m30999| Fri Feb 22 11:50:54.061 BackgroundJob starting: cursorTimeout m30999| Fri Feb 22 11:50:54.061 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 11:50:54.061 [Balancer] about to contact config servers and shards m30999| Fri Feb 22 11:50:54.061 BackgroundJob starting: PeriodicTask::Runner m30000| Fri Feb 22 11:50:54.061 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 11:50:54.061 [websvr] admin web console waiting for connections on port 31999 m30999| Fri Feb 22 11:50:54.061 [mongosMain] waiting for connections on port 30999 m30000| Fri Feb 22 11:50:54.062 [conn3] build index config.chunks { _id: 1 } m30000| Fri Feb 22 11:50:54.063 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 11:50:54.063 [conn3] info: creating collection config.chunks on add index m30000| Fri Feb 22 11:50:54.063 [conn3] build index config.chunks { ns: 1, min: 1 } m30000| Fri Feb 22 11:50:54.064 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:50:54.064 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } m30000| Fri Feb 22 11:50:54.064 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:50:54.065 [conn3] build index config.chunks { ns: 1, lastmod: 1 } m30000| Fri Feb 22 11:50:54.065 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 11:50:54.065 [conn3] build index config.shards { _id: 1 } m30000| Fri Feb 22 11:50:54.066 [conn3] build index done. scanned 0 total records. 
0 secs m30000| Fri Feb 22 11:50:54.066 [conn3] info: creating collection config.shards on add index m30000| Fri Feb 22 11:50:54.066 [conn3] build index config.shards { host: 1 } m30000| Fri Feb 22 11:50:54.068 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 11:50:54.068 [Balancer] config servers and shards contacted successfully m30999| Fri Feb 22 11:50:54.068 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:50:54 m30999| Fri Feb 22 11:50:54.068 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 11:50:54.068 [Balancer] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:50:54.068 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 11:50:54.068 [conn3] build index config.mongos { _id: 1 } m30000| Fri Feb 22 11:50:54.068 [initandlisten] connection accepted from 127.0.0.1:36872 #5 (5 connections now open) m30999| Fri Feb 22 11:50:54.068 [Balancer] connected connection! m30000| Fri Feb 22 11:50:54.069 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 11:50:54.070 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:50:54.070 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:50:54.070 [Balancer] inserting initial doc in config.locks for lock balancer m30999| Fri Feb 22 11:50:54.070 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:50:54 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275b9e08efcab5a82bb603" } } m30999| { "_id" : "balancer", m30999| "state" : 0 } m30999| Fri Feb 22 11:50:54.071 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275b9e08efcab5a82bb603 m30999| Fri Feb 22 11:50:54.071 [Balancer] *** start balancing round m30999| Fri Feb 22 11:50:54.071 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:50:54.071 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:50:54.071 [Balancer] no collections to balance m30999| Fri Feb 22 11:50:54.071 [Balancer] no need to move any chunk m30999| Fri Feb 22 11:50:54.071 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:50:54.071 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. 
m30999| Fri Feb 22 11:50:54.206 [mongosMain] connection accepted from 127.0.0.1:50003 #1 (1 connection now open) ShardingTest undefined going to add shard : localhost:30000 m30999| Fri Feb 22 11:50:54.209 [conn1] couldn't find database [admin] in config db m30000| Fri Feb 22 11:50:54.210 [conn3] build index config.databases { _id: 1 } m30000| Fri Feb 22 11:50:54.210 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 11:50:54.211 [conn1] put [admin] on: config:localhost:30000 m30999| Fri Feb 22 11:50:54.212 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" } { "shardAdded" : "shard0000", "ok" : 1 } ShardingTest undefined going to add shard : localhost:30001 m30999| Fri Feb 22 11:50:54.214 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:50:54.214 BackgroundJob starting: ConnectBG m30001| Fri Feb 22 11:50:54.214 [initandlisten] connection accepted from 127.0.0.1:45535 #2 (2 connections now open) m30999| Fri Feb 22 11:50:54.214 [conn1] connected connection! m30999| Fri Feb 22 11:50:54.216 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" } { "shardAdded" : "shard0001", "ok" : 1 } m30999| Fri Feb 22 11:50:54.217 [conn1] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:50:54.217 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:50:54.217 [conn1] connected connection! 
m30000| Fri Feb 22 11:50:54.217 [initandlisten] connection accepted from 127.0.0.1:56649 #6 (6 connections now open) m30999| Fri Feb 22 11:50:54.217 [conn1] creating WriteBackListener for: localhost:30000 serverID: 51275b9e08efcab5a82bb602 m30999| Fri Feb 22 11:50:54.217 [conn1] initializing shard connection to localhost:30000 m30999| Fri Feb 22 11:50:54.217 BackgroundJob starting: WriteBackListener-localhost:30000 m30999| Fri Feb 22 11:50:54.218 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:50:54.218 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:50:54.218 [conn1] connected connection! m30999| Fri Feb 22 11:50:54.218 [conn1] creating WriteBackListener for: localhost:30001 serverID: 51275b9e08efcab5a82bb602 m30999| Fri Feb 22 11:50:54.218 [conn1] initializing shard connection to localhost:30001 m30999| Fri Feb 22 11:50:54.218 BackgroundJob starting: WriteBackListener-localhost:30001 m30001| Fri Feb 22 11:50:54.218 [initandlisten] connection accepted from 127.0.0.1:46697 #3 (3 connections now open) Waiting for active hosts... Waiting for the balancer lock... Waiting again for active hosts after balancer is off... m30999| Fri Feb 22 11:50:54.222 [conn1] couldn't find database [no_balance_collection] in config db m30999| Fri Feb 22 11:50:54.222 [conn1] creating new connection to:localhost:30000 m30999| Fri Feb 22 11:50:54.222 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:50:54.222 [conn1] connected connection! m30000| Fri Feb 22 11:50:54.222 [initandlisten] connection accepted from 127.0.0.1:47184 #7 (7 connections now open) m30999| Fri Feb 22 11:50:54.223 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 11:50:54.223 BackgroundJob starting: ConnectBG m30001| Fri Feb 22 11:50:54.223 [initandlisten] connection accepted from 127.0.0.1:49874 #4 (4 connections now open) m30999| Fri Feb 22 11:50:54.223 [conn1] connected connection! 
m30999| Fri Feb 22 11:50:54.223 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre- m30999| Fri Feb 22 11:50:54.224 [conn1] put [no_balance_collection] on: shard0001:localhost:30001 m30999| Fri Feb 22 11:50:54.225 [conn1] enabling sharding on: no_balance_collection m30001| Fri Feb 22 11:50:54.227 [FileAllocator] allocating new datafile /data/db/test1/no_balance_collection.ns, filling with zeroes... m30001| Fri Feb 22 11:50:54.227 [FileAllocator] done allocating datafile /data/db/test1/no_balance_collection.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 11:50:54.227 [FileAllocator] allocating new datafile /data/db/test1/no_balance_collection.0, filling with zeroes... m30001| Fri Feb 22 11:50:54.227 [FileAllocator] done allocating datafile /data/db/test1/no_balance_collection.0, size: 64MB, took 0 secs m30001| Fri Feb 22 11:50:54.228 [FileAllocator] allocating new datafile /data/db/test1/no_balance_collection.1, filling with zeroes... m30001| Fri Feb 22 11:50:54.228 [FileAllocator] done allocating datafile /data/db/test1/no_balance_collection.1, size: 128MB, took 0 secs m30001| Fri Feb 22 11:50:54.231 [conn4] build index no_balance_collection.collA { _id: 1 } m30001| Fri Feb 22 11:50:54.232 [conn4] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 11:50:54.232 [conn4] info: creating collection no_balance_collection.collA on add index m30999| Fri Feb 22 11:50:54.232 [conn1] CMD: shardcollection: { shardcollection: "no_balance_collection.collA", key: { _id: 1.0 } } m30999| Fri Feb 22 11:50:54.232 [conn1] enable sharding on: no_balance_collection.collA with shard key: { _id: 1.0 } m30999| Fri Feb 22 11:50:54.232 [conn1] going to create 1 chunk(s) for: no_balance_collection.collA using new epoch 51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:50:54.233 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 2 version: 1|0||51275b9e08efcab5a82bb604 based on: (empty) m30000| Fri Feb 22 11:50:54.234 [conn3] build index config.collections { _id: 1 } m30000| Fri Feb 22 11:50:54.235 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 11:50:54.236 [conn1] setShardVersion shard0001 localhost:30001 no_balance_collection.collA { setShardVersion: "no_balance_collection.collA", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275b9e08efcab5a82bb604'), serverID: ObjectId('51275b9e08efcab5a82bb602'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f630 2 m30999| Fri Feb 22 11:50:54.236 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "no_balance_collection.collA", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'no_balance_collection.collA'" } m30999| Fri Feb 22 11:50:54.236 [conn1] setShardVersion shard0001 localhost:30001 no_balance_collection.collA { setShardVersion: "no_balance_collection.collA", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275b9e08efcab5a82bb604'), serverID: ObjectId('51275b9e08efcab5a82bb602'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117f630 2 m30001| Fri Feb 22 11:50:54.236 [conn3] no current chunk manager found for this shard, will initialize m30000| Fri Feb 22 11:50:54.236 [initandlisten] connection accepted from 127.0.0.1:39570 #8 (8 connections now open) m30999| Fri Feb 22 11:50:54.237 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30001| Fri Feb 22 11:50:54.241 [conn4] build index no_balance_collection.collB { _id: 1 } m30001| Fri Feb 22 11:50:54.242 [conn4] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 11:50:54.242 [conn4] info: creating collection no_balance_collection.collB on add index m30999| Fri Feb 22 11:50:54.242 [conn1] CMD: shardcollection: { shardcollection: "no_balance_collection.collB", key: { _id: 1.0 } } m30999| Fri Feb 22 11:50:54.242 [conn1] enable sharding on: no_balance_collection.collB with shard key: { _id: 1.0 } m30999| Fri Feb 22 11:50:54.242 [conn1] going to create 1 chunk(s) for: no_balance_collection.collB using new epoch 51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:50:54.243 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 3 version: 1|0||51275b9e08efcab5a82bb605 based on: (empty) m30999| Fri Feb 22 11:50:54.243 [conn1] setShardVersion shard0001 localhost:30001 no_balance_collection.collB { setShardVersion: "no_balance_collection.collB", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275b9e08efcab5a82bb605'), serverID: ObjectId('51275b9e08efcab5a82bb602'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f630 3 m30999| Fri Feb 22 11:50:54.244 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "no_balance_collection.collB", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'no_balance_collection.collB'" } m30999| Fri Feb 22 11:50:54.244 [conn1] setShardVersion shard0001 localhost:30001 no_balance_collection.collB { setShardVersion: "no_balance_collection.collB", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275b9e08efcab5a82bb605'), serverID: ObjectId('51275b9e08efcab5a82bb602'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117f630 3 m30001| Fri Feb 22 11:50:54.244 [conn3] no current chunk manager found for this shard, will initialize m30999| Fri Feb 22 11:50:54.245 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:50:54.245 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } m30001| Fri Feb 22 11:50:54.246 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "no_balance_collection.collA-_id_MinKey", configdb: "localhost:30000" } m30000| Fri Feb 22 11:50:54.246 [initandlisten] connection accepted from 127.0.0.1:50293 #9 (9 connections now open) m30001| Fri Feb 22 11:50:54.247 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160 (sleeping for 30000ms) m30001| Fri Feb 22 11:50:54.248 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349b3 m30001| Fri Feb 22 11:50:54.249 [conn4] splitChunk accepted at version 
1|0||51275b9e08efcab5a82bb604 m30001| Fri Feb 22 11:50:54.250 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349b4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854250), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } } m30001| Fri Feb 22 11:50:54.250 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30999| Fri Feb 22 11:50:54.251 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 4 version: 1|2||51275b9e08efcab5a82bb604 based on: 1|0||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:50:54.252 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } m30001| Fri Feb 22 11:50:54.252 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "no_balance_collection.collB-_id_MinKey", configdb: "localhost:30000" } m30001| Fri Feb 22 11:50:54.253 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349b5 m30001| Fri Feb 22 11:50:54.254 [conn4] splitChunk accepted at version 1|0||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:50:54.254 [conn4] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349b6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854254), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } } m30001| Fri Feb 22 11:50:54.255 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30999| Fri Feb 22 11:50:54.255 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 5 version: 1|2||51275b9e08efcab5a82bb605 based on: 1|0||51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:50:54.256 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } m30001| Fri Feb 22 11:50:54.257 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "no_balance_collection.collA-_id_0.0", configdb: "localhost:30000" } m30001| Fri Feb 22 11:50:54.257 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349b7 m30001| Fri Feb 22 11:50:54.258 [conn4] splitChunk accepted at version 1|2||51275b9e08efcab5a82bb604 m30001| Fri Feb 22 11:50:54.259 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349b8", server: "bs-smartos-x86-64-1.10gen.cc", 
clientAddr: "127.0.0.1:49874", time: new Date(1361533854259), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } } m30001| Fri Feb 22 11:50:54.259 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30999| Fri Feb 22 11:50:54.260 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 6 version: 1|4||51275b9e08efcab5a82bb604 based on: 1|2||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:50:54.261 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } m30001| Fri Feb 22 11:50:54.261 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1.0 } ], shardId: "no_balance_collection.collB-_id_0.0", configdb: "localhost:30000" } m30001| Fri Feb 22 11:50:54.262 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349b9 m30001| Fri Feb 22 11:50:54.263 [conn4] splitChunk accepted at version 1|2||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:50:54.263 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349ba", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854263), what: "split", ns: "no_balance_collection.collB", details: { 
before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 1.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } } m30001| Fri Feb 22 11:50:54.264 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30999| Fri Feb 22 11:50:54.264 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 7 version: 1|4||51275b9e08efcab5a82bb605 based on: 1|2||51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:50:54.265 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 1.0 }max: { _id: MaxKey } m30001| Fri Feb 22 11:50:54.265 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2.0 } ], shardId: "no_balance_collection.collA-_id_1.0", configdb: "localhost:30000" } m30001| Fri Feb 22 11:50:54.266 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349bb m30001| Fri Feb 22 11:50:54.267 [conn4] splitChunk accepted at version 1|4||51275b9e08efcab5a82bb604 m30001| Fri Feb 22 11:50:54.268 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349bc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854268), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: 
ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } } m30001| Fri Feb 22 11:50:54.268 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30999| Fri Feb 22 11:50:54.269 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 8 version: 1|6||51275b9e08efcab5a82bb604 based on: 1|4||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:50:54.270 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 1.0 }max: { _id: MaxKey } m30001| Fri Feb 22 11:50:54.270 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 1.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2.0 } ], shardId: "no_balance_collection.collB-_id_1.0", configdb: "localhost:30000" } m30001| Fri Feb 22 11:50:54.271 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349bd m30001| Fri Feb 22 11:50:54.272 [conn4] splitChunk accepted at version 1|4||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:50:54.272 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349be", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854272), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 1.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1.0 }, max: { _id: 2.0 }, lastmod: Timestamp 1000|5, 
lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.273 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.273 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 9 version: 1|6||51275b9e08efcab5a82bb605 based on: 1|4||51275b9e08efcab5a82bb605
m30999| Fri Feb 22 11:50:54.274 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 2.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.274 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 3.0 } ], shardId: "no_balance_collection.collA-_id_2.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.275 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349bf
m30001| Fri Feb 22 11:50:54.276 [conn4] splitChunk accepted at version 1|6||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:50:54.277 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349c0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854277), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } }
m30001| Fri Feb 22 11:50:54.277 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.278 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 10 version: 1|8||51275b9e08efcab5a82bb604 based on: 1|6||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:50:54.279 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 2.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.279 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 2.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 3.0 } ], shardId: "no_balance_collection.collB-_id_2.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.280 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349c1
m30001| Fri Feb 22 11:50:54.280 [conn4] splitChunk accepted at version 1|6||51275b9e08efcab5a82bb605
m30001| Fri Feb 22 11:50:54.281 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349c2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854281), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 2.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2.0 }, max: { _id: 3.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.281 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.282 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 11 version: 1|8||51275b9e08efcab5a82bb605 based on: 1|6||51275b9e08efcab5a82bb605
m30999| Fri Feb 22 11:50:54.283 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 3.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.283 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 4.0 } ], shardId: "no_balance_collection.collA-_id_3.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.284 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349c3
m30001| Fri Feb 22 11:50:54.285 [conn4] splitChunk accepted at version 1|8||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:50:54.285 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349c4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854285), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } }
m30001| Fri Feb 22 11:50:54.286 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.286 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 12 version: 1|10||51275b9e08efcab5a82bb604 based on: 1|8||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:50:54.287 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 3.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.287 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 3.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 4.0 } ], shardId: "no_balance_collection.collB-_id_3.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.288 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349c5
m30001| Fri Feb 22 11:50:54.289 [conn4] splitChunk accepted at version 1|8||51275b9e08efcab5a82bb605
m30001| Fri Feb 22 11:50:54.290 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349c6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854290), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 3.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 3.0 }, max: { _id: 4.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.290 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.291 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 13 version: 1|10||51275b9e08efcab5a82bb605 based on: 1|8||51275b9e08efcab5a82bb605
m30999| Fri Feb 22 11:50:54.292 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { _id: 4.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.292 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 5.0 } ], shardId: "no_balance_collection.collA-_id_4.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.293 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349c7
m30001| Fri Feb 22 11:50:54.293 [conn4] splitChunk accepted at version 1|10||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:50:54.294 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349c8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854294), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } }
m30001| Fri Feb 22 11:50:54.294 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.295 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 14 version: 1|12||51275b9e08efcab5a82bb604 based on: 1|10||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:50:54.296 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|10||000000000000000000000000min: { _id: 4.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.296 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 4.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 5.0 } ], shardId: "no_balance_collection.collB-_id_4.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.297 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349c9
m30001| Fri Feb 22 11:50:54.298 [conn4] splitChunk accepted at version 1|10||51275b9e08efcab5a82bb605
m30001| Fri Feb 22 11:50:54.298 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349ca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854298), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 4.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 4.0 }, max: { _id: 5.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.299 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.300 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 15 version: 1|12||51275b9e08efcab5a82bb605 based on: 1|10||51275b9e08efcab5a82bb605
m30999| Fri Feb 22 11:50:54.301 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { _id: 5.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.301 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 6.0 } ], shardId: "no_balance_collection.collA-_id_5.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.301 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349cb
m30001| Fri Feb 22 11:50:54.302 [conn4] splitChunk accepted at version 1|12||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:50:54.303 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349cc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854303), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } }
m30001| Fri Feb 22 11:50:54.304 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.304 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 16 version: 1|14||51275b9e08efcab5a82bb604 based on: 1|12||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:50:54.305 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|12||000000000000000000000000min: { _id: 5.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.305 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 5.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 6.0 } ], shardId: "no_balance_collection.collB-_id_5.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.306 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349cd
m30001| Fri Feb 22 11:50:54.307 [conn4] splitChunk accepted at version 1|12||51275b9e08efcab5a82bb605
m30001| Fri Feb 22 11:50:54.308 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349ce", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854308), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 5.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 5.0 }, max: { _id: 6.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.308 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.309 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 17 version: 1|14||51275b9e08efcab5a82bb605 based on: 1|12||51275b9e08efcab5a82bb605
m30999| Fri Feb 22 11:50:54.310 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|14||000000000000000000000000min: { _id: 6.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.310 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 7.0 } ], shardId: "no_balance_collection.collA-_id_6.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.311 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349cf
m30001| Fri Feb 22 11:50:54.311 [conn4] splitChunk accepted at version 1|14||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:50:54.312 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349d0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854312), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } }
m30001| Fri Feb 22 11:50:54.312 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.313 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 18 version: 1|16||51275b9e08efcab5a82bb604 based on: 1|14||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:50:54.314 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|14||000000000000000000000000min: { _id: 6.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.314 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 6.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 7.0 } ], shardId: "no_balance_collection.collB-_id_6.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.315 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349d1
m30001| Fri Feb 22 11:50:54.316 [conn4] splitChunk accepted at version 1|14||51275b9e08efcab5a82bb605
m30001| Fri Feb 22 11:50:54.316 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349d2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854316), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 6.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 6.0 }, max: { _id: 7.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.317 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.318 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 19 version: 1|16||51275b9e08efcab5a82bb605 based on: 1|14||51275b9e08efcab5a82bb605
m30999| Fri Feb 22 11:50:54.318 [conn1] splitting: no_balance_collection.collA shard: ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { _id: 7.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.319 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collA", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 8.0 } ], shardId: "no_balance_collection.collA-_id_7.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.319 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349d3
m30001| Fri Feb 22 11:50:54.320 [conn4] splitChunk accepted at version 1|16||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:50:54.321 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349d4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854321), what: "split", ns: "no_balance_collection.collA", details: { before: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') }, right: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604') } } }
m30001| Fri Feb 22 11:50:54.321 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.322 [conn1] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 20 version: 1|18||51275b9e08efcab5a82bb604 based on: 1|16||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:50:54.323 [conn1] splitting: no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|16||000000000000000000000000min: { _id: 7.0 }max: { _id: MaxKey }
m30001| Fri Feb 22 11:50:54.323 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 7.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 8.0 } ], shardId: "no_balance_collection.collB-_id_7.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 11:50:54.324 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275b9ebef57cc68c0349d5
m30001| Fri Feb 22 11:50:54.325 [conn4] splitChunk accepted at version 1|16||51275b9e08efcab5a82bb605
m30001| Fri Feb 22 11:50:54.325 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:50:54-51275b9ebef57cc68c0349d6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533854325), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 7.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 7.0 }, max: { _id: 8.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } }
m30001| Fri Feb 22 11:50:54.326 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30999| Fri Feb 22 11:50:54.326 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 21 version: 1|18||51275b9e08efcab5a82bb605 based on: 1|16||51275b9e08efcab5a82bb605
----
Balancing disabled on no_balance_collection.collB
----
[
	{ "_id" : "no_balance_collection.collA", "lastmod" : ISODate("1970-01-16T18:12:13.854Z"), "dropped" : false, "key" : { "_id" : 1 }, "unique" : false, "lastmodEpoch" : ObjectId("51275b9e08efcab5a82bb604") },
	{ "_id" : "no_balance_collection.collB", "dropped" : false, "key" : { "_id" : 1 }, "lastmod" : ISODate("1970-01-16T18:12:13.854Z"), "lastmodEpoch" : ObjectId("51275b9e08efcab5a82bb605"), "noBalance" : true, "unique" : false }
]
m30999| Fri Feb 22 11:51:00.072 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:51:00.073 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
m30999| Fri Feb 22 11:51:00.073 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:51:00 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51275ba408efcab5a82bb606" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51275b9e08efcab5a82bb603" } }
m30999| Fri Feb 22 11:51:00.074 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275ba408efcab5a82bb606
m30999| Fri Feb 22 11:51:00.074 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:51:00.074 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:51:00.074 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:51:00.074 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
m30000| Fri Feb 22 11:51:00.076 [conn3] build index config.tags { _id: 1 }
m30000| Fri Feb 22 11:51:00.079 [conn3] build index done. scanned 0 total records. 0.003 secs
m30000| Fri Feb 22 11:51:00.079 [conn3] info: creating collection config.tags on add index
m30000| Fri Feb 22 11:51:00.079 [conn3] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 11:51:00.081 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 11:51:00.082 [Balancer] shard0001 has more chunks me:10 best: shard0000:0
m30999| Fri Feb 22 11:51:00.082 [Balancer] collection : no_balance_collection.collA
m30999| Fri Feb 22 11:51:00.082 [Balancer] donor : shard0001 chunks on 10
m30999| Fri Feb 22 11:51:00.082 [Balancer] receiver : shard0000 chunks on 0
m30999| Fri Feb 22 11:51:00.082 [Balancer] threshold : 2
m30999| Fri Feb 22 11:51:00.082 [Balancer] ns: no_balance_collection.collA going to move { _id: "no_balance_collection.collA-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604'), ns: "no_balance_collection.collA", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:51:00.082 [Balancer] moving chunk ns: no_balance_collection.collA moving ( ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 1|1||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:51:00.082 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:51:00.082 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collA", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collA-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:51:00.083 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275ba4bef57cc68c0349d7
m30001| Fri Feb 22 11:51:00.083 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:00-51275ba4bef57cc68c0349d8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533860083), what: "moveChunk.start", ns: "no_balance_collection.collA", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:51:00.084 [conn4] moveChunk request accepted at version 1|18||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:51:00.084 [conn4] moveChunk number of documents: 0
m30000| Fri Feb 22 11:51:00.084 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection no_balance_collection.collA from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 11:51:00.085 [initandlisten] connection accepted from 127.0.0.1:32842 #5 (5 connections now open)
m30000| Fri Feb 22 11:51:00.086 [FileAllocator] allocating new datafile /data/db/test0/no_balance_collection.ns, filling with zeroes...
m30000| Fri Feb 22 11:51:00.086 [FileAllocator] done allocating datafile /data/db/test0/no_balance_collection.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 11:51:00.086 [FileAllocator] allocating new datafile /data/db/test0/no_balance_collection.0, filling with zeroes...
m30000| Fri Feb 22 11:51:00.087 [FileAllocator] done allocating datafile /data/db/test0/no_balance_collection.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 11:51:00.087 [FileAllocator] allocating new datafile /data/db/test0/no_balance_collection.1, filling with zeroes...
m30000| Fri Feb 22 11:51:00.087 [FileAllocator] done allocating datafile /data/db/test0/no_balance_collection.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 11:51:00.090 [migrateThread] build index no_balance_collection.collA { _id: 1 }
m30000| Fri Feb 22 11:51:00.091 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 11:51:00.091 [migrateThread] info: creating collection no_balance_collection.collA on add index
m30000| Fri Feb 22 11:51:00.092 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:51:00.092 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: MinKey } -> { _id: 0.0 }
m30000| Fri Feb 22 11:51:00.093 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: MinKey } -> { _id: 0.0 }
m30001| Fri Feb 22 11:51:00.095 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:51:00.095 [conn4] moveChunk setting version to: 2|0||51275b9e08efcab5a82bb604
m30000| Fri Feb 22 11:51:00.095 [initandlisten] connection accepted from 127.0.0.1:51347 #10 (10 connections now open)
m30000| Fri Feb 22 11:51:00.095 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 11:51:00.103 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: MinKey } -> { _id: 0.0 }
m30000| Fri Feb 22 11:51:00.103 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: MinKey } -> { _id: 0.0 }
m30000| Fri Feb 22 11:51:00.103 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:00-51275ba41b56a92815dbab77", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533860103), what: "moveChunk.to", ns: "no_balance_collection.collA", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 11:51:00.104 [initandlisten] connection accepted from 127.0.0.1:63251 #11 (11 connections now open)
m30001| Fri Feb 22 11:51:00.105 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:51:00.105 [conn4] moveChunk updating self version to: 2|1||51275b9e08efcab5a82bb604 through { _id: 0.0 } -> { _id: 1.0 } for collection 'no_balance_collection.collA'
m30000| Fri Feb 22 11:51:00.106 [initandlisten] connection accepted from 127.0.0.1:62247 #12 (12 connections now open)
m30001| Fri Feb 22 11:51:00.106 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:00-51275ba4bef57cc68c0349d9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533860106), what: "moveChunk.commit", ns: "no_balance_collection.collA", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:51:00.106 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:51:00.107 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:51:00.107 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:51:00.107 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:51:00.107 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:51:00.107 [cleanupOldData-51275ba4bef57cc68c0349da] (start) waiting to cleanup no_balance_collection.collA from { _id: MinKey } -> { _id: 0.0 }, # cursors remaining: 0
m30001| Fri Feb 22 11:51:00.107 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30001| Fri Feb 22 11:51:00.107 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:00-51275ba4bef57cc68c0349db", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533860107), what: "moveChunk.from", ns: "no_balance_collection.collA", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 11:51:00.107 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 11:51:00.108 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 22 version: 2|1||51275b9e08efcab5a82bb604 based on: 1|18||51275b9e08efcab5a82bb604
m30999| Fri Feb 22 11:51:00.108 [Balancer] *** end of balancing round
m30999| Fri Feb 22 11:51:00.108 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
m30001| Fri Feb 22 11:51:00.127 [cleanupOldData-51275ba4bef57cc68c0349da] waiting to remove documents for no_balance_collection.collA from { _id: MinKey } -> { _id: 0.0 }
m30001| Fri Feb 22 11:51:00.127 [cleanupOldData-51275ba4bef57cc68c0349da] moveChunk starting delete for: no_balance_collection.collA from { _id: MinKey } -> { _id: 0.0 }
m30001| Fri Feb 22 11:51:00.127 [cleanupOldData-51275ba4bef57cc68c0349da] moveChunk deleted 0 documents for no_balance_collection.collA from { _id: MinKey } -> { _id: 0.0 }
{ "shardA" : 1, "shardB" : 9 }
{ "shardA" : 1, "shardB" : 9 }
{ "shardA" : 1, "shardB" : 9 }
{ "shardA" : 1, "shardB" : 9 }
{ "shardA" : 1, "shardB" : 9 }
m30999| Fri Feb 22 11:51:01.109 [Balancer] Refreshing MaxChunkSize: 50
m30999| Fri Feb 22 11:51:01.109 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
m30999| Fri Feb 22 11:51:01.110 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 11:51:01 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51275ba508efcab5a82bb607" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51275ba408efcab5a82bb606" } }
m30999| Fri Feb 22 11:51:01.110 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275ba508efcab5a82bb607
m30999| Fri Feb 22 11:51:01.110 [Balancer] *** start balancing round
m30999| Fri Feb 22 11:51:01.110 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 11:51:01.110 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 11:51:01.111 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
m30999| Fri Feb 22 11:51:01.112 [Balancer] shard0001 has more chunks me:9 best: shard0000:1
m30999| Fri Feb 22 11:51:01.112 [Balancer] collection : no_balance_collection.collA
m30999| Fri Feb 22 11:51:01.112 [Balancer] donor : shard0001 chunks on 9
m30999| Fri Feb 22 11:51:01.112 [Balancer] receiver : shard0000 chunks on 1
m30999| Fri Feb 22 11:51:01.112 [Balancer] threshold : 2
m30999| Fri Feb 22 11:51:01.112 [Balancer] ns: no_balance_collection.collA going to move { _id: "no_balance_collection.collA-_id_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604'), ns: "no_balance_collection.collA", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 11:51:01.112 [Balancer] moving chunk ns: no_balance_collection.collA moving ( ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 11:51:01.112 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 11:51:01.112 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collA", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collA-_id_0.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 11:51:01.113 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275ba5bef57cc68c0349dc
m30001| Fri Feb 22 11:51:01.113 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:01-51275ba5bef57cc68c0349dd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533861113), what: "moveChunk.start", ns: "no_balance_collection.collA", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:51:01.114 [conn4] moveChunk request accepted at version 2|1||51275b9e08efcab5a82bb604
m30001| Fri Feb 22 11:51:01.114 [conn4] moveChunk number of documents: 0
m30000| Fri Feb 22 11:51:01.114 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 1.0 } for collection no_balance_collection.collA from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 11:51:01.115 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 11:51:01.115 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 0.0 } -> { _id: 1.0 }
m30000| Fri Feb 22 11:51:01.115 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 0.0 } -> { _id: 1.0 }
m30001| Fri Feb 22 11:51:01.124 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 11:51:01.124 [conn4] moveChunk setting version to: 3|0||51275b9e08efcab5a82bb604
m30000| Fri Feb 22 11:51:01.125 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 11:51:01.126 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 0.0 } -> { _id: 1.0 }
m30000| Fri Feb 22 11:51:01.126 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 0.0 } -> { _id: 1.0 }
m30000| Fri Feb 22 11:51:01.126 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:01-51275ba51b56a92815dbab78", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533861126), what: "moveChunk.to", ns: "no_balance_collection.collA", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
m30001| Fri Feb 22 11:51:01.135 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 11:51:01.135 [conn4] moveChunk updating self version to: 3|1||51275b9e08efcab5a82bb604 through { _id: 1.0 } -> { _id: 2.0 } for collection 'no_balance_collection.collA'
m30001| Fri Feb 22 11:51:01.135 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:01-51275ba5bef57cc68c0349de", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533861135), what: "moveChunk.commit", ns: "no_balance_collection.collA", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 11:51:01.135 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:51:01.135 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:51:01.135 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 11:51:01.135 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 11:51:01.135 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 11:51:01.136 [cleanupOldData-51275ba5bef57cc68c0349df] (start) waiting to cleanup no_balance_collection.collA from { _id: 0.0 } -> { _id: 1.0 }, # cursors remaining: 0
m30001| Fri Feb 22 11:51:01.136 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
m30001| Fri Feb 22 11:51:01.136 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:01-51275ba5bef57cc68c0349e0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533861136), what: "moveChunk.from", ns: "no_balance_collection.collA", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:01.136 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:51:01.137 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 23 version: 3|1||51275b9e08efcab5a82bb604 based on: 2|1||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:51:01.137 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:01.138 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. m30001| Fri Feb 22 11:51:01.156 [cleanupOldData-51275ba5bef57cc68c0349df] waiting to remove documents for no_balance_collection.collA from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:51:01.156 [cleanupOldData-51275ba5bef57cc68c0349df] moveChunk starting delete for: no_balance_collection.collA from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:51:01.156 [cleanupOldData-51275ba5bef57cc68c0349df] moveChunk deleted 0 documents for no_balance_collection.collA from { _id: 0.0 } -> { _id: 1.0 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } m30999| Fri Feb 22 11:51:02.138 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:02.139 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:02.139 [Balancer] about to acquire distributed lock 
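The `[Balancer]` lines above log one migration per round: a donor (the shard with the most chunks), a receiver (the shard with the fewest), a threshold of 2, and a single chunk moved, with the printed counts walking from `{ "shardA" : 1, "shardB" : 9 }` toward even. The decision they record can be sketched as follows — a deliberately simplified model of that donor/receiver/threshold logic, not MongoDB's actual Balancer code; `balanceRound` and the shard names are illustrative:

```javascript
// Pick donor (most chunks) and receiver (fewest chunks); move one chunk per
// round while the difference is at least the threshold, as the log shows
// (it still migrates at 4 vs 6 with threshold 2, and stops at 5 vs 5).
function balanceRound(chunkCounts, threshold) {
  const shards = Object.keys(chunkCounts);
  const donor = shards.reduce((a, b) => (chunkCounts[b] > chunkCounts[a] ? b : a));
  const receiver = shards.reduce((a, b) => (chunkCounts[b] < chunkCounts[a] ? b : a));
  if (chunkCounts[donor] - chunkCounts[receiver] < threshold) {
    return null; // balanced: nothing to move this round
  }
  return { from: donor, to: receiver };
}

// Replaying the counts printed in the log for collA:
const counts = { shard0000: 1, shard0001: 9 };
const moves = [];
let move;
while ((move = balanceRound(counts, 2)) !== null) {
  counts[move.from] -= 1;
  counts[move.to] += 1;
  moves.push(move.from + " -> " + move.to);
}
// After 4 rounds counts reaches { shard0000: 5, shard0001: 5 }, matching the
// log's "Chunks for no_balance_collection.collA are balanced." milestone.
```

This also explains why the run takes several seconds: the balancer moves at most one chunk per collection per round, with roughly one round per second here.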
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:02 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275ba608efcab5a82bb608" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275ba508efcab5a82bb607" } } m30999| Fri Feb 22 11:51:02.139 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275ba608efcab5a82bb608 m30999| Fri Feb 22 11:51:02.139 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:02.139 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:02.139 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:02.140 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled m30999| Fri Feb 22 11:51:02.141 [Balancer] shard0001 has more chunks me:8 best: shard0000:2 m30999| Fri Feb 22 11:51:02.141 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:02.141 [Balancer] donor : shard0001 chunks on 8 m30999| Fri Feb 22 11:51:02.141 [Balancer] receiver : shard0000 chunks on 2 m30999| Fri Feb 22 11:51:02.141 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:02.141 [Balancer] ns: no_balance_collection.collA going to move { _id: "no_balance_collection.collA-_id_1.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604'), ns: "no_balance_collection.collA", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:51:02.141 [Balancer] moving chunk ns: no_balance_collection.collA moving ( ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 
m30001| Fri Feb 22 11:51:02.141 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:51:02.141 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collA", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collA-_id_1.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:51:02.142 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275ba6bef57cc68c0349e1 m30001| Fri Feb 22 11:51:02.142 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:02-51275ba6bef57cc68c0349e2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533862142), what: "moveChunk.start", ns: "no_balance_collection.collA", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:02.143 [conn4] moveChunk request accepted at version 3|1||51275b9e08efcab5a82bb604 m30001| Fri Feb 22 11:51:02.143 [conn4] moveChunk number of documents: 0 m30000| Fri Feb 22 11:51:02.143 [migrateThread] starting receiving-end of migration of chunk { _id: 1.0 } -> { _id: 2.0 } for collection no_balance_collection.collA from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 11:51:02.144 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:51:02.144 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:51:02.145 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:51:02.153 [conn4] moveChunk data transfer progress: { active: true, ns: 
"no_balance_collection.collA", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:51:02.154 [conn4] moveChunk setting version to: 4|0||51275b9e08efcab5a82bb604 m30000| Fri Feb 22 11:51:02.154 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:51:02.155 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:51:02.155 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:51:02.155 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:02-51275ba61b56a92815dbab79", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533862155), what: "moveChunk.to", ns: "no_balance_collection.collA", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:51:02.164 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:51:02.164 [conn4] moveChunk updating self version to: 4|1||51275b9e08efcab5a82bb604 through { _id: 2.0 } -> { _id: 3.0 } for collection 'no_balance_collection.collA' m30001| Fri Feb 22 11:51:02.165 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:02-51275ba6bef57cc68c0349e3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533862165), what: "moveChunk.commit", ns: "no_balance_collection.collA", details: { min: { _id: 1.0 }, 
max: { _id: 2.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:02.165 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:02.165 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:02.165 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:51:02.165 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:02.165 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:02.165 [cleanupOldData-51275ba6bef57cc68c0349e4] (start) waiting to cleanup no_balance_collection.collA from { _id: 1.0 } -> { _id: 2.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:51:02.165 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30001| Fri Feb 22 11:51:02.165 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:02-51275ba6bef57cc68c0349e5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533862165), what: "moveChunk.from", ns: "no_balance_collection.collA", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:02.165 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:51:02.166 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 24 version: 4|1||51275b9e08efcab5a82bb604 based on: 3|1||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:51:02.166 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:02.167 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. 
{ "shardA" : 3, "shardB" : 7 } m30001| Fri Feb 22 11:51:02.185 [cleanupOldData-51275ba6bef57cc68c0349e4] waiting to remove documents for no_balance_collection.collA from { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:51:02.185 [cleanupOldData-51275ba6bef57cc68c0349e4] moveChunk starting delete for: no_balance_collection.collA from { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:51:02.185 [cleanupOldData-51275ba6bef57cc68c0349e4] moveChunk deleted 0 documents for no_balance_collection.collA from { _id: 1.0 } -> { _id: 2.0 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } m30999| Fri Feb 22 11:51:03.167 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:03.168 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:03.168 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:03 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275ba708efcab5a82bb609" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275ba608efcab5a82bb608" } } m30999| Fri Feb 22 11:51:03.169 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275ba708efcab5a82bb609 m30999| Fri Feb 22 11:51:03.169 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:03.169 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:03.169 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:03.169 [Balancer] not balancing collection no_balance_collection.collB, 
explicitly disabled m30999| Fri Feb 22 11:51:03.170 [Balancer] shard0001 has more chunks me:7 best: shard0000:3 m30999| Fri Feb 22 11:51:03.170 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:03.170 [Balancer] donor : shard0001 chunks on 7 m30999| Fri Feb 22 11:51:03.170 [Balancer] receiver : shard0000 chunks on 3 m30999| Fri Feb 22 11:51:03.170 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:03.170 [Balancer] ns: no_balance_collection.collA going to move { _id: "no_balance_collection.collA-_id_2.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604'), ns: "no_balance_collection.collA", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:51:03.170 [Balancer] moving chunk ns: no_balance_collection.collA moving ( ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:51:03.170 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:51:03.170 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collA", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collA-_id_2.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:51:03.171 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275ba7bef57cc68c0349e6 m30001| Fri Feb 22 11:51:03.171 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:03-51275ba7bef57cc68c0349e7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533863171), what: "moveChunk.start", ns: 
"no_balance_collection.collA", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:03.172 [conn4] moveChunk request accepted at version 4|1||51275b9e08efcab5a82bb604 m30001| Fri Feb 22 11:51:03.173 [conn4] moveChunk number of documents: 0 m30000| Fri Feb 22 11:51:03.173 [migrateThread] starting receiving-end of migration of chunk { _id: 2.0 } -> { _id: 3.0 } for collection no_balance_collection.collA from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 11:51:03.174 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:51:03.174 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:51:03.174 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 2.0 } -> { _id: 3.0 } m30001| Fri Feb 22 11:51:03.183 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:51:03.183 [conn4] moveChunk setting version to: 5|0||51275b9e08efcab5a82bb604 m30000| Fri Feb 22 11:51:03.183 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:51:03.184 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:51:03.184 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 2.0 } -> { _id: 3.0 } m30000| Fri Feb 22 11:51:03.184 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:03-51275ba71b56a92815dbab7a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533863184), what: 
"moveChunk.to", ns: "no_balance_collection.collA", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:51:03.193 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:51:03.193 [conn4] moveChunk updating self version to: 5|1||51275b9e08efcab5a82bb604 through { _id: 3.0 } -> { _id: 4.0 } for collection 'no_balance_collection.collA' m30001| Fri Feb 22 11:51:03.194 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:03-51275ba7bef57cc68c0349e8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533863194), what: "moveChunk.commit", ns: "no_balance_collection.collA", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:03.194 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:03.194 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:03.194 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:51:03.194 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:03.194 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:03.194 [cleanupOldData-51275ba7bef57cc68c0349e9] (start) waiting to cleanup no_balance_collection.collA from { _id: 2.0 } -> { _id: 3.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:51:03.195 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. 
m30001| Fri Feb 22 11:51:03.195 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:03-51275ba7bef57cc68c0349ea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533863195), what: "moveChunk.from", ns: "no_balance_collection.collA", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:03.195 [Balancer] moveChunk result: { ok: 1.0 } { "shardA" : 4, "shardB" : 6 } m30999| Fri Feb 22 11:51:03.196 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 25 version: 5|1||51275b9e08efcab5a82bb604 based on: 4|1||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:51:03.196 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:03.196 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. m30001| Fri Feb 22 11:51:03.214 [cleanupOldData-51275ba7bef57cc68c0349e9] waiting to remove documents for no_balance_collection.collA from { _id: 2.0 } -> { _id: 3.0 } m30001| Fri Feb 22 11:51:03.215 [cleanupOldData-51275ba7bef57cc68c0349e9] moveChunk starting delete for: no_balance_collection.collA from { _id: 2.0 } -> { _id: 3.0 } m30001| Fri Feb 22 11:51:03.215 [cleanupOldData-51275ba7bef57cc68c0349e9] moveChunk deleted 0 documents for no_balance_collection.collA from { _id: 2.0 } -> { _id: 3.0 } { "shardA" : 4, "shardB" : 6 } { "shardA" : 4, "shardB" : 6 } { "shardA" : 4, "shardB" : 6 } { "shardA" : 4, "shardB" : 6 } m30999| Fri Feb 22 11:51:04.197 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:04.197 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:04.198 [Balancer] about to acquire distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:04 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275ba808efcab5a82bb60a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275ba708efcab5a82bb609" } } m30999| Fri Feb 22 11:51:04.198 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275ba808efcab5a82bb60a m30999| Fri Feb 22 11:51:04.198 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:04.198 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:04.198 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:04.198 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled m30999| Fri Feb 22 11:51:04.199 [Balancer] shard0001 has more chunks me:6 best: shard0000:4 m30999| Fri Feb 22 11:51:04.199 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:04.199 [Balancer] donor : shard0001 chunks on 6 m30999| Fri Feb 22 11:51:04.200 [Balancer] receiver : shard0000 chunks on 4 m30999| Fri Feb 22 11:51:04.200 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:04.200 [Balancer] ns: no_balance_collection.collA going to move { _id: "no_balance_collection.collA-_id_3.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb604'), ns: "no_balance_collection.collA", min: { _id: 3.0 }, max: { _id: 4.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:51:04.200 [Balancer] moving chunk ns: no_balance_collection.collA moving ( ns:no_balance_collection.collAshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 
m30001| Fri Feb 22 11:51:04.200 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:51:04.200 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collA", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collA-_id_3.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:51:04.201 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275ba8bef57cc68c0349eb m30001| Fri Feb 22 11:51:04.201 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:04-51275ba8bef57cc68c0349ec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533864201), what: "moveChunk.start", ns: "no_balance_collection.collA", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:04.201 [conn4] moveChunk request accepted at version 5|1||51275b9e08efcab5a82bb604 m30001| Fri Feb 22 11:51:04.202 [conn4] moveChunk number of documents: 0 m30000| Fri Feb 22 11:51:04.202 [migrateThread] starting receiving-end of migration of chunk { _id: 3.0 } -> { _id: 4.0 } for collection no_balance_collection.collA from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 11:51:04.202 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:51:04.202 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 3.0 } -> { _id: 4.0 } m30000| Fri Feb 22 11:51:04.203 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 3.0 } -> { _id: 4.0 } { "shardA" : 4, "shardB" : 6 } m30001| Fri Feb 22 11:51:04.212 [conn4] moveChunk data transfer progress: { active: 
true, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:51:04.212 [conn4] moveChunk setting version to: 6|0||51275b9e08efcab5a82bb604 m30000| Fri Feb 22 11:51:04.212 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:51:04.213 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collA' { _id: 3.0 } -> { _id: 4.0 } m30000| Fri Feb 22 11:51:04.213 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collA' { _id: 3.0 } -> { _id: 4.0 } m30000| Fri Feb 22 11:51:04.213 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:04-51275ba81b56a92815dbab7b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533864213), what: "moveChunk.to", ns: "no_balance_collection.collA", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:51:04.222 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collA", from: "localhost:30001", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:51:04.222 [conn4] moveChunk updating self version to: 6|1||51275b9e08efcab5a82bb604 through { _id: 4.0 } -> { _id: 5.0 } for collection 'no_balance_collection.collA' m30001| Fri Feb 22 11:51:04.223 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:04-51275ba8bef57cc68c0349ed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533864223), what: "moveChunk.commit", ns: "no_balance_collection.collA", details: { min: { _id: 
3.0 }, max: { _id: 4.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:04.223 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:04.223 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:04.223 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:51:04.223 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:04.223 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:04.223 [cleanupOldData-51275ba8bef57cc68c0349ee] (start) waiting to cleanup no_balance_collection.collA from { _id: 3.0 } -> { _id: 4.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:51:04.223 [conn4] distributed lock 'no_balance_collection.collA/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. m30001| Fri Feb 22 11:51:04.223 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:04-51275ba8bef57cc68c0349ef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533864223), what: "moveChunk.from", ns: "no_balance_collection.collA", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:04.223 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:51:04.224 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collA: 0ms sequenceNumber: 26 version: 6|1||51275b9e08efcab5a82bb604 based on: 5|1||51275b9e08efcab5a82bb604 m30999| Fri Feb 22 11:51:04.224 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:04.225 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. 
m30001| Fri Feb 22 11:51:04.243 [cleanupOldData-51275ba8bef57cc68c0349ee] waiting to remove documents for no_balance_collection.collA from { _id: 3.0 } -> { _id: 4.0 } m30001| Fri Feb 22 11:51:04.243 [cleanupOldData-51275ba8bef57cc68c0349ee] moveChunk starting delete for: no_balance_collection.collA from { _id: 3.0 } -> { _id: 4.0 } m30001| Fri Feb 22 11:51:04.243 [cleanupOldData-51275ba8bef57cc68c0349ee] moveChunk deleted 0 documents for no_balance_collection.collA from { _id: 3.0 } -> { _id: 4.0 } { "shardA" : 5, "shardB" : 5 } ---- Chunks for no_balance_collection.collA are balanced. ---- { "shardA" : 0, "shardB" : 10 } { "shardA" : 0, "shardB" : 10 } { "shardA" : 0, "shardB" : 10 } { "shardA" : 0, "shardB" : 10 } { "shardA" : 0, "shardB" : 10 } { "shardA" : 0, "shardB" : 10 } m30999| Fri Feb 22 11:51:05.225 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:05.226 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:05.226 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:05 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275ba908efcab5a82bb60b" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275ba808efcab5a82bb60a" } } m30999| Fri Feb 22 11:51:05.227 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275ba908efcab5a82bb60b m30999| Fri Feb 22 11:51:05.227 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:05.227 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:05.227 
[Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:05.228 [Balancer] shard0001 has more chunks me:5 best: shard0000:5 m30999| Fri Feb 22 11:51:05.228 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:05.228 [Balancer] donor : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:05.228 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:05.228 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:05.228 [Balancer] shard0001 has more chunks me:10 best: shard0000:0 m30999| Fri Feb 22 11:51:05.228 [Balancer] collection : no_balance_collection.collB m30999| Fri Feb 22 11:51:05.228 [Balancer] donor : shard0001 chunks on 10 m30999| Fri Feb 22 11:51:05.228 [Balancer] receiver : shard0000 chunks on 0 m30999| Fri Feb 22 11:51:05.228 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:05.228 [Balancer] ns: no_balance_collection.collB going to move { _id: "no_balance_collection.collB-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605'), ns: "no_balance_collection.collB", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 11:51:05.228 [Balancer] moving chunk ns: no_balance_collection.collB moving ( ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|1||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:51:05.229 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:51:05.229 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collB", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collB-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:51:05.230 [conn4] distributed lock 
'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275ba9bef57cc68c0349f0 m30001| Fri Feb 22 11:51:05.230 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:05-51275ba9bef57cc68c0349f1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533865230), what: "moveChunk.start", ns: "no_balance_collection.collB", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:05.230 [conn4] moveChunk request accepted at version 1|18||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:51:05.230 [conn4] moveChunk number of documents: 0 m30000| Fri Feb 22 11:51:05.231 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection no_balance_collection.collB from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 11:51:05.232 [migrateThread] build index no_balance_collection.collB { _id: 1 } m30000| Fri Feb 22 11:51:05.236 [migrateThread] build index done. scanned 0 total records. 
0.003 secs m30000| Fri Feb 22 11:51:05.236 [migrateThread] info: creating collection no_balance_collection.collB on add index m30000| Fri Feb 22 11:51:05.236 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:51:05.236 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:51:05.237 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:51:05.241 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:51:05.241 [conn4] moveChunk setting version to: 2|0||51275b9e08efcab5a82bb605 m30000| Fri Feb 22 11:51:05.241 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:51:05.247 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:51:05.247 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: MinKey } -> { _id: 0.0 } m30000| Fri Feb 22 11:51:05.247 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:05-51275ba91b56a92815dbab7c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533865247), what: "moveChunk.to", ns: "no_balance_collection.collB", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:51:05.251 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: MinKey }, max: { 
_id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:51:05.251 [conn4] moveChunk updating self version to: 2|1||51275b9e08efcab5a82bb605 through { _id: 0.0 } -> { _id: 1.0 } for collection 'no_balance_collection.collB' m30001| Fri Feb 22 11:51:05.252 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:05-51275ba9bef57cc68c0349f2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533865252), what: "moveChunk.commit", ns: "no_balance_collection.collB", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:05.252 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:05.252 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:05.252 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:51:05.252 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:05.252 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:05.252 [cleanupOldData-51275ba9bef57cc68c0349f3] (start) waiting to cleanup no_balance_collection.collB from { _id: MinKey } -> { _id: 0.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:51:05.252 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. 
m30001| Fri Feb 22 11:51:05.252 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:05-51275ba9bef57cc68c0349f4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533865252), what: "moveChunk.from", ns: "no_balance_collection.collB", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:05.252 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:51:05.253 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 27 version: 2|1||51275b9e08efcab5a82bb605 based on: 1|18||51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:51:05.253 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:05.254 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. m30001| Fri Feb 22 11:51:05.272 [cleanupOldData-51275ba9bef57cc68c0349f3] waiting to remove documents for no_balance_collection.collB from { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:51:05.272 [cleanupOldData-51275ba9bef57cc68c0349f3] moveChunk starting delete for: no_balance_collection.collB from { _id: MinKey } -> { _id: 0.0 } m30001| Fri Feb 22 11:51:05.272 [cleanupOldData-51275ba9bef57cc68c0349f3] moveChunk deleted 0 documents for no_balance_collection.collB from { _id: MinKey } -> { _id: 0.0 } { "shardA" : 1, "shardB" : 9 } { "shardA" : 1, "shardB" : 9 } { "shardA" : 1, "shardB" : 9 } { "shardA" : 1, "shardB" : 9 } { "shardA" : 1, "shardB" : 9 } m30999| Fri Feb 22 11:51:06.254 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:06.255 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:06.255 [Balancer] about to acquire distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:06 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275baa08efcab5a82bb60c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275ba908efcab5a82bb60b" } } m30999| Fri Feb 22 11:51:06.256 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275baa08efcab5a82bb60c m30999| Fri Feb 22 11:51:06.256 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:06.256 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:06.256 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:06.257 [Balancer] shard0001 has more chunks me:5 best: shard0000:5 m30999| Fri Feb 22 11:51:06.257 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:06.257 [Balancer] donor : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:06.257 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:06.257 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:06.258 [Balancer] shard0001 has more chunks me:9 best: shard0000:1 m30999| Fri Feb 22 11:51:06.258 [Balancer] collection : no_balance_collection.collB m30999| Fri Feb 22 11:51:06.258 [Balancer] donor : shard0001 chunks on 9 m30999| Fri Feb 22 11:51:06.258 [Balancer] receiver : shard0000 chunks on 1 m30999| Fri Feb 22 11:51:06.258 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:06.258 [Balancer] ns: no_balance_collection.collB going to move { _id: "no_balance_collection.collB-_id_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605'), ns: "no_balance_collection.collB", min: { _id: 0.0 }, max: { _id: 1.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 
11:51:06.258 [Balancer] moving chunk ns: no_balance_collection.collB moving ( ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 0.0 }max: { _id: 1.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:51:06.258 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:51:06.258 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collB", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 1.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collB-_id_0.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:51:06.259 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275baabef57cc68c0349f5 m30001| Fri Feb 22 11:51:06.259 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:06-51275baabef57cc68c0349f6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533866259), what: "moveChunk.start", ns: "no_balance_collection.collB", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:06.260 [conn4] moveChunk request accepted at version 2|1||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:51:06.261 [conn4] moveChunk number of documents: 0 m30000| Fri Feb 22 11:51:06.261 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 1.0 } for collection no_balance_collection.collB from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 11:51:06.262 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:51:06.262 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 0.0 } -> { 
_id: 1.0 } m30000| Fri Feb 22 11:51:06.262 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:51:06.271 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:51:06.271 [conn4] moveChunk setting version to: 3|0||51275b9e08efcab5a82bb605 m30000| Fri Feb 22 11:51:06.271 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:51:06.272 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:51:06.272 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 0.0 } -> { _id: 1.0 } m30000| Fri Feb 22 11:51:06.272 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:06-51275baa1b56a92815dbab7d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533866272), what: "moveChunk.to", ns: "no_balance_collection.collB", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:51:06.281 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 1.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:51:06.281 [conn4] moveChunk updating self version to: 3|1||51275b9e08efcab5a82bb605 through { _id: 1.0 } -> { _id: 2.0 } for collection 'no_balance_collection.collB' m30001| Fri Feb 22 11:51:06.282 [conn4] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:06-51275baabef57cc68c0349f7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533866282), what: "moveChunk.commit", ns: "no_balance_collection.collB", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:06.283 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:06.283 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:06.283 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:51:06.283 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:06.283 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:06.283 [cleanupOldData-51275baabef57cc68c0349f8] (start) waiting to cleanup no_balance_collection.collB from { _id: 0.0 } -> { _id: 1.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:51:06.283 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. 
m30001| Fri Feb 22 11:51:06.283 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:06-51275baabef57cc68c0349f9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533866283), what: "moveChunk.from", ns: "no_balance_collection.collB", details: { min: { _id: 0.0 }, max: { _id: 1.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:06.283 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:51:06.284 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 28 version: 3|1||51275b9e08efcab5a82bb605 based on: 2|1||51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:51:06.284 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:06.285 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. m30001| Fri Feb 22 11:51:06.303 [cleanupOldData-51275baabef57cc68c0349f8] waiting to remove documents for no_balance_collection.collB from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:51:06.303 [cleanupOldData-51275baabef57cc68c0349f8] moveChunk starting delete for: no_balance_collection.collB from { _id: 0.0 } -> { _id: 1.0 } m30001| Fri Feb 22 11:51:06.303 [cleanupOldData-51275baabef57cc68c0349f8] moveChunk deleted 0 documents for no_balance_collection.collB from { _id: 0.0 } -> { _id: 1.0 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } { "shardA" : 2, "shardB" : 8 } m30999| Fri Feb 22 11:51:07.285 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:07.286 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:07.286 [Balancer] about to acquire distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:07 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275bab08efcab5a82bb60d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275baa08efcab5a82bb60c" } } m30999| Fri Feb 22 11:51:07.286 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bab08efcab5a82bb60d m30999| Fri Feb 22 11:51:07.286 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:07.286 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:07.286 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:07.287 [Balancer] shard0001 has more chunks me:5 best: shard0000:5 m30999| Fri Feb 22 11:51:07.287 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:07.287 [Balancer] donor : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:07.287 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:07.287 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:07.288 [Balancer] shard0001 has more chunks me:8 best: shard0000:2 m30999| Fri Feb 22 11:51:07.288 [Balancer] collection : no_balance_collection.collB m30999| Fri Feb 22 11:51:07.288 [Balancer] donor : shard0001 chunks on 8 m30999| Fri Feb 22 11:51:07.288 [Balancer] receiver : shard0000 chunks on 2 m30999| Fri Feb 22 11:51:07.288 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:07.288 [Balancer] ns: no_balance_collection.collB going to move { _id: "no_balance_collection.collB-_id_1.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605'), ns: "no_balance_collection.collB", min: { _id: 1.0 }, max: { _id: 2.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 
11:51:07.288 [Balancer] moving chunk ns: no_balance_collection.collB moving ( ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 1.0 }max: { _id: 2.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 11:51:07.288 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 11:51:07.288 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collB", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1.0 }, max: { _id: 2.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collB-_id_1.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 11:51:07.289 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275babbef57cc68c0349fa m30001| Fri Feb 22 11:51:07.289 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:07-51275babbef57cc68c0349fb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533867289), what: "moveChunk.start", ns: "no_balance_collection.collB", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:07.290 [conn4] moveChunk request accepted at version 3|1||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:51:07.290 [conn4] moveChunk number of documents: 0 m30000| Fri Feb 22 11:51:07.290 [migrateThread] starting receiving-end of migration of chunk { _id: 1.0 } -> { _id: 2.0 } for collection no_balance_collection.collB from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 11:51:07.291 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 11:51:07.291 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 1.0 } -> { 
_id: 2.0 } m30000| Fri Feb 22 11:51:07.291 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:51:07.300 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 11:51:07.300 [conn4] moveChunk setting version to: 4|0||51275b9e08efcab5a82bb605 m30000| Fri Feb 22 11:51:07.300 [conn10] Waiting for commit to finish m30000| Fri Feb 22 11:51:07.301 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:51:07.301 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 1.0 } -> { _id: 2.0 } m30000| Fri Feb 22 11:51:07.301 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:07-51275bab1b56a92815dbab7e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533867301), what: "moveChunk.to", ns: "no_balance_collection.collB", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 11:51:07.311 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 1.0 }, max: { _id: 2.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 11:51:07.311 [conn4] moveChunk updating self version to: 4|1||51275b9e08efcab5a82bb605 through { _id: 2.0 } -> { _id: 3.0 } for collection 'no_balance_collection.collB' m30001| Fri Feb 22 11:51:07.311 [conn4] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:07-51275babbef57cc68c0349fc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533867311), what: "moveChunk.commit", ns: "no_balance_collection.collB", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 11:51:07.311 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:07.311 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:07.311 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 11:51:07.311 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 11:51:07.311 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 11:51:07.311 [cleanupOldData-51275babbef57cc68c0349fd] (start) waiting to cleanup no_balance_collection.collB from { _id: 1.0 } -> { _id: 2.0 }, # cursors remaining: 0 m30001| Fri Feb 22 11:51:07.312 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. 
m30001| Fri Feb 22 11:51:07.312 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:07-51275babbef57cc68c0349fe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533867312), what: "moveChunk.from", ns: "no_balance_collection.collB", details: { min: { _id: 1.0 }, max: { _id: 2.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 11:51:07.312 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 11:51:07.313 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 29 version: 4|1||51275b9e08efcab5a82bb605 based on: 3|1||51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:51:07.313 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:07.313 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. m30001| Fri Feb 22 11:51:07.332 [cleanupOldData-51275babbef57cc68c0349fd] waiting to remove documents for no_balance_collection.collB from { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:51:07.332 [cleanupOldData-51275babbef57cc68c0349fd] moveChunk starting delete for: no_balance_collection.collB from { _id: 1.0 } -> { _id: 2.0 } m30001| Fri Feb 22 11:51:07.332 [cleanupOldData-51275babbef57cc68c0349fd] moveChunk deleted 0 documents for no_balance_collection.collB from { _id: 1.0 } -> { _id: 2.0 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } { "shardA" : 3, "shardB" : 7 } m30999| Fri Feb 22 11:51:08.314 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:08.314 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:08.314 [Balancer] about to acquire distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:08 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275bac08efcab5a82bb60e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275bab08efcab5a82bb60d" } } m30999| Fri Feb 22 11:51:08.315 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bac08efcab5a82bb60e m30999| Fri Feb 22 11:51:08.315 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:08.315 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:08.315 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:08.316 [Balancer] shard0001 has more chunks me:5 best: shard0000:5 m30999| Fri Feb 22 11:51:08.316 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:08.316 [Balancer] donor : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:08.316 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:08.316 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:08.317 [Balancer] shard0001 has more chunks me:7 best: shard0000:3 m30999| Fri Feb 22 11:51:08.317 [Balancer] collection : no_balance_collection.collB m30999| Fri Feb 22 11:51:08.317 [Balancer] donor : shard0001 chunks on 7 m30999| Fri Feb 22 11:51:08.317 [Balancer] receiver : shard0000 chunks on 3 m30999| Fri Feb 22 11:51:08.317 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:08.317 [Balancer] ns: no_balance_collection.collB going to move { _id: "no_balance_collection.collB-_id_2.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605'), ns: "no_balance_collection.collB", min: { _id: 2.0 }, max: { _id: 3.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 
11:51:08.317 [Balancer] moving chunk ns: no_balance_collection.collB moving ( ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 2.0 }max: { _id: 3.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 11:51:08.317 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 11:51:08.317 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collB", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2.0 }, max: { _id: 3.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collB-_id_2.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 11:51:08.318 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275bacbef57cc68c0349ff
 m30001| Fri Feb 22 11:51:08.318 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:08-51275bacbef57cc68c034a00", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533868318), what: "moveChunk.start", ns: "no_balance_collection.collB", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 11:51:08.319 [conn4] moveChunk request accepted at version 4|1||51275b9e08efcab5a82bb605
 m30001| Fri Feb 22 11:51:08.319 [conn4] moveChunk number of documents: 0
 m30000| Fri Feb 22 11:51:08.321 [migrateThread] starting receiving-end of migration of chunk { _id: 2.0 } -> { _id: 3.0 } for collection no_balance_collection.collB from localhost:30001 (0 slaves detected)
 m30000| Fri Feb 22 11:51:08.321 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 11:51:08.322 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 2.0 } -> { _id: 3.0 }
 m30000| Fri Feb 22 11:51:08.322 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 2.0 } -> { _id: 3.0 }
 m30001| Fri Feb 22 11:51:08.331 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 11:51:08.331 [conn4] moveChunk setting version to: 5|0||51275b9e08efcab5a82bb605
 m30000| Fri Feb 22 11:51:08.331 [conn10] Waiting for commit to finish
 m30000| Fri Feb 22 11:51:08.332 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 2.0 } -> { _id: 3.0 }
 m30000| Fri Feb 22 11:51:08.332 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 2.0 } -> { _id: 3.0 }
 m30000| Fri Feb 22 11:51:08.332 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:08-51275bac1b56a92815dbab7f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533868332), what: "moveChunk.to", ns: "no_balance_collection.collB", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
 m30001| Fri Feb 22 11:51:08.341 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 2.0 }, max: { _id: 3.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 11:51:08.342 [conn4] moveChunk updating self version to: 5|1||51275b9e08efcab5a82bb605 through { _id: 3.0 } -> { _id: 4.0 } for collection 'no_balance_collection.collB'
 m30001| Fri Feb 22 11:51:08.342 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:08-51275bacbef57cc68c034a01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533868342), what: "moveChunk.commit", ns: "no_balance_collection.collB", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 11:51:08.342 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 11:51:08.342 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 11:51:08.342 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 11:51:08.342 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 11:51:08.342 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 11:51:08.342 [cleanupOldData-51275bacbef57cc68c034a02] (start) waiting to cleanup no_balance_collection.collB from { _id: 2.0 } -> { _id: 3.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 11:51:08.343 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
 m30001| Fri Feb 22 11:51:08.343 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:08-51275bacbef57cc68c034a03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533868343), what: "moveChunk.from", ns: "no_balance_collection.collB", details: { min: { _id: 2.0 }, max: { _id: 3.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 11:51:08.343 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:51:08.344 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 30 version: 5|1||51275b9e08efcab5a82bb605 based on: 4|1||51275b9e08efcab5a82bb605
 m30999| Fri Feb 22 11:51:08.344 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:08.344 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30001| Fri Feb 22 11:51:08.362 [cleanupOldData-51275bacbef57cc68c034a02] waiting to remove documents for no_balance_collection.collB from { _id: 2.0 } -> { _id: 3.0 }
 m30001| Fri Feb 22 11:51:08.363 [cleanupOldData-51275bacbef57cc68c034a02] moveChunk starting delete for: no_balance_collection.collB from { _id: 2.0 } -> { _id: 3.0 }
 m30001| Fri Feb 22 11:51:08.363 [cleanupOldData-51275bacbef57cc68c034a02] moveChunk deleted 0 documents for no_balance_collection.collB from { _id: 2.0 } -> { _id: 3.0 }
{ "shardA" : 4, "shardB" : 6 }
{ "shardA" : 4, "shardB" : 6 }
{ "shardA" : 4, "shardB" : 6 }
{ "shardA" : 4, "shardB" : 6 }
{ "shardA" : 4, "shardB" : 6 }
 m30999| Fri Feb 22 11:51:09.345 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:09.345 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:09.345 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:09 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bad08efcab5a82bb60f" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bac08efcab5a82bb60e" } }
 m30999| Fri Feb 22 11:51:09.346 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bad08efcab5a82bb60f
 m30999| Fri Feb 22 11:51:09.346 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:09.346 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:09.346 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:09.347 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:09.347 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:09.347 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:09.347 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:09.347 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:09.348 [Balancer] shard0001 has more chunks me:6 best: shard0000:4
 m30999| Fri Feb 22 11:51:09.348 [Balancer] collection : no_balance_collection.collB
 m30999| Fri Feb 22 11:51:09.348 [Balancer] donor : shard0001 chunks on 6
 m30999| Fri Feb 22 11:51:09.348 [Balancer] receiver : shard0000 chunks on 4
 m30999| Fri Feb 22 11:51:09.348 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:09.348 [Balancer] ns: no_balance_collection.collB going to move { _id: "no_balance_collection.collB-_id_3.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605'), ns: "no_balance_collection.collB", min: { _id: 3.0 }, max: { _id: 4.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 11:51:09.348 [Balancer] moving chunk ns: no_balance_collection.collB moving ( ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 3.0 }max: { _id: 4.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 11:51:09.348 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 11:51:09.348 [conn4] received moveChunk request: { moveChunk: "no_balance_collection.collB", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 3.0 }, max: { _id: 4.0 }, maxChunkSizeBytes: 52428800, shardId: "no_balance_collection.collB-_id_3.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 11:51:09.349 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' acquired, ts : 51275badbef57cc68c034a04
 m30001| Fri Feb 22 11:51:09.349 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:09-51275badbef57cc68c034a05", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533869349), what: "moveChunk.start", ns: "no_balance_collection.collB", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 11:51:09.350 [conn4] moveChunk request accepted at version 5|1||51275b9e08efcab5a82bb605
 m30001| Fri Feb 22 11:51:09.350 [conn4] moveChunk number of documents: 0
 m30000| Fri Feb 22 11:51:09.351 [migrateThread] starting receiving-end of migration of chunk { _id: 3.0 } -> { _id: 4.0 } for collection no_balance_collection.collB from localhost:30001 (0 slaves detected)
 m30000| Fri Feb 22 11:51:09.351 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 11:51:09.351 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 3.0 } -> { _id: 4.0 }
 m30000| Fri Feb 22 11:51:09.352 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 3.0 } -> { _id: 4.0 }
 m30001| Fri Feb 22 11:51:09.361 [conn4] moveChunk data transfer progress: { active: true, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 11:51:09.361 [conn4] moveChunk setting version to: 6|0||51275b9e08efcab5a82bb605
 m30000| Fri Feb 22 11:51:09.361 [conn10] Waiting for commit to finish
 m30000| Fri Feb 22 11:51:09.362 [migrateThread] migrate commit succeeded flushing to secondaries for 'no_balance_collection.collB' { _id: 3.0 } -> { _id: 4.0 }
 m30000| Fri Feb 22 11:51:09.362 [migrateThread] migrate commit flushed to journal for 'no_balance_collection.collB' { _id: 3.0 } -> { _id: 4.0 }
 m30000| Fri Feb 22 11:51:09.362 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:09-51275bad1b56a92815dbab80", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361533869362), what: "moveChunk.to", ns: "no_balance_collection.collB", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 10 } }
 m30001| Fri Feb 22 11:51:09.371 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "no_balance_collection.collB", from: "localhost:30001", min: { _id: 3.0 }, max: { _id: 4.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 11:51:09.371 [conn4] moveChunk updating self version to: 6|1||51275b9e08efcab5a82bb605 through { _id: 4.0 } -> { _id: 5.0 } for collection 'no_balance_collection.collB'
 m30001| Fri Feb 22 11:51:09.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:09-51275badbef57cc68c034a06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533869372), what: "moveChunk.commit", ns: "no_balance_collection.collB", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 11:51:09.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 11:51:09.372 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 11:51:09.372 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 11:51:09.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 11:51:09.372 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 11:51:09.372 [cleanupOldData-51275badbef57cc68c034a07] (start) waiting to cleanup no_balance_collection.collB from { _id: 3.0 } -> { _id: 4.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 11:51:09.372 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked.
 m30001| Fri Feb 22 11:51:09.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:09-51275badbef57cc68c034a08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533869372), what: "moveChunk.from", ns: "no_balance_collection.collB", details: { min: { _id: 3.0 }, max: { _id: 4.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 11:51:09.373 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 11:51:09.374 [Balancer] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 31 version: 6|1||51275b9e08efcab5a82bb605 based on: 5|1||51275b9e08efcab5a82bb605
 m30999| Fri Feb 22 11:51:09.374 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:09.374 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30001| Fri Feb 22 11:51:09.392 [cleanupOldData-51275badbef57cc68c034a07] waiting to remove documents for no_balance_collection.collB from { _id: 3.0 } -> { _id: 4.0 }
 m30001| Fri Feb 22 11:51:09.392 [cleanupOldData-51275badbef57cc68c034a07] moveChunk starting delete for: no_balance_collection.collB from { _id: 3.0 } -> { _id: 4.0 }
 m30001| Fri Feb 22 11:51:09.392 [cleanupOldData-51275badbef57cc68c034a07] moveChunk deleted 0 documents for no_balance_collection.collB from { _id: 3.0 } -> { _id: 4.0 }
{ "shardA" : 5, "shardB" : 5 }
---- Chunks for no_balance_collection.collB are balanced. ----
 m30999| Fri Feb 22 11:51:09.471 [conn1] setShardVersion shard0000 localhost:30000 no_balance_collection.collB { setShardVersion: "no_balance_collection.collB", configdb: "localhost:30000", version: Timestamp 6000|0, versionEpoch: ObjectId('51275b9e08efcab5a82bb605'), serverID: ObjectId('51275b9e08efcab5a82bb602'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f2b0 31
 m30999| Fri Feb 22 11:51:09.472 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "no_balance_collection.collB", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'no_balance_collection.collB'" }
 m30999| Fri Feb 22 11:51:09.472 [conn1] setShardVersion shard0000 localhost:30000 no_balance_collection.collB { setShardVersion: "no_balance_collection.collB", configdb: "localhost:30000", version: Timestamp 6000|0, versionEpoch: ObjectId('51275b9e08efcab5a82bb605'), serverID: ObjectId('51275b9e08efcab5a82bb602'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f2b0 31
 m30000| Fri Feb 22 11:51:09.472 [conn6] no current chunk manager found for this shard, will initialize
 m30999| Fri Feb 22 11:51:09.473 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
 m30999| Fri Feb 22 11:51:09.473 [conn1] setShardVersion shard0001 localhost:30001 no_balance_collection.collB { setShardVersion: "no_balance_collection.collB", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51275b9e08efcab5a82bb605'), serverID: ObjectId('51275b9e08efcab5a82bb602'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f630 31
 m30999| Fri Feb 22 11:51:09.473 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51275b9e08efcab5a82bb605'), ok: 1.0 }
 m30999| Fri Feb 22 11:51:10.375 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:10.375 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:10.375 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:10 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bae08efcab5a82bb610" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bad08efcab5a82bb60f" } }
 m30999| Fri Feb 22 11:51:10.376 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bae08efcab5a82bb610
 m30999| Fri Feb 22 11:51:10.376 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:10.376 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:10.376 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:10.376 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:10.378 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:10.378 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:10.378 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:10.378 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:10.378 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:10.378 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:10.378 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:10.378 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:11.172 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718598 splitThreshold: 23592960
 m30999| Fri Feb 22 11:51:11.173 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:51:16.379 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:16.379 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:16.379 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:16 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bb408efcab5a82bb611" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bae08efcab5a82bb610" } }
 m30999| Fri Feb 22 11:51:16.380 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bb408efcab5a82bb611
 m30999| Fri Feb 22 11:51:16.380 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:16.380 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:16.380 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:16.380 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:16.381 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:16.381 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:16.381 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:16.381 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:16.381 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:16.381 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:16.381 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:16.381 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:19.022 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960
 m30999| Fri Feb 22 11:51:19.022 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:51:22.382 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:22.382 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:22.382 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:22 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bba08efcab5a82bb612" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bb408efcab5a82bb611" } }
 m30999| Fri Feb 22 11:51:22.383 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bba08efcab5a82bb612
 m30999| Fri Feb 22 11:51:22.383 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:22.383 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:22.383 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:22.383 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:22.385 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:22.385 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:22.385 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:22.385 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:22.385 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:22.385 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:22.385 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:22.385 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:24.053 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:51:24 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838', sleeping for 30000ms
 m30999| Fri Feb 22 11:51:26.903 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960
 m30999| Fri Feb 22 11:51:26.903 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:51:28.386 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:28.386 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:28.386 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:28 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bc008efcab5a82bb613" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bba08efcab5a82bb612" } }
 m30999| Fri Feb 22 11:51:28.387 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bc008efcab5a82bb613
 m30999| Fri Feb 22 11:51:28.387 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:28.387 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:28.387 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:28.387 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:28.388 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:28.388 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:28.388 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:28.388 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:28.388 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:28.388 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:28.388 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:28.388 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:34.389 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:34.389 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:34.389 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:34 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bc608efcab5a82bb614" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bc008efcab5a82bb613" } }
 m30999| Fri Feb 22 11:51:34.390 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bc608efcab5a82bb614
 m30999| Fri Feb 22 11:51:34.390 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:34.390 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:34.390 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:34.390 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:34.392 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:34.392 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:34.392 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:34.392 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:34.392 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:34.392 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:34.392 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:34.392 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:35.025 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960
 m30999| Fri Feb 22 11:51:35.025 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:51:40.393 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:40.393 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:40.393 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:40 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bcc08efcab5a82bb615" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bc608efcab5a82bb614" } }
 m30999| Fri Feb 22 11:51:40.394 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bcc08efcab5a82bb615
 m30999| Fri Feb 22 11:51:40.394 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:40.394 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:40.394 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:40.394 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:40.395 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:40.395 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:40.395 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:40.395 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:40.395 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:40.395 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:40.395 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:40.395 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:42.997 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960
 m30999| Fri Feb 22 11:51:42.997 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:51:46.396 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:46.396 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:46.396 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:46 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bd208efcab5a82bb616" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bcc08efcab5a82bb615" } }
 m30999| Fri Feb 22 11:51:46.397 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bd208efcab5a82bb616
 m30999| Fri Feb 22 11:51:46.397 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:46.397 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:46.397 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:46.397 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:46.398 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:46.398 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:46.398 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:46.398 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:46.398 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:46.398 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:46.398 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:46.399 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30999| Fri Feb 22 11:51:51.251 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960
 m30999| Fri Feb 22 11:51:51.251 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 11:51:52.399 [Balancer] Refreshing MaxChunkSize: 50
 m30999| Fri Feb 22 11:51:52.400 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 )
 m30999| Fri Feb 22 11:51:52.400 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:51:52 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275bd808efcab5a82bb617" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51275bd208efcab5a82bb616" } }
 m30999| Fri Feb 22 11:51:52.400 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bd808efcab5a82bb617
 m30999| Fri Feb 22 11:51:52.400 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:51:52.400 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:51:52.400 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:51:52.401 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled
 m30999| Fri Feb 22 11:51:52.405 [Balancer] shard0001 has more chunks me:5 best: shard0000:5
 m30999| Fri Feb 22 11:51:52.405 [Balancer] collection : no_balance_collection.collA
 m30999| Fri Feb 22 11:51:52.405 [Balancer] donor : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:52.405 [Balancer] receiver : shard0000 chunks on 5
 m30999| Fri Feb 22 11:51:52.405 [Balancer] threshold : 2
 m30999| Fri Feb 22 11:51:52.405 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:51:52.405 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:51:52.405 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked.
 m30001| Fri Feb 22 11:51:52.694 [FileAllocator] allocating new datafile /data/db/test1/no_balance_collection.2, filling with zeroes...
m30001| Fri Feb 22 11:51:52.695 [FileAllocator] done allocating datafile /data/db/test1/no_balance_collection.2, size: 256MB, took 0 secs m30999| Fri Feb 22 11:51:54.053 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 11:51:54 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838', sleeping for 30000ms m30999| Fri Feb 22 11:51:58.406 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:51:58.406 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:51:58.406 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:51:58 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275bde08efcab5a82bb618" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275bd808efcab5a82bb617" } } m30999| Fri Feb 22 11:51:58.407 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275bde08efcab5a82bb618 m30999| Fri Feb 22 11:51:58.407 [Balancer] *** start balancing round m30999| Fri Feb 22 11:51:58.407 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:51:58.407 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:51:58.407 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled m30999| Fri Feb 22 11:51:58.408 [Balancer] shard0001 has more chunks me:5 best: shard0000:5 m30999| Fri Feb 22 11:51:58.408 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:51:58.408 [Balancer] donor : shard0000 chunks on 5 m30999| Fri Feb 22 
11:51:58.408 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 11:51:58.408 [Balancer] threshold : 2 m30999| Fri Feb 22 11:51:58.408 [Balancer] no need to move any chunk m30999| Fri Feb 22 11:51:58.409 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:51:58.409 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. m30999| Fri Feb 22 11:51:58.695 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960 m30001| Fri Feb 22 11:51:58.695 [conn4] request split points lookup for chunk no_balance_collection.collB { : 8.0 } -->> { : MaxKey } m30001| Fri Feb 22 11:51:58.695 [conn4] limiting split vector to 250000 (from 364088) objects m30001| Fri Feb 22 11:51:59.224 [conn4] max number of requested split points reached (2) before the end of chunk no_balance_collection.collB { : 8.0 } -->> { : MaxKey } m30001| Fri Feb 22 11:51:59.224 [conn4] warning: Finding the split vector for no_balance_collection.collB over { _id: 1.0 } keyCount: 250000 numSplits: 2 lookedAt: 0 took 529ms m30001| Fri Feb 22 11:51:59.224 [conn4] command admin.$cmd command: { splitVector: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 26214400, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) r:861183 reslen:111 529ms m30001| Fri Feb 22 11:51:59.225 [conn4] received splitChunk request: { splitChunk: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 8.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 837039.0 } ], shardId: "no_balance_collection.collB-_id_8.0", configdb: "localhost:30000" } m30001| Fri Feb 22 11:51:59.226 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' 
acquired, ts : 51275bdfbef57cc68c034a09 m30001| Fri Feb 22 11:51:59.227 [conn4] splitChunk accepted at version 6|1||51275b9e08efcab5a82bb605 m30001| Fri Feb 22 11:51:59.227 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:59-51275bdfbef57cc68c034a0a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:49874", time: new Date(1361533919227), what: "split", ns: "no_balance_collection.collB", details: { before: { min: { _id: 8.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 8.0 }, max: { _id: 837039.0 }, lastmod: Timestamp 6000|2, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') }, right: { min: { _id: 837039.0 }, max: { _id: MaxKey }, lastmod: Timestamp 6000|3, lastmodEpoch: ObjectId('51275b9e08efcab5a82bb605') } } } m30001| Fri Feb 22 11:51:59.228 [conn4] distributed lock 'no_balance_collection.collB/bs-smartos-x86-64-1.10gen.cc:30001:1361533854:3160' unlocked. 
m30999| Fri Feb 22 11:51:59.229 [conn1] ChunkManager: time to load chunks for no_balance_collection.collB: 0ms sequenceNumber: 32 version: 6|3||51275b9e08efcab5a82bb605 based on: 6|1||51275b9e08efcab5a82bb605 m30999| Fri Feb 22 11:51:59.229 [conn1] autosplitted no_balance_collection.collB shard: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 1|18||000000000000000000000000min: { _id: 8.0 }max: { _id: MaxKey } on: { _id: 837039.0 } (splitThreshold 23592960) (migrate suggested, but no migrations allowed) m30999| Fri Feb 22 11:51:59.229 [conn1] setShardVersion shard0001 localhost:30001 no_balance_collection.collB { setShardVersion: "no_balance_collection.collB", configdb: "localhost:30000", version: Timestamp 6000|3, versionEpoch: ObjectId('51275b9e08efcab5a82bb605'), serverID: ObjectId('51275b9e08efcab5a82bb602'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f630 32 m30999| Fri Feb 22 11:51:59.229 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('51275b9e08efcab5a82bb605'), ok: 1.0 } m30999| Fri Feb 22 11:52:00.373 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 6|3||000000000000000000000000min: { _id: 837039.0 }max: { _id: MaxKey } dataWritten: 4718612 splitThreshold: 23592960 m30001| Fri Feb 22 11:52:00.373 [conn4] request split points lookup for chunk no_balance_collection.collB { : 837039.0 } -->> { : MaxKey } m30001| Fri Feb 22 11:52:00.373 [conn4] limiting split vector to 250000 (from 364088) objects m30999| Fri Feb 22 11:52:00.400 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 11:52:04.409 [Balancer] Refreshing MaxChunkSize: 50 m30999| Fri Feb 22 11:52:04.410 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838 ) m30999| Fri Feb 22 11:52:04.410 
[Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838", m30999| "when" : { "$date" : "Fri Feb 22 11:52:04 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51275be408efcab5a82bb619" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51275bde08efcab5a82bb618" } } m30999| Fri Feb 22 11:52:04.411 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' acquired, ts : 51275be408efcab5a82bb619 m30999| Fri Feb 22 11:52:04.411 [Balancer] *** start balancing round m30999| Fri Feb 22 11:52:04.411 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 11:52:04.411 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 11:52:04.411 [Balancer] not balancing collection no_balance_collection.collB, explicitly disabled m30999| Fri Feb 22 11:52:04.412 [Balancer] shard0001 has more chunks me:5 best: shard0000:5 m30999| Fri Feb 22 11:52:04.412 [Balancer] collection : no_balance_collection.collA m30999| Fri Feb 22 11:52:04.412 [Balancer] donor : shard0000 chunks on 5 m30999| Fri Feb 22 11:52:04.412 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 11:52:04.412 [Balancer] threshold : 2 m30999| Fri Feb 22 11:52:04.412 [Balancer] no need to move any chunk m30999| Fri Feb 22 11:52:04.412 [Balancer] *** end of balancing round m30999| Fri Feb 22 11:52:04.412 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361533854:16838' unlocked. 
m30999| Fri Feb 22 11:52:08.121 [conn1] about to initiate autosplit: ns:no_balance_collection.collBshard: shard0001:localhost:30001lastmod: 6|3||000000000000000000000000min: { _id: 837039.0 }max: { _id: MaxKey } dataWritten: 4718595 splitThreshold: 23592960 m30001| Fri Feb 22 11:52:08.121 [conn4] request split points lookup for chunk no_balance_collection.collB { : 837039.0 } -->> { : MaxKey } m30001| Fri Feb 22 11:52:08.121 [conn4] limiting split vector to 250000 (from 364088) objects m30001| Fri Feb 22 11:52:08.326 [conn4] warning: Finding the split vector for no_balance_collection.collB over { _id: 1.0 } keyCount: 250000 numSplits: 0 lookedAt: 158620 took 205ms m30001| Fri Feb 22 11:52:08.326 [conn4] command admin.$cmd command: { splitVector: "no_balance_collection.collB", keyPattern: { _id: 1.0 }, min: { _id: 837039.0 }, max: { _id: MaxKey }, maxChunkSizeBytes: 26214400, maxSplitPoints: 2, maxChunkObjects: 250000 } ntoreturn:1 keyUpdates:0 locks(micros) r:205414 reslen:69 205ms m30999| Fri Feb 22 11:52:08.326 [conn1] chunk not full enough to trigger auto-split no split entry { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:09-51275badbef57cc68c034a06", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:49874", "time" : ISODate("2013-02-22T11:51:09.372Z"), "what" : "moveChunk.commit", "ns" : "no_balance_collection.collB", "details" : { "min" : { "_id" : 3 }, "max" : { "_id" : 4 }, "from" : "shard0001", "to" : "shard0000" } } { "_id" : "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:51:09-51275badbef57cc68c034a06", "server" : "bs-smartos-x86-64-1.10gen.cc", "clientAddr" : "127.0.0.1:49874", "time" : ISODate("2013-02-22T11:51:09.372Z"), "what" : "moveChunk.commit", "ns" : "no_balance_collection.collB", "details" : { "min" : { "_id" : 3 }, "max" : { "_id" : 4 }, "from" : "shard0001", "to" : "shard0000" } } m30999| Fri Feb 22 11:52:08.582 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 11:52:08.600 [conn3] end 
connection 127.0.0.1:46697 (4 connections now open) m30001| Fri Feb 22 11:52:08.600 [conn4] end connection 127.0.0.1:49874 (4 connections now open) m30000| Fri Feb 22 11:52:08.602 [conn6] end connection 127.0.0.1:56649 (11 connections now open) m30000| Fri Feb 22 11:52:08.602 [conn5] end connection 127.0.0.1:36872 (11 connections now open) m30000| Fri Feb 22 11:52:08.602 [conn7] end connection 127.0.0.1:47184 (11 connections now open) m30000| Fri Feb 22 11:52:08.602 [conn3] end connection 127.0.0.1:57186 (11 connections now open) Fri Feb 22 11:52:09.582 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 11:52:09.582 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 11:52:09.583 [interruptThread] now exiting m30000| Fri Feb 22 11:52:09.583 dbexit: m30000| Fri Feb 22 11:52:09.583 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 11:52:09.583 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 11:52:09.583 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 11:52:09.583 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 11:52:09.583 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 11:52:09.583 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 11:52:09.583 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 11:52:09.583 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 11:52:09.583 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 11:52:09.583 [interruptThread] shutdown: final commit... 
m30000| Fri Feb 22 11:52:09.583 [conn1] end connection 127.0.0.1:61428 (7 connections now open) m30000| Fri Feb 22 11:52:09.583 [conn2] end connection 127.0.0.1:46218 (7 connections now open) m30000| Fri Feb 22 11:52:09.583 [conn8] end connection 127.0.0.1:39570 (7 connections now open) m30001| Fri Feb 22 11:52:09.583 [conn5] end connection 127.0.0.1:32842 (2 connections now open) m30000| Fri Feb 22 11:52:09.583 [conn11] end connection 127.0.0.1:63251 (7 connections now open) m30000| Fri Feb 22 11:52:09.583 [conn12] end connection 127.0.0.1:62247 (7 connections now open) m30000| Fri Feb 22 11:52:09.583 [conn9] end connection 127.0.0.1:50293 (7 connections now open) m30000| Fri Feb 22 11:52:09.583 [conn10] end connection 127.0.0.1:51347 (5 connections now open) m30000| Fri Feb 22 11:52:09.608 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 11:52:09.609 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 11:52:09.609 [interruptThread] journalCleanup... m30000| Fri Feb 22 11:52:09.609 [interruptThread] removeJournalFiles m30000| Fri Feb 22 11:52:09.609 dbexit: really exiting now Fri Feb 22 11:52:10.583 shell: stopped mongo program on port 30000 m30001| Fri Feb 22 11:52:10.583 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 11:52:10.583 [interruptThread] now exiting m30001| Fri Feb 22 11:52:10.583 dbexit: m30001| Fri Feb 22 11:52:10.583 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 11:52:10.583 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 11:52:10.583 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 11:52:10.583 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 11:52:10.583 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 11:52:10.583 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 11:52:10.583 [interruptThread] shutdown: going to close sockets... 
m30001| Fri Feb 22 11:52:10.583 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 11:52:10.583 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 11:52:10.583 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 11:52:10.583 [conn1] end connection 127.0.0.1:52676 (1 connection now open) m30001| Fri Feb 22 11:52:10.616 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 11:52:10.629 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 11:52:10.629 [interruptThread] journalCleanup... m30001| Fri Feb 22 11:52:10.629 [interruptThread] removeJournalFiles m30001| Fri Feb 22 11:52:10.631 dbexit: really exiting now Fri Feb 22 11:52:11.583 shell: stopped mongo program on port 30001 *** ShardingTest test completed successfully in 78.03 seconds *** Fri Feb 22 11:52:11.623 [conn102] end connection 127.0.0.1:57192 (0 connections now open) 1.3036 minutes Fri Feb 22 11:52:11.644 [initandlisten] connection accepted from 127.0.0.1:42279 #103 (1 connection now open) Fri Feb 22 11:52:11.645 [conn103] end connection 127.0.0.1:42279 (0 connections now open) ******************************************* Test : recstore.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/recstore.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/recstore.js";TestData.testFile = "recstore.js";TestData.testName = "recstore";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:52:11 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:52:11.818 [initandlisten] connection accepted from 127.0.0.1:57762 #104 (1 connection now open) null Fri Feb 22 11:52:11.828 [conn104] CMD: drop test.storetest Fri Feb 22 11:52:11.829 [conn104] build index test.storetest { _id: 1 } Fri Feb 22 11:52:11.831 [conn104] build index done. scanned 0 total records. 0.001 secs Fri Feb 22 11:52:11.831 [conn104] build index test.storetest { z: 1.0 } Fri Feb 22 11:52:11.833 [conn104] build index done. scanned 2 total records. 0.001 secs Fri Feb 22 11:52:11.833 [conn104] build index test.storetest { q: 1.0 } Fri Feb 22 11:52:11.835 [conn104] build index done. scanned 2 total records. 0.002 secs Fri Feb 22 11:52:11.836 [conn104] CMD: dropIndexes test.storetest Fri Feb 22 11:52:11.841 [conn104] build index test.storetest { z: 1.0 } Fri Feb 22 11:52:11.841 [conn104] build index done. scanned 2 total records. 0 secs Fri Feb 22 11:52:11.842 [conn104] build index test.storetest { q: 1.0 } Fri Feb 22 11:52:11.842 [conn104] build index done. scanned 2 total records. 
0 secs Fri Feb 22 11:52:11.843 [conn104] DatabaseHolder::closeAll path:/data/db/sconsTests/ Fri Feb 22 11:52:11.886 [conn104] end connection 127.0.0.1:57762 (0 connections now open) 259.4311 ms Fri Feb 22 11:52:11.906 [initandlisten] connection accepted from 127.0.0.1:37190 #105 (1 connection now open) Fri Feb 22 11:52:11.907 [conn105] end connection 127.0.0.1:37190 (0 connections now open) ******************************************* Test : remove9.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/remove9.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/remove9.js";TestData.testFile = "remove9.js";TestData.testName = "remove9";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:52:11 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:52:12.083 [initandlisten] connection accepted from 127.0.0.1:54584 #106 (1 connection now open) null Fri Feb 22 11:52:12.093 [conn106] CMD: drop test.jstests_remove9 Fri Feb 22 11:52:12.096 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval while( 1 ) { for( i = 0; i < 10000; ++i ) { db.jstests_remove9.save( {i:i} ); } db.jstests_remove9.remove( {i: {$gte:0} } ); } 127.0.0.1:27999 Fri Feb 22 11:52:12.171 [initandlisten] connection accepted from 127.0.0.1:34101 #107 (2 connections now open) Fri Feb 22 11:52:12.173 [conn107] build index test.jstests_remove9 { _id: 1 } Fri Feb 22 11:52:12.174 [conn107] build index done. scanned 0 total records. 
0 secs Fri Feb 22 11:52:12.292 [conn106] remove test.jstests_remove9 query: { i: 9071.0 } ndeleted:0 keyUpdates:0 numYields: 15 locks(micros) w:5542 103ms Fri Feb 22 11:52:12.399 [conn106] remove test.jstests_remove9 query: { i: 8517.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:8146 107ms Fri Feb 22 11:52:12.776 [conn106] remove test.jstests_remove9 query: { i: 2752.0 } ndeleted:1 keyUpdates:0 numYields: 36 locks(micros) w:21936 376ms Fri Feb 22 11:52:13.145 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:13.175 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 20 locks(micros) w:399024 410ms Fri Feb 22 11:52:13.183 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:13.442 [conn106] remove test.jstests_remove9 query: { i: 8247.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8905 255ms Fri Feb 22 11:52:13.738 [conn106] remove test.jstests_remove9 query: { i: 2565.0 } ndeleted:1 keyUpdates:0 numYields: 28 locks(micros) w:17593 296ms Fri Feb 22 11:52:14.109 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:14.157 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:14.173 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:14.177 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 24 locks(micros) w:414131 451ms Fri Feb 22 11:52:14.184 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:52:14.447 [conn106] remove test.jstests_remove9 query: { i: 7424.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9139 256ms Fri Feb 22 11:52:14.724 [conn106] remove test.jstests_remove9 query: { i: 6523.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:18482 277ms Fri Feb 22 11:52:15.027 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:15.131 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 21 locks(micros) w:411021 421ms Fri Feb 22 11:52:15.135 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:15.350 [conn106] remove test.jstests_remove9 query: { i: 9566.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8295 206ms Fri Feb 22 11:52:15.562 [conn106] remove test.jstests_remove9 query: { i: 1684.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:15167 211ms Fri Feb 22 11:52:15.686 [conn106] remove test.jstests_remove9 query: { i: 3420.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:18380 124ms Fri Feb 22 11:52:15.894 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:16.024 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:16.046 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:52:16.052 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 19 locks(micros) w:370744 380ms Fri Feb 22 11:52:16.197 [conn106] remove test.jstests_remove9 query: { i: 4170.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:6743 135ms Fri Feb 22 11:52:16.303 [conn106] remove test.jstests_remove9 query: { i: 6557.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:7086 106ms Fri Feb 22 11:52:16.592 [conn106] remove test.jstests_remove9 query: { i: 3731.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:20520 288ms Fri Feb 22 11:52:16.914 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:16.997 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 21 locks(micros) w:391058 411ms Fri Feb 22 11:52:17.001 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:17.263 [conn106] remove test.jstests_remove9 query: { i: 5832.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9281 256ms Fri Feb 22 11:52:17.548 [conn106] remove test.jstests_remove9 query: { i: 5641.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:18620 284ms Fri Feb 22 11:52:17.860 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:17.956 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:17.964 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 23 locks(micros) w:404328 427ms Fri Feb 22 11:52:17.965 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:17.975 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:52:18.193 [conn106] remove test.jstests_remove9 query: { i: 417.0 } ndeleted:1 keyUpdates:0 numYields: 21 locks(micros) w:8371 216ms Fri Feb 22 11:52:18.425 [conn106] remove test.jstests_remove9 query: { i: 9200.0 } ndeleted:0 keyUpdates:0 numYields: 22 locks(micros) w:17178 231ms Fri Feb 22 11:52:18.535 [conn106] remove test.jstests_remove9 query: { i: 3522.0 } ndeleted:1 keyUpdates:0 numYields: 9 locks(micros) w:21506 109ms Fri Feb 22 11:52:18.847 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:18.888 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:18.893 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9991 keyUpdates:0 numYields: 19 locks(micros) w:405173 395ms Fri Feb 22 11:52:18.896 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:19.160 [conn106] remove test.jstests_remove9 query: { i: 5274.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8729 255ms Fri Feb 22 11:52:19.436 [conn106] remove test.jstests_remove9 query: { i: 9297.0 } ndeleted:0 keyUpdates:0 numYields: 26 locks(micros) w:21693 274ms Fri Feb 22 11:52:19.658 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:19.829 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:19.841 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 20 locks(micros) w:407198 412ms Fri Feb 22 11:52:19.851 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:52:19.982 [conn106] remove test.jstests_remove9 query: { i: 2648.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:5681 127ms Fri Feb 22 11:52:20.089 [conn106] remove test.jstests_remove9 query: { i: 9646.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:7279 106ms Fri Feb 22 11:52:20.346 [conn106] remove test.jstests_remove9 query: { i: 6418.0 } ndeleted:1 keyUpdates:0 numYields: 24 locks(micros) w:23956 256ms Fri Feb 22 11:52:20.701 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:20.753 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:20.763 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:20.768 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 20 locks(micros) w:380175 385ms Fri Feb 22 11:52:20.772 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:20.940 [conn106] remove test.jstests_remove9 query: { i: 5764.0 } ndeleted:0 keyUpdates:0 numYields: 16 locks(micros) w:4987 164ms Fri Feb 22 11:52:21.065 [conn106] remove test.jstests_remove9 query: { i: 8324.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:8216 124ms Fri Feb 22 11:52:21.234 [conn106] remove test.jstests_remove9 query: { i: 3466.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:9409 168ms Fri Feb 22 11:52:21.437 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:21.464 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:21.476 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:52:21.481 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 13 locks(micros) w:257298 260ms Fri Feb 22 11:52:21.483 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:21.743 [conn106] remove test.jstests_remove9 query: { i: 5595.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:6241 254ms Fri Feb 22 11:52:21.888 [conn106] remove test.jstests_remove9 query: { i: 9030.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:10935 144ms Fri Feb 22 11:52:22.019 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:22.206 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:22.246 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 20 locks(micros) w:336158 373ms Fri Feb 22 11:52:22.252 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:22.512 [conn106] remove test.jstests_remove9 query: { i: 1718.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9034 255ms Fri Feb 22 11:52:22.787 [conn106] remove test.jstests_remove9 query: { i: 2663.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:17079 274ms Fri Feb 22 11:52:23.102 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:52:23.177 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 19 locks(micros) w:406066 400ms Fri Feb 22 11:52:23.181 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9
Fri Feb 22 11:52:23.462 [conn106] remove test.jstests_remove9 query: { i: 5214.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8641 210ms
Fri Feb 22 11:52:23.675 [conn106] remove test.jstests_remove9 query: { i: 4739.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:17189 213ms
Fri Feb 22 11:52:23.921 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:24.118 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:24.135 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:24.139 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 21 locks(micros) w:403072 415ms
Fri Feb 22 11:52:24.148 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:24.447 [conn106] remove test.jstests_remove9 query: { i: 2894.0 } ndeleted:1 keyUpdates:0 numYields: 23 locks(micros) w:14438 244ms
Fri Feb 22 11:52:24.690 [conn106] remove test.jstests_remove9 query: { i: 5626.0 } ndeleted:1 keyUpdates:0 numYields: 22 locks(micros) w:20420 242ms
Fri Feb 22 11:52:25.042 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:25.112 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9978 keyUpdates:0 numYields: 23 locks(micros) w:393759 434ms
Fri Feb 22 11:52:25.114 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:25.125 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:25.383 [conn106] remove test.jstests_remove9 query: { i: 9523.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9370 255ms
Fri Feb 22 11:52:25.660 [conn106] remove test.jstests_remove9 query: { i: 7882.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:18105 276ms
Fri Feb 22 11:52:26.013 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:26.049 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 20 locks(micros) w:391070 401ms
Fri Feb 22 11:52:26.057 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:26.245 [conn106] remove test.jstests_remove9 query: { i: 8785.0 } ndeleted:0 keyUpdates:0 numYields: 18 locks(micros) w:8908 185ms
Fri Feb 22 11:52:26.454 [conn106] remove test.jstests_remove9 query: { i: 3903.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:13220 208ms
Fri Feb 22 11:52:26.587 [conn106] remove test.jstests_remove9 query: { i: 2133.0 } ndeleted:1 keyUpdates:0 numYields: 12 locks(micros) w:14890 132ms
Fri Feb 22 11:52:26.959 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:27.014 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:27.029 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:27.035 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 25 locks(micros) w:422699 457ms
Fri Feb 22 11:52:27.146 [conn106] remove test.jstests_remove9 query: { i: 8358.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:4719 103ms
Fri Feb 22 11:52:27.253 [conn106] remove test.jstests_remove9 query: { i: 1081.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:7080 106ms
Fri Feb 22 11:52:27.568 [conn106] remove test.jstests_remove9 query: { i: 5400.0 } ndeleted:1 keyUpdates:0 numYields: 30 locks(micros) w:21972 315ms
Fri Feb 22 11:52:27.976 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:28.046 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 20 locks(micros) w:403181 408ms
Fri Feb 22 11:52:28.053 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:28.201 [conn106] remove test.jstests_remove9 query: { i: 8054.0 } ndeleted:0 keyUpdates:0 numYields: 14 locks(micros) w:6334 144ms
Fri Feb 22 11:52:28.307 [conn106] remove test.jstests_remove9 query: { i: 3774.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6996 106ms
Fri Feb 22 11:52:28.521 [conn106] remove test.jstests_remove9 query: { i: 357.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:15853 212ms
Fri Feb 22 11:52:28.946 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:29.003 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:29.008 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 21 locks(micros) w:407169 421ms
Fri Feb 22 11:52:29.256 [conn106] remove test.jstests_remove9 query: { i: 3834.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8456 209ms
Fri Feb 22 11:52:29.551 [conn106] remove test.jstests_remove9 query: { i: 6585.0 } ndeleted:1 keyUpdates:0 numYields: 28 locks(micros) w:16881 294ms
Fri Feb 22 11:52:29.918 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:29.955 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:29.967 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:29.972 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 23 locks(micros) w:402560 433ms
Fri Feb 22 11:52:29.975 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:29.986 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:30.245 [conn106] remove test.jstests_remove9 query: { i: 4824.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9696 256ms
Fri Feb 22 11:52:30.532 [conn106] remove test.jstests_remove9 query: { i: 6567.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:18731 286ms
Fri Feb 22 11:52:30.881 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:30.932 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 21 locks(micros) w:413944 421ms
Fri Feb 22 11:52:30.940 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:31.203 [conn106] remove test.jstests_remove9 query: { i: 5775.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:10177 257ms
Fri Feb 22 11:52:31.510 [conn106] remove test.jstests_remove9 query: { i: 4544.0 } ndeleted:1 keyUpdates:0 numYields: 29 locks(micros) w:17010 306ms
Fri Feb 22 11:52:31.815 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:31.894 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:31.902 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 20 locks(micros) w:400851 404ms
Fri Feb 22 11:52:32.109 [conn106] remove test.jstests_remove9 query: { i: 9989.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:9814 125ms
Fri Feb 22 11:52:32.408 [conn106] remove test.jstests_remove9 query: { i: 9669.0 } ndeleted:0 keyUpdates:0 numYields: 29 locks(micros) w:13788 298ms
Fri Feb 22 11:52:32.701 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:32.805 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:32.813 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 21 locks(micros) w:383206 397ms
Fri Feb 22 11:52:32.927 [conn106] remove test.jstests_remove9 query: { i: 4000.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:4525 104ms
Fri Feb 22 11:52:33.033 [conn106] remove test.jstests_remove9 query: { i: 1618.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:6598 105ms
Fri Feb 22 11:52:33.354 [conn106] remove test.jstests_remove9 query: { i: 9871.0 } ndeleted:0 keyUpdates:0 numYields: 30 locks(micros) w:18538 320ms
Fri Feb 22 11:52:33.576 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:33.747 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:33.754 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:33.756 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 21 locks(micros) w:409682 417ms
Fri Feb 22 11:52:33.763 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:33.978 [conn106] remove test.jstests_remove9 query: { i: 274.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:8185 205ms
Fri Feb 22 11:52:34.315 [conn106] remove test.jstests_remove9 query: { i: 4903.0 } ndeleted:1 keyUpdates:0 numYields: 32 locks(micros) w:25235 337ms
Fri Feb 22 11:52:34.566 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:34.705 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:34.725 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 21 locks(micros) w:406141 418ms
Fri Feb 22 11:52:34.987 [conn106] remove test.jstests_remove9 query: { i: 6681.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8760 255ms
Fri Feb 22 11:52:35.253 [conn106] remove test.jstests_remove9 query: { i: 7221.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:16161 265ms
Fri Feb 22 11:52:35.576 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:35.640 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:35.662 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:35.671 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 24 locks(micros) w:403886 446ms
Fri Feb 22 11:52:35.676 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:35.957 [conn106] remove test.jstests_remove9 query: { i: 5016.0 } ndeleted:0 keyUpdates:0 numYields: 18 locks(micros) w:9033 190ms
Fri Feb 22 11:52:36.205 [conn106] remove test.jstests_remove9 query: { i: 3184.0 } ndeleted:1 keyUpdates:0 numYields: 23 locks(micros) w:19856 248ms
Fri Feb 22 11:52:36.496 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:36.648 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:36.657 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 22 locks(micros) w:394445 422ms
Fri Feb 22 11:52:36.923 [conn106] remove test.jstests_remove9 query: { i: 4926.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9044 255ms
Fri Feb 22 11:52:37.192 [conn106] remove test.jstests_remove9 query: { i: 4041.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:18175 268ms
Fri Feb 22 11:52:37.405 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:37.541 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:37.551 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 19 locks(micros) w:368001 369ms
Fri Feb 22 11:52:37.776 [conn106] remove test.jstests_remove9 query: { i: 3929.0 } ndeleted:0 keyUpdates:0 numYields: 21 locks(micros) w:8362 216ms
Fri Feb 22 11:52:37.987 [conn106] remove test.jstests_remove9 query: { i: 270.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:14393 211ms
Fri Feb 22 11:52:38.335 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:38.478 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 22 locks(micros) w:391382 408ms
Fri Feb 22 11:52:38.479 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:38.489 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:38.748 [conn106] remove test.jstests_remove9 query: { i: 5314.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9207 256ms
Fri Feb 22 11:52:39.056 [conn106] remove test.jstests_remove9 query: { i: 7957.0 } ndeleted:1 keyUpdates:0 numYields: 29 locks(micros) w:17608 307ms
Fri Feb 22 11:52:39.397 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:39.451 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9991 keyUpdates:0 numYields: 22 locks(micros) w:396437 418ms
Fri Feb 22 11:52:39.457 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:39.669 [conn106] remove test.jstests_remove9 query: { i: 9030.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8083 206ms
Fri Feb 22 11:52:39.982 [conn106] remove test.jstests_remove9 query: { i: 3545.0 } ndeleted:1 keyUpdates:0 numYields: 30 locks(micros) w:18085 311ms
Fri Feb 22 11:52:40.298 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:40.404 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 21 locks(micros) w:400623 410ms
Fri Feb 22 11:52:40.407 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:40.634 [conn106] remove test.jstests_remove9 query: { i: 3296.0 } ndeleted:1 keyUpdates:0 numYields: 21 locks(micros) w:8932 215ms
Fri Feb 22 11:52:40.953 [conn106] remove test.jstests_remove9 query: { i: 3514.0 } ndeleted:1 keyUpdates:0 numYields: 30 locks(micros) w:20448 318ms
Fri Feb 22 11:52:41.060 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:41.354 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:41.390 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 23 locks(micros) w:414402 444ms
Fri Feb 22 11:52:41.396 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:41.580 [conn106] remove test.jstests_remove9 query: { i: 9989.0 } ndeleted:0 keyUpdates:0 numYields: 11 locks(micros) w:8462 115ms
Fri Feb 22 11:52:41.932 [conn106] remove test.jstests_remove9 query: { i: 6518.0 } ndeleted:1 keyUpdates:0 numYields: 33 locks(micros) w:20219 351ms
Fri Feb 22 11:52:42.218 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:42.328 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:42.336 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 21 locks(micros) w:411285 418ms
Fri Feb 22 11:52:42.338 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:42.348 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:42.515 [conn106] remove test.jstests_remove9 query: { i: 3988.0 } ndeleted:0 keyUpdates:0 numYields: 16 locks(micros) w:7310 165ms
Fri Feb 22 11:52:42.651 [conn106] remove test.jstests_remove9 query: { i: 8826.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:8651 135ms
Fri Feb 22 11:52:42.878 [conn106] remove test.jstests_remove9 query: { i: 3977.0 } ndeleted:1 keyUpdates:0 numYields: 21 locks(micros) w:17211 227ms
Fri Feb 22 11:52:43.239 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:43.287 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 21 locks(micros) w:402923 419ms
Fri Feb 22 11:52:43.293 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:43.555 [conn106] remove test.jstests_remove9 query: { i: 7396.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8998 255ms
Fri Feb 22 11:52:43.833 [conn106] remove test.jstests_remove9 query: { i: 6321.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:22989 277ms
Fri Feb 22 11:52:44.149 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:44.244 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 22 locks(micros) w:398096 420ms
Fri Feb 22 11:52:44.248 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:44.518 [conn106] remove test.jstests_remove9 query: { i: 1860.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:13161 209ms
Fri Feb 22 11:52:44.773 [conn106] remove test.jstests_remove9 query: { i: 6818.0 } ndeleted:1 keyUpdates:0 numYields: 24 locks(micros) w:22194 254ms
Fri Feb 22 11:52:45.101 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:45.208 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:45.215 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:45.216 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 23 locks(micros) w:398010 424ms
Fri Feb 22 11:52:45.223 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:45.465 [conn106] remove test.jstests_remove9 query: { i: 5493.0 } ndeleted:0 keyUpdates:0 numYields: 23 locks(micros) w:8103 236ms
Fri Feb 22 11:52:45.731 [conn106] remove test.jstests_remove9 query: { i: 4747.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:15803 265ms
Fri Feb 22 11:52:45.970 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:46.063 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 17 locks(micros) w:342137 347ms
Fri Feb 22 11:52:46.066 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:46.309 [conn106] remove test.jstests_remove9 query: { i: 6076.0 } ndeleted:0 keyUpdates:0 numYields: 23 locks(micros) w:7452 235ms
Fri Feb 22 11:52:46.511 [conn106] remove test.jstests_remove9 query: { i: 7344.0 } ndeleted:1 keyUpdates:0 numYields: 19 locks(micros) w:13132 201ms
Fri Feb 22 11:52:46.773 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:46.888 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:46.898 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 21 locks(micros) w:379445 397ms
Fri Feb 22 11:52:47.073 [conn106] remove test.jstests_remove9 query: { i: 302.0 } ndeleted:1 keyUpdates:0 numYields: 12 locks(micros) w:8643 125ms
Fri Feb 22 11:52:47.265 [conn106] remove test.jstests_remove9 query: { i: 289.0 } ndeleted:1 keyUpdates:0 numYields: 18 locks(micros) w:14418 191ms
Fri Feb 22 11:52:47.452 [conn106] remove test.jstests_remove9 query: { i: 8132.0 } ndeleted:1 keyUpdates:0 numYields: 17 locks(micros) w:15357 187ms
Fri Feb 22 11:52:47.627 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:47.842 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:47.867 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 22 locks(micros) w:414358 433ms
Fri Feb 22 11:52:47.869 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:48.004 [conn106] remove test.jstests_remove9 query: { i: 6478.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:5630 124ms
Fri Feb 22 11:52:48.174 [conn106] remove test.jstests_remove9 query: { i: 2512.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:13655 169ms
Fri Feb 22 11:52:48.420 [conn106] remove test.jstests_remove9 query: { i: 640.0 } ndeleted:1 keyUpdates:0 numYields: 23 locks(micros) w:20864 245ms
Fri Feb 22 11:52:48.514 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:48.805 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:48.831 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 21 locks(micros) w:405504 420ms
Fri Feb 22 11:52:48.953 [conn106] remove test.jstests_remove9 query: { i: 5748.0 } ndeleted:0 keyUpdates:0 numYields: 11 locks(micros) w:5253 115ms
Fri Feb 22 11:52:49.121 [conn106] remove test.jstests_remove9 query: { i: 6878.0 } ndeleted:0 keyUpdates:0 numYields: 16 locks(micros) w:10580 166ms
Fri Feb 22 11:52:49.375 [conn106] remove test.jstests_remove9 query: { i: 9388.0 } ndeleted:0 keyUpdates:0 numYields: 24 locks(micros) w:20150 253ms
Fri Feb 22 11:52:49.646 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:49.770 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:49.783 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 19 locks(micros) w:364359 380ms
Fri Feb 22 11:52:50.054 [conn106] remove test.jstests_remove9 query: { i: 5624.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:6913 256ms
Fri Feb 22 11:52:50.258 [conn106] remove test.jstests_remove9 query: { i: 4755.0 } ndeleted:1 keyUpdates:0 numYields: 19 locks(micros) w:15663 203ms
Fri Feb 22 11:52:50.476 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:50.552 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 15 locks(micros) w:302952 306ms
Fri Feb 22 11:52:50.554 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:50.863 [conn106] remove test.jstests_remove9 query: { i: 9975.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:9334 211ms
Fri Feb 22 11:52:51.089 [conn106] remove test.jstests_remove9 query: { i: 8022.0 } ndeleted:1 keyUpdates:0 numYields: 21 locks(micros) w:21832 225ms
Fri Feb 22 11:52:51.426 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:51.534 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:51.545 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 23 locks(micros) w:470665 463ms
Fri Feb 22 11:52:51.813 [conn106] remove test.jstests_remove9 query: { i: 9869.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9839 256ms
Fri Feb 22 11:52:51.921 [conn106] remove test.jstests_remove9 query: { i: 4865.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:10916 108ms
Fri Feb 22 11:52:52.096 [conn106] remove test.jstests_remove9 query: { i: 2755.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:17176 173ms
Fri Feb 22 11:52:52.314 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:52.517 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:52.541 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:52.549 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9978 keyUpdates:0 numYields: 26 locks(micros) w:418878 460ms
Fri Feb 22 11:52:52.557 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:52.819 [conn106] remove test.jstests_remove9 query: { i: 7568.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9198 256ms
Fri Feb 22 11:52:53.100 [conn106] remove test.jstests_remove9 query: { i: 4500.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:18537 280ms
Fri Feb 22 11:52:53.379 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:53.522 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:53.529 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:53.532 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 24 locks(micros) w:408335 447ms
Fri Feb 22 11:52:53.538 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:53.753 [conn106] remove test.jstests_remove9 query: { i: 4330.0 } ndeleted:0 keyUpdates:0 numYields: 14 locks(micros) w:9493 145ms
Fri Feb 22 11:52:54.027 [conn106] remove test.jstests_remove9 query: { i: 4895.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:18331 272ms
Fri Feb 22 11:52:54.272 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:54.369 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 13 locks(micros) w:278224 275ms
Fri Feb 22 11:52:54.371 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:54.551 [conn106] remove test.jstests_remove9 query: { i: 3873.0 } ndeleted:0 keyUpdates:0 numYields: 17 locks(micros) w:5210 175ms
Fri Feb 22 11:52:54.743 [conn106] remove test.jstests_remove9 query: { i: 293.0 } ndeleted:1 keyUpdates:0 numYields: 18 locks(micros) w:12847 191ms
Fri Feb 22 11:52:54.942 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:55.070 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 17 locks(micros) w:317440 335ms
Fri Feb 22 11:52:55.071 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:55.081 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:55.185 [conn106] remove test.jstests_remove9 query: { i: 32.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:4321 103ms
Fri Feb 22 11:52:55.292 [conn106] remove test.jstests_remove9 query: { i: 1593.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:7784 106ms
Fri Feb 22 11:52:55.400 [conn106] remove test.jstests_remove9 query: { i: 395.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:10093 107ms
Fri Feb 22 11:52:55.577 [conn106] remove test.jstests_remove9 query: { i: 5021.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:14261 176ms
Fri Feb 22 11:52:55.921 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:55.970 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 20 locks(micros) w:391945 403ms
Fri Feb 22 11:52:55.977 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:56.238 [conn106] remove test.jstests_remove9 query: { i: 9786.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8852 256ms
Fri Feb 22 11:52:56.505 [conn106] remove test.jstests_remove9 query: { i: 4850.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:22816 267ms
Fri Feb 22 11:52:56.822 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:56.901 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:56.914 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 22 locks(micros) w:393694 418ms
Fri Feb 22 11:52:56.924 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:57.094 [conn106] remove test.jstests_remove9 query: { i: 6888.0 } ndeleted:0 keyUpdates:0 numYields: 16 locks(micros) w:7625 166ms
Fri Feb 22 11:52:57.230 [conn106] remove test.jstests_remove9 query: { i: 6468.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:8939 136ms
Fri Feb 22 11:52:57.465 [conn106] remove test.jstests_remove9 query: { i: 9459.0 } ndeleted:0 keyUpdates:0 numYields: 22 locks(micros) w:19687 234ms
Fri Feb 22 11:52:57.803 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:57.879 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:57.895 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 24 locks(micros) w:397607 435ms
Fri Feb 22 11:52:58.040 [conn106] remove test.jstests_remove9 query: { i: 9292.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:5917 105ms
Fri Feb 22 11:52:58.041 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:58.250 [conn106] remove test.jstests_remove9 query: { i: 2040.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:9197 209ms
Fri Feb 22 11:52:58.445 [conn106] remove test.jstests_remove9 query: { i: 595.0 } ndeleted:1 keyUpdates:0 numYields: 18 locks(micros) w:16865 194ms
Fri Feb 22 11:52:58.811 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:58.845 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:58.862 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 22 locks(micros) w:407798 427ms
Fri Feb 22 11:52:58.870 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:58.984 [conn106] remove test.jstests_remove9 query: { i: 1114.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:5292 113ms
Fri Feb 22 11:52:59.193 [conn106] remove test.jstests_remove9 query: { i: 8165.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8611 208ms
Fri Feb 22 11:52:59.441 [conn106] remove test.jstests_remove9 query: { i: 5624.0 } ndeleted:1 keyUpdates:0 numYields: 23 locks(micros) w:18334 248ms
Fri Feb 22 11:52:59.809 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:59.839 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:52:59.841 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9979 keyUpdates:0 numYields: 20 locks(micros) w:408267 407ms
Fri Feb 22 11:52:59.850 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:00.112 [conn106] remove test.jstests_remove9 query: { i: 668.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9674 256ms
Fri Feb 22 11:53:00.392 [conn106] remove test.jstests_remove9 query: { i: 5428.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:20595 279ms
Fri Feb 22 11:53:00.698 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:00.787 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 20 locks(micros) w:399627 404ms
Fri Feb 22 11:53:00.793 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:00.971 [conn106] remove test.jstests_remove9 query: { i: 1402.0 } ndeleted:1 keyUpdates:0 numYields: 17 locks(micros) w:7911 176ms
Fri Feb 22 11:53:01.108 [conn106] remove test.jstests_remove9 query: { i: 6073.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:9516 136ms
Fri Feb 22 11:53:01.322 [conn106] remove test.jstests_remove9 query: { i: 7899.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:17176 213ms
Fri Feb 22 11:53:01.692 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:01.743 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 23 locks(micros) w:407447 431ms
Fri Feb 22 11:53:01.749 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:02.010 [conn106] remove test.jstests_remove9 query: { i: 588.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9131 256ms
Fri Feb 22 11:53:02.285 [conn106] remove test.jstests_remove9 query: { i: 149.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:16790 274ms
Fri Feb 22 11:53:02.597 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:02.697 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:02.704 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 22 locks(micros) w:424261 429ms
Fri Feb 22 11:53:02.969 [conn106] remove test.jstests_remove9 query: { i: 5581.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9408 256ms
Fri Feb 22 11:53:03.254 [conn106] remove test.jstests_remove9 query: { i: 1364.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:21202 284ms
Fri Feb 22 11:53:03.571 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:03.674 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 22 locks(micros) w:398934 427ms
Fri Feb 22 11:53:03.676 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:03.813 [conn106] remove test.jstests_remove9 query: { i: 4665.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:5693 105ms
Fri Feb 22 11:53:04.024 [conn106] remove test.jstests_remove9 query: { i: 9944.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:9517 210ms
Fri Feb 22 11:53:04.239 [conn106] remove test.jstests_remove9 query: { i: 9015.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:18608 213ms
Fri Feb 22 11:53:04.275 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:04.639 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:04.658 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 21 locks(micros) w:382354 409ms
Fri Feb 22 11:53:04.668 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:04.950 [conn106] remove test.jstests_remove9 query: { i: 2563.0 } ndeleted:1 keyUpdates:0 numYields: 17 locks(micros) w:10238 182ms
Fri Feb 22 11:53:05.229 [conn106] remove test.jstests_remove9 query: { i: 8906.0 } ndeleted:0 keyUpdates:0 numYields: 26 locks(micros) w:20747 278ms
Fri Feb 22 11:53:05.530 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:05.625 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:05.632 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:05.633 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9988 keyUpdates:0 numYields: 21 locks(micros) w:407299 416ms
Fri Feb 22 11:53:05.640 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:05.822 [conn106] remove test.jstests_remove9 query: { i: 8173.0 } ndeleted:0 keyUpdates:0 numYields: 17 locks(micros) w:7537 175ms
Fri Feb 22 11:53:06.136 [conn106] remove test.jstests_remove9 query: { i: 8959.0 } ndeleted:0 keyUpdates:0 numYields: 30 locks(micros) w:15508 313ms
Fri Feb 22 11:53:06.443 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:06.550 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:06.567 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 20 locks(micros) w:371117 387ms
Fri Feb 22 11:53:06.569 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:06.578 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:06.835 [conn106] remove test.jstests_remove9 query: { i: 1228.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9026 255ms
Fri Feb 22 11:53:07.122 [conn106] remove test.jstests_remove9 query: { i: 6011.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:16898 286ms
Fri Feb 22 11:53:07.467 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:07.528 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead.
ns:test.jstests_remove9 Fri Feb 22 11:53:07.539 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 23 locks(micros) w:412557 433ms Fri Feb 22 11:53:07.542 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:07.716 [conn106] remove test.jstests_remove9 query: { i: 1595.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:7413 165ms Fri Feb 22 11:53:07.822 [conn106] remove test.jstests_remove9 query: { i: 5852.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6236 105ms Fri Feb 22 11:53:08.079 [conn106] remove test.jstests_remove9 query: { i: 897.0 } ndeleted:1 keyUpdates:0 numYields: 24 locks(micros) w:22888 256ms Fri Feb 22 11:53:08.295 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:08.494 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:08.508 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9977 keyUpdates:0 numYields: 23 locks(micros) w:399905 437ms Fri Feb 22 11:53:08.519 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:08.776 [conn106] remove test.jstests_remove9 query: { i: 8411.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9495 256ms Fri Feb 22 11:53:09.042 [conn106] remove test.jstests_remove9 query: { i: 4682.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:18732 265ms Fri Feb 22 11:53:09.419 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:09.452 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9980 keyUpdates:0 numYields: 22 locks(micros) w:392895 418ms Fri Feb 22 11:53:09.460 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:09.622 [conn106] remove test.jstests_remove9 query: { i: 4991.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6471 105ms Fri Feb 22 11:53:09.834 [conn106] remove test.jstests_remove9 query: { i: 9069.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:9194 211ms Fri Feb 22 11:53:10.007 [conn106] remove test.jstests_remove9 query: { i: 7436.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:14998 172ms Fri Feb 22 11:53:10.335 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:10.419 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 22 locks(micros) w:391312 421ms Fri Feb 22 11:53:10.423 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:10.688 [conn106] remove test.jstests_remove9 query: { i: 2908.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9646 259ms Fri Feb 22 11:53:10.976 [conn106] remove test.jstests_remove9 query: { i: 5495.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:18121 287ms Fri Feb 22 11:53:11.286 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:11.382 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:11.389 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 22 locks(micros) w:408310 426ms Fri Feb 22 11:53:11.393 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:11.519 [conn106] remove test.jstests_remove9 query: { i: 186.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:9552 117ms Fri Feb 22 11:53:11.688 [conn106] remove test.jstests_remove9 query: { i: 2526.0 } ndeleted:1 keyUpdates:0 numYields: 16 locks(micros) w:13766 169ms Fri Feb 22 11:53:11.945 [conn106] remove test.jstests_remove9 query: { i: 3688.0 } ndeleted:1 keyUpdates:0 numYields: 24 locks(micros) w:17706 256ms Fri Feb 22 11:53:12.139 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:12.269 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 16 locks(micros) w:329396 330ms Fri Feb 22 11:53:12.274 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:12.528 [conn106] remove test.jstests_remove9 query: { i: 7104.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:5910 253ms Fri Feb 22 11:53:12.746 [conn106] remove test.jstests_remove9 query: { i: 7391.0 } ndeleted:0 keyUpdates:0 numYields: 21 locks(micros) w:12000 218ms Fri Feb 22 11:53:13.145 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:13.180 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:13.185 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9990 keyUpdates:0 numYields: 21 locks(micros) w:424294 427ms Fri Feb 22 11:53:13.317 [conn106] remove test.jstests_remove9 query: { i: 5929.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:5513 125ms Fri Feb 22 11:53:13.424 [conn106] remove test.jstests_remove9 query: { i: 7746.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:7472 106ms Fri Feb 22 11:53:13.532 [conn106] remove test.jstests_remove9 query: { i: 9671.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:9078 107ms Fri Feb 22 11:53:13.736 [conn106] remove test.jstests_remove9 query: { i: 6962.0 } ndeleted:1 keyUpdates:0 numYields: 19 locks(micros) w:16887 203ms Fri Feb 22 11:53:14.115 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:14.151 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:14.159 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 23 locks(micros) w:409987 433ms Fri Feb 22 11:53:14.164 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:14.175 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:14.432 [conn106] remove test.jstests_remove9 query: { i: 5640.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9365 256ms Fri Feb 22 11:53:14.710 [conn106] remove test.jstests_remove9 query: { i: 9731.0 } ndeleted:0 keyUpdates:0 numYields: 26 locks(micros) w:16851 276ms Fri Feb 22 11:53:15.070 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:15.100 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9988 keyUpdates:0 numYields: 21 locks(micros) w:388191 411ms Fri Feb 22 11:53:15.107 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:15.430 [conn106] remove test.jstests_remove9 query: { i: 8481.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:9517 211ms Fri Feb 22 11:53:15.668 [conn106] remove test.jstests_remove9 query: { i: 1106.0 } ndeleted:1 keyUpdates:0 numYields: 22 locks(micros) w:20038 237ms Fri Feb 22 11:53:15.945 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:16.059 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 20 locks(micros) w:385736 400ms Fri Feb 22 11:53:16.060 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:16.072 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:16.330 [conn106] remove test.jstests_remove9 query: { i: 1637.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9253 256ms Fri Feb 22 11:53:16.620 [conn106] remove test.jstests_remove9 query: { i: 6309.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:21943 289ms Fri Feb 22 11:53:16.983 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:17.015 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 20 locks(micros) w:392332 403ms Fri Feb 22 11:53:17.022 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:17.229 [conn106] remove test.jstests_remove9 query: { i: 5227.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:7615 105ms Fri Feb 22 11:53:17.460 [conn106] remove test.jstests_remove9 query: { i: 6730.0 } ndeleted:0 keyUpdates:0 numYields: 22 locks(micros) w:16242 230ms Fri Feb 22 11:53:17.580 [conn106] remove test.jstests_remove9 query: { i: 390.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:25045 118ms Fri Feb 22 11:53:17.845 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:17.964 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:17.978 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 23 locks(micros) w:403738 431ms Fri Feb 22 11:53:17.986 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:18.248 [conn106] remove test.jstests_remove9 query: { i: 5512.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9257 256ms Fri Feb 22 11:53:18.527 [conn106] remove test.jstests_remove9 query: { i: 3856.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:18575 278ms Fri Feb 22 11:53:18.824 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:18.901 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9990 keyUpdates:0 numYields: 19 locks(micros) w:384391 388ms Fri Feb 22 11:53:18.904 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:19.119 [conn106] remove test.jstests_remove9 query: { i: 5915.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:7902 205ms Fri Feb 22 11:53:19.348 [conn106] remove test.jstests_remove9 query: { i: 3397.0 } ndeleted:1 keyUpdates:0 numYields: 22 locks(micros) w:10471 228ms Fri Feb 22 11:53:19.409 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:19.630 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:19.703 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:19.713 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9977 keyUpdates:0 numYields: 19 locks(micros) w:349896 371ms Fri Feb 22 11:53:19.854 [conn106] remove test.jstests_remove9 query: { i: 2489.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:6184 134ms Fri Feb 22 11:53:19.961 [conn106] remove test.jstests_remove9 query: { i: 5764.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6661 106ms Fri Feb 22 11:53:20.278 [conn106] remove test.jstests_remove9 query: { i: 4563.0 } ndeleted:1 keyUpdates:0 numYields: 30 locks(micros) w:16836 316ms Fri Feb 22 11:53:20.593 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:20.683 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:20.692 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 21 locks(micros) w:393227 411ms Fri Feb 22 11:53:20.806 [conn106] remove test.jstests_remove9 query: { i: 3764.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:4843 104ms Fri Feb 22 11:53:20.912 [conn106] remove test.jstests_remove9 query: { i: 1783.0 } ndeleted:1 keyUpdates:0 numYields: 10 locks(micros) w:7687 106ms Fri Feb 22 11:53:21.031 [conn106] remove test.jstests_remove9 query: { i: 8757.0 } ndeleted:0 keyUpdates:0 numYields: 11 locks(micros) w:12075 117ms Fri Feb 22 11:53:21.249 [conn106] remove test.jstests_remove9 query: { i: 4748.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:18372 217ms Fri Feb 22 11:53:21.567 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:21.676 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:21.686 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 24 locks(micros) w:399576 447ms Fri Feb 22 11:53:21.697 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:21.907 [conn106] remove test.jstests_remove9 query: { i: 1764.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:9442 205ms Fri Feb 22 11:53:22.106 [conn106] remove test.jstests_remove9 query: { i: 3152.0 } ndeleted:1 keyUpdates:0 numYields: 19 locks(micros) w:14749 199ms Fri Feb 22 11:53:22.243 [conn106] remove test.jstests_remove9 query: { i: 7919.0 } ndeleted:1 keyUpdates:0 numYields: 12 locks(micros) w:14677 136ms Fri Feb 22 11:53:22.573 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:22.639 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:22.663 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9978 keyUpdates:0 numYields: 23 locks(micros) w:395368 433ms Fri Feb 22 11:53:22.671 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:22.869 [conn106] remove test.jstests_remove9 query: { i: 5242.0 } ndeleted:0 keyUpdates:0 numYields: 19 locks(micros) w:8909 195ms Fri Feb 22 11:53:23.017 [conn106] remove test.jstests_remove9 query: { i: 9992.0 } ndeleted:0 keyUpdates:0 numYields: 14 locks(micros) w:10973 148ms Fri Feb 22 11:53:23.200 [conn106] remove test.jstests_remove9 query: { i: 1471.0 } ndeleted:1 keyUpdates:0 numYields: 17 locks(micros) w:15827 182ms Fri Feb 22 11:53:23.517 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:23.559 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:23.564 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 18 locks(micros) w:389853 379ms Fri Feb 22 11:53:23.567 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:23.577 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:23.834 [conn106] remove test.jstests_remove9 query: { i: 6737.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8784 255ms Fri Feb 22 11:53:23.963 [conn106] remove test.jstests_remove9 query: { i: 6026.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:11830 129ms Fri Feb 22 11:53:24.118 [conn106] remove test.jstests_remove9 query: { i: 7141.0 } ndeleted:1 keyUpdates:0 numYields: 14 locks(micros) w:16210 154ms Fri Feb 22 11:53:24.484 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:24.526 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9989 keyUpdates:0 numYields: 21 locks(micros) w:408322 418ms Fri Feb 22 11:53:24.789 [conn106] remove test.jstests_remove9 query: { i: 5487.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9079 255ms Fri Feb 22 11:53:25.067 [conn106] remove test.jstests_remove9 query: { i: 7910.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:21487 277ms Fri Feb 22 11:53:25.208 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:25.469 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:25.484 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 22 locks(micros) w:392131 423ms Fri Feb 22 11:53:25.496 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:25.752 [conn106] remove test.jstests_remove9 query: { i: 9967.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9631 256ms Fri Feb 22 11:53:26.031 [conn106] remove test.jstests_remove9 query: { i: 4159.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:18139 277ms Fri Feb 22 11:53:26.388 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:26.441 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 22 locks(micros) w:394675 422ms Fri Feb 22 11:53:26.623 [conn106] remove test.jstests_remove9 query: { i: 8127.0 } ndeleted:0 keyUpdates:0 numYields: 17 locks(micros) w:9615 175ms Fri Feb 22 11:53:26.952 [conn106] remove test.jstests_remove9 query: { i: 324.0 } ndeleted:1 keyUpdates:0 numYields: 32 locks(micros) w:15316 329ms Fri Feb 22 11:53:27.065 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:27.353 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:27.379 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 20 locks(micros) w:411919 409ms Fri Feb 22 11:53:27.440 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:27.545 [conn106] remove test.jstests_remove9 query: { i: 9965.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6665 105ms Fri Feb 22 11:53:27.691 [conn106] remove test.jstests_remove9 query: { i: 6323.0 } ndeleted:0 keyUpdates:0 numYields: 14 locks(micros) w:9042 145ms Fri Feb 22 11:53:27.943 [conn106] remove test.jstests_remove9 query: { i: 9015.0 } ndeleted:0 keyUpdates:0 numYields: 23 locks(micros) w:20842 251ms Fri Feb 22 11:53:28.336 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:28.371 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 23 locks(micros) w:425995 442ms Fri Feb 22 11:53:28.380 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:28.606 [conn106] remove test.jstests_remove9 query: { i: 545.0 } ndeleted:1 keyUpdates:0 numYields: 22 locks(micros) w:8838 225ms Fri Feb 22 11:53:28.818 [conn106] remove test.jstests_remove9 query: { i: 236.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:14743 211ms Fri Feb 22 11:53:28.948 [conn106] remove test.jstests_remove9 query: { i: 1748.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:19661 129ms Fri Feb 22 11:53:29.324 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:29.363 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:29.367 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 23 locks(micros) w:428148 447ms Fri Feb 22 11:53:29.370 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:29.381 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:29.557 [conn106] remove test.jstests_remove9 query: { i: 6893.0 } ndeleted:0 keyUpdates:0 numYields: 17 locks(micros) w:7716 175ms Fri Feb 22 11:53:29.767 [conn106] remove test.jstests_remove9 query: { i: 4039.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:12539 209ms Fri Feb 22 11:53:29.891 [conn106] remove test.jstests_remove9 query: { i: 6003.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:14732 124ms Fri Feb 22 11:53:30.235 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:30.263 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9988 keyUpdates:0 numYields: 19 locks(micros) w:371312 380ms Fri Feb 22 11:53:30.526 [conn106] remove test.jstests_remove9 query: { i: 6607.0 } ndeleted:0 keyUpdates:0 numYields: 23 locks(micros) w:7712 235ms Fri Feb 22 11:53:30.742 [conn106] remove test.jstests_remove9 query: { i: 857.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:17359 215ms Fri Feb 22 11:53:30.892 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:31.123 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:31.159 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9979 keyUpdates:0 numYields: 23 locks(micros) w:395700 431ms Fri Feb 22 11:53:31.160 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:31.170 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:31.429 [conn106] remove test.jstests_remove9 query: { i: 2969.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:10067 256ms Fri Feb 22 11:53:31.717 [conn106] remove test.jstests_remove9 query: { i: 296.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:19828 287ms Fri Feb 22 11:53:32.063 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:32.116 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:32.122 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 21 locks(micros) w:400614 413ms Fri Feb 22 11:53:32.123 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:32.133 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:32.393 [conn106] remove test.jstests_remove9 query: { i: 2010.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:10657 256ms Fri Feb 22 11:53:32.671 [conn106] remove test.jstests_remove9 query: { i: 7384.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:19339 278ms Fri Feb 22 11:53:33.029 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:33.081 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 21 locks(micros) w:394121 420ms Fri Feb 22 11:53:33.085 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9 Fri Feb 22 11:53:33.250 [conn106] remove test.jstests_remove9 query: { i: 9066.0 } ndeleted:0 keyUpdates:0 numYields: 15 locks(micros) w:6908 155ms Fri Feb 22 11:53:33.386 [conn106] remove test.jstests_remove9 query: { i: 4901.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:8913 135ms Fri Feb 22 11:53:33.632 [conn106] remove test.jstests_remove9 query: { i: 61.0 } ndeleted:1 keyUpdates:0 numYields: 23 locks(micros) w:19754 245ms Fri Feb 22 11:53:33.766 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:34.028 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:34.051 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:34.063 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 24 locks(micros) w:398828 436ms Fri Feb 22 11:53:34.195 [conn106] remove test.jstests_remove9 query: { i: 237.0 } ndeleted:1 keyUpdates:0 numYields: 12 locks(micros) w:5616 124ms Fri Feb 22 11:53:34.311 [conn106] remove test.jstests_remove9 query: { i: 1153.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:8707 115ms Fri Feb 22 11:53:34.594 [conn106] remove test.jstests_remove9 query: { i: 220.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:18006 282ms Fri Feb 22 11:53:34.699 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9 Fri Feb 22 11:53:34.989 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. 
ns:test.jstests_remove9
Fri Feb 22 11:53:35.001 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 21 locks(micros) w:395425 413ms
Fri Feb 22 11:53:35.268 [conn106] remove test.jstests_remove9 query: { i: 2234.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9632 256ms
Fri Feb 22 11:53:35.574 [conn106] remove test.jstests_remove9 query: { i: 1640.0 } ndeleted:1 keyUpdates:0 numYields: 29 locks(micros) w:20699 305ms
Fri Feb 22 11:53:35.641 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:35.956 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.001 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.019 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.032 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.034 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 26 locks(micros) w:420342 473ms
Fri Feb 22 11:53:36.043 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.298 [conn106] remove test.jstests_remove9 query: { i: 8259.0 } ndeleted:0 keyUpdates:0 numYields: 24 locks(micros) w:8951 251ms
Fri Feb 22 11:53:36.512 [conn106] remove test.jstests_remove9 query: { i: 1365.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:12168 213ms
Fri Feb 22 11:53:36.889 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.961 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:36.967 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 21 locks(micros) w:406671 417ms
Fri Feb 22 11:53:36.974 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:37.247 [conn106] remove test.jstests_remove9 query: { i: 6806.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8988 205ms
Fri Feb 22 11:53:37.512 [conn106] remove test.jstests_remove9 query: { i: 8149.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:17792 264ms
Fri Feb 22 11:53:37.830 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:37.907 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:37.918 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:37.924 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 23 locks(micros) w:399254 424ms
Fri Feb 22 11:53:37.926 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:37.935 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:38.196 [conn106] remove test.jstests_remove9 query: { i: 12.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9523 257ms
Fri Feb 22 11:53:38.304 [conn106] remove test.jstests_remove9 query: { i: 5140.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:11014 108ms
Fri Feb 22 11:53:38.483 [conn106] remove test.jstests_remove9 query: { i: 8537.0 } ndeleted:0 keyUpdates:0 numYields: 16 locks(micros) w:15270 178ms
Fri Feb 22 11:53:38.779 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:38.792 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9982 keyUpdates:0 numYields: 16 locks(micros) w:309637 322ms
Fri Feb 22 11:53:38.944 [conn106] remove test.jstests_remove9 query: { i: 531.0 } ndeleted:1 keyUpdates:0 numYields: 14 locks(micros) w:5011 145ms
Fri Feb 22 11:53:39.078 [conn106] remove test.jstests_remove9 query: { i: 4902.0 } ndeleted:0 keyUpdates:0 numYields: 13 locks(micros) w:6348 134ms
Fri Feb 22 11:53:39.296 [conn106] remove test.jstests_remove9 query: { i: 4740.0 } ndeleted:1 keyUpdates:0 numYields: 20 locks(micros) w:14716 217ms
Fri Feb 22 11:53:39.510 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:39.682 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:39.712 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:39.722 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9987 keyUpdates:0 numYields: 23 locks(micros) w:414096 440ms
Fri Feb 22 11:53:39.723 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:39.993 [conn106] remove test.jstests_remove9 query: { i: 8460.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8969 259ms
Fri Feb 22 11:53:40.218 [conn106] remove test.jstests_remove9 query: { i: 4407.0 } ndeleted:1 keyUpdates:0 numYields: 21 locks(micros) w:20870 224ms
Fri Feb 22 11:53:40.219 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:40.662 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:40.689 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:40.690 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9984 keyUpdates:0 numYields: 21 locks(micros) w:406814 416ms
Fri Feb 22 11:53:40.701 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:40.959 [conn106] remove test.jstests_remove9 query: { i: 6384.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:9346 257ms
Fri Feb 22 11:53:41.251 [conn106] remove test.jstests_remove9 query: { i: 1731.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:21641 291ms
Fri Feb 22 11:53:41.615 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:41.649 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 20 locks(micros) w:408105 410ms
Fri Feb 22 11:53:41.657 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:41.919 [conn106] remove test.jstests_remove9 query: { i: 2202.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9353 259ms
Fri Feb 22 11:53:42.164 [conn106] remove test.jstests_remove9 query: { i: 9015.0 } ndeleted:0 keyUpdates:0 numYields: 23 locks(micros) w:18262 245ms
Fri Feb 22 11:53:42.470 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:42.527 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 20 locks(micros) w:357242 376ms
Fri Feb 22 11:53:42.751 [conn106] remove test.jstests_remove9 query: { i: 8552.0 } ndeleted:0 keyUpdates:0 numYields: 21 locks(micros) w:9404 216ms
Fri Feb 22 11:53:42.993 [conn106] remove test.jstests_remove9 query: { i: 2902.0 } ndeleted:1 keyUpdates:0 numYields: 23 locks(micros) w:17881 241ms
Fri Feb 22 11:53:43.102 [conn106] remove test.jstests_remove9 query: { i: 8125.0 } ndeleted:1 keyUpdates:0 numYields: 9 locks(micros) w:24921 108ms
Fri Feb 22 11:53:43.105 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:43.455 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:43.489 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:43.503 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:43.506 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9986 keyUpdates:0 numYields: 24 locks(micros) w:427193 443ms
Fri Feb 22 11:53:43.513 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:43.525 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:43.786 [conn106] remove test.jstests_remove9 query: { i: 904.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:10449 257ms
Fri Feb 22 11:53:44.094 [conn106] remove test.jstests_remove9 query: { i: 1051.0 } ndeleted:1 keyUpdates:0 numYields: 29 locks(micros) w:22976 308ms
Fri Feb 22 11:53:44.454 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:44.510 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:44.521 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9977 keyUpdates:0 numYields: 22 locks(micros) w:417588 433ms
Fri Feb 22 11:53:44.651 [conn106] remove test.jstests_remove9 query: { i: 3386.0 } ndeleted:0 keyUpdates:0 numYields: 12 locks(micros) w:5511 123ms
Fri Feb 22 11:53:44.758 [conn106] remove test.jstests_remove9 query: { i: 3629.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:7192 106ms
Fri Feb 22 11:53:44.971 [conn106] remove test.jstests_remove9 query: { i: 7927.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:15745 212ms
Fri Feb 22 11:53:45.094 [conn106] remove test.jstests_remove9 query: { i: 582.0 } ndeleted:1 keyUpdates:0 numYields: 11 locks(micros) w:16580 122ms
Fri Feb 22 11:53:45.405 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:45.512 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:45.518 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:45.520 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9985 keyUpdates:0 numYields: 24 locks(micros) w:401182 436ms
Fri Feb 22 11:53:45.526 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:45.794 [conn106] remove test.jstests_remove9 query: { i: 3782.0 } ndeleted:0 keyUpdates:0 numYields: 25 locks(micros) w:8964 259ms
Fri Feb 22 11:53:46.053 [conn106] remove test.jstests_remove9 query: { i: 1791.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:14534 259ms
Fri Feb 22 11:53:46.310 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:46.474 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:46.501 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:46.512 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 24 locks(micros) w:394819 442ms
Fri Feb 22 11:53:46.765 [conn106] remove test.jstests_remove9 query: { i: 9747.0 } ndeleted:0 keyUpdates:0 numYields: 24 locks(micros) w:8761 245ms
Fri Feb 22 11:53:47.054 [conn106] remove test.jstests_remove9 query: { i: 1811.0 } ndeleted:1 keyUpdates:0 numYields: 27 locks(micros) w:19428 288ms
Fri Feb 22 11:53:47.360 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:47.477 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:47.503 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:47.518 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:47.522 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9981 keyUpdates:0 numYields: 27 locks(micros) w:411193 479ms
Fri Feb 22 11:53:47.530 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:47.715 [conn106] remove test.jstests_remove9 query: { i: 8068.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6679 106ms
Fri Feb 22 11:53:47.926 [conn106] remove test.jstests_remove9 query: { i: 5000.0 } ndeleted:0 keyUpdates:0 numYields: 20 locks(micros) w:8773 211ms
Fri Feb 22 11:53:48.084 [conn106] remove test.jstests_remove9 query: { i: 4497.0 } ndeleted:1 keyUpdates:0 numYields: 14 locks(micros) w:14702 156ms
Fri Feb 22 11:53:48.350 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:48.487 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:48.494 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:48.496 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9991 keyUpdates:0 numYields: 22 locks(micros) w:404421 424ms
Fri Feb 22 11:53:48.504 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:48.663 [conn106] remove test.jstests_remove9 query: { i: 3839.0 } ndeleted:0 keyUpdates:0 numYields: 15 locks(micros) w:6761 154ms
Fri Feb 22 11:53:48.770 [conn106] remove test.jstests_remove9 query: { i: 6021.0 } ndeleted:0 keyUpdates:0 numYields: 10 locks(micros) w:6272 106ms
Fri Feb 22 11:53:49.041 [conn106] remove test.jstests_remove9 query: { i: 95.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:19518 271ms
Fri Feb 22 11:53:49.357 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:49.444 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9983 keyUpdates:0 numYields: 21 locks(micros) w:396830 416ms
Fri Feb 22 11:53:49.447 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:49.713 [conn106] remove test.jstests_remove9 query: { i: 1548.0 } ndeleted:1 keyUpdates:0 numYields: 25 locks(micros) w:9554 257ms
Fri Feb 22 11:53:49.993 [conn106] remove test.jstests_remove9 query: { i: 517.0 } ndeleted:1 keyUpdates:0 numYields: 26 locks(micros) w:17012 279ms
Fri Feb 22 11:53:50.203 [conn106] info DFM::findAll(): extent 2:2435000 was empty, skipping ahead. ns:test.jstests_remove9
Fri Feb 22 11:53:50.354 [initandlisten] connection accepted from 127.0.0.1:39314 #108 (3 connections now open)
Fri Feb 22 11:53:50.355 [conn108] end connection 127.0.0.1:39314 (2 connections now open)
Fri Feb 22 11:53:50.371 [conn107] remove test.jstests_remove9 query: { i: { $gte: 0.0 } } ndeleted:9988 keyUpdates:0 numYields: 20 locks(micros) w:381373 394ms
Fri Feb 22 11:53:50.442 [conn107] end connection 127.0.0.1:34101 (1 connection now open)
sh1665| MongoDB shell version: 2.4.0-rc1-pre-
sh1665| connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:53:51.344 shell: stopped mongo program on pid 1665
Fri Feb 22 11:53:51.351 [conn106] end connection 127.0.0.1:54584 (0 connections now open)
1.6578 minutes
Fri Feb 22 11:53:51.373 [initandlisten] connection accepted from 127.0.0.1:40992 #109 (1 connection now open)
Fri Feb 22 11:53:51.374 [conn109] end connection 127.0.0.1:40992 (0 connections now open)
*******************************************
Test : replReads.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replReads.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replReads.js";TestData.testFile = "replReads.js";TestData.testName = "replReads";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:53:51 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:53:51.501 [initandlisten] connection accepted from 127.0.0.1:63946 #110 (1 connection now open) null Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31100, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 0, "set" : "test-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/test-rs0-0' Fri Feb 22 11:53:51.517 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0 --setParameter enableTestCommands=1 m31100| note: noprealloc may hurt performance in many applications m31100| Fri Feb 22 11:53:51.588 [initandlisten] MongoDB starting : pid=1889 port=31100 dbpath=/data/db/test-rs0-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31100| Fri Feb 22 11:53:51.589 [initandlisten] m31100| Fri Feb 22 11:53:51.589 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31100| Fri Feb 22 11:53:51.589 [initandlisten] ** uses to detect impending page faults. m31100| Fri Feb 22 11:53:51.589 [initandlisten] ** This may result in slower performance for certain use cases m31100| Fri Feb 22 11:53:51.589 [initandlisten] m31100| Fri Feb 22 11:53:51.589 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31100| Fri Feb 22 11:53:51.589 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31100| Fri Feb 22 11:53:51.589 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31100| Fri Feb 22 11:53:51.589 [initandlisten] allocator: system m31100| Fri Feb 22 11:53:51.589 [initandlisten] options: { dbpath: "/data/db/test-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "test-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31100| Fri Feb 22 11:53:51.589 [initandlisten] journal dir=/data/db/test-rs0-0/journal m31100| Fri Feb 22 11:53:51.589 [initandlisten] recover : no journal files present, no recovery needed m31100| Fri Feb 22 11:53:51.601 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.ns, filling with zeroes... 
m31100| Fri Feb 22 11:53:51.601 [FileAllocator] creating directory /data/db/test-rs0-0/_tmp m31100| Fri Feb 22 11:53:51.602 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 11:53:51.602 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.0, filling with zeroes... m31100| Fri Feb 22 11:53:51.602 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.0, size: 16MB, took 0 secs m31100| Fri Feb 22 11:53:51.605 [initandlisten] waiting for connections on port 31100 m31100| Fri Feb 22 11:53:51.605 [websvr] admin web console waiting for connections on port 32100 m31100| Fri Feb 22 11:53:51.607 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31100| Fri Feb 22 11:53:51.607 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31100| Fri Feb 22 11:53:51.720 [initandlisten] connection accepted from 127.0.0.1:52856 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 1, "set" : "test-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/test-rs0-1' Fri Feb 22 11:53:51.729 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1 --setParameter enableTestCommands=1 m31101| note: noprealloc may hurt performance in many applications m31101| Fri Feb 22 11:53:51.811 [initandlisten] MongoDB starting : pid=1892 port=31101 dbpath=/data/db/test-rs0-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31101| Fri Feb 22 11:53:51.811 [initandlisten] m31101| Fri Feb 22 11:53:51.811 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31101| Fri Feb 22 11:53:51.811 [initandlisten] ** uses to detect impending page faults. m31101| Fri Feb 22 11:53:51.812 [initandlisten] ** This may result in slower performance for certain use cases m31101| Fri Feb 22 11:53:51.812 [initandlisten] m31101| Fri Feb 22 11:53:51.812 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31101| Fri Feb 22 11:53:51.812 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31101| Fri Feb 22 11:53:51.812 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31101| Fri Feb 22 11:53:51.812 [initandlisten] allocator: system m31101| Fri Feb 22 11:53:51.812 [initandlisten] options: { dbpath: "/data/db/test-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "test-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31101| Fri Feb 22 11:53:51.812 [initandlisten] journal dir=/data/db/test-rs0-1/journal m31101| Fri Feb 22 11:53:51.812 [initandlisten] recover : no journal files present, no recovery needed m31101| Fri Feb 22 11:53:51.826 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.ns, filling with zeroes... 
m31101| Fri Feb 22 11:53:51.826 [FileAllocator] creating directory /data/db/test-rs0-1/_tmp m31101| Fri Feb 22 11:53:51.826 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 11:53:51.826 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.0, filling with zeroes... m31101| Fri Feb 22 11:53:51.826 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.0, size: 16MB, took 0 secs m31101| Fri Feb 22 11:53:51.829 [initandlisten] waiting for connections on port 31101 m31101| Fri Feb 22 11:53:51.829 [websvr] admin web console waiting for connections on port 32101 m31101| Fri Feb 22 11:53:51.831 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31101| Fri Feb 22 11:53:51.831 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31101| Fri Feb 22 11:53:51.931 [initandlisten] connection accepted from 127.0.0.1:61668 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101 ] ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31102, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 2, "set" : "test-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/test-rs0-2' Fri Feb 22 11:53:51.936 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-2 --setParameter enableTestCommands=1 m31102| note: noprealloc may hurt performance in many applications m31102| Fri Feb 22 11:53:52.023 [initandlisten] MongoDB starting : pid=1893 port=31102 dbpath=/data/db/test-rs0-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31102| Fri Feb 22 11:53:52.024 [initandlisten] m31102| Fri Feb 22 11:53:52.024 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31102| Fri Feb 22 11:53:52.024 [initandlisten] ** uses to detect impending page faults. m31102| Fri Feb 22 11:53:52.024 [initandlisten] ** This may result in slower performance for certain use cases m31102| Fri Feb 22 11:53:52.024 [initandlisten] m31102| Fri Feb 22 11:53:52.024 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31102| Fri Feb 22 11:53:52.024 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31102| Fri Feb 22 11:53:52.024 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31102| Fri Feb 22 11:53:52.024 [initandlisten] allocator: system m31102| Fri Feb 22 11:53:52.024 [initandlisten] options: { dbpath: "/data/db/test-rs0-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "test-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31102| Fri Feb 22 11:53:52.024 [initandlisten] journal dir=/data/db/test-rs0-2/journal m31102| Fri Feb 22 11:53:52.024 [initandlisten] recover : no journal files present, no recovery needed m31102| Fri Feb 22 11:53:52.039 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.ns, filling with zeroes... 
m31102| Fri Feb 22 11:53:52.039 [FileAllocator] creating directory /data/db/test-rs0-2/_tmp m31102| Fri Feb 22 11:53:52.040 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.ns, size: 16MB, took 0 secs m31102| Fri Feb 22 11:53:52.040 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.0, filling with zeroes... m31102| Fri Feb 22 11:53:52.040 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.0, size: 16MB, took 0 secs m31102| Fri Feb 22 11:53:52.043 [initandlisten] waiting for connections on port 31102 m31102| Fri Feb 22 11:53:52.043 [websvr] admin web console waiting for connections on port 32102 m31102| Fri Feb 22 11:53:52.046 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31102| Fri Feb 22 11:53:52.046 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31102| Fri Feb 22 11:53:52.137 [initandlisten] connection accepted from 127.0.0.1:50934 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101, connection to bs-smartos-x86-64-1.10gen.cc:31102 ] { "replSetInitiate" : { "_id" : "test-rs0", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101" }, { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31102" } ] } } m31100| Fri Feb 22 11:53:52.140 [conn1] replSet replSetInitiate admin command received from client m31100| Fri Feb 22 11:53:52.143 [conn1] replSet replSetInitiate config object parses ok, 3 members specified m31100| Fri Feb 22 11:53:52.144 [initandlisten] connection accepted from 165.225.128.186:33543 #2 (2 connections now open) m31101| Fri Feb 22 11:53:52.145 [initandlisten] connection accepted from 165.225.128.186:34172 #2 (2 connections now open) m31102| Fri Feb 22 11:53:52.150 [initandlisten] connection accepted from 165.225.128.186:64148 #2 
(2 connections now open) m31100| Fri Feb 22 11:53:52.151 [conn1] replSet replSetInitiate all members seem up m31100| Fri Feb 22 11:53:52.151 [conn1] ****** m31100| Fri Feb 22 11:53:52.151 [conn1] creating replication oplog of size: 40MB... m31100| Fri Feb 22 11:53:52.151 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.1, filling with zeroes... m31100| Fri Feb 22 11:53:52.151 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.1, size: 64MB, took 0 secs m31100| Fri Feb 22 11:53:52.161 [conn2] end connection 165.225.128.186:33543 (1 connection now open) m31100| Fri Feb 22 11:53:52.164 [conn1] ****** m31100| Fri Feb 22 11:53:52.164 [conn1] replSet info saving a newer config version to local.system.replset m31100| Fri Feb 22 11:53:52.172 [conn1] replSet saveConfigLocally done m31100| Fri Feb 22 11:53:52.172 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31100| Fri Feb 22 11:54:01.607 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:01.607 [rsStart] replSet STARTUP2 m31100| Fri Feb 22 11:54:01.608 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31101| Fri Feb 22 11:54:01.832 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:01.833 [initandlisten] connection accepted from 165.225.128.186:49670 #3 (2 connections now open) m31101| Fri Feb 22 11:54:01.834 [initandlisten] connection accepted from 165.225.128.186:43071 #3 (3 connections now open) m31101| Fri Feb 22 11:54:01.834 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 11:54:01.834 [rsStart] replSet got config version 1 from a remote, saving locally m31101| Fri Feb 22 11:54:01.834 [rsStart] replSet info saving a newer config version to local.system.replset m31101| Fri Feb 22 11:54:01.840 [rsStart] replSet saveConfigLocally done 
m31101| Fri Feb 22 11:54:01.840 [rsStart] replSet STARTUP2 m31101| Fri Feb 22 11:54:01.841 [rsSync] ****** m31101| Fri Feb 22 11:54:01.841 [rsSync] creating replication oplog of size: 40MB... m31101| Fri Feb 22 11:54:01.841 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.1, filling with zeroes... m31101| Fri Feb 22 11:54:01.841 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.1, size: 64MB, took 0 secs m31101| Fri Feb 22 11:54:01.849 [conn3] end connection 165.225.128.186:43071 (2 connections now open) m31101| Fri Feb 22 11:54:01.852 [rsSync] ****** m31101| Fri Feb 22 11:54:01.852 [rsSync] replSet initial sync pending m31101| Fri Feb 22 11:54:01.852 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31102| Fri Feb 22 11:54:02.046 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31100| Fri Feb 22 11:54:02.608 [rsSync] replSet SECONDARY m31100| Fri Feb 22 11:54:03.608 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31100| Fri Feb 22 11:54:03.608 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31100| Fri Feb 22 11:54:03.608 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 thinks that we are down m31100| Fri Feb 22 11:54:03.608 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31100| Fri Feb 22 11:54:03.608 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31101| Fri Feb 22 11:54:03.834 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31101| Fri Feb 22 11:54:03.834 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31102| Fri Feb 22 11:54:03.835 [initandlisten] connection accepted from 165.225.128.186:40949 #3 (3 
connections now open) m31101| Fri Feb 22 11:54:03.835 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31100| Fri Feb 22 11:54:09.609 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes m31102| Fri Feb 22 11:54:12.046 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:12.047 [initandlisten] connection accepted from 165.225.128.186:60282 #4 (3 connections now open) m31102| Fri Feb 22 11:54:12.048 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 11:54:12.048 [initandlisten] connection accepted from 165.225.128.186:52874 #4 (3 connections now open) m31102| Fri Feb 22 11:54:12.049 [initandlisten] connection accepted from 165.225.128.186:62255 #4 (4 connections now open) m31102| Fri Feb 22 11:54:12.049 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 11:54:12.049 [rsStart] replSet got config version 1 from a remote, saving locally m31102| Fri Feb 22 11:54:12.049 [rsStart] replSet info saving a newer config version to local.system.replset m31102| Fri Feb 22 11:54:12.051 [rsStart] replSet saveConfigLocally done m31102| Fri Feb 22 11:54:12.051 [rsStart] replSet STARTUP2 m31102| Fri Feb 22 11:54:12.052 [rsSync] ****** m31102| Fri Feb 22 11:54:12.052 [rsSync] creating replication oplog of size: 40MB... m31102| Fri Feb 22 11:54:12.052 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.1, filling with zeroes... 
m31102| Fri Feb 22 11:54:12.052 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.1, size: 64MB, took 0 secs m31102| Fri Feb 22 11:54:12.065 [rsSync] ****** m31102| Fri Feb 22 11:54:12.065 [rsSync] replSet initial sync pending m31102| Fri Feb 22 11:54:12.065 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31102| Fri Feb 22 11:54:12.068 [conn4] end connection 165.225.128.186:62255 (3 connections now open) m31100| Fri Feb 22 11:54:13.609 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down m31100| Fri Feb 22 11:54:13.609 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2 m31100| Fri Feb 22 11:54:13.609 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31101| Fri Feb 22 11:54:13.836 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down m31101| Fri Feb 22 11:54:13.836 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2 m31102| Fri Feb 22 11:54:14.049 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31102| Fri Feb 22 11:54:14.049 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31102| Fri Feb 22 11:54:14.049 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31102| Fri Feb 22 11:54:14.049 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31101| Fri Feb 22 11:54:15.610 [conn2] end connection 165.225.128.186:34172 (2 connections now open) m31101| Fri Feb 22 11:54:15.610 [initandlisten] connection accepted from 165.225.128.186:59948 #5 (3 connections now open) m31100| Fri Feb 22 11:54:17.836 [conn3] end connection 165.225.128.186:49670 (2 connections now open) m31100| Fri Feb 22 11:54:17.837 [initandlisten] connection accepted from 
165.225.128.186:56019 #5 (3 connections now open) m31101| Fri Feb 22 11:54:17.852 [rsSync] replSet initial sync pending m31101| Fri Feb 22 11:54:17.852 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:17.853 [initandlisten] connection accepted from 165.225.128.186:41018 #6 (4 connections now open) m31101| Fri Feb 22 11:54:17.861 [rsSync] build index local.me { _id: 1 } m31101| Fri Feb 22 11:54:17.865 [rsSync] build index done. scanned 0 total records. 0.004 secs m31101| Fri Feb 22 11:54:17.867 [rsSync] build index local.replset.minvalid { _id: 1 } m31101| Fri Feb 22 11:54:17.868 [rsSync] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 11:54:17.868 [rsSync] replSet initial sync drop all databases m31101| Fri Feb 22 11:54:17.868 [rsSync] dropAllDatabasesExceptLocal 1 m31101| Fri Feb 22 11:54:17.868 [rsSync] replSet initial sync clone all databases m31101| Fri Feb 22 11:54:17.868 [rsSync] replSet initial sync data copy, starting syncup m31101| Fri Feb 22 11:54:17.868 [rsSync] oplog sync 1 of 3 m31101| Fri Feb 22 11:54:17.869 [rsSync] oplog sync 2 of 3 m31101| Fri Feb 22 11:54:17.869 [rsSync] replSet initial sync building indexes m31101| Fri Feb 22 11:54:17.869 [rsSync] oplog sync 3 of 3 m31101| Fri Feb 22 11:54:17.869 [rsSync] replSet initial sync finishing up m31101| Fri Feb 22 11:54:17.877 [rsSync] replSet set minValid=51275c50:1 m31101| Fri Feb 22 11:54:17.881 [rsSync] replSet RECOVERING m31101| Fri Feb 22 11:54:17.882 [rsSync] replSet initial sync done m31100| Fri Feb 22 11:54:17.882 [conn6] end connection 165.225.128.186:41018 (3 connections now open) m31102| Fri Feb 22 11:54:18.050 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING m31100| Fri Feb 22 11:54:19.610 [rsMgr] replSet info electSelf 0 m31102| Fri Feb 22 11:54:19.610 [conn2] replSet RECOVERING m31102| Fri Feb 22 11:54:19.610 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0) 
 m31101| Fri Feb 22 11:54:19.610 [conn5] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0)
 m31100| Fri Feb 22 11:54:19.611 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING
 m31101| Fri Feb 22 11:54:19.837 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING
 m31101| Fri Feb 22 11:54:19.882 [rsSync] replSet SECONDARY
 m31102| Fri Feb 22 11:54:20.050 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY
 m31100| Fri Feb 22 11:54:20.609 [rsMgr] replSet PRIMARY
 m31100| Fri Feb 22 11:54:21.610 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING
 m31100| Fri Feb 22 11:54:21.611 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY
 m31101| Fri Feb 22 11:54:21.838 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY
 m31101| Fri Feb 22 11:54:21.841 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
 m31100| Fri Feb 22 11:54:21.842 [initandlisten] connection accepted from 165.225.128.186:42898 #7 (4 connections now open)
 m31101| Fri Feb 22 11:54:21.882 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100
 m31100| Fri Feb 22 11:54:21.883 [initandlisten] connection accepted from 165.225.128.186:44422 #8 (5 connections now open)
 m31102| Fri Feb 22 11:54:22.050 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY
 m31100| Fri Feb 22 11:54:22.891 [slaveTracking] build index local.slaves { _id: 1 }
 m31100| Fri Feb 22 11:54:22.894 [slaveTracking] build index done. scanned 0 total records. 0.002 secs
 m31100| Fri Feb 22 11:54:28.051 [conn4] end connection 165.225.128.186:60282 (4 connections now open)
 m31100| Fri Feb 22 11:54:28.051 [initandlisten] connection accepted from 165.225.128.186:56129 #9 (5 connections now open)
 m31102| Fri Feb 22 11:54:28.066 [rsSync] replSet initial sync pending
 m31102| Fri Feb 22 11:54:28.066 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
 m31100| Fri Feb 22 11:54:28.066 [initandlisten] connection accepted from 165.225.128.186:61697 #10 (6 connections now open)
 m31102| Fri Feb 22 11:54:28.071 [rsSync] build index local.me { _id: 1 }
 m31102| Fri Feb 22 11:54:28.074 [rsSync] build index done. scanned 0 total records. 0.002 secs
 m31102| Fri Feb 22 11:54:28.075 [rsSync] build index local.replset.minvalid { _id: 1 }
 m31102| Fri Feb 22 11:54:28.076 [rsSync] build index done. scanned 0 total records. 0 secs
 m31102| Fri Feb 22 11:54:28.076 [rsSync] replSet initial sync drop all databases
 m31102| Fri Feb 22 11:54:28.076 [rsSync] dropAllDatabasesExceptLocal 1
 m31102| Fri Feb 22 11:54:28.076 [rsSync] replSet initial sync clone all databases
 m31102| Fri Feb 22 11:54:28.077 [rsSync] replSet initial sync data copy, starting syncup
 m31102| Fri Feb 22 11:54:28.077 [rsSync] oplog sync 1 of 3
 m31102| Fri Feb 22 11:54:28.077 [rsSync] oplog sync 2 of 3
 m31102| Fri Feb 22 11:54:28.077 [rsSync] replSet initial sync building indexes
 m31102| Fri Feb 22 11:54:28.077 [rsSync] oplog sync 3 of 3
 m31102| Fri Feb 22 11:54:28.077 [rsSync] replSet initial sync finishing up
 m31102| Fri Feb 22 11:54:28.086 [rsSync] replSet set minValid=51275c50:1
 m31102| Fri Feb 22 11:54:28.092 [rsSync] replSet initial sync done
 m31100| Fri Feb 22 11:54:28.093 [conn10] end connection 165.225.128.186:61697 (5 connections now open)
 m31102| Fri Feb 22 11:54:29.052 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
 m31100| Fri Feb 22 11:54:29.053 [initandlisten] connection accepted from 165.225.128.186:38095 #11 (6 connections now open)
 m31102| Fri Feb 22 11:54:29.092 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100
 m31100| Fri Feb 22 11:54:29.093 [initandlisten] connection accepted from 165.225.128.186:38654 #12 (7 connections now open)
 m31102| Fri Feb 22 11:54:30.094 [rsSync] replSet SECONDARY
 m31100| Fri Feb 22 11:54:30.096 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.ns, filling with zeroes...
 m31100| Fri Feb 22 11:54:30.096 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.ns, size: 16MB, took 0 secs
 m31100| Fri Feb 22 11:54:30.096 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.0, filling with zeroes...
 m31100| Fri Feb 22 11:54:30.097 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.0, size: 16MB, took 0 secs
 m31100| Fri Feb 22 11:54:30.100 [conn1] build index admin.foo { _id: 1 }
 m31100| Fri Feb 22 11:54:30.101 [conn1] build index done. scanned 0 total records. 0.001 secs
 m31102| Fri Feb 22 11:54:30.102 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.ns, filling with zeroes...
 m31101| Fri Feb 22 11:54:30.102 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.ns, filling with zeroes...
 m31102| Fri Feb 22 11:54:30.102 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.ns, size: 16MB, took 0 secs
 m31101| Fri Feb 22 11:54:30.102 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.ns, size: 16MB, took 0 secs
 m31102| Fri Feb 22 11:54:30.103 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.0, filling with zeroes...
 m31101| Fri Feb 22 11:54:30.103 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.0, filling with zeroes...
 m31102| Fri Feb 22 11:54:30.103 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.0, size: 16MB, took 0 secs
 m31101| Fri Feb 22 11:54:30.103 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361534070000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361534070000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101
 m31102| Fri Feb 22 11:54:30.106 [repl writer worker 1] build index admin.foo { _id: 1 }
 m31101| Fri Feb 22 11:54:30.106 [repl writer worker 1] build index admin.foo { _id: 1 }
 m31102| Fri Feb 22 11:54:30.107 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
 m31101| Fri Feb 22 11:54:30.108 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31102
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31102, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361534070000, "i" : 1 }
Fri Feb 22 11:54:30.113 starting new replica set monitor for replica set test-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
Fri Feb 22 11:54:30.114 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set test-rs0
 m31100| Fri Feb 22 11:54:30.114 [initandlisten] connection accepted from 165.225.128.186:35938 #13 (8 connections now open)
Fri Feb 22 11:54:30.114 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from test-rs0/
Fri Feb 22 11:54:30.114 trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set test-rs0
Fri Feb 22 11:54:30.115 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set test-rs0
 m31100| Fri Feb 22 11:54:30.115 [initandlisten] connection accepted from 165.225.128.186:60350 #14 (9 connections now open)
Fri Feb 22 11:54:30.115 trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set test-rs0
Fri Feb 22 11:54:30.115 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set test-rs0
Fri Feb 22 11:54:30.115 trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set test-rs0
 m31101| Fri Feb 22 11:54:30.115 [initandlisten] connection accepted from 165.225.128.186:38510 #6 (4 connections now open)
Fri Feb 22 11:54:30.115 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set test-rs0
 m31102| Fri Feb 22 11:54:30.115 [initandlisten] connection accepted from 165.225.128.186:48837 #5 (4 connections now open)
 m31100| Fri Feb 22 11:54:30.116 [initandlisten] connection accepted from 165.225.128.186:60377 #15 (10 connections now open)
 m31100| Fri Feb 22 11:54:30.116 [conn13] end connection 165.225.128.186:35938 (9 connections now open)
Fri Feb 22 11:54:30.116 Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
 m31101| Fri Feb 22 11:54:30.117 [initandlisten] connection accepted from 165.225.128.186:39191 #7 (5 connections now open)
 m31102| Fri Feb 22 11:54:30.118 [initandlisten] connection accepted from 165.225.128.186:45611 #6 (5 connections now open)
Fri Feb 22 11:54:30.118 replica set monitor for replica set test-rs0 started, address is test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
Fri Feb 22 11:54:30.118 [ReplicaSetMonitorWatcher] starting
Resetting db path '/data/db/test-config0'
Fri Feb 22 11:54:30.122 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/test-config0 --configsvr --setParameter enableTestCommands=1
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] MongoDB starting : pid=1947 port=29000 dbpath=/data/db/test-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc
 m29000| Fri Feb 22 11:54:30.203 [initandlisten]
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] ** uses to detect impending page faults.
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] ** This may result in slower performance for certain use cases
 m29000| Fri Feb 22 11:54:30.203 [initandlisten]
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] allocator: system
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] options: { configsvr: true, dbpath: "/data/db/test-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] }
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] journal dir=/data/db/test-config0/journal
 m29000| Fri Feb 22 11:54:30.203 [initandlisten] recover : no journal files present, no recovery needed
 m29000| Fri Feb 22 11:54:30.219 [FileAllocator] allocating new datafile /data/db/test-config0/local.ns, filling with zeroes...
 m29000| Fri Feb 22 11:54:30.220 [FileAllocator] creating directory /data/db/test-config0/_tmp
 m29000| Fri Feb 22 11:54:30.220 [FileAllocator] done allocating datafile /data/db/test-config0/local.ns, size: 16MB, took 0 secs
 m29000| Fri Feb 22 11:54:30.220 [FileAllocator] allocating new datafile /data/db/test-config0/local.0, filling with zeroes...
 m29000| Fri Feb 22 11:54:30.220 [FileAllocator] done allocating datafile /data/db/test-config0/local.0, size: 16MB, took 0 secs
 m29000| Fri Feb 22 11:54:30.223 [initandlisten] ******
 m29000| Fri Feb 22 11:54:30.223 [initandlisten] creating replication oplog of size: 5MB...
 m29000| Fri Feb 22 11:54:30.227 [initandlisten] ******
 m29000| Fri Feb 22 11:54:30.228 [initandlisten] waiting for connections on port 29000
 m29000| Fri Feb 22 11:54:30.228 [websvr] admin web console waiting for connections on port 30000
 m29000| Fri Feb 22 11:54:30.324 [initandlisten] connection accepted from 127.0.0.1:37237 #1 (1 connection now open)
"bs-smartos-x86-64-1.10gen.cc:29000"
 m29000| Fri Feb 22 11:54:30.325 [initandlisten] connection accepted from 165.225.128.186:57448 #2 (2 connections now open)
ShardingTest test : { "config" : "bs-smartos-x86-64-1.10gen.cc:29000", "shards" : [ connection to test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 ] }
Fri Feb 22 11:54:30.328 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb bs-smartos-x86-64-1.10gen.cc:29000 -vv --chunkSize 1 --setParameter enableTestCommands=1
 m30999| Fri Feb 22 11:54:30.343 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
 m30999| Fri Feb 22 11:54:30.343 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=1948 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
 m30999| Fri Feb 22 11:54:30.343 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
 m30999| Fri Feb 22 11:54:30.343 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
 m30999| Fri Feb 22 11:54:30.343 [mongosMain] options: { chunkSize: 1, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], vv: true }
 m30999| Fri Feb 22 11:54:30.344 [mongosMain] config string : bs-smartos-x86-64-1.10gen.cc:29000
 m30999| Fri Feb 22 11:54:30.344 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
 m30999| Fri Feb 22 11:54:30.345 BackgroundJob starting: ConnectBG
 m30999| Fri Feb 22 11:54:30.345 [mongosMain] connected connection!
 m29000| Fri Feb 22 11:54:30.345 [initandlisten] connection accepted from 165.225.128.186:37658 #3 (3 connections now open)
 m30999| Fri Feb 22 11:54:30.345 BackgroundJob starting: CheckConfigServers
 m30999| Fri Feb 22 11:54:30.345 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
 m30999| Fri Feb 22 11:54:30.346 BackgroundJob starting: ConnectBG
 m30999| Fri Feb 22 11:54:30.346 [mongosMain] connected connection!
 m29000| Fri Feb 22 11:54:30.346 [initandlisten] connection accepted from 165.225.128.186:49101 #4 (4 connections now open)
 m29000| Fri Feb 22 11:54:30.346 [conn4] CMD fsync: sync:1 lock:0
 m30999| Fri Feb 22 11:54:30.352 [mongosMain] created new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m30999| Fri Feb 22 11:54:30.352 [mongosMain] skew from remote server bs-smartos-x86-64-1.10gen.cc:29000 found: 0
 m30999| Fri Feb 22 11:54:30.353 [mongosMain] skew from remote server bs-smartos-x86-64-1.10gen.cc:29000 found: -1
 m30999| Fri Feb 22 11:54:30.353 [mongosMain] skew from remote server bs-smartos-x86-64-1.10gen.cc:29000 found: 0
 m30999| Fri Feb 22 11:54:30.353 [mongosMain] total clock skew of 0ms for servers bs-smartos-x86-64-1.10gen.cc:29000 is in 30000ms bounds.
 m30999| Fri Feb 22 11:54:30.353 [mongosMain] trying to acquire new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838 )
 m30999| Fri Feb 22 11:54:30.353 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838 (sleeping for 30000ms)
 m30999| Fri Feb 22 11:54:30.353 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
 m29000| Fri Feb 22 11:54:30.353 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
 m30999| Fri Feb 22 11:54:30.353 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838:mongosMain:5758",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:54:30 2013" },
 m30999|   "why" : "upgrading config database to new format v4",
 m30999|   "ts" : { "$oid" : "51275c764b746466f5f93930" } }
 m30999| { "_id" : "configUpgrade",
 m30999|   "state" : 0 }
 m29000| Fri Feb 22 11:54:30.353 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0 secs
 m29000| Fri Feb 22 11:54:30.353 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
 m29000| Fri Feb 22 11:54:30.354 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0 secs
 m29000| Fri Feb 22 11:54:30.354 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
 m29000| Fri Feb 22 11:54:30.354 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0 secs
 m29000| Fri Feb 22 11:54:30.356 [conn4] build index config.locks { _id: 1 }
 m29000| Fri Feb 22 11:54:30.356 [conn4] build index done. scanned 0 total records. 0 secs
 m29000| Fri Feb 22 11:54:30.358 [conn3] build index config.lockpings { _id: 1 }
 m29000| Fri Feb 22 11:54:30.359 [conn3] build index done. scanned 0 total records. 0.001 secs
 m30999| Fri Feb 22 11:54:30.360 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 11:54:30 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838', sleeping for 30000ms
 m29000| Fri Feb 22 11:54:30.360 [conn3] build index config.lockpings { ping: new Date(1) }
 m29000| Fri Feb 22 11:54:30.360 [conn3] build index done. scanned 1 total records. 0 secs
 m30999| Fri Feb 22 11:54:30.360 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838' acquired, ts : 51275c764b746466f5f93930
 m30999| Fri Feb 22 11:54:30.362 [mongosMain] starting upgrade of config server from v0 to v4
 m30999| Fri Feb 22 11:54:30.362 [mongosMain] starting next upgrade step from v0 to v4
 m30999| Fri Feb 22 11:54:30.363 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:54:30-51275c764b746466f5f93931", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361534070363), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
 m29000| Fri Feb 22 11:54:30.363 [conn4] build index config.changelog { _id: 1 }
 m29000| Fri Feb 22 11:54:30.363 [conn4] build index done. scanned 0 total records. 0 secs
 m30999| Fri Feb 22 11:54:30.363 [mongosMain] writing initial config version at v4
 m29000| Fri Feb 22 11:54:30.364 [conn4] build index config.version { _id: 1 }
 m29000| Fri Feb 22 11:54:30.364 [conn4] build index done. scanned 0 total records. 0 secs
 m30999| Fri Feb 22 11:54:30.365 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:54:30-51275c764b746466f5f93933", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361534070365), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
 m30999| Fri Feb 22 11:54:30.365 [mongosMain] upgrade of config server to v4 successful
 m30999| Fri Feb 22 11:54:30.365 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838' unlocked.
 m29000| Fri Feb 22 11:54:30.366 [conn3] build index config.settings { _id: 1 }
 m30999| Fri Feb 22 11:54:30.367 [websvr] fd limit hard:65536 soft:1024 max conn: 819
 m30999| Fri Feb 22 11:54:30.367 BackgroundJob starting: Balancer
 m30999| Fri Feb 22 11:54:30.367 BackgroundJob starting: cursorTimeout
 m30999| Fri Feb 22 11:54:30.367 [Balancer] about to contact config servers and shards
 m30999| Fri Feb 22 11:54:30.367 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
 m29000| Fri Feb 22 11:54:30.367 [conn3] build index done. scanned 0 total records. 0.001 secs
 m30999| Fri Feb 22 11:54:30.367 BackgroundJob starting: PeriodicTask::Runner
 m30999| Fri Feb 22 11:54:30.367 [websvr] admin web console waiting for connections on port 31999
 m30999| Fri Feb 22 11:54:30.367 [mongosMain] waiting for connections on port 30999
 m29000| Fri Feb 22 11:54:30.367 [conn3] build index config.chunks { _id: 1 }
 m29000| Fri Feb 22 11:54:30.368 [conn3] build index done. scanned 0 total records. 0 secs
 m29000| Fri Feb 22 11:54:30.369 [conn3] info: creating collection config.chunks on add index
 m29000| Fri Feb 22 11:54:30.369 [conn3] build index config.chunks { ns: 1, min: 1 }
 m29000| Fri Feb 22 11:54:30.369 [conn3] build index done. scanned 0 total records. 0 secs
 m29000| Fri Feb 22 11:54:30.369 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
 m29000| Fri Feb 22 11:54:30.370 [conn3] build index done. scanned 0 total records. 0 secs
 m29000| Fri Feb 22 11:54:30.370 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
 m29000| Fri Feb 22 11:54:30.370 [conn3] build index done. scanned 0 total records. 0 secs
 m29000| Fri Feb 22 11:54:30.370 [conn3] build index config.shards { _id: 1 }
 m29000| Fri Feb 22 11:54:30.371 [conn3] build index done. scanned 0 total records. 0 secs
 m29000| Fri Feb 22 11:54:30.371 [conn3] info: creating collection config.shards on add index
 m29000| Fri Feb 22 11:54:30.371 [conn3] build index config.shards { host: 1 }
 m29000| Fri Feb 22 11:54:30.372 [conn3] build index done. scanned 0 total records. 0 secs
 m30999| Fri Feb 22 11:54:30.372 [Balancer] config servers and shards contacted successfully
 m30999| Fri Feb 22 11:54:30.372 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:54:30
 m30999| Fri Feb 22 11:54:30.372 [Balancer] created new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
 m30999| Fri Feb 22 11:54:30.373 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
 m29000| Fri Feb 22 11:54:30.373 [conn3] build index config.mongos { _id: 1 }
 m30999| Fri Feb 22 11:54:30.373 BackgroundJob starting: ConnectBG
 m29000| Fri Feb 22 11:54:30.373 [initandlisten] connection accepted from 165.225.128.186:60469 #5 (5 connections now open)
 m30999| Fri Feb 22 11:54:30.373 [Balancer] connected connection!
 m29000| Fri Feb 22 11:54:30.374 [conn3] build index done. scanned 0 total records. 0 secs
 m30999| Fri Feb 22 11:54:30.374 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 11:54:30.374 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838 )
 m30999| Fri Feb 22 11:54:30.374 [Balancer] inserting initial doc in config.locks for lock balancer
 m30999| Fri Feb 22 11:54:30.374 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 11:54:30 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "51275c764b746466f5f93935" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0 }
 m30999| Fri Feb 22 11:54:30.375 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838' acquired, ts : 51275c764b746466f5f93935
 m30999| Fri Feb 22 11:54:30.375 [Balancer] *** start balancing round
 m30999| Fri Feb 22 11:54:30.375 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 11:54:30.375 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 11:54:30.375 [Balancer] no collections to balance
 m30999| Fri Feb 22 11:54:30.375 [Balancer] no need to move any chunk
 m30999| Fri Feb 22 11:54:30.375 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 11:54:30.375 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534070:16838' unlocked.
 m30999| Fri Feb 22 11:54:30.529 [mongosMain] connection accepted from 127.0.0.1:51753 #1 (1 connection now open)
ShardingTest undefined going to add shard : test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.531 [conn1] couldn't find database [admin] in config db
 m29000| Fri Feb 22 11:54:30.531 [conn3] build index config.databases { _id: 1 }
 m29000| Fri Feb 22 11:54:30.532 [conn3] build index done. scanned 0 total records. 0 secs
 m30999| Fri Feb 22 11:54:30.532 [conn1] put [admin] on: config:bs-smartos-x86-64-1.10gen.cc:29000
 m30999| Fri Feb 22 11:54:30.532 [conn1] starting new replica set monitor for replica set test-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.532 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.533 BackgroundJob starting: ConnectBG
 m30999| Fri Feb 22 11:54:30.533 [conn1] connected connection!
 m30999| Fri Feb 22 11:54:30.533 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set test-rs0
 m31100| Fri Feb 22 11:54:30.533 [initandlisten] connection accepted from 165.225.128.186:64669 #16 (10 connections now open)
 m30999| Fri Feb 22 11:54:30.533 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534070533), ok: 1.0 }
 m30999| Fri Feb 22 11:54:30.533 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from test-rs0/
 m30999| Fri Feb 22 11:54:30.533 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set test-rs0
 m30999| Fri Feb 22 11:54:30.533 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.533 BackgroundJob starting: ConnectBG
 m31100| Fri Feb 22 11:54:30.534 [initandlisten] connection accepted from 165.225.128.186:65043 #17 (11 connections now open)
 m30999| Fri Feb 22 11:54:30.534 [conn1] connected connection!
 m30999| Fri Feb 22 11:54:30.534 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set test-rs0
 m30999| Fri Feb 22 11:54:30.534 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set test-rs0
 m30999| Fri Feb 22 11:54:30.534 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
 m30999| Fri Feb 22 11:54:30.534 BackgroundJob starting: ConnectBG
 m30999| Fri Feb 22 11:54:30.534 [conn1] connected connection!
 m30999| Fri Feb 22 11:54:30.534 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set test-rs0
 m30999| Fri Feb 22 11:54:30.534 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set test-rs0
 m31101| Fri Feb 22 11:54:30.534 [initandlisten] connection accepted from 165.225.128.186:47791 #8 (6 connections now open)
 m30999| Fri Feb 22 11:54:30.534 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.534 BackgroundJob starting: ConnectBG
 m30999| Fri Feb 22 11:54:30.534 [conn1] connected connection!
 m30999| Fri Feb 22 11:54:30.534 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set test-rs0
 m31102| Fri Feb 22 11:54:30.534 [initandlisten] connection accepted from 165.225.128.186:47904 #7 (6 connections now open)
 m30999| Fri Feb 22 11:54:30.534 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.535 BackgroundJob starting: ConnectBG
 m31100| Fri Feb 22 11:54:30.535 [initandlisten] connection accepted from 165.225.128.186:50805 #18 (12 connections now open)
 m30999| Fri Feb 22 11:54:30.535 [conn1] connected connection!
 m30999| Fri Feb 22 11:54:30.535 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.535 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
 m30999| Fri Feb 22 11:54:30.535 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.535 [conn1] replicaSetChange: shard not found for set: test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.535 [conn1] _check : test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
 m31100| Fri Feb 22 11:54:30.535 [conn16] end connection 165.225.128.186:64669 (11 connections now open)
 m30999| Fri Feb 22 11:54:30.535 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534070535), ok: 1.0 }
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.536 [conn1] Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.536 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534070536), ok: 1.0 }
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.536 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534070536), ok: 1.0 }
 m30999| Fri Feb 22 11:54:30.536 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
 m30999| Fri Feb 22 11:54:30.536 BackgroundJob starting: ConnectBG
 m31101| Fri Feb 22 11:54:30.536 [initandlisten] connection accepted from 165.225.128.186:48384 #9 (7 connections now open)
 m30999| Fri Feb 22 11:54:30.536 [conn1] connected connection!
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
 m30999| Fri Feb 22 11:54:30.536 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
 m30999| Fri Feb 22 11:54:30.537 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.537 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534070537), ok: 1.0 }
 m30999| Fri Feb 22 11:54:30.537 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
 m30999| Fri Feb 22 11:54:30.537 BackgroundJob starting: ConnectBG
 m31102| Fri Feb 22 11:54:30.537 [initandlisten] connection accepted from 165.225.128.186:41409 #8 (7 connections now open)
 m30999| Fri Feb 22 11:54:30.537 [conn1] connected connection!
m30999| Fri Feb 22 11:54:30.537 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.537 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.537 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.537 [conn1] replica set monitor for replica set test-rs0 started, address is test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.537 BackgroundJob starting: ReplicaSetMonitorWatcher m30999| Fri Feb 22 11:54:30.537 [ReplicaSetMonitorWatcher] starting m30999| Fri Feb 22 11:54:30.537 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.538 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.538 [conn1] connected connection! m31100| Fri Feb 22 11:54:30.538 [initandlisten] connection accepted from 165.225.128.186:58368 #19 (12 connections now open) m30999| Fri Feb 22 11:54:30.539 [conn1] going to add shard: { _id: "test-rs0", host: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } { "shardAdded" : "test-rs0", "ok" : 1 } m30999| Fri Feb 22 11:54:30.540 [conn1] couldn't find database [test] in config db m30999| Fri Feb 22 11:54:30.541 [conn1] best shard for new allocation is shard: test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 mapped: 128 writeLock: 0 version: 2.4.0-rc1-pre- m30999| Fri Feb 22 11:54:30.541 [conn1] put [test] on: test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.541 [conn1] enabling sharding on: test m30999| Fri Feb 22 11:54:30.542 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.settings", n2skip: 0, n2return: 0, options: 
0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.542 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:bs-smartos-x86-64-1.10gen.cc:29000] m30999| Fri Feb 22 11:54:30.542 [conn1] [pcursor] initializing on shard config:bs-smartos-x86-64-1.10gen.cc:29000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.542 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31100 serverID: 51275c764b746466f5f93934 m30999| Fri Feb 22 11:54:30.542 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31101 serverID: 51275c764b746466f5f93934 m30999| Fri Feb 22 11:54:30.542 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.542 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31102 serverID: 51275c764b746466f5f93934 m30999| Fri Feb 22 11:54:30.542 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.542 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.542 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.542 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.542 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.542 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.542 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.543 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.543 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.543 [conn1] connected connection! 
m30999| Fri Feb 22 11:54:30.543 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] connected connection! m30999| Fri Feb 22 11:54:30.543 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:30.543 [initandlisten] connection accepted from 165.225.128.186:60358 #20 (13 connections now open) m30999| Fri Feb 22 11:54:30.543 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.543 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] connected connection! m31101| Fri Feb 22 11:54:30.543 [initandlisten] connection accepted from 165.225.128.186:63207 #10 (8 connections now open) m30999| Fri Feb 22 11:54:30.543 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.543 [initandlisten] connection accepted from 165.225.128.186:45498 #21 (14 connections now open) m30999| Fri Feb 22 11:54:30.543 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] connected connection! m31102| Fri Feb 22 11:54:30.543 [initandlisten] connection accepted from 165.225.128.186:45682 #9 (8 connections now open) m30999| Fri Feb 22 11:54:30.543 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 11:54:30.543 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.544 [conn1] connected connection! 
m29000| Fri Feb 22 11:54:30.544 [initandlisten] connection accepted from 165.225.128.186:49371 #6 (6 connections now open) m30999| Fri Feb 22 11:54:30.544 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:29000 serverID: 51275c764b746466f5f93934 m30999| Fri Feb 22 11:54:30.544 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 11:54:30.544 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 11:54:30.544 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.544 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000] bs-smartos-x86-64-1.10gen.cc:29000 is not a shard node m30999| Fri Feb 22 11:54:30.544 [conn1] [pcursor] initialized query (lazily) on shard config:bs-smartos-x86-64-1.10gen.cc:29000, current connection state is { state: { conn: "bs-smartos-x86-64-1.10gen.cc:29000", vinfo: "config:bs-smartos-x86-64-1.10gen.cc:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.544 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.544 [conn1] [pcursor] finishing on shard config:bs-smartos-x86-64-1.10gen.cc:29000, current connection state is { state: { conn: "bs-smartos-x86-64-1.10gen.cc:29000", vinfo: "config:bs-smartos-x86-64-1.10gen.cc:29000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.544 [conn1] [pcursor] finished on shard config:bs-smartos-x86-64-1.10gen.cc:29000, current connection state is { state: { conn: "(done)", vinfo: "config:bs-smartos-x86-64-1.10gen.cc:29000", cursor: { _id: "chunksize", value: 1 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "_id" : 
"chunksize", "value" : 1 } m31100| Fri Feb 22 11:54:30.546 [FileAllocator] allocating new datafile /data/db/test-rs0-0/test.ns, filling with zeroes... m31100| Fri Feb 22 11:54:30.546 [FileAllocator] done allocating datafile /data/db/test-rs0-0/test.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 11:54:30.547 [FileAllocator] allocating new datafile /data/db/test-rs0-0/test.0, filling with zeroes... m31100| Fri Feb 22 11:54:30.547 [FileAllocator] done allocating datafile /data/db/test-rs0-0/test.0, size: 16MB, took 0 secs m31100| Fri Feb 22 11:54:30.551 [conn19] build index test.foo { _id: 1 } m31100| Fri Feb 22 11:54:30.552 [conn19] build index done. scanned 0 total records. 0.001 secs m31100| Fri Feb 22 11:54:30.552 [conn19] info: creating collection test.foo on add index m30999| Fri Feb 22 11:54:30.552 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } } m30999| Fri Feb 22 11:54:30.552 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 } m30999| Fri Feb 22 11:54:30.553 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 51275c764b746466f5f93936 m31101| Fri Feb 22 11:54:30.553 [FileAllocator] allocating new datafile /data/db/test-rs0-1/test.ns, filling with zeroes... m31102| Fri Feb 22 11:54:30.553 [FileAllocator] allocating new datafile /data/db/test-rs0-2/test.ns, filling with zeroes... m31102| Fri Feb 22 11:54:30.553 [FileAllocator] done allocating datafile /data/db/test-rs0-2/test.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 11:54:30.553 [FileAllocator] done allocating datafile /data/db/test-rs0-1/test.ns, size: 16MB, took 0 secs m31102| Fri Feb 22 11:54:30.553 [FileAllocator] allocating new datafile /data/db/test-rs0-2/test.0, filling with zeroes... m31101| Fri Feb 22 11:54:30.553 [FileAllocator] allocating new datafile /data/db/test-rs0-1/test.0, filling with zeroes... 
m30999| Fri Feb 22 11:54:30.553 [conn1] major version query from 0|0||51275c764b746466f5f93936 and over 0 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 0|0 } } ] } m31102| Fri Feb 22 11:54:30.553 [FileAllocator] done allocating datafile /data/db/test-rs0-2/test.0, size: 16MB, took 0 secs m31101| Fri Feb 22 11:54:30.553 [FileAllocator] done allocating datafile /data/db/test-rs0-1/test.0, size: 16MB, took 0 secs m30999| Fri Feb 22 11:54:30.554 [conn1] loaded 1 chunks into new chunk manager for test.foo with version 1|0||51275c764b746466f5f93936 m30999| Fri Feb 22 11:54:30.554 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||51275c764b746466f5f93936 based on: (empty) m29000| Fri Feb 22 11:54:30.554 [conn3] build index config.collections { _id: 1 } m29000| Fri Feb 22 11:54:30.555 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 11:54:30.556 [conn1] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.556 [conn1] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1187070 2 m30999| Fri Feb 22 11:54:30.556 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" } m30999| Fri Feb 22 11:54:30.556 [conn1] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.556 [conn1] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true, shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1187070 2 m31100| Fri Feb 22 11:54:30.556 [conn21] no current chunk manager found for this shard, will initialize m29000| Fri Feb 22 11:54:30.557 [initandlisten] connection accepted from 165.225.128.186:60276 #7 (7 connections now open) m31102| Fri Feb 22 11:54:30.557 [repl writer worker 1] build index test.foo { _id: 1 } m30999| Fri Feb 22 11:54:30.557 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m31101| Fri Feb 22 11:54:30.557 [repl writer worker 1] build index test.foo { _id: 1 } m30999| Fri Feb 22 11:54:30.558 [conn1] about to initiate autosplit: ns:test.fooshard: test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 200412 splitThreshold: 921 m30999| Fri Feb 22 11:54:30.558 [conn1] chunk not full enough to trigger auto-split no split entry m31102| Fri Feb 22 11:54:30.558 [repl writer worker 1] build index done. scanned 0 total records. 
0.001 secs m31102| Fri Feb 22 11:54:30.558 [repl writer worker 1] info: creating collection test.foo on add index m31101| Fri Feb 22 11:54:30.558 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 11:54:30.559 [repl writer worker 1] info: creating collection test.foo on add index [ { "addr" : "bs-smartos-x86-64-1.10gen.cc:31100", "ok" : true, "ismaster" : true, "hidden" : false, "secondary" : false, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31101", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31102", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 } ] m31101| Fri Feb 22 11:54:30.762 [conn1] creating profile collection: test.system.profile m31102| Fri Feb 22 11:54:30.763 [conn1] creating profile collection: test.system.profile m31100| Fri Feb 22 11:54:30.766 [conn1] creating profile collection: test.system.profile m30999| Fri Feb 22 11:54:30.767 [mongosMain] connection accepted from 165.225.128.186:59004 #2 (2 connections now open) m30999| Fri Feb 22 11:54:30.767 [conn2] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.768 [conn2] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.768 [conn2] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.768 [conn2] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.768 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.768 [conn2] connected 
connection! m31100| Fri Feb 22 11:54:30.768 [initandlisten] connection accepted from 165.225.128.186:53159 #22 (15 connections now open) m30999| Fri Feb 22 11:54:30.768 [conn2] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.768 [conn2] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.768 [conn2] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.769 [conn2] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x118a820 2 m30999| Fri Feb 22 11:54:30.769 [conn2] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.769 [conn2] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:30.769 [conn2] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.769 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.769 [conn2] connected connection! 
m31101| Fri Feb 22 11:54:30.769 [initandlisten] connection accepted from 165.225.128.186:61153 #11 (9 connections now open) m30999| Fri Feb 22 11:54:30.769 [conn2] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.769 [conn2] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.769 [conn2] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.769 [conn2] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.770 [mongosMain] connection accepted from 165.225.128.186:52338 #3 (3 connections now open) m30999| Fri Feb 22 11:54:30.770 [conn3] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } 
m30999| Fri Feb 22 11:54:30.770 [conn3] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.770 [conn3] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.770 [conn3] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.770 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.771 [initandlisten] connection accepted from 165.225.128.186:45104 #23 (16 connections now open) m30999| Fri Feb 22 11:54:30.771 [conn3] connected connection! m30999| Fri Feb 22 11:54:30.771 [conn3] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.771 [conn3] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.771 [conn3] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.771 [conn3] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x118caa0 2 m30999| Fri Feb 22 11:54:30.771 [conn3] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.771 [conn3] 
dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0 m30999| Fri Feb 22 11:54:30.771 [conn3] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.771 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.771 [conn3] connected connection! m31102| Fri Feb 22 11:54:30.771 [initandlisten] connection accepted from 165.225.128.186:36716 #10 (9 connections now open) m30999| Fri Feb 22 11:54:30.771 [conn3] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.771 [conn3] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.771 [conn3] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.772 [conn3] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } 
m30999| Fri Feb 22 11:54:30.772 [mongosMain] connection accepted from 165.225.128.186:65491 #4 (4 connections now open) m30999| Fri Feb 22 11:54:30.772 [conn4] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.772 [conn4] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.772 [conn4] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.772 [conn4] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.773 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.773 [conn4] connected connection! m31100| Fri Feb 22 11:54:30.773 [initandlisten] connection accepted from 165.225.128.186:59059 #24 (17 connections now open) m30999| Fri Feb 22 11:54:30.773 [conn4] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.773 [conn4] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.773 [conn4] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.773 [conn4] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: 
"test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x118e360 2 m30999| Fri Feb 22 11:54:30.773 [conn4] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.773 [conn4] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:30.773 [conn4] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.773 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.774 [conn4] connected connection! m31101| Fri Feb 22 11:54:30.774 [initandlisten] connection accepted from 165.225.128.186:42428 #12 (10 connections now open) m30999| Fri Feb 22 11:54:30.774 [conn4] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.774 [conn4] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.774 [conn4] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.774 [conn4] [pcursor] finished on shard 
test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.774 [mongosMain] connection accepted from 165.225.128.186:64982 #5 (5 connections now open) m30999| Fri Feb 22 11:54:30.774 [conn5] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.774 [conn5] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.774 [conn5] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.775 [conn5] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.775 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.775 [initandlisten] connection accepted from 165.225.128.186:40571 #25 (18 connections now open) m30999| Fri Feb 22 11:54:30.775 [conn5] connected connection! 
m30999| Fri Feb 22 11:54:30.775 [conn5] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.775 [conn5] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.775 [conn5] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.775 [conn5] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x118f970 2 m30999| Fri Feb 22 11:54:30.775 [conn5] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.775 [conn5] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0 m30999| Fri Feb 22 11:54:30.775 [conn5] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.775 BackgroundJob starting: ConnectBG m31102| Fri Feb 22 11:54:30.776 [initandlisten] connection accepted from 165.225.128.186:46951 #11 (10 connections now open) m30999| Fri Feb 22 11:54:30.776 [conn5] connected connection! 
m30999| Fri Feb 22 11:54:30.776 [conn5] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.776 [conn5] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.776 [conn5] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.776 [conn5] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.776 [mongosMain] connection accepted from 165.225.128.186:46943 #6 (6 connections now open) m30999| Fri Feb 22 11:54:30.777 [conn6] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.777 [conn6] [pcursor] initializing over 1 shards required by [test.foo @ 
1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.777 [conn6] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.777 [conn6] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.777 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.777 [initandlisten] connection accepted from 165.225.128.186:45704 #26 (19 connections now open) m30999| Fri Feb 22 11:54:30.777 [conn6] connected connection! m30999| Fri Feb 22 11:54:30.777 [conn6] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.777 [conn6] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.777 [conn6] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.777 [conn6] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1193140 2 m30999| Fri Feb 22 11:54:30.778 [conn6] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.778 [conn6] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 
11:54:30.778 [conn6] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.778 BackgroundJob starting: ConnectBG m31101| Fri Feb 22 11:54:30.778 [initandlisten] connection accepted from 165.225.128.186:37405 #13 (11 connections now open) m30999| Fri Feb 22 11:54:30.778 [conn6] connected connection! m30999| Fri Feb 22 11:54:30.778 [conn6] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.778 [conn6] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.778 [conn6] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.778 [conn6] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.778 [mongosMain] connection accepted from 165.225.128.186:42868 #7 (7 
connections now open) m30999| Fri Feb 22 11:54:30.779 [conn7] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.779 [conn7] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.779 [conn7] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.779 [conn7] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.779 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.779 [initandlisten] connection accepted from 165.225.128.186:44415 #27 (20 connections now open) m30999| Fri Feb 22 11:54:30.779 [conn7] connected connection! m30999| Fri Feb 22 11:54:30.779 [conn7] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.779 [conn7] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.779 [conn7] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.779 [conn7] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 
0x11946d0 2 m30999| Fri Feb 22 11:54:30.780 [conn7] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.780 [conn7] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0 m30999| Fri Feb 22 11:54:30.780 [conn7] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.780 BackgroundJob starting: ConnectBG m31102| Fri Feb 22 11:54:30.780 [initandlisten] connection accepted from 165.225.128.186:55504 #12 (11 connections now open) m30999| Fri Feb 22 11:54:30.780 [conn7] connected connection! m30999| Fri Feb 22 11:54:30.780 [conn7] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.780 [conn7] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.780 [conn7] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.780 [conn7] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: 
"(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.780 [mongosMain] connection accepted from 165.225.128.186:42597 #8 (8 connections now open) m30999| Fri Feb 22 11:54:30.781 [conn8] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.781 [conn8] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.781 [conn8] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.781 [conn8] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.781 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.781 [conn8] connected connection! 
m30999| Fri Feb 22 11:54:30.781 [conn8] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:30.781 [initandlisten] connection accepted from 165.225.128.186:54464 #28 (21 connections now open) m30999| Fri Feb 22 11:54:30.781 [conn8] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.782 [conn8] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.782 [conn8] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11987a0 2 m30999| Fri Feb 22 11:54:30.782 [conn8] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.782 [conn8] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:30.782 [conn8] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.782 BackgroundJob starting: ConnectBG m31101| Fri Feb 22 11:54:30.782 [initandlisten] connection accepted from 165.225.128.186:45422 #14 (12 connections now open) m30999| Fri Feb 22 11:54:30.782 [conn8] connected connection! 
m30999| Fri Feb 22 11:54:30.782 [conn8] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.782 [conn8] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.782 [conn8] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.782 [conn8] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.783 [mongosMain] connection accepted from 165.225.128.186:44560 #9 (9 connections now open) m30999| Fri Feb 22 11:54:30.783 [conn9] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.783 [conn9] [pcursor] initializing over 1 shards required by [test.foo @ 
1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.783 [conn9] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.783 [conn9] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.783 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.784 [initandlisten] connection accepted from 165.225.128.186:40289 #29 (22 connections now open) m30999| Fri Feb 22 11:54:30.784 [conn9] connected connection! m30999| Fri Feb 22 11:54:30.784 [conn9] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.784 [conn9] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.784 [conn9] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.784 [conn9] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x119af30 2 m30999| Fri Feb 22 11:54:30.784 [conn9] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.784 [conn9] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0 m30999| Fri Feb 22 
11:54:30.784 [conn9] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.784 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.784 [conn9] connected connection! m31102| Fri Feb 22 11:54:30.784 [initandlisten] connection accepted from 165.225.128.186:49441 #13 (12 connections now open) m30999| Fri Feb 22 11:54:30.784 [conn9] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.784 [conn9] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.784 [conn9] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.785 [conn9] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.785 [mongosMain] connection accepted from 165.225.128.186:39447 #10 (10 
connections now open) m30999| Fri Feb 22 11:54:30.785 [conn10] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.785 [conn10] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.785 [conn10] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.785 [conn10] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.786 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.786 [initandlisten] connection accepted from 165.225.128.186:63652 #30 (23 connections now open) m30999| Fri Feb 22 11:54:30.786 [conn10] connected connection! 
m30999| Fri Feb 22 11:54:30.786 [conn10] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.786 [conn10] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.786 [conn10] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.786 [conn10] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x119c4c0 2 m30999| Fri Feb 22 11:54:30.786 [conn10] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.786 [conn10] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:30.786 [conn10] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.786 BackgroundJob starting: ConnectBG m31101| Fri Feb 22 11:54:30.787 [initandlisten] connection accepted from 165.225.128.186:42951 #15 (13 connections now open) m30999| Fri Feb 22 11:54:30.787 [conn10] connected connection! 
m30999| Fri Feb 22 11:54:30.787 [conn10] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.787 [conn10] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.787 [conn10] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.787 [conn10] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.787 [mongosMain] connection accepted from 165.225.128.186:52361 #11 (11 connections now open) m30999| Fri Feb 22 11:54:30.787 [conn11] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.788 [conn11] [pcursor] initializing over 1 shards required by [test.foo @ 
1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.788 [conn11] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.788 [conn11] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.788 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.788 [initandlisten] connection accepted from 165.225.128.186:59165 #31 (24 connections now open) m30999| Fri Feb 22 11:54:30.788 [conn11] connected connection! m30999| Fri Feb 22 11:54:30.788 [conn11] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.788 [conn11] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.788 [conn11] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.788 [conn11] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x119dd80 2 m30999| Fri Feb 22 11:54:30.788 [conn11] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.788 [conn11] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0 m30999| Fri Feb 22 
11:54:30.788 [conn11] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:30.789 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.789 [conn11] connected connection! m31102| Fri Feb 22 11:54:30.789 [initandlisten] connection accepted from 165.225.128.186:40821 #14 (13 connections now open) m30999| Fri Feb 22 11:54:30.789 [conn11] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.789 [conn11] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.789 [conn11] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.789 [conn11] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.789 [mongosMain] connection accepted from 165.225.128.186:56089 
#12 (12 connections now open) m30999| Fri Feb 22 11:54:30.789 [conn12] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:30.789 [conn12] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.789 [conn12] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.790 [conn12] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.790 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.790 [initandlisten] connection accepted from 165.225.128.186:61892 #32 (25 connections now open) m30999| Fri Feb 22 11:54:30.790 [conn12] connected connection! 
m30999| Fri Feb 22 11:54:30.790 [conn12] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.790 [conn12] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.790 [conn12] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.790 [conn12] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x119f390 2 m30999| Fri Feb 22 11:54:30.790 [conn12] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:30.790 [conn12] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:30.790 [conn12] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 11:54:30.791 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:30.791 [conn12] connected connection! 
m31101| Fri Feb 22 11:54:30.791 [initandlisten] connection accepted from 165.225.128.186:37744 #16 (14 connections now open) m30999| Fri Feb 22 11:54:30.791 [conn12] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.791 [conn12] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:30.791 [conn12] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.791 [conn12] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:30.791 [mongosMain] connection accepted from 165.225.128.186:37361 #13 (13 connections now open) m30999| Fri Feb 22 11:54:30.792 [conn13] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", 
filter: {} } m30999| Fri Feb 22 11:54:30.792 [conn13] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:30.792 [conn13] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:30.792 [conn13] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.792 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:30.792 [initandlisten] connection accepted from 165.225.128.186:35033 #33 (26 connections now open) m30999| Fri Feb 22 11:54:30.792 [conn13] connected connection! m30999| Fri Feb 22 11:54:30.792 [conn13] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.792 [conn13] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:30.792 [conn13] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:30.792 [conn13] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11a0b90 2 m30999| Fri Feb 22 11:54:30.793 [conn13] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 
11:54:30.793 [conn13] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0
m30999| Fri Feb 22 11:54:30.793 [conn13] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:30.793 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:30.793 [initandlisten] connection accepted from 165.225.128.186:60400 #15 (14 connections now open)
m30999| Fri Feb 22 11:54:30.793 [conn13] connected connection!
m30999| Fri Feb 22 11:54:30.793 [conn13] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.793 [conn13] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.793 [conn13] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.793 [conn13] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.794 [mongosMain] connection accepted from 165.225.128.186:54953 #14 (14 connections now open)
m30999| Fri Feb 22 11:54:30.794 [conn14] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.794 [conn14] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.794 [conn14] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.794 [conn14] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.794 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.794 [initandlisten] connection accepted from 165.225.128.186:41021 #34 (27 connections now open)
m30999| Fri Feb 22 11:54:30.794 [conn14] connected connection!
m30999| Fri Feb 22 11:54:30.794 [conn14] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.794 [conn14] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.794 [conn14] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.794 [conn14] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11a2330 2
m30999| Fri Feb 22 11:54:30.795 [conn14] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.795 [conn14] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:30.795 [conn14] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:54:30.795 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 11:54:30.795 [initandlisten] connection accepted from 165.225.128.186:64880 #17 (15 connections now open)
m30999| Fri Feb 22 11:54:30.795 [conn14] connected connection!
m30999| Fri Feb 22 11:54:30.795 [conn14] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.795 [conn14] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.795 [conn14] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.795 [conn14] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.795 [mongosMain] connection accepted from 165.225.128.186:52847 #15 (15 connections now open)
m30999| Fri Feb 22 11:54:30.796 [conn15] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.796 [conn15] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.796 [conn15] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.796 [conn15] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.796 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.796 [initandlisten] connection accepted from 165.225.128.186:44937 #35 (28 connections now open)
m30999| Fri Feb 22 11:54:30.796 [conn15] connected connection!
m30999| Fri Feb 22 11:54:30.796 [conn15] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.796 [conn15] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.796 [conn15] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.796 [conn15] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11a3940 2
m30999| Fri Feb 22 11:54:30.796 [conn15] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.796 [conn15] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0
m30999| Fri Feb 22 11:54:30.796 [conn15] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:30.796 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:30.797 [conn15] connected connection!
m30999| Fri Feb 22 11:54:30.797 [conn15] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.797 [conn15] [pcursor] finishing over 1 shards
m31102| Fri Feb 22 11:54:30.797 [initandlisten] connection accepted from 165.225.128.186:56986 #16 (15 connections now open)
m30999| Fri Feb 22 11:54:30.797 [conn15] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.797 [conn15] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.797 [mongosMain] connection accepted from 165.225.128.186:49395 #16 (16 connections now open)
m30999| Fri Feb 22 11:54:30.798 [conn16] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.798 [conn16] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.798 [conn16] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.798 [conn16] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.798 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.798 [initandlisten] connection accepted from 165.225.128.186:35770 #36 (29 connections now open)
m30999| Fri Feb 22 11:54:30.798 [conn16] connected connection!
m30999| Fri Feb 22 11:54:30.798 [conn16] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.798 [conn16] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.798 [conn16] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.798 [conn16] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11a5140 2
m30999| Fri Feb 22 11:54:30.799 [conn16] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.799 [conn16] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:30.799 [conn16] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:54:30.799 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 11:54:30.799 [initandlisten] connection accepted from 165.225.128.186:56671 #18 (16 connections now open)
m30999| Fri Feb 22 11:54:30.799 [conn16] connected connection!
m30999| Fri Feb 22 11:54:30.799 [conn16] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.799 [conn16] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.799 [conn16] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.799 [conn16] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.799 [mongosMain] connection accepted from 165.225.128.186:64190 #17 (17 connections now open)
m30999| Fri Feb 22 11:54:30.800 [conn17] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.800 [conn17] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.800 [conn17] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.800 [conn17] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.800 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.800 [initandlisten] connection accepted from 165.225.128.186:35652 #37 (30 connections now open)
m30999| Fri Feb 22 11:54:30.800 [conn17] connected connection!
m30999| Fri Feb 22 11:54:30.800 [conn17] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.800 [conn17] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.801 [conn17] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.801 [conn17] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11a80a0 2
m30999| Fri Feb 22 11:54:30.801 [conn17] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.801 [conn17] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0
m30999| Fri Feb 22 11:54:30.801 [conn17] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:30.801 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:30.801 [conn17] connected connection!
m31102| Fri Feb 22 11:54:30.801 [initandlisten] connection accepted from 165.225.128.186:59035 #17 (16 connections now open)
m30999| Fri Feb 22 11:54:30.801 [conn17] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.801 [conn17] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.801 [conn17] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.801 [conn17] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.802 [mongosMain] connection accepted from 165.225.128.186:61853 #18 (18 connections now open)
m30999| Fri Feb 22 11:54:30.802 [conn18] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.802 [conn18] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.802 [conn18] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.802 [conn18] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.802 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.803 [initandlisten] connection accepted from 165.225.128.186:56452 #38 (31 connections now open)
m30999| Fri Feb 22 11:54:30.803 [conn18] connected connection!
m30999| Fri Feb 22 11:54:30.803 [conn18] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.803 [conn18] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.803 [conn18] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.803 [conn18] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11ae080 2
m30999| Fri Feb 22 11:54:30.803 [conn18] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.803 [conn18] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:30.803 [conn18] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:54:30.803 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:30.803 [conn18] connected connection!
m31101| Fri Feb 22 11:54:30.803 [initandlisten] connection accepted from 165.225.128.186:48062 #19 (17 connections now open)
m30999| Fri Feb 22 11:54:30.803 [conn18] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.803 [conn18] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.803 [conn18] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.803 [conn18] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.804 [mongosMain] connection accepted from 165.225.128.186:37459 #19 (19 connections now open)
m30999| Fri Feb 22 11:54:30.804 [conn19] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.804 [conn19] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.804 [conn19] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.804 [conn19] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.804 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.805 [initandlisten] connection accepted from 165.225.128.186:64569 #39 (32 connections now open)
m30999| Fri Feb 22 11:54:30.805 [conn19] connected connection!
m30999| Fri Feb 22 11:54:30.805 [conn19] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.805 [conn19] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.805 [conn19] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.805 [conn19] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11af610 2
m30999| Fri Feb 22 11:54:30.805 [conn19] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.805 [conn19] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0
m30999| Fri Feb 22 11:54:30.805 [conn19] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:30.805 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:30.805 [conn19] connected connection!
m31102| Fri Feb 22 11:54:30.805 [initandlisten] connection accepted from 165.225.128.186:36414 #18 (17 connections now open)
m30999| Fri Feb 22 11:54:30.805 [conn19] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.805 [conn19] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.805 [conn19] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.806 [conn19] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.806 [mongosMain] connection accepted from 165.225.128.186:48742 #20 (20 connections now open)
m30999| Fri Feb 22 11:54:30.806 [conn20] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.806 [conn20] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.806 [conn20] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.806 [conn20] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.807 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.807 [initandlisten] connection accepted from 165.225.128.186:42535 #40 (33 connections now open)
m30999| Fri Feb 22 11:54:30.807 [conn20] connected connection!
m30999| Fri Feb 22 11:54:30.807 [conn20] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.807 [conn20] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.807 [conn20] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.807 [conn20] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11b1d90 2
m30999| Fri Feb 22 11:54:30.807 [conn20] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.807 [conn20] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:30.807 [conn20] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 11:54:30.807 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 11:54:30.808 [initandlisten] connection accepted from 165.225.128.186:56793 #20 (18 connections now open)
m30999| Fri Feb 22 11:54:30.808 [conn20] connected connection!
m30999| Fri Feb 22 11:54:30.808 [conn20] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.808 [conn20] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.808 [conn20] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.808 [conn20] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:30.808 [mongosMain] connection accepted from 165.225.128.186:45949 #21 (21 connections now open)
m30999| Fri Feb 22 11:54:30.808 [conn21] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:30.808 [conn21] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:30.808 [conn21] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.808 [conn21] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.809 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:30.809 [initandlisten] connection accepted from 165.225.128.186:45186 #41 (34 connections now open)
m30999| Fri Feb 22 11:54:30.809 [conn21] connected connection!
m30999| Fri Feb 22 11:54:30.809 [conn21] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:30.809 [conn21] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:30.809 [conn21] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:30.809 [conn21] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11b3650 2
m30999| Fri Feb 22 11:54:30.810 [conn21] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:30.810 [conn21] dbclient_rs _selectNode found local secondary for queries: 2, ping time: 0
m30999| Fri Feb 22 11:54:30.810 [conn21] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:30.810 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:30.810 [conn21] connected connection!
m31102| Fri Feb 22 11:54:30.810 [initandlisten] connection accepted from 165.225.128.186:40739 #19 (18 connections now open)
m30999| Fri Feb 22 11:54:30.810 [conn21] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.810 [conn21] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:30.810 [conn21] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:30.810 [conn21] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "set" : "test-rs0", "date" : ISODate("2013-02-22T11:54:30Z"), "myState" : 1, "members" : [
{ "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31100", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 39, "optime" : { "t" : 1361534070000, "i" : 3 }, "optimeDate" : ISODate("2013-02-22T11:54:30Z"), "self" : true }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31101", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 29, "optime" : { "t" : 1361534032000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T11:53:52Z"), "lastHeartbeat" : ISODate("2013-02-22T11:54:29Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T11:54:29Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31102", "health" : 1, "state" : 3, "stateStr" : "RECOVERING", "uptime" : 27, "optime" : { "t" : 1361534032000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T11:53:52Z"), "lastHeartbeat" : ISODate("2013-02-22T11:54:29Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T11:54:30Z"), "pingMs" : 0, "lastHeartbeatMessage" : "syncing to: bs-smartos-x86-64-1.10gen.cc:31100" } ], "ok" : 1 } config before: { "_id" : "test-rs0", "version" : 1, "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101" }, { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31102" } ] } m31100| Fri Feb 22 11:54:30.830 [conn1] replSet replSetReconfig config object parses ok, 3 members specified m31100| Fri Feb 22 11:54:30.831 [conn1] replSet replSetReconfig [2] m31100| Fri Feb 22 11:54:30.831 [conn1] replSet info saving a newer config version to local.system.replset m31100| Fri Feb 22 11:54:30.842 [conn1] replSet saveConfigLocally done m31100| Fri Feb 22 11:54:30.843 [conn1] replSet relinquishing primary state m31100| Fri Feb 22 11:54:30.843 [conn1] replSet SECONDARY m31100| Fri Feb 22 11:54:30.843 [conn1] replSet closing client sockets after relinquishing primary m31100| Fri Feb 22 11:54:30.843 [conn21] end connection 
165.225.128.186:45498 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn22] end connection 165.225.128.186:53159 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn23] end connection 165.225.128.186:45104 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn24] end connection 165.225.128.186:59059 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn25] end connection 165.225.128.186:40571 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn26] end connection 165.225.128.186:45704 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn28] end connection 165.225.128.186:54464 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn27] end connection 165.225.128.186:44415 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn29] end connection 165.225.128.186:40289 (33 connections now open) m31102| Fri Feb 22 11:54:30.843 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:30.843 [conn15] end connection 165.225.128.186:60377 (33 connections now open) m31101| Fri Feb 22 11:54:30.843 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:30.843 [conn17] end connection 165.225.128.186:65043 (33 connections now open) m31102| Fri Feb 22 11:54:30.843 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 11:54:30.843 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:30.843 [conn30] end connection 165.225.128.186:63652 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn18] end connection 165.225.128.186:50805 (33 connections now open) m31100| Fri Feb 22 
11:54:30.843 [conn19] end connection 165.225.128.186:58368 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn32] end connection 165.225.128.186:61892 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn14] end connection 165.225.128.186:60350 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn33] end connection 165.225.128.186:35033 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn31] end connection 165.225.128.186:59165 (33 connections now open) m31100| Fri Feb 22 11:54:30.843 [conn34] end connection 165.225.128.186:41021 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn35] end connection 165.225.128.186:44937 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn37] end connection 165.225.128.186:35652 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn36] end connection 165.225.128.186:35770 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn38] end connection 165.225.128.186:56452 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn39] end connection 165.225.128.186:64569 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn40] end connection 165.225.128.186:42535 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn41] end connection 165.225.128.186:45186 (33 connections now open) m31100| Fri Feb 22 11:54:30.844 [conn8] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:44422] m31100| Fri Feb 22 11:54:30.844 [conn7] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:42898] m31100| Fri Feb 22 11:54:30.845 [conn1] replSet PRIMARY m31100| Fri Feb 22 11:54:30.845 [conn12] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:38654] m31102| Fri Feb 22 11:54:30.845 [conn2] end connection 165.225.128.186:64148 (17 connections now open) m31100| Fri Feb 22 11:54:30.845 
[conn1] replSet replSetReconfig new config saved locally m31100| Fri Feb 22 11:54:30.846 [conn1] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:52856] m31100| Fri Feb 22 11:54:30.846 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31100| Fri Feb 22 11:54:30.846 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY m31102| Fri Feb 22 11:54:30.846 [initandlisten] connection accepted from 165.225.128.186:36976 #20 (18 connections now open) m31100| Fri Feb 22 11:54:30.846 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31100| Fri Feb 22 11:54:30.846 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY Fri Feb 22 11:54:30.849 DBClientCursor::init call() failed Fri Feb 22 11:54:30.850 trying reconnect to 127.0.0.1:31100 Fri Feb 22 11:54:30.850 reconnect 127.0.0.1:31100 ok m31100| Fri Feb 22 11:54:30.850 [initandlisten] connection accepted from 127.0.0.1:39711 #42 (5 connections now open) reconnected to server after rs command (which is normal) config after: { "_id" : "test-rs0", "version" : 2, "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101", "priority" : 0, "hidden" : true }, { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31102" } ] } { "hosts" : [ { "addr" : "bs-smartos-x86-64-1.10gen.cc:31100", "ok" : true, "ismaster" : true, "hidden" : false, "secondary" : false, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31101", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31102", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 } ], "master" : 0, "nextSlave" : 0 } m30999| Fri Feb 22 11:54:30.858 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] 
SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [0] server [165.225.128.186:31100] m30999| Fri Feb 22 11:54:30.858 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] DBClientCursor::init call() failed m30999| Fri Feb 22 11:54:30.858 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('51275c764b746466f5f93934') } m30999| Fri Feb 22 11:54:30.858 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] Detected bad connection created at 1361534070543157 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:30.858 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('51275c764b746466f5f93934') } m29000| Fri Feb 22 11:54:30.865 [conn7] end connection 165.225.128.186:60276 (6 connections now open) m31102| Fri Feb 22 11:54:31.838 [conn3] end connection 165.225.128.186:40949 (17 connections now open) m31102| Fri Feb 22 11:54:31.839 [initandlisten] connection accepted from 165.225.128.186:35601 #21 (18 connections now open) m31101| Fri Feb 22 11:54:31.839 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY m31101| Fri Feb 22 11:54:31.839 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31101| Fri Feb 22 11:54:31.839 [rsMgr] replSet info saving a newer config version to local.system.replset m31102| Fri Feb 22 11:54:31.843 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:31.844 [initandlisten] connection accepted from 165.225.128.186:43243 #43 (6 connections now open) m31101| Fri Feb 22 11:54:31.857 [rsMgr] replSet saveConfigLocally done m31101| Fri Feb 22 11:54:31.857 [rsMgr] replSet replSetReconfig new 
config saved locally m31102| Fri Feb 22 11:54:31.857 [conn21] end connection 165.225.128.186:35601 (17 connections now open) m31101| Fri Feb 22 11:54:31.857 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31101| Fri Feb 22 11:54:31.857 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31102| Fri Feb 22 11:54:31.857 [initandlisten] connection accepted from 165.225.128.186:62859 #22 (18 connections now open) m31101| Fri Feb 22 11:54:31.858 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31101| Fri Feb 22 11:54:31.858 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY m30999| Fri Feb 22 11:54:31.859 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:31.859 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:31.859 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] connected connection! 
m31100| Fri Feb 22 11:54:31.859 [initandlisten] connection accepted from 165.225.128.186:64550 #44 (7 connections now open) m31102| Fri Feb 22 11:54:32.052 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31102| Fri Feb 22 11:54:32.052 [rsMgr] replSet info saving a newer config version to local.system.replset m31102| Fri Feb 22 11:54:32.076 [rsMgr] replSet saveConfigLocally done m31102| Fri Feb 22 11:54:32.076 [rsMgr] replSet replSetReconfig new config saved locally m31102| Fri Feb 22 11:54:32.076 [rsMgr] replset msgReceivedNewConfig version: version: 2 m31101| Fri Feb 22 11:54:32.076 [conn4] end connection 165.225.128.186:52874 (17 connections now open) m31102| Fri Feb 22 11:54:32.077 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 2 2 m31102| Fri Feb 22 11:54:32.077 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31102| Fri Feb 22 11:54:32.077 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31101| Fri Feb 22 11:54:32.077 [initandlisten] connection accepted from 165.225.128.186:49790 #21 (18 connections now open) m31102| Fri Feb 22 11:54:32.077 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31102| Fri Feb 22 11:54:32.077 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY { "hosts" : [ { "addr" : "bs-smartos-x86-64-1.10gen.cc:31100", "ok" : true, "ismaster" : true, "hidden" : false, "secondary" : false, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31101", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31102", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 } ], "master" : 0, "nextSlave" : 0 } m31100| Fri Feb 22 11:54:35.895 [conn11] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:38095] m30999| 
Fri Feb 22 11:54:36.376 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 11:54:36.376 [Balancer] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 m30999| Fri Feb 22 11:54:36.376 [Balancer] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] m30999| Fri Feb 22 11:54:36.376 [Balancer] DBClientCursor::init call() failed m30999| Fri Feb 22 11:54:36.376 [Balancer] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { features: 1 } m30999| Fri Feb 22 11:54:36.377 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool m30999| Fri Feb 22 11:54:36.377 [Balancer] caught exception while doing balance: DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { features: 1 } m30999| Fri Feb 22 11:54:36.377 [Balancer] *** End of balancing round m29000| Fri Feb 22 11:54:36.377 [conn3] end connection 165.225.128.186:37658 (5 connections now open) Fri Feb 22 11:54:40.118 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 Fri Feb 22 11:54:40.118 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] Fri Feb 22 11:54:40.118 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:54:40.118 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 ok m31100| Fri Feb 22 11:54:40.119 [initandlisten] connection accepted from 165.225.128.186:39672 #45 (7 connections now open) Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102" } from 
test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] erasing host { addr: "bs-smartos-x86-64-1.10gen.cc:31101", isMaster: false, secondary: true, hidden: false, ok: true } from replica set test-rs0 Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:54:40.119 [ReplicaSetMonitorWatcher] Detected bad connection created at 1361534070116156 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 11:54:40.120 [ReplicaSetMonitorWatcher] Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:40.120 [initandlisten] connection accepted from 165.225.128.186:39844 #46 (8 connections now open) m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] checking replica set: test-rs0 m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { ismaster: 1 } m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 
ns: admin.$cmd query: { ismaster: 1 } m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] _check : test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.538 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:40.538 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 ok m31100| Fri Feb 22 11:54:40.538 [initandlisten] connection accepted from 165.225.128.186:61327 #47 (9 connections now open) m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534080539), ok: 1.0 } m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102" } from test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] erasing host { addr: "bs-smartos-x86-64-1.10gen.cc:31101", isMaster: false, secondary: true, hidden: false, ok: true } from replica set test-rs0 m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] m31101| Fri Feb 22 11:54:40.539 [conn8] end connection 165.225.128.186:47791 (17 connections now open) m30999| 
Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { replSetGetStatus: 1 } m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] Detected bad connection created at 1361534070535351 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { replSetGetStatus: 1 } m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534080539), ok: 1.0 } m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.539 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:40.539 [initandlisten] connection accepted from 165.225.128.186:39904 #48 (10 connections now open) m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] connected connection! 
m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] dbclient_rs _checkStatus couldn't _find(bs-smartos-x86-64-1.10gen.cc:31101) m30999| Fri Feb 22 11:54:40.539 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.540 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534080539), ok: 1.0 } m30999| Fri Feb 22 11:54:40.540 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.540 [ReplicaSetMonitorWatcher] dbclient_rs _checkStatus couldn't _find(bs-smartos-x86-64-1.10gen.cc:31101) m30999| Fri Feb 22 11:54:40.540 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 11:54:40.843 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 11:54:40.844 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:40.844 [initandlisten] connection accepted from 165.225.128.186:54985 #49 (11 connections now open) m31100| Fri Feb 22 11:54:40.844 [initandlisten] connection accepted from 165.225.128.186:41677 #50 (12 connections now open) m31101| Fri Feb 22 11:54:40.845 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:40.846 [initandlisten] connection accepted from 165.225.128.186:59213 #51 (13 connections now open) { "hosts" : [ { "addr" : 
"bs-smartos-x86-64-1.10gen.cc:31100", "ok" : true, "ismaster" : true, "hidden" : false, "secondary" : false, "pingTimeMillis" : 0 }, { "addr" : "bs-smartos-x86-64-1.10gen.cc:31102", "ok" : true, "ismaster" : false, "hidden" : false, "secondary" : true, "pingTimeMillis" : 0 } ], "master" : 0, "nextSlave" : 0 } m30999| Fri Feb 22 11:54:40.855 [mongosMain] connection accepted from 165.225.128.186:35337 #22 (22 connections now open) m30999| Fri Feb 22 11:54:40.856 [conn22] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:40.856 [conn22] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:40.856 [conn22] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.856 [conn22] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.856 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:40.861 [conn22] connected connection! 
m30999| Fri Feb 22 11:54:40.861 [conn22] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:40.861 [initandlisten] connection accepted from 165.225.128.186:55532 #52 (14 connections now open) m30999| Fri Feb 22 11:54:40.861 [conn22] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:40.862 [conn22] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:40.862 [conn22] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e1e10 2 m30999| Fri Feb 22 11:54:40.862 [conn22] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" } m30999| Fri Feb 22 11:54:40.863 [conn22] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:40.863 [conn22] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true, shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e1e10 2 m31100| Fri Feb 22 11:54:40.863 [conn52] no current chunk manager found for this shard, will initialize m31100| Fri Feb 22 11:54:40.863 [conn52] Socket say send() errno:9 Bad file number 165.225.128.186:29000 m31100| Fri Feb 22 11:54:40.863 [conn52] Detected bad connection created at 1361534070557030 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 11:54:40.863 [conn22] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), errmsg: "exception: socket exception [SEND_ERROR] for 165.225.128.186:29000", code: 9001, ok: 0.0 } m30999| Fri Feb 22 11:54:40.863 [conn22] going to retry checkShardVersion host: bs-smartos-x86-64-1.10gen.cc:31100 { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), errmsg: "exception: socket exception [SEND_ERROR] for 165.225.128.186:29000", code: 9001, ok: 0.0 } m30999| Fri Feb 22 11:54:40.883 [conn22] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:40.883 [conn22] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true, shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e1e10 2 m31100| Fri Feb 22 11:54:40.883 [conn52] no current chunk manager found for this shard, will initialize m29000| Fri Feb 22 11:54:40.884 [initandlisten] connection accepted from 165.225.128.186:45468 #8 (6 connections now open) m30999| Fri Feb 22 11:54:40.885 [conn22] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:40.885 [conn22] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:40.885 [conn22] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.885 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:40.885 [conn22] connected connection! 
m31102| Fri Feb 22 11:54:40.885 [initandlisten] connection accepted from 165.225.128.186:43466 #23 (19 connections now open)
m30999| Fri Feb 22 11:54:40.885 [conn22] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.885 [conn22] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.885 [conn22] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.886 [conn22] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.886 [mongosMain] connection accepted from 165.225.128.186:49881 #23 (23 connections now open)
m30999| Fri Feb 22 11:54:40.886 [conn23] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.886 [conn23] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.886 [conn23] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.886 [conn23] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.887 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.887 [initandlisten] connection accepted from 165.225.128.186:48053 #53 (15 connections now open)
m30999| Fri Feb 22 11:54:40.887 [conn23] connected connection!
m30999| Fri Feb 22 11:54:40.887 [conn23] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.887 [conn23] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.887 [conn23] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.887 [conn23] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e3580 2
m30999| Fri Feb 22 11:54:40.887 [conn23] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.887 [conn23] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.887 [conn23] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.887 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.888 [conn23] connected connection!
m31102| Fri Feb 22 11:54:40.888 [initandlisten] connection accepted from 165.225.128.186:39698 #24 (20 connections now open)
m30999| Fri Feb 22 11:54:40.888 [conn23] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.888 [conn23] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.888 [conn23] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.888 [conn23] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.888 [mongosMain] connection accepted from 165.225.128.186:59416 #24 (24 connections now open)
m30999| Fri Feb 22 11:54:40.889 [conn24] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.889 [conn24] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.889 [conn24] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.889 [conn24] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.889 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.889 [initandlisten] connection accepted from 165.225.128.186:57939 #54 (16 connections now open)
m30999| Fri Feb 22 11:54:40.889 [conn24] connected connection!
m30999| Fri Feb 22 11:54:40.889 [conn24] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.889 [conn24] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.889 [conn24] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.889 [conn24] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e4c50 2
m30999| Fri Feb 22 11:54:40.890 [conn24] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.890 [conn24] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.890 [conn24] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.890 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.890 [conn24] connected connection!
m31102| Fri Feb 22 11:54:40.890 [initandlisten] connection accepted from 165.225.128.186:47539 #25 (21 connections now open)
m30999| Fri Feb 22 11:54:40.890 [conn24] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.890 [conn24] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.890 [conn24] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.890 [conn24] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.891 [mongosMain] connection accepted from 165.225.128.186:54353 #25 (25 connections now open)
m30999| Fri Feb 22 11:54:40.891 [conn25] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.891 [conn25] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.891 [conn25] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.891 [conn25] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.892 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.892 [initandlisten] connection accepted from 165.225.128.186:62490 #55 (17 connections now open)
m30999| Fri Feb 22 11:54:40.892 [conn25] connected connection!
m30999| Fri Feb 22 11:54:40.892 [conn25] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.892 [conn25] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.892 [conn25] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.892 [conn25] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e63d0 2
m30999| Fri Feb 22 11:54:40.892 [conn25] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.892 [conn25] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.892 [conn25] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.892 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.892 [conn25] connected connection!
m31102| Fri Feb 22 11:54:40.892 [initandlisten] connection accepted from 165.225.128.186:44246 #26 (22 connections now open)
m30999| Fri Feb 22 11:54:40.892 [conn25] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.892 [conn25] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.892 [conn25] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.893 [conn25] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.893 [mongosMain] connection accepted from 165.225.128.186:56468 #26 (26 connections now open)
m30999| Fri Feb 22 11:54:40.893 [conn26] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.893 [conn26] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.893 [conn26] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.894 [conn26] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.894 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.894 [initandlisten] connection accepted from 165.225.128.186:63771 #56 (18 connections now open)
m30999| Fri Feb 22 11:54:40.894 [conn26] connected connection!
m30999| Fri Feb 22 11:54:40.894 [conn26] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.894 [conn26] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.894 [conn26] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.894 [conn26] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11e7b70 2
m30999| Fri Feb 22 11:54:40.894 [conn26] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.894 [conn26] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.894 [conn26] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.895 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.895 [conn26] connected connection!
m31102| Fri Feb 22 11:54:40.895 [initandlisten] connection accepted from 165.225.128.186:63466 #27 (23 connections now open)
m30999| Fri Feb 22 11:54:40.895 [conn26] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.895 [conn26] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.895 [conn26] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.895 [conn26] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.895 [mongosMain] connection accepted from 165.225.128.186:42348 #27 (27 connections now open)
m30999| Fri Feb 22 11:54:40.896 [conn27] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.896 [conn27] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.896 [conn27] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.896 [conn27] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.896 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.896 [initandlisten] connection accepted from 165.225.128.186:56540 #57 (19 connections now open)
m30999| Fri Feb 22 11:54:40.896 [conn27] connected connection!
m30999| Fri Feb 22 11:54:40.896 [conn27] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.896 [conn27] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.897 [conn27] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.897 [conn27] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11eb990 2
m30999| Fri Feb 22 11:54:40.897 [conn27] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.897 [conn27] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.897 [conn27] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.897 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:40.897 [initandlisten] connection accepted from 165.225.128.186:37355 #28 (24 connections now open)
m30999| Fri Feb 22 11:54:40.897 [conn27] connected connection!
m30999| Fri Feb 22 11:54:40.897 [conn27] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.897 [conn27] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.897 [conn27] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.898 [conn27] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.898 [mongosMain] connection accepted from 165.225.128.186:38029 #28 (28 connections now open)
m30999| Fri Feb 22 11:54:40.898 [conn28] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.898 [conn28] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.898 [conn28] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.898 [conn28] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.899 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.899 [initandlisten] connection accepted from 165.225.128.186:54565 #58 (20 connections now open)
m30999| Fri Feb 22 11:54:40.899 [conn28] connected connection!
m30999| Fri Feb 22 11:54:40.899 [conn28] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.899 [conn28] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.899 [conn28] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.899 [conn28] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11ed1a0 2
m30999| Fri Feb 22 11:54:40.899 [conn28] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.899 [conn28] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.899 [conn28] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.900 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.900 [conn28] connected connection!
m31102| Fri Feb 22 11:54:40.900 [initandlisten] connection accepted from 165.225.128.186:41449 #29 (25 connections now open)
m30999| Fri Feb 22 11:54:40.900 [conn28] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.900 [conn28] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.900 [conn28] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.900 [conn28] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.900 [mongosMain] connection accepted from 165.225.128.186:49381 #29 (29 connections now open)
m30999| Fri Feb 22 11:54:40.901 [conn29] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.901 [conn29] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.901 [conn29] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.901 [conn29] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.901 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.901 [initandlisten] connection accepted from 165.225.128.186:44673 #59 (21 connections now open)
m30999| Fri Feb 22 11:54:40.901 [conn29] connected connection!
m30999| Fri Feb 22 11:54:40.901 [conn29] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.901 [conn29] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.901 [conn29] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.901 [conn29] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11ee790 2
m30999| Fri Feb 22 11:54:40.902 [conn29] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.902 [conn29] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.902 [conn29] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.902 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:40.902 [initandlisten] connection accepted from 165.225.128.186:57662 #30 (26 connections now open)
m30999| Fri Feb 22 11:54:40.902 [conn29] connected connection!
m30999| Fri Feb 22 11:54:40.902 [conn29] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.902 [conn29] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.902 [conn29] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.902 [conn29] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.903 [mongosMain] connection accepted from 165.225.128.186:61744 #30 (30 connections now open)
m30999| Fri Feb 22 11:54:40.903 [conn30] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.903 [conn30] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.903 [conn30] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.903 [conn30] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.903 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.904 [initandlisten] connection accepted from 165.225.128.186:57601 #60 (22 connections now open)
m30999| Fri Feb 22 11:54:40.904 [conn30] connected connection!
m30999| Fri Feb 22 11:54:40.904 [conn30] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.904 [conn30] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.904 [conn30] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.904 [conn30] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11f3030 2
m30999| Fri Feb 22 11:54:40.904 [conn30] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.904 [conn30] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.904 [conn30] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.904 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:40.904 [initandlisten] connection accepted from 165.225.128.186:60469 #31 (27 connections now open)
m30999| Fri Feb 22 11:54:40.904 [conn30] connected connection!
m30999| Fri Feb 22 11:54:40.904 [conn30] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.904 [conn30] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.905 [conn30] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.905 [conn30] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.905 [mongosMain] connection accepted from 165.225.128.186:52057 #31 (31 connections now open)
m30999| Fri Feb 22 11:54:40.905 [conn31] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.905 [conn31] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.905 [conn31] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.906 [conn31] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.906 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.906 [initandlisten] connection accepted from 165.225.128.186:56917 #61 (23 connections now open)
m30999| Fri Feb 22 11:54:40.906 [conn31] connected connection!
m30999| Fri Feb 22 11:54:40.906 [conn31] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.906 [conn31] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.906 [conn31] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.906 [conn31] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11f4500 2
m30999| Fri Feb 22 11:54:40.906 [conn31] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.906 [conn31] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.906 [conn31] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.907 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:40.907 [initandlisten] connection accepted from 165.225.128.186:40834 #32 (28 connections now open)
m30999| Fri Feb 22 11:54:40.907 [conn31] connected connection!
m30999| Fri Feb 22 11:54:40.907 [conn31] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.907 [conn31] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.907 [conn31] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.907 [conn31] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.908 [mongosMain] connection accepted from 165.225.128.186:56451 #32 (32 connections now open)
m30999| Fri Feb 22 11:54:40.908 [conn32] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.908 [conn32] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.908 [conn32] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.908 [conn32] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.908 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.909 [initandlisten] connection accepted from 165.225.128.186:55583 #62 (24 connections now open)
m30999| Fri Feb 22 11:54:40.909 [conn32] connected connection!
m30999| Fri Feb 22 11:54:40.909 [conn32] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.909 [conn32] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.909 [conn32] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.909 [conn32] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11f5e80 2
m30999| Fri Feb 22 11:54:40.909 [conn32] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.909 [conn32] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.909 [conn32] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.909 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:40.909 [initandlisten] connection accepted from 165.225.128.186:57917 #33 (29 connections now open)
m30999| Fri Feb 22 11:54:40.909 [conn32] connected connection!
m30999| Fri Feb 22 11:54:40.909 [conn32] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.909 [conn32] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.909 [conn32] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.910 [conn32] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.910 [mongosMain] connection accepted from 165.225.128.186:64525 #33 (33 connections now open)
m30999| Fri Feb 22 11:54:40.910 [conn33] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.910 [conn33] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.910 [conn33] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.911 [conn33] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.911 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.911 [initandlisten] connection accepted from 165.225.128.186:53357 #63 (25 connections now open)
m30999| Fri Feb 22 11:54:40.911 [conn33] connected connection!
m30999| Fri Feb 22 11:54:40.911 [conn33] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.911 [conn33] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.911 [conn33] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.911 [conn33] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11f7490 2
m30999| Fri Feb 22 11:54:40.911 [conn33] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.911 [conn33] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.911 [conn33] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.912 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.912 [conn33] connected connection!
m31102| Fri Feb 22 11:54:40.912 [initandlisten] connection accepted from 165.225.128.186:59984 #34 (30 connections now open)
m30999| Fri Feb 22 11:54:40.912 [conn33] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.912 [conn33] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.912 [conn33] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.912 [conn33] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.912 [mongosMain] connection accepted from 165.225.128.186:38247 #34 (34 connections now open)
m30999| Fri Feb 22 11:54:40.913 [conn34] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.913 [conn34] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.913 [conn34] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.913 [conn34] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.913 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.913 [initandlisten] connection accepted from 165.225.128.186:57497 #64 (26 connections now open)
m30999| Fri Feb 22 11:54:40.913 [conn34] connected connection!
m30999| Fri Feb 22 11:54:40.913 [conn34] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.914 [conn34] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.914 [conn34] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.914 [conn34] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11fa4b0 2
m30999| Fri Feb 22 11:54:40.914 [conn34] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.914 [conn34] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.914 [conn34] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.914 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.914 [conn34] connected connection!
m31102| Fri Feb 22 11:54:40.914 [initandlisten] connection accepted from 165.225.128.186:47386 #35 (31 connections now open)
m30999| Fri Feb 22 11:54:40.914 [conn34] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.914 [conn34] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.914 [conn34] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.915 [conn34] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.915 [mongosMain] connection accepted from 165.225.128.186:47693 #35 (35 connections now open)
m30999| Fri Feb 22 11:54:40.915 [conn35] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.915 [conn35] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.915 [conn35] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.915 [conn35] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.916 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.916 [initandlisten] connection accepted from 165.225.128.186:33049 #65 (27 connections now open)
m30999| Fri Feb 22 11:54:40.916 [conn35] connected connection!
m30999| Fri Feb 22 11:54:40.916 [conn35] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.916 [conn35] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.916 [conn35] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.916 [conn35] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11fbbf0 2
m30999| Fri Feb 22 11:54:40.916 [conn35] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.916 [conn35] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.916 [conn35] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.916 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.917 [conn35] connected connection!
m31102| Fri Feb 22 11:54:40.917 [initandlisten] connection accepted from 165.225.128.186:57948 #36 (32 connections now open)
m30999| Fri Feb 22 11:54:40.917 [conn35] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.917 [conn35] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.917 [conn35] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.917 [conn35] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.917 [mongosMain] connection accepted from 165.225.128.186:65039 #36 (36 connections now open)
m30999| Fri Feb 22 11:54:40.918 [conn36] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.918 [conn36] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.918 [conn36] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.918 [conn36] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.918 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.918 [initandlisten] connection accepted from 165.225.128.186:38995 #66 (28 connections now open)
m30999| Fri Feb 22 11:54:40.918 [conn36] connected connection!
m30999| Fri Feb 22 11:54:40.918 [conn36] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.918 [conn36] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.918 [conn36] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.918 [conn36] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11ffb10 2
m30999| Fri Feb 22 11:54:40.919 [conn36] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.919 [conn36] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.919 [conn36] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.919 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 11:54:40.919 [initandlisten] connection accepted from 165.225.128.186:56313 #37 (33 connections now open)
m30999| Fri Feb 22 11:54:40.919 [conn36] connected connection!
m30999| Fri Feb 22 11:54:40.919 [conn36] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.919 [conn36] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.919 [conn36] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.919 [conn36] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.920 [mongosMain] connection accepted from 165.225.128.186:53282 #37 (37 connections now open)
m30999| Fri Feb 22 11:54:40.920 [conn37] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.920 [conn37] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.920 [conn37] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.920 [conn37] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.920 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.921 [initandlisten] connection accepted from 165.225.128.186:34998 #67 (29 connections now open)
m30999| Fri Feb 22 11:54:40.921 [conn37] connected connection!
m30999| Fri Feb 22 11:54:40.921 [conn37] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.921 [conn37] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.921 [conn37] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.921 [conn37] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x12011b0 2
m30999| Fri Feb 22 11:54:40.921 [conn37] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.921 [conn37] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.921 [conn37] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.921 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.921 [conn37] connected connection!
m31102| Fri Feb 22 11:54:40.921 [initandlisten] connection accepted from 165.225.128.186:42845 #38 (34 connections now open)
m30999| Fri Feb 22 11:54:40.921 [conn37] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.921 [conn37] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.921 [conn37] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.922 [conn37] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.922 [mongosMain] connection accepted from 165.225.128.186:47681 #38 (38 connections now open)
m30999| Fri Feb 22 11:54:40.922 [conn38] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.922 [conn38] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.922 [conn38] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.922 [conn38] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.923 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.923 [initandlisten] connection accepted from 165.225.128.186:38895 #68 (30 connections now open)
m30999| Fri Feb 22 11:54:40.923 [conn38] connected connection!
m30999| Fri Feb 22 11:54:40.923 [conn38] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.923 [conn38] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true }
m30999| Fri Feb 22 11:54:40.923 [conn38] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0
m30999| Fri Feb 22 11:54:40.923 [conn38] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1202950 2
m30999| Fri Feb 22 11:54:40.923 [conn38] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 11:54:40.923 [conn38] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0
m30999| Fri Feb 22 11:54:40.923 [conn38] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:54:40.923 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 11:54:40.924 [conn38] connected connection!
m31102| Fri Feb 22 11:54:40.924 [initandlisten] connection accepted from 165.225.128.186:49756 #39 (35 connections now open)
m30999| Fri Feb 22 11:54:40.924 [conn38] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.924 [conn38] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 11:54:40.924 [conn38] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.924 [conn38] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 11:54:40.924 [mongosMain] connection accepted from 165.225.128.186:58236 #39 (39 connections now open)
m30999| Fri Feb 22 11:54:40.925 [conn39] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 11:54:40.925 [conn39] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936]
m30999| Fri Feb 22 11:54:40.925 [conn39] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 11:54:40.925 [conn39] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:54:40.925 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 11:54:40.925 [initandlisten] connection accepted from 165.225.128.186:42341 #69 (31 connections now open)
m30999| Fri Feb 22 11:54:40.925 [conn39] connected connection!
m30999| Fri Feb 22 11:54:40.925 [conn39] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.925 [conn39] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:40.925 [conn39] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:40.925 [conn39] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1203f60 2 m30999| Fri Feb 22 11:54:40.926 [conn39] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:40.926 [conn39] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:40.926 [conn39] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.926 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:40.926 [conn39] connected connection! 
m31102| Fri Feb 22 11:54:40.926 [initandlisten] connection accepted from 165.225.128.186:59630 #40 (36 connections now open) m30999| Fri Feb 22 11:54:40.926 [conn39] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.926 [conn39] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:40.926 [conn39] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.926 [conn39] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:40.927 [mongosMain] connection accepted from 165.225.128.186:62172 #40 (40 connections now open) m30999| Fri Feb 22 11:54:40.927 [conn40] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:40.927 [conn40] [pcursor] 
initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:40.927 [conn40] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.927 [conn40] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.927 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:40.928 [initandlisten] connection accepted from 165.225.128.186:45883 #70 (32 connections now open) m30999| Fri Feb 22 11:54:40.928 [conn40] connected connection! m30999| Fri Feb 22 11:54:40.928 [conn40] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.928 [conn40] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:40.928 [conn40] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:40.928 [conn40] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1205820 2 m30999| Fri Feb 22 11:54:40.928 [conn40] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:40.928 [conn40] dbclient_rs _selectNode found local 
secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:40.928 [conn40] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.928 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:40.928 [conn40] connected connection! m31102| Fri Feb 22 11:54:40.928 [initandlisten] connection accepted from 165.225.128.186:60857 #41 (37 connections now open) m30999| Fri Feb 22 11:54:40.928 [conn40] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.928 [conn40] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:40.928 [conn40] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.929 [conn40] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:40.929 [mongosMain] connection accepted from 165.225.128.186:50296 #41 (41 
connections now open) m30999| Fri Feb 22 11:54:40.929 [conn41] [pcursor] creating pcursor over QSpec { ns: "test.foo", n2skip: 0, n2return: -1, options: 4, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 11:54:40.929 [conn41] [pcursor] initializing over 1 shards required by [test.foo @ 1|0||51275c764b746466f5f93936] m30999| Fri Feb 22 11:54:40.929 [conn41] [pcursor] initializing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.929 [conn41] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.930 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 11:54:40.930 [initandlisten] connection accepted from 165.225.128.186:49315 #71 (33 connections now open) m30999| Fri Feb 22 11:54:40.930 [conn41] connected connection! 
m30999| Fri Feb 22 11:54:40.930 [conn41] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 11:54:40.930 [conn41] initial sharding settings : { setShardVersion: "", init: true, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", serverID: ObjectId('51275c764b746466f5f93934'), authoritative: true } m30999| Fri Feb 22 11:54:40.930 [conn41] have to set shard version for conn: bs-smartos-x86-64-1.10gen.cc:31100 ns:test.foo my last seq: 0 current: 2 version: 1|0||51275c764b746466f5f93936 manager: 0x118b8e0 m30999| Fri Feb 22 11:54:40.930 [conn41] setShardVersion test-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('51275c764b746466f5f93936'), serverID: ObjectId('51275c764b746466f5f93934'), shard: "test-rs0", shardHost: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1208010 2 m30999| Fri Feb 22 11:54:40.930 [conn41] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 11:54:40.930 [conn41] dbclient_rs _selectNode found local secondary for queries: 1, ping time: 0 m30999| Fri Feb 22 11:54:40.930 [conn41] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:54:40.931 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 11:54:40.931 [conn41] connected connection! 
m31102| Fri Feb 22 11:54:40.931 [initandlisten] connection accepted from 165.225.128.186:32932 #42 (38 connections now open) m30999| Fri Feb 22 11:54:40.931 [conn41] [pcursor] initialized query (lazily) on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.931 [conn41] [pcursor] finishing over 1 shards m30999| Fri Feb 22 11:54:40.931 [conn41] [pcursor] finishing on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31102", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 11:54:40.931 [conn41] [pcursor] finished on shard test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, current connection state is { state: { conn: "(done)", vinfo: "test.foo @ 1|0||51275c764b746466f5f93936", cursor: { _id: ObjectId('51275c76d057575af65dc029'), a: 123.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 11:54:40.935 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m31100| Fri Feb 22 11:54:40.936 [conn47] end connection 165.225.128.186:61327 (32 connections now open) m29000| Fri Feb 22 11:54:40.936 [conn4] end connection 165.225.128.186:49101 (5 connections now open) m29000| Fri Feb 22 11:54:40.936 [conn5] end connection 
165.225.128.186:60469 (5 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn9] end connection 165.225.128.186:48384 (16 connections now open) m31100| Fri Feb 22 11:54:40.936 [conn52] end connection 165.225.128.186:55532 (31 connections now open) m31100| Fri Feb 22 11:54:40.936 [conn48] end connection 165.225.128.186:39904 (31 connections now open) m29000| Fri Feb 22 11:54:40.936 [conn6] end connection 165.225.128.186:49371 (3 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn7] end connection 165.225.128.186:47904 (37 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn8] end connection 165.225.128.186:41409 (37 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn23] end connection 165.225.128.186:43466 (37 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn10] end connection 165.225.128.186:36716 (34 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn11] end connection 165.225.128.186:46951 (34 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn11] end connection 165.225.128.186:61153 (15 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn12] end connection 165.225.128.186:42428 (15 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn13] end connection 165.225.128.186:37405 (13 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn14] end connection 165.225.128.186:45422 (13 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn12] end connection 165.225.128.186:55504 (33 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn13] end connection 165.225.128.186:49441 (32 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn15] end connection 165.225.128.186:42951 (11 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn14] end connection 165.225.128.186:40821 (32 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn16] end connection 165.225.128.186:37744 (10 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn17] end connection 165.225.128.186:64880 (10 
connections now open) m31102| Fri Feb 22 11:54:40.936 [conn15] end connection 165.225.128.186:60400 (29 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn16] end connection 165.225.128.186:56986 (29 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn18] end connection 165.225.128.186:56671 (8 connections now open) m31102| Fri Feb 22 11:54:40.936 [conn17] end connection 165.225.128.186:59035 (29 connections now open) m31101| Fri Feb 22 11:54:40.936 [conn19] end connection 165.225.128.186:48062 (8 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn18] end connection 165.225.128.186:36414 (27 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn53] end connection 165.225.128.186:48053 (29 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn54] end connection 165.225.128.186:57939 (29 connections now open) m31101| Fri Feb 22 11:54:40.937 [conn20] end connection 165.225.128.186:56793 (6 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn19] end connection 165.225.128.186:40739 (26 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn55] end connection 165.225.128.186:62490 (29 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn56] end connection 165.225.128.186:63771 (28 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn25] end connection 165.225.128.186:47539 (25 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn24] end connection 165.225.128.186:39698 (25 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn26] end connection 165.225.128.186:44246 (25 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn57] end connection 165.225.128.186:56540 (27 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn27] end connection 165.225.128.186:63466 (23 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn58] end connection 165.225.128.186:54565 (25 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn59] end connection 165.225.128.186:44673 (24 connections now open) 
m31100| Fri Feb 22 11:54:40.937 [conn60] end connection 165.225.128.186:57601 (24 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn61] end connection 165.225.128.186:56917 (24 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn33] end connection 165.225.128.186:57917 (20 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn32] end connection 165.225.128.186:40834 (20 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn31] end connection 165.225.128.186:60469 (20 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn29] end connection 165.225.128.186:41449 (20 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn34] end connection 165.225.128.186:59984 (20 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn38] end connection 165.225.128.186:42845 (20 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn63] end connection 165.225.128.186:53357 (21 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn64] end connection 165.225.128.186:57497 (21 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn70] end connection 165.225.128.186:45883 (21 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn66] end connection 165.225.128.186:38995 (21 connections now open) m31100| Fri Feb 22 11:54:40.937 [conn69] end connection 165.225.128.186:42341 (21 connections now open) m31100| Fri Feb 22 11:54:40.938 [conn71] end connection 165.225.128.186:49315 (19 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn35] end connection 165.225.128.186:47386 (19 connections now open) m31102| Fri Feb 22 11:54:40.937 [conn39] end connection 165.225.128.186:49756 (18 connections now open) m31102| Fri Feb 22 11:54:40.938 [conn40] end connection 165.225.128.186:59630 (14 connections now open) m31102| Fri Feb 22 11:54:40.938 [conn37] end connection 165.225.128.186:56313 (15 connections now open) m31102| Fri Feb 22 11:54:40.938 [conn42] end connection 165.225.128.186:32932 (14 connections now open) m31102| Fri Feb 22 
11:54:40.938 [conn41] end connection 165.225.128.186:60857 (14 connections now open) m31102| Fri Feb 22 11:54:40.938 [conn30] end connection 165.225.128.186:57662 (11 connections now open) m31100| Fri Feb 22 11:54:40.938 [conn67] end connection 165.225.128.186:34998 (16 connections now open) m31100| Fri Feb 22 11:54:40.938 [conn65] end connection 165.225.128.186:33049 (14 connections now open) m31102| Fri Feb 22 11:54:40.938 [conn28] end connection 165.225.128.186:37355 (20 connections now open) m31100| Fri Feb 22 11:54:40.938 [conn62] end connection 165.225.128.186:55583 (23 connections now open) m31102| Fri Feb 22 11:54:40.938 [conn36] end connection 165.225.128.186:57948 (8 connections now open) m31100| Fri Feb 22 11:54:40.938 [conn68] end connection 165.225.128.186:38895 (12 connections now open) Fri Feb 22 11:54:41.935 shell: stopped mongo program on port 30999 Fri Feb 22 11:54:41.935 No db started on port: 30000 Fri Feb 22 11:54:41.935 shell: stopped mongo program on port 30000 ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number ReplSetTest stop *** Shutting down mongod in port 31100 *** m31100| Fri Feb 22 11:54:41.936 got signal 15 (Terminated), will terminate after current cmd ends m31100| Fri Feb 22 11:54:41.936 [interruptThread] now exiting m31100| Fri Feb 22 11:54:41.936 dbexit: m31100| Fri Feb 22 11:54:41.936 [interruptThread] shutdown: going to close listening sockets... m31100| Fri Feb 22 11:54:41.936 [interruptThread] closing listening socket: 12 m31100| Fri Feb 22 11:54:41.936 [interruptThread] closing listening socket: 13 m31100| Fri Feb 22 11:54:41.936 [interruptThread] closing listening socket: 14 m31100| Fri Feb 22 11:54:41.936 [interruptThread] removing socket file: /tmp/mongodb-31100.sock m31100| Fri Feb 22 11:54:41.936 [interruptThread] shutdown: going to flush diaglog... m31100| Fri Feb 22 11:54:41.936 [interruptThread] shutdown: going to close sockets... 
m31100| Fri Feb 22 11:54:41.936 [interruptThread] shutdown: waiting for fs preallocator... m31100| Fri Feb 22 11:54:41.936 [interruptThread] shutdown: lock for final commit... m31100| Fri Feb 22 11:54:41.936 [interruptThread] shutdown: final commit... m31101| Fri Feb 22 11:54:41.936 [conn5] end connection 165.225.128.186:59948 (5 connections now open) m31100| Fri Feb 22 11:54:41.936 [conn43] end connection 165.225.128.186:43243 (10 connections now open) m31100| Fri Feb 22 11:54:41.937 [conn5] end connection 165.225.128.186:56019 (10 connections now open) m31102| Fri Feb 22 11:54:41.937 [conn20] end connection 165.225.128.186:36976 (5 connections now open) m29000| Fri Feb 22 11:54:41.937 [conn8] end connection 165.225.128.186:45468 (2 connections now open) m31100| Fri Feb 22 11:54:41.937 [conn42] end connection 127.0.0.1:39711 (10 connections now open) m31100| Fri Feb 22 11:54:41.937 [conn9] end connection 165.225.128.186:56129 (10 connections now open) m31101| Fri Feb 22 11:54:41.937 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:41.937 [conn46] end connection 165.225.128.186:39844 (10 connections now open) m31101| Fri Feb 22 11:54:41.937 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 11:54:41.937 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:54:41.937 [conn45] end connection 165.225.128.186:39672 (9 connections now open) m31100| Fri Feb 22 11:54:41.953 [interruptThread] shutdown: closing all files... m31100| Fri Feb 22 11:54:41.953 [interruptThread] closeAllFiles() finished m31100| Fri Feb 22 11:54:41.953 [interruptThread] journalCleanup... 
m31100| Fri Feb 22 11:54:41.953 [interruptThread] removeJournalFiles m31100| Fri Feb 22 11:54:41.953 dbexit: really exiting now m31102| Fri Feb 22 11:54:42.078 [rsHealthPoll] DBClientCursor::init call() failed m31102| Fri Feb 22 11:54:42.078 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 11:54:42.079 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31100 is down (or slow to respond): m31102| Fri Feb 22 11:54:42.079 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state DOWN m31102| Fri Feb 22 11:54:42.079 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'bs-smartos-x86-64-1.10gen.cc:31102 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31100 is already primary and more up-to-date' Fri Feb 22 11:54:42.936 shell: stopped mongo program on port 31100 ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number ReplSetTest stop *** Shutting down mongod in port 31101 *** m31101| Fri Feb 22 11:54:42.936 got signal 15 (Terminated), will terminate after current cmd ends m31101| Fri Feb 22 11:54:42.937 [interruptThread] now exiting m31101| Fri Feb 22 11:54:42.937 dbexit: m31101| Fri Feb 22 11:54:42.937 [interruptThread] shutdown: going to close listening sockets... m31101| Fri Feb 22 11:54:42.937 [interruptThread] closing listening socket: 15 m31101| Fri Feb 22 11:54:42.937 [interruptThread] closing listening socket: 16 m31101| Fri Feb 22 11:54:42.937 [interruptThread] closing listening socket: 17 m31101| Fri Feb 22 11:54:42.937 [interruptThread] removing socket file: /tmp/mongodb-31101.sock m31101| Fri Feb 22 11:54:42.937 [interruptThread] shutdown: going to flush diaglog... m31101| Fri Feb 22 11:54:42.937 [interruptThread] shutdown: going to close sockets... m31101| Fri Feb 22 11:54:42.937 [interruptThread] shutdown: waiting for fs preallocator... m31101| Fri Feb 22 11:54:42.937 [interruptThread] shutdown: lock for final commit... 
m31101| Fri Feb 22 11:54:42.937 [interruptThread] shutdown: final commit... m31101| Fri Feb 22 11:54:42.937 [conn1] end connection 127.0.0.1:61668 (4 connections now open) m31101| Fri Feb 22 11:54:42.937 [conn21] end connection 165.225.128.186:49790 (4 connections now open) m31101| Fri Feb 22 11:54:42.937 [conn6] end connection 165.225.128.186:38510 (4 connections now open) m31101| Fri Feb 22 11:54:42.937 [conn7] end connection 165.225.128.186:39191 (4 connections now open) m31102| Fri Feb 22 11:54:42.937 [conn22] end connection 165.225.128.186:62859 (4 connections now open) m31101| Fri Feb 22 11:54:42.959 [interruptThread] shutdown: closing all files... m31101| Fri Feb 22 11:54:42.959 [interruptThread] closeAllFiles() finished m31101| Fri Feb 22 11:54:42.959 [interruptThread] journalCleanup... m31101| Fri Feb 22 11:54:42.959 [interruptThread] removeJournalFiles m31101| Fri Feb 22 11:54:42.959 dbexit: really exiting now Fri Feb 22 11:54:43.936 shell: stopped mongo program on port 31101 ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number ReplSetTest stop *** Shutting down mongod in port 31102 *** m31102| Fri Feb 22 11:54:43.937 got signal 15 (Terminated), will terminate after current cmd ends m31102| Fri Feb 22 11:54:43.937 [interruptThread] now exiting m31102| Fri Feb 22 11:54:43.937 dbexit: m31102| Fri Feb 22 11:54:43.937 [interruptThread] shutdown: going to close listening sockets... m31102| Fri Feb 22 11:54:43.937 [interruptThread] closing listening socket: 18 m31102| Fri Feb 22 11:54:43.937 [interruptThread] closing listening socket: 19 m31102| Fri Feb 22 11:54:43.937 [interruptThread] closing listening socket: 20 m31102| Fri Feb 22 11:54:43.937 [interruptThread] removing socket file: /tmp/mongodb-31102.sock m31102| Fri Feb 22 11:54:43.937 [interruptThread] shutdown: going to flush diaglog... m31102| Fri Feb 22 11:54:43.937 [interruptThread] shutdown: going to close sockets... 
m31102| Fri Feb 22 11:54:43.937 [interruptThread] shutdown: waiting for fs preallocator... m31102| Fri Feb 22 11:54:43.937 [interruptThread] shutdown: lock for final commit... m31102| Fri Feb 22 11:54:43.937 [interruptThread] shutdown: final commit... m31102| Fri Feb 22 11:54:43.937 [conn1] end connection 127.0.0.1:50934 (3 connections now open) m31102| Fri Feb 22 11:54:43.937 [conn5] end connection 165.225.128.186:48837 (3 connections now open) m31102| Fri Feb 22 11:54:43.937 [conn6] end connection 165.225.128.186:45611 (3 connections now open) m31102| Fri Feb 22 11:54:43.958 [interruptThread] shutdown: closing all files... m31102| Fri Feb 22 11:54:43.958 [interruptThread] closeAllFiles() finished m31102| Fri Feb 22 11:54:43.958 [interruptThread] journalCleanup... m31102| Fri Feb 22 11:54:43.958 [interruptThread] removeJournalFiles m31102| Fri Feb 22 11:54:43.959 dbexit: really exiting now Fri Feb 22 11:54:44.937 shell: stopped mongo program on port 31102 ReplSetTest stopSet deleting all dbpaths ReplSetTest stopSet *** Shut down repl set - test worked **** m29000| Fri Feb 22 11:54:44.950 got signal 15 (Terminated), will terminate after current cmd ends m29000| Fri Feb 22 11:54:44.950 [interruptThread] now exiting m29000| Fri Feb 22 11:54:44.950 dbexit: m29000| Fri Feb 22 11:54:44.950 [interruptThread] shutdown: going to close listening sockets... m29000| Fri Feb 22 11:54:44.950 [interruptThread] closing listening socket: 27 m29000| Fri Feb 22 11:54:44.950 [interruptThread] closing listening socket: 28 m29000| Fri Feb 22 11:54:44.950 [interruptThread] closing listening socket: 29 m29000| Fri Feb 22 11:54:44.950 [interruptThread] removing socket file: /tmp/mongodb-29000.sock m29000| Fri Feb 22 11:54:44.950 [interruptThread] shutdown: going to flush diaglog... m29000| Fri Feb 22 11:54:44.950 [interruptThread] shutdown: going to close sockets... m29000| Fri Feb 22 11:54:44.950 [interruptThread] shutdown: waiting for fs preallocator... 
m29000| Fri Feb 22 11:54:44.950 [interruptThread] shutdown: lock for final commit... m29000| Fri Feb 22 11:54:44.950 [interruptThread] shutdown: final commit... m29000| Fri Feb 22 11:54:44.950 [conn1] end connection 127.0.0.1:37237 (1 connection now open) m29000| Fri Feb 22 11:54:44.950 [conn2] end connection 165.225.128.186:57448 (1 connection now open) m29000| Fri Feb 22 11:54:44.964 [interruptThread] shutdown: closing all files... m29000| Fri Feb 22 11:54:44.964 [interruptThread] closeAllFiles() finished m29000| Fri Feb 22 11:54:44.964 [interruptThread] journalCleanup... m29000| Fri Feb 22 11:54:44.965 [interruptThread] removeJournalFiles m29000| Fri Feb 22 11:54:44.965 dbexit: really exiting now Fri Feb 22 11:54:45.950 shell: stopped mongo program on port 29000 *** ShardingTest test completed successfully in 54.446 seconds *** Fri Feb 22 11:54:45.965 [conn110] end connection 127.0.0.1:63946 (0 connections now open) 54.6131 seconds Fri Feb 22 11:54:45.988 [initandlisten] connection accepted from 127.0.0.1:51840 #111 (1 connection now open) Fri Feb 22 11:54:45.989 [conn111] end connection 127.0.0.1:51840 (0 connections now open) ******************************************* Test : replica_set_shard_version.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replica_set_shard_version.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replica_set_shard_version.js";TestData.testFile = "replica_set_shard_version.js";TestData.testName = "replica_set_shard_version";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 11:54:45 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 11:54:46.194 [initandlisten] connection accepted from 127.0.0.1:34876 #112 (1 connection now open) null ---- Starting sharded cluster... ---- Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31100, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 0, "set" : "test-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/test-rs0-0' Fri Feb 22 11:54:46.216 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-0 --setParameter enableTestCommands=1 m31100| note: noprealloc may hurt performance in many applications m31100| Fri Feb 22 11:54:46.308 [initandlisten] MongoDB starting : pid=1967 port=31100 dbpath=/data/db/test-rs0-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31100| Fri Feb 22 11:54:46.308 [initandlisten] m31100| Fri Feb 22 11:54:46.308 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31100| Fri Feb 22 11:54:46.308 [initandlisten] ** uses to detect impending page faults. m31100| Fri Feb 22 11:54:46.308 [initandlisten] ** This may result in slower performance for certain use cases m31100| Fri Feb 22 11:54:46.308 [initandlisten] m31100| Fri Feb 22 11:54:46.308 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31100| Fri Feb 22 11:54:46.308 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31100| Fri Feb 22 11:54:46.308 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31100| Fri Feb 22 11:54:46.308 [initandlisten] allocator: system m31100| Fri Feb 22 11:54:46.308 [initandlisten] options: { dbpath: "/data/db/test-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "test-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31100| Fri Feb 22 11:54:46.309 [initandlisten] journal dir=/data/db/test-rs0-0/journal m31100| Fri Feb 22 11:54:46.309 [initandlisten] recover : no journal files present, no recovery needed m31100| Fri Feb 22 11:54:46.326 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.ns, filling with zeroes... 
m31100| Fri Feb 22 11:54:46.326 [FileAllocator] creating directory /data/db/test-rs0-0/_tmp m31100| Fri Feb 22 11:54:46.326 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 11:54:46.326 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.0, filling with zeroes... m31100| Fri Feb 22 11:54:46.327 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.0, size: 16MB, took 0 secs m31100| Fri Feb 22 11:54:46.330 [initandlisten] waiting for connections on port 31100 m31100| Fri Feb 22 11:54:46.330 [websvr] admin web console waiting for connections on port 32100 m31100| Fri Feb 22 11:54:46.333 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31100| Fri Feb 22 11:54:46.333 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31100| Fri Feb 22 11:54:46.419 [initandlisten] connection accepted from 127.0.0.1:62329 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 1, "set" : "test-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
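The node options dumped above pass `dbpath: "$set-$node"` alongside a `pathOpts` map, and the shell resolves that template to the concrete path `/data/db/test-rs0-0` seen in the startup line. A hypothetical re-implementation of that token expansion (the real `ReplSetTest` helper does this internally; `expandDbPath` is an illustrative name, not shell API):

```javascript
// Hypothetical sketch: expand "$key" tokens in a dbpath template using
// the pathOpts map from the options dump above. Illustration only.
function expandDbPath(template, pathOpts) {
    var out = template;
    for (var key in pathOpts) {
        out = out.replace("$" + key, pathOpts[key]);
    }
    return "/data/db/" + out;
}

var pathOpts = { testName: "test", shard: 0, node: 1, set: "test-rs0" };
expandDbPath("$set-$node", pathOpts); // → "/data/db/test-rs0-1"
```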
Resetting db path '/data/db/test-rs0-1' Fri Feb 22 11:54:46.428 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-1 --setParameter enableTestCommands=1 m31101| note: noprealloc may hurt performance in many applications m31101| Fri Feb 22 11:54:46.523 [initandlisten] MongoDB starting : pid=1968 port=31101 dbpath=/data/db/test-rs0-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31101| Fri Feb 22 11:54:46.524 [initandlisten] m31101| Fri Feb 22 11:54:46.524 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31101| Fri Feb 22 11:54:46.524 [initandlisten] ** uses to detect impending page faults. m31101| Fri Feb 22 11:54:46.524 [initandlisten] ** This may result in slower performance for certain use cases m31101| Fri Feb 22 11:54:46.524 [initandlisten] m31101| Fri Feb 22 11:54:46.524 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31101| Fri Feb 22 11:54:46.524 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31101| Fri Feb 22 11:54:46.524 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31101| Fri Feb 22 11:54:46.524 [initandlisten] allocator: system m31101| Fri Feb 22 11:54:46.524 [initandlisten] options: { dbpath: "/data/db/test-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "test-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31101| Fri Feb 22 11:54:46.524 [initandlisten] journal dir=/data/db/test-rs0-1/journal m31101| Fri Feb 22 11:54:46.524 [initandlisten] recover : no journal files present, no recovery needed m31101| Fri Feb 22 11:54:46.539 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.ns, filling with zeroes... 
m31101| Fri Feb 22 11:54:46.539 [FileAllocator] creating directory /data/db/test-rs0-1/_tmp m31101| Fri Feb 22 11:54:46.539 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 11:54:46.540 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.0, filling with zeroes... m31101| Fri Feb 22 11:54:46.540 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.0, size: 16MB, took 0 secs m31101| Fri Feb 22 11:54:46.543 [websvr] admin web console waiting for connections on port 32101 m31101| Fri Feb 22 11:54:46.543 [initandlisten] waiting for connections on port 31101 m31101| Fri Feb 22 11:54:46.546 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31101| Fri Feb 22 11:54:46.546 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31101| Fri Feb 22 11:54:46.630 [initandlisten] connection accepted from 127.0.0.1:65099 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101 ] ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31102, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "test-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "test", "shard" : 0, "node" : 2, "set" : "test-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/test-rs0-2' Fri Feb 22 11:54:46.635 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet test-rs0 --dbpath /data/db/test-rs0-2 --setParameter enableTestCommands=1 m31102| note: noprealloc may hurt performance in many applications m31102| Fri Feb 22 11:54:46.713 [initandlisten] MongoDB starting : pid=1969 port=31102 dbpath=/data/db/test-rs0-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31102| Fri Feb 22 11:54:46.713 [initandlisten] m31102| Fri Feb 22 11:54:46.713 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31102| Fri Feb 22 11:54:46.713 [initandlisten] ** uses to detect impending page faults. m31102| Fri Feb 22 11:54:46.713 [initandlisten] ** This may result in slower performance for certain use cases m31102| Fri Feb 22 11:54:46.713 [initandlisten] m31102| Fri Feb 22 11:54:46.713 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31102| Fri Feb 22 11:54:46.713 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31102| Fri Feb 22 11:54:46.713 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31102| Fri Feb 22 11:54:46.714 [initandlisten] allocator: system m31102| Fri Feb 22 11:54:46.714 [initandlisten] options: { dbpath: "/data/db/test-rs0-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "test-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31102| Fri Feb 22 11:54:46.714 [initandlisten] journal dir=/data/db/test-rs0-2/journal m31102| Fri Feb 22 11:54:46.714 [initandlisten] recover : no journal files present, no recovery needed m31102| Fri Feb 22 11:54:46.729 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.ns, filling with zeroes... 
m31102| Fri Feb 22 11:54:46.729 [FileAllocator] creating directory /data/db/test-rs0-2/_tmp
m31102| Fri Feb 22 11:54:46.729 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.ns, size: 16MB, took 0 secs
m31102| Fri Feb 22 11:54:46.729 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.0, filling with zeroes...
m31102| Fri Feb 22 11:54:46.729 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.0, size: 16MB, took 0 secs
m31102| Fri Feb 22 11:54:46.732 [initandlisten] waiting for connections on port 31102
m31102| Fri Feb 22 11:54:46.732 [websvr] admin web console waiting for connections on port 32102
m31102| Fri Feb 22 11:54:46.735 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Fri Feb 22 11:54:46.735 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Fri Feb 22 11:54:46.836 [initandlisten] connection accepted from 127.0.0.1:35803 #1 (1 connection now open)
[
  connection to bs-smartos-x86-64-1.10gen.cc:31100,
  connection to bs-smartos-x86-64-1.10gen.cc:31101,
  connection to bs-smartos-x86-64-1.10gen.cc:31102
]
{
  "replSetInitiate" : {
    "_id" : "test-rs0",
    "members" : [
      { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" },
      { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101" },
      { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31102" }
    ]
  }
}
m31100| Fri Feb 22 11:54:46.840 [conn1] replSet replSetInitiate admin command received from client
m31100| Fri Feb 22 11:54:46.841 [conn1] replSet replSetInitiate config object parses ok, 3 members specified
m31100| Fri Feb 22 11:54:46.841 [initandlisten] connection accepted from 165.225.128.186:51308 #2 (2 connections now open)
m31101| Fri Feb 22 11:54:46.842 [initandlisten] connection accepted from 165.225.128.186:60506 #2 (2 connections now open)
m31102| Fri Feb 22 11:54:46.843 [initandlisten] connection accepted from 165.225.128.186:49775 #2 (2 connections now open)
m31100| Fri Feb 22 11:54:46.844 [conn1] replSet replSetInitiate all members seem up
m31100| Fri Feb 22 11:54:46.844 [conn1] ******
m31100| Fri Feb 22 11:54:46.844 [conn1] creating replication oplog of size: 40MB...
m31100| Fri Feb 22 11:54:46.845 [FileAllocator] allocating new datafile /data/db/test-rs0-0/local.1, filling with zeroes...
m31100| Fri Feb 22 11:54:46.845 [FileAllocator] done allocating datafile /data/db/test-rs0-0/local.1, size: 64MB, took 0 secs
m31100| Fri Feb 22 11:54:46.855 [conn2] end connection 165.225.128.186:51308 (1 connection now open)
m31100| Fri Feb 22 11:54:46.856 [conn1] ******
m31100| Fri Feb 22 11:54:46.856 [conn1] replSet info saving a newer config version to local.system.replset
m31100| Fri Feb 22 11:54:46.862 [conn1] replSet saveConfigLocally done
m31100| Fri Feb 22 11:54:46.862 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute.
{
  "info" : "Config now saved locally. Should come online in about a minute.",
  "ok" : 1
}
m31100| Fri Feb 22 11:54:56.333 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:54:56.334 [rsStart] replSet STARTUP2
m31100| Fri Feb 22 11:54:56.334 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up
m31101| Fri Feb 22 11:54:56.547 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:54:56.547 [initandlisten] connection accepted from 165.225.128.186:34774 #3 (2 connections now open)
m31101| Fri Feb 22 11:54:56.548 [initandlisten] connection accepted from 165.225.128.186:46674 #3 (3 connections now open)
m31101| Fri Feb 22 11:54:56.549 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 11:54:56.549 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Fri Feb 22 11:54:56.549 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Fri Feb 22 11:54:56.552 [rsStart] replSet saveConfigLocally done
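The `replSetInitiate` document the test sends above pairs each member `_id` with a `host:port` string. A sketch of building that command document from the set name and host list (illustrative; the shell's `ReplSetTest` assembles it internally):

```javascript
// Sketch: build the replSetInitiate command document shown in the log
// from a set name and an ordered list of host:port strings.
function makeInitiateCmd(setName, hosts) {
    return {
        replSetInitiate: {
            _id: setName,
            // member _id values follow the order of the host list
            members: hosts.map(function (host, i) {
                return { _id: i, host: host };
            })
        }
    };
}

var cmd = makeInitiateCmd("test-rs0", [
    "bs-smartos-x86-64-1.10gen.cc:31100",
    "bs-smartos-x86-64-1.10gen.cc:31101",
    "bs-smartos-x86-64-1.10gen.cc:31102"
]);
// cmd.replSetInitiate.members.length === 3
```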
m31101| Fri Feb 22 11:54:56.553 [rsStart] replSet STARTUP2 m31101| Fri Feb 22 11:54:56.553 [rsSync] ****** m31101| Fri Feb 22 11:54:56.553 [rsSync] creating replication oplog of size: 40MB... m31101| Fri Feb 22 11:54:56.554 [FileAllocator] allocating new datafile /data/db/test-rs0-1/local.1, filling with zeroes... m31101| Fri Feb 22 11:54:56.554 [FileAllocator] done allocating datafile /data/db/test-rs0-1/local.1, size: 64MB, took 0 secs m31101| Fri Feb 22 11:54:56.565 [rsSync] ****** m31101| Fri Feb 22 11:54:56.565 [rsSync] replSet initial sync pending m31101| Fri Feb 22 11:54:56.565 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31101| Fri Feb 22 11:54:56.569 [conn3] end connection 165.225.128.186:46674 (2 connections now open) m31102| Fri Feb 22 11:54:56.735 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31100| Fri Feb 22 11:54:57.335 [rsSync] replSet SECONDARY m31100| Fri Feb 22 11:54:58.334 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31100| Fri Feb 22 11:54:58.334 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 thinks that we are down m31100| Fri Feb 22 11:54:58.334 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31100| Fri Feb 22 11:54:58.334 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31100| Fri Feb 22 11:54:58.334 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31101| Fri Feb 22 11:54:58.549 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31101| Fri Feb 22 11:54:58.549 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31102| Fri Feb 22 11:54:58.549 [initandlisten] connection accepted from 165.225.128.186:41841 #3 (3 
connections now open) m31101| Fri Feb 22 11:54:58.550 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31100| Fri Feb 22 11:55:04.335 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes m31102| Fri Feb 22 11:55:06.736 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:06.737 [initandlisten] connection accepted from 165.225.128.186:49631 #4 (3 connections now open) m31102| Fri Feb 22 11:55:06.737 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 11:55:06.737 [initandlisten] connection accepted from 165.225.128.186:44730 #4 (3 connections now open) m31102| Fri Feb 22 11:55:06.738 [initandlisten] connection accepted from 165.225.128.186:47767 #4 (4 connections now open) m31102| Fri Feb 22 11:55:06.739 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 11:55:06.739 [rsStart] replSet got config version 1 from a remote, saving locally m31102| Fri Feb 22 11:55:06.739 [rsStart] replSet info saving a newer config version to local.system.replset m31102| Fri Feb 22 11:55:06.743 [rsStart] replSet saveConfigLocally done m31102| Fri Feb 22 11:55:06.743 [rsStart] replSet STARTUP2 m31102| Fri Feb 22 11:55:06.743 [rsSync] ****** m31102| Fri Feb 22 11:55:06.743 [rsSync] creating replication oplog of size: 40MB... m31102| Fri Feb 22 11:55:06.744 [FileAllocator] allocating new datafile /data/db/test-rs0-2/local.1, filling with zeroes... 
m31102| Fri Feb 22 11:55:06.744 [FileAllocator] done allocating datafile /data/db/test-rs0-2/local.1, size: 64MB, took 0 secs m31102| Fri Feb 22 11:55:06.752 [conn4] end connection 165.225.128.186:47767 (3 connections now open) m31102| Fri Feb 22 11:55:06.754 [rsSync] ****** m31102| Fri Feb 22 11:55:06.754 [rsSync] replSet initial sync pending m31102| Fri Feb 22 11:55:06.754 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31100| Fri Feb 22 11:55:08.335 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down m31100| Fri Feb 22 11:55:08.335 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2 m31100| Fri Feb 22 11:55:08.335 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31101| Fri Feb 22 11:55:08.551 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down m31101| Fri Feb 22 11:55:08.551 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2 m31102| Fri Feb 22 11:55:08.739 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31102| Fri Feb 22 11:55:08.739 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31102| Fri Feb 22 11:55:08.739 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31102| Fri Feb 22 11:55:08.739 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31101| Fri Feb 22 11:55:10.336 [conn2] end connection 165.225.128.186:60506 (2 connections now open) m31101| Fri Feb 22 11:55:10.336 [initandlisten] connection accepted from 165.225.128.186:51559 #5 (3 connections now open) m31100| Fri Feb 22 11:55:12.551 [conn3] end connection 165.225.128.186:34774 (2 connections now open) m31100| Fri Feb 22 11:55:12.551 [initandlisten] connection accepted from 
165.225.128.186:39570 #5 (3 connections now open) m31101| Fri Feb 22 11:55:12.565 [rsSync] replSet initial sync pending m31101| Fri Feb 22 11:55:12.565 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:12.566 [initandlisten] connection accepted from 165.225.128.186:57714 #6 (4 connections now open) m31101| Fri Feb 22 11:55:12.573 [rsSync] build index local.me { _id: 1 } m31101| Fri Feb 22 11:55:12.577 [rsSync] build index done. scanned 0 total records. 0.004 secs m31101| Fri Feb 22 11:55:12.579 [rsSync] build index local.replset.minvalid { _id: 1 } m31101| Fri Feb 22 11:55:12.580 [rsSync] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 11:55:12.581 [rsSync] replSet initial sync drop all databases m31101| Fri Feb 22 11:55:12.581 [rsSync] dropAllDatabasesExceptLocal 1 m31101| Fri Feb 22 11:55:12.581 [rsSync] replSet initial sync clone all databases m31101| Fri Feb 22 11:55:12.581 [rsSync] replSet initial sync data copy, starting syncup m31101| Fri Feb 22 11:55:12.581 [rsSync] oplog sync 1 of 3 m31101| Fri Feb 22 11:55:12.581 [rsSync] oplog sync 2 of 3 m31101| Fri Feb 22 11:55:12.581 [rsSync] replSet initial sync building indexes m31101| Fri Feb 22 11:55:12.581 [rsSync] oplog sync 3 of 3 m31101| Fri Feb 22 11:55:12.582 [rsSync] replSet initial sync finishing up m31101| Fri Feb 22 11:55:12.589 [rsSync] replSet set minValid=51275c86:b m31101| Fri Feb 22 11:55:12.595 [rsSync] replSet RECOVERING m31101| Fri Feb 22 11:55:12.595 [rsSync] replSet initial sync done m31100| Fri Feb 22 11:55:12.595 [conn6] end connection 165.225.128.186:57714 (3 connections now open) m31102| Fri Feb 22 11:55:12.739 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING m31100| Fri Feb 22 11:55:14.336 [rsMgr] replSet info electSelf 0 m31102| Fri Feb 22 11:55:14.337 [conn2] replSet RECOVERING m31102| Fri Feb 22 11:55:14.337 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0) 
m31100| Fri Feb 22 11:55:14.337 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING m31101| Fri Feb 22 11:55:14.337 [conn5] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0) m31101| Fri Feb 22 11:55:14.551 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING m31101| Fri Feb 22 11:55:14.595 [rsSync] replSet SECONDARY m31102| Fri Feb 22 11:55:14.740 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY m31100| Fri Feb 22 11:55:15.335 [rsMgr] replSet PRIMARY m31100| Fri Feb 22 11:55:16.336 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING m31100| Fri Feb 22 11:55:16.337 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY m31101| Fri Feb 22 11:55:16.552 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31101| Fri Feb 22 11:55:16.554 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:16.555 [initandlisten] connection accepted from 165.225.128.186:53554 #7 (4 connections now open) m31101| Fri Feb 22 11:55:16.595 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:16.596 [initandlisten] connection accepted from 165.225.128.186:44471 #8 (5 connections now open) m31102| Fri Feb 22 11:55:16.740 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31100| Fri Feb 22 11:55:17.604 [slaveTracking] build index local.slaves { _id: 1 } m31100| Fri Feb 22 11:55:17.607 [slaveTracking] build index done. scanned 0 total records. 
0.002 secs m31100| Fri Feb 22 11:55:22.741 [conn4] end connection 165.225.128.186:49631 (4 connections now open) m31100| Fri Feb 22 11:55:22.741 [initandlisten] connection accepted from 165.225.128.186:63210 #9 (5 connections now open) m31102| Fri Feb 22 11:55:22.755 [rsSync] replSet initial sync pending m31102| Fri Feb 22 11:55:22.755 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:22.755 [initandlisten] connection accepted from 165.225.128.186:33143 #10 (6 connections now open) m31102| Fri Feb 22 11:55:22.763 [rsSync] build index local.me { _id: 1 } m31102| Fri Feb 22 11:55:22.767 [rsSync] build index done. scanned 0 total records. 0.003 secs m31102| Fri Feb 22 11:55:22.768 [rsSync] build index local.replset.minvalid { _id: 1 } m31102| Fri Feb 22 11:55:22.769 [rsSync] build index done. scanned 0 total records. 0.001 secs m31102| Fri Feb 22 11:55:22.769 [rsSync] replSet initial sync drop all databases m31102| Fri Feb 22 11:55:22.770 [rsSync] dropAllDatabasesExceptLocal 1 m31102| Fri Feb 22 11:55:22.770 [rsSync] replSet initial sync clone all databases m31102| Fri Feb 22 11:55:22.770 [rsSync] replSet initial sync data copy, starting syncup m31102| Fri Feb 22 11:55:22.770 [rsSync] oplog sync 1 of 3 m31102| Fri Feb 22 11:55:22.770 [rsSync] oplog sync 2 of 3 m31102| Fri Feb 22 11:55:22.770 [rsSync] replSet initial sync building indexes m31102| Fri Feb 22 11:55:22.770 [rsSync] oplog sync 3 of 3 m31102| Fri Feb 22 11:55:22.770 [rsSync] replSet initial sync finishing up m31102| Fri Feb 22 11:55:22.777 [rsSync] replSet set minValid=51275c86:b m31102| Fri Feb 22 11:55:22.781 [rsSync] replSet initial sync done m31100| Fri Feb 22 11:55:22.781 [conn10] end connection 165.225.128.186:33143 (5 connections now open) m31102| Fri Feb 22 11:55:23.744 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:23.745 [initandlisten] connection accepted from 165.225.128.186:43795 #11 (6 connections now 
open) m31102| Fri Feb 22 11:55:23.782 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 11:55:23.783 [initandlisten] connection accepted from 165.225.128.186:60049 #12 (7 connections now open) m31102| Fri Feb 22 11:55:24.782 [rsSync] replSet SECONDARY m31100| Fri Feb 22 11:55:24.787 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.ns, filling with zeroes... m31100| Fri Feb 22 11:55:24.787 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 11:55:24.787 [FileAllocator] allocating new datafile /data/db/test-rs0-0/admin.0, filling with zeroes... m31100| Fri Feb 22 11:55:24.787 [FileAllocator] done allocating datafile /data/db/test-rs0-0/admin.0, size: 16MB, took 0 secs m31100| Fri Feb 22 11:55:24.790 [conn1] build index admin.foo { _id: 1 } m31100| Fri Feb 22 11:55:24.791 [conn1] build index done. scanned 0 total records. 0 secs m31101| Fri Feb 22 11:55:24.792 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.ns, filling with zeroes... m31102| Fri Feb 22 11:55:24.792 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.ns, filling with zeroes... m31102| Fri Feb 22 11:55:24.792 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 11:55:24.792 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.ns, size: 16MB, took 0 secs m31102| Fri Feb 22 11:55:24.792 [FileAllocator] allocating new datafile /data/db/test-rs0-2/admin.0, filling with zeroes... m31101| Fri Feb 22 11:55:24.792 [FileAllocator] allocating new datafile /data/db/test-rs0-1/admin.0, filling with zeroes... 
m31102| Fri Feb 22 11:55:24.792 [FileAllocator] done allocating datafile /data/db/test-rs0-2/admin.0, size: 16MB, took 0 secs m31101| Fri Feb 22 11:55:24.792 [FileAllocator] done allocating datafile /data/db/test-rs0-1/admin.0, size: 16MB, took 0 secs ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361534124000, "i" : 1 } ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361534124000, "i" : 1 } ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101 m31102| Fri Feb 22 11:55:24.795 [repl writer worker 1] build index admin.foo { _id: 1 } m31101| Fri Feb 22 11:55:24.795 [repl writer worker 1] build index admin.foo { _id: 1 } m31102| Fri Feb 22 11:55:24.796 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 11:55:24.797 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31102 ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31102, is synced ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361534124000, "i" : 1 } Fri Feb 22 11:55:24.800 starting new replica set monitor for replica set test-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 11:55:24.801 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set test-rs0 m31100| Fri Feb 22 11:55:24.801 [initandlisten] connection accepted from 165.225.128.186:57070 #13 (8 connections now open) Fri Feb 22 11:55:24.801 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from test-rs0/ Fri Feb 22 11:55:24.801 
trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set test-rs0 Fri Feb 22 11:55:24.801 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set test-rs0 Fri Feb 22 11:55:24.801 trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set test-rs0 m31100| Fri Feb 22 11:55:24.801 [initandlisten] connection accepted from 165.225.128.186:35635 #14 (9 connections now open) Fri Feb 22 11:55:24.802 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set test-rs0 Fri Feb 22 11:55:24.802 trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set test-rs0 m31101| Fri Feb 22 11:55:24.802 [initandlisten] connection accepted from 165.225.128.186:34946 #6 (4 connections now open) Fri Feb 22 11:55:24.802 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set test-rs0 m31102| Fri Feb 22 11:55:24.802 [initandlisten] connection accepted from 165.225.128.186:56680 #5 (4 connections now open) m31100| Fri Feb 22 11:55:24.802 [initandlisten] connection accepted from 165.225.128.186:58054 #15 (10 connections now open) m31100| Fri Feb 22 11:55:24.802 [conn13] end connection 165.225.128.186:57070 (9 connections now open) Fri Feb 22 11:55:24.802 Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 11:55:24.803 [initandlisten] connection accepted from 165.225.128.186:63869 #7 (5 connections now open) m31102| Fri Feb 22 11:55:24.803 [initandlisten] connection accepted from 165.225.128.186:47040 #6 (5 connections now open) Fri Feb 22 11:55:24.804 replica set monitor for replica set test-rs0 started, address is test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 11:55:24.804 [ReplicaSetMonitorWatcher] starting Resetting db path '/data/db/test-config0' Fri Feb 22 11:55:24.807 shell: started program 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/test-config0 --configsvr --setParameter enableTestCommands=1 m29000| Fri Feb 22 11:55:24.887 [initandlisten] MongoDB starting : pid=2932 port=29000 dbpath=/data/db/test-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc m29000| Fri Feb 22 11:55:24.888 [initandlisten] m29000| Fri Feb 22 11:55:24.888 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m29000| Fri Feb 22 11:55:24.888 [initandlisten] ** uses to detect impending page faults. m29000| Fri Feb 22 11:55:24.888 [initandlisten] ** This may result in slower performance for certain use cases m29000| Fri Feb 22 11:55:24.888 [initandlisten] m29000| Fri Feb 22 11:55:24.888 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m29000| Fri Feb 22 11:55:24.888 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m29000| Fri Feb 22 11:55:24.888 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m29000| Fri Feb 22 11:55:24.888 [initandlisten] allocator: system m29000| Fri Feb 22 11:55:24.888 [initandlisten] options: { configsvr: true, dbpath: "/data/db/test-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] } m29000| Fri Feb 22 11:55:24.888 [initandlisten] journal dir=/data/db/test-config0/journal m29000| Fri Feb 22 11:55:24.888 [initandlisten] recover : no journal files present, no recovery needed m29000| Fri Feb 22 11:55:24.903 [FileAllocator] allocating new datafile /data/db/test-config0/local.ns, filling with zeroes... m29000| Fri Feb 22 11:55:24.903 [FileAllocator] creating directory /data/db/test-config0/_tmp m29000| Fri Feb 22 11:55:24.904 [FileAllocator] done allocating datafile /data/db/test-config0/local.ns, size: 16MB, took 0 secs m29000| Fri Feb 22 11:55:24.904 [FileAllocator] allocating new datafile /data/db/test-config0/local.0, filling with zeroes... 
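The `ReplSetTest awaitReplication` phase earlier in the log compares the primary's latest oplog timestamp, printed as `{ "t" : ..., "i" : ... }`, against each secondary before declaring it synced. A minimal sketch of that ordering (seconds component `t` first, then increment `i`; illustrative, not the shell's actual implementation):

```javascript
// Sketch: "is timestamp a at least timestamp b?" for the { t, i } oplog
// timestamps in the awaitReplication output. Compare t, then i.
function tsAtLeast(a, b) {
    if (a.t !== b.t) return a.t > b.t;
    return a.i >= b.i;
}

var primaryTs = { t: 1361534124000, i: 1 };
tsAtLeast({ t: 1361534124000, i: 1 }, primaryTs); // secondary caught up → true
tsAtLeast({ t: 1361534123000, i: 9 }, primaryTs); // older second → false
```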
m29000| Fri Feb 22 11:55:24.904 [FileAllocator] done allocating datafile /data/db/test-config0/local.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:55:24.907 [initandlisten] ******
m29000| Fri Feb 22 11:55:24.907 [initandlisten] creating replication oplog of size: 5MB...
m29000| Fri Feb 22 11:55:24.910 [initandlisten] ******
m29000| Fri Feb 22 11:55:24.911 [initandlisten] waiting for connections on port 29000
m29000| Fri Feb 22 11:55:24.911 [websvr] admin web console waiting for connections on port 30000
m29000| Fri Feb 22 11:55:25.008 [initandlisten] connection accepted from 127.0.0.1:49790 #1 (1 connection now open)
"bs-smartos-x86-64-1.10gen.cc:29000"
m29000| Fri Feb 22 11:55:25.008 [initandlisten] connection accepted from 165.225.128.186:35442 #2 (2 connections now open)
ShardingTest test : { "config" : "bs-smartos-x86-64-1.10gen.cc:29000", "shards" : [ connection to test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 ] }
Fri Feb 22 11:55:25.012 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb bs-smartos-x86-64-1.10gen.cc:29000 --chunkSize 50 --setParameter enableTestCommands=1
m30999| Fri Feb 22 11:55:25.028 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 11:55:25.028 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=2933 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 11:55:25.028 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 11:55:25.028 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 11:55:25.028 [mongosMain] options: { chunkSize: 50, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30999, setParameter: [ "enableTestCommands=1" ] }
m29000| Fri Feb 22 11:55:25.029 [initandlisten] connection accepted from 165.225.128.186:59375 #3 (3 connections now open)
m29000| Fri Feb 22 11:55:25.030 [initandlisten] connection accepted from 165.225.128.186:49724 #4 (4 connections now open)
m29000| Fri Feb 22 11:55:25.031 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 11:55:25.036 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838 (sleeping for 30000ms)
m29000| Fri Feb 22 11:55:25.036 [FileAllocator] allocating new datafile /data/db/test-config0/config.ns, filling with zeroes...
m29000| Fri Feb 22 11:55:25.036 [FileAllocator] done allocating datafile /data/db/test-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:55:25.036 [FileAllocator] allocating new datafile /data/db/test-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 11:55:25.036 [FileAllocator] done allocating datafile /data/db/test-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 11:55:25.036 [FileAllocator] allocating new datafile /data/db/test-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 11:55:25.036 [FileAllocator] done allocating datafile /data/db/test-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 11:55:25.038 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 11:55:25.039 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:55:25.040 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 11:55:25.041 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 11:55:25.042 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 11:55:25.043 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 11:55:25.043 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838' acquired, ts : 51275cadc01a1ebec5b83ea7
m30999| Fri Feb 22 11:55:25.045 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 11:55:25.045 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 11:55:25.045 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:55:25-51275cadc01a1ebec5b83ea8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361534125045), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 11:55:25.045 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 11:55:25.045 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:55:25.046 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 11:55:25.046 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 11:55:25.046 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:55:25.047 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T11:55:25-51275cadc01a1ebec5b83eaa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361534125047), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 11:55:25.047 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 11:55:25.047 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838' unlocked.
m29000| Fri Feb 22 11:55:25.048 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 11:55:25.049 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 11:55:25.049 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 11:55:25.049 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 11:55:25.049 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 11:55:25.050 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 11:55:25.051 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:55:25.051 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 11:55:25.051 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 11:55:25.051 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:55:25.052 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 11:55:25.052 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:55:25.052 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 11:55:25.053 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:55:25.053 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 11:55:25.053 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 11:55:25.053 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 11:55:25.053 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 11:55:25.054 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:55:25.055 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 11:55:25.055 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 11:55:25
m29000| Fri Feb 22 11:55:25.055 [conn3] build index config.mongos { _id: 1 }
m29000| Fri Feb 22 11:55:25.055 [initandlisten] connection accepted from 165.225.128.186:58427 #5 (5 connections now open)
m29000| Fri Feb 22 11:55:25.056 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:55:25.057 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838' acquired, ts : 51275cadc01a1ebec5b83eac
m30999| Fri Feb 22 11:55:25.057 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838' unlocked.
m30999| Fri Feb 22 11:55:25.213 [mongosMain] connection accepted from 127.0.0.1:59072 #1 (1 connection now open)
Fri Feb 22 11:55:25.216 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30998 --configdb bs-smartos-x86-64-1.10gen.cc:29000 --chunkSize 50 --setParameter enableTestCommands=1
m30998| Fri Feb 22 11:55:25.235 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Fri Feb 22 11:55:25.236 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=2935 port=30998 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30998| Fri Feb 22 11:55:25.236 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30998| Fri Feb 22 11:55:25.236 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30998| Fri Feb 22 11:55:25.236 [mongosMain] options: { chunkSize: 50, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30998, setParameter: [ "enableTestCommands=1" ] }
m29000| Fri Feb 22 11:55:25.237 [initandlisten] connection accepted from 165.225.128.186:59171 #6 (6 connections now open)
m29000| Fri Feb 22 11:55:25.243 [initandlisten] connection accepted from 165.225.128.186:51699 #7 (7 connections now open)
m30998| Fri Feb 22 11:55:25.245 [Balancer] about to contact config servers and shards
m30998| Fri Feb 22 11:55:25.245 [Balancer] config servers and shards contacted successfully
m30998| Fri Feb 22 11:55:25.245 [websvr] admin web console waiting for connections on port 31998
m30998| Fri Feb 22 11:55:25.245 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30998 started at Feb 22 11:55:25
m30998| Fri Feb 22 11:55:25.245 [mongosMain] waiting for connections on port 30998
m29000| Fri Feb 22 11:55:25.245 [initandlisten] connection accepted from 165.225.128.186:57189 #8 (8 connections now open)
m30998| Fri Feb 22 11:55:25.246 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30998:1361534125:16838 (sleeping for 30000ms)
m30998| Fri Feb 22 11:55:25.247 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361534125:16838' acquired, ts : 51275cad6df0af3b99289ed3
m30998| Fri Feb 22 11:55:25.248 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361534125:16838' unlocked.
m30998| Fri Feb 22 11:55:25.418 [mongosMain] connection accepted from 127.0.0.1:57085 #1 (1 connection now open)
ShardingTest undefined going to add shard : test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:55:25.420 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 11:55:25.420 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 11:55:25.421 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 11:55:25.421 [conn1] put [admin] on: config:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 11:55:25.421 [conn1] starting new replica set monitor for replica set test-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:55:25.421 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set test-rs0
m31100| Fri Feb 22 11:55:25.421 [initandlisten] connection accepted from 165.225.128.186:50837 #16 (10 connections now open)
m30999| Fri Feb 22 11:55:25.422 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from test-rs0/
m30999| Fri Feb 22 11:55:25.422 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set test-rs0
m31100| Fri Feb 22 11:55:25.422 [initandlisten] connection accepted from 165.225.128.186:44194 #17 (11 connections now open)
m30999| Fri Feb 22 11:55:25.422 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set test-rs0
m30999| Fri Feb 22 11:55:25.422 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set test-rs0
m30999| Fri Feb 22 11:55:25.422 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set test-rs0
m30999| Fri Feb 22 11:55:25.422 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set test-rs0
m31101| Fri Feb 22 11:55:25.422 [initandlisten] connection accepted from 165.225.128.186:35980 #8 (6 connections now open)
m30999| Fri Feb 22 11:55:25.422 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set test-rs0
m31102| Fri Feb 22 11:55:25.422 [initandlisten] connection accepted from 165.225.128.186:63766 #7 (6 connections now open)
m31100| Fri Feb 22 11:55:25.423 [initandlisten] connection accepted from 165.225.128.186:53683 #18 (12 connections now open)
m31100| Fri Feb 22 11:55:25.423 [conn16] end connection 165.225.128.186:50837 (11 connections now open)
m30999| Fri Feb 22 11:55:25.423 [conn1] Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 11:55:25.424 [initandlisten] connection accepted from 165.225.128.186:50028 #9 (7 connections now open)
m31102| Fri Feb 22 11:55:25.424 [initandlisten] connection accepted from 165.225.128.186:65413 #8 (7 connections now open)
m30999| Fri Feb 22 11:55:25.424 [conn1] replica set monitor for replica set test-rs0 started, address is test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:55:25.424 [ReplicaSetMonitorWatcher] starting
m31100| Fri Feb 22 11:55:25.425 [initandlisten] connection accepted from 165.225.128.186:60375 #19 (12 connections now open)
m30999| Fri Feb 22 11:55:25.426 [conn1] going to add shard: { _id: "test-rs0", host: "test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" }
{ "shardAdded" : "test-rs0", "ok" : 1 }
m30999| Fri Feb 22 11:55:25.428 [conn1] couldn't find database [replica_set_shard_version] in config db
m30999| Fri Feb 22 11:55:25.429 [conn1] put [replica_set_shard_version] on: test-rs0:test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 11:55:25.429 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31100 serverID: 51275cadc01a1ebec5b83eab
m30999| Fri Feb 22 11:55:25.429 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31101 serverID: 51275cadc01a1ebec5b83eab
m30999| Fri Feb 22 11:55:25.429 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31102 serverID: 51275cadc01a1ebec5b83eab
m31100| Fri Feb 22 11:55:25.429 [initandlisten] connection accepted from 165.225.128.186:44646 #20 (13 connections now open)
m31100| Fri Feb 22 11:55:25.430 [initandlisten] connection accepted from 165.225.128.186:55478 #21 (14 connections now open)
m31102| Fri Feb 22 11:55:25.430 [initandlisten] connection accepted from 165.225.128.186:40558 #9 (8 connections now open)
m31101| Fri Feb 22 11:55:25.430 [initandlisten] connection accepted from 165.225.128.186:61397 #10 (8 connections now open)
m31100| Fri Feb 22 11:55:25.431 [initandlisten] connection accepted from 165.225.128.186:57720 #22 (15 connections now open)
m31100| Fri Feb 22 11:55:25.431 [conn22] replSet info stepping down as primary secs=3000
m31100| Fri Feb 22 11:55:25.431 [conn22] replSet relinquishing primary state
m31100| Fri Feb 22 11:55:25.431 [conn22] replSet SECONDARY
m31100| Fri Feb 22 11:55:25.431 [conn22] replSet closing client sockets after relinquishing primary
m31100| Fri Feb 22 11:55:25.432 [conn1] end connection 127.0.0.1:62329 (14 connections now open)
m31100| Fri Feb 22 11:55:25.432 [conn17] end connection 165.225.128.186:44194 (14 connections now open)
m31100| Fri Feb 22 11:55:25.432 [conn14] end connection 165.225.128.186:35635 (14 connections now open)
m31100| Fri Feb 22 11:55:25.432 [conn19] end connection 165.225.128.186:60375 (14 connections now open)
m31101| Fri Feb 22 11:55:25.432 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:55:25.432 [conn15] end connection 165.225.128.186:58054 (14 connections now open)
m31102| Fri Feb 22 11:55:25.432 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 11:55:25.432 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 11:55:25.432 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 11:55:25.432 [conn18] end connection 165.225.128.186:53683 (14 connections now open)
m31100| Fri Feb 22 11:55:25.432 [conn21] end connection 165.225.128.186:55478 (14 connections now open)
m31100| Fri Feb 22 11:55:25.432 [conn22] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:57720]
m30999| Fri Feb 22 11:55:25.441 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] DBClientCursor::init call() failed
m30999| Fri Feb 22 11:55:25.441 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] Detected bad connection created at 1361534125429965 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:55:25.441 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('51275cadc01a1ebec5b83eab') }
Fri Feb 22 11:55:25.442 DBClientCursor::init call() failed
Fri Feb 22 11:55:25.442 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31100
Fri Feb 22 11:55:25.442 SocketException: remote: 127.0.0.1:31100 error: 9001 socket exception [1] server [127.0.0.1:31100]
Fri Feb 22 11:55:25.442 DBClientCursor::init call() failed
ReplSetTest Could not call ismaster on node 0: Error: error doing query: failed
m31102| Fri Feb 22 11:55:26.338 [conn2] end connection 165.225.128.186:49775 (7 connections now open)
m31102| Fri Feb 22 11:55:26.338 [initandlisten] connection accepted from 165.225.128.186:47010 #10 (8 connections now open)
m31100| Fri Feb 22 11:55:26.338 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY
m31100| Fri Feb 22 11:55:26.442 [initandlisten] connection accepted from 165.225.128.186:50779 #23 (8 connections now open)
m31102| Fri Feb 22 11:55:26.553 [conn3] end connection 165.225.128.186:41841 (7 connections now open)
m31102| Fri Feb 22 11:55:26.553 [initandlisten] connection accepted from 165.225.128.186:63874 #11 (8 connections now open)
m31101| Fri Feb 22 11:55:26.553 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY
m31101| Fri Feb 22 11:55:26.554 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY
m31101| Fri Feb 22 11:55:26.554 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'bs-smartos-x86-64-1.10gen.cc:31101 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31100 is already primary and more up-to-date'
m31102| Fri Feb 22 11:55:26.742 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY
m31102| Fri Feb 22 11:55:26.742 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31102 is electable'
Fri Feb 22 11:55:27.444 trying reconnect to 127.0.0.1:31100
m31100| Fri Feb 22 11:55:27.445 [initandlisten] connection accepted from 127.0.0.1:36973 #24 (9 connections now open)
Fri Feb 22 11:55:27.445 reconnect 127.0.0.1:31100 ok
m31100| Fri Feb 22 11:55:29.842 [conn7] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:53554]
m31100| Fri Feb 22 11:55:29.842 [conn11] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:43795]
m31100| Fri Feb 22 11:55:29.847 [conn12] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:60049]
m31100| Fri Feb 22 11:55:29.848 [conn8] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:44471]
m30999| Fri Feb 22 11:55:31.058 [Balancer] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
m30999| Fri Feb 22 11:55:31.058 [Balancer] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
m30999| Fri Feb 22 11:55:31.058 [Balancer] DBClientCursor::init call() failed
m30999| Fri Feb 22 11:55:31.059 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30999| Fri Feb 22 11:55:31.059 [Balancer] caught exception while doing balance: DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { features: 1 }
m29000| Fri Feb 22 11:55:31.077 [conn3] end connection 165.225.128.186:59375 (7 connections now open)
m30998| Fri Feb 22 11:55:31.248 [Balancer] starting new replica set monitor for replica set test-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 11:55:31.249 [Balancer] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set test-rs0
m31100| Fri Feb 22 11:55:31.249 [initandlisten] connection accepted from 165.225.128.186:56311 #25 (6 connections now open)
m30998| Fri Feb 22 11:55:31.249 [Balancer] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from test-rs0/
m30998| Fri Feb 22 11:55:31.249 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set test-rs0
m31100| Fri Feb 22 11:55:31.249 [initandlisten] connection accepted from 165.225.128.186:41376 #26 (7 connections now open)
m30998| Fri Feb 22 11:55:31.249 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set test-rs0
m30998| Fri Feb 22 11:55:31.249 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set test-rs0
m30998| Fri Feb 22 11:55:31.250 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set test-rs0
m30998| Fri Feb 22 11:55:31.250 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set test-rs0
m31101| Fri Feb 22 11:55:31.250 [initandlisten] connection accepted from 165.225.128.186:61913 #11 (9 connections now open)
m30998| Fri Feb 22 11:55:31.250 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set test-rs0
m31102| Fri Feb 22 11:55:31.250 [initandlisten] connection accepted from 165.225.128.186:33059 #12 (9 connections now open)
m31100| Fri Feb 22 11:55:31.250 [initandlisten] connection accepted from 165.225.128.186:43138 #27 (8 connections now open)
m31100| Fri Feb 22 11:55:31.251 [conn25] end connection 165.225.128.186:56311 (7 connections now open)
m31101| Fri Feb 22 11:55:31.251 [initandlisten] connection accepted from 165.225.128.186:33698 #12 (10 connections now open)
m31102| Fri Feb 22 11:55:31.252 [initandlisten] connection accepted from 165.225.128.186:45538 #13 (10 connections now open)
m30998| Fri Feb 22 11:55:32.252 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534132252), ok: 1.0 }
m30998| Fri Feb 22 11:55:32.253 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534132253), ok: 1.0 }
m30998| Fri Feb 22 11:55:32.253 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534132253), ok: 1.0 }
m30998| Fri Feb 22 11:55:33.253 [Balancer] warning: No primary detected for set test-rs0
m30998| Fri Feb 22 11:55:33.253 [Balancer] replica set monitor for replica set test-rs0 started, address is test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 11:55:33.253 [ReplicaSetMonitorWatcher] starting
m31101| Fri Feb 22 11:55:33.432 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31102| Fri Feb 22 11:55:33.711 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m30998| Fri Feb 22 11:55:34.254 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534134254), ok: 1.0 }
m30998| Fri Feb 22 11:55:34.255 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534134254), ok: 1.0 }
m30998| Fri Feb 22 11:55:34.255 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534134255), ok: 1.0 }
Fri Feb 22 11:55:34.804 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
Fri Feb 22 11:55:34.804 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
Fri Feb 22 11:55:34.804 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 11:55:34.804 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 11:55:34.805 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 ok
m31100| Fri Feb 22 11:55:34.805 [initandlisten] connection accepted from 165.225.128.186:64999 #28 (8 connections now open)
Fri Feb 22 11:55:34.805 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
Fri Feb 22 11:55:34.805 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
Fri Feb 22 11:55:34.805 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 11:55:34.805 [ReplicaSetMonitorWatcher] Detected bad connection created at 1361534124802458 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 11:55:35.255 [Balancer] warning: No primary detected for set test-rs0
m30998| Fri Feb 22 11:55:35.269 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30998| Fri Feb 22 11:55:35.269 [Balancer] caught exception while doing balance: ReplicaSetMonitor no master found for set: test-rs0
m29000| Fri Feb 22 11:55:35.269 [conn6] end connection 165.225.128.186:59171 (6 connections now open)
m30999| Fri Feb 22 11:55:35.425 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
m30999| Fri Feb 22 11:55:35.425 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
m30999| Fri Feb 22 11:55:35.425 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m30999| Fri Feb 22 11:55:35.425 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 11:55:35.425 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 ok
m31100| Fri Feb 22 11:55:35.426 [initandlisten] connection accepted from 165.225.128.186:38520 #29 (9 connections now open)
m30999| Fri Feb 22 11:55:35.426 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
m30999| Fri Feb 22 11:55:35.426 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
m30999| Fri Feb 22 11:55:35.426 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m30999| Fri Feb 22 11:55:35.426 [ReplicaSetMonitorWatcher] Detected bad connection created at 1361534125423141 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 11:55:35.806 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534135806), ok: 1.0 }
m31100| Fri Feb 22 11:55:35.806 [initandlisten] connection accepted from 165.225.128.186:45451 #30 (10 connections now open)
Fri Feb 22 11:55:35.807 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534135807), ok: 1.0 }
Fri Feb 22 11:55:35.807 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534135807), ok: 1.0 }
m30999| Fri Feb 22 11:55:36.427 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534136427), ok: 1.0 }
m31100| Fri Feb 22 11:55:36.427 [initandlisten] connection accepted from 165.225.128.186:54087 #31 (11 connections now open)
m30999| Fri Feb 22 11:55:36.427 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534136427), ok: 1.0 }
m30999| Fri Feb 22 11:55:36.428 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534136428), ok: 1.0 }
m31101| Fri Feb 22 11:55:36.743 [conn4] end connection 165.225.128.186:44730 (9 connections now open)
m31101| Fri Feb 22 11:55:36.743 [initandlisten] connection accepted from 165.225.128.186:41298 #13 (10 connections now open)
Fri Feb 22 11:55:36.807 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs0
m29000| Fri Feb 22 11:55:37.059 [initandlisten] connection accepted from 165.225.128.186:57230 #9 (7 connections now open)
m31100| Fri Feb 22 11:55:37.060 [initandlisten] connection accepted from 165.225.128.186:42857 #32 (12 connections now open)
m30999| Fri Feb 22 11:55:37.061 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838' acquired, ts : 51275cb9c01a1ebec5b83ead
m30999| Fri Feb 22 11:55:37.061 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361534125:16838' unlocked.
m30999| Fri Feb 22 11:55:37.428 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs0
m31101| Fri Feb 22 11:55:39.314 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31102| Fri Feb 22 11:55:39.408 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31101| Fri Feb 22 11:55:40.340 [conn5] end connection 165.225.128.186:51559 (9 connections now open)
m31101| Fri Feb 22 11:55:40.340 [initandlisten] connection accepted from 165.225.128.186:49334 #14 (10 connections now open)
m29000| Fri Feb 22 11:55:41.270 [initandlisten] connection accepted from 165.225.128.186:62848 #10 (8 connections now open)
m30998| Fri Feb 22 11:55:42.271 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534142271), ok: 1.0 }
m30998| Fri Feb 22 11:55:42.272 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534142272), ok: 1.0 }
m30998| Fri Feb 22 11:55:42.272 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534142272), ok: 1.0 }
m31100| Fri Feb 22 11:55:42.555 [conn5] end connection 165.225.128.186:39570 (11 connections now open)
m31100| Fri Feb 22 11:55:42.556 [initandlisten] connection accepted from 165.225.128.186:41551 #33 (12 connections now open)
m30998| Fri Feb 22 11:55:43.272 [Balancer] warning: No primary detected for set test-rs0
m30998| Fri Feb 22 11:55:43.273 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30998| Fri Feb 22 11:55:43.273 [Balancer] caught exception while doing balance: ReplicaSetMonitor no master found for set: test-rs0
m29000| Fri Feb 22 11:55:43.273 [conn8] end connection 165.225.128.186:57189 (7 connections now open)
m30999| Fri Feb 22 11:55:44.063 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534144063), ok: 1.0 }
m30999| Fri Feb 22 11:55:44.064 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534144064), ok: 1.0 }
m30999| Fri Feb 22 11:55:44.064 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534144064), ok: 1.0 }
m30998| Fri Feb 22 11:55:44.255 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534144254), ok: 1.0 }
m30998| Fri Feb 22 11:55:44.255 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534144255), ok: 1.0 }
m30998| Fri Feb 22 11:55:44.255 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534144255), ok: 1.0 }
m31101| Fri Feb 22 11:55:44.979 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m30999| Fri Feb 22 11:55:45.064 [Balancer] warning: No primary detected for set test-rs0
m30999| Fri Feb 22 11:55:45.065 [Balancer] scoped connection to test-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 not being returned to the pool
m30999| Fri Feb 22 11:55:45.065 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30999| Fri Feb 
22 11:55:45.065 [Balancer] caught exception while doing balance: ReplicaSetMonitor no master found for set: test-rs0 m31100| Fri Feb 22 11:55:45.065 [conn32] end connection 165.225.128.186:42857 (11 connections now open) m29000| Fri Feb 22 11:55:45.065 [conn5] end connection 165.225.128.186:58427 (6 connections now open) m30998| Fri Feb 22 11:55:45.256 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs0 m31102| Fri Feb 22 11:55:45.373 [rsMgr] replSet info electSelf 2 m31101| Fri Feb 22 11:55:45.373 [conn13] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31102 (2) m31100| Fri Feb 22 11:55:45.373 [conn9] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31102 (2) m31102| Fri Feb 22 11:55:45.433 [rsMgr] replSet PRIMARY m30999| Fri Feb 22 11:55:45.463 [conn1] Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 11:55:45.463 [initandlisten] connection accepted from 165.225.128.186:63190 #14 (11 connections now open) ---- Running query which should succeed... 
---- Fri Feb 22 11:55:45.465 Primary for replica set test-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 11:55:45.466 [initandlisten] connection accepted from 165.225.128.186:42191 #15 (12 connections now open) m31102| Fri Feb 22 11:55:45.466 [conn15] replSet info stepping down as primary secs=3000 m31102| Fri Feb 22 11:55:45.466 [conn15] replSet relinquishing primary state m31102| Fri Feb 22 11:55:45.466 [conn15] replSet SECONDARY m31102| Fri Feb 22 11:55:45.466 [conn15] replSet closing client sockets after relinquishing primary m31102| Fri Feb 22 11:55:45.466 [conn1] end connection 127.0.0.1:35803 (11 connections now open) m31102| Fri Feb 22 11:55:45.466 [conn12] end connection 165.225.128.186:33059 (11 connections now open) m31102| Fri Feb 22 11:55:45.466 [conn5] end connection 165.225.128.186:56680 (11 connections now open) m31102| Fri Feb 22 11:55:45.466 [conn6] end connection 165.225.128.186:47040 (11 connections now open) Fri Feb 22 11:55:45.466 DBClientCursor::init call() failed m30999| Fri Feb 22 11:55:45.466 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] DBClientCursor::init call() failed m30999| Fri Feb 22 11:55:45.466 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] Detected bad connection created at 1361534125430088 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 11:55:45.466 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] WriteBackListener exception : DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31102 ns: admin.$cmd query: { writebacklisten: ObjectId('51275cadc01a1ebec5b83eab') } m31102| Fri Feb 22 11:55:45.466 [conn15] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:42191] m31102| Fri Feb 22 11:55:45.466 [conn13] end connection 165.225.128.186:45538 (7 connections now open) m31102| Fri Feb 22 11:55:45.466 [conn7] end connection 165.225.128.186:63766 (7 connections now open) m31102| Fri Feb 
22 11:55:45.466 [conn14] end connection 165.225.128.186:63190 (7 connections now open) m31102| Fri Feb 22 11:55:45.467 [conn8] end connection 165.225.128.186:65413 (6 connections now open) m30999| Fri Feb 22 11:55:45.467 [mongosMain] connection accepted from 165.225.128.186:51425 #2 (2 connections now open) m31102| Fri Feb 22 11:55:45.468 [initandlisten] connection accepted from 165.225.128.186:32777 #16 (4 connections now open) m31101| Fri Feb 22 11:55:45.468 [initandlisten] connection accepted from 165.225.128.186:36447 #15 (11 connections now open) eliot: null m30999| Fri Feb 22 11:55:45.469 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m31100| Fri Feb 22 11:55:45.470 [conn29] end connection 165.225.128.186:38520 (10 connections now open) m29000| Fri Feb 22 11:55:45.470 [conn4] end connection 165.225.128.186:49724 (5 connections now open) m31101| Fri Feb 22 11:55:45.470 [conn9] end connection 165.225.128.186:50028 (10 connections now open) m31100| Fri Feb 22 11:55:45.470 [conn31] end connection 165.225.128.186:54087 (9 connections now open) m31101| Fri Feb 22 11:55:45.470 [conn8] end connection 165.225.128.186:35980 (10 connections now open) m31102| Fri Feb 22 11:55:45.470 [conn16] end connection 165.225.128.186:32777 (3 connections now open) m29000| Fri Feb 22 11:55:45.470 [conn9] end connection 165.225.128.186:57230 (4 connections now open) m31101| Fri Feb 22 11:55:45.470 [conn15] end connection 165.225.128.186:36447 (8 connections now open) Fri Feb 22 11:55:46.469 shell: stopped mongo program on port 30999 m30998| Fri Feb 22 11:55:46.469 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m29000| Fri Feb 22 11:55:46.470 [conn10] end connection 165.225.128.186:62848 (3 connections now open) m29000| Fri Feb 22 11:55:46.470 [conn7] end connection 165.225.128.186:51699 (3 connections now open) m31101| Fri Feb 22 11:55:46.470 [conn12] end connection 165.225.128.186:33698 (7 connections now open) m31100| Fri Feb 22 11:55:46.470 
[conn26] end connection 165.225.128.186:41376 (8 connections now open) m31101| Fri Feb 22 11:55:46.470 [conn11] end connection 165.225.128.186:61913 (7 connections now open) m31100| Fri Feb 22 11:55:46.470 [conn27] end connection 165.225.128.186:43138 (8 connections now open) Fri Feb 22 11:55:46.807 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31102 Fri Feb 22 11:55:46.807 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31102 error: 9001 socket exception [1] server [165.225.128.186:31102] Fri Feb 22 11:55:46.807 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:55:46.808 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 11:55:46.809 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31102 ok m31102| Fri Feb 22 11:55:46.809 [initandlisten] connection accepted from 165.225.128.186:45478 #17 (4 connections now open) Fri Feb 22 11:55:46.809 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31102 Fri Feb 22 11:55:46.809 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31102 error: 9001 socket exception [1] server [165.225.128.186:31102] Fri Feb 22 11:55:46.809 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:55:46.809 [ReplicaSetMonitorWatcher] Detected bad connection created at 1361534124803865 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 11:55:47.469 shell: stopped mongo program on port 30998 Fri Feb 22 11:55:47.470 No db started on port: 30000 Fri Feb 22 11:55:47.470 shell: stopped mongo program on port 30000 ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number ReplSetTest stop *** Shutting down mongod in port 31100 *** m31100| Fri Feb 22 11:55:47.470 got signal 15 (Terminated), will terminate after current cmd ends m31100| Fri Feb 22 11:55:47.471 [interruptThread] now exiting m31100| Fri Feb 22 
11:55:47.471 dbexit: m31100| Fri Feb 22 11:55:47.471 [interruptThread] shutdown: going to close listening sockets... m31100| Fri Feb 22 11:55:47.471 [interruptThread] closing listening socket: 12 m31100| Fri Feb 22 11:55:47.471 [interruptThread] closing listening socket: 13 m31100| Fri Feb 22 11:55:47.471 [interruptThread] closing listening socket: 14 m31100| Fri Feb 22 11:55:47.471 [interruptThread] removing socket file: /tmp/mongodb-31100.sock m31100| Fri Feb 22 11:55:47.471 [interruptThread] shutdown: going to flush diaglog... m31100| Fri Feb 22 11:55:47.471 [interruptThread] shutdown: going to close sockets... m31100| Fri Feb 22 11:55:47.471 [interruptThread] shutdown: waiting for fs preallocator... m31100| Fri Feb 22 11:55:47.471 [interruptThread] shutdown: lock for final commit... m31100| Fri Feb 22 11:55:47.471 [interruptThread] shutdown: final commit... m31101| Fri Feb 22 11:55:47.471 [conn14] end connection 165.225.128.186:49334 (5 connections now open) m31102| Fri Feb 22 11:55:47.471 [conn10] end connection 165.225.128.186:47010 (3 connections now open) m31100| Fri Feb 22 11:55:47.471 [conn33] end connection 165.225.128.186:41551 (6 connections now open) m31100| Fri Feb 22 11:55:47.471 [conn9] end connection 165.225.128.186:63210 (6 connections now open) m31100| Fri Feb 22 11:55:47.471 [conn30] end connection 165.225.128.186:45451 (6 connections now open) m31100| Fri Feb 22 11:55:47.471 [conn28] end connection 165.225.128.186:64999 (6 connections now open) m31100| Fri Feb 22 11:55:47.471 [conn24] end connection 127.0.0.1:36973 (6 connections now open) m31100| Fri Feb 22 11:55:47.486 [interruptThread] shutdown: closing all files... m31100| Fri Feb 22 11:55:47.487 [interruptThread] closeAllFiles() finished m31100| Fri Feb 22 11:55:47.487 [interruptThread] journalCleanup... 
m31100| Fri Feb 22 11:55:47.487 [interruptThread] removeJournalFiles m31100| Fri Feb 22 11:55:47.487 dbexit: really exiting now Fri Feb 22 11:55:47.809 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100 Fri Feb 22 11:55:47.809 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100] Fri Feb 22 11:55:47.809 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed Fri Feb 22 11:55:47.809 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { ismaster: 1 } Fri Feb 22 11:55:47.809 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534147809), ok: 1.0 } Fri Feb 22 11:55:47.810 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "test-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361534147810), ok: 1.0 } m31102| Fri Feb 22 11:55:47.810 [initandlisten] connection accepted from 165.225.128.186:64533 #18 (4 connections now open) Fri Feb 22 11:55:48.471 shell: stopped mongo program on port 31100 ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number ReplSetTest stop *** Shutting down mongod in port 31101 *** m31101| Fri Feb 22 11:55:48.471 
got signal 15 (Terminated), will terminate after current cmd ends m31101| Fri Feb 22 11:55:48.471 [interruptThread] now exiting m31101| Fri Feb 22 11:55:48.471 dbexit: m31101| Fri Feb 22 11:55:48.471 [interruptThread] shutdown: going to close listening sockets... m31101| Fri Feb 22 11:55:48.471 [interruptThread] closing listening socket: 15 m31101| Fri Feb 22 11:55:48.471 [interruptThread] closing listening socket: 16 m31101| Fri Feb 22 11:55:48.471 [interruptThread] closing listening socket: 17 m31101| Fri Feb 22 11:55:48.471 [interruptThread] removing socket file: /tmp/mongodb-31101.sock m31101| Fri Feb 22 11:55:48.471 [interruptThread] shutdown: going to flush diaglog... m31101| Fri Feb 22 11:55:48.471 [interruptThread] shutdown: going to close sockets... m31101| Fri Feb 22 11:55:48.471 [interruptThread] shutdown: waiting for fs preallocator... m31101| Fri Feb 22 11:55:48.471 [interruptThread] shutdown: lock for final commit... m31101| Fri Feb 22 11:55:48.471 [interruptThread] shutdown: final commit... m31101| Fri Feb 22 11:55:48.472 [conn1] end connection 127.0.0.1:65099 (4 connections now open) m31101| Fri Feb 22 11:55:48.472 [conn13] end connection 165.225.128.186:41298 (4 connections now open) m31102| Fri Feb 22 11:55:48.472 [conn11] end connection 165.225.128.186:63874 (3 connections now open) m31101| Fri Feb 22 11:55:48.472 [conn6] end connection 165.225.128.186:34946 (4 connections now open) m31101| Fri Feb 22 11:55:48.472 [conn7] end connection 165.225.128.186:63869 (1 connection now open) m31101| Fri Feb 22 11:55:48.489 [interruptThread] shutdown: closing all files... m31101| Fri Feb 22 11:55:48.489 [interruptThread] closeAllFiles() finished m31101| Fri Feb 22 11:55:48.489 [interruptThread] journalCleanup... 
m31101| Fri Feb 22 11:55:48.489 [interruptThread] removeJournalFiles m31101| Fri Feb 22 11:55:48.489 dbexit: really exiting now m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] DBClientCursor::init call() failed m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] DBClientCursor::init call() failed m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 heartbeat failed, retrying m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31101 is down (or slow to respond): m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state DOWN m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31100 is down (or slow to respond): m31102| Fri Feb 22 11:55:48.745 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state DOWN Fri Feb 22 11:55:48.810 [ReplicaSetMonitorWatcher] warning: No primary detected for set test-rs0 Fri Feb 22 11:55:49.471 shell: stopped mongo program on port 31101 ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number ReplSetTest stop *** Shutting down mongod in port 31102 *** m31102| Fri Feb 22 11:55:49.471 got signal 15 (Terminated), will terminate after current cmd ends m31102| Fri Feb 22 11:55:49.471 [interruptThread] now exiting m31102| Fri Feb 22 11:55:49.472 dbexit: m31102| Fri Feb 22 11:55:49.472 [interruptThread] shutdown: going to close listening sockets... 
m31102| Fri Feb 22 11:55:49.472 [interruptThread] closing listening socket: 18 m31102| Fri Feb 22 11:55:49.472 [interruptThread] closing listening socket: 19 m31102| Fri Feb 22 11:55:49.472 [interruptThread] closing listening socket: 20 m31102| Fri Feb 22 11:55:49.472 [interruptThread] removing socket file: /tmp/mongodb-31102.sock m31102| Fri Feb 22 11:55:49.472 [interruptThread] shutdown: going to flush diaglog... m31102| Fri Feb 22 11:55:49.472 [interruptThread] shutdown: going to close sockets... m31102| Fri Feb 22 11:55:49.472 [interruptThread] shutdown: waiting for fs preallocator... m31102| Fri Feb 22 11:55:49.472 [interruptThread] shutdown: lock for final commit... m31102| Fri Feb 22 11:55:49.472 [interruptThread] shutdown: final commit... m31102| Fri Feb 22 11:55:49.472 [conn18] end connection 165.225.128.186:64533 (2 connections now open) m31102| Fri Feb 22 11:55:49.472 [conn17] end connection 165.225.128.186:45478 (2 connections now open) m31102| Fri Feb 22 11:55:49.487 [interruptThread] shutdown: closing all files... m31102| Fri Feb 22 11:55:49.488 [interruptThread] closeAllFiles() finished m31102| Fri Feb 22 11:55:49.488 [interruptThread] journalCleanup... m31102| Fri Feb 22 11:55:49.488 [interruptThread] removeJournalFiles m31102| Fri Feb 22 11:55:49.488 dbexit: really exiting now Fri Feb 22 11:55:50.472 shell: stopped mongo program on port 31102 ReplSetTest stopSet deleting all dbpaths ReplSetTest stopSet *** Shut down repl set - test worked **** m29000| Fri Feb 22 11:55:50.480 got signal 15 (Terminated), will terminate after current cmd ends m29000| Fri Feb 22 11:55:50.480 [interruptThread] now exiting m29000| Fri Feb 22 11:55:50.480 dbexit: m29000| Fri Feb 22 11:55:50.480 [interruptThread] shutdown: going to close listening sockets... 
m29000| Fri Feb 22 11:55:50.480 [interruptThread] closing listening socket: 27
m29000| Fri Feb 22 11:55:50.480 [interruptThread] closing listening socket: 28
m29000| Fri Feb 22 11:55:50.481 [interruptThread] closing listening socket: 29
m29000| Fri Feb 22 11:55:50.481 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Fri Feb 22 11:55:50.481 [interruptThread] shutdown: going to flush diaglog...
m29000| Fri Feb 22 11:55:50.481 [interruptThread] shutdown: going to close sockets...
m29000| Fri Feb 22 11:55:50.481 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Fri Feb 22 11:55:50.481 [interruptThread] shutdown: lock for final commit...
m29000| Fri Feb 22 11:55:50.481 [interruptThread] shutdown: final commit...
m29000| Fri Feb 22 11:55:50.481 [conn1] end connection 127.0.0.1:49790 (1 connection now open)
m29000| Fri Feb 22 11:55:50.481 [conn2] end connection 165.225.128.186:35442 (0 connections now open)
m29000| Fri Feb 22 11:55:50.490 [interruptThread] shutdown: closing all files...
m29000| Fri Feb 22 11:55:50.491 [interruptThread] closeAllFiles() finished
m29000| Fri Feb 22 11:55:50.491 [interruptThread] journalCleanup...
m29000| Fri Feb 22 11:55:50.491 [interruptThread] removeJournalFiles
m29000| Fri Feb 22 11:55:50.491 dbexit: really exiting now
Fri Feb 22 11:55:51.481 shell: stopped mongo program on port 29000
*** ShardingTest test completed successfully in 65.28 seconds ***
Fri Feb 22 11:55:51.495 [conn112] end connection 127.0.0.1:34876 (0 connections now open)
1.0921 minutes
Fri Feb 22 11:55:51.516 [initandlisten] connection accepted from 127.0.0.1:53518 #113 (1 connection now open)
Fri Feb 22 11:55:51.517 [conn113] end connection 127.0.0.1:53518 (0 connections now open)
*******************************************
Test : replsets_killop.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replsets_killop.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replsets_killop.js";TestData.testFile = "replsets_killop.js";TestData.testName = "replsets_killop";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 11:55:51 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 11:55:51.694 [initandlisten] connection accepted from 127.0.0.1:38284 #114 (1 connection now open)
null
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
{
  "useHostName" : true,
  "oplogSize" : 40,
  "keyFile" : undefined,
  "port" : 31000,
  "noprealloc" : "",
  "smallfiles" : "",
  "rest" : "",
  "replSet" : "test",
  "dbpath" : "$set-$node",
  "restart" : undefined,
  "pathOpts" : {
    "node" : 0,
    "set" : "test"
  }
}
ReplSetTest Starting....
Resetting db path '/data/db/test-0' Fri Feb 22 11:55:51.714 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet test --dbpath /data/db/test-0 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Fri Feb 22 11:55:51.804 [initandlisten] MongoDB starting : pid=2965 port=31000 dbpath=/data/db/test-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 11:55:51.804 [initandlisten] m31000| Fri Feb 22 11:55:51.804 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 11:55:51.804 [initandlisten] ** uses to detect impending page faults. m31000| Fri Feb 22 11:55:51.804 [initandlisten] ** This may result in slower performance for certain use cases m31000| Fri Feb 22 11:55:51.804 [initandlisten] m31000| Fri Feb 22 11:55:51.804 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31000| Fri Feb 22 11:55:51.804 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31000| Fri Feb 22 11:55:51.804 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31000| Fri Feb 22 11:55:51.805 [initandlisten] allocator: system m31000| Fri Feb 22 11:55:51.805 [initandlisten] options: { dbpath: "/data/db/test-0", noprealloc: true, oplogSize: 40, port: 31000, replSet: "test", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Fri Feb 22 11:55:51.805 [initandlisten] journal dir=/data/db/test-0/journal m31000| Fri Feb 22 11:55:51.805 [initandlisten] recover : no journal files present, no recovery needed m31000| Fri Feb 22 11:55:51.819 [FileAllocator] allocating new datafile /data/db/test-0/local.ns, filling with zeroes... 
m31000| Fri Feb 22 11:55:51.819 [FileAllocator] creating directory /data/db/test-0/_tmp
m31000| Fri Feb 22 11:55:51.820 [FileAllocator] done allocating datafile /data/db/test-0/local.ns, size: 16MB, took 0 secs
m31000| Fri Feb 22 11:55:51.820 [FileAllocator] allocating new datafile /data/db/test-0/local.0, filling with zeroes...
m31000| Fri Feb 22 11:55:51.820 [FileAllocator] done allocating datafile /data/db/test-0/local.0, size: 16MB, took 0 secs
m31000| Fri Feb 22 11:55:51.823 [websvr] admin web console waiting for connections on port 32000
m31000| Fri Feb 22 11:55:51.823 [initandlisten] waiting for connections on port 31000
m31000| Fri Feb 22 11:55:51.826 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31000| Fri Feb 22 11:55:51.826 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31000| Fri Feb 22 11:55:51.917 [initandlisten] connection accepted from 127.0.0.1:65197 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number
{
  "useHostName" : true,
  "oplogSize" : 40,
  "keyFile" : undefined,
  "port" : 31001,
  "noprealloc" : "",
  "smallfiles" : "",
  "rest" : "",
  "replSet" : "test",
  "dbpath" : "$set-$node",
  "restart" : undefined,
  "pathOpts" : {
    "node" : 1,
    "set" : "test"
  }
}
ReplSetTest Starting....
Resetting db path '/data/db/test-1' Fri Feb 22 11:55:51.925 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet test --dbpath /data/db/test-1 --setParameter enableTestCommands=1 m31001| note: noprealloc may hurt performance in many applications m31001| Fri Feb 22 11:55:52.014 [initandlisten] MongoDB starting : pid=2966 port=31001 dbpath=/data/db/test-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31001| Fri Feb 22 11:55:52.014 [initandlisten] m31001| Fri Feb 22 11:55:52.014 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31001| Fri Feb 22 11:55:52.014 [initandlisten] ** uses to detect impending page faults. m31001| Fri Feb 22 11:55:52.014 [initandlisten] ** This may result in slower performance for certain use cases m31001| Fri Feb 22 11:55:52.014 [initandlisten] m31001| Fri Feb 22 11:55:52.014 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31001| Fri Feb 22 11:55:52.014 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31001| Fri Feb 22 11:55:52.014 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31001| Fri Feb 22 11:55:52.014 [initandlisten] allocator: system m31001| Fri Feb 22 11:55:52.014 [initandlisten] options: { dbpath: "/data/db/test-1", noprealloc: true, oplogSize: 40, port: 31001, replSet: "test", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31001| Fri Feb 22 11:55:52.015 [initandlisten] journal dir=/data/db/test-1/journal m31001| Fri Feb 22 11:55:52.015 [initandlisten] recover : no journal files present, no recovery needed m31001| Fri Feb 22 11:55:52.031 [FileAllocator] allocating new datafile /data/db/test-1/local.ns, filling with zeroes... 
m31001| Fri Feb 22 11:55:52.031 [FileAllocator] creating directory /data/db/test-1/_tmp
m31001| Fri Feb 22 11:55:52.031 [FileAllocator] done allocating datafile /data/db/test-1/local.ns, size: 16MB, took 0 secs
m31001| Fri Feb 22 11:55:52.031 [FileAllocator] allocating new datafile /data/db/test-1/local.0, filling with zeroes...
m31001| Fri Feb 22 11:55:52.032 [FileAllocator] done allocating datafile /data/db/test-1/local.0, size: 16MB, took 0 secs
m31001| Fri Feb 22 11:55:52.035 [initandlisten] waiting for connections on port 31001
m31001| Fri Feb 22 11:55:52.035 [websvr] admin web console waiting for connections on port 32001
m31001| Fri Feb 22 11:55:52.038 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31001| Fri Feb 22 11:55:52.038 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31001| Fri Feb 22 11:55:52.127 [initandlisten] connection accepted from 127.0.0.1:37593 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31000, 31001, 31002 ] 31002 number
{
  "useHostName" : true,
  "oplogSize" : 40,
  "keyFile" : undefined,
  "port" : 31002,
  "noprealloc" : "",
  "smallfiles" : "",
  "rest" : "",
  "replSet" : "test",
  "dbpath" : "$set-$node",
  "restart" : undefined,
  "pathOpts" : {
    "node" : 2,
    "set" : "test"
  }
}
ReplSetTest Starting....
Resetting db path '/data/db/test-2' Fri Feb 22 11:55:52.132 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31002 --noprealloc --smallfiles --rest --replSet test --dbpath /data/db/test-2 --setParameter enableTestCommands=1 m31002| note: noprealloc may hurt performance in many applications m31002| Fri Feb 22 11:55:52.226 [initandlisten] MongoDB starting : pid=2967 port=31002 dbpath=/data/db/test-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31002| Fri Feb 22 11:55:52.226 [initandlisten] m31002| Fri Feb 22 11:55:52.226 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31002| Fri Feb 22 11:55:52.226 [initandlisten] ** uses to detect impending page faults. m31002| Fri Feb 22 11:55:52.226 [initandlisten] ** This may result in slower performance for certain use cases m31002| Fri Feb 22 11:55:52.227 [initandlisten] m31002| Fri Feb 22 11:55:52.227 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31002| Fri Feb 22 11:55:52.227 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31002| Fri Feb 22 11:55:52.227 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31002| Fri Feb 22 11:55:52.227 [initandlisten] allocator: system m31002| Fri Feb 22 11:55:52.227 [initandlisten] options: { dbpath: "/data/db/test-2", noprealloc: true, oplogSize: 40, port: 31002, replSet: "test", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31002| Fri Feb 22 11:55:52.227 [initandlisten] journal dir=/data/db/test-2/journal m31002| Fri Feb 22 11:55:52.227 [initandlisten] recover : no journal files present, no recovery needed m31002| Fri Feb 22 11:55:52.242 [FileAllocator] allocating new datafile /data/db/test-2/local.ns, filling with zeroes... 
m31002| Fri Feb 22 11:55:52.242 [FileAllocator] creating directory /data/db/test-2/_tmp
m31002| Fri Feb 22 11:55:52.242 [FileAllocator] done allocating datafile /data/db/test-2/local.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:55:52.242 [FileAllocator] allocating new datafile /data/db/test-2/local.0, filling with zeroes...
m31002| Fri Feb 22 11:55:52.242 [FileAllocator] done allocating datafile /data/db/test-2/local.0, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:55:52.246 [initandlisten] waiting for connections on port 31002
m31002| Fri Feb 22 11:55:52.246 [websvr] admin web console waiting for connections on port 32002
m31002| Fri Feb 22 11:55:52.248 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31002| Fri Feb 22 11:55:52.248 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31002| Fri Feb 22 11:55:52.333 [initandlisten] connection accepted from 127.0.0.1:59906 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ]
{
  "replSetInitiate" : {
    "_id" : "test",
    "members" : [
      {
        "_id" : 0,
        "host" : "bs-smartos-x86-64-1.10gen.cc:31000"
      },
      {
        "_id" : 1,
        "host" : "bs-smartos-x86-64-1.10gen.cc:31001"
      },
      {
        "_id" : 2,
        "host" : "bs-smartos-x86-64-1.10gen.cc:31002"
      }
    ]
  }
}
m31000| Fri Feb 22 11:55:52.338 [conn1] replSet replSetInitiate admin command received from client
m31000| Fri Feb 22 11:55:52.338 [conn1] replSet replSetInitiate config object parses ok, 3 members specified
m31000| Fri Feb 22 11:55:52.339 [initandlisten] connection accepted from 165.225.128.186:60234 #2 (2 connections now open)
m31001| Fri Feb 22 11:55:52.339 [initandlisten] connection accepted from 165.225.128.186:40979 #2 (2 connections now open)
m31002| Fri Feb 22 11:55:52.341 [initandlisten] connection accepted from 165.225.128.186:37646 #2 (2 connections now
open) m31000| Fri Feb 22 11:55:52.342 [conn1] replSet replSetInitiate all members seem up m31000| Fri Feb 22 11:55:52.342 [conn1] ****** m31000| Fri Feb 22 11:55:52.342 [conn1] creating replication oplog of size: 40MB... m31000| Fri Feb 22 11:55:52.342 [FileAllocator] allocating new datafile /data/db/test-0/local.1, filling with zeroes... m31000| Fri Feb 22 11:55:52.342 [FileAllocator] done allocating datafile /data/db/test-0/local.1, size: 64MB, took 0 secs m31000| Fri Feb 22 11:55:52.353 [conn2] end connection 165.225.128.186:60234 (1 connection now open) m31000| Fri Feb 22 11:55:52.354 [conn1] ****** m31000| Fri Feb 22 11:55:52.354 [conn1] replSet info saving a newer config version to local.system.replset m31000| Fri Feb 22 11:55:52.370 [conn1] replSet saveConfigLocally done m31000| Fri Feb 22 11:55:52.370 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31000| Fri Feb 22 11:56:01.826 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 11:56:01.826 [rsStart] replSet STARTUP2 m31000| Fri Feb 22 11:56:01.826 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31001| Fri Feb 22 11:56:02.039 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 11:56:02.039 [initandlisten] connection accepted from 165.225.128.186:34342 #3 (2 connections now open) m31001| Fri Feb 22 11:56:02.040 [initandlisten] connection accepted from 165.225.128.186:60736 #3 (3 connections now open) m31001| Fri Feb 22 11:56:02.040 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 11:56:02.040 [rsStart] replSet got config version 1 from a remote, saving locally m31001| Fri Feb 22 11:56:02.040 [rsStart] replSet info saving a newer config version to local.system.replset m31001| Fri Feb 22 11:56:02.044 [rsStart] replSet saveConfigLocally done m31001| Fri Feb 22 
11:56:02.045 [rsStart] replSet STARTUP2 m31001| Fri Feb 22 11:56:02.045 [rsSync] ****** m31001| Fri Feb 22 11:56:02.045 [rsSync] creating replication oplog of size: 40MB... m31001| Fri Feb 22 11:56:02.045 [FileAllocator] allocating new datafile /data/db/test-1/local.1, filling with zeroes... m31001| Fri Feb 22 11:56:02.046 [FileAllocator] done allocating datafile /data/db/test-1/local.1, size: 64MB, took 0 secs m31001| Fri Feb 22 11:56:02.054 [conn3] end connection 165.225.128.186:60736 (2 connections now open) m31001| Fri Feb 22 11:56:02.054 [rsSync] ****** m31001| Fri Feb 22 11:56:02.054 [rsSync] replSet initial sync pending m31001| Fri Feb 22 11:56:02.054 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31002| Fri Feb 22 11:56:02.248 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31000| Fri Feb 22 11:56:02.827 [rsSync] replSet SECONDARY m31000| Fri Feb 22 11:56:03.826 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31000| Fri Feb 22 11:56:03.827 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 thinks that we are down m31000| Fri Feb 22 11:56:03.827 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state STARTUP2 m31000| Fri Feb 22 11:56:03.827 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31000 is electable' m31000| Fri Feb 22 11:56:03.827 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31000 is electable' m31001| Fri Feb 22 11:56:04.041 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31001| Fri Feb 22 11:56:04.041 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31002| Fri Feb 22 11:56:04.041 [initandlisten] connection accepted from 165.225.128.186:59078 #3 (3 connections now open) m31001| Fri 
Feb 22 11:56:04.041 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31000| Fri Feb 22 11:56:09.828 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes m31002| Fri Feb 22 11:56:12.249 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 11:56:12.250 [initandlisten] connection accepted from 165.225.128.186:35587 #4 (3 connections now open) m31002| Fri Feb 22 11:56:12.250 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 11:56:12.250 [initandlisten] connection accepted from 165.225.128.186:40619 #4 (3 connections now open) m31002| Fri Feb 22 11:56:12.251 [initandlisten] connection accepted from 165.225.128.186:41479 #4 (4 connections now open) m31002| Fri Feb 22 11:56:12.252 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 11:56:12.252 [rsStart] replSet got config version 1 from a remote, saving locally m31002| Fri Feb 22 11:56:12.252 [rsStart] replSet info saving a newer config version to local.system.replset m31002| Fri Feb 22 11:56:12.255 [rsStart] replSet saveConfigLocally done m31002| Fri Feb 22 11:56:12.256 [rsStart] replSet STARTUP2 m31002| Fri Feb 22 11:56:12.256 [rsSync] ****** m31002| Fri Feb 22 11:56:12.256 [rsSync] creating replication oplog of size: 40MB... m31002| Fri Feb 22 11:56:12.257 [FileAllocator] allocating new datafile /data/db/test-2/local.1, filling with zeroes... 
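The `replSetInitiate` document logged above is the whole replica-set configuration. As a sketch for reference (hostnames and ports taken from this log; in a real mongo shell you would pass this object to `rs.initiate(cfg)`, which issues the same admin command):

```javascript
// Replica set config equivalent to the { replSetInitiate: ... }
// command shown in the log. Assumption: three mongod instances are
// already listening on ports 31000-31002 of this host.
var cfg = {
  _id: "test", // must match the --replSet name passed to mongod
  members: [
    { _id: 0, host: "bs-smartos-x86-64-1.10gen.cc:31000" },
    { _id: 1, host: "bs-smartos-x86-64-1.10gen.cc:31001" },
    { _id: 2, host: "bs-smartos-x86-64-1.10gen.cc:31002" }
  ]
};
```

The log then shows the expected flow: the member receiving the command saves the config locally, the others fetch "config version 1 from a remote", and each allocates its oplog before leaving STARTUP2.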
m31002| Fri Feb 22 11:56:12.257 [FileAllocator] done allocating datafile /data/db/test-2/local.1, size: 64MB, took 0 secs
m31002| Fri Feb 22 11:56:12.266 [rsSync] ******
m31002| Fri Feb 22 11:56:12.266 [rsSync] replSet initial sync pending
m31002| Fri Feb 22 11:56:12.266 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31002| Fri Feb 22 11:56:12.271 [conn4] end connection 165.225.128.186:41479 (3 connections now open)
m31000| Fri Feb 22 11:56:13.828 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 thinks that we are down
m31000| Fri Feb 22 11:56:13.828 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state STARTUP2
m31000| Fri Feb 22 11:56:13.828 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31000 is electable'
m31001| Fri Feb 22 11:56:14.043 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 thinks that we are down
m31001| Fri Feb 22 11:56:14.043 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state STARTUP2
m31002| Fri Feb 22 11:56:14.252 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 11:56:14.252 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 11:56:14.252 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 11:56:14.252 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state STARTUP2
m31001| Fri Feb 22 11:56:15.828 [conn2] end connection 165.225.128.186:40979 (2 connections now open)
m31001| Fri Feb 22 11:56:15.829 [initandlisten] connection accepted from 165.225.128.186:39879 #5 (3 connections now open)
m31000| Fri Feb 22 11:56:18.043 [conn3] end connection 165.225.128.186:34342 (2 connections now open)
m31000| Fri Feb 22 11:56:18.043 [initandlisten] connection accepted from 165.225.128.186:64990 #5 (3 connections now open)
m31001| Fri Feb 22 11:56:18.055 [rsSync] replSet initial sync pending
m31001| Fri Feb 22 11:56:18.055 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:18.055 [initandlisten] connection accepted from 165.225.128.186:61317 #6 (4 connections now open)
m31001| Fri Feb 22 11:56:18.063 [rsSync] build index local.me { _id: 1 }
m31001| Fri Feb 22 11:56:18.067 [rsSync] build index done. scanned 0 total records. 0.003 secs
m31001| Fri Feb 22 11:56:18.068 [rsSync] build index local.replset.minvalid { _id: 1 }
m31001| Fri Feb 22 11:56:18.069 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31001| Fri Feb 22 11:56:18.070 [rsSync] replSet initial sync drop all databases
m31001| Fri Feb 22 11:56:18.070 [rsSync] dropAllDatabasesExceptLocal 1
m31001| Fri Feb 22 11:56:18.070 [rsSync] replSet initial sync clone all databases
m31001| Fri Feb 22 11:56:18.070 [rsSync] replSet initial sync data copy, starting syncup
m31001| Fri Feb 22 11:56:18.070 [rsSync] oplog sync 1 of 3
m31001| Fri Feb 22 11:56:18.070 [rsSync] oplog sync 2 of 3
m31001| Fri Feb 22 11:56:18.070 [rsSync] replSet initial sync building indexes
m31001| Fri Feb 22 11:56:18.070 [rsSync] oplog sync 3 of 3
m31001| Fri Feb 22 11:56:18.071 [rsSync] replSet initial sync finishing up
m31001| Fri Feb 22 11:56:18.076 [rsSync] replSet set minValid=51275cc8:1
m31001| Fri Feb 22 11:56:18.080 [rsSync] replSet RECOVERING
m31001| Fri Feb 22 11:56:18.080 [rsSync] replSet initial sync done
m31000| Fri Feb 22 11:56:18.080 [conn6] end connection 165.225.128.186:61317 (3 connections now open)
m31002| Fri Feb 22 11:56:18.253 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state RECOVERING
m31000| Fri Feb 22 11:56:19.829 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 11:56:19.829 [conn2] replSet RECOVERING
m31001| Fri Feb 22 11:56:19.829 [conn5] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31002| Fri Feb 22 11:56:19.829 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31000| Fri Feb 22 11:56:19.829 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state RECOVERING
m31001| Fri Feb 22 11:56:20.044 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state RECOVERING
m31001| Fri Feb 22 11:56:20.080 [rsSync] replSet SECONDARY
m31002| Fri Feb 22 11:56:20.253 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 11:56:20.828 [rsMgr] replSet PRIMARY
m31000| Fri Feb 22 11:56:21.829 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state RECOVERING
m31000| Fri Feb 22 11:56:21.830 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31001| Fri Feb 22 11:56:22.044 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31001| Fri Feb 22 11:56:22.046 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:22.047 [initandlisten] connection accepted from 165.225.128.186:34845 #7 (4 connections now open)
m31001| Fri Feb 22 11:56:22.080 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:22.081 [initandlisten] connection accepted from 165.225.128.186:62253 #8 (5 connections now open)
m31002| Fri Feb 22 11:56:22.253 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31000| Fri Feb 22 11:56:23.089 [slaveTracking] build index local.slaves { _id: 1 }
m31000| Fri Feb 22 11:56:23.091 [slaveTracking] build index done. scanned 0 total records. 0.002 secs
m31000| Fri Feb 22 11:56:28.254 [conn4] end connection 165.225.128.186:35587 (4 connections now open)
m31000| Fri Feb 22 11:56:28.254 [initandlisten] connection accepted from 165.225.128.186:45840 #9 (5 connections now open)
m31002| Fri Feb 22 11:56:28.266 [rsSync] replSet initial sync pending
m31002| Fri Feb 22 11:56:28.266 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:28.267 [initandlisten] connection accepted from 165.225.128.186:36298 #10 (6 connections now open)
m31002| Fri Feb 22 11:56:28.275 [rsSync] build index local.me { _id: 1 }
m31002| Fri Feb 22 11:56:28.279 [rsSync] build index done. scanned 0 total records. 0.003 secs
m31002| Fri Feb 22 11:56:28.280 [rsSync] build index local.replset.minvalid { _id: 1 }
m31002| Fri Feb 22 11:56:28.282 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 11:56:28.282 [rsSync] replSet initial sync drop all databases
m31002| Fri Feb 22 11:56:28.282 [rsSync] dropAllDatabasesExceptLocal 1
m31002| Fri Feb 22 11:56:28.282 [rsSync] replSet initial sync clone all databases
m31002| Fri Feb 22 11:56:28.282 [rsSync] replSet initial sync data copy, starting syncup
m31002| Fri Feb 22 11:56:28.282 [rsSync] oplog sync 1 of 3
m31002| Fri Feb 22 11:56:28.282 [rsSync] oplog sync 2 of 3
m31002| Fri Feb 22 11:56:28.282 [rsSync] replSet initial sync building indexes
m31002| Fri Feb 22 11:56:28.282 [rsSync] oplog sync 3 of 3
m31002| Fri Feb 22 11:56:28.283 [rsSync] replSet initial sync finishing up
m31002| Fri Feb 22 11:56:28.288 [rsSync] replSet set minValid=51275cc8:1
m31002| Fri Feb 22 11:56:28.292 [rsSync] replSet initial sync done
m31000| Fri Feb 22 11:56:28.292 [conn10] end connection 165.225.128.186:36298 (5 connections now open)
m31002| Fri Feb 22 11:56:29.257 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:29.258 [initandlisten] connection accepted from 165.225.128.186:64958 #11 (6 connections now open)
m31002| Fri Feb 22 11:56:29.293 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:29.294 [initandlisten] connection accepted from 165.225.128.186:63124 #12 (7 connections now open)
m31002| Fri Feb 22 11:56:30.293 [rsSync] replSet SECONDARY
m31000| Fri Feb 22 11:56:30.304 [FileAllocator] allocating new datafile /data/db/test-0/test.ns, filling with zeroes...
m31000| Fri Feb 22 11:56:30.304 [FileAllocator] done allocating datafile /data/db/test-0/test.ns, size: 16MB, took 0 secs
m31000| Fri Feb 22 11:56:30.304 [FileAllocator] allocating new datafile /data/db/test-0/test.0, filling with zeroes...
m31000| Fri Feb 22 11:56:30.304 [FileAllocator] done allocating datafile /data/db/test-0/test.0, size: 16MB, took 0 secs
m31000| Fri Feb 22 11:56:30.308 [conn1] build index test.test { _id: 1 }
m31000| Fri Feb 22 11:56:30.308 [conn1] build index done. scanned 0 total records. 0 secs
m31002| Fri Feb 22 11:56:30.309 [FileAllocator] allocating new datafile /data/db/test-2/test.ns, filling with zeroes...
m31001| Fri Feb 22 11:56:30.309 [FileAllocator] allocating new datafile /data/db/test-1/test.ns, filling with zeroes...
m31002| Fri Feb 22 11:56:30.309 [FileAllocator] done allocating datafile /data/db/test-2/test.ns, size: 16MB, took 0 secs
m31001| Fri Feb 22 11:56:30.309 [FileAllocator] done allocating datafile /data/db/test-1/test.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 11:56:30.310 [FileAllocator] allocating new datafile /data/db/test-2/test.0, filling with zeroes...
m31001| Fri Feb 22 11:56:30.310 [FileAllocator] allocating new datafile /data/db/test-1/test.0, filling with zeroes...
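Each mongod entry in this interleaved output carries a fixed prefix: `m<port>| <weekday> <month> <day> <time> [<context>] <message>` (shell subprocesses use `sh<pid>|`, and harness lines such as `ReplSetTest ...` have no prefix at all). A small parser sketch for pulling these fields apart (a hypothetical helper, not part of smoke.py or ReplSetTest):

```javascript
// Split one interleaved test-log line into its parts, e.g.
//   "m31002| Fri Feb 22 11:56:30.293 [rsSync] replSet SECONDARY"
// Lines without the full mongod-style prefix (harness output,
// "sh3029| connecting to: ..." banners) return null.
function parseLogLine(line) {
  var re = /^(m\d+|sh\d+)\| (\w{3} \w{3} \d+ [\d:.]+) \[(\w+ ?\w*)\] (.*)$/;
  var m = re.exec(line);
  if (!m) return null;
  return { source: m[1], time: m[2], context: m[3], message: m[4] };
}
```

The `source` field is what lets a reader demultiplex the three members (m31000/m31001/m31002) when diagnosing ordering issues like the election and initial-sync handoff above.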
m31002| Fri Feb 22 11:56:30.310 [FileAllocator] done allocating datafile /data/db/test-2/test.0, size: 16MB, took 0 secs
m31001| Fri Feb 22 11:56:30.310 [FileAllocator] done allocating datafile /data/db/test-1/test.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361534190000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361534190000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 11:56:30.314 [repl writer worker 1] build index test.test { _id: 1 }
m31001| Fri Feb 22 11:56:30.314 [repl writer worker 1] build index test.test { _id: 1 }
m31002| Fri Feb 22 11:56:30.315 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
m31001| Fri Feb 22 11:56:30.315 [repl writer worker 1] build index done. scanned 0 total records. 0 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31001, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31002
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31002, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361534190000, "i" : 1 }
Fri Feb 22 11:56:30.322 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replsets_killop.js", "testFile" : "replsets_killop.js", "testName" : "replsets_killop", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 1; i < 100000; ++i ) { db.test.save( { a:i } ); sleep( 1 ); } db.getLastError(); bs-smartos-x86-64-1.10gen.cc:31000/admin
m31000| Fri Feb 22 11:56:30.324 [conn1] going to kill op: op: 153.0
m31000| Fri Feb 22 11:56:30.324 [conn1] going to kill op: op: 155.0
m31000| Fri Feb 22 11:56:30.324 [conn1] going to kill op: op: 151.0
m31000| Fri Feb 22 11:56:30.325 [conn1] going to kill op: op: 150.0
sh3029| MongoDB shell version: 2.4.0-rc1-pre-
sh3029| connecting to: bs-smartos-x86-64-1.10gen.cc:31000/admin
m31000| Fri Feb 22 11:56:30.395 [initandlisten] connection accepted from 165.225.128.186:52721 #13 (8 connections now open)
m31000| Fri Feb 22 11:56:30.398 [conn7] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.398 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.398 [conn7] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534152000|1 } } cursorid:129481173618412 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:128 nreturned:0 reslen:20 89ms
m31000| Fri Feb 22 11:56:30.398 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534152000|1 } } cursorid:160470247761602 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:144 nreturned:0 reslen:20 82ms
m31000| Fri Feb 22 11:56:30.398 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.398 [conn11] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.398 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534152000|1 } } cursorid:129613785636769 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:218 nreturned:0 reslen:20 82ms
m31000| Fri Feb 22 11:56:30.398 [conn11] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534152000|1 } } cursorid:160337363798213 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:212 nreturned:0 reslen:20 89ms
m31000| Fri Feb 22 11:56:30.398 [conn7] getMore: cursorid not found local.oplog.rs 129481173618412
m31001| Fri Feb 22 11:56:30.398 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:30.398 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.398 [conn7] end connection 165.225.128.186:34845 (7 connections now open)
m31000| Fri Feb 22 11:56:30.398 [conn11] end connection 165.225.128.186:64958 (7 connections now open)
m31002| Fri Feb 22 11:56:30.398 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:56:30.399 [initandlisten] connection accepted from 165.225.128.186:47467 #14 (7 connections now open)
m31000| Fri Feb 22 11:56:30.399 [initandlisten] connection accepted from 165.225.128.186:64987 #15 (8 connections now open)
m31002| Fri Feb 22 11:56:30.400 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:30.400 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.425 [conn1] going to kill op: op: 176.0
m31000| Fri Feb 22 11:56:30.426 [conn1] going to kill op: op: 175.0
m31000| Fri Feb 22 11:56:30.430 [conn15] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.430 [conn15] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|1 } } cursorid:165165034750683 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:30.430 [conn14] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.430 [conn14] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|1 } } cursorid:165163076739122 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:30.430 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.430 [conn15] end connection 165.225.128.186:64987 (7 connections now open)
m31001| Fri Feb 22 11:56:30.430 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.430 [conn14] end connection 165.225.128.186:47467 (6 connections now open)
m31000| Fri Feb 22 11:56:30.430 [initandlisten] connection accepted from 165.225.128.186:34806 #16 (7 connections now open)
m31000| Fri Feb 22 11:56:30.430 [initandlisten] connection accepted from 165.225.128.186:46334 #17 (8 connections now open)
m31000| Fri Feb 22 11:56:30.526 [conn1] going to kill op: op: 214.0
m31000| Fri Feb 22 11:56:30.527 [conn1] going to kill op: op: 213.0
m31000| Fri Feb 22 11:56:30.533 [conn17] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.533 [conn16] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.533 [conn17] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|4 } } cursorid:165300470848629 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:30.533 [conn16] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|4 } } cursorid:165302140413504 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:87 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:30.533 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.533 [conn16] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31000| Fri Feb 22 11:56:30.533 [conn17] end connection 165.225.128.186:46334 (7 connections now open)
m31002| Fri Feb 22 11:56:30.533 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.533 [conn16] end connection 165.225.128.186:34806 (6 connections now open)
m31000| Fri Feb 22 11:56:30.533 [initandlisten] connection accepted from 165.225.128.186:55463 #18 (7 connections now open)
m31000| Fri Feb 22 11:56:30.533 [initandlisten] connection accepted from 165.225.128.186:44878 #19 (8 connections now open)
m31000| Fri Feb 22 11:56:30.627 [conn1] going to kill op: op: 252.0
m31000| Fri Feb 22 11:56:30.628 [conn1] going to kill op: op: 251.0
m31000| Fri Feb 22 11:56:30.635 [conn19] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.635 [conn19] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|14 } } cursorid:165740412636311 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:30.635 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.635 [conn19] end connection 165.225.128.186:44878 (7 connections now open)
m31000| Fri Feb 22 11:56:30.636 [conn18] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.636 [conn18] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|14 } } cursorid:165739923513108 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:30.636 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.636 [initandlisten] connection accepted from 165.225.128.186:37866 #20 (8 connections now open)
m31000| Fri Feb 22 11:56:30.636 [conn18] end connection 165.225.128.186:55463 (6 connections now open)
m31000| Fri Feb 22 11:56:30.636 [initandlisten] connection accepted from 165.225.128.186:43947 #21 (8 connections now open)
m31000| Fri Feb 22 11:56:30.728 [conn1] going to kill op: op: 287.0
m31000| Fri Feb 22 11:56:30.728 [conn1] going to kill op: op: 286.0
m31000| Fri Feb 22 11:56:30.829 [conn1] going to kill op: op: 317.0
m31000| Fri Feb 22 11:56:30.829 [conn1] going to kill op: op: 316.0
m31000| Fri Feb 22 11:56:30.830 [conn20] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.830 [conn21] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.830 [conn20] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|24 } } cursorid:166178230903465 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:30.830 [conn21] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|24 } } cursorid:166178392638146 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:30.830 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:30.830 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.831 [conn21] end connection 165.225.128.186:43947 (7 connections now open)
m31000| Fri Feb 22 11:56:30.831 [conn20] end connection 165.225.128.186:37866 (7 connections now open)
m31000| Fri Feb 22 11:56:30.831 [initandlisten] connection accepted from 165.225.128.186:52719 #22 (7 connections now open)
m31000| Fri Feb 22 11:56:30.831 [initandlisten] connection accepted from 165.225.128.186:34429 #23 (8 connections now open)
m31000| Fri Feb 22 11:56:30.930 [conn1] going to kill op: op: 354.0
m31000| Fri Feb 22 11:56:30.930 [conn1] going to kill op: op: 355.0
m31000| Fri Feb 22 11:56:30.933 [conn22] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.933 [conn22] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|43 } } cursorid:167001619039688 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:30.933 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.933 [conn22] end connection 165.225.128.186:52719 (7 connections now open)
m31000| Fri Feb 22 11:56:30.934 [conn23] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:30.934 [conn23] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|43 } } cursorid:167002366948915 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:30.934 [initandlisten] connection accepted from 165.225.128.186:37179 #24 (8 connections now open)
m31002| Fri Feb 22 11:56:30.934 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:30.934 [conn23] end connection 165.225.128.186:34429 (7 connections now open)
m31000| Fri Feb 22 11:56:30.934 [initandlisten] connection accepted from 165.225.128.186:53373 #25 (8 connections now open)
m31000| Fri Feb 22 11:56:31.031 [conn1] going to kill op: op: 393.0
m31000| Fri Feb 22 11:56:31.031 [conn1] going to kill op: op: 392.0
m31000| Fri Feb 22 11:56:31.036 [conn24] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.036 [conn24] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|53 } } cursorid:167435261216342 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:31.036 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.036 [conn24] end connection 165.225.128.186:37179 (7 connections now open)
m31000| Fri Feb 22 11:56:31.036 [initandlisten] connection accepted from 165.225.128.186:62776 #26 (8 connections now open)
m31000| Fri Feb 22 11:56:31.037 [conn25] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.037 [conn25] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534190000|53 } } cursorid:167439938253084 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:31.037 [conn25] ClientCursor::find(): cursor not found in map '167439938253084' (ok after a drop)
m31002| Fri Feb 22 11:56:31.037 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.037 [conn25] end connection 165.225.128.186:53373 (7 connections now open)
m31000| Fri Feb 22 11:56:31.037 [initandlisten] connection accepted from 165.225.128.186:57161 #27 (8 connections now open)
m31000| Fri Feb 22 11:56:31.132 [conn1] going to kill op: op: 433.0
m31000| Fri Feb 22 11:56:31.132 [conn1] going to kill op: op: 432.0
m31000| Fri Feb 22 11:56:31.138 [conn26] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.138 [conn26] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|2 } } cursorid:167874616256239 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:31.138 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.138 [conn26] end connection 165.225.128.186:62776 (7 connections now open)
m31000| Fri Feb 22 11:56:31.138 [initandlisten] connection accepted from 165.225.128.186:43202 #28 (8 connections now open)
m31000| Fri Feb 22 11:56:31.139 [conn27] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.139 [conn27] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|2 } } cursorid:167878217219302 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:31.139 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.139 [conn27] end connection 165.225.128.186:57161 (7 connections now open)
m31000| Fri Feb 22 11:56:31.140 [initandlisten] connection accepted from 165.225.128.186:35814 #29 (8 connections now open)
m31000| Fri Feb 22 11:56:31.233 [conn1] going to kill op: op: 470.0
m31000| Fri Feb 22 11:56:31.233 [conn1] going to kill op: op: 471.0
m31000| Fri Feb 22 11:56:31.241 [conn28] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.241 [conn28] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|12 } } cursorid:168313244787314 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:31.241 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.241 [conn28] end connection 165.225.128.186:43202 (7 connections now open)
m31000| Fri Feb 22 11:56:31.241 [initandlisten] connection accepted from 165.225.128.186:34181 #30 (8 connections now open)
m31000| Fri Feb 22 11:56:31.242 [conn29] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.242 [conn29] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|12 } } cursorid:168316339211806 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:31.242 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.242 [conn29] end connection 165.225.128.186:35814 (7 connections now open)
m31000| Fri Feb 22 11:56:31.242 [initandlisten] connection accepted from 165.225.128.186:37651 #31 (8 connections now open)
m31000| Fri Feb 22 11:56:31.334 [conn1] going to kill op: op: 508.0
m31000| Fri Feb 22 11:56:31.334 [conn1] going to kill op: op: 506.0
m31000| Fri Feb 22 11:56:31.343 [conn30] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.343 [conn30] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|22 } } cursorid:168751003153350 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:31.344 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.344 [conn30] end connection 165.225.128.186:34181 (7 connections now open)
m31000| Fri Feb 22 11:56:31.344 [initandlisten] connection accepted from 165.225.128.186:57848 #32 (8 connections now open)
m31000| Fri Feb 22 11:56:31.435 [conn1] going to kill op: op: 547.0
m31000| Fri Feb 22 11:56:31.435 [conn1] going to kill op: op: 546.0
m31000| Fri Feb 22 11:56:31.435 [conn1] going to kill op: op: 548.0
m31000| Fri Feb 22 11:56:31.436 [conn1] going to kill op: op: 545.0
m31000| Fri Feb 22 11:56:31.436 [conn31] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.436 [conn31] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|22 } } cursorid:168754376555154 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:31.436 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.436 [conn31] end connection 165.225.128.186:37651 (7 connections now open)
m31000| Fri Feb 22 11:56:31.436 [conn32] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.436 [conn32] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|32 } } cursorid:169189007404291 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:31.436 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.436 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|38 } } cursorid:169402188359020 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:31.436 [conn32] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:31.436 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.436 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.436 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|38 } } cursorid:169402788082733 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:31.436 [conn32] end connection 165.225.128.186:57848 (6 connections now open)
m31000| Fri Feb 22 11:56:31.436 [initandlisten] connection accepted from 165.225.128.186:40826 #33 (8 connections now open)
m31000| Fri Feb 22 11:56:31.436 [initandlisten] connection accepted from 165.225.128.186:47299 #34 (8 connections now open)
m31002| Fri Feb 22 11:56:31.437 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:31.437 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.536 [conn1] going to kill op: op: 588.0
m31000| Fri Feb 22 11:56:31.536 [conn1] going to kill op: op: 587.0
m31000| Fri Feb 22 11:56:31.538 [conn33] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.539 [conn33] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|41 } } cursorid:169584500954564 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:31.539 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.539 [conn33] end connection 165.225.128.186:40826 (7 connections now open)
m31000| Fri Feb 22 11:56:31.539 [conn34] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:31.539 [conn34] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|41 } } cursorid:169583496427464 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:100 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:31.539 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:31.539 [conn34] end connection 165.225.128.186:47299 (6 connections now open)
m31000| Fri Feb 22 11:56:31.539 [initandlisten] connection accepted from 
165.225.128.186:37894 #35 (7 connections now open) m31000| Fri Feb 22 11:56:31.539 [initandlisten] connection accepted from 165.225.128.186:48749 #36 (8 connections now open) m31000| Fri Feb 22 11:56:31.637 [conn1] going to kill op: op: 625.0 m31000| Fri Feb 22 11:56:31.637 [conn1] going to kill op: op: 626.0 m31000| Fri Feb 22 11:56:31.641 [conn35] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.641 [conn35] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|51 } } cursorid:170022288292653 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:31.641 [conn36] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.641 [conn36] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|51 } } cursorid:170020835434244 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:31.641 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.641 [conn35] end connection 165.225.128.186:37894 (7 connections now open) m31001| Fri Feb 22 11:56:31.641 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.642 [conn36] end connection 165.225.128.186:48749 (6 connections now open) m31000| Fri Feb 22 11:56:31.642 [initandlisten] connection accepted from 165.225.128.186:46969 #37 (7 connections now open) m31000| Fri Feb 22 11:56:31.642 [initandlisten] connection accepted from 165.225.128.186:49703 #38 (8 connections now open) m31000| Fri Feb 22 11:56:31.738 [conn1] going to kill op: op: 663.0 m31000| Fri Feb 22 11:56:31.738 [conn1] going to kill op: op: 664.0 m31000| Fri Feb 22 11:56:31.744 [conn37] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.744 [conn37] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534191000|61 } } cursorid:170460309674006 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:31.744 [conn38] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.744 [conn38] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|61 } } cursorid:170459777108499 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:31.744 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.744 [conn38] ClientCursor::find(): cursor not found in map '170459777108499' (ok after a drop) m31001| Fri Feb 22 11:56:31.744 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.744 [conn37] end connection 165.225.128.186:46969 (7 connections now open) m31000| Fri Feb 22 11:56:31.744 [conn38] end connection 165.225.128.186:49703 (6 connections now open) m31000| Fri Feb 22 11:56:31.745 [initandlisten] connection accepted from 165.225.128.186:49781 #39 (7 connections now open) m31000| Fri Feb 22 11:56:31.745 [initandlisten] connection accepted from 165.225.128.186:56034 #40 (8 connections now open) m31002| Fri Feb 22 11:56:31.830 [conn2] end connection 165.225.128.186:37646 (2 connections now open) m31002| Fri Feb 22 11:56:31.830 [initandlisten] connection accepted from 165.225.128.186:36015 #5 (3 connections now open) m31000| Fri Feb 22 11:56:31.831 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 11:56:31.839 [conn1] going to kill op: op: 703.0 m31000| Fri Feb 22 11:56:31.839 [conn1] going to kill op: op: 702.0 m31000| Fri Feb 22 11:56:31.847 [conn40] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.847 [conn39] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.847 
[conn40] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|71 } } cursorid:170897049074959 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:31.848 [conn39] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|71 } } cursorid:170898195296621 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:31.848 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:31.848 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.848 [conn40] end connection 165.225.128.186:56034 (7 connections now open) m31000| Fri Feb 22 11:56:31.848 [conn39] end connection 165.225.128.186:49781 (7 connections now open) m31000| Fri Feb 22 11:56:31.848 [initandlisten] connection accepted from 165.225.128.186:59446 #41 (7 connections now open) m31000| Fri Feb 22 11:56:31.848 [initandlisten] connection accepted from 165.225.128.186:34662 #42 (8 connections now open) m31000| Fri Feb 22 11:56:31.940 [conn1] going to kill op: op: 737.0 m31000| Fri Feb 22 11:56:31.940 [conn1] going to kill op: op: 738.0 m31000| Fri Feb 22 11:56:31.940 [conn41] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.940 [conn41] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|82 } } cursorid:171336608745187 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:31.941 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.941 [conn41] end connection 165.225.128.186:59446 (7 connections now open) m31000| Fri Feb 22 11:56:31.941 [conn42] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:31.941 [conn42] getmore local.oplog.rs query: { 
ts: { $gte: Timestamp 1361534191000|82 } } cursorid:171336763223957 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:31.941 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:31.941 [initandlisten] connection accepted from 165.225.128.186:41500 #43 (8 connections now open) m31000| Fri Feb 22 11:56:31.941 [conn42] end connection 165.225.128.186:34662 (7 connections now open) m31000| Fri Feb 22 11:56:31.942 [initandlisten] connection accepted from 165.225.128.186:33681 #44 (8 connections now open) m31000| Fri Feb 22 11:56:32.041 [conn1] going to kill op: op: 775.0 m31000| Fri Feb 22 11:56:32.041 [conn1] going to kill op: op: 776.0 m31000| Fri Feb 22 11:56:32.044 [conn43] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.044 [conn44] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.045 [conn43] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|91 } } cursorid:171731626580276 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.045 [conn44] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534191000|91 } } cursorid:171731496000674 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:119 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:32.045 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:32.045 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.045 [conn44] end connection 165.225.128.186:33681 (7 connections now open) m31000| Fri Feb 22 11:56:32.045 [conn43] end connection 165.225.128.186:41500 (7 connections now open) m31002| Fri Feb 22 11:56:32.045 [conn3] end connection 165.225.128.186:59078 (2 connections now open) m31000| Fri 
Feb 22 11:56:32.045 [initandlisten] connection accepted from 165.225.128.186:35719 #45 (7 connections now open) m31000| Fri Feb 22 11:56:32.045 [initandlisten] connection accepted from 165.225.128.186:60725 #46 (8 connections now open) m31002| Fri Feb 22 11:56:32.046 [initandlisten] connection accepted from 165.225.128.186:53085 #6 (3 connections now open) m31001| Fri Feb 22 11:56:32.046 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 11:56:32.142 [conn1] going to kill op: op: 816.0 m31000| Fri Feb 22 11:56:32.142 [conn1] going to kill op: op: 817.0 m31000| Fri Feb 22 11:56:32.148 [conn45] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.148 [conn45] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|3 } } cursorid:172169570730162 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.148 [conn45] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:32.148 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.148 [conn45] end connection 165.225.128.186:35719 (7 connections now open) m31000| Fri Feb 22 11:56:32.148 [initandlisten] connection accepted from 165.225.128.186:61523 #47 (8 connections now open) m31000| Fri Feb 22 11:56:32.148 [conn46] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.148 [conn46] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|3 } } cursorid:172169619364115 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:32.149 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.149 [conn46] end connection 165.225.128.186:60725 (7 connections now open) m31000| Fri Feb 22 11:56:32.149 
[initandlisten] connection accepted from 165.225.128.186:63775 #48 (8 connections now open) m31000| Fri Feb 22 11:56:32.243 [conn1] going to kill op: op: 855.0 m31000| Fri Feb 22 11:56:32.243 [conn1] going to kill op: op: 854.0 m31000| Fri Feb 22 11:56:32.251 [conn47] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.251 [conn47] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|13 } } cursorid:172603568149978 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:32.251 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.251 [conn48] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.251 [conn48] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|13 } } cursorid:172606927765820 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.251 [conn47] end connection 165.225.128.186:61523 (7 connections now open) m31001| Fri Feb 22 11:56:32.251 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.252 [conn48] end connection 165.225.128.186:63775 (6 connections now open) m31000| Fri Feb 22 11:56:32.252 [initandlisten] connection accepted from 165.225.128.186:43081 #49 (7 connections now open) m31000| Fri Feb 22 11:56:32.252 [initandlisten] connection accepted from 165.225.128.186:41515 #50 (8 connections now open) m31000| Fri Feb 22 11:56:32.344 [conn1] going to kill op: op: 891.0 m31000| Fri Feb 22 11:56:32.344 [conn1] going to kill op: op: 890.0 m31000| Fri Feb 22 11:56:32.345 [conn50] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.345 [conn50] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|23 } } cursorid:173045194110252 ntoreturn:0 keyUpdates:0 exception: 
operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:32.345 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.345 [conn50] end connection 165.225.128.186:41515 (7 connections now open) m31000| Fri Feb 22 11:56:32.345 [initandlisten] connection accepted from 165.225.128.186:40492 #51 (8 connections now open) m31000| Fri Feb 22 11:56:32.445 [conn1] going to kill op: op: 929.0 m31000| Fri Feb 22 11:56:32.445 [conn1] going to kill op: op: 928.0 m31000| Fri Feb 22 11:56:32.446 [conn1] going to kill op: op: 924.0 m31000| Fri Feb 22 11:56:32.446 [conn49] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.446 [conn49] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|23 } } cursorid:173045509273066 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:32.446 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.446 [conn49] end connection 165.225.128.186:43081 (7 connections now open) m31000| Fri Feb 22 11:56:32.448 [conn51] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.448 [conn51] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|32 } } cursorid:173435708249369 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:32.448 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.448 [conn51] end connection 165.225.128.186:40492 (6 connections now open) m31000| Fri Feb 22 11:56:32.448 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.448 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|42 } } cursorid:173823614965902 ntoreturn:0 keyUpdates:0 exception: 
operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.449 [initandlisten] connection accepted from 165.225.128.186:54954 #52 (7 connections now open) m31001| Fri Feb 22 11:56:32.449 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.450 [initandlisten] connection accepted from 165.225.128.186:58845 #53 (8 connections now open) m31000| Fri Feb 22 11:56:32.546 [conn1] going to kill op: op: 979.0 m31000| Fri Feb 22 11:56:32.546 [conn1] going to kill op: op: 977.0 m31000| Fri Feb 22 11:56:32.547 [conn1] going to kill op: op: 978.0 m31000| Fri Feb 22 11:56:32.551 [conn52] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.551 [conn52] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|43 } } cursorid:173870920308900 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.551 [conn52] ClientCursor::find(): cursor not found in map '173870920308900' (ok after a drop) m31001| Fri Feb 22 11:56:32.551 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.551 [conn52] end connection 165.225.128.186:54954 (7 connections now open) m31000| Fri Feb 22 11:56:32.552 [initandlisten] connection accepted from 165.225.128.186:59633 #54 (8 connections now open) m31000| Fri Feb 22 11:56:32.552 [conn53] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.552 [conn53] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|42 } } cursorid:173874590286368 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:32.552 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.552 [conn53] end connection 165.225.128.186:58845 (7 connections now open) m31000| 
Fri Feb 22 11:56:32.552 [initandlisten] connection accepted from 165.225.128.186:40340 #55 (8 connections now open) m31000| Fri Feb 22 11:56:32.553 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.553 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|42 } } cursorid:173823469855154 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:32.554 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.647 [conn1] going to kill op: op: 1017.0 m31000| Fri Feb 22 11:56:32.648 [conn1] going to kill op: op: 1018.0 m31000| Fri Feb 22 11:56:32.654 [conn54] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.655 [conn55] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.655 [conn54] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|53 } } cursorid:174308280507311 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.655 [conn55] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|53 } } cursorid:174312576724722 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:32.655 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:32.655 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.655 [conn55] end connection 165.225.128.186:40340 (7 connections now open) m31000| Fri Feb 22 11:56:32.655 [conn54] end connection 165.225.128.186:59633 (7 connections now open) m31000| Fri Feb 22 11:56:32.655 [initandlisten] connection accepted from 165.225.128.186:53498 #56 (7 connections now open) m31000| Fri Feb 22 11:56:32.655 [initandlisten] connection 
accepted from 165.225.128.186:41143 #57 (8 connections now open) m31000| Fri Feb 22 11:56:32.748 [conn1] going to kill op: op: 1055.0 m31000| Fri Feb 22 11:56:32.748 [conn1] going to kill op: op: 1056.0 m31000| Fri Feb 22 11:56:32.758 [conn56] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.758 [conn57] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.758 [conn56] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|63 } } cursorid:174750625189545 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.758 [conn57] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|63 } } cursorid:174751089945709 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:32.758 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:32.758 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.758 [conn57] end connection 165.225.128.186:41143 (7 connections now open) m31000| Fri Feb 22 11:56:32.758 [conn56] end connection 165.225.128.186:53498 (7 connections now open) m31000| Fri Feb 22 11:56:32.759 [initandlisten] connection accepted from 165.225.128.186:49928 #58 (7 connections now open) m31000| Fri Feb 22 11:56:32.759 [initandlisten] connection accepted from 165.225.128.186:60897 #59 (8 connections now open) m31000| Fri Feb 22 11:56:32.849 [conn1] going to kill op: op: 1090.0 m31000| Fri Feb 22 11:56:32.849 [conn1] going to kill op: op: 1091.0 m31000| Fri Feb 22 11:56:32.851 [conn59] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.851 [conn58] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.851 [conn59] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|73 } } 
cursorid:175189378802515 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.851 [conn58] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|73 } } cursorid:175189047354685 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.851 [conn59] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:56:32.851 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:32.851 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.851 [conn58] end connection 165.225.128.186:49928 (7 connections now open) m31000| Fri Feb 22 11:56:32.851 [conn59] end connection 165.225.128.186:60897 (7 connections now open) m31000| Fri Feb 22 11:56:32.852 [initandlisten] connection accepted from 165.225.128.186:49630 #60 (7 connections now open) m31000| Fri Feb 22 11:56:32.852 [initandlisten] connection accepted from 165.225.128.186:34436 #61 (8 connections now open) m31000| Fri Feb 22 11:56:32.953 [conn1] going to kill op: op: 1128.0 m31000| Fri Feb 22 11:56:32.953 [conn1] going to kill op: op: 1129.0 m31000| Fri Feb 22 11:56:32.954 [conn61] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.954 [conn60] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:32.954 [conn61] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|82 } } cursorid:175584410371375 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:32.954 [conn60] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|82 } } cursorid:175583559513277 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 
reslen:20 10ms m31001| Fri Feb 22 11:56:32.954 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:32.954 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:32.955 [conn61] end connection 165.225.128.186:34436 (7 connections now open) m31000| Fri Feb 22 11:56:32.955 [conn60] end connection 165.225.128.186:49630 (7 connections now open) m31000| Fri Feb 22 11:56:32.955 [initandlisten] connection accepted from 165.225.128.186:62250 #62 (7 connections now open) m31000| Fri Feb 22 11:56:32.955 [initandlisten] connection accepted from 165.225.128.186:39630 #63 (8 connections now open) m31000| Fri Feb 22 11:56:33.054 [conn1] going to kill op: op: 1168.0 m31000| Fri Feb 22 11:56:33.054 [conn1] going to kill op: op: 1167.0 m31000| Fri Feb 22 11:56:33.057 [conn62] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:33.057 [conn62] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|92 } } cursorid:176022656681744 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:33.057 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:33.057 [conn62] end connection 165.225.128.186:62250 (7 connections now open) m31000| Fri Feb 22 11:56:33.057 [initandlisten] connection accepted from 165.225.128.186:50627 #64 (8 connections now open) m31000| Fri Feb 22 11:56:33.057 [conn63] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:33.058 [conn63] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534192000|92 } } cursorid:176020793180785 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:33.058 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:33.058 [conn63] end connection 
165.225.128.186:39630 (7 connections now open) m31000| Fri Feb 22 11:56:33.058 [initandlisten] connection accepted from 165.225.128.186:39298 #65 (8 connections now open) m31000| Fri Feb 22 11:56:33.155 [conn1] going to kill op: op: 1207.0 m31000| Fri Feb 22 11:56:33.155 [conn1] going to kill op: op: 1208.0 m31000| Fri Feb 22 11:56:33.160 [conn64] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:33.160 [conn64] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|5 } } cursorid:176455281297436 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:33.160 [conn65] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:33.160 [conn65] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|5 } } cursorid:176460646281862 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:33.160 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:33.160 [conn64] end connection 165.225.128.186:50627 (7 connections now open) m31002| Fri Feb 22 11:56:33.160 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:33.160 [conn65] end connection 165.225.128.186:39298 (6 connections now open) m31000| Fri Feb 22 11:56:33.160 [initandlisten] connection accepted from 165.225.128.186:55639 #66 (7 connections now open) m31000| Fri Feb 22 11:56:33.161 [initandlisten] connection accepted from 165.225.128.186:60167 #67 (8 connections now open) m31000| Fri Feb 22 11:56:33.256 [conn1] going to kill op: op: 1246.0 m31000| Fri Feb 22 11:56:33.256 [conn1] going to kill op: op: 1245.0 m31000| Fri Feb 22 11:56:33.263 [conn67] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:33.263 [conn67] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534193000|15 } } cursorid:176898586447352 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.263 [conn67] ClientCursor::find(): cursor not found in map '176898586447352' (ok after a drop)
m31002| Fri Feb 22 11:56:33.263 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.263 [conn67] end connection 165.225.128.186:60167 (7 connections now open)
m31000| Fri Feb 22 11:56:33.263 [conn66] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.263 [conn66] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|15 } } cursorid:176898376982224 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:33.264 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.264 [initandlisten] connection accepted from 165.225.128.186:57908 #68 (8 connections now open)
m31000| Fri Feb 22 11:56:33.264 [conn66] end connection 165.225.128.186:55639 (7 connections now open)
m31000| Fri Feb 22 11:56:33.264 [initandlisten] connection accepted from 165.225.128.186:37020 #69 (8 connections now open)
m31000| Fri Feb 22 11:56:33.356 [conn1] going to kill op: op: 1283.0
m31000| Fri Feb 22 11:56:33.357 [conn1] going to kill op: op: 1284.0
m31000| Fri Feb 22 11:56:33.366 [conn68] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.366 [conn68] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|25 } } cursorid:177335502987580 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.366 [conn69] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:56:33.366 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.366 [conn69] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|25 } } cursorid:177336236814752 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.366 [conn68] end connection 165.225.128.186:57908 (7 connections now open)
m31001| Fri Feb 22 11:56:33.366 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.366 [conn69] end connection 165.225.128.186:37020 (6 connections now open)
m31000| Fri Feb 22 11:56:33.367 [initandlisten] connection accepted from 165.225.128.186:35808 #70 (7 connections now open)
m31000| Fri Feb 22 11:56:33.367 [initandlisten] connection accepted from 165.225.128.186:45188 #71 (8 connections now open)
m31000| Fri Feb 22 11:56:33.457 [conn1] going to kill op: op: 1321.0
m31000| Fri Feb 22 11:56:33.457 [conn1] going to kill op: op: 1318.0
m31000| Fri Feb 22 11:56:33.458 [conn1] going to kill op: op: 1319.0
m31000| Fri Feb 22 11:56:33.459 [conn70] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.459 [conn70] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|35 } } cursorid:177773793434472 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:33.459 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.459 [conn70] end connection 165.225.128.186:35808 (7 connections now open)
m31000| Fri Feb 22 11:56:33.459 [conn71] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.459 [conn71] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|35 } } cursorid:177774423918424 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.459 [initandlisten] connection accepted from 165.225.128.186:56866 #72 (8 connections now open)
m31001| Fri Feb 22 11:56:33.459 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.459 [conn71] end connection 165.225.128.186:45188 (7 connections now open)
m31000| Fri Feb 22 11:56:33.460 [initandlisten] connection accepted from 165.225.128.186:64741 #73 (8 connections now open)
m31000| Fri Feb 22 11:56:33.460 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.460 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|44 } } cursorid:178117409328023 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:33.460 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.558 [conn1] going to kill op: op: 1360.0
m31000| Fri Feb 22 11:56:33.559 [conn1] going to kill op: op: 1359.0
m31000| Fri Feb 22 11:56:33.561 [conn72] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.561 [conn72] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|44 } } cursorid:178164017554997 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.561 [conn72] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:33.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.561 [conn72] end connection 165.225.128.186:56866 (7 connections now open)
m31000| Fri Feb 22 11:56:33.562 [initandlisten] connection accepted from 165.225.128.186:50502 #74 (8 connections now open)
m31000| Fri Feb 22 11:56:33.562 [conn73] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.562 [conn73] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|44 } } cursorid:178169578995608 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:33.562 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.562 [conn73] end connection 165.225.128.186:64741 (7 connections now open)
m31000| Fri Feb 22 11:56:33.563 [initandlisten] connection accepted from 165.225.128.186:61062 #75 (8 connections now open)
m31000| Fri Feb 22 11:56:33.659 [conn1] going to kill op: op: 1410.0
m31000| Fri Feb 22 11:56:33.660 [conn1] going to kill op: op: 1409.0
m31000| Fri Feb 22 11:56:33.660 [conn1] going to kill op: op: 1408.0
m31000| Fri Feb 22 11:56:33.664 [conn74] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.664 [conn74] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|54 } } cursorid:178602952529235 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:33.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.664 [conn74] end connection 165.225.128.186:50502 (7 connections now open)
m31000| Fri Feb 22 11:56:33.664 [initandlisten] connection accepted from 165.225.128.186:36827 #76 (8 connections now open)
m31000| Fri Feb 22 11:56:33.664 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.664 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|54 } } cursorid:178554971265983 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.665 [conn75] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.665 [conn75] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|54 } } cursorid:178606403625779 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:33.665 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.665 [conn75] end connection 165.225.128.186:61062 (7 connections now open)
m31002| Fri Feb 22 11:56:33.665 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.665 [initandlisten] connection accepted from 165.225.128.186:41434 #77 (8 connections now open)
m31000| Fri Feb 22 11:56:33.760 [conn1] going to kill op: op: 1450.0
m31000| Fri Feb 22 11:56:33.761 [conn1] going to kill op: op: 1448.0
m31000| Fri Feb 22 11:56:33.767 [conn76] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.767 [conn76] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|64 } } cursorid:179040802489545 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:33.767 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.767 [conn76] end connection 165.225.128.186:36827 (7 connections now open)
m31000| Fri Feb 22 11:56:33.767 [conn77] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.767 [conn77] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|64 } } cursorid:179045025340689 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:33.767 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.767 [conn77] end connection 165.225.128.186:41434 (6 connections now open)
m31000| Fri Feb 22 11:56:33.767 [initandlisten] connection accepted from 165.225.128.186:34737 #78 (8 connections now open)
m31000| Fri Feb 22 11:56:33.767 [initandlisten] connection accepted from 165.225.128.186:51130 #79 (8 connections now open)
m31000| Fri Feb 22 11:56:33.861 [conn1] going to kill op: op: 1487.0
m31000| Fri Feb 22 11:56:33.861 [conn1] going to kill op: op: 1488.0
m31000| Fri Feb 22 11:56:33.869 [conn78] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.869 [conn78] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|74 } } cursorid:179483134898956 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:33.869 [conn79] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:33.869 [conn79] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|75 } } cursorid:179484002918706 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:33.869 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.869 [conn79] ClientCursor::find(): cursor not found in map '179484002918706' (ok after a drop)
m31001| Fri Feb 22 11:56:33.869 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:33.869 [conn78] end connection 165.225.128.186:34737 (7 connections now open)
m31000| Fri Feb 22 11:56:33.870 [conn79] end connection 165.225.128.186:51130 (6 connections now open)
m31000| Fri Feb 22 11:56:33.870 [initandlisten] connection accepted from 165.225.128.186:35721 #80 (7 connections now open)
m31000| Fri Feb 22 11:56:33.870 [initandlisten] connection accepted from 165.225.128.186:59994 #81 (8 connections now open)
m31000| Fri Feb 22 11:56:33.962 [conn1] going to kill op: op: 1522.0
m31000| Fri Feb 22 11:56:33.962 [conn1] going to kill op: op: 1523.0
m31000| Fri Feb 22 11:56:34.063 [conn1] going to kill op: op: 1554.0
m31000| Fri Feb 22 11:56:34.063 [conn1] going to kill op: op: 1553.0
m31000| Fri Feb 22 11:56:34.063 [conn81] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.063 [conn81] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|85 } } cursorid:179920836219597 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:34.064 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.064 [conn81] end connection 165.225.128.186:59994 (7 connections now open)
m31000| Fri Feb 22 11:56:34.064 [conn80] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.064 [conn80] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534193000|85 } } cursorid:179921315378380 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:34.064 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.064 [initandlisten] connection accepted from 165.225.128.186:34869 #82 (8 connections now open)
m31000| Fri Feb 22 11:56:34.064 [conn80] end connection 165.225.128.186:35721 (6 connections now open)
m31000| Fri Feb 22 11:56:34.064 [initandlisten] connection accepted from 165.225.128.186:40134 #83 (8 connections now open)
m31000| Fri Feb 22 11:56:34.164 [conn1] going to kill op: op: 1594.0
m31000| Fri Feb 22 11:56:34.164 [conn1] going to kill op: op: 1593.0
m31000| Fri Feb 22 11:56:34.166 [conn82] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.166 [conn82] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|6 } } cursorid:180745621845678 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.166 [conn83] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.166 [conn83] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|6 } } cursorid:180745224772900 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:34.166 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:34.166 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.166 [conn82] end connection 165.225.128.186:34869 (7 connections now open)
m31000| Fri Feb 22 11:56:34.166 [conn83] end connection 165.225.128.186:40134 (7 connections now open)
m31000| Fri Feb 22 11:56:34.167 [initandlisten] connection accepted from 165.225.128.186:37968 #84 (7 connections now open)
m31000| Fri Feb 22 11:56:34.167 [initandlisten] connection accepted from 165.225.128.186:52864 #85 (8 connections now open)
m31000| Fri Feb 22 11:56:34.264 [conn1] going to kill op: op: 1633.0
m31000| Fri Feb 22 11:56:34.268 [conn1] going to kill op: op: 1632.0
m31000| Fri Feb 22 11:56:34.269 [conn84] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.269 [conn85] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.269 [conn84] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|16 } } cursorid:181184773841073 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.269 [conn85] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|16 } } cursorid:181184233863123 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:42 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:34.269 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:34.269 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.269 [conn84] end connection 165.225.128.186:37968 (7 connections now open)
m31000| Fri Feb 22 11:56:34.269 [conn85] end connection 165.225.128.186:52864 (7 connections now open)
m31000| Fri Feb 22 11:56:34.270 [initandlisten] connection accepted from 165.225.128.186:61940 #86 (7 connections now open)
m31000| Fri Feb 22 11:56:34.270 [initandlisten] connection accepted from 165.225.128.186:62522 #87 (8 connections now open)
m31000| Fri Feb 22 11:56:34.371 [conn1] going to kill op: op: 1671.0
m31000| Fri Feb 22 11:56:34.371 [conn1] going to kill op: op: 1670.0
m31000| Fri Feb 22 11:56:34.373 [conn86] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.373 [conn86] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|26 } } cursorid:181622771146638 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.373 [conn87] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.373 [conn87] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|26 } } cursorid:181622420886044 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.373 [conn86] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:34.373 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:34.373 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.373 [conn86] end connection 165.225.128.186:61940 (7 connections now open)
m31000| Fri Feb 22 11:56:34.373 [conn87] end connection 165.225.128.186:62522 (7 connections now open)
m31000| Fri Feb 22 11:56:34.373 [initandlisten] connection accepted from 165.225.128.186:34984 #88 (7 connections now open)
m31000| Fri Feb 22 11:56:34.373 [initandlisten] connection accepted from 165.225.128.186:54158 #89 (8 connections now open)
m31000| Fri Feb 22 11:56:34.472 [conn1] going to kill op: op: 1711.0
m31000| Fri Feb 22 11:56:34.472 [conn1] going to kill op: op: 1710.0
m31000| Fri Feb 22 11:56:34.472 [conn1] going to kill op: op: 1709.0
m31000| Fri Feb 22 11:56:34.475 [conn88] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.475 [conn88] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|36 } } cursorid:182060181923265 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:34.475 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.475 [conn88] end connection 165.225.128.186:34984 (7 connections now open)
m31000| Fri Feb 22 11:56:34.476 [conn89] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.476 [conn89] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|36 } } cursorid:182059484132240 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.476 [initandlisten] connection accepted from 165.225.128.186:54948 #90 (8 connections now open)
m31001| Fri Feb 22 11:56:34.476 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.476 [conn89] end connection 165.225.128.186:54158 (7 connections now open)
m31000| Fri Feb 22 11:56:34.476 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.476 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|45 } } cursorid:182404357211854 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.476 [initandlisten] connection accepted from 165.225.128.186:60632 #91 (8 connections now open)
m31001| Fri Feb 22 11:56:34.477 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.573 [conn1] going to kill op: op: 1749.0
m31000| Fri Feb 22 11:56:34.573 [conn1] going to kill op: op: 1750.0
m31000| Fri Feb 22 11:56:34.578 [conn90] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.578 [conn90] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|46 } } cursorid:182497776564535 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:34.578 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.578 [conn90] end connection 165.225.128.186:54948 (7 connections now open)
m31000| Fri Feb 22 11:56:34.578 [conn91] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.578 [conn91] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|46 } } cursorid:182497466923889 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.578 [initandlisten] connection accepted from 165.225.128.186:34827 #92 (8 connections now open)
m31001| Fri Feb 22 11:56:34.578 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.578 [conn91] end connection 165.225.128.186:60632 (7 connections now open)
m31000| Fri Feb 22 11:56:34.579 [initandlisten] connection accepted from 165.225.128.186:33853 #93 (8 connections now open)
m31000| Fri Feb 22 11:56:34.674 [conn1] going to kill op: op: 1788.0
m31000| Fri Feb 22 11:56:34.674 [conn1] going to kill op: op: 1790.0
m31000| Fri Feb 22 11:56:34.680 [conn92] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.680 [conn92] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|56 } } cursorid:182932467333678 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:34.681 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.681 [conn92] end connection 165.225.128.186:34827 (7 connections now open)
m31000| Fri Feb 22 11:56:34.681 [conn93] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.681 [conn93] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|56 } } cursorid:182935826309980 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.681 [initandlisten] connection accepted from 165.225.128.186:59761 #94 (8 connections now open)
m31000| Fri Feb 22 11:56:34.681 [conn93] ClientCursor::find(): cursor not found in map '182935826309980' (ok after a drop)
m31001| Fri Feb 22 11:56:34.681 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.681 [conn93] end connection 165.225.128.186:33853 (7 connections now open)
m31000| Fri Feb 22 11:56:34.681 [initandlisten] connection accepted from 165.225.128.186:50552 #95 (8 connections now open)
m31000| Fri Feb 22 11:56:34.774 [conn1] going to kill op: op: 1840.0
m31000| Fri Feb 22 11:56:34.775 [conn1] going to kill op: op: 1838.0
m31000| Fri Feb 22 11:56:34.775 [conn1] going to kill op: op: 1839.0
m31000| Fri Feb 22 11:56:34.783 [conn94] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.783 [conn94] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|66 } } cursorid:183370355616160 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:34.783 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.783 [conn94] end connection 165.225.128.186:59761 (7 connections now open)
m31000| Fri Feb 22 11:56:34.783 [conn95] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.783 [conn95] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|66 } } cursorid:183375414638971 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.784 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.784 [initandlisten] connection accepted from 165.225.128.186:41179 #96 (8 connections now open)
m31000| Fri Feb 22 11:56:34.784 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|65 } } cursorid:183280702724470 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:34.784 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.784 [conn95] end connection 165.225.128.186:50552 (7 connections now open)
m31000| Fri Feb 22 11:56:34.784 [initandlisten] connection accepted from 165.225.128.186:57933 #97 (8 connections now open)
m31002| Fri Feb 22 11:56:34.785 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.875 [conn1] going to kill op: op: 1875.0
m31000| Fri Feb 22 11:56:34.876 [conn1] going to kill op: op: 1876.0
m31000| Fri Feb 22 11:56:34.876 [conn96] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.876 [conn96] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|77 } } cursorid:183812537319776 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.876 [conn97] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.876 [conn97] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|77 } } cursorid:183812786128472 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:34.876 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:34.876 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.876 [conn96] end connection 165.225.128.186:41179 (7 connections now open)
m31000| Fri Feb 22 11:56:34.876 [conn97] end connection 165.225.128.186:57933 (7 connections now open)
m31000| Fri Feb 22 11:56:34.876 [initandlisten] connection accepted from 165.225.128.186:39951 #98 (7 connections now open)
m31000| Fri Feb 22 11:56:34.876 [initandlisten] connection accepted from 165.225.128.186:49300 #99 (8 connections now open)
m31000| Fri Feb 22 11:56:34.976 [conn1] going to kill op: op: 1913.0
m31000| Fri Feb 22 11:56:34.976 [conn1] going to kill op: op: 1914.0
m31000| Fri Feb 22 11:56:34.978 [conn99] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.978 [conn99] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|86 } } cursorid:184208155317864 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:34.979 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.979 [conn98] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:34.979 [conn98] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|86 } } cursorid:184206987807997 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:34.979 [conn99] end connection 165.225.128.186:49300 (7 connections now open)
m31000| Fri Feb 22 11:56:34.979 [conn98] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:34.979 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:34.979 [conn98] end connection 165.225.128.186:39951 (6 connections now open)
m31000| Fri Feb 22 11:56:34.979 [initandlisten] connection accepted from 165.225.128.186:50973 #100 (7 connections now open)
m31000| Fri Feb 22 11:56:34.979 [initandlisten] connection accepted from 165.225.128.186:40897 #101 (8 connections now open)
m31000| Fri Feb 22 11:56:35.077 [conn1] going to kill op: op: 1952.0
m31000| Fri Feb 22 11:56:35.077 [conn1] going to kill op: op: 1951.0
m31000| Fri Feb 22 11:56:35.082 [conn101] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.082 [conn100] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.082 [conn101] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|96 } } cursorid:184645200035199 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.082 [conn100] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534194000|96 } } cursorid:184647056003148 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:35.082 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:35.082 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.082 [conn101] end connection 165.225.128.186:40897 (7 connections now open)
m31000| Fri Feb 22 11:56:35.082 [conn100] end connection 165.225.128.186:50973 (7 connections now open)
m31000| Fri Feb 22 11:56:35.082 [initandlisten] connection accepted from 165.225.128.186:53314 #102 (7 connections now open)
m31000| Fri Feb 22 11:56:35.082 [initandlisten] connection accepted from 165.225.128.186:48431 #103 (8 connections now open)
m31000| Fri Feb 22 11:56:35.178 [conn1] going to kill op: op: 1992.0
m31000| Fri Feb 22 11:56:35.178 [conn1] going to kill op: op: 1991.0
m31000| Fri Feb 22 11:56:35.185 [conn103] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.185 [conn102] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.185 [conn103] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|7 } } cursorid:185083637877538 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.185 [conn102] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|7 } } cursorid:185084086325748 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:35.185 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:35.185 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.185 [conn103] end connection 165.225.128.186:48431 (7 connections now open)
m31000| Fri Feb 22 11:56:35.185 [conn102] end connection 165.225.128.186:53314 (6 connections now open)
m31000| Fri Feb 22 11:56:35.185 [initandlisten] connection accepted from 165.225.128.186:41581 #104 (7 connections now open)
m31000| Fri Feb 22 11:56:35.185 [initandlisten] connection accepted from 165.225.128.186:44389 #105 (8 connections now open)
m31000| Fri Feb 22 11:56:35.279 [conn1] going to kill op: op: 2029.0
m31000| Fri Feb 22 11:56:35.279 [conn1] going to kill op: op: 2030.0
m31000| Fri Feb 22 11:56:35.288 [conn105] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.288 [conn104] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.288 [conn105] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|17 } } cursorid:185523065479524 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.288 [conn104] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|17 } } cursorid:185522401864049 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:35.288 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:35.288 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.288 [conn105] end connection 165.225.128.186:44389 (7 connections now open)
m31000| Fri Feb 22 11:56:35.288 [conn104] end connection 165.225.128.186:41581 (7 connections now open)
m31000| Fri Feb 22 11:56:35.288 [initandlisten] connection accepted from 165.225.128.186:65526 #106 (7 connections now open)
m31000| Fri Feb 22 11:56:35.288 [initandlisten] connection accepted from 165.225.128.186:59792 #107 (8 connections now open)
m31000| Fri Feb 22 11:56:35.380 [conn1] going to kill op: op: 2065.0
m31000| Fri Feb 22 11:56:35.380 [conn1] going to kill op: op: 2064.0
m31000| Fri Feb 22 11:56:35.381 [conn107] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.381 [conn107] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|27 } } cursorid:185960357958254 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.381 [conn106] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:56:35.381 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.381 [conn106] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|27 } } cursorid:185960468879990 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.381 [conn107] end connection 165.225.128.186:59792 (7 connections now open)
m31000| Fri Feb 22 11:56:35.381 [conn106] ClientCursor::find(): cursor not found in map '185960468879990' (ok after a drop)
m31001| Fri Feb 22 11:56:35.381 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.381 [conn106] end connection 165.225.128.186:65526 (6 connections now open)
m31000| Fri Feb 22 11:56:35.381 [initandlisten] connection accepted from 165.225.128.186:38032 #108 (8 connections now open)
m31000| Fri Feb 22 11:56:35.381 [initandlisten] connection accepted from 165.225.128.186:46212 #109 (8 connections now open)
m31000| Fri Feb 22 11:56:35.480 [conn1] going to kill op: op: 2106.0
m31000| Fri Feb 22 11:56:35.481 [conn1] going to kill op: op: 2103.0
m31000| Fri Feb 22 11:56:35.481 [conn1] going to kill op: op: 2104.0
m31000| Fri Feb 22 11:56:35.484 [conn108] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.484 [conn108] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|36 } } cursorid:186355793280593 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.484 [conn109] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.484 [conn109] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|36 } } cursorid:186354616975568 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:35.484 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.484 [conn108] end connection 165.225.128.186:38032 (7 connections now open)
m31001| Fri Feb 22 11:56:35.484 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.484 [conn109] end connection 165.225.128.186:46212 (6 connections now open)
m31000| Fri Feb 22 11:56:35.484 [initandlisten] connection accepted from 165.225.128.186:53754 #110 (7 connections now open)
m31000| Fri Feb 22 11:56:35.484 [initandlisten] connection accepted from 165.225.128.186:46822 #111 (8 connections now open)
m31000| Fri Feb 22 11:56:35.487 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.487 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|47 } } cursorid:186741249152599 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:35.488 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.582 [conn1] going to kill op: op: 2145.0
m31000| Fri Feb 22 11:56:35.582 [conn1] going to kill op: op: 2144.0
m31000| Fri Feb 22 11:56:35.587 [conn110] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.587 [conn110] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|47 } } cursorid:186793873537821 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:35.587 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.587 [conn110] end connection 165.225.128.186:53754 (7 connections now open)
m31000| Fri Feb 22 11:56:35.587 [conn111] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.587 [conn111] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|47 } } cursorid:186793784729489 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.587 [initandlisten] connection accepted from 165.225.128.186:41173 #112 (8 connections now open)
m31001| Fri Feb 22 11:56:35.587 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.587 [conn111] end connection 165.225.128.186:46822 (7 connections now open)
m31000| Fri Feb 22 11:56:35.588 [initandlisten] connection accepted from 165.225.128.186:53167 #113 (8 connections now open)
m31000| Fri Feb 22 11:56:35.683 [conn1] going to kill op: op: 2182.0
m31000| Fri Feb 22 11:56:35.683 [conn1] going to kill op: op: 2183.0
m31000| Fri Feb 22 11:56:35.689 [conn112] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.689 [conn112] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|57 } } cursorid:187226885399063 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:35.689 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.689 [conn112] end connection 165.225.128.186:41173 (7 connections now open)
m31000| Fri Feb 22 11:56:35.689 [conn113] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.690 [conn113] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|57 } } cursorid:187230726207099 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:35.690 [conn113] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:35.690 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:35.690 [initandlisten] connection accepted from 165.225.128.186:64856 #114 (8 connections now open)
m31000| Fri Feb 22 11:56:35.690 [conn113] end connection 165.225.128.186:53167 (6 connections now open)
m31000| Fri Feb 22 11:56:35.690 [initandlisten] connection accepted from 165.225.128.186:48045 #115 (8 connections now open)
m31000| Fri Feb 22 11:56:35.784 [conn1] going to kill op: op: 2220.0
m31000| Fri Feb 22 11:56:35.784 [conn1] going to kill op: op: 2221.0
m31000| Fri Feb 22 11:56:35.792 [conn115] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.792 [conn114] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:35.792 [conn114] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|67 } } cursorid:187670170566744 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31000| Fri Feb
22 11:56:35.792 [conn115] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|67 } } cursorid:187669035645704 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:87 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:35.792 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:35.792 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:35.792 [conn114] end connection 165.225.128.186:64856 (7 connections now open) m31000| Fri Feb 22 11:56:35.793 [conn115] end connection 165.225.128.186:48045 (7 connections now open) m31000| Fri Feb 22 11:56:35.793 [initandlisten] connection accepted from 165.225.128.186:47707 #116 (7 connections now open) m31000| Fri Feb 22 11:56:35.793 [initandlisten] connection accepted from 165.225.128.186:64566 #117 (8 connections now open) m31000| Fri Feb 22 11:56:35.885 [conn1] going to kill op: op: 2266.0 m31000| Fri Feb 22 11:56:35.885 [conn1] going to kill op: op: 2267.0 m31000| Fri Feb 22 11:56:35.885 [conn117] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:35.885 [conn117] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|77 } } cursorid:188107460219695 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:35.885 [conn1] going to kill op: op: 2265.0 m31001| Fri Feb 22 11:56:35.885 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:35.885 [conn117] end connection 165.225.128.186:64566 (7 connections now open) m31000| Fri Feb 22 11:56:35.885 [initandlisten] connection accepted from 165.225.128.186:41922 #118 (8 connections now open) m31000| Fri Feb 22 11:56:35.886 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:35.886 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|77 } 
} cursorid:188057274760918 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:35.887 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:35.986 [conn1] going to kill op: op: 2301.0 m31000| Fri Feb 22 11:56:35.986 [conn1] going to kill op: op: 2302.0 m31000| Fri Feb 22 11:56:35.986 [conn116] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:35.986 [conn116] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|77 } } cursorid:188108759583833 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:35.987 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:35.987 [conn116] end connection 165.225.128.186:47707 (7 connections now open) m31000| Fri Feb 22 11:56:35.987 [initandlisten] connection accepted from 165.225.128.186:60673 #119 (8 connections now open) m31000| Fri Feb 22 11:56:35.988 [conn118] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:35.988 [conn118] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|86 } } cursorid:188497773725433 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:35.988 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:35.988 [conn118] end connection 165.225.128.186:41922 (7 connections now open) m31000| Fri Feb 22 11:56:35.988 [initandlisten] connection accepted from 165.225.128.186:46179 #120 (8 connections now open) m31000| Fri Feb 22 11:56:36.087 [conn1] going to kill op: op: 2341.0 m31000| Fri Feb 22 11:56:36.087 [conn1] going to kill op: op: 2340.0 m31000| Fri Feb 22 11:56:36.089 [conn119] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:56:36.089 [conn119] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|96 } } cursorid:188932876229987 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.089 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.090 [conn119] end connection 165.225.128.186:60673 (7 connections now open) m31000| Fri Feb 22 11:56:36.090 [initandlisten] connection accepted from 165.225.128.186:61111 #121 (8 connections now open) m31000| Fri Feb 22 11:56:36.090 [conn120] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.090 [conn120] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534195000|96 } } cursorid:188936863720047 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:36.090 [conn120] ClientCursor::find(): cursor not found in map '188936863720047' (ok after a drop) m31001| Fri Feb 22 11:56:36.090 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.090 [conn120] end connection 165.225.128.186:46179 (7 connections now open) m31000| Fri Feb 22 11:56:36.091 [initandlisten] connection accepted from 165.225.128.186:32792 #122 (8 connections now open) m31000| Fri Feb 22 11:56:36.187 [conn1] going to kill op: op: 2383.0 m31000| Fri Feb 22 11:56:36.188 [conn1] going to kill op: op: 2381.0 m31000| Fri Feb 22 11:56:36.193 [conn121] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.193 [conn121] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|8 } } cursorid:189369485533658 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.193 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:56:36.193 [conn121] end connection 165.225.128.186:61111 (7 connections now open) m31000| Fri Feb 22 11:56:36.193 [initandlisten] connection accepted from 165.225.128.186:65354 #123 (8 connections now open) m31000| Fri Feb 22 11:56:36.193 [conn122] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.193 [conn122] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|8 } } cursorid:189374776992077 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.193 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.194 [conn122] end connection 165.225.128.186:32792 (7 connections now open) m31000| Fri Feb 22 11:56:36.194 [initandlisten] connection accepted from 165.225.128.186:49680 #124 (8 connections now open) m31000| Fri Feb 22 11:56:36.288 [conn1] going to kill op: op: 2422.0 m31000| Fri Feb 22 11:56:36.288 [conn1] going to kill op: op: 2421.0 m31000| Fri Feb 22 11:56:36.295 [conn123] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.295 [conn123] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|18 } } cursorid:189808087702633 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.295 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.295 [conn123] end connection 165.225.128.186:65354 (7 connections now open) m31000| Fri Feb 22 11:56:36.296 [initandlisten] connection accepted from 165.225.128.186:39560 #125 (8 connections now open) m31000| Fri Feb 22 11:56:36.296 [conn124] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.296 [conn124] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|19 } } cursorid:189812888121652 ntoreturn:0 keyUpdates:0 exception: operation was 
interrupted code:11601 locks(micros) r:108 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.297 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.297 [conn124] end connection 165.225.128.186:49680 (7 connections now open) m31000| Fri Feb 22 11:56:36.297 [initandlisten] connection accepted from 165.225.128.186:58255 #126 (8 connections now open) m31000| Fri Feb 22 11:56:36.389 [conn1] going to kill op: op: 2459.0 m31000| Fri Feb 22 11:56:36.390 [conn1] going to kill op: op: 2457.0 m31000| Fri Feb 22 11:56:36.398 [conn125] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.398 [conn125] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|29 } } cursorid:190246081625581 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.398 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.398 [conn125] end connection 165.225.128.186:39560 (7 connections now open) m31000| Fri Feb 22 11:56:36.398 [initandlisten] connection accepted from 165.225.128.186:51303 #127 (8 connections now open) m31000| Fri Feb 22 11:56:36.490 [conn1] going to kill op: op: 2490.0 m31000| Fri Feb 22 11:56:36.490 [conn127] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.490 [conn127] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|39 } } cursorid:190684304850847 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:36.490 [conn1] going to kill op: op: 2491.0 m31002| Fri Feb 22 11:56:36.490 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.490 [conn127] end connection 165.225.128.186:51303 (7 connections now open) m31000| Fri Feb 22 11:56:36.491 [initandlisten] connection accepted from 
165.225.128.186:41965 #128 (8 connections now open) m31000| Fri Feb 22 11:56:36.491 [conn126] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.491 [conn126] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|29 } } cursorid:190251382526855 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:36.491 [conn126] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:56:36.491 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.491 [conn126] end connection 165.225.128.186:58255 (7 connections now open) m31000| Fri Feb 22 11:56:36.491 [initandlisten] connection accepted from 165.225.128.186:46080 #129 (8 connections now open) m31000| Fri Feb 22 11:56:36.591 [conn1] going to kill op: op: 2538.0 m31000| Fri Feb 22 11:56:36.591 [conn1] going to kill op: op: 2540.0 m31000| Fri Feb 22 11:56:36.591 [conn1] going to kill op: op: 2539.0 m31000| Fri Feb 22 11:56:36.593 [conn128] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.593 [conn128] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|48 } } cursorid:191075082295181 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.593 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.593 [conn128] end connection 165.225.128.186:41965 (7 connections now open) m31000| Fri Feb 22 11:56:36.593 [initandlisten] connection accepted from 165.225.128.186:63091 #130 (8 connections now open) m31000| Fri Feb 22 11:56:36.595 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.595 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|48 } } cursorid:191028908307007 ntoreturn:0 keyUpdates:0 
exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:36.595 [conn129] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.595 [conn129] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|48 } } cursorid:191079003736224 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.595 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.595 [conn129] end connection 165.225.128.186:46080 (7 connections now open) m31000| Fri Feb 22 11:56:36.595 [initandlisten] connection accepted from 165.225.128.186:53748 #131 (8 connections now open) m31001| Fri Feb 22 11:56:36.596 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.692 [conn1] going to kill op: op: 2578.0 m31000| Fri Feb 22 11:56:36.692 [conn1] going to kill op: op: 2579.0 m31000| Fri Feb 22 11:56:36.695 [conn130] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.695 [conn130] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|58 } } cursorid:191513974743937 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.695 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.695 [conn130] end connection 165.225.128.186:63091 (7 connections now open) m31000| Fri Feb 22 11:56:36.696 [initandlisten] connection accepted from 165.225.128.186:34015 #132 (8 connections now open) m31000| Fri Feb 22 11:56:36.698 [conn131] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.698 [conn131] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|58 } } cursorid:191518217240847 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.698 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.698 [conn131] end connection 165.225.128.186:53748 (7 connections now open) m31000| Fri Feb 22 11:56:36.698 [initandlisten] connection accepted from 165.225.128.186:47845 #133 (8 connections now open) m31000| Fri Feb 22 11:56:36.793 [conn1] going to kill op: op: 2617.0 m31000| Fri Feb 22 11:56:36.793 [conn1] going to kill op: op: 2616.0 m31000| Fri Feb 22 11:56:36.798 [conn132] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.798 [conn132] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|68 } } cursorid:191952505428154 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:36.798 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.798 [conn132] end connection 165.225.128.186:34015 (7 connections now open) m31000| Fri Feb 22 11:56:36.799 [initandlisten] connection accepted from 165.225.128.186:43872 #134 (8 connections now open) m31000| Fri Feb 22 11:56:36.801 [conn133] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.801 [conn133] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|68 } } cursorid:191955621499144 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.801 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.801 [conn133] end connection 165.225.128.186:47845 (7 connections now open) m31000| Fri Feb 22 11:56:36.801 [initandlisten] connection accepted from 165.225.128.186:59362 #135 (8 connections now open) m31000| Fri Feb 22 11:56:36.894 [conn1] going to kill op: op: 2656.0 m31000| Fri Feb 22 11:56:36.894 [conn1] 
going to kill op: op: 2658.0 m31000| Fri Feb 22 11:56:36.894 [conn1] going to kill op: op: 2655.0 m31000| Fri Feb 22 11:56:36.901 [conn134] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.901 [conn134] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|78 } } cursorid:192347455590137 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:36.901 [conn134] ClientCursor::find(): cursor not found in map '192347455590137' (ok after a drop) m31002| Fri Feb 22 11:56:36.901 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.901 [conn134] end connection 165.225.128.186:43872 (7 connections now open) m31000| Fri Feb 22 11:56:36.901 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.901 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|87 } } cursorid:192694146815108 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:36.901 [initandlisten] connection accepted from 165.225.128.186:48504 #136 (8 connections now open) m31002| Fri Feb 22 11:56:36.902 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.903 [conn135] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.903 [conn135] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|78 } } cursorid:192351400551535 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.903 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.903 [conn135] end connection 165.225.128.186:59362 (7 connections now open) m31000| Fri Feb 22 11:56:36.904 [initandlisten] connection accepted from 
165.225.128.186:40817 #137 (8 connections now open) m31000| Fri Feb 22 11:56:36.995 [conn1] going to kill op: op: 2695.0 m31000| Fri Feb 22 11:56:36.995 [conn1] going to kill op: op: 2694.0 m31000| Fri Feb 22 11:56:36.996 [conn137] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:36.996 [conn137] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|89 } } cursorid:192746607873349 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:36.996 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:36.996 [conn137] end connection 165.225.128.186:40817 (7 connections now open) m31000| Fri Feb 22 11:56:36.996 [initandlisten] connection accepted from 165.225.128.186:56735 #138 (8 connections now open) m31000| Fri Feb 22 11:56:37.003 [conn136] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.003 [conn136] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|88 } } cursorid:192742800612136 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:37.003 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.003 [conn136] end connection 165.225.128.186:48504 (7 connections now open) m31000| Fri Feb 22 11:56:37.004 [initandlisten] connection accepted from 165.225.128.186:51980 #139 (8 connections now open) m31000| Fri Feb 22 11:56:37.096 [conn1] going to kill op: op: 2731.0 m31000| Fri Feb 22 11:56:37.096 [conn1] going to kill op: op: 2732.0 m31000| Fri Feb 22 11:56:37.096 [conn139] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.096 [conn139] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|98 } } cursorid:193141817498263 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:37.096 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.096 [conn139] end connection 165.225.128.186:51980 (7 connections now open) m31000| Fri Feb 22 11:56:37.097 [initandlisten] connection accepted from 165.225.128.186:37112 #140 (8 connections now open) m31000| Fri Feb 22 11:56:37.098 [conn138] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.098 [conn138] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534196000|98 } } cursorid:193136233194496 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:37.098 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.099 [conn138] end connection 165.225.128.186:56735 (7 connections now open) m31000| Fri Feb 22 11:56:37.099 [initandlisten] connection accepted from 165.225.128.186:53996 #141 (8 connections now open) m31000| Fri Feb 22 11:56:37.197 [conn1] going to kill op: op: 2771.0 m31000| Fri Feb 22 11:56:37.197 [conn1] going to kill op: op: 2772.0 m31000| Fri Feb 22 11:56:37.199 [conn140] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.199 [conn140] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|9 } } cursorid:193531728944410 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:37.199 [conn140] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:37.199 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.199 [conn140] end connection 165.225.128.186:37112 (7 connections now open) m31000| Fri Feb 22 11:56:37.199 [initandlisten] connection accepted from 165.225.128.186:34111 #142 (8 connections now open) 
m31000| Fri Feb 22 11:56:37.201 [conn141] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.201 [conn141] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|9 } } cursorid:193537197421312 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:37.201 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.201 [conn141] end connection 165.225.128.186:53996 (7 connections now open) m31000| Fri Feb 22 11:56:37.202 [initandlisten] connection accepted from 165.225.128.186:53293 #143 (8 connections now open) m31000| Fri Feb 22 11:56:37.297 [conn1] going to kill op: op: 2810.0 m31000| Fri Feb 22 11:56:37.298 [conn1] going to kill op: op: 2809.0 m31000| Fri Feb 22 11:56:37.301 [conn142] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.301 [conn142] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|19 } } cursorid:193971399303517 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:37.301 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.301 [conn142] end connection 165.225.128.186:34111 (7 connections now open) m31000| Fri Feb 22 11:56:37.302 [initandlisten] connection accepted from 165.225.128.186:62845 #144 (8 connections now open) m31000| Fri Feb 22 11:56:37.304 [conn143] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.304 [conn143] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|19 } } cursorid:193975631265377 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:37.304 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.304 [conn143] 
end connection 165.225.128.186:53293 (7 connections now open) m31000| Fri Feb 22 11:56:37.305 [initandlisten] connection accepted from 165.225.128.186:39140 #145 (8 connections now open) m31000| Fri Feb 22 11:56:37.398 [conn1] going to kill op: op: 2848.0 m31000| Fri Feb 22 11:56:37.399 [conn1] going to kill op: op: 2847.0 m31000| Fri Feb 22 11:56:37.405 [conn144] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.405 [conn144] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|29 } } cursorid:194365903125463 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:37.405 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.405 [conn144] end connection 165.225.128.186:62845 (7 connections now open) m31000| Fri Feb 22 11:56:37.406 [initandlisten] connection accepted from 165.225.128.186:64843 #146 (8 connections now open) m31000| Fri Feb 22 11:56:37.407 [conn145] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.407 [conn145] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|29 } } cursorid:194370495774285 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:37.407 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.407 [conn145] end connection 165.225.128.186:39140 (7 connections now open) m31000| Fri Feb 22 11:56:37.408 [initandlisten] connection accepted from 165.225.128.186:34928 #147 (8 connections now open) m31000| Fri Feb 22 11:56:37.499 [conn1] going to kill op: op: 2883.0 m31000| Fri Feb 22 11:56:37.499 [conn1] going to kill op: op: 2885.0 m31000| Fri Feb 22 11:56:37.500 [conn147] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.500 [conn147] getmore local.oplog.rs query: { ts: 
{ $gte: Timestamp 1361534197000|39 } } cursorid:194807858591107 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:37.500 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.500 [conn147] end connection 165.225.128.186:34928 (7 connections now open) m31000| Fri Feb 22 11:56:37.500 [initandlisten] connection accepted from 165.225.128.186:46692 #148 (8 connections now open) m31000| Fri Feb 22 11:56:37.508 [conn146] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.508 [conn146] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|39 } } cursorid:194804420134014 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:37.508 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:37.508 [conn146] end connection 165.225.128.186:64843 (7 connections now open) m31000| Fri Feb 22 11:56:37.508 [initandlisten] connection accepted from 165.225.128.186:38896 #149 (8 connections now open) m31000| Fri Feb 22 11:56:37.600 [conn1] going to kill op: op: 2925.0 m31000| Fri Feb 22 11:56:37.600 [conn1] going to kill op: op: 2923.0 m31000| Fri Feb 22 11:56:37.600 [conn1] going to kill op: op: 2921.0 m31000| Fri Feb 22 11:56:37.601 [conn149] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:37.601 [conn149] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|49 } } cursorid:195202156264147 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:37.601 [conn149] ClientCursor::find(): cursor not found in map '195202156264147' (ok after a drop) m31002| Fri Feb 22 11:56:37.601 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:56:37.601 [conn149] end connection 165.225.128.186:38896 (7 connections now open)
m31000| Fri Feb 22 11:56:37.601 [initandlisten] connection accepted from 165.225.128.186:39652 #150 (8 connections now open)
m31000| Fri Feb 22 11:56:37.603 [conn148] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.603 [conn148] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|48 } } cursorid:195198178366199 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:37.603 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.603 [conn148] end connection 165.225.128.186:46692 (7 connections now open)
m31000| Fri Feb 22 11:56:37.603 [initandlisten] connection accepted from 165.225.128.186:34418 #151 (8 connections now open)
m31000| Fri Feb 22 11:56:37.607 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.607 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|59 } } cursorid:195547376571747 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:37.607 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.701 [conn1] going to kill op: op: 2962.0
m31000| Fri Feb 22 11:56:37.701 [conn1] going to kill op: op: 2964.0
m31000| Fri Feb 22 11:56:37.703 [conn150] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.703 [conn150] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|58 } } cursorid:195593801849868 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:37.703 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.703 [conn150] end connection 165.225.128.186:39652 (7 connections now open)
m31000| Fri Feb 22 11:56:37.704 [initandlisten] connection accepted from 165.225.128.186:33892 #152 (8 connections now open)
m31000| Fri Feb 22 11:56:37.706 [conn151] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.706 [conn151] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|59 } } cursorid:195598538623464 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:37.706 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.706 [conn151] end connection 165.225.128.186:34418 (7 connections now open)
m31000| Fri Feb 22 11:56:37.706 [initandlisten] connection accepted from 165.225.128.186:34028 #153 (8 connections now open)
m31000| Fri Feb 22 11:56:37.802 [conn1] going to kill op: op: 3002.0
m31000| Fri Feb 22 11:56:37.802 [conn1] going to kill op: op: 3001.0
m31000| Fri Feb 22 11:56:37.806 [conn152] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.806 [conn152] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|68 } } cursorid:195989463589216 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:37.806 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.806 [conn152] end connection 165.225.128.186:33892 (7 connections now open)
m31000| Fri Feb 22 11:56:37.806 [initandlisten] connection accepted from 165.225.128.186:40161 #154 (8 connections now open)
m31000| Fri Feb 22 11:56:37.809 [conn153] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.809 [conn153] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|69 } } cursorid:195992554175213 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:37.809 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.809 [conn153] end connection 165.225.128.186:34028 (7 connections now open)
m31000| Fri Feb 22 11:56:37.809 [initandlisten] connection accepted from 165.225.128.186:49494 #155 (8 connections now open)
m31000| Fri Feb 22 11:56:37.902 [conn1] going to kill op: op: 3039.0
m31000| Fri Feb 22 11:56:37.903 [conn1] going to kill op: op: 3040.0
m31000| Fri Feb 22 11:56:37.908 [conn154] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.908 [conn154] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|79 } } cursorid:196384433766292 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:37.908 [conn154] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:37.908 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.908 [conn154] end connection 165.225.128.186:40161 (7 connections now open)
m31000| Fri Feb 22 11:56:37.909 [initandlisten] connection accepted from 165.225.128.186:57324 #156 (8 connections now open)
m31000| Fri Feb 22 11:56:37.911 [conn155] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:37.911 [conn155] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|79 } } cursorid:196387407486273 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:37.912 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:37.912 [conn155] end connection 165.225.128.186:49494 (7 connections now open)
m31000| Fri Feb 22 11:56:37.912 [initandlisten] connection accepted from 165.225.128.186:37985 #157 (8 connections now open)
m31000| Fri Feb 22 11:56:38.003 [conn1] going to kill op: op: 3085.0
m31000| Fri Feb 22 11:56:38.003 [conn1] going to kill op: op: 3088.0
m31000| Fri Feb 22 11:56:38.004 [conn1] going to kill op: op: 3086.0
m31000| Fri Feb 22 11:56:38.004 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.004 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|89 } } cursorid:196775125137209 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:87 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:38.004 [conn157] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:56:38.004 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.004 [conn157] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|89 } } cursorid:196782543445971 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.005 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.005 [conn157] end connection 165.225.128.186:37985 (7 connections now open)
m31000| Fri Feb 22 11:56:38.005 [initandlisten] connection accepted from 165.225.128.186:56565 #158 (8 connections now open)
m31000| Fri Feb 22 11:56:38.011 [conn156] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.011 [conn156] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|89 } } cursorid:196778701381983 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.011 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.011 [conn156] end connection 165.225.128.186:57324 (7 connections now open)
m31000| Fri Feb 22 11:56:38.011 [initandlisten] connection accepted from 165.225.128.186:63904 #159 (8 connections now open)
m31000| Fri Feb 22 11:56:38.104 [conn1] going to kill op: op: 3130.0
m31000| Fri Feb 22 11:56:38.105 [conn1] going to kill op: op: 3128.0
m31000| Fri Feb 22 11:56:38.107 [conn158] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.107 [conn158] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|98 } } cursorid:197174140690971 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.107 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.107 [conn158] end connection 165.225.128.186:56565 (7 connections now open)
m31000| Fri Feb 22 11:56:38.107 [initandlisten] connection accepted from 165.225.128.186:37975 #160 (8 connections now open)
m31000| Fri Feb 22 11:56:38.114 [conn159] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.114 [conn159] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534197000|99 } } cursorid:197178413371947 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.114 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.114 [conn159] end connection 165.225.128.186:63904 (7 connections now open)
m31000| Fri Feb 22 11:56:38.114 [initandlisten] connection accepted from 165.225.128.186:62922 #161 (8 connections now open)
m31000| Fri Feb 22 11:56:38.205 [conn1] going to kill op: op: 3166.0
m31000| Fri Feb 22 11:56:38.205 [conn1] going to kill op: op: 3165.0
m31000| Fri Feb 22 11:56:38.206 [conn161] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.206 [conn161] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|10 } } cursorid:197572956679905 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.206 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.206 [conn161] end connection 165.225.128.186:62922 (7 connections now open)
m31000| Fri Feb 22 11:56:38.206 [initandlisten] connection accepted from 165.225.128.186:60236 #162 (8 connections now open)
m31000| Fri Feb 22 11:56:38.209 [conn160] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.209 [conn160] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|9 } } cursorid:197568503742351 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:38.209 [conn160] ClientCursor::find(): cursor not found in map '197568503742351' (ok after a drop)
m31001| Fri Feb 22 11:56:38.209 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.210 [conn160] end connection 165.225.128.186:37975 (7 connections now open)
m31000| Fri Feb 22 11:56:38.210 [initandlisten] connection accepted from 165.225.128.186:59826 #163 (8 connections now open)
m31000| Fri Feb 22 11:56:38.306 [conn1] going to kill op: op: 3205.0
m31000| Fri Feb 22 11:56:38.306 [conn1] going to kill op: op: 3204.0
m31000| Fri Feb 22 11:56:38.308 [conn162] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.308 [conn162] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|19 } } cursorid:197964035768468 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.309 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.309 [conn162] end connection 165.225.128.186:60236 (7 connections now open)
m31000| Fri Feb 22 11:56:38.309 [initandlisten] connection accepted from 165.225.128.186:51630 #164 (8 connections now open)
m31000| Fri Feb 22 11:56:38.312 [conn163] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.312 [conn163] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|19 } } cursorid:197969631455368 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.312 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.312 [conn163] end connection 165.225.128.186:59826 (7 connections now open)
m31000| Fri Feb 22 11:56:38.312 [initandlisten] connection accepted from 165.225.128.186:34084 #165 (8 connections now open)
m31000| Fri Feb 22 11:56:38.407 [conn1] going to kill op: op: 3243.0
m31000| Fri Feb 22 11:56:38.407 [conn1] going to kill op: op: 3242.0
m31000| Fri Feb 22 11:56:38.411 [conn164] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.411 [conn164] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|29 } } cursorid:198358806737627 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.411 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.412 [conn164] end connection 165.225.128.186:51630 (7 connections now open)
m31000| Fri Feb 22 11:56:38.412 [initandlisten] connection accepted from 165.225.128.186:36291 #166 (8 connections now open)
m31000| Fri Feb 22 11:56:38.414 [conn165] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.414 [conn165] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|29 } } cursorid:198363373967285 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.414 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.415 [conn165] end connection 165.225.128.186:34084 (7 connections now open)
m31000| Fri Feb 22 11:56:38.415 [initandlisten] connection accepted from 165.225.128.186:60665 #167 (8 connections now open)
m31000| Fri Feb 22 11:56:38.508 [conn1] going to kill op: op: 3283.0
m31000| Fri Feb 22 11:56:38.508 [conn1] going to kill op: op: 3281.0
m31000| Fri Feb 22 11:56:38.514 [conn166] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.514 [conn166] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|39 } } cursorid:198755099565518 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.514 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.514 [conn166] end connection 165.225.128.186:36291 (7 connections now open)
m31000| Fri Feb 22 11:56:38.514 [initandlisten] connection accepted from 165.225.128.186:49691 #168 (8 connections now open)
m31000| Fri Feb 22 11:56:38.517 [conn167] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.517 [conn167] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|39 } } cursorid:198758798291794 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.517 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.517 [conn167] end connection 165.225.128.186:60665 (7 connections now open)
m31000| Fri Feb 22 11:56:38.517 [initandlisten] connection accepted from 165.225.128.186:44803 #169 (8 connections now open)
m31000| Fri Feb 22 11:56:38.608 [conn1] going to kill op: op: 3321.0
m31000| Fri Feb 22 11:56:38.609 [conn1] going to kill op: op: 3319.0
m31000| Fri Feb 22 11:56:38.609 [conn1] going to kill op: op: 3318.0
m31000| Fri Feb 22 11:56:38.609 [conn169] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.609 [conn169] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|50 } } cursorid:199154792271268 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:38.609 [conn169] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:38.609 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.609 [conn169] end connection 165.225.128.186:44803 (7 connections now open)
m31000| Fri Feb 22 11:56:38.610 [initandlisten] connection accepted from 165.225.128.186:46042 #170 (8 connections now open)
m31000| Fri Feb 22 11:56:38.616 [conn168] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.616 [conn168] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|49 } } cursorid:199149130499019 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.616 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.616 [conn168] end connection 165.225.128.186:49691 (7 connections now open)
m31000| Fri Feb 22 11:56:38.617 [initandlisten] connection accepted from 165.225.128.186:37180 #171 (8 connections now open)
m31000| Fri Feb 22 11:56:38.617 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.617 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|59 } } cursorid:199498431842831 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.617 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.709 [conn1] going to kill op: op: 3359.0
m31000| Fri Feb 22 11:56:38.710 [conn1] going to kill op: op: 3360.0
m31000| Fri Feb 22 11:56:38.712 [conn170] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.712 [conn170] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|59 } } cursorid:199545374406240 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.712 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.712 [conn170] end connection 165.225.128.186:46042 (7 connections now open)
m31000| Fri Feb 22 11:56:38.712 [initandlisten] connection accepted from 165.225.128.186:50091 #172 (8 connections now open)
m31000| Fri Feb 22 11:56:38.719 [conn171] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.719 [conn171] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|59 } } cursorid:199550363749650 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.719 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.719 [conn171] end connection 165.225.128.186:37180 (7 connections now open)
m31000| Fri Feb 22 11:56:38.720 [initandlisten] connection accepted from 165.225.128.186:53712 #173 (8 connections now open)
m31000| Fri Feb 22 11:56:38.810 [conn1] going to kill op: op: 3397.0
m31000| Fri Feb 22 11:56:38.811 [conn1] going to kill op: op: 3396.0
m31000| Fri Feb 22 11:56:38.811 [conn173] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.811 [conn173] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|69 } } cursorid:199945729161147 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.811 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.812 [conn173] end connection 165.225.128.186:53712 (7 connections now open)
m31000| Fri Feb 22 11:56:38.812 [initandlisten] connection accepted from 165.225.128.186:51544 #174 (8 connections now open)
m31000| Fri Feb 22 11:56:38.814 [conn172] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.814 [conn172] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|69 } } cursorid:199939537207849 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:38.815 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.815 [conn172] end connection 165.225.128.186:50091 (7 connections now open)
m31000| Fri Feb 22 11:56:38.815 [initandlisten] connection accepted from 165.225.128.186:59043 #175 (8 connections now open)
m31000| Fri Feb 22 11:56:38.911 [conn1] going to kill op: op: 3435.0
m31000| Fri Feb 22 11:56:38.911 [conn1] going to kill op: op: 3434.0
m31000| Fri Feb 22 11:56:38.914 [conn174] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.914 [conn174] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|79 } } cursorid:200334597303646 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:38.914 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.914 [conn174] end connection 165.225.128.186:51544 (7 connections now open)
m31000| Fri Feb 22 11:56:38.915 [initandlisten] connection accepted from 165.225.128.186:50317 #176 (8 connections now open)
m31000| Fri Feb 22 11:56:38.918 [conn175] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:38.918 [conn175] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|79 } } cursorid:200340703150678 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:38.918 [conn175] ClientCursor::find(): cursor not found in map '200340703150678' (ok after a drop)
m31001| Fri Feb 22 11:56:38.918 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:38.918 [conn175] end connection 165.225.128.186:59043 (7 connections now open)
m31000| Fri Feb 22 11:56:38.918 [initandlisten] connection accepted from 165.225.128.186:38434 #177 (8 connections now open)
m31000| Fri Feb 22 11:56:39.012 [conn1] going to kill op: op: 3474.0
m31000| Fri Feb 22 11:56:39.012 [conn1] going to kill op: op: 3475.0
m31000| Fri Feb 22 11:56:39.013 [conn1] going to kill op: op: 3473.0
m31000| Fri Feb 22 11:56:39.017 [conn176] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.017 [conn176] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|89 } } cursorid:200730725518829 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:93 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.017 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.018 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.018 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|98 } } cursorid:201079370644229 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:39.018 [conn176] end connection 165.225.128.186:50317 (7 connections now open)
m31000| Fri Feb 22 11:56:39.018 [initandlisten] connection accepted from 165.225.128.186:40221 #178 (8 connections now open)
m31002| Fri Feb 22 11:56:39.019 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.021 [conn177] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.021 [conn177] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534198000|89 } } cursorid:200735562655926 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.021 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.021 [conn177] end connection 165.225.128.186:38434 (7 connections now open)
m31000| Fri Feb 22 11:56:39.021 [initandlisten] connection accepted from 165.225.128.186:40935 #179 (8 connections now open)
m31000| Fri Feb 22 11:56:39.113 [conn1] going to kill op: op: 3513.0
m31000| Fri Feb 22 11:56:39.114 [conn1] going to kill op: op: 3515.0
m31000| Fri Feb 22 11:56:39.120 [conn178] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.120 [conn178] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|1 } } cursorid:201126539110605 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.120 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.120 [conn178] end connection 165.225.128.186:40221 (7 connections now open)
m31000| Fri Feb 22 11:56:39.121 [initandlisten] connection accepted from 165.225.128.186:59540 #180 (8 connections now open)
m31000| Fri Feb 22 11:56:39.214 [conn1] going to kill op: op: 3547.0
m31000| Fri Feb 22 11:56:39.215 [conn1] going to kill op: op: 3549.0
m31000| Fri Feb 22 11:56:39.215 [conn179] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.215 [conn179] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|1 } } cursorid:201129130909173 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.215 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.215 [conn179] end connection 165.225.128.186:40935 (7 connections now open)
m31000| Fri Feb 22 11:56:39.215 [initandlisten] connection accepted from 165.225.128.186:62608 #181 (8 connections now open)
m31000| Fri Feb 22 11:56:39.223 [conn180] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.223 [conn180] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|11 } } cursorid:201520794830308 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.223 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.223 [conn180] end connection 165.225.128.186:59540 (7 connections now open)
m31000| Fri Feb 22 11:56:39.223 [initandlisten] connection accepted from 165.225.128.186:57143 #182 (8 connections now open)
m31000| Fri Feb 22 11:56:39.316 [conn1] going to kill op: op: 3585.0
m31000| Fri Feb 22 11:56:39.316 [conn1] going to kill op: op: 3584.0
m31000| Fri Feb 22 11:56:39.317 [conn181] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.317 [conn181] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|20 } } cursorid:201911400135058 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:39.317 [conn181] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:39.317 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.317 [conn181] end connection 165.225.128.186:62608 (7 connections now open)
m31000| Fri Feb 22 11:56:39.318 [initandlisten] connection accepted from 165.225.128.186:59995 #183 (8 connections now open)
m31000| Fri Feb 22 11:56:39.416 [conn1] going to kill op: op: 3618.0
m31000| Fri Feb 22 11:56:39.417 [conn1] going to kill op: op: 3619.0
m31000| Fri Feb 22 11:56:39.417 [conn182] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.417 [conn182] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|21 } } cursorid:201915737915232 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.417 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.418 [conn182] end connection 165.225.128.186:57143 (7 connections now open)
m31000| Fri Feb 22 11:56:39.418 [initandlisten] connection accepted from 165.225.128.186:42040 #184 (8 connections now open)
m31000| Fri Feb 22 11:56:39.420 [conn183] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.420 [conn183] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|30 } } cursorid:202307338423733 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.420 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.420 [conn183] end connection 165.225.128.186:59995 (7 connections now open)
m31000| Fri Feb 22 11:56:39.421 [initandlisten] connection accepted from 165.225.128.186:33181 #185 (8 connections now open)
m31000| Fri Feb 22 11:56:39.517 [conn1] going to kill op: op: 3659.0
m31000| Fri Feb 22 11:56:39.517 [conn1] going to kill op: op: 3657.0
m31000| Fri Feb 22 11:56:39.520 [conn184] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.520 [conn184] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|40 } } cursorid:202698240608915 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.520 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.520 [conn184] end connection 165.225.128.186:42040 (7 connections now open)
m31000| Fri Feb 22 11:56:39.520 [initandlisten] connection accepted from 165.225.128.186:55423 #186 (8 connections now open)
m31000| Fri Feb 22 11:56:39.522 [conn185] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.522 [conn185] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|40 } } cursorid:202702906882990 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.522 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.522 [conn185] end connection 165.225.128.186:33181 (7 connections now open)
m31000| Fri Feb 22 11:56:39.523 [initandlisten] connection accepted from 165.225.128.186:64352 #187 (8 connections now open)
m31000| Fri Feb 22 11:56:39.618 [conn1] going to kill op: op: 3695.0
m31000| Fri Feb 22 11:56:39.618 [conn1] going to kill op: op: 3697.0
m31000| Fri Feb 22 11:56:39.622 [conn186] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.622 [conn186] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|50 } } cursorid:203092466266914 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.622 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.622 [conn186] end connection 165.225.128.186:55423 (7 connections now open)
m31000| Fri Feb 22 11:56:39.623 [initandlisten] connection accepted from 165.225.128.186:50348 #188 (8 connections now open)
m31000| Fri Feb 22 11:56:39.625 [conn187] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.625 [conn187] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|51 } } cursorid:203097691367986 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.625 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.625 [conn187] end connection 165.225.128.186:64352 (7 connections now open)
m31000| Fri Feb 22 11:56:39.625 [initandlisten] connection accepted from 165.225.128.186:48904 #189 (8 connections now open)
m31000| Fri Feb 22 11:56:39.719 [conn1] going to kill op: op: 3743.0
m31000| Fri Feb 22 11:56:39.719 [conn1] going to kill op: op: 3744.0
m31000| Fri Feb 22 11:56:39.719 [conn1] going to kill op: op: 3746.0
m31000| Fri Feb 22 11:56:39.719 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.719 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|61 } } cursorid:203483667383298 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.719 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.725 [conn188] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.725 [conn188] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|60 } } cursorid:203488317631326 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:39.725 [conn188] ClientCursor::find(): cursor not found in map '203488317631326' (ok after a drop)
m31002| Fri Feb 22 11:56:39.725 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.725 [conn188] end connection 165.225.128.186:50348 (7 connections now open)
m31000| Fri Feb 22 11:56:39.725 [initandlisten] connection accepted from 165.225.128.186:36481 #190 (8 connections now open)
m31000| Fri Feb 22 11:56:39.727 [conn189] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.727 [conn189] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|61 } } cursorid:203492262865560 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:39.728 [conn189] end connection 165.225.128.186:48904 (7 connections now open)
m31001| Fri Feb 22 11:56:39.728 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.728 [initandlisten] connection accepted from 165.225.128.186:62526 #191 (8 connections now open)
m31000| Fri Feb 22 11:56:39.820 [conn1] going to kill op: op: 3785.0
m31000| Fri Feb 22 11:56:39.828 [conn190] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.828 [conn190] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|70 } } cursorid:203882450662276 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.828 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.828 [conn190] end connection 165.225.128.186:36481 (7 connections now open)
m31000| Fri Feb 22 11:56:39.828 [initandlisten] connection accepted from 165.225.128.186:48995 #192 (8 connections now open)
m31000| Fri Feb 22 11:56:39.921 [conn1] going to kill op: op: 3819.0
m31000| Fri Feb 22 11:56:39.921 [conn1] going to kill op: op: 3817.0
m31000| Fri Feb 22 11:56:39.921 [conn191] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.922 [conn191] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|71 } } cursorid:203888203765232 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:39.922 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.922 [conn191] end connection 165.225.128.186:62526 (7 connections now open)
m31000| Fri Feb 22 11:56:39.922 [initandlisten] connection accepted from 165.225.128.186:48687 #193 (8 connections now open)
m31000| Fri Feb 22 11:56:39.930 [conn192] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:39.930 [conn192] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|81 } } cursorid:204321272511248 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:39.930 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:39.930 [conn192] end connection 165.225.128.186:48995 (7 connections now open)
m31000| Fri Feb 22 11:56:39.931 [initandlisten] connection accepted from 165.225.128.186:52880 #194 (8 connections now open)
m31000| Fri Feb 22 11:56:40.022 [conn1] going to kill op: op: 3857.0
m31000| Fri Feb 22 11:56:40.022 [conn1] going to kill op: op: 3854.0
m31000| Fri Feb 22 11:56:40.022 [conn1] going to kill op: op: 3855.0
m31000| Fri Feb 22 11:56:40.023 [conn194] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.023 [conn194] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|91 } } cursorid:204717206231275 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.023 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.023 [conn194] end connection 165.225.128.186:52880 (7 connections now open)
m31000| Fri Feb 22 11:56:40.023 [initandlisten] connection accepted from 165.225.128.186:45241 #195 (8 connections now open)
m31000| Fri Feb 22 11:56:40.024 [conn193] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.024 [conn193] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534199000|90 } } cursorid:204712038785499 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.025 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.025 [conn193] end connection 165.225.128.186:48687 (7 connections now open)
m31000| Fri Feb 22 11:56:40.025 [initandlisten] connection accepted from 165.225.128.186:47291 #196 (8 connections now open)
m31000| Fri Feb 22 11:56:40.029 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.029 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|1 } } cursorid:205059397033414 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:40.029 [conn12] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:40.029 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.123 [conn1] going to kill op: op: 3898.0
m31000| Fri Feb 22 11:56:40.123 [conn1] going to kill op: op: 3899.0
m31000| Fri Feb 22 11:56:40.126 [conn195] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.126 [conn195] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|1 } } cursorid:205106936960020 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.126 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.126 [conn195] end connection 165.225.128.186:45241 (7 connections now open)
m31000| Fri Feb 22 11:56:40.126 [initandlisten] connection accepted from 165.225.128.186:55689 #197 (8 connections now open)
m31000| Fri Feb 22 11:56:40.128 [conn196] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.128 [conn196] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|1 } } cursorid:205111157110818 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.128 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.128 [conn196] end connection 165.225.128.186:47291 (7 connections now open)
m31000| Fri Feb 22 11:56:40.128 [initandlisten] connection accepted from 165.225.128.186:32831 #198 (8 connections now open)
m31000| Fri Feb 22 11:56:40.224 [conn1] going to kill op: op: 3937.0
m31000| Fri Feb 22 11:56:40.224 [conn1] going to kill op: op: 3936.0
m31000| Fri Feb 22 11:56:40.229 [conn197] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.229 [conn197] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|11 } } cursorid:205545843919667 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.229 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.229 [conn197] end connection 165.225.128.186:55689 (7 connections now open)
m31000| Fri Feb 22 11:56:40.229 [initandlisten] connection accepted from 165.225.128.186:56984 #199 (8 connections now open)
m31000| Fri Feb 22 11:56:40.230 [conn198] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.230 [conn198] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|11 } } cursorid:205549058180929 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.230 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.230 [conn198] end connection 165.225.128.186:32831 (7 connections now open)
m31000| Fri Feb 22 11:56:40.231 [initandlisten] connection accepted from 165.225.128.186:64928 #200 (8 connections now open)
m31000| Fri Feb 22 11:56:40.325 [conn1] going to kill op: op: 3975.0
m31000| Fri Feb 22 11:56:40.325 [conn1] going to kill op: op: 3976.0
m31000| Fri Feb 22 11:56:40.331 [conn199] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.332 [conn199] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|21 } } cursorid:205982662992702 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.332 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.332 [conn199] end connection 165.225.128.186:56984 (7 connections now open)
m31000| Fri Feb 22 11:56:40.332 [initandlisten] connection accepted from 165.225.128.186:63858 #201 (8 connections now open)
m31000| Fri Feb 22 11:56:40.333 [conn200] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.333 [conn200] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|21 } } cursorid:205987128848107 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.333 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.333 [conn200] end connection 165.225.128.186:64928 (7 connections now open)
m31000| Fri Feb 22 11:56:40.333 [initandlisten] connection accepted from 165.225.128.186:38186 #202 (8 connections now open)
m31000| Fri Feb 22 11:56:40.425 [conn1] going to kill op: op: 4011.0
m31000| Fri Feb 22 11:56:40.426 [conn1] going to kill op: op: 4013.0
m31000| Fri Feb 22 11:56:40.434 [conn201] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.434 [conn201] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|31 } } cursorid:206422214154777 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.434 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.434 [conn201] end connection 165.225.128.186:63858 (7 connections now open)
m31000| Fri Feb 22 11:56:40.434 [initandlisten] connection accepted from 165.225.128.186:45243 #203 (8 connections now open)
m31000| Fri Feb 22 11:56:40.526 [conn1] going to kill op: op: 4044.0
m31000| Fri Feb 22 11:56:40.527 [conn1] going to kill op: op: 4045.0
m31000| Fri Feb 22 11:56:40.527 [conn202] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.527 [conn202] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|31 } } cursorid:206426749071820 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:40.527 [conn202] ClientCursor::find(): cursor not found in map '206426749071820' (ok after a drop)
m31001| Fri Feb 22 11:56:40.527 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.527 [conn202] end connection 165.225.128.186:38186 (7 connections now open)
m31000| Fri Feb 22 11:56:40.528 [initandlisten] connection accepted from 165.225.128.186:51951 #204 (8 connections now open)
m31000| Fri Feb 22 11:56:40.627 [conn1] going to kill op: op: 4078.0
m31000| Fri Feb 22 11:56:40.627 [conn1] going to kill op: op: 4080.0
m31000| Fri Feb 22 11:56:40.628 [conn203] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.628 [conn203] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|41 } } cursorid:206860311792963 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.628 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.628 [conn203] end connection 165.225.128.186:45243 (7 connections now open)
m31000| Fri Feb 22 11:56:40.628 [initandlisten] connection accepted from 165.225.128.186:56068 #205 (8 connections now open)
m31000| Fri Feb 22 11:56:40.630 [conn204] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.630 [conn204] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|50 } } cursorid:207249543223533 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.630 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.630 [conn204] end connection 165.225.128.186:51951 (7 connections now open)
m31000| Fri Feb 22 11:56:40.631 [initandlisten] connection accepted from 165.225.128.186:46071 #206 (8 connections now open)
m31000| Fri Feb 22 11:56:40.728 [conn1] going to kill op: op: 4118.0
m31000| Fri Feb 22 11:56:40.728 [conn1] going to kill op: op: 4120.0
m31000| Fri Feb 22 11:56:40.729 [conn1] going to kill op: op: 4121.0
m31000| Fri Feb 22 11:56:40.730 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.730 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|70 } } cursorid:208032236781393 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.730 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.731 [conn205] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.731 [conn205] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|60 } } cursorid:207685154530762 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.731 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.731 [conn205] end connection 165.225.128.186:56068 (7 connections now open)
m31000| Fri Feb 22 11:56:40.731 [initandlisten] connection accepted from 165.225.128.186:61529 #207 (8 connections now open)
m31000| Fri Feb 22 11:56:40.733 [conn206] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.733 [conn206] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|61 } } cursorid:207687614081946 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.733 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.733 [conn206] end connection 165.225.128.186:46071 (7 connections now open)
m31000| Fri Feb 22 11:56:40.734 [initandlisten] connection accepted from 165.225.128.186:41980 #208 (8 connections now open)
m31000| Fri Feb 22 11:56:40.829 [conn1] going to kill op: op: 4160.0
m31000| Fri Feb 22 11:56:40.829 [conn1] going to kill op: op: 4159.0
m31000| Fri Feb 22 11:56:40.833 [conn207] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.833 [conn207] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|71 } } cursorid:208122824014511 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.834 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.834 [conn207] end connection 165.225.128.186:61529 (7 connections now open)
m31000| Fri Feb 22 11:56:40.834 [initandlisten] connection accepted from 165.225.128.186:47315 #209 (8 connections now open)
m31000| Fri Feb 22 11:56:40.836 [conn208] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.836 [conn208] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|71 } } cursorid:208126392078447 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:40.836 [conn208] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:40.836 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.836 [conn208] end connection 165.225.128.186:41980 (7 connections now open)
m31000| Fri Feb 22 11:56:40.836 [initandlisten] connection accepted from 165.225.128.186:47635 #210 (8 connections now open)
m31000| Fri Feb 22 11:56:40.930 [conn1] going to kill op: op: 4197.0
m31000| Fri Feb 22 11:56:40.930 [conn1] going to kill op: op: 4198.0
m31000| Fri Feb 22 11:56:40.936 [conn209] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.936 [conn209] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|81 } } cursorid:208560016318520 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:40.936 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.936 [conn209] end connection 165.225.128.186:47315 (7 connections now open)
m31000| Fri Feb 22 11:56:40.936 [initandlisten] connection accepted from 165.225.128.186:37617 #211 (8 connections now open)
m31000| Fri Feb 22 11:56:40.939 [conn210] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:40.939 [conn210] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|81 } } cursorid:208565232571328 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:40.939 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:40.939 [conn210] end connection 165.225.128.186:47635 (7 connections now open)
m31000| Fri Feb 22 11:56:40.939 [initandlisten] connection accepted from 165.225.128.186:47603 #212 (8 connections now open)
m31000| Fri Feb 22 11:56:41.031 [conn1] going to kill op: op: 4237.0
m31000| Fri Feb 22 11:56:41.031 [conn1] going to kill op: op: 4235.0
m31000| Fri Feb 22 11:56:41.031 [conn1] going to kill op: op: 4233.0
m31000| Fri Feb 22 11:56:41.039 [conn211] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.039 [conn211] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|91 } } cursorid:208954963541560 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.039 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.039 [conn211] end connection 165.225.128.186:37617 (7 connections now open)
m31000| Fri Feb 22 11:56:41.039 [initandlisten] connection accepted from 165.225.128.186:54885 #213 (8 connections now open)
m31000| Fri Feb 22 11:56:41.040 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.040 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|3 } } cursorid:209346620383527 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.040 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.132 [conn1] going to kill op: op: 4272.0
m31000| Fri Feb 22 11:56:41.132 [conn1] going to kill op: op: 4274.0
m31000| Fri Feb 22 11:56:41.133 [conn212] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.133 [conn212] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534200000|91 } } cursorid:208960642922595 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:101 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.134 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.134 [conn212] end connection 165.225.128.186:47603 (7 connections now open)
m31000| Fri Feb 22 11:56:41.134 [initandlisten] connection accepted from 165.225.128.186:59010 #214 (8 connections now open)
m31000| Fri Feb 22 11:56:41.142 [conn213] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.142 [conn213] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|3 } } cursorid:209350787459614 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.142 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.142 [conn213] end connection 165.225.128.186:54885 (7 connections now open)
m31000| Fri Feb 22 11:56:41.142 [initandlisten] connection accepted from 165.225.128.186:49432 #215 (8 connections now open)
m31000| Fri Feb 22 11:56:41.233 [conn1] going to kill op: op: 4309.0
m31000| Fri Feb 22 11:56:41.233 [conn1] going to kill op: op: 4310.0
m31000| Fri Feb 22 11:56:41.235 [conn215] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.235 [conn215] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|13 } } cursorid:209745725241565 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.235 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.235 [conn215] end connection 165.225.128.186:49432 (7 connections now open)
m31000| Fri Feb 22 11:56:41.235 [initandlisten] connection accepted from 165.225.128.186:64712 #216 (8 connections now open)
m31000| Fri Feb 22 11:56:41.236 [conn214] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.236 [conn214] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|12 } } cursorid:209740751090279 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:41.236 [conn214] ClientCursor::find(): cursor not found in map '209740751090279' (ok after a drop)
m31001| Fri Feb 22 11:56:41.236 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.236 [conn214] end connection 165.225.128.186:59010 (7 connections now open)
m31000| Fri Feb 22 11:56:41.237 [initandlisten] connection accepted from 165.225.128.186:47219 #217 (8 connections now open)
m31000| Fri Feb 22 11:56:41.334 [conn1] going to kill op: op: 4347.0
m31000| Fri Feb 22 11:56:41.334 [conn1] going to kill op: op: 4349.0
m31000| Fri Feb 22 11:56:41.337 [conn216] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.337 [conn216] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|22 } } cursorid:210136455004742 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.338 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.338 [conn216] end connection 165.225.128.186:64712 (7 connections now open)
m31000| Fri Feb 22 11:56:41.338 [initandlisten] connection accepted from 165.225.128.186:54615 #218 (8 connections now open)
m31000| Fri Feb 22 11:56:41.339 [conn217] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.339 [conn217] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|22 } } cursorid:210141317493361 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.339 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.339 [conn217] end connection 165.225.128.186:47219 (7 connections now open)
m31000| Fri Feb 22 11:56:41.339 [initandlisten] connection accepted from 165.225.128.186:42895 #219 (8 connections now open)
m31000| Fri Feb 22 11:56:41.434 [conn1] going to kill op: op: 4385.0
m31000| Fri Feb 22 11:56:41.435 [conn1] going to kill op: op: 4387.0
m31000| Fri Feb 22 11:56:41.440 [conn218] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.440 [conn218] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|32 } } cursorid:210575264653046 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.440 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.440 [conn218] end connection 165.225.128.186:54615 (7 connections now open)
m31000| Fri Feb 22 11:56:41.440 [initandlisten] connection accepted from 165.225.128.186:46373 #220 (8 connections now open)
m31000| Fri Feb 22 11:56:41.441 [conn219] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.441 [conn219] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|32 } } cursorid:210579701355838 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.441 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.441 [conn219] end connection 165.225.128.186:42895 (7 connections now open)
m31000| Fri Feb 22 11:56:41.442 [initandlisten] connection accepted from 165.225.128.186:57212 #221 (8 connections now open)
m31000| Fri Feb 22 11:56:41.535 [conn1] going to kill op: op: 4424.0
m31000| Fri Feb 22 11:56:41.535 [conn1] going to kill op: op: 4425.0
m31000| Fri Feb 22 11:56:41.542 [conn220] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.542 [conn220] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|42 } } cursorid:211013296750742 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.542 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.542 [conn220] end connection 165.225.128.186:46373 (7 connections now open)
m31000| Fri Feb 22 11:56:41.543 [initandlisten] connection accepted from 165.225.128.186:45616 #222 (8 connections now open)
m31000| Fri Feb 22 11:56:41.544 [conn221] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.544 [conn221] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|43 } } cursorid:211017068312957 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.544 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.544 [conn221] end connection 165.225.128.186:57212 (7 connections now open)
m31000| Fri Feb 22 11:56:41.544 [initandlisten] connection accepted from 165.225.128.186:41935 #223 (8 connections now open)
m31000| Fri Feb 22 11:56:41.636 [conn1] going to kill op: op: 4460.0
m31000| Fri Feb 22 11:56:41.636 [conn223] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.636 [conn223] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|53 } } cursorid:211454371124396 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:41.636 [conn1] going to kill op: op: 4462.0
m31000| Fri Feb 22 11:56:41.636 [conn223] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:41.636 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.636 [conn223] end connection 165.225.128.186:41935 (7 connections now open)
m31000| Fri Feb 22 11:56:41.637 [initandlisten] connection accepted from 165.225.128.186:55603 #224 (8 connections now open)
m31000| Fri Feb 22 11:56:41.645 [conn222] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.645 [conn222] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|53 } } cursorid:211450155997567 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.645 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.645 [conn222] end connection 165.225.128.186:45616 (7 connections now open)
m31000| Fri Feb 22 11:56:41.645 [initandlisten] connection accepted from 165.225.128.186:38961 #225 (8 connections now open)
m31000| Fri Feb 22 11:56:41.737 [conn1] going to kill op: op: 4500.0
m31000| Fri Feb 22 11:56:41.737 [conn1] going to kill op: op: 4498.0
m31000| Fri Feb 22 11:56:41.737 [conn1] going to kill op: op: 4497.0
m31000| Fri Feb 22 11:56:41.739 [conn224] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.739 [conn224] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|62 } } cursorid:211846316577233 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.739 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.739 [conn224] end connection 165.225.128.186:55603 (7 connections now open)
m31000| Fri Feb 22 11:56:41.739 [initandlisten] connection accepted from 165.225.128.186:44056 #226 (8 connections now open)
m31000| Fri Feb 22 11:56:41.741 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.741 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|72 } } cursorid:212193270977741 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.741 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.838 [conn1] going to kill op: op: 4535.0
m31000| Fri Feb 22 11:56:41.838 [conn1] going to kill op: op: 4534.0
m31000| Fri Feb 22 11:56:41.838 [conn225] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.839 [conn225] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|63 } } cursorid:211849830505975 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.839 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.839 [conn225] end connection 165.225.128.186:38961 (7 connections now open)
m31000| Fri Feb 22 11:56:41.839 [initandlisten] connection accepted from 165.225.128.186:36052 #227 (8 connections now open)
m31000| Fri Feb 22 11:56:41.841 [conn226] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.841 [conn226] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|72 } } cursorid:212240767148789 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.841 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.841 [conn226] end connection 165.225.128.186:44056 (7 connections now open)
m31000| Fri Feb 22 11:56:41.841 [initandlisten] connection accepted from 165.225.128.186:38762 #228 (8 connections now open)
m31000| Fri Feb 22 11:56:41.939 [conn1] going to kill op: op: 4573.0
m31000| Fri Feb 22 11:56:41.939 [conn1] going to kill op: op: 4572.0
m31000| Fri Feb 22 11:56:41.941 [conn227] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.941 [conn227] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|82 } } cursorid:212632780340566 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:41.941 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.942 [conn227] end connection 165.225.128.186:36052 (7 connections now open)
m31000| Fri Feb 22 11:56:41.943 [conn228] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:41.943 [conn228] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|82 } } cursorid:212635447694220 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:41.944 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:41.944 [conn228] end connection 165.225.128.186:38762 (6 connections now open)
m31000| Fri Feb 22 11:56:41.944 [initandlisten] connection accepted from 165.225.128.186:33155 #229 (7 connections now open)
m31000| Fri Feb 22 11:56:41.947 [initandlisten] connection accepted from 165.225.128.186:44354 #230 (8 connections now open)
m31000| Fri Feb 22 11:56:42.040 [conn1] going to kill op: op: 4612.0
m31000| Fri Feb 22 11:56:42.040 [conn1] going to kill op: op: 4610.0
m31000| Fri Feb 22 11:56:42.047 [conn229] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.047 [conn229] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|92 } } cursorid:213070614008322 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:42.047 [conn229] ClientCursor::find(): cursor not found in map '213070614008322' (ok after a drop)
m31001| Fri Feb 22 11:56:42.047 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.047 [conn229] end connection 165.225.128.186:33155 (7 connections now open)
m31000| Fri Feb 22 11:56:42.047 [initandlisten] connection accepted from 165.225.128.186:38845 #231 (8 connections now open)
m31000| Fri Feb 22 11:56:42.049 [conn230] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.049 [conn230] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534201000|92 } } cursorid:213074871442891 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:42.049 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.049 [conn230] end connection 165.225.128.186:44354 (7 connections now open)
m31000| Fri Feb 22 11:56:42.050 [initandlisten] connection accepted from 165.225.128.186:41207 #232 (8 connections now open)
m31000| Fri Feb 22 11:56:42.140 [conn1] going to kill op: op: 4661.0
m31000| Fri Feb 22 11:56:42.141 [conn1] going to kill op: op: 4660.0
m31000| Fri Feb 22 11:56:42.141 [conn1] going to kill op: op: 4662.0
m31000| Fri Feb 22 11:56:42.142 [conn232] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.142 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.142 [conn232] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|4 } } cursorid:213469960880412 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:42.142 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|4 } } cursorid:213459950085790 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:42.143 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.143 [conn232] end connection 165.225.128.186:41207 (7 connections now open)
m31000| Fri Feb 22 11:56:42.143 [initandlisten] connection accepted from 165.225.128.186:54495 #233 (8 connections now open)
m31002| Fri Feb 22 11:56:42.144 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.149 [conn231] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.149 [conn231] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|3 } } cursorid:213465222421207 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:42.149 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.149 [conn231] end connection 165.225.128.186:38845 (7 connections now open)
m31000| Fri Feb 22 11:56:42.149 [initandlisten] connection accepted from 165.225.128.186:44767 #234 (8 connections now open)
m31000| Fri Feb 22 11:56:42.242 [conn1] going to kill op: op: 4698.0
m31000| Fri Feb 22 11:56:42.242 [conn1] going to kill op: op: 4700.0
m31000| Fri Feb 22 11:56:42.246 [conn233] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.246 [conn233] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|13 } } cursorid:213859576118428 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:42.246 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.246 [conn233] end connection 165.225.128.186:54495 (7 connections now open)
m31000| Fri Feb 22 11:56:42.246 [initandlisten] connection accepted from 165.225.128.186:37904 #235 (8 connections now open)
m31001| Fri Feb 22 11:56:42.256 [conn4] end connection 165.225.128.186:40619 (2 connections now open)
m31001| Fri Feb 22 11:56:42.256 [initandlisten] connection accepted from 165.225.128.186:38049 #6 (3 connections now open)
m31000| Fri Feb 22 11:56:42.342 [conn1] going to kill op: op: 4733.0
m31000| Fri Feb 22 11:56:42.343 [conn1] going to kill op: op: 4735.0
m31000| Fri Feb 22 11:56:42.343 [conn234] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.344 [conn234] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|13 } } cursorid:213864096223873 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:113 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:42.344 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.344 [conn234] end connection 165.225.128.186:44767 (7 connections now open)
m31000| Fri Feb 22 11:56:42.344 [initandlisten] connection accepted from 165.225.128.186:63094 #236 (8 connections now open)
m31000| Fri Feb 22 11:56:42.348 [conn235] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.348 [conn235] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|23 } } cursorid:214255017203890 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:42.349 [conn235] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:42.349 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.349 [conn235] end connection 165.225.128.186:37904 (7 connections now open)
m31000| Fri Feb 22 11:56:42.349 [initandlisten] connection accepted from 165.225.128.186:49231 #237 (8 connections now open)
m31000| Fri Feb 22 11:56:42.444 [conn1] going to kill op: op: 4772.0
m31000| Fri Feb 22 11:56:42.444 [conn1] going to kill op: op: 4773.0
m31000| Fri Feb 22 11:56:42.446 [conn236] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.446 [conn236] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|32 } } cursorid:214647066893310 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:42.447 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.447 [conn236] end connection 165.225.128.186:63094 (7 connections now open)
m31000| Fri Feb 22 11:56:42.447 [initandlisten] connection accepted from 165.225.128.186:45043 #238 (8 connections now open)
m31000| Fri Feb 22 11:56:42.451 [conn237] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.451 [conn237] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|33 } } cursorid:214650687891163 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:42.451 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.451 [conn237] end connection 165.225.128.186:49231 (7 connections now open)
m31000| Fri Feb 22 11:56:42.452 [initandlisten] connection accepted from 165.225.128.186:45708 #239 (8 connections now open)
m31000| Fri Feb 22 11:56:42.545 [conn1] going to kill op: op: 4811.0
m31000| Fri Feb 22 11:56:42.545 [conn1] going to kill op: op: 4810.0
m31000| Fri Feb 22 11:56:42.549 [conn238] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.549 [conn238] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|43 } } cursorid:215042118724706 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:42.549 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.549 [conn238] end connection 165.225.128.186:45043 (7 connections now open)
m31000| Fri Feb 22 11:56:42.550 [initandlisten] connection accepted from 165.225.128.186:50259 #240 (8 connections now open)
m31000| Fri Feb 22 11:56:42.554 [conn239] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:42.554 [conn239] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|43 } } cursorid:215045188645467 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:42.554 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:42.554 [conn239] end connection 165.225.128.186:45708 (7 connections now open)
m31000| Fri Feb 22 11:56:42.554 [initandlisten]
connection accepted from 165.225.128.186:45623 #241 (8 connections now open) m31000| Fri Feb 22 11:56:42.645 [conn1] going to kill op: op: 4846.0 m31000| Fri Feb 22 11:56:42.646 [conn1] going to kill op: op: 4848.0 m31000| Fri Feb 22 11:56:42.646 [conn241] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.646 [conn241] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|53 } } cursorid:215440792886920 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:42.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.646 [conn241] end connection 165.225.128.186:45623 (7 connections now open) m31000| Fri Feb 22 11:56:42.646 [initandlisten] connection accepted from 165.225.128.186:52548 #242 (8 connections now open) m31000| Fri Feb 22 11:56:42.652 [conn240] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.652 [conn240] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|53 } } cursorid:215435976000432 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:42.652 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.652 [conn240] end connection 165.225.128.186:50259 (7 connections now open) m31000| Fri Feb 22 11:56:42.652 [initandlisten] connection accepted from 165.225.128.186:52021 #243 (8 connections now open) m31000| Fri Feb 22 11:56:42.746 [conn1] going to kill op: op: 4888.0 m31000| Fri Feb 22 11:56:42.747 [conn1] going to kill op: op: 4884.0 m31000| Fri Feb 22 11:56:42.747 [conn1] going to kill op: op: 4887.0 m31000| Fri Feb 22 11:56:42.748 [conn242] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.748 [conn242] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|62 } } 
cursorid:215832141455035 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:90 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:42.748 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.748 [conn242] end connection 165.225.128.186:52548 (7 connections now open) m31000| Fri Feb 22 11:56:42.749 [initandlisten] connection accepted from 165.225.128.186:54221 #244 (8 connections now open) m31000| Fri Feb 22 11:56:42.754 [conn243] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.754 [conn243] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|63 } } cursorid:215835799318973 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:42.754 [conn243] ClientCursor::find(): cursor not found in map '215835799318973' (ok after a drop) m31001| Fri Feb 22 11:56:42.754 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.755 [conn243] end connection 165.225.128.186:52021 (7 connections now open) m31000| Fri Feb 22 11:56:42.755 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.755 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|72 } } cursorid:216178997026526 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:42.755 [initandlisten] connection accepted from 165.225.128.186:57716 #245 (8 connections now open) m31001| Fri Feb 22 11:56:42.756 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.848 [conn1] going to kill op: op: 4927.0 m31000| Fri Feb 22 11:56:42.848 [conn1] going to kill op: op: 4925.0 m31000| Fri Feb 22 11:56:42.851 [conn244] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.851 
[conn244] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|72 } } cursorid:216226016656819 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:42.851 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.851 [conn244] end connection 165.225.128.186:54221 (7 connections now open) m31000| Fri Feb 22 11:56:42.851 [initandlisten] connection accepted from 165.225.128.186:64825 #246 (8 connections now open) m31000| Fri Feb 22 11:56:42.857 [conn245] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.857 [conn245] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|73 } } cursorid:216230467245278 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:42.857 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.857 [conn245] end connection 165.225.128.186:57716 (7 connections now open) m31000| Fri Feb 22 11:56:42.858 [initandlisten] connection accepted from 165.225.128.186:37805 #247 (8 connections now open) m31000| Fri Feb 22 11:56:42.948 [conn1] going to kill op: op: 4962.0 m31000| Fri Feb 22 11:56:42.949 [conn1] going to kill op: op: 4964.0 m31000| Fri Feb 22 11:56:42.950 [conn247] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.950 [conn247] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|83 } } cursorid:216625656523068 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:42.950 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.950 [conn247] end connection 165.225.128.186:37805 (7 connections now open) m31000| Fri Feb 22 11:56:42.950 [initandlisten] connection 
accepted from 165.225.128.186:39281 #248 (8 connections now open) m31000| Fri Feb 22 11:56:42.954 [conn246] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:42.954 [conn246] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|82 } } cursorid:216622455504022 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:42.954 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:42.954 [conn246] end connection 165.225.128.186:64825 (7 connections now open) m31000| Fri Feb 22 11:56:42.954 [initandlisten] connection accepted from 165.225.128.186:58250 #249 (8 connections now open) m31000| Fri Feb 22 11:56:43.049 [conn1] going to kill op: op: 5000.0 m31000| Fri Feb 22 11:56:43.050 [conn1] going to kill op: op: 5002.0 m31000| Fri Feb 22 11:56:43.053 [conn248] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.053 [conn248] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|92 } } cursorid:217017610355671 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.053 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.053 [conn248] end connection 165.225.128.186:39281 (7 connections now open) m31000| Fri Feb 22 11:56:43.053 [initandlisten] connection accepted from 165.225.128.186:56936 #250 (8 connections now open) m31000| Fri Feb 22 11:56:43.056 [conn249] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.056 [conn249] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534202000|93 } } cursorid:217021795449348 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:43.056 [conn249] ClientCursor::find(): cursor not 
found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:43.056 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.056 [conn249] end connection 165.225.128.186:58250 (7 connections now open) m31000| Fri Feb 22 11:56:43.057 [initandlisten] connection accepted from 165.225.128.186:40120 #251 (8 connections now open) m31000| Fri Feb 22 11:56:43.150 [conn1] going to kill op: op: 5041.0 m31000| Fri Feb 22 11:56:43.150 [conn1] going to kill op: op: 5042.0 m31000| Fri Feb 22 11:56:43.151 [conn1] going to kill op: op: 5044.0 m31000| Fri Feb 22 11:56:43.154 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.154 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|14 } } cursorid:217759607972888 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.155 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.155 [conn250] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.156 [conn250] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|4 } } cursorid:217412969166099 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.156 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.156 [conn250] end connection 165.225.128.186:56936 (7 connections now open) m31000| Fri Feb 22 11:56:43.156 [initandlisten] connection accepted from 165.225.128.186:39663 #252 (8 connections now open) m31000| Fri Feb 22 11:56:43.159 [conn251] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.159 [conn251] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|5 } } cursorid:217416769356453 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.159 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.159 [conn251] end connection 165.225.128.186:40120 (7 connections now open) m31000| Fri Feb 22 11:56:43.159 [initandlisten] connection accepted from 165.225.128.186:54756 #253 (8 connections now open) m31000| Fri Feb 22 11:56:43.251 [conn1] going to kill op: op: 5083.0 m31000| Fri Feb 22 11:56:43.252 [conn1] going to kill op: op: 5082.0 m31000| Fri Feb 22 11:56:43.258 [conn252] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.258 [conn252] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|14 } } cursorid:217808048137673 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.258 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.258 [conn252] end connection 165.225.128.186:39663 (7 connections now open) m31000| Fri Feb 22 11:56:43.259 [initandlisten] connection accepted from 165.225.128.186:59711 #254 (8 connections now open) m31000| Fri Feb 22 11:56:43.261 [conn253] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.261 [conn253] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|15 } } cursorid:217811368423076 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.261 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.261 [conn253] end connection 165.225.128.186:54756 (7 connections now open) m31000| Fri Feb 22 11:56:43.261 [initandlisten] connection accepted from 165.225.128.186:38749 #255 (8 connections now open) m31000| Fri Feb 22 11:56:43.352 [conn1] going to kill op: op: 5118.0 m31000| Fri Feb 22 11:56:43.353 [conn1] 
going to kill op: op: 5120.0 m31000| Fri Feb 22 11:56:43.354 [conn255] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.354 [conn255] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|25 } } cursorid:218206287222734 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.354 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.354 [conn255] end connection 165.225.128.186:38749 (7 connections now open) m31000| Fri Feb 22 11:56:43.354 [initandlisten] connection accepted from 165.225.128.186:33185 #256 (8 connections now open) m31000| Fri Feb 22 11:56:43.361 [conn254] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.361 [conn254] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|25 } } cursorid:218203502431047 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.361 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.361 [conn254] end connection 165.225.128.186:59711 (7 connections now open) m31000| Fri Feb 22 11:56:43.362 [initandlisten] connection accepted from 165.225.128.186:39424 #257 (8 connections now open) m31000| Fri Feb 22 11:56:43.453 [conn1] going to kill op: op: 5156.0 m31000| Fri Feb 22 11:56:43.453 [conn1] going to kill op: op: 5155.0 m31000| Fri Feb 22 11:56:43.454 [conn257] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.454 [conn257] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|35 } } cursorid:218601174087634 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:43.454 [conn257] ClientCursor::find(): cursor not found in map '218601174087634' (ok 
after a drop) m31001| Fri Feb 22 11:56:43.455 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.455 [conn257] end connection 165.225.128.186:39424 (7 connections now open) m31000| Fri Feb 22 11:56:43.455 [initandlisten] connection accepted from 165.225.128.186:59673 #258 (8 connections now open) m31000| Fri Feb 22 11:56:43.457 [conn256] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.457 [conn256] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|34 } } cursorid:218598572892295 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.457 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.457 [conn256] end connection 165.225.128.186:33185 (7 connections now open) m31000| Fri Feb 22 11:56:43.457 [initandlisten] connection accepted from 165.225.128.186:61958 #259 (8 connections now open) m31000| Fri Feb 22 11:56:43.554 [conn1] going to kill op: op: 5194.0 m31000| Fri Feb 22 11:56:43.554 [conn1] going to kill op: op: 5193.0 m31000| Fri Feb 22 11:56:43.557 [conn258] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.557 [conn258] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|44 } } cursorid:218991833011331 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.557 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.557 [conn258] end connection 165.225.128.186:59673 (7 connections now open) m31000| Fri Feb 22 11:56:43.558 [initandlisten] connection accepted from 165.225.128.186:51983 #260 (8 connections now open) m31000| Fri Feb 22 11:56:43.559 [conn259] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.559 [conn259] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|44 } } cursorid:218996230496363 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:42 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.559 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.559 [conn259] end connection 165.225.128.186:61958 (7 connections now open) m31000| Fri Feb 22 11:56:43.560 [initandlisten] connection accepted from 165.225.128.186:43956 #261 (8 connections now open) m31000| Fri Feb 22 11:56:43.655 [conn1] going to kill op: op: 5231.0 m31000| Fri Feb 22 11:56:43.655 [conn1] going to kill op: op: 5232.0 m31000| Fri Feb 22 11:56:43.660 [conn260] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.660 [conn260] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|54 } } cursorid:219431090548248 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.660 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.660 [conn260] end connection 165.225.128.186:51983 (7 connections now open) m31000| Fri Feb 22 11:56:43.660 [initandlisten] connection accepted from 165.225.128.186:59127 #262 (8 connections now open) m31000| Fri Feb 22 11:56:43.662 [conn261] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.662 [conn261] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|54 } } cursorid:219434834327066 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.662 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.662 [conn261] end connection 165.225.128.186:43956 (7 connections now open) m31000| Fri Feb 22 11:56:43.662 [initandlisten] connection accepted from 
165.225.128.186:62638 #263 (8 connections now open) m31000| Fri Feb 22 11:56:43.756 [conn1] going to kill op: op: 5270.0 m31000| Fri Feb 22 11:56:43.757 [conn1] going to kill op: op: 5269.0 m31000| Fri Feb 22 11:56:43.762 [conn262] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.762 [conn262] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|64 } } cursorid:219868670467389 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.762 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.763 [conn262] end connection 165.225.128.186:59127 (7 connections now open) m31000| Fri Feb 22 11:56:43.763 [initandlisten] connection accepted from 165.225.128.186:42560 #264 (8 connections now open) m31000| Fri Feb 22 11:56:43.764 [conn263] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.764 [conn263] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|64 } } cursorid:219873088212660 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:43.764 [conn263] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:43.764 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.764 [conn263] end connection 165.225.128.186:62638 (7 connections now open) m31000| Fri Feb 22 11:56:43.765 [initandlisten] connection accepted from 165.225.128.186:45277 #265 (8 connections now open) m31000| Fri Feb 22 11:56:43.857 [conn1] going to kill op: op: 5319.0 m31000| Fri Feb 22 11:56:43.857 [conn1] going to kill op: op: 5321.0 m31000| Fri Feb 22 11:56:43.858 [conn1] going to kill op: op: 5318.0 m31000| Fri Feb 22 11:56:43.865 [conn264] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.865 
[conn264] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|74 } } cursorid:220307852921064 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.865 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.865 [conn264] end connection 165.225.128.186:42560 (7 connections now open) m31000| Fri Feb 22 11:56:43.866 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.866 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|74 } } cursorid:220260482190811 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:43.866 [initandlisten] connection accepted from 165.225.128.186:64336 #266 (8 connections now open) m31001| Fri Feb 22 11:56:43.867 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.867 [conn265] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.867 [conn265] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|74 } } cursorid:220311249484128 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.867 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.867 [conn265] end connection 165.225.128.186:45277 (7 connections now open) m31000| Fri Feb 22 11:56:43.867 [initandlisten] connection accepted from 165.225.128.186:46954 #267 (8 connections now open) m31000| Fri Feb 22 11:56:43.958 [conn1] going to kill op: op: 5360.0 m31000| Fri Feb 22 11:56:43.958 [conn1] going to kill op: op: 5358.0 m31000| Fri Feb 22 11:56:43.959 [conn267] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.959 [conn267] getmore local.oplog.rs query: { ts: { 
$gte: Timestamp 1361534203000|85 } } cursorid:220749888005767 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:43.959 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.959 [conn267] end connection 165.225.128.186:46954 (7 connections now open) m31000| Fri Feb 22 11:56:43.960 [initandlisten] connection accepted from 165.225.128.186:63350 #268 (8 connections now open) m31000| Fri Feb 22 11:56:43.968 [conn266] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:43.968 [conn266] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|84 } } cursorid:220745776802000 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:43.968 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:43.968 [conn266] end connection 165.225.128.186:64336 (7 connections now open) m31000| Fri Feb 22 11:56:43.968 [initandlisten] connection accepted from 165.225.128.186:62844 #269 (8 connections now open) m31000| Fri Feb 22 11:56:44.059 [conn1] going to kill op: op: 5396.0 m31000| Fri Feb 22 11:56:44.059 [conn1] going to kill op: op: 5397.0 m31000| Fri Feb 22 11:56:44.061 [conn269] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:44.061 [conn269] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|95 } } cursorid:221144083101200 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:44.061 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:44.061 [conn269] end connection 165.225.128.186:62844 (7 connections now open) m31000| Fri Feb 22 11:56:44.061 [initandlisten] connection accepted from 165.225.128.186:55828 #270 (8 
connections now open) m31000| Fri Feb 22 11:56:44.062 [conn268] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:44.062 [conn268] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534203000|94 } } cursorid:221140461442857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:44.062 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:44.062 [conn268] end connection 165.225.128.186:63350 (7 connections now open) m31000| Fri Feb 22 11:56:44.063 [initandlisten] connection accepted from 165.225.128.186:56782 #271 (8 connections now open) m31000| Fri Feb 22 11:56:44.160 [conn1] going to kill op: op: 5439.0 m31000| Fri Feb 22 11:56:44.160 [conn1] going to kill op: op: 5436.0 m31000| Fri Feb 22 11:56:44.160 [conn1] going to kill op: op: 5437.0 m31000| Fri Feb 22 11:56:44.163 [conn270] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:44.164 [conn270] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|5 } } cursorid:221534855960375 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:44.164 [conn270] ClientCursor::find(): cursor not found in map '221534855960375' (ok after a drop) m31001| Fri Feb 22 11:56:44.164 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:44.164 [conn270] end connection 165.225.128.186:55828 (7 connections now open) m31000| Fri Feb 22 11:56:44.164 [initandlisten] connection accepted from 165.225.128.186:54098 #272 (8 connections now open) m31000| Fri Feb 22 11:56:44.164 [conn271] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:44.164 [conn271] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|5 } } cursorid:221538974408527 ntoreturn:0 keyUpdates:0 exception: operation 
was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.165 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.165 [conn271] end connection 165.225.128.186:56782 (7 connections now open)
m31000| Fri Feb 22 11:56:44.165 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.165 [initandlisten] connection accepted from 165.225.128.186:55988 #273 (8 connections now open)
m31000| Fri Feb 22 11:56:44.165 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|15 } } cursorid:221926914805255 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.166 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.261 [conn1] going to kill op: op: 5479.0
m31000| Fri Feb 22 11:56:44.261 [conn1] going to kill op: op: 5477.0
m31000| Fri Feb 22 11:56:44.266 [conn272] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.266 [conn272] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|15 } } cursorid:221974628527315 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.266 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.266 [conn272] end connection 165.225.128.186:54098 (7 connections now open)
m31000| Fri Feb 22 11:56:44.267 [initandlisten] connection accepted from 165.225.128.186:45130 #274 (8 connections now open)
m31000| Fri Feb 22 11:56:44.267 [conn273] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.267 [conn273] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|15 } } cursorid:221978136820929 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.267 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.267 [conn273] end connection 165.225.128.186:55988 (7 connections now open)
m31000| Fri Feb 22 11:56:44.268 [initandlisten] connection accepted from 165.225.128.186:40870 #275 (8 connections now open)
m31000| Fri Feb 22 11:56:44.362 [conn1] going to kill op: op: 5517.0
m31000| Fri Feb 22 11:56:44.362 [conn1] going to kill op: op: 5516.0
m31000| Fri Feb 22 11:56:44.369 [conn274] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.369 [conn274] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|25 } } cursorid:222412337607606 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.369 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.369 [conn274] end connection 165.225.128.186:45130 (7 connections now open)
m31000| Fri Feb 22 11:56:44.369 [initandlisten] connection accepted from 165.225.128.186:63913 #276 (8 connections now open)
m31000| Fri Feb 22 11:56:44.369 [conn275] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.369 [conn275] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|25 } } cursorid:222417012846271 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.370 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.370 [conn275] end connection 165.225.128.186:40870 (7 connections now open)
m31000| Fri Feb 22 11:56:44.370 [initandlisten] connection accepted from 165.225.128.186:62224 #277 (8 connections now open)
m31000| Fri Feb 22 11:56:44.463 [conn1] going to kill op: op: 5555.0
m31000| Fri Feb 22 11:56:44.463 [conn1] going to kill op: op: 5554.0
m31000| Fri Feb 22 11:56:44.471 [conn276] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.471 [conn276] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|35 } } cursorid:222849669763961 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:44.472 [conn276] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:44.472 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.472 [conn276] end connection 165.225.128.186:63913 (7 connections now open)
m31000| Fri Feb 22 11:56:44.472 [conn277] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.472 [conn277] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|35 } } cursorid:222854553617534 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.472 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.472 [conn277] end connection 165.225.128.186:62224 (6 connections now open)
m31000| Fri Feb 22 11:56:44.472 [initandlisten] connection accepted from 165.225.128.186:33232 #278 (7 connections now open)
m31000| Fri Feb 22 11:56:44.472 [initandlisten] connection accepted from 165.225.128.186:37154 #279 (8 connections now open)
m31000| Fri Feb 22 11:56:44.563 [conn1] going to kill op: op: 5589.0
m31000| Fri Feb 22 11:56:44.564 [conn1] going to kill op: op: 5590.0
m31000| Fri Feb 22 11:56:44.564 [conn278] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.564 [conn278] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|45 } } cursorid:223291059756063 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:44.564 [conn279] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.564 [conn279] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|45 } } cursorid:223291095159847 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.564 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:44.565 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.565 [conn278] end connection 165.225.128.186:33232 (7 connections now open)
m31000| Fri Feb 22 11:56:44.565 [conn279] end connection 165.225.128.186:37154 (7 connections now open)
m31000| Fri Feb 22 11:56:44.565 [initandlisten] connection accepted from 165.225.128.186:45602 #280 (7 connections now open)
m31000| Fri Feb 22 11:56:44.565 [initandlisten] connection accepted from 165.225.128.186:53725 #281 (8 connections now open)
m31000| Fri Feb 22 11:56:44.664 [conn1] going to kill op: op: 5631.0
m31000| Fri Feb 22 11:56:44.665 [conn1] going to kill op: op: 5630.0
m31000| Fri Feb 22 11:56:44.668 [conn280] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.668 [conn281] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.668 [conn280] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|54 } } cursorid:223688116371742 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:44.668 [conn281] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|54 } } cursorid:223687894740722 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.668 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:44.668 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.668 [conn280] end connection 165.225.128.186:45602 (7 connections now open)
m31000| Fri Feb 22 11:56:44.668 [conn281] end connection 165.225.128.186:53725 (7 connections now open)
m31000| Fri Feb 22 11:56:44.668 [initandlisten] connection accepted from 165.225.128.186:58935 #282 (7 connections now open)
m31000| Fri Feb 22 11:56:44.669 [initandlisten] connection accepted from 165.225.128.186:42003 #283 (8 connections now open)
m31000| Fri Feb 22 11:56:44.765 [conn1] going to kill op: op: 5668.0
m31000| Fri Feb 22 11:56:44.765 [conn1] going to kill op: op: 5669.0
m31000| Fri Feb 22 11:56:44.771 [conn283] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.771 [conn283] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|65 } } cursorid:224125447827075 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.771 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.771 [conn282] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.771 [conn282] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|65 } } cursorid:224125398442164 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:44.771 [conn283] end connection 165.225.128.186:42003 (7 connections now open)
m31002| Fri Feb 22 11:56:44.771 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.771 [conn282] end connection 165.225.128.186:58935 (6 connections now open)
m31000| Fri Feb 22 11:56:44.772 [initandlisten] connection accepted from 165.225.128.186:34463 #284 (7 connections now open)
m31000| Fri Feb 22 11:56:44.772 [initandlisten] connection accepted from 165.225.128.186:56605 #285 (8 connections now open)
m31000| Fri Feb 22 11:56:44.866 [conn1] going to kill op: op: 5707.0
m31000| Fri Feb 22 11:56:44.866 [conn1] going to kill op: op: 5706.0
m31000| Fri Feb 22 11:56:44.874 [conn285] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.874 [conn285] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|75 } } cursorid:224562408724912 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:44.874 [conn285] ClientCursor::find(): cursor not found in map '224562408724912' (ok after a drop)
m31002| Fri Feb 22 11:56:44.874 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.874 [conn285] end connection 165.225.128.186:56605 (7 connections now open)
m31000| Fri Feb 22 11:56:44.874 [conn284] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.874 [conn284] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|75 } } cursorid:224563284133049 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.874 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.874 [initandlisten] connection accepted from 165.225.128.186:55227 #286 (8 connections now open)
m31000| Fri Feb 22 11:56:44.874 [conn284] end connection 165.225.128.186:34463 (7 connections now open)
m31000| Fri Feb 22 11:56:44.875 [initandlisten] connection accepted from 165.225.128.186:52425 #287 (8 connections now open)
m31000| Fri Feb 22 11:56:44.967 [conn1] going to kill op: op: 5755.0
m31000| Fri Feb 22 11:56:44.967 [conn1] going to kill op: op: 5753.0
m31000| Fri Feb 22 11:56:44.967 [conn1] going to kill op: op: 5756.0
m31000| Fri Feb 22 11:56:44.968 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.968 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|85 } } cursorid:224950595035843 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.969 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.976 [conn286] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.976 [conn286] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|85 } } cursorid:225001038147454 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:44.976 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.976 [conn286] end connection 165.225.128.186:55227 (7 connections now open)
m31000| Fri Feb 22 11:56:44.977 [conn287] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:44.977 [initandlisten] connection accepted from 165.225.128.186:57321 #288 (8 connections now open)
m31000| Fri Feb 22 11:56:44.977 [conn287] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|85 } } cursorid:225001835936214 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:44.977 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:44.977 [conn287] end connection 165.225.128.186:52425 (7 connections now open)
m31000| Fri Feb 22 11:56:44.977 [initandlisten] connection accepted from 165.225.128.186:64466 #289 (8 connections now open)
m31000| Fri Feb 22 11:56:45.068 [conn1] going to kill op: op: 5791.0
m31000| Fri Feb 22 11:56:45.068 [conn1] going to kill op: op: 5792.0
m31000| Fri Feb 22 11:56:45.069 [conn288] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.069 [conn288] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|95 } } cursorid:225434512249954 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.069 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.069 [conn288] end connection 165.225.128.186:57321 (7 connections now open)
m31000| Fri Feb 22 11:56:45.069 [conn289] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.069 [conn289] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534204000|95 } } cursorid:225440036553098 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:45.070 [initandlisten] connection accepted from 165.225.128.186:37700 #290 (8 connections now open)
m31001| Fri Feb 22 11:56:45.070 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.070 [conn289] end connection 165.225.128.186:64466 (7 connections now open)
m31000| Fri Feb 22 11:56:45.070 [initandlisten] connection accepted from 165.225.128.186:40139 #291 (8 connections now open)
m31000| Fri Feb 22 11:56:45.169 [conn1] going to kill op: op: 5831.0
m31000| Fri Feb 22 11:56:45.169 [conn1] going to kill op: op: 5832.0
m31000| Fri Feb 22 11:56:45.172 [conn290] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.172 [conn291] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.172 [conn290] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|6 } } cursorid:225835429651090 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:45.172 [conn291] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|6 } } cursorid:225833865892323 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:45.172 [conn290] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:45.172 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:45.172 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.173 [conn290] end connection 165.225.128.186:37700 (7 connections now open)
m31000| Fri Feb 22 11:56:45.173 [conn291] end connection 165.225.128.186:40139 (7 connections now open)
m31000| Fri Feb 22 11:56:45.173 [initandlisten] connection accepted from 165.225.128.186:37867 #292 (7 connections now open)
m31000| Fri Feb 22 11:56:45.173 [initandlisten] connection accepted from 165.225.128.186:39391 #293 (8 connections now open)
m31000| Fri Feb 22 11:56:45.270 [conn1] going to kill op: op: 5881.0
m31000| Fri Feb 22 11:56:45.270 [conn1] going to kill op: op: 5880.0
m31000| Fri Feb 22 11:56:45.270 [conn1] going to kill op: op: 5879.0
m31000| Fri Feb 22 11:56:45.275 [conn292] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.275 [conn292] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|16 } } cursorid:226271974921722 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.275 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.275 [conn292] end connection 165.225.128.186:37867 (7 connections now open)
m31000| Fri Feb 22 11:56:45.275 [initandlisten] connection accepted from 165.225.128.186:60695 #294 (8 connections now open)
m31000| Fri Feb 22 11:56:45.276 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.276 [conn293] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.276 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|16 } } cursorid:226220440218409 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:91 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:45.276 [conn293] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|16 } } cursorid:226273030810059 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.276 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:45.276 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.276 [conn293] end connection 165.225.128.186:39391 (7 connections now open)
m31000| Fri Feb 22 11:56:45.277 [initandlisten] connection accepted from 165.225.128.186:51515 #295 (8 connections now open)
m31000| Fri Feb 22 11:56:45.371 [conn1] going to kill op: op: 5919.0
m31000| Fri Feb 22 11:56:45.371 [conn1] going to kill op: op: 5921.0
m31000| Fri Feb 22 11:56:45.377 [conn294] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.377 [conn294] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|26 } } cursorid:226706024697864 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.377 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.377 [conn294] end connection 165.225.128.186:60695 (7 connections now open)
m31000| Fri Feb 22 11:56:45.378 [initandlisten] connection accepted from 165.225.128.186:49727 #296 (8 connections now open)
m31000| Fri Feb 22 11:56:45.379 [conn295] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.379 [conn295] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|26 } } cursorid:226710624139137 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.379 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.379 [conn295] end connection 165.225.128.186:51515 (7 connections now open)
m31000| Fri Feb 22 11:56:45.379 [initandlisten] connection accepted from 165.225.128.186:43708 #297 (8 connections now open)
m31000| Fri Feb 22 11:56:45.472 [conn1] going to kill op: op: 5958.0
m31000| Fri Feb 22 11:56:45.480 [conn296] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.480 [conn296] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|36 } } cursorid:227145302164706 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.480 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.480 [conn296] end connection 165.225.128.186:49727 (7 connections now open)
m31000| Fri Feb 22 11:56:45.481 [initandlisten] connection accepted from 165.225.128.186:35766 #298 (8 connections now open)
m31000| Fri Feb 22 11:56:45.572 [conn1] going to kill op: op: 5989.0
m31000| Fri Feb 22 11:56:45.573 [conn298] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.573 [conn298] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|47 } } cursorid:227581789052083 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:45.573 [conn1] going to kill op: op: 5990.0
m31000| Fri Feb 22 11:56:45.573 [conn298] ClientCursor::find(): cursor not found in map '227581789052083' (ok after a drop)
m31002| Fri Feb 22 11:56:45.573 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.573 [conn298] end connection 165.225.128.186:35766 (7 connections now open)
m31000| Fri Feb 22 11:56:45.573 [initandlisten] connection accepted from 165.225.128.186:35209 #299 (8 connections now open)
m31000| Fri Feb 22 11:56:45.573 [conn297] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.573 [conn297] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|37 } } cursorid:227148513475095 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.574 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.574 [conn297] end connection 165.225.128.186:43708 (7 connections now open)
m31000| Fri Feb 22 11:56:45.574 [initandlisten] connection accepted from 165.225.128.186:56776 #300 (8 connections now open)
m31000| Fri Feb 22 11:56:45.673 [conn1] going to kill op: op: 6027.0
m31000| Fri Feb 22 11:56:45.673 [conn1] going to kill op: op: 6028.0
m31000| Fri Feb 22 11:56:45.676 [conn299] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.676 [conn299] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|56 } } cursorid:227973927731507 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.676 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.676 [conn299] end connection 165.225.128.186:35209 (7 connections now open)
m31000| Fri Feb 22 11:56:45.676 [initandlisten] connection accepted from 165.225.128.186:54509 #301 (8 connections now open)
m31000| Fri Feb 22 11:56:45.676 [conn300] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.676 [conn300] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|56 } } cursorid:227978604999396 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.677 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.677 [conn300] end connection 165.225.128.186:56776 (7 connections now open)
m31000| Fri Feb 22 11:56:45.677 [initandlisten] connection accepted from 165.225.128.186:34404 #302 (8 connections now open)
m31000| Fri Feb 22 11:56:45.774 [conn1] going to kill op: op: 6065.0
m31000| Fri Feb 22 11:56:45.774 [conn1] going to kill op: op: 6066.0
m31000| Fri Feb 22 11:56:45.779 [conn301] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.779 [conn301] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|66 } } cursorid:228412413231576 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.779 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.779 [conn301] end connection 165.225.128.186:54509 (7 connections now open)
m31000| Fri Feb 22 11:56:45.779 [initandlisten] connection accepted from 165.225.128.186:42880 #303 (8 connections now open)
m31000| Fri Feb 22 11:56:45.779 [conn302] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.779 [conn302] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|66 } } cursorid:228415835536333 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.779 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.780 [conn302] end connection 165.225.128.186:34404 (7 connections now open)
m31000| Fri Feb 22 11:56:45.780 [initandlisten] connection accepted from 165.225.128.186:54066 #304 (8 connections now open)
m31001| Fri Feb 22 11:56:45.832 [conn5] end connection 165.225.128.186:39879 (2 connections now open)
m31001| Fri Feb 22 11:56:45.832 [initandlisten] connection accepted from 165.225.128.186:60247 #7 (3 connections now open)
m31000| Fri Feb 22 11:56:45.875 [conn1] going to kill op: op: 6104.0
m31000| Fri Feb 22 11:56:45.875 [conn1] going to kill op: op: 6103.0
m31000| Fri Feb 22 11:56:45.881 [conn304] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.881 [conn304] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|76 } } cursorid:228854629236632 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:45.881 [conn303] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.881 [conn303] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|76 } } cursorid:228849511785605 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:37 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.881 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.882 [conn304] end connection 165.225.128.186:54066 (7 connections now open)
m31000| Fri Feb 22 11:56:45.882 [conn303] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:45.882 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.882 [conn303] end connection 165.225.128.186:42880 (6 connections now open)
m31000| Fri Feb 22 11:56:45.882 [initandlisten] connection accepted from 165.225.128.186:43082 #305 (7 connections now open)
m31000| Fri Feb 22 11:56:45.882 [initandlisten] connection accepted from 165.225.128.186:47024 #306 (8 connections now open)
m31000| Fri Feb 22 11:56:45.976 [conn1] going to kill op: op: 6144.0
m31000| Fri Feb 22 11:56:45.976 [conn1] going to kill op: op: 6142.0
m31000| Fri Feb 22 11:56:45.976 [conn1] going to kill op: op: 6143.0
m31000| Fri Feb 22 11:56:45.984 [conn306] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.984 [conn306] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|86 } } cursorid:229291887780150 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:45.984 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.984 [conn306] end connection 165.225.128.186:47024 (7 connections now open)
m31000| Fri Feb 22 11:56:45.984 [conn305] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.985 [conn305] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|86 } } cursorid:229292856888720 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.985 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:45.985 [initandlisten] connection accepted from 165.225.128.186:58168 #307 (8 connections now open)
m31000| Fri Feb 22 11:56:45.985 [conn305] end connection 165.225.128.186:43082 (6 connections now open)
m31000| Fri Feb 22 11:56:45.985 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:45.985 [initandlisten] connection accepted from 165.225.128.186:48854 #308 (8 connections now open)
m31000| Fri Feb 22 11:56:45.985 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|95 } } cursorid:229635928057674 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:45.986 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.077 [conn1] going to kill op: op: 6180.0
m31000| Fri Feb 22 11:56:46.077 [conn1] going to kill op: op: 6181.0
m31000| Fri Feb 22 11:56:46.078 [conn308] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.078 [conn307] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.078 [conn308] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|96 } } cursorid:229730978974230 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:46.078 [conn307] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534205000|96 } } cursorid:229731303099027 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:46.078 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:46.078 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.078 [conn308] end connection 165.225.128.186:48854 (7 connections now open)
m31000| Fri Feb 22 11:56:46.078 [conn307] end connection 165.225.128.186:58168 (7 connections now open)
m31000| Fri Feb 22 11:56:46.078 [initandlisten] connection accepted from 165.225.128.186:47312 #309 (7 connections now open)
m31000| Fri Feb 22 11:56:46.078 [initandlisten] connection accepted from 165.225.128.186:40592 #310 (8 connections now open)
m31000| Fri Feb 22 11:56:46.178 [conn1] going to kill op: op: 6220.0
m31000| Fri Feb 22 11:56:46.178 [conn1] going to kill op: op: 6221.0
m31000| Fri Feb 22 11:56:46.180 [conn309] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.180 [conn309] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|6 } } cursorid:230124569104711 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:46.180 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.180 [conn309] end connection 165.225.128.186:47312 (7 connections now open)
m31000| Fri Feb 22 11:56:46.180 [initandlisten] connection accepted from 165.225.128.186:33114 #311 (8 connections now open)
m31000| Fri Feb 22 11:56:46.181 [conn310] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.181 [conn310] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|6 } } cursorid:230125935513248 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:46.181 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.181 [conn310] end connection 165.225.128.186:40592 (7 connections now open)
m31000| Fri Feb 22 11:56:46.182 [initandlisten] connection accepted from 165.225.128.186:54252 #312 (8 connections now open)
m31000| Fri Feb 22 11:56:46.279 [conn1] going to kill op: op: 6263.0
m31000| Fri Feb 22 11:56:46.279 [conn1] going to kill op: op: 6259.0
m31000| Fri Feb 22 11:56:46.279 [conn1] going to kill op: op: 6261.0
m31000| Fri Feb 22 11:56:46.283 [conn311] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.283 [conn311] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|16 } } cursorid:230558526492124 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:46.283 [conn311] ClientCursor::find(): cursor not found in map '230558526492124' (ok after a drop)
m31001| Fri Feb 22 11:56:46.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.284 [conn311] end connection 165.225.128.186:33114 (7 connections now open)
m31000| Fri Feb 22 11:56:46.284 [conn312] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.284 [conn312] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|16 } } cursorid:230563465419212 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:46.284 [initandlisten] connection accepted from 165.225.128.186:54560 #313 (8 connections now open)
m31002| Fri Feb 22 11:56:46.284 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.284 [conn312] end connection 165.225.128.186:54252 (7 connections now open)
m31000| Fri Feb 22 11:56:46.284 [initandlisten] connection accepted from 165.225.128.186:55588 #314 (8 connections now open)
m31000| Fri Feb 22 11:56:46.287 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.287 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|27 } } cursorid:230949525686858 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:46.287 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.380 [conn1] going to kill op: op: 6302.0
m31000| Fri Feb 22 11:56:46.380 [conn1] going to kill op: op: 6301.0
m31000| Fri Feb 22 11:56:46.386 [conn313] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.386 [conn313] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|26 } } cursorid:230996949619563 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:46.387 [conn314] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.387 [conn314] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|27 } } cursorid:231002098976595 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:46.387 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.387 [conn313] end connection 165.225.128.186:54560 (7 connections now open)
m31002| Fri Feb 22 11:56:46.387 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.387 [conn314] end connection 165.225.128.186:55588 (6 connections now open)
m31000| Fri Feb 22 11:56:46.387 [initandlisten] connection accepted from 165.225.128.186:47796 #315 (7 connections now open)
m31000| Fri Feb 22 11:56:46.387 [initandlisten] connection accepted from 165.225.128.186:43804 #316 (8 connections now open)
m31000| Fri Feb 22 11:56:46.481 [conn1] going to kill op: op: 6340.0
m31000| Fri Feb 22 11:56:46.481 [conn1] going to kill op: op: 6339.0
m31000| Fri Feb 22 11:56:46.489 [conn316] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.489 [conn316] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|37 } } cursorid:231440654211881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:46.489 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.489 [conn316] end connection 165.225.128.186:43804 (7 connections now open)
m31000| Fri Feb 22 11:56:46.490 [conn315] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.490 [initandlisten] connection accepted from 165.225.128.186:50232 #317 (8 connections now open)
m31000| Fri Feb 22 11:56:46.490 [conn315] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|37 } } cursorid:231438912541428 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:46.490 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.490 [conn315] end connection 165.225.128.186:47796 (7 connections now open)
m31000| Fri Feb 22 11:56:46.490 [initandlisten] connection accepted from 165.225.128.186:42813 #318 (8 connections now open)
m31000| Fri Feb 22 11:56:46.582 [conn1] going to kill op: op: 6374.0
m31000| Fri Feb 22 11:56:46.582 [conn317] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.582 [conn317] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|47 } } cursorid:231873013260836 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:46.582 [conn317] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31000| Fri Feb 22 11:56:46.582 [conn1] going to kill op: op: 6375.0
m31002| Fri Feb 22 11:56:46.582 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.582 [conn317] end connection 165.225.128.186:50232 (7 connections now open)
m31000| Fri Feb 22 11:56:46.582 [initandlisten] connection accepted from 165.225.128.186:62058 #319 (8 connections now open)
m31000| Fri Feb 22 11:56:46.582 [conn318] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.582 [conn318] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|47 } } cursorid:231878519690608 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:46.583 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.583 [conn318] end connection 165.225.128.186:42813 (7 connections now open)
m31000| Fri Feb 22 11:56:46.583 [initandlisten] connection accepted from 165.225.128.186:56358 #320 (8 connections now open)
m31000| Fri Feb 22 11:56:46.682 [conn1] going to kill op: op: 6412.0
m31000| Fri Feb 22 11:56:46.683 [conn1] going to kill op: op: 6413.0
m31000| Fri Feb 22 11:56:46.684 [conn319] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.684 [conn319] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|56 } } cursorid:232268603900540 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:46.684 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.684 [conn319] end connection 165.225.128.186:62058 (7 connections now open)
m31000| Fri Feb 22 11:56:46.685 [initandlisten] connection accepted from 165.225.128.186:33791 #321 (8 connections now open)
m31000| Fri Feb 22 11:56:46.686 [conn320] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.686 [conn320] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|56 } } cursorid:232272470372398 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:46.686 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.686 [conn320] end connection 165.225.128.186:56358 (7 connections now open)
m31000| Fri Feb 22 11:56:46.686 [initandlisten] connection accepted from 165.225.128.186:53355 #322 (8 connections now open)
m31000| Fri Feb 22 11:56:46.783 [conn1] going to kill op: op: 6450.0
m31000| Fri Feb 22 11:56:46.783 [conn1] going to kill op: op: 6451.0
m31000| Fri Feb 22 11:56:46.787 [conn321] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.788 [conn321] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|66 } } cursorid:232707418636456 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:110 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:46.788 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:46.788 [conn321] end connection 165.225.128.186:33791 (7 connections now open)
m31000| Fri Feb 22 11:56:46.788 [initandlisten] connection accepted from 165.225.128.186:33361 #323 (8 connections now open)
m31000| Fri Feb 22 11:56:46.788 [conn322] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:46.788 [conn322] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|66 } } cursorid:232710512292736 ntoreturn:0 keyUpdates:0
exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:46.789 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:46.789 [conn322] end connection 165.225.128.186:53355 (7 connections now open) m31000| Fri Feb 22 11:56:46.789 [initandlisten] connection accepted from 165.225.128.186:34690 #324 (8 connections now open) m31000| Fri Feb 22 11:56:46.884 [conn1] going to kill op: op: 6488.0 m31000| Fri Feb 22 11:56:46.884 [conn1] going to kill op: op: 6489.0 m31000| Fri Feb 22 11:56:46.890 [conn323] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:46.890 [conn323] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|76 } } cursorid:233143928834691 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:46.890 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:46.890 [conn323] end connection 165.225.128.186:33361 (7 connections now open) m31000| Fri Feb 22 11:56:46.891 [initandlisten] connection accepted from 165.225.128.186:55712 #325 (8 connections now open) m31000| Fri Feb 22 11:56:46.892 [conn324] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:46.892 [conn324] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|76 } } cursorid:233148717851945 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:46.892 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:46.892 [conn324] end connection 165.225.128.186:34690 (7 connections now open) m31000| Fri Feb 22 11:56:46.893 [initandlisten] connection accepted from 165.225.128.186:33530 #326 (8 connections now open) m31000| Fri Feb 22 11:56:46.985 [conn1] going to kill op: op: 6525.0 
m31000| Fri Feb 22 11:56:46.985 [conn1] going to kill op: op: 6526.0 m31000| Fri Feb 22 11:56:46.985 [conn326] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:46.985 [conn326] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|86 } } cursorid:233586268002697 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:46.986 [conn326] ClientCursor::find(): cursor not found in map '233586268002697' (ok after a drop) m31001| Fri Feb 22 11:56:46.986 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:46.986 [conn326] end connection 165.225.128.186:33530 (7 connections now open) m31000| Fri Feb 22 11:56:46.986 [initandlisten] connection accepted from 165.225.128.186:38918 #327 (8 connections now open) m31000| Fri Feb 22 11:56:46.993 [conn325] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:46.993 [conn325] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|86 } } cursorid:233582386769597 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:46.993 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:46.993 [conn325] end connection 165.225.128.186:55712 (7 connections now open) m31000| Fri Feb 22 11:56:46.993 [initandlisten] connection accepted from 165.225.128.186:33335 #328 (8 connections now open) m31000| Fri Feb 22 11:56:47.086 [conn1] going to kill op: op: 6574.0 m31000| Fri Feb 22 11:56:47.086 [conn1] going to kill op: op: 6572.0 m31000| Fri Feb 22 11:56:47.086 [conn1] going to kill op: op: 6573.0 m31000| Fri Feb 22 11:56:47.088 [conn328] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.088 [conn328] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|96 } } 
cursorid:233982850497248 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.088 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.088 [conn328] end connection 165.225.128.186:33335 (7 connections now open) m31000| Fri Feb 22 11:56:47.088 [conn327] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.088 [conn327] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|96 } } cursorid:233977585800874 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:47.089 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.089 [initandlisten] connection accepted from 165.225.128.186:54989 #329 (8 connections now open) m31000| Fri Feb 22 11:56:47.089 [conn327] end connection 165.225.128.186:38918 (7 connections now open) m31000| Fri Feb 22 11:56:47.089 [initandlisten] connection accepted from 165.225.128.186:55270 #330 (8 connections now open) m31000| Fri Feb 22 11:56:47.089 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.090 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534206000|96 } } cursorid:233978392059616 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:47.090 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.187 [conn1] going to kill op: op: 6614.0 m31000| Fri Feb 22 11:56:47.187 [conn1] going to kill op: op: 6615.0 m31000| Fri Feb 22 11:56:47.191 [conn329] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.191 [conn329] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|8 } } cursorid:234377920359742 ntoreturn:0 
keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.191 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.191 [conn329] end connection 165.225.128.186:54989 (7 connections now open) m31000| Fri Feb 22 11:56:47.191 [conn330] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.191 [conn330] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|8 } } cursorid:234377166273317 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.191 [initandlisten] connection accepted from 165.225.128.186:52973 #331 (8 connections now open) m31001| Fri Feb 22 11:56:47.191 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.192 [conn330] end connection 165.225.128.186:55270 (7 connections now open) m31000| Fri Feb 22 11:56:47.192 [initandlisten] connection accepted from 165.225.128.186:56481 #332 (8 connections now open) m31000| Fri Feb 22 11:56:47.288 [conn1] going to kill op: op: 6655.0 m31000| Fri Feb 22 11:56:47.288 [conn1] going to kill op: op: 6653.0 m31000| Fri Feb 22 11:56:47.288 [conn1] going to kill op: op: 6652.0 m31000| Fri Feb 22 11:56:47.294 [conn331] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.294 [conn331] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|18 } } cursorid:234811094478440 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.294 [conn331] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:47.294 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.294 [conn331] end connection 165.225.128.186:52973 (7 connections now open) 
m31000| Fri Feb 22 11:56:47.294 [conn332] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.294 [initandlisten] connection accepted from 165.225.128.186:43145 #333 (8 connections now open) m31000| Fri Feb 22 11:56:47.294 [conn332] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|18 } } cursorid:234815765210385 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:47.294 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.294 [conn332] end connection 165.225.128.186:56481 (7 connections now open) m31000| Fri Feb 22 11:56:47.295 [initandlisten] connection accepted from 165.225.128.186:49999 #334 (8 connections now open) m31000| Fri Feb 22 11:56:47.297 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.297 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|28 } } cursorid:235201418263201 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.297 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.389 [conn1] going to kill op: op: 6693.0 m31000| Fri Feb 22 11:56:47.389 [conn1] going to kill op: op: 6694.0 m31000| Fri Feb 22 11:56:47.396 [conn333] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.396 [conn333] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|28 } } cursorid:235248892895192 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.397 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.397 [conn333] end connection 165.225.128.186:43145 (7 connections now open) m31000| Fri Feb 22 11:56:47.397 [conn334] { 
$err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.397 [conn334] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|28 } } cursorid:235254509060975 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:47.397 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.397 [initandlisten] connection accepted from 165.225.128.186:53932 #335 (8 connections now open) m31000| Fri Feb 22 11:56:47.397 [conn334] end connection 165.225.128.186:49999 (6 connections now open) m31000| Fri Feb 22 11:56:47.397 [initandlisten] connection accepted from 165.225.128.186:37184 #336 (8 connections now open) m31000| Fri Feb 22 11:56:47.490 [conn1] going to kill op: op: 6729.0 m31000| Fri Feb 22 11:56:47.490 [conn336] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.490 [conn336] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|38 } } cursorid:235692687221806 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.490 [conn1] going to kill op: op: 6731.0 m31001| Fri Feb 22 11:56:47.490 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.490 [conn336] end connection 165.225.128.186:37184 (7 connections now open) m31000| Fri Feb 22 11:56:47.490 [initandlisten] connection accepted from 165.225.128.186:62645 #337 (8 connections now open) m31000| Fri Feb 22 11:56:47.499 [conn335] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.499 [conn335] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|38 } } cursorid:235691668914962 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.499 [rsBackgroundSync] repl: old 
cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.500 [conn335] end connection 165.225.128.186:53932 (7 connections now open) m31000| Fri Feb 22 11:56:47.500 [initandlisten] connection accepted from 165.225.128.186:56603 #338 (8 connections now open) m31000| Fri Feb 22 11:56:47.591 [conn1] going to kill op: op: 6767.0 m31000| Fri Feb 22 11:56:47.591 [conn1] going to kill op: op: 6766.0 m31000| Fri Feb 22 11:56:47.592 [conn338] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.592 [conn338] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|48 } } cursorid:236086224925440 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.592 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.592 [conn338] end connection 165.225.128.186:56603 (7 connections now open) m31000| Fri Feb 22 11:56:47.592 [initandlisten] connection accepted from 165.225.128.186:63265 #339 (8 connections now open) m31000| Fri Feb 22 11:56:47.593 [conn337] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.593 [conn337] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|47 } } cursorid:236082639913002 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.593 [conn337] ClientCursor::find(): cursor not found in map '236082639913002' (ok after a drop) m31001| Fri Feb 22 11:56:47.593 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.593 [conn337] end connection 165.225.128.186:62645 (7 connections now open) m31000| Fri Feb 22 11:56:47.593 [initandlisten] connection accepted from 165.225.128.186:37574 #340 (8 connections now open) m31000| Fri Feb 22 11:56:47.691 [conn1] going to kill op: op: 6807.0 m31000| Fri Feb 22 11:56:47.692 
[conn1] going to kill op: op: 6806.0 m31000| Fri Feb 22 11:56:47.695 [conn339] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.695 [conn339] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|57 } } cursorid:236478456151410 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.695 [conn340] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.695 [conn340] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|57 } } cursorid:236480955145960 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.695 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.695 [conn339] end connection 165.225.128.186:63265 (7 connections now open) m31001| Fri Feb 22 11:56:47.695 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.695 [conn340] end connection 165.225.128.186:37574 (6 connections now open) m31000| Fri Feb 22 11:56:47.696 [initandlisten] connection accepted from 165.225.128.186:38832 #341 (7 connections now open) m31000| Fri Feb 22 11:56:47.696 [initandlisten] connection accepted from 165.225.128.186:57199 #342 (8 connections now open) m31000| Fri Feb 22 11:56:47.792 [conn1] going to kill op: op: 6844.0 m31000| Fri Feb 22 11:56:47.793 [conn1] going to kill op: op: 6845.0 m31000| Fri Feb 22 11:56:47.798 [conn341] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.798 [conn341] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|68 } } cursorid:236920999088196 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.798 [conn342] { $err: "operation was interrupted", code: 11601 } m31000| Fri 
Feb 22 11:56:47.798 [conn342] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|68 } } cursorid:236919120137519 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:47.798 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.798 [conn341] end connection 165.225.128.186:38832 (7 connections now open) m31001| Fri Feb 22 11:56:47.798 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.798 [conn342] end connection 165.225.128.186:57199 (6 connections now open) m31000| Fri Feb 22 11:56:47.799 [initandlisten] connection accepted from 165.225.128.186:33377 #343 (7 connections now open) m31000| Fri Feb 22 11:56:47.799 [initandlisten] connection accepted from 165.225.128.186:40868 #344 (8 connections now open) m31000| Fri Feb 22 11:56:47.893 [conn1] going to kill op: op: 6883.0 m31000| Fri Feb 22 11:56:47.893 [conn1] going to kill op: op: 6882.0 m31000| Fri Feb 22 11:56:47.901 [conn344] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.901 [conn344] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|78 } } cursorid:237357887055301 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:47.901 [conn343] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:47.901 [conn343] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|78 } } cursorid:237358794784606 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:47.901 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:47.901 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:47.901 [conn344] end 
connection 165.225.128.186:40868 (7 connections now open) m31000| Fri Feb 22 11:56:47.901 [conn343] end connection 165.225.128.186:33377 (6 connections now open) m31000| Fri Feb 22 11:56:47.902 [initandlisten] connection accepted from 165.225.128.186:57756 #345 (7 connections now open) m31000| Fri Feb 22 11:56:47.902 [initandlisten] connection accepted from 165.225.128.186:40152 #346 (8 connections now open) m31000| Fri Feb 22 11:56:47.994 [conn1] going to kill op: op: 6917.0 m31000| Fri Feb 22 11:56:47.994 [conn1] going to kill op: op: 6918.0 m31000| Fri Feb 22 11:56:48.047 [conn5] end connection 165.225.128.186:64990 (7 connections now open) m31000| Fri Feb 22 11:56:48.048 [initandlisten] connection accepted from 165.225.128.186:51293 #347 (8 connections now open) m31000| Fri Feb 22 11:56:48.095 [conn1] going to kill op: op: 6952.0 m31000| Fri Feb 22 11:56:48.095 [conn1] going to kill op: op: 6949.0 m31000| Fri Feb 22 11:56:48.095 [conn1] going to kill op: op: 6950.0 m31000| Fri Feb 22 11:56:48.096 [conn345] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.096 [conn345] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|88 } } cursorid:237796386409150 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:48.096 [conn346] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.096 [conn346] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534207000|88 } } cursorid:237795638986883 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:48.096 [conn345] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:48.096 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:48.096 [rsBackgroundSync] repl: old cursor isDead, will initiate a new 
one m31000| Fri Feb 22 11:56:48.096 [conn345] end connection 165.225.128.186:57756 (7 connections now open) m31000| Fri Feb 22 11:56:48.096 [conn346] end connection 165.225.128.186:40152 (6 connections now open) m31000| Fri Feb 22 11:56:48.097 [initandlisten] connection accepted from 165.225.128.186:56749 #348 (7 connections now open) m31000| Fri Feb 22 11:56:48.097 [initandlisten] connection accepted from 165.225.128.186:57234 #349 (8 connections now open) m31000| Fri Feb 22 11:56:48.100 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.100 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|8 } } cursorid:238573530015229 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:48.100 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:48.196 [conn1] going to kill op: op: 6993.0 m31000| Fri Feb 22 11:56:48.196 [conn1] going to kill op: op: 6992.0 m31000| Fri Feb 22 11:56:48.199 [conn348] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.199 [conn349] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.199 [conn348] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|8 } } cursorid:238582762598837 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:89 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:48.199 [conn349] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|8 } } cursorid:238583149861791 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:48.199 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:48.199 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:48.199 [conn349] end 
connection 165.225.128.186:57234 (7 connections now open) m31000| Fri Feb 22 11:56:48.199 [conn348] end connection 165.225.128.186:56749 (7 connections now open) m31000| Fri Feb 22 11:56:48.200 [initandlisten] connection accepted from 165.225.128.186:52950 #350 (7 connections now open) m31000| Fri Feb 22 11:56:48.200 [initandlisten] connection accepted from 165.225.128.186:49617 #351 (8 connections now open) m31000| Fri Feb 22 11:56:48.297 [conn1] going to kill op: op: 7032.0 m31000| Fri Feb 22 11:56:48.297 [conn1] going to kill op: op: 7031.0 m31000| Fri Feb 22 11:56:48.302 [conn351] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.302 [conn351] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|18 } } cursorid:239019358057542 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:48.302 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:48.302 [conn351] end connection 165.225.128.186:49617 (7 connections now open) m31000| Fri Feb 22 11:56:48.302 [conn350] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.302 [conn350] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|18 } } cursorid:239020534152317 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:48.302 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:48.303 [conn350] end connection 165.225.128.186:52950 (6 connections now open) m31000| Fri Feb 22 11:56:48.303 [initandlisten] connection accepted from 165.225.128.186:45416 #352 (8 connections now open) m31000| Fri Feb 22 11:56:48.303 [initandlisten] connection accepted from 165.225.128.186:42772 #353 (8 connections now open) m31000| Fri Feb 22 11:56:48.398 [conn1] going to kill op: op: 7081.0 m31000| Fri 
Feb 22 11:56:48.398 [conn1] going to kill op: op: 7080.0 m31000| Fri Feb 22 11:56:48.399 [conn1] going to kill op: op: 7079.0 m31000| Fri Feb 22 11:56:48.405 [conn352] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.405 [conn352] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|28 } } cursorid:239457908706024 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:48.405 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:48.405 [conn352] end connection 165.225.128.186:45416 (7 connections now open) m31000| Fri Feb 22 11:56:48.405 [conn353] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.405 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:48.405 [conn353] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|28 } } cursorid:239458554213189 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:48.405 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|28 } } cursorid:239406612187815 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:48.405 [initandlisten] connection accepted from 165.225.128.186:41184 #354 (8 connections now open) m31000| Fri Feb 22 11:56:48.405 [conn353] ClientCursor::find(): cursor not found in map '239458554213189' (ok after a drop) m31001| Fri Feb 22 11:56:48.405 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:48.406 [conn353] end connection 165.225.128.186:42772 (7 connections now open) m31000| Fri Feb 22 11:56:48.406 [initandlisten] connection accepted from 165.225.128.186:42652 #355 (8 connections now open) m31002| Fri Feb 22 11:56:48.406 
[rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.499 [conn1] going to kill op: op: 7121.0
 m31000| Fri Feb 22 11:56:48.499 [conn1] going to kill op: op: 7120.0
 m31000| Fri Feb 22 11:56:48.507 [conn354] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.508 [conn354] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|38 } } cursorid:239895889663652 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:48.508 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.508 [conn354] end connection 165.225.128.186:41184 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.508 [conn355] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.508 [conn355] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|38 } } cursorid:239896320730108 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:48.508 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.508 [initandlisten] connection accepted from 165.225.128.186:48148 #356 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.508 [conn355] end connection 165.225.128.186:42652 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.508 [initandlisten] connection accepted from 165.225.128.186:56058 #357 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.600 [conn1] going to kill op: op: 7155.0
 m31000| Fri Feb 22 11:56:48.600 [conn1] going to kill op: op: 7156.0
 m31000| Fri Feb 22 11:56:48.600 [conn356] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.600 [conn356] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|49 } } cursorid:240334872182721 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:48.600 [conn357] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.600 [conn357] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|49 } } cursorid:240334480268114 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:48.600 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.601 [conn356] end connection 165.225.128.186:48148 (7 connections now open)
 m31001| Fri Feb 22 11:56:48.601 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.601 [conn357] end connection 165.225.128.186:56058 (6 connections now open)
 m31000| Fri Feb 22 11:56:48.601 [initandlisten] connection accepted from 165.225.128.186:46211 #358 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.601 [initandlisten] connection accepted from 165.225.128.186:62761 #359 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.701 [conn1] going to kill op: op: 7193.0
 m31000| Fri Feb 22 11:56:48.701 [conn1] going to kill op: op: 7194.0
 m31000| Fri Feb 22 11:56:48.703 [conn358] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.703 [conn358] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|58 } } cursorid:240729981256254 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:48.703 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.703 [conn358] end connection 165.225.128.186:46211 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.703 [initandlisten] connection accepted from 165.225.128.186:33930 #360 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.704 [conn359] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.704 [conn359] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|58 } } cursorid:240728863607179 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:94 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:48.704 [conn359] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:56:48.704 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.704 [conn359] end connection 165.225.128.186:62761 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.704 [initandlisten] connection accepted from 165.225.128.186:38874 #361 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.802 [conn1] going to kill op: op: 7231.0
 m31000| Fri Feb 22 11:56:48.802 [conn1] going to kill op: op: 7232.0
 m31000| Fri Feb 22 11:56:48.806 [conn360] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.806 [conn360] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|68 } } cursorid:241163454101984 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:48.806 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.806 [conn360] end connection 165.225.128.186:33930 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.807 [initandlisten] connection accepted from 165.225.128.186:39902 #362 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.807 [conn361] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.807 [conn361] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|68 } } cursorid:241167180779710 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:48.807 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.807 [conn361] end connection 165.225.128.186:38874 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.807 [initandlisten] connection accepted from 165.225.128.186:60850 #363 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.903 [conn1] going to kill op: op: 7269.0
 m31000| Fri Feb 22 11:56:48.903 [conn1] going to kill op: op: 7270.0
 m31000| Fri Feb 22 11:56:48.909 [conn362] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.909 [conn362] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|78 } } cursorid:241602090789857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:48.909 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.909 [conn362] end connection 165.225.128.186:39902 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.909 [initandlisten] connection accepted from 165.225.128.186:58839 #364 (8 connections now open)
 m31000| Fri Feb 22 11:56:48.910 [conn363] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:48.910 [conn363] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|78 } } cursorid:241605245615912 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:48.910 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:48.910 [conn363] end connection 165.225.128.186:60850 (7 connections now open)
 m31000| Fri Feb 22 11:56:48.910 [initandlisten] connection accepted from 165.225.128.186:36952 #365 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.004 [conn1] going to kill op: op: 7308.0
 m31000| Fri Feb 22 11:56:49.004 [conn1] going to kill op: op: 7307.0
 m31000| Fri Feb 22 11:56:49.012 [conn364] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.012 [conn364] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|88 } } cursorid:242038646009758 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.012 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.012 [conn364] end connection 165.225.128.186:58839 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.012 [conn365] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.012 [conn365] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|88 } } cursorid:242043675974345 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.013 [initandlisten] connection accepted from 165.225.128.186:44526 #366 (8 connections now open)
 m31001| Fri Feb 22 11:56:49.013 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.013 [conn365] end connection 165.225.128.186:36952 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.013 [initandlisten] connection accepted from 165.225.128.186:42989 #367 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.105 [conn1] going to kill op: op: 7348.0
 m31000| Fri Feb 22 11:56:49.105 [conn1] going to kill op: op: 7345.0
 m31000| Fri Feb 22 11:56:49.105 [conn1] going to kill op: op: 7346.0
 m31000| Fri Feb 22 11:56:49.105 [conn366] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.105 [conn367] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.106 [conn367] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|98 } } cursorid:242481275622403 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.106 [conn366] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534208000|98 } } cursorid:242481402875540 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.106 [conn366] ClientCursor::find(): cursor not found in map '242481402875540' (ok after a drop)
 m31001| Fri Feb 22 11:56:49.106 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:56:49.106 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.106 [conn367] end connection 165.225.128.186:42989 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.106 [conn366] end connection 165.225.128.186:44526 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.106 [initandlisten] connection accepted from 165.225.128.186:38469 #368 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.106 [initandlisten] connection accepted from 165.225.128.186:35786 #369 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.111 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.111 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|10 } } cursorid:242826076036227 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:49.111 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.206 [conn1] going to kill op: op: 7389.0
 m31000| Fri Feb 22 11:56:49.206 [conn1] going to kill op: op: 7388.0
 m31000| Fri Feb 22 11:56:49.209 [conn369] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.209 [conn369] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|10 } } cursorid:242877031158147 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.209 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.209 [conn368] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.209 [conn368] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|10 } } cursorid:242877283182319 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.209 [conn369] end connection 165.225.128.186:35786 (7 connections now open)
 m31001| Fri Feb 22 11:56:49.209 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.209 [conn368] end connection 165.225.128.186:38469 (6 connections now open)
 m31000| Fri Feb 22 11:56:49.209 [initandlisten] connection accepted from 165.225.128.186:45719 #370 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.210 [initandlisten] connection accepted from 165.225.128.186:38850 #371 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.307 [conn1] going to kill op: op: 7426.0
 m31000| Fri Feb 22 11:56:49.307 [conn1] going to kill op: op: 7427.0
 m31000| Fri Feb 22 11:56:49.312 [conn370] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.312 [conn370] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|20 } } cursorid:243314507100277 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.312 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.312 [conn370] end connection 165.225.128.186:45719 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.312 [initandlisten] connection accepted from 165.225.128.186:49140 #372 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.313 [conn371] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.313 [conn371] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|20 } } cursorid:243316335356284 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:49.313 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.313 [conn371] end connection 165.225.128.186:38850 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.313 [initandlisten] connection accepted from 165.225.128.186:55002 #373 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.408 [conn1] going to kill op: op: 7467.0
 m31000| Fri Feb 22 11:56:49.408 [conn1] going to kill op: op: 7465.0
 m31000| Fri Feb 22 11:56:49.408 [conn1] going to kill op: op: 7464.0
 m31000| Fri Feb 22 11:56:49.414 [conn372] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.414 [conn372] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|30 } } cursorid:243748846969123 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.415 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.415 [conn372] end connection 165.225.128.186:49140 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.415 [initandlisten] connection accepted from 165.225.128.186:35505 #374 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.416 [conn373] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.416 [conn373] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|30 } } cursorid:243754043373312 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.416 [conn373] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:56:49.416 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.416 [conn373] end connection 165.225.128.186:55002 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.416 [initandlisten] connection accepted from 165.225.128.186:49134 #375 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.417 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.417 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|40 } } cursorid:244140490161778 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.417 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.509 [conn1] going to kill op: op: 7503.0
 m31000| Fri Feb 22 11:56:49.509 [conn1] going to kill op: op: 7505.0
 m31000| Fri Feb 22 11:56:49.517 [conn374] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.518 [conn374] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|40 } } cursorid:244187966111116 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.518 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.518 [conn374] end connection 165.225.128.186:35505 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.518 [initandlisten] connection accepted from 165.225.128.186:43933 #376 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.609 [conn1] going to kill op: op: 7536.0
 m31000| Fri Feb 22 11:56:49.610 [conn1] going to kill op: op: 7537.0
 m31000| Fri Feb 22 11:56:49.610 [conn376] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.610 [conn376] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|50 } } cursorid:244624191447334 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.610 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.610 [conn376] end connection 165.225.128.186:43933 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.610 [conn375] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.610 [conn375] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|40 } } cursorid:244191424759833 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.610 [initandlisten] connection accepted from 165.225.128.186:60108 #377 (8 connections now open)
 m31001| Fri Feb 22 11:56:49.611 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.611 [conn375] end connection 165.225.128.186:49134 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.611 [initandlisten] connection accepted from 165.225.128.186:33708 #378 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.710 [conn1] going to kill op: op: 7577.0
 m31000| Fri Feb 22 11:56:49.710 [conn1] going to kill op: op: 7578.0
 m31000| Fri Feb 22 11:56:49.713 [conn377] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.713 [conn377] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|59 } } cursorid:245016390696531 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.713 [conn378] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.713 [conn378] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|59 } } cursorid:245019977834717 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.713 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.714 [conn377] end connection 165.225.128.186:60108 (7 connections now open)
 m31001| Fri Feb 22 11:56:49.713 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.714 [conn378] end connection 165.225.128.186:33708 (6 connections now open)
 m31000| Fri Feb 22 11:56:49.714 [initandlisten] connection accepted from 165.225.128.186:42609 #379 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.714 [initandlisten] connection accepted from 165.225.128.186:61239 #380 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.811 [conn1] going to kill op: op: 7616.0
 m31000| Fri Feb 22 11:56:49.811 [conn1] going to kill op: op: 7615.0
 m31000| Fri Feb 22 11:56:49.816 [conn380] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.816 [conn380] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|70 } } cursorid:245458266593993 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.816 [conn379] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.816 [conn379] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|70 } } cursorid:245457383053336 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.816 [conn379] ClientCursor::find(): cursor not found in map '245457383053336' (ok after a drop)
 m31001| Fri Feb 22 11:56:49.816 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:56:49.816 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.816 [conn380] end connection 165.225.128.186:61239 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.816 [conn379] end connection 165.225.128.186:42609 (6 connections now open)
 m31000| Fri Feb 22 11:56:49.817 [initandlisten] connection accepted from 165.225.128.186:40734 #381 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.817 [initandlisten] connection accepted from 165.225.128.186:64208 #382 (8 connections now open)
 m31000| Fri Feb 22 11:56:49.912 [conn1] going to kill op: op: 7654.0
 m31000| Fri Feb 22 11:56:49.912 [conn1] going to kill op: op: 7653.0
 m31000| Fri Feb 22 11:56:49.919 [conn381] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.919 [conn382] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:49.919 [conn381] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|80 } } cursorid:245896755178181 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:49.919 [conn382] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|80 } } cursorid:245895651118672 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:49.919 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:56:49.919 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:49.920 [conn382] end connection 165.225.128.186:64208 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.920 [conn381] end connection 165.225.128.186:40734 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.920 [initandlisten] connection accepted from 165.225.128.186:36234 #383 (7 connections now open)
 m31000| Fri Feb 22 11:56:49.920 [initandlisten] connection accepted from 165.225.128.186:57492 #384 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.013 [conn1] going to kill op: op: 7692.0
 m31000| Fri Feb 22 11:56:50.013 [conn1] going to kill op: op: 7691.0
 m31000| Fri Feb 22 11:56:50.022 [conn384] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.022 [conn384] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|90 } } cursorid:246335596750607 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.022 [conn383] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.022 [conn383] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534209000|90 } } cursorid:246335622113319 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.023 [conn384] getMore: cursorid not found local.oplog.rs 246335596750607
 m31002| Fri Feb 22 11:56:50.023 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:56:50.023 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.023 [conn384] end connection 165.225.128.186:57492 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.023 [conn383] end connection 165.225.128.186:36234 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.023 [initandlisten] connection accepted from 165.225.128.186:39247 #385 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.023 [initandlisten] connection accepted from 165.225.128.186:49025 #386 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.114 [conn1] going to kill op: op: 7730.0
 m31000| Fri Feb 22 11:56:50.114 [conn1] going to kill op: op: 7729.0
 m31000| Fri Feb 22 11:56:50.115 [conn386] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.115 [conn386] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|1 } } cursorid:246771802667188 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.115 [conn385] { $err: "operation was interrupted", code: 11601 }
 m31001| Fri Feb 22 11:56:50.115 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.115 [conn385] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|1 } } cursorid:246772166848028 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.115 [conn386] end connection 165.225.128.186:49025 (7 connections now open)
 m31002| Fri Feb 22 11:56:50.115 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.116 [conn385] end connection 165.225.128.186:39247 (6 connections now open)
 m31000| Fri Feb 22 11:56:50.116 [initandlisten] connection accepted from 165.225.128.186:36187 #387 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.116 [initandlisten] connection accepted from 165.225.128.186:41735 #388 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.215 [conn1] going to kill op: op: 7779.0
 m31000| Fri Feb 22 11:56:50.215 [conn1] going to kill op: op: 7777.0
 m31000| Fri Feb 22 11:56:50.215 [conn1] going to kill op: op: 7778.0
 m31000| Fri Feb 22 11:56:50.218 [conn387] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.218 [conn387] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|10 } } cursorid:247167850278340 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.218 [conn387] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31000| Fri Feb 22 11:56:50.218 [conn388] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.218 [conn388] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|10 } } cursorid:247168396247809 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.218 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.218 [conn387] end connection 165.225.128.186:36187 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.218 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31002| Fri Feb 22 11:56:50.218 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.218 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|10 } } cursorid:247116538104889 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.218 [conn388] end connection 165.225.128.186:41735 (6 connections now open)
 m31000| Fri Feb 22 11:56:50.219 [initandlisten] connection accepted from 165.225.128.186:62880 #389 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.219 [initandlisten] connection accepted from 165.225.128.186:50093 #390 (8 connections now open)
 m31001| Fri Feb 22 11:56:50.219 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.316 [conn1] going to kill op: op: 7818.0
 m31000| Fri Feb 22 11:56:50.316 [conn1] going to kill op: op: 7819.0
 m31000| Fri Feb 22 11:56:50.321 [conn389] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.321 [conn389] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|20 } } cursorid:247606856172330 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:108 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.321 [conn390] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.321 [conn390] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|20 } } cursorid:247606896693445 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.321 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.321 [conn389] end connection 165.225.128.186:62880 (7 connections now open)
 m31002| Fri Feb 22 11:56:50.321 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.322 [conn390] end connection 165.225.128.186:50093 (6 connections now open)
 m31000| Fri Feb 22 11:56:50.322 [initandlisten] connection accepted from 165.225.128.186:45731 #391 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.322 [initandlisten] connection accepted from 165.225.128.186:55769 #392 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.416 [conn1] going to kill op: op: 7858.0
 m31000| Fri Feb 22 11:56:50.417 [conn1] going to kill op: op: 7859.0
 m31000| Fri Feb 22 11:56:50.425 [conn392] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.425 [conn392] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|30 } } cursorid:248044425054382 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:50.425 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.425 [conn391] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.425 [conn392] end connection 165.225.128.186:55769 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.425 [conn391] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|30 } } cursorid:248044655506353 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.425 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.425 [conn391] end connection 165.225.128.186:45731 (6 connections now open)
 m31000| Fri Feb 22 11:56:50.425 [initandlisten] connection accepted from 165.225.128.186:47348 #393 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.425 [initandlisten] connection accepted from 165.225.128.186:62063 #394 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.517 [conn1] going to kill op: op: 7905.0
 m31000| Fri Feb 22 11:56:50.517 [conn1] going to kill op: op: 7904.0
 m31000| Fri Feb 22 11:56:50.518 [conn1] going to kill op: op: 7903.0
 m31000| Fri Feb 22 11:56:50.518 [conn393] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.518 [conn393] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|41 } } cursorid:248481774374076 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:50.518 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.518 [conn393] end connection 165.225.128.186:47348 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.519 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.519 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|41 } } cursorid:248429840326174 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:45 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.519 [initandlisten] connection accepted from 165.225.128.186:45019 #395 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.519 [conn12] ClientCursor::find(): cursor not found in map '248429840326174' (ok after a drop)
 m31002| Fri Feb 22 11:56:50.519 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.618 [conn1] going to kill op: op: 7940.0
 m31000| Fri Feb 22 11:56:50.618 [conn1] going to kill op: op: 7939.0
 m31000| Fri Feb 22 11:56:50.619 [conn394] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.619 [conn394] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|41 } } cursorid:248482128681632 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.619 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.620 [conn394] end connection 165.225.128.186:62063 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.620 [initandlisten] connection accepted from 165.225.128.186:42741 #396 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.621 [conn395] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.621 [conn395] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|50 } } cursorid:248872403250468 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:50.621 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.621 [conn395] end connection 165.225.128.186:45019 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.621 [initandlisten] connection accepted from 165.225.128.186:39761 #397 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.719 [conn1] going to kill op: op: 7977.0
 m31000| Fri Feb 22 11:56:50.719 [conn1] going to kill op: op: 7978.0
 m31000| Fri Feb 22 11:56:50.722 [conn396] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.722 [conn396] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|60 } } cursorid:249306605359805 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.722 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.722 [conn396] end connection 165.225.128.186:42741 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.723 [conn397] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.723 [conn397] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|60 } } cursorid:249310884194807 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:50.723 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.723 [conn397] end connection 165.225.128.186:39761 (6 connections now open)
 m31000| Fri Feb 22 11:56:50.724 [initandlisten] connection accepted from 165.225.128.186:48428 #398 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.725 [initandlisten] connection accepted from 165.225.128.186:44769 #399 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.820 [conn1] going to kill op: op: 8015.0
 m31000| Fri Feb 22 11:56:50.820 [conn1] going to kill op: op: 8016.0
 m31000| Fri Feb 22 11:56:50.826 [conn398] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.826 [conn398] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|70 } } cursorid:249744341450259 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:56:50.826 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.826 [conn398] end connection 165.225.128.186:48428 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.826 [initandlisten] connection accepted from 165.225.128.186:45723 #400 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.827 [conn399] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.828 [conn399] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|70 } } cursorid:249749529473235 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.828 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.828 [conn399] end connection 165.225.128.186:44769 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.828 [initandlisten] connection accepted from 165.225.128.186:45792 #401 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.921 [conn1] going to kill op: op: 8054.0
 m31000| Fri Feb 22 11:56:50.921 [conn1] going to kill op: op: 8053.0
 m31000| Fri Feb 22 11:56:50.928 [conn400] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.928 [conn400] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|80 } } cursorid:250183094779813 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:56:50.928 [conn400] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:56:50.928 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.928 [conn400] end connection 165.225.128.186:45723 (7 connections now open)
 m31000| Fri Feb 22 11:56:50.929 [initandlisten] connection accepted from 165.225.128.186:50086 #402 (8 connections now open)
 m31000| Fri Feb 22 11:56:50.930 [conn401] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:50.930 [conn401] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|80 } } cursorid:250186427522676 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:50.930 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:50.930 [conn401] end connection 165.225.128.186:45792 (7 connections now open)
 m31000| Fri Feb 22 11:56:51.022 [conn1] going to kill op: op: 8090.0
 m31000| Fri Feb 22 11:56:51.022 [conn1] going to kill op: op: 8091.0
 m31000| Fri Feb 22 11:56:51.022 [conn403] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:56:51.022 [conn403] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|90 } } cursorid:250624890699283 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:56:51.023 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:56:51.023 [conn403] 
end connection 165.225.128.186:35536 (7 connections now open) m31000| Fri Feb 22 11:56:51.023 [initandlisten] connection accepted from 165.225.128.186:57655 #404 (8 connections now open) m31000| Fri Feb 22 11:56:51.031 [conn402] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.031 [conn402] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534210000|90 } } cursorid:250621511369773 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.031 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.031 [conn402] end connection 165.225.128.186:50086 (7 connections now open) m31000| Fri Feb 22 11:56:51.032 [initandlisten] connection accepted from 165.225.128.186:38624 #405 (8 connections now open) m31000| Fri Feb 22 11:56:51.123 [conn1] going to kill op: op: 8131.0 m31000| Fri Feb 22 11:56:51.123 [conn1] going to kill op: op: 8130.0 m31000| Fri Feb 22 11:56:51.124 [conn405] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.124 [conn405] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|2 } } cursorid:251019429439627 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.124 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.124 [conn405] end connection 165.225.128.186:38624 (7 connections now open) m31000| Fri Feb 22 11:56:51.124 [initandlisten] connection accepted from 165.225.128.186:36432 #406 (8 connections now open) m31000| Fri Feb 22 11:56:51.125 [conn404] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.125 [conn404] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|2 } } cursorid:251016887311085 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 
locks(micros) r:33 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:51.125 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.125 [conn404] end connection 165.225.128.186:57655 (7 connections now open) m31000| Fri Feb 22 11:56:51.125 [initandlisten] connection accepted from 165.225.128.186:49303 #407 (8 connections now open) m31000| Fri Feb 22 11:56:51.224 [conn1] going to kill op: op: 8171.0 m31000| Fri Feb 22 11:56:51.224 [conn1] going to kill op: op: 8169.0 m31000| Fri Feb 22 11:56:51.224 [conn1] going to kill op: op: 8168.0 m31000| Fri Feb 22 11:56:51.226 [conn406] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.226 [conn406] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|12 } } cursorid:251411718333385 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.226 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.226 [conn406] end connection 165.225.128.186:36432 (7 connections now open) m31000| Fri Feb 22 11:56:51.227 [initandlisten] connection accepted from 165.225.128.186:58086 #408 (8 connections now open) m31000| Fri Feb 22 11:56:51.227 [conn407] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.227 [conn407] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|12 } } cursorid:251415487303583 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:51.227 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.227 [conn407] end connection 165.225.128.186:49303 (7 connections now open) m31000| Fri Feb 22 11:56:51.228 [initandlisten] connection accepted from 165.225.128.186:40539 #409 (8 connections now open) m31000| Fri Feb 22 11:56:51.230 [conn8] { $err: 
"operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.230 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|22 } } cursorid:251801645792900 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:51.230 [conn8] ClientCursor::find(): cursor not found in map '251801645792900' (ok after a drop) m31001| Fri Feb 22 11:56:51.230 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.325 [conn1] going to kill op: op: 8209.0 m31000| Fri Feb 22 11:56:51.325 [conn1] going to kill op: op: 8210.0 m31000| Fri Feb 22 11:56:51.329 [conn408] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.329 [conn408] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|22 } } cursorid:251848543171688 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.329 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.329 [conn408] end connection 165.225.128.186:58086 (7 connections now open) m31000| Fri Feb 22 11:56:51.329 [initandlisten] connection accepted from 165.225.128.186:49055 #410 (8 connections now open) m31000| Fri Feb 22 11:56:51.329 [conn409] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.330 [conn409] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|22 } } cursorid:251853435695333 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:51.330 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.330 [conn409] end connection 165.225.128.186:40539 (7 connections now open) m31000| Fri Feb 22 11:56:51.330 [initandlisten] connection accepted from 165.225.128.186:57645 
#411 (8 connections now open) m31000| Fri Feb 22 11:56:51.426 [conn1] going to kill op: op: 8248.0 m31000| Fri Feb 22 11:56:51.426 [conn1] going to kill op: op: 8247.0 m31000| Fri Feb 22 11:56:51.432 [conn410] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.432 [conn410] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|32 } } cursorid:252287728454935 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:51.432 [conn411] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.432 [conn411] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|32 } } cursorid:252291335605266 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.432 [conn410] end connection 165.225.128.186:49055 (7 connections now open) m31001| Fri Feb 22 11:56:51.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.432 [conn411] end connection 165.225.128.186:57645 (6 connections now open) m31000| Fri Feb 22 11:56:51.432 [initandlisten] connection accepted from 165.225.128.186:40683 #412 (7 connections now open) m31000| Fri Feb 22 11:56:51.432 [initandlisten] connection accepted from 165.225.128.186:62727 #413 (8 connections now open) m31000| Fri Feb 22 11:56:51.526 [conn1] going to kill op: op: 8288.0 m31000| Fri Feb 22 11:56:51.527 [conn1] going to kill op: op: 8287.0 m31000| Fri Feb 22 11:56:51.527 [conn1] going to kill op: op: 8286.0 m31000| Fri Feb 22 11:56:51.535 [conn413] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.535 [conn413] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|42 } } cursorid:252729861871279 ntoreturn:0 
keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:51.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.535 [conn413] end connection 165.225.128.186:62727 (7 connections now open) m31000| Fri Feb 22 11:56:51.535 [initandlisten] connection accepted from 165.225.128.186:48139 #414 (8 connections now open) m31000| Fri Feb 22 11:56:51.535 [conn412] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.535 [conn412] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|42 } } cursorid:252729851536026 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.535 [conn412] end connection 165.225.128.186:40683 (7 connections now open) m31000| Fri Feb 22 11:56:51.535 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.535 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|51 } } cursorid:253073102951996 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:51.536 [initandlisten] connection accepted from 165.225.128.186:52504 #415 (8 connections now open) m31000| Fri Feb 22 11:56:51.536 [conn12] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:51.536 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.627 [conn1] going to kill op: op: 8324.0 m31000| Fri Feb 22 11:56:51.628 [conn1] going to kill op: op: 8323.0 m31000| Fri Feb 22 11:56:51.729 [conn1] going to kill op: op: 8353.0 m31000| Fri Feb 22 11:56:51.729 [conn1] going to kill op: op: 8354.0 m31000| Fri Feb 22 11:56:51.729 [conn415] { 
$err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.729 [conn415] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|52 } } cursorid:253168601885244 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:51.729 [conn414] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.729 [conn414] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|52 } } cursorid:253164611288373 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.729 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.730 [conn415] end connection 165.225.128.186:52504 (7 connections now open) m31001| Fri Feb 22 11:56:51.730 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.730 [conn414] end connection 165.225.128.186:48139 (6 connections now open) m31000| Fri Feb 22 11:56:51.730 [initandlisten] connection accepted from 165.225.128.186:46695 #416 (7 connections now open) m31000| Fri Feb 22 11:56:51.730 [initandlisten] connection accepted from 165.225.128.186:34299 #417 (8 connections now open) m31000| Fri Feb 22 11:56:51.829 [conn1] going to kill op: op: 8394.0 m31000| Fri Feb 22 11:56:51.830 [conn1] going to kill op: op: 8393.0 m31000| Fri Feb 22 11:56:51.832 [conn416] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.832 [conn417] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.832 [conn417] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|71 } } cursorid:253993020939323 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:51.832 [conn416] getmore local.oplog.rs query: { ts: { $gte: 
Timestamp 1361534211000|71 } } cursorid:253992933788506 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:51.833 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:51.833 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.833 [conn417] end connection 165.225.128.186:34299 (7 connections now open) m31000| Fri Feb 22 11:56:51.833 [conn416] end connection 165.225.128.186:46695 (7 connections now open) m31000| Fri Feb 22 11:56:51.833 [initandlisten] connection accepted from 165.225.128.186:35294 #418 (7 connections now open) m31000| Fri Feb 22 11:56:51.833 [initandlisten] connection accepted from 165.225.128.186:46776 #419 (8 connections now open) m31000| Fri Feb 22 11:56:51.930 [conn1] going to kill op: op: 8434.0 m31000| Fri Feb 22 11:56:51.930 [conn1] going to kill op: op: 8432.0 m31000| Fri Feb 22 11:56:51.935 [conn419] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.935 [conn419] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|81 } } cursorid:254429692386539 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:51.935 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:51.936 [conn419] end connection 165.225.128.186:46776 (7 connections now open) m31000| Fri Feb 22 11:56:51.936 [conn418] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:51.936 [conn418] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|81 } } cursorid:254430043477910 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:51.936 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 
22 11:56:51.936 [conn418] end connection 165.225.128.186:35294 (6 connections now open) m31000| Fri Feb 22 11:56:51.936 [initandlisten] connection accepted from 165.225.128.186:33913 #420 (8 connections now open) m31000| Fri Feb 22 11:56:51.936 [initandlisten] connection accepted from 165.225.128.186:34441 #421 (8 connections now open) m31000| Fri Feb 22 11:56:52.031 [conn1] going to kill op: op: 8471.0 m31000| Fri Feb 22 11:56:52.031 [conn1] going to kill op: op: 8472.0 m31000| Fri Feb 22 11:56:52.038 [conn420] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.038 [conn420] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|91 } } cursorid:254869159857435 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:52.038 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.038 [conn420] end connection 165.225.128.186:33913 (7 connections now open) m31000| Fri Feb 22 11:56:52.039 [conn421] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.039 [conn421] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534211000|92 } } cursorid:254867711011278 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:86 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.039 [conn421] ClientCursor::find(): cursor not found in map '254867711011278' (ok after a drop) m31000| Fri Feb 22 11:56:52.039 [initandlisten] connection accepted from 165.225.128.186:34803 #422 (8 connections now open) m31002| Fri Feb 22 11:56:52.039 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.039 [conn421] end connection 165.225.128.186:34441 (7 connections now open) m31000| Fri Feb 22 11:56:52.039 [initandlisten] connection accepted from 165.225.128.186:40356 #423 (8 connections now open) m31000| Fri Feb 22 11:56:52.132 
[conn1] going to kill op: op: 8512.0 m31000| Fri Feb 22 11:56:52.132 [conn1] going to kill op: op: 8513.0 m31000| Fri Feb 22 11:56:52.141 [conn422] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.141 [conn422] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|3 } } cursorid:255306058278112 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:52.141 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.141 [conn422] end connection 165.225.128.186:34803 (7 connections now open) m31000| Fri Feb 22 11:56:52.141 [conn423] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.141 [conn423] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|3 } } cursorid:255305828078427 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.142 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.142 [conn423] end connection 165.225.128.186:40356 (6 connections now open) m31000| Fri Feb 22 11:56:52.142 [initandlisten] connection accepted from 165.225.128.186:45565 #424 (8 connections now open) m31000| Fri Feb 22 11:56:52.142 [initandlisten] connection accepted from 165.225.128.186:52163 #425 (8 connections now open) m31000| Fri Feb 22 11:56:52.233 [conn1] going to kill op: op: 8548.0 m31000| Fri Feb 22 11:56:52.233 [conn1] going to kill op: op: 8547.0 m31000| Fri Feb 22 11:56:52.234 [conn425] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.234 [conn425] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|13 } } cursorid:255745764223746 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.234 
[conn424] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.234 [conn424] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|13 } } cursorid:255745231047629 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.234 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:52.234 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.234 [conn425] end connection 165.225.128.186:52163 (7 connections now open) m31000| Fri Feb 22 11:56:52.234 [conn424] end connection 165.225.128.186:45565 (7 connections now open) m31000| Fri Feb 22 11:56:52.235 [initandlisten] connection accepted from 165.225.128.186:60844 #426 (7 connections now open) m31000| Fri Feb 22 11:56:52.235 [initandlisten] connection accepted from 165.225.128.186:45925 #427 (8 connections now open) m31000| Fri Feb 22 11:56:52.334 [conn1] going to kill op: op: 8598.0 m31000| Fri Feb 22 11:56:52.334 [conn1] going to kill op: op: 8597.0 m31000| Fri Feb 22 11:56:52.334 [conn1] going to kill op: op: 8596.0 m31000| Fri Feb 22 11:56:52.337 [conn426] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.337 [conn426] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|22 } } cursorid:256140880754194 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.337 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.337 [conn427] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.337 [conn427] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|22 } } cursorid:256140454160734 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms 
m31000| Fri Feb 22 11:56:52.337 [conn426] end connection 165.225.128.186:60844 (7 connections now open) m31001| Fri Feb 22 11:56:52.337 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.337 [conn427] end connection 165.225.128.186:45925 (6 connections now open) m31000| Fri Feb 22 11:56:52.337 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.337 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|22 } } cursorid:256088667478735 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.337 [initandlisten] connection accepted from 165.225.128.186:62894 #428 (8 connections now open) m31000| Fri Feb 22 11:56:52.338 [initandlisten] connection accepted from 165.225.128.186:65093 #429 (8 connections now open) m31000| Fri Feb 22 11:56:52.339 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:56:52.339 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.435 [conn1] going to kill op: op: 8636.0 m31000| Fri Feb 22 11:56:52.435 [conn1] going to kill op: op: 8637.0 m31000| Fri Feb 22 11:56:52.440 [conn428] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.440 [conn428] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|32 } } cursorid:256577123349809 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.440 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.440 [conn428] end connection 165.225.128.186:62894 (7 connections now open) m31000| Fri Feb 22 11:56:52.440 [conn429] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.440 [conn429] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534212000|32 } } cursorid:256578465022425 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:52.440 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.440 [initandlisten] connection accepted from 165.225.128.186:34330 #430 (8 connections now open) m31000| Fri Feb 22 11:56:52.440 [conn429] end connection 165.225.128.186:65093 (7 connections now open) m31000| Fri Feb 22 11:56:52.441 [initandlisten] connection accepted from 165.225.128.186:54067 #431 (8 connections now open) m31000| Fri Feb 22 11:56:52.536 [conn1] going to kill op: op: 8674.0 m31000| Fri Feb 22 11:56:52.536 [conn1] going to kill op: op: 8675.0 m31000| Fri Feb 22 11:56:52.543 [conn430] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.543 [conn430] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|42 } } cursorid:257015810408301 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.543 [conn431] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.543 [conn431] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|42 } } cursorid:257015740603545 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.543 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:52.543 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.543 [conn430] end connection 165.225.128.186:34330 (7 connections now open) m31000| Fri Feb 22 11:56:52.543 [conn431] end connection 165.225.128.186:54067 (6 connections now open) m31000| Fri Feb 22 11:56:52.543 [initandlisten] connection accepted from 165.225.128.186:44042 #432 (7 connections now open) 
m31000| Fri Feb 22 11:56:52.543 [initandlisten] connection accepted from 165.225.128.186:36790 #433 (8 connections now open) m31000| Fri Feb 22 11:56:52.637 [conn1] going to kill op: op: 8728.0 m31000| Fri Feb 22 11:56:52.637 [conn1] going to kill op: op: 8726.0 m31000| Fri Feb 22 11:56:52.637 [conn1] going to kill op: op: 8727.0 m31000| Fri Feb 22 11:56:52.646 [conn432] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.646 [conn432] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|52 } } cursorid:257454384669838 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.646 [conn433] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.646 [conn433] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|52 } } cursorid:257453963478366 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.646 [conn432] end connection 165.225.128.186:44042 (7 connections now open) m31001| Fri Feb 22 11:56:52.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.646 [conn433] end connection 165.225.128.186:36790 (6 connections now open) m31000| Fri Feb 22 11:56:52.647 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.647 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|52 } } cursorid:257402667984847 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.647 [initandlisten] connection accepted from 165.225.128.186:59428 #434 (7 connections now open) m31000| Fri Feb 22 11:56:52.647 [initandlisten] connection accepted from 
165.225.128.186:65100 #435 (8 connections now open) m31002| Fri Feb 22 11:56:52.648 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.738 [conn1] going to kill op: op: 8764.0 m31000| Fri Feb 22 11:56:52.738 [conn1] going to kill op: op: 8763.0 m31000| Fri Feb 22 11:56:52.739 [conn435] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.739 [conn435] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|63 } } cursorid:257893288397207 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.739 [conn434] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.739 [conn434] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|63 } } cursorid:257891596197159 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.739 [conn435] ClientCursor::find(): cursor not found in map '257893288397207' (ok after a drop) m31001| Fri Feb 22 11:56:52.739 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:52.739 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.739 [conn435] end connection 165.225.128.186:65100 (7 connections now open) m31000| Fri Feb 22 11:56:52.740 [conn434] end connection 165.225.128.186:59428 (6 connections now open) m31000| Fri Feb 22 11:56:52.740 [initandlisten] connection accepted from 165.225.128.186:49871 #436 (7 connections now open) m31000| Fri Feb 22 11:56:52.740 [initandlisten] connection accepted from 165.225.128.186:36357 #437 (8 connections now open) m31000| Fri Feb 22 11:56:52.839 [conn1] going to kill op: op: 8801.0 m31000| Fri Feb 22 11:56:52.839 [conn1] going to kill op: op: 8802.0 m31000| Fri Feb 22 11:56:52.842 [conn437] { $err: "operation was interrupted", 
code: 11601 } m31000| Fri Feb 22 11:56:52.842 [conn437] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|72 } } cursorid:258287200802320 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.842 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.843 [conn436] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.843 [conn437] end connection 165.225.128.186:36357 (7 connections now open) m31000| Fri Feb 22 11:56:52.843 [conn436] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|72 } } cursorid:258288591541162 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:52.843 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.843 [conn436] end connection 165.225.128.186:49871 (6 connections now open) m31000| Fri Feb 22 11:56:52.843 [initandlisten] connection accepted from 165.225.128.186:58850 #438 (8 connections now open) m31000| Fri Feb 22 11:56:52.843 [initandlisten] connection accepted from 165.225.128.186:63970 #439 (8 connections now open) m31000| Fri Feb 22 11:56:52.940 [conn1] going to kill op: op: 8839.0 m31000| Fri Feb 22 11:56:52.940 [conn1] going to kill op: op: 8840.0 m31000| Fri Feb 22 11:56:52.945 [conn438] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.945 [conn438] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|82 } } cursorid:258725846510342 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:52.945 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.945 [conn438] end connection 165.225.128.186:58850 (7 connections now open) m31000| Fri Feb 
22 11:56:52.946 [conn439] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:52.946 [conn439] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|82 } } cursorid:258725591034948 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:52.946 [initandlisten] connection accepted from 165.225.128.186:52636 #440 (8 connections now open) m31001| Fri Feb 22 11:56:52.946 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:52.946 [conn439] end connection 165.225.128.186:63970 (7 connections now open) m31000| Fri Feb 22 11:56:52.946 [initandlisten] connection accepted from 165.225.128.186:64745 #441 (8 connections now open) m31000| Fri Feb 22 11:56:53.041 [conn1] going to kill op: op: 8878.0 m31000| Fri Feb 22 11:56:53.041 [conn1] going to kill op: op: 8877.0 m31000| Fri Feb 22 11:56:53.048 [conn440] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.048 [conn440] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|92 } } cursorid:259160257770795 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:53.048 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.048 [conn440] end connection 165.225.128.186:52636 (7 connections now open) m31000| Fri Feb 22 11:56:53.049 [initandlisten] connection accepted from 165.225.128.186:63042 #442 (8 connections now open) m31000| Fri Feb 22 11:56:53.049 [conn441] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.049 [conn441] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534212000|92 } } cursorid:259164265560820 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.049 
[conn441] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:56:53.049 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.049 [conn441] end connection 165.225.128.186:64745 (7 connections now open) m31000| Fri Feb 22 11:56:53.049 [initandlisten] connection accepted from 165.225.128.186:59917 #443 (8 connections now open) m31000| Fri Feb 22 11:56:53.142 [conn1] going to kill op: op: 8917.0 m31000| Fri Feb 22 11:56:53.142 [conn1] going to kill op: op: 8915.0 m31000| Fri Feb 22 11:56:53.151 [conn442] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.151 [conn442] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|4 } } cursorid:259596640315680 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:53.151 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.152 [conn442] end connection 165.225.128.186:63042 (7 connections now open) m31000| Fri Feb 22 11:56:53.152 [initandlisten] connection accepted from 165.225.128.186:50982 #444 (8 connections now open) m31000| Fri Feb 22 11:56:53.243 [conn1] going to kill op: op: 8948.0 m31000| Fri Feb 22 11:56:53.243 [conn1] going to kill op: op: 8951.0 m31000| Fri Feb 22 11:56:53.244 [conn443] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.244 [conn443] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|4 } } cursorid:259602805444212 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:53.244 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.244 [conn443] end connection 165.225.128.186:59917 (7 connections now open) m31000| Fri Feb 22 11:56:53.244 [initandlisten] connection accepted from 
165.225.128.186:62067 #445 (8 connections now open) m31000| Fri Feb 22 11:56:53.245 [conn444] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.245 [conn444] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|14 } } cursorid:260035297996275 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:53.245 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.245 [conn444] end connection 165.225.128.186:50982 (7 connections now open) m31000| Fri Feb 22 11:56:53.245 [initandlisten] connection accepted from 165.225.128.186:41250 #446 (8 connections now open) m31000| Fri Feb 22 11:56:53.344 [conn1] going to kill op: op: 8991.0 m31000| Fri Feb 22 11:56:53.344 [conn1] going to kill op: op: 8989.0 m31000| Fri Feb 22 11:56:53.344 [conn1] going to kill op: op: 8988.0 m31000| Fri Feb 22 11:56:53.347 [conn445] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.347 [conn445] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|23 } } cursorid:260425796313233 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:53.347 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.347 [conn445] end connection 165.225.128.186:62067 (7 connections now open) m31000| Fri Feb 22 11:56:53.347 [initandlisten] connection accepted from 165.225.128.186:51999 #447 (8 connections now open) m31000| Fri Feb 22 11:56:53.348 [conn446] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.348 [conn446] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|24 } } cursorid:260431195798660 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 
11:56:53.348 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.348 [conn446] end connection 165.225.128.186:41250 (7 connections now open) m31000| Fri Feb 22 11:56:53.348 [initandlisten] connection accepted from 165.225.128.186:44550 #448 (8 connections now open) m31000| Fri Feb 22 11:56:53.349 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.349 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|34 } } cursorid:260816675589527 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:53.349 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.445 [conn1] going to kill op: op: 9029.0 m31000| Fri Feb 22 11:56:53.445 [conn1] going to kill op: op: 9030.0 m31000| Fri Feb 22 11:56:53.450 [conn447] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.450 [conn447] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|34 } } cursorid:260864845872540 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:53.450 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.450 [conn447] end connection 165.225.128.186:51999 (7 connections now open) m31000| Fri Feb 22 11:56:53.450 [conn448] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.450 [initandlisten] connection accepted from 165.225.128.186:42777 #449 (8 connections now open) m31000| Fri Feb 22 11:56:53.450 [conn448] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|34 } } cursorid:260868729056942 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.450 [conn448] ClientCursor::find(): 
cursor not found in map '260868729056942' (ok after a drop) m31002| Fri Feb 22 11:56:53.450 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.451 [conn448] end connection 165.225.128.186:44550 (7 connections now open) m31000| Fri Feb 22 11:56:53.451 [initandlisten] connection accepted from 165.225.128.186:58384 #450 (8 connections now open) m31000| Fri Feb 22 11:56:53.545 [conn1] going to kill op: op: 9068.0 m31000| Fri Feb 22 11:56:53.546 [conn1] going to kill op: op: 9067.0 m31000| Fri Feb 22 11:56:53.553 [conn449] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.553 [conn449] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|44 } } cursorid:261301649317095 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:53.553 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.553 [conn449] end connection 165.225.128.186:42777 (7 connections now open) m31000| Fri Feb 22 11:56:53.553 [conn450] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.553 [conn450] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|44 } } cursorid:261307139967748 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.553 [initandlisten] connection accepted from 165.225.128.186:65397 #451 (8 connections now open) m31002| Fri Feb 22 11:56:53.553 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.553 [conn450] end connection 165.225.128.186:58384 (7 connections now open) m31000| Fri Feb 22 11:56:53.554 [initandlisten] connection accepted from 165.225.128.186:60405 #452 (8 connections now open) m31000| Fri Feb 22 11:56:53.646 [conn1] going to kill op: op: 9106.0 m31000| Fri Feb 22 11:56:53.647 
[conn1] going to kill op: op: 9105.0 m31000| Fri Feb 22 11:56:53.656 [conn451] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.656 [conn451] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|54 } } cursorid:261745797466072 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:53.656 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.656 [conn451] end connection 165.225.128.186:65397 (7 connections now open) m31000| Fri Feb 22 11:56:53.656 [conn452] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.656 [conn452] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|54 } } cursorid:261745822041328 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.656 [initandlisten] connection accepted from 165.225.128.186:56281 #453 (8 connections now open) m31002| Fri Feb 22 11:56:53.657 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.657 [conn452] end connection 165.225.128.186:60405 (7 connections now open) m31000| Fri Feb 22 11:56:53.657 [initandlisten] connection accepted from 165.225.128.186:57723 #454 (8 connections now open) m31000| Fri Feb 22 11:56:53.747 [conn1] going to kill op: op: 9152.0 m31000| Fri Feb 22 11:56:53.748 [conn1] going to kill op: op: 9151.0 m31000| Fri Feb 22 11:56:53.748 [conn1] going to kill op: op: 9150.0 m31000| Fri Feb 22 11:56:53.749 [conn454] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.749 [conn453] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.749 [conn454] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|64 } } cursorid:262183652974756 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.749 [conn453] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|64 } } cursorid:262178292132211 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:53.749 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:53.749 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.749 [conn454] end connection 165.225.128.186:57723 (7 connections now open) m31000| Fri Feb 22 11:56:53.749 [conn453] end connection 165.225.128.186:56281 (7 connections now open) m31000| Fri Feb 22 11:56:53.749 [initandlisten] connection accepted from 165.225.128.186:35856 #455 (7 connections now open) m31000| Fri Feb 22 11:56:53.750 [initandlisten] connection accepted from 165.225.128.186:46110 #456 (8 connections now open) m31000| Fri Feb 22 11:56:53.750 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.750 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|64 } } cursorid:262130946449856 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.751 [conn12] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:53.751 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.848 [conn1] going to kill op: op: 9191.0 m31000| Fri Feb 22 11:56:53.849 [conn1] going to kill op: op: 9190.0 m31000| Fri Feb 22 11:56:53.852 [conn456] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.852 [conn456] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|73 } } cursorid:262577344413658 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) 
r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.852 [conn455] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.852 [conn455] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|73 } } cursorid:262578914685259 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:53.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:53.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.852 [conn456] end connection 165.225.128.186:46110 (7 connections now open) m31000| Fri Feb 22 11:56:53.852 [conn455] end connection 165.225.128.186:35856 (6 connections now open) m31000| Fri Feb 22 11:56:53.853 [initandlisten] connection accepted from 165.225.128.186:46250 #457 (7 connections now open) m31000| Fri Feb 22 11:56:53.853 [initandlisten] connection accepted from 165.225.128.186:36300 #458 (8 connections now open) m31000| Fri Feb 22 11:56:53.949 [conn1] going to kill op: op: 9231.0 m31000| Fri Feb 22 11:56:53.949 [conn1] going to kill op: op: 9232.0 m31000| Fri Feb 22 11:56:53.955 [conn457] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.955 [conn457] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|83 } } cursorid:263017110094292 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:53.955 [conn458] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:53.955 [conn458] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|83 } } cursorid:263016555539122 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:53.955 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| 
Fri Feb 22 11:56:53.955 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:53.955 [conn457] end connection 165.225.128.186:46250 (7 connections now open) m31000| Fri Feb 22 11:56:53.955 [conn458] end connection 165.225.128.186:36300 (7 connections now open) m31000| Fri Feb 22 11:56:53.955 [initandlisten] connection accepted from 165.225.128.186:62697 #459 (7 connections now open) m31000| Fri Feb 22 11:56:53.956 [initandlisten] connection accepted from 165.225.128.186:52066 #460 (8 connections now open) m31000| Fri Feb 22 11:56:54.050 [conn1] going to kill op: op: 9270.0 m31000| Fri Feb 22 11:56:54.050 [conn1] going to kill op: op: 9269.0 m31000| Fri Feb 22 11:56:54.058 [conn460] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.058 [conn460] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|94 } } cursorid:263454883237543 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:54.058 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.058 [conn460] end connection 165.225.128.186:52066 (7 connections now open) m31000| Fri Feb 22 11:56:54.058 [initandlisten] connection accepted from 165.225.128.186:41822 #461 (8 connections now open) m31000| Fri Feb 22 11:56:54.058 [conn459] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.058 [conn459] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534213000|94 } } cursorid:263454365210713 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.058 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.058 [conn459] end connection 165.225.128.186:62697 (7 connections now open) m31000| Fri Feb 22 11:56:54.059 [initandlisten] connection accepted 
from 165.225.128.186:39941 #462 (8 connections now open) m31000| Fri Feb 22 11:56:54.151 [conn1] going to kill op: op: 9311.0 m31000| Fri Feb 22 11:56:54.151 [conn1] going to kill op: op: 9310.0 m31000| Fri Feb 22 11:56:54.160 [conn461] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.160 [conn461] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|5 } } cursorid:263888037450434 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:54.161 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.161 [conn462] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.161 [conn461] end connection 165.225.128.186:41822 (7 connections now open) m31000| Fri Feb 22 11:56:54.161 [conn462] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|5 } } cursorid:263892721055314 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.161 [conn462] ClientCursor::find(): cursor not found in map '263892721055314' (ok after a drop) m31002| Fri Feb 22 11:56:54.161 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.161 [conn462] end connection 165.225.128.186:39941 (6 connections now open) m31000| Fri Feb 22 11:56:54.161 [initandlisten] connection accepted from 165.225.128.186:55249 #463 (8 connections now open) m31000| Fri Feb 22 11:56:54.161 [initandlisten] connection accepted from 165.225.128.186:36834 #464 (8 connections now open) m31000| Fri Feb 22 11:56:54.252 [conn1] going to kill op: op: 9346.0 m31000| Fri Feb 22 11:56:54.252 [conn1] going to kill op: op: 9345.0 m31000| Fri Feb 22 11:56:54.253 [conn464] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.253 [conn463] { $err: "operation was interrupted", code: 
11601 } m31000| Fri Feb 22 11:56:54.253 [conn464] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|15 } } cursorid:264330873305815 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.253 [conn463] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|15 } } cursorid:264330992453919 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.253 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:54.253 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.253 [conn464] end connection 165.225.128.186:36834 (7 connections now open) m31000| Fri Feb 22 11:56:54.253 [conn463] end connection 165.225.128.186:55249 (7 connections now open) m31000| Fri Feb 22 11:56:54.254 [initandlisten] connection accepted from 165.225.128.186:50674 #465 (7 connections now open) m31000| Fri Feb 22 11:56:54.254 [initandlisten] connection accepted from 165.225.128.186:36649 #466 (8 connections now open) m31000| Fri Feb 22 11:56:54.353 [conn1] going to kill op: op: 9387.0 m31000| Fri Feb 22 11:56:54.353 [conn1] going to kill op: op: 9385.0 m31000| Fri Feb 22 11:56:54.353 [conn1] going to kill op: op: 9384.0 m31000| Fri Feb 22 11:56:54.356 [conn465] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.356 [conn465] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|24 } } cursorid:264726253125677 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.356 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.356 [conn465] end connection 165.225.128.186:50674 (7 connections now open) m31000| Fri Feb 22 11:56:54.356 [conn466] { 
$err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.356 [conn466] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|24 } } cursorid:264725056807480 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:54.356 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.356 [initandlisten] connection accepted from 165.225.128.186:46548 #467 (8 connections now open) m31000| Fri Feb 22 11:56:54.356 [conn466] end connection 165.225.128.186:36649 (7 connections now open) m31000| Fri Feb 22 11:56:54.356 [initandlisten] connection accepted from 165.225.128.186:40053 #468 (8 connections now open) m31000| Fri Feb 22 11:56:54.360 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.360 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|34 } } cursorid:265111370362413 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:54.360 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.454 [conn1] going to kill op: op: 9426.0 m31000| Fri Feb 22 11:56:54.454 [conn1] going to kill op: op: 9425.0 m31000| Fri Feb 22 11:56:54.458 [conn468] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.458 [conn467] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.458 [conn468] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|34 } } cursorid:265162855178013 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.458 [conn467] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|34 } } cursorid:265163069157860 ntoreturn:0 keyUpdates:0 exception: operation was 
interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:54.458 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.458 [conn467] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31000| Fri Feb 22 11:56:54.458 [conn468] end connection 165.225.128.186:40053 (7 connections now open) m31002| Fri Feb 22 11:56:54.458 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.458 [conn467] end connection 165.225.128.186:46548 (6 connections now open) m31000| Fri Feb 22 11:56:54.459 [initandlisten] connection accepted from 165.225.128.186:36585 #469 (7 connections now open) m31000| Fri Feb 22 11:56:54.459 [initandlisten] connection accepted from 165.225.128.186:51895 #470 (8 connections now open) m31000| Fri Feb 22 11:56:54.554 [conn1] going to kill op: op: 9464.0 m31000| Fri Feb 22 11:56:54.555 [conn1] going to kill op: op: 9463.0 m31000| Fri Feb 22 11:56:54.561 [conn470] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.561 [conn470] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|44 } } cursorid:265600969876656 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.561 [conn469] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.561 [conn470] end connection 165.225.128.186:51895 (7 connections now open) m31000| Fri Feb 22 11:56:54.561 [conn469] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|44 } } cursorid:265601248207909 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:54.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri 
Feb 22 11:56:54.562 [conn469] end connection 165.225.128.186:36585 (6 connections now open) m31000| Fri Feb 22 11:56:54.562 [initandlisten] connection accepted from 165.225.128.186:39414 #471 (8 connections now open) m31000| Fri Feb 22 11:56:54.562 [initandlisten] connection accepted from 165.225.128.186:63060 #472 (8 connections now open) m31000| Fri Feb 22 11:56:54.655 [conn1] going to kill op: op: 9501.0 m31000| Fri Feb 22 11:56:54.655 [conn1] going to kill op: op: 9502.0 m31000| Fri Feb 22 11:56:54.664 [conn471] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.664 [conn471] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|54 } } cursorid:266040818105329 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.664 [conn472] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.664 [conn472] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|54 } } cursorid:266039902558399 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.664 [conn471] end connection 165.225.128.186:39414 (7 connections now open) m31001| Fri Feb 22 11:56:54.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.664 [conn472] end connection 165.225.128.186:63060 (6 connections now open) m31000| Fri Feb 22 11:56:54.664 [initandlisten] connection accepted from 165.225.128.186:44644 #473 (7 connections now open) m31000| Fri Feb 22 11:56:54.665 [initandlisten] connection accepted from 165.225.128.186:39236 #474 (8 connections now open) m31000| Fri Feb 22 11:56:54.756 [conn1] going to kill op: op: 9542.0 m31000| Fri Feb 22 11:56:54.756 [conn1] going to kill op: op: 9539.0 m31000| Fri Feb 
22 11:56:54.757 [conn1] going to kill op: op: 9540.0 m31000| Fri Feb 22 11:56:54.757 [conn473] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.757 [conn473] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|64 } } cursorid:266478138816477 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.757 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.757 [conn474] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.757 [conn474] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|64 } } cursorid:266478235065637 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.757 [conn473] end connection 165.225.128.186:44644 (7 connections now open) m31001| Fri Feb 22 11:56:54.757 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.758 [conn474] end connection 165.225.128.186:39236 (6 connections now open) m31000| Fri Feb 22 11:56:54.758 [initandlisten] connection accepted from 165.225.128.186:51953 #475 (7 connections now open) m31000| Fri Feb 22 11:56:54.758 [initandlisten] connection accepted from 165.225.128.186:57876 #476 (8 connections now open) m31000| Fri Feb 22 11:56:54.761 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.761 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|74 } } cursorid:266821070722835 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:45 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:56:54.761 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.857 [conn1] going to kill op: op: 9580.0 m31000| Fri Feb 22 11:56:54.857 [conn1] going to kill op: op: 
9581.0 m31000| Fri Feb 22 11:56:54.860 [conn475] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.860 [conn475] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|74 } } cursorid:266873297795935 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.860 [conn476] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.860 [conn476] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|74 } } cursorid:266873581317745 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.860 [conn475] ClientCursor::find(): cursor not found in map '266873297795935' (ok after a drop) m31001| Fri Feb 22 11:56:54.860 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:56:54.860 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.860 [conn476] end connection 165.225.128.186:57876 (7 connections now open) m31000| Fri Feb 22 11:56:54.860 [conn475] end connection 165.225.128.186:51953 (7 connections now open) m31000| Fri Feb 22 11:56:54.861 [initandlisten] connection accepted from 165.225.128.186:55167 #477 (7 connections now open) m31000| Fri Feb 22 11:56:54.861 [initandlisten] connection accepted from 165.225.128.186:61341 #478 (8 connections now open) m31000| Fri Feb 22 11:56:54.958 [conn1] going to kill op: op: 9618.0 m31000| Fri Feb 22 11:56:54.958 [conn1] going to kill op: op: 9619.0 m31000| Fri Feb 22 11:56:54.963 [conn478] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:54.963 [conn478] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|84 } } cursorid:267310431766543 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| 
Fri Feb 22 11:56:54.963 [conn477] { $err: "operation was interrupted", code: 11601 } m31002| Fri Feb 22 11:56:54.963 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.963 [conn477] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|84 } } cursorid:267310527989387 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:54.964 [conn478] end connection 165.225.128.186:61341 (7 connections now open) m31001| Fri Feb 22 11:56:54.964 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:54.964 [conn477] end connection 165.225.128.186:55167 (6 connections now open) m31000| Fri Feb 22 11:56:54.964 [initandlisten] connection accepted from 165.225.128.186:58900 #479 (7 connections now open) m31000| Fri Feb 22 11:56:54.964 [initandlisten] connection accepted from 165.225.128.186:53908 #480 (8 connections now open) m31000| Fri Feb 22 11:56:55.059 [conn1] going to kill op: op: 9657.0 m31000| Fri Feb 22 11:56:55.059 [conn1] going to kill op: op: 9656.0 m31000| Fri Feb 22 11:56:55.066 [conn480] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:55.066 [conn480] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|94 } } cursorid:267748483439288 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:55.066 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:55.066 [conn480] end connection 165.225.128.186:53908 (7 connections now open) m31000| Fri Feb 22 11:56:55.066 [conn479] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:55.066 [conn479] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534214000|94 } } cursorid:267748262805821 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:55.067 [initandlisten] connection accepted from 165.225.128.186:55099 #481 (8 connections now open) m31002| Fri Feb 22 11:56:55.067 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:55.067 [conn479] end connection 165.225.128.186:58900 (7 connections now open) m31000| Fri Feb 22 11:56:55.067 [initandlisten] connection accepted from 165.225.128.186:54089 #482 (8 connections now open) m31000| Fri Feb 22 11:56:55.160 [conn1] going to kill op: op: 9697.0 m31000| Fri Feb 22 11:56:55.160 [conn1] going to kill op: op: 9696.0 m31000| Fri Feb 22 11:56:55.169 [conn481] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:55.169 [conn482] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:55.169 [conn481] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|6 } } cursorid:268186364149747 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:55.169 [conn482] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|6 } } cursorid:268186468107470 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:55.169 [conn481] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:56:55.169 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:56:55.169 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:55.169 [conn482] end connection 165.225.128.186:54089 (7 connections now open) m31000| Fri Feb 22 11:56:55.169 [conn481] end connection 165.225.128.186:55099 (7 connections now open) m31000| Fri Feb 22 11:56:55.169 [initandlisten] connection accepted from 165.225.128.186:36363 #483 (7 connections now open) 
m31000| Fri Feb 22 11:56:55.170 [initandlisten] connection accepted from 165.225.128.186:35007 #484 (8 connections now open)
m31000| Fri Feb 22 11:56:55.261 [conn1] going to kill op: op: 9731.0
m31000| Fri Feb 22 11:56:55.261 [conn1] going to kill op: op: 9732.0
m31000| Fri Feb 22 11:56:55.262 [conn483] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.262 [conn483] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|16 } } cursorid:268626311042248 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.262 [conn484] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.262 [conn484] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|16 } } cursorid:268626355499829 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.262 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:55.262 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.262 [conn483] end connection 165.225.128.186:36363 (7 connections now open)
m31000| Fri Feb 22 11:56:55.262 [conn484] end connection 165.225.128.186:35007 (6 connections now open)
m31000| Fri Feb 22 11:56:55.262 [initandlisten] connection accepted from 165.225.128.186:57995 #485 (7 connections now open)
m31000| Fri Feb 22 11:56:55.262 [initandlisten] connection accepted from 165.225.128.186:34318 #486 (8 connections now open)
m31000| Fri Feb 22 11:56:55.362 [conn1] going to kill op: op: 9769.0
m31000| Fri Feb 22 11:56:55.362 [conn1] going to kill op: op: 9770.0
m31000| Fri Feb 22 11:56:55.365 [conn485] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.365 [conn485] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|25 } } cursorid:269021605225221 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.365 [conn486] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.365 [conn486] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|25 } } cursorid:269021500855363 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.365 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:55.365 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.365 [conn485] end connection 165.225.128.186:57995 (7 connections now open)
m31000| Fri Feb 22 11:56:55.365 [conn486] end connection 165.225.128.186:34318 (7 connections now open)
m31000| Fri Feb 22 11:56:55.365 [initandlisten] connection accepted from 165.225.128.186:63113 #487 (7 connections now open)
m31000| Fri Feb 22 11:56:55.365 [initandlisten] connection accepted from 165.225.128.186:42604 #488 (8 connections now open)
m31000| Fri Feb 22 11:56:55.463 [conn1] going to kill op: op: 9821.0
m31000| Fri Feb 22 11:56:55.463 [conn1] going to kill op: op: 9819.0
m31000| Fri Feb 22 11:56:55.463 [conn1] going to kill op: op: 9820.0
m31000| Fri Feb 22 11:56:55.468 [conn487] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.468 [conn487] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|35 } } cursorid:269458004694336 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:55.468 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.468 [conn488] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.468 [conn487] end connection 165.225.128.186:63113 (7 connections now open)
m31000| Fri Feb 22 11:56:55.468 [conn488] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|35 } } cursorid:269457807651334 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.468 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.468 [conn488] end connection 165.225.128.186:42604 (6 connections now open)
m31000| Fri Feb 22 11:56:55.468 [initandlisten] connection accepted from 165.225.128.186:62881 #489 (8 connections now open)
m31000| Fri Feb 22 11:56:55.468 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.468 [initandlisten] connection accepted from 165.225.128.186:45324 #490 (8 connections now open)
m31000| Fri Feb 22 11:56:55.468 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|35 } } cursorid:269407839838834 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:55.469 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.564 [conn1] going to kill op: op: 9860.0
m31000| Fri Feb 22 11:56:55.564 [conn1] going to kill op: op: 9859.0
m31000| Fri Feb 22 11:56:55.570 [conn489] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.570 [conn489] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|46 } } cursorid:269897819166354 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.570 [conn489] ClientCursor::find(): cursor not found in map '269897819166354' (ok after a drop)
m31001| Fri Feb 22 11:56:55.570 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.570 [conn489] end connection 165.225.128.186:62881 (7 connections now open)
m31000| Fri Feb 22 11:56:55.571 [conn490] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.571 [conn490] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|46 } } cursorid:269896106541327 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.571 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.571 [initandlisten] connection accepted from 165.225.128.186:57869 #491 (8 connections now open)
m31000| Fri Feb 22 11:56:55.571 [conn490] end connection 165.225.128.186:45324 (6 connections now open)
m31000| Fri Feb 22 11:56:55.571 [initandlisten] connection accepted from 165.225.128.186:43964 #492 (8 connections now open)
m31000| Fri Feb 22 11:56:55.665 [conn1] going to kill op: op: 9898.0
m31000| Fri Feb 22 11:56:55.665 [conn1] going to kill op: op: 9897.0
m31000| Fri Feb 22 11:56:55.673 [conn492] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.673 [conn491] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.673 [conn492] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|56 } } cursorid:270334667681702 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.673 [conn491] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|56 } } cursorid:270334557014169 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.673 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:55.673 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.673 [conn492] end connection 165.225.128.186:43964 (7 connections now open)
m31000| Fri Feb 22 11:56:55.673 [conn491] end connection 165.225.128.186:57869 (7 connections now open)
m31000| Fri Feb 22 11:56:55.674 [initandlisten] connection accepted from 165.225.128.186:35628 #493 (7 connections now open)
m31000| Fri Feb 22 11:56:55.674 [initandlisten] connection accepted from 165.225.128.186:54611 #494 (8 connections now open)
m31000| Fri Feb 22 11:56:55.765 [conn1] going to kill op: op: 9932.0
m31000| Fri Feb 22 11:56:55.766 [conn1] going to kill op: op: 9933.0
m31000| Fri Feb 22 11:56:55.766 [conn493] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.766 [conn493] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|66 } } cursorid:270772233370593 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.766 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.766 [conn494] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.766 [conn494] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|66 } } cursorid:270773449961868 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.766 [conn493] end connection 165.225.128.186:35628 (7 connections now open)
m31001| Fri Feb 22 11:56:55.766 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.766 [conn494] end connection 165.225.128.186:54611 (6 connections now open)
m31000| Fri Feb 22 11:56:55.766 [initandlisten] connection accepted from 165.225.128.186:51821 #495 (7 connections now open)
m31000| Fri Feb 22 11:56:55.767 [initandlisten] connection accepted from 165.225.128.186:43235 #496 (8 connections now open)
m31000| Fri Feb 22 11:56:55.866 [conn1] going to kill op: op: 9983.0
m31000| Fri Feb 22 11:56:55.867 [conn1] going to kill op: op: 9981.0
m31000| Fri Feb 22 11:56:55.867 [conn1] going to kill op: op: 9982.0
m31000| Fri Feb 22 11:56:55.869 [conn495] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.869 [conn495] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|75 } } cursorid:271167865257812 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.869 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.869 [conn495] end connection 165.225.128.186:51821 (7 connections now open)
m31000| Fri Feb 22 11:56:55.869 [conn496] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.869 [conn496] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|75 } } cursorid:271167279654613 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.869 [conn496] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:55.869 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.869 [initandlisten] connection accepted from 165.225.128.186:59848 #497 (8 connections now open)
m31000| Fri Feb 22 11:56:55.869 [conn496] end connection 165.225.128.186:43235 (6 connections now open)
m31000| Fri Feb 22 11:56:55.869 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.869 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|75 } } cursorid:271115723125135 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:55.869 [initandlisten] connection accepted from 165.225.128.186:34669 #498 (8 connections now open)
m31002| Fri Feb 22 11:56:55.870 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.967 [conn1] going to kill op: op: 10021.0
m31000| Fri Feb 22 11:56:55.968 [conn1] going to kill op: op: 10022.0
m31000| Fri Feb 22 11:56:55.971 [conn498] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.971 [conn498] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|85 } } cursorid:271606304012178 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:55.971 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.972 [conn497] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:55.972 [conn498] end connection 165.225.128.186:34669 (7 connections now open)
m31000| Fri Feb 22 11:56:55.972 [conn497] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|85 } } cursorid:271606623351342 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:55.972 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:55.972 [conn497] end connection 165.225.128.186:59848 (6 connections now open)
m31000| Fri Feb 22 11:56:55.972 [initandlisten] connection accepted from 165.225.128.186:34491 #499 (8 connections now open)
m31000| Fri Feb 22 11:56:55.972 [initandlisten] connection accepted from 165.225.128.186:62344 #500 (8 connections now open)
m31000| Fri Feb 22 11:56:56.068 [conn1] going to kill op: op: 10061.0
m31000| Fri Feb 22 11:56:56.068 [conn1] going to kill op: op: 10060.0
m31000| Fri Feb 22 11:56:56.074 [conn499] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.074 [conn499] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|95 } } cursorid:272044803003019 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:56.074 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.074 [conn499] end connection 165.225.128.186:34491 (7 connections now open)
m31000| Fri Feb 22 11:56:56.074 [initandlisten] connection accepted from 165.225.128.186:51224 #501 (8 connections now open)
m31000| Fri Feb 22 11:56:56.075 [conn500] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.075 [conn500] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534215000|95 } } cursorid:272044225220647 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:56.075 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.075 [conn500] end connection 165.225.128.186:62344 (7 connections now open)
m31000| Fri Feb 22 11:56:56.075 [initandlisten] connection accepted from 165.225.128.186:53632 #502 (8 connections now open)
m31000| Fri Feb 22 11:56:56.169 [conn1] going to kill op: op: 10101.0
m31000| Fri Feb 22 11:56:56.169 [conn1] going to kill op: op: 10100.0
m31000| Fri Feb 22 11:56:56.176 [conn501] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.176 [conn501] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|6 } } cursorid:272478067250821 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:56.176 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.176 [conn501] end connection 165.225.128.186:51224 (7 connections now open)
m31000| Fri Feb 22 11:56:56.177 [initandlisten] connection accepted from 165.225.128.186:60670 #503 (8 connections now open)
m31000| Fri Feb 22 11:56:56.177 [conn502] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.177 [conn502] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|6 } } cursorid:272482093042581 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:56.177 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.178 [conn502] end connection 165.225.128.186:53632 (7 connections now open)
m31000| Fri Feb 22 11:56:56.178 [initandlisten] connection accepted from 165.225.128.186:34963 #504 (8 connections now open)
m31000| Fri Feb 22 11:56:56.270 [conn1] going to kill op: op: 10139.0
m31000| Fri Feb 22 11:56:56.270 [conn1] going to kill op: op: 10140.0
m31000| Fri Feb 22 11:56:56.279 [conn503] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.279 [conn503] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|16 } } cursorid:272915221641037 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.279 [conn503] ClientCursor::find(): cursor not found in map '272915221641037' (ok after a drop)
m31001| Fri Feb 22 11:56:56.279 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.279 [conn503] end connection 165.225.128.186:60670 (7 connections now open)
m31000| Fri Feb 22 11:56:56.279 [initandlisten] connection accepted from 165.225.128.186:34114 #505 (8 connections now open)
m31000| Fri Feb 22 11:56:56.370 [conn1] going to kill op: op: 10173.0
m31000| Fri Feb 22 11:56:56.371 [conn1] going to kill op: op: 10172.0
m31000| Fri Feb 22 11:56:56.371 [conn505] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.371 [conn505] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|26 } } cursorid:273353679773423 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:56.371 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.371 [conn505] end connection 165.225.128.186:34114 (7 connections now open)
m31000| Fri Feb 22 11:56:56.371 [conn504] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.371 [conn504] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|16 } } cursorid:272919518046574 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:56.372 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.372 [initandlisten] connection accepted from 165.225.128.186:39590 #506 (8 connections now open)
m31000| Fri Feb 22 11:56:56.372 [conn504] end connection 165.225.128.186:34963 (7 connections now open)
m31000| Fri Feb 22 11:56:56.372 [initandlisten] connection accepted from 165.225.128.186:58279 #507 (8 connections now open)
m31000| Fri Feb 22 11:56:56.471 [conn1] going to kill op: op: 10213.0
m31000| Fri Feb 22 11:56:56.472 [conn1] going to kill op: op: 10211.0
m31000| Fri Feb 22 11:56:56.472 [conn1] going to kill op: op: 10210.0
m31000| Fri Feb 22 11:56:56.474 [conn506] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.474 [conn506] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|36 } } cursorid:273750395347201 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.474 [conn507] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.474 [conn507] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|36 } } cursorid:273750392792474 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:56.474 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:56.474 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.474 [conn506] end connection 165.225.128.186:39590 (7 connections now open)
m31000| Fri Feb 22 11:56:56.474 [conn507] end connection 165.225.128.186:58279 (6 connections now open)
m31000| Fri Feb 22 11:56:56.475 [initandlisten] connection accepted from 165.225.128.186:50001 #508 (7 connections now open)
m31000| Fri Feb 22 11:56:56.475 [initandlisten] connection accepted from 165.225.128.186:53642 #509 (8 connections now open)
m31000| Fri Feb 22 11:56:56.480 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.480 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|46 } } cursorid:274136733742259 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:56.480 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.572 [conn1] going to kill op: op: 10251.0
m31000| Fri Feb 22 11:56:56.573 [conn1] going to kill op: op: 10252.0
m31000| Fri Feb 22 11:56:56.577 [conn508] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.577 [conn508] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|46 } } cursorid:274187566605774 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.577 [conn509] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.577 [conn509] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|46 } } cursorid:274186704125784 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:56.577 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.577 [conn508] end connection 165.225.128.186:50001 (7 connections now open)
m31000| Fri Feb 22 11:56:56.577 [conn509] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:56.577 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.577 [conn509] end connection 165.225.128.186:53642 (6 connections now open)
m31000| Fri Feb 22 11:56:56.577 [initandlisten] connection accepted from 165.225.128.186:64324 #510 (7 connections now open)
m31000| Fri Feb 22 11:56:56.577 [initandlisten] connection accepted from 165.225.128.186:37236 #511 (8 connections now open)
m31000| Fri Feb 22 11:56:56.673 [conn1] going to kill op: op: 10290.0
m31000| Fri Feb 22 11:56:56.673 [conn1] going to kill op: op: 10289.0
m31000| Fri Feb 22 11:56:56.680 [conn511] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.680 [conn511] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|56 } } cursorid:274625226466974 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.680 [conn510] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.680 [conn510] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|56 } } cursorid:274626090212496 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:56.680 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.680 [conn511] end connection 165.225.128.186:37236 (7 connections now open)
m31001| Fri Feb 22 11:56:56.680 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.680 [conn510] end connection 165.225.128.186:64324 (6 connections now open)
m31000| Fri Feb 22 11:56:56.680 [initandlisten] connection accepted from 165.225.128.186:46000 #512 (7 connections now open)
m31000| Fri Feb 22 11:56:56.680 [initandlisten] connection accepted from 165.225.128.186:57285 #513 (8 connections now open)
m31000| Fri Feb 22 11:56:56.774 [conn1] going to kill op: op: 10327.0
m31000| Fri Feb 22 11:56:56.774 [conn1] going to kill op: op: 10328.0
m31000| Fri Feb 22 11:56:56.782 [conn512] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.782 [conn512] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|66 } } cursorid:275064372581638 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.782 [conn513] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.782 [conn513] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|66 } } cursorid:275062707516661 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:56.783 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:56.783 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.783 [conn512] end connection 165.225.128.186:46000 (7 connections now open)
m31000| Fri Feb 22 11:56:56.783 [conn513] end connection 165.225.128.186:57285 (7 connections now open)
m31000| Fri Feb 22 11:56:56.783 [initandlisten] connection accepted from 165.225.128.186:49400 #514 (7 connections now open)
m31000| Fri Feb 22 11:56:56.783 [initandlisten] connection accepted from 165.225.128.186:50622 #515 (8 connections now open)
m31000| Fri Feb 22 11:56:56.875 [conn1] going to kill op: op: 10362.0
m31000| Fri Feb 22 11:56:56.875 [conn1] going to kill op: op: 10363.0
m31000| Fri Feb 22 11:56:56.875 [conn514] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.875 [conn514] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|76 } } cursorid:275501087179461 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.875 [conn515] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.875 [conn515] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|76 } } cursorid:275502539983955 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:56.875 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:56.875 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.875 [conn514] end connection 165.225.128.186:49400 (7 connections now open)
m31000| Fri Feb 22 11:56:56.876 [conn515] end connection 165.225.128.186:50622 (6 connections now open)
m31000| Fri Feb 22 11:56:56.876 [initandlisten] connection accepted from 165.225.128.186:54521 #516 (7 connections now open)
m31000| Fri Feb 22 11:56:56.876 [initandlisten] connection accepted from 165.225.128.186:50109 #517 (8 connections now open)
m31000| Fri Feb 22 11:56:56.976 [conn1] going to kill op: op: 10416.0
m31000| Fri Feb 22 11:56:56.976 [conn1] going to kill op: op: 10414.0
m31000| Fri Feb 22 11:56:56.976 [conn1] going to kill op: op: 10415.0
m31000| Fri Feb 22 11:56:56.979 [conn516] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.979 [conn517] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.979 [conn516] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|85 } } cursorid:275896075711154 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.979 [conn517] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|85 } } cursorid:275897939306509 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.979 [conn517] ClientCursor::find(): cursor not found in map '275897939306509' (ok after a drop)
m31002| Fri Feb 22 11:56:56.979 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:56:56.979 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:56.979 [conn516] end connection 165.225.128.186:54521 (7 connections now open)
m31000| Fri Feb 22 11:56:56.979 [conn517] end connection 165.225.128.186:50109 (6 connections now open)
m31000| Fri Feb 22 11:56:56.979 [initandlisten] connection accepted from 165.225.128.186:54423 #518 (7 connections now open)
m31000| Fri Feb 22 11:56:56.979 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:56.979 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|85 } } cursorid:275845230291687 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:56.979 [initandlisten] connection accepted from 165.225.128.186:47115 #519 (8 connections now open)
m31002| Fri Feb 22 11:56:56.980 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.077 [conn1] going to kill op: op: 10454.0
m31000| Fri Feb 22 11:56:57.077 [conn1] going to kill op: op: 10455.0
m31000| Fri Feb 22 11:56:57.082 [conn518] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.082 [conn518] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|96 } } cursorid:276335623692310 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.082 [conn519] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.082 [conn519] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534216000|96 } } cursorid:276335044501039 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.082 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.082 [conn518] end connection 165.225.128.186:54423 (7 connections now open)
m31001| Fri Feb 22 11:56:57.082 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.083 [conn519] end connection 165.225.128.186:47115 (6 connections now open)
m31000| Fri Feb 22 11:56:57.083 [initandlisten] connection accepted from 165.225.128.186:47450 #520 (7 connections now open)
m31000| Fri Feb 22 11:56:57.092 [initandlisten] connection accepted from 165.225.128.186:65224 #521 (8 connections now open)
m31000| Fri Feb 22 11:56:57.178 [conn1] going to kill op: op: 10494.0
m31000| Fri Feb 22 11:56:57.178 [conn1] going to kill op: op: 10493.0
m31000| Fri Feb 22 11:56:57.184 [conn521] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.184 [conn521] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|7 } } cursorid:276772110404385 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.185 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.185 [conn521] end connection 165.225.128.186:65224 (7 connections now open)
m31000| Fri Feb 22 11:56:57.185 [initandlisten] connection accepted from 165.225.128.186:39062 #522 (8 connections now open)
m31000| Fri Feb 22 11:56:57.186 [conn520] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.186 [conn520] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|7 } } cursorid:276767705627412 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:95 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:57.186 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.186 [conn520] end connection 165.225.128.186:47450 (7 connections now open)
m31000| Fri Feb 22 11:56:57.187 [initandlisten] connection accepted from 165.225.128.186:38559 #523 (8 connections now open)
m31000| Fri Feb 22 11:56:57.278 [conn1] going to kill op: op: 10529.0
m31000| Fri Feb 22 11:56:57.279 [conn1] going to kill op: op: 10531.0
m31000| Fri Feb 22 11:56:57.288 [conn522] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.288 [conn522] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|17 } } cursorid:277163983686088 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.288 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.288 [conn522] end connection 165.225.128.186:39062 (7 connections now open)
m31000| Fri Feb 22 11:56:57.288 [initandlisten] connection accepted from 165.225.128.186:56266 #524 (8 connections now open)
m31000| Fri Feb 22 11:56:57.379 [conn1] going to kill op: op: 10562.0
m31000| Fri Feb 22 11:56:57.380 [conn1] going to kill op: op: 10563.0
m31000| Fri Feb 22 11:56:57.381 [conn524] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.381 [conn524] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|27 } } cursorid:277601224226281 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.381 [conn523] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.381 [conn523] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|17 } } cursorid:277168875268480 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.381 [conn524] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:56:57.381 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:56:57.381 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.381 [conn523] end connection 165.225.128.186:38559 (7 connections now open)
m31000| Fri Feb 22 11:56:57.381 [conn524] end connection 165.225.128.186:56266 (7 connections now open)
m31000| Fri Feb 22 11:56:57.381 [initandlisten] connection accepted from 165.225.128.186:36181 #525 (7 connections now open)
m31000| Fri Feb 22 11:56:57.381 [initandlisten] connection accepted from 165.225.128.186:33007 #526 (8 connections now open)
m31000| Fri Feb 22 11:56:57.480 [conn1] going to kill op: op: 10600.0
m31000| Fri Feb 22 11:56:57.481 [conn1] going to kill op: op: 10601.0
m31000| Fri Feb 22 11:56:57.484 [conn525] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.484 [conn525] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|36 } } cursorid:277997340385229 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.484 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.484 [conn526] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.484 [conn526] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|36 } } cursorid:277997200321330 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.484 [conn525] end connection 165.225.128.186:36181 (7 connections now open)
m31001| Fri Feb 22 11:56:57.484 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.484 [conn526] end connection 165.225.128.186:33007 (6 connections now open)
m31000| Fri Feb 22 11:56:57.484 [initandlisten] connection accepted from 165.225.128.186:59197 #527 (7 connections now open)
m31000| Fri Feb 22 11:56:57.485 [initandlisten] connection accepted from 165.225.128.186:37271 #528 (8 connections now open)
m31000| Fri Feb 22 11:56:57.581 [conn1] going to kill op: op: 10650.0
m31000| Fri Feb 22 11:56:57.582 [conn1] going to kill op: op: 10648.0
m31000| Fri Feb 22 11:56:57.582 [conn1] going to kill op: op: 10649.0
m31000| Fri Feb 22 11:56:57.587 [conn527] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.587 [conn527] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|46 } } cursorid:278435405679268 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.587 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.587 [conn527] end connection 165.225.128.186:59197 (7 connections now open)
m31000| Fri Feb 22 11:56:57.587 [conn528] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.587 [initandlisten] connection accepted from 165.225.128.186:55473 #529 (8 connections now open)
m31000| Fri Feb 22 11:56:57.587 [conn528] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|46 } } cursorid:278434701954517 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:57.588 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.588 [conn528] end connection 165.225.128.186:37271 (7 connections now open)
m31000| Fri Feb 22 11:56:57.588 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.588 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|46 } } cursorid:278383409116610 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.588 [initandlisten] connection accepted from 165.225.128.186:46119 #530 (8 connections now open)
m31001| Fri Feb 22 11:56:57.589 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.682 [conn1] going to kill op: op: 10690.0
m31000| Fri Feb 22 11:56:57.683 [conn1] going to kill op: op: 10691.0
m31000| Fri Feb 22 11:56:57.690 [conn529] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.690 [conn529] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|56 } } cursorid:278868723751907 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.690 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.690 [conn529] end connection 165.225.128.186:55473 (7 connections now open)
m31000| Fri Feb 22 11:56:57.690 [conn530] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.690 [conn530] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|56 } } cursorid:278873668004555 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.690 [initandlisten] connection accepted from 165.225.128.186:62392 #531 (8 connections now open)
m31000| Fri Feb 22 11:56:57.690 [conn530] ClientCursor::find(): cursor not found in map '278873668004555' (ok after a drop)
m31001| Fri Feb 22 11:56:57.690 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.690 [conn530] end connection 165.225.128.186:46119 (7 connections now open)
m31000| Fri Feb 22 11:56:57.691 [initandlisten] connection accepted from 165.225.128.186:44347 #532 (8 connections now open)
m31000| Fri Feb 22 11:56:57.783 [conn1] going to kill op: op: 10728.0
m31000| Fri Feb 22 11:56:57.792 [conn531] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.792 [conn531] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|67 } } cursorid:279306238020246 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.792 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.792 [conn531] end connection 165.225.128.186:62392 (7 connections now open)
m31000| Fri Feb 22 11:56:57.793 [initandlisten] connection accepted from 165.225.128.186:44531 #533 (8 connections now open)
m31000| Fri Feb 22 11:56:57.884 [conn1] going to kill op: op: 10760.0
m31000| Fri Feb 22 11:56:57.884 [conn1] going to kill op: op: 10759.0
m31000| Fri Feb 22 11:56:57.885 [conn533] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.885 [conn533] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|77 } } cursorid:279745374738177 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.885 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.885 [conn533] end connection 165.225.128.186:44531 (7 connections now open)
m31000| Fri Feb 22 11:56:57.885 [conn532] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.885 [conn532] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|67 } } cursorid:279311821234211 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.885 [initandlisten] connection accepted from 165.225.128.186:53385 #534 (8 connections now open)
m31001| Fri Feb 22 11:56:57.885 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.885 [conn532] end connection 165.225.128.186:44347 (7 connections now open)
m31000| Fri Feb 22 11:56:57.886 [initandlisten] connection accepted from 165.225.128.186:52308 #535 (8 connections now open)
m31000| Fri Feb 22 11:56:57.985 [conn1] going to kill op: op: 10800.0
m31000| Fri Feb 22 11:56:57.987 [conn1] going to kill op: op: 10797.0
m31000| Fri Feb 22 11:56:57.987 [conn534] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.987 [conn534] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|86 } } cursorid:280137082113422 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:57.987 [conn1] going to kill op: op: 10798.0
m31002| Fri Feb 22 11:56:57.987 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.987 [conn534] end connection 165.225.128.186:53385 (7 connections now open)
m31000| Fri Feb 22 11:56:57.988 [initandlisten] connection accepted from 165.225.128.186:57485 #536 (8 connections now open)
m31000| Fri Feb 22 11:56:57.988 [conn535] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.988 [conn535] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|86 } } cursorid:280140278710646 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:57.988 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:57.988 [conn535] end connection 165.225.128.186:52308 (7 connections now open)
m31000| Fri Feb 22 11:56:57.989 [initandlisten] connection accepted from 165.225.128.186:40567 #537 (8 connections now open)
m31000| Fri Feb 22 11:56:57.991 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:57.991 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|96 } } cursorid:280527043493523 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:57.991 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.088 [conn1] going to kill op: op: 10839.0
m31000| Fri Feb 22 11:56:58.088 [conn1] going to kill op: op: 10840.0
m31000| Fri Feb 22 11:56:58.090 [conn536] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.090 [conn536] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|96 } } cursorid:280574611812381 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:58.091 [conn536] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:58.091 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.091 [conn536] end connection 165.225.128.186:57485 (7 connections now open)
m31000| Fri Feb 22 11:56:58.091 [conn537] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.091 [conn537] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534217000|96 } } cursorid:280578011990526 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:58.091 [initandlisten] connection accepted from 165.225.128.186:51967 #538 (8 connections now open)
m31001| Fri Feb 22 11:56:58.091 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.091 [conn537] end connection 165.225.128.186:40567 (7 connections now open)
m31000| Fri Feb 22 11:56:58.092 [initandlisten] connection accepted from 165.225.128.186:49236 #539 (8 connections now open)
m31000| Fri Feb 22 11:56:58.189 [conn1] going to kill op: op: 10880.0
m31000| Fri Feb 22 11:56:58.189 [conn1] going to kill op: op: 10879.0
m31000| Fri Feb 22 11:56:58.193 [conn538] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.193 [conn538] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|8 } } cursorid:281015480098360 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:58.193 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.194 [conn538] end connection 165.225.128.186:51967 (7 connections now open)
m31000| Fri Feb 22 11:56:58.194 [initandlisten] connection accepted from 165.225.128.186:57747 #540 (8 connections now open)
m31000| Fri Feb 22 11:56:58.194 [conn539] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.194 [conn539] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|8 } } cursorid:281016987632048 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.194 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.195 [conn539] end connection 165.225.128.186:49236 (7 connections now open)
m31000| Fri Feb 22 11:56:58.195 [initandlisten] connection accepted from 165.225.128.186:35167 #541 (8 connections now open)
m31000| Fri Feb 22 11:56:58.258 [conn9] end connection 165.225.128.186:45840 (7 connections now open)
m31000| Fri Feb 22 11:56:58.258 [initandlisten] connection accepted from 165.225.128.186:34688 #542 (8 connections now open)
m31000| Fri Feb 22 11:56:58.289 [conn1] going to kill op: op: 10920.0
m31000| Fri Feb 22 11:56:58.290 [conn1] going to kill op: op: 10919.0
m31000| Fri Feb 22 11:56:58.296 [conn540] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.296 [conn540] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|18 } } cursorid:281449674649614 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:58.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.296 [conn540] end connection 165.225.128.186:57747 (7 connections now open)
m31000| Fri Feb 22 11:56:58.297 [initandlisten] connection accepted from 165.225.128.186:53741 #543 (8 connections now open)
m31000| Fri Feb 22 11:56:58.297 [conn541] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.297 [conn541] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|18 } } cursorid:281454642186126 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.297 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.297 [conn541] end connection 165.225.128.186:35167 (7 connections now open)
m31000| Fri Feb 22 11:56:58.298 [initandlisten] connection accepted from 165.225.128.186:54827 #544 (8 connections now open)
m31000| Fri Feb 22 11:56:58.390 [conn1] going to kill op: op: 10960.0
m31000| Fri Feb 22 11:56:58.391 [conn1] going to kill op: op: 10959.0
m31000| Fri Feb 22 11:56:58.400 [conn543] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.400 [conn543] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|28 } } cursorid:281849309618470 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:58.400 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.400 [conn543] end connection 165.225.128.186:53741 (7 connections now open)
m31000| Fri Feb 22 11:56:58.400 [initandlisten] connection accepted from 165.225.128.186:39003 #545 (8 connections now open)
m31000| Fri Feb 22 11:56:58.400 [conn544] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.400 [conn544] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|28 } } cursorid:281853065834420 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.401 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.401 [conn544] end connection 165.225.128.186:54827 (7 connections now open)
m31000| Fri Feb 22 11:56:58.401 [initandlisten] connection accepted from 165.225.128.186:43766 #546 (8 connections now open)
m31000| Fri Feb 22 11:56:58.491 [conn1] going to kill op: op: 10995.0
m31000| Fri Feb 22 11:56:58.492 [conn1] going to kill op: op: 10994.0
m31000| Fri Feb 22 11:56:58.492 [conn545] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.492 [conn545] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|39 } } cursorid:282288684449264 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:58.492 [conn545] ClientCursor::find(): cursor not found in map '282288684449264' (ok after a drop)
m31002| Fri Feb 22 11:56:58.492 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.492 [conn545] end connection 165.225.128.186:39003 (7 connections now open)
m31000| Fri Feb 22 11:56:58.493 [initandlisten] connection accepted from 165.225.128.186:54762 #547 (8 connections now open)
m31000| Fri Feb 22 11:56:58.493 [conn546] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.493 [conn546] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|39 } } cursorid:282291495994083 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.493 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.493 [conn546] end connection 165.225.128.186:43766 (7 connections now open)
m31000| Fri Feb 22 11:56:58.494 [initandlisten] connection accepted from 165.225.128.186:63641 #548 (8 connections now open)
m31000| Fri Feb 22 11:56:58.592 [conn1] going to kill op: op: 11035.0
m31000| Fri Feb 22 11:56:58.592 [conn1] going to kill op: op: 11033.0
m31000| Fri Feb 22 11:56:58.593 [conn1] going to kill op: op: 11032.0
m31000| Fri Feb 22 11:56:58.595 [conn547] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.595 [conn547] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|48 } } cursorid:282682875544952 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:58.595 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.595 [conn547] end connection 165.225.128.186:54762 (7 connections now open)
m31000| Fri Feb 22 11:56:58.596 [initandlisten] connection accepted from 165.225.128.186:58674 #549 (8 connections now open)
m31000| Fri Feb 22 11:56:58.596 [conn548] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.596 [conn548] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|48 } } cursorid:282687896667250 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.596 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.596 [conn548] end connection 165.225.128.186:63641 (7 connections now open)
m31000| Fri Feb 22 11:56:58.596 [initandlisten] connection accepted from 165.225.128.186:58758 #550 (8 connections now open)
m31000| Fri Feb 22 11:56:58.599 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.599 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|58 } } cursorid:283072774517814 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.599 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.693 [conn1] going to kill op: op: 11074.0
m31000| Fri Feb 22 11:56:58.693 [conn1] going to kill op: op: 11073.0
m31000| Fri Feb 22 11:56:58.698 [conn549] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.698 [conn549] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|58 } } cursorid:283121237081169 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:58.698 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.698 [conn549] end connection 165.225.128.186:58674 (7 connections now open)
m31000| Fri Feb 22 11:56:58.698 [initandlisten] connection accepted from 165.225.128.186:46836 #551 (8 connections now open)
m31000| Fri Feb 22 11:56:58.698 [conn550] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.698 [conn550] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|58 } } cursorid:283125404533156 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.699 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.699 [conn550] end connection 165.225.128.186:58758 (7 connections now open)
m31000| Fri Feb 22 11:56:58.699 [initandlisten] connection accepted from 165.225.128.186:50995 #552 (8 connections now open)
m31000| Fri Feb 22 11:56:58.794 [conn1] going to kill op: op: 11112.0
m31000| Fri Feb 22 11:56:58.794 [conn1] going to kill op: op: 11111.0
m31000| Fri Feb 22 11:56:58.800 [conn551] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.800 [conn551] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|68 } } cursorid:283560165394354 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:58.800 [conn551] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:56:58.800 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.801 [conn551] end connection 165.225.128.186:46836 (7 connections now open)
m31000| Fri Feb 22 11:56:58.801 [initandlisten] connection accepted from 165.225.128.186:50184 #553 (8 connections now open)
m31000| Fri Feb 22 11:56:58.802 [conn552] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.802 [conn552] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|68 } } cursorid:283564384155244 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.802 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.802 [conn552] end connection 165.225.128.186:50995 (7 connections now open)
m31000| Fri Feb 22 11:56:58.802 [initandlisten] connection accepted from 165.225.128.186:65353 #554 (8 connections now open)
m31000| Fri Feb 22 11:56:58.895 [conn1] going to kill op: op: 11150.0
m31000| Fri Feb 22 11:56:58.895 [conn1] going to kill op: op: 11149.0
m31000| Fri Feb 22 11:56:58.903 [conn553] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.903 [conn553] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|78 } } cursorid:283998069131366 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:58.903 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.903 [conn553] end connection 165.225.128.186:50184 (7 connections now open)
m31000| Fri Feb 22 11:56:58.903 [initandlisten] connection accepted from 165.225.128.186:33424 #555 (8 connections now open)
m31000| Fri Feb 22 11:56:58.904 [conn554] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.904 [conn554] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|78 } } cursorid:284001238224704 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.904 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.904 [conn554] end connection 165.225.128.186:65353 (7 connections now open)
m31000| Fri Feb 22 11:56:58.905 [initandlisten] connection accepted from 165.225.128.186:41310 #556 (8 connections now open)
m31000| Fri Feb 22 11:56:58.996 [conn1] going to kill op: op: 11187.0
m31000| Fri Feb 22 11:56:58.997 [conn556] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:58.997 [conn556] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|88 } } cursorid:284439029023804 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:58.997 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:58.997 [conn556] end connection 165.225.128.186:41310 (7 connections now open)
m31000| Fri Feb 22 11:56:58.998 [initandlisten] connection accepted from 165.225.128.186:57742 #557 (8 connections now open)
m31000| Fri Feb 22 11:56:59.096 [conn1] going to kill op: op: 11230.0
m31000| Fri Feb 22 11:56:59.097 [conn1] going to kill op: op: 11232.0
m31000| Fri Feb 22 11:56:59.097 [conn1] going to kill op: op: 11229.0
m31000| Fri Feb 22 11:56:59.098 [conn555] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.098 [conn555] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|88 } } cursorid:284435364891480 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:59.098 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.098 [conn555] end connection 165.225.128.186:33424 (7 connections now open)
m31000| Fri Feb 22 11:56:59.098 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.098 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|97 } } cursorid:284783008970255 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:59.100 [conn557] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.100 [conn557] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534218000|98 } } cursorid:284830850176694 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:59.100 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.100 [conn557] end connection 165.225.128.186:57742 (6 connections now open)
m31000| Fri Feb 22 11:56:59.100 [initandlisten] connection accepted from 165.225.128.186:42200 #558 (7 connections now open)
m31000| Fri Feb 22 11:56:59.102 [initandlisten] connection accepted from 165.225.128.186:51098 #559 (8 connections now open)
m31002| Fri Feb 22 11:56:59.103 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.198 [conn1] going to kill op: op: 11272.0
m31000| Fri Feb 22 11:56:59.198 [conn1] going to kill op: op: 11273.0
m31000| Fri Feb 22 11:56:59.203 [conn558] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.203 [conn558] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|9 } } cursorid:285263726071347 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:59.203 [conn558] ClientCursor::find(): cursor not found in map '285263726071347' (ok after a drop)
m31001| Fri Feb 22 11:56:59.203 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.203 [conn558] end connection 165.225.128.186:42200 (7 connections now open)
m31000| Fri Feb 22 11:56:59.203 [initandlisten] connection accepted from 165.225.128.186:49583 #560 (8 connections now open)
m31000| Fri Feb 22 11:56:59.205 [conn559] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.205 [conn559] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|8 } } cursorid:285268165622346 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:59.205 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.205 [conn559] end connection 165.225.128.186:51098 (7 connections now open)
m31000| Fri Feb 22 11:56:59.205 [initandlisten] connection accepted from 165.225.128.186:64179 #561 (8 connections now open)
m31000| Fri Feb 22 11:56:59.298 [conn1] going to kill op: op: 11310.0
m31000| Fri Feb 22 11:56:59.299 [conn1] going to kill op: op: 11311.0
m31000| Fri Feb 22 11:56:59.305 [conn560] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.305 [conn560] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|19 } } cursorid:285701679241908 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:59.306 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.306 [conn560] end connection 165.225.128.186:49583 (7 connections now open)
m31000| Fri Feb 22 11:56:59.306 [initandlisten] connection accepted from 165.225.128.186:54451 #562 (8 connections now open)
m31000| Fri Feb 22 11:56:59.307 [conn561] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.307 [conn561] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|19 } } cursorid:285707525439063 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:59.307 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.307 [conn561] end connection 165.225.128.186:64179 (7 connections now open)
m31000| Fri Feb 22 11:56:59.307 [initandlisten] connection accepted from 165.225.128.186:45204 #563 (8 connections now open)
m31000| Fri Feb 22 11:56:59.399 [conn1] going to kill op: op: 11348.0
m31000| Fri Feb 22 11:56:59.399 [conn1] going to kill op: op: 11346.0
m31000| Fri Feb 22 11:56:59.408 [conn562] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.408 [conn562] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|29 } } cursorid:286140469899816 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:59.408 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.408 [conn562] end connection 165.225.128.186:54451 (7 connections now open)
m31000| Fri Feb 22 11:56:59.408 [initandlisten] connection accepted from 165.225.128.186:54637 #564 (8 connections now open)
m31000| Fri Feb 22 11:56:59.500 [conn1] going to kill op: op: 11379.0
m31000| Fri Feb 22 11:56:59.500 [conn1] going to kill op: op: 11380.0
m31000| Fri Feb 22 11:56:59.501 [conn564] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.501 [conn564] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|39 } } cursorid:286579244455145 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:59.501 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.501 [conn564] end connection 165.225.128.186:54637 (7 connections now open)
m31000| Fri Feb 22 11:56:59.501 [initandlisten] connection accepted from 165.225.128.186:58172 #565 (8 connections now open)
m31000| Fri Feb 22 11:56:59.501 [conn563] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.501 [conn563] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|29 } } cursorid:286145733647055 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:59.501 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.502 [conn563] end connection 165.225.128.186:45204 (7 connections now open)
m31000| Fri Feb 22 11:56:59.502 [initandlisten] connection accepted from 165.225.128.186:64691 #566 (8 connections now open)
m31000| Fri Feb 22 11:56:59.601 [conn1] going to kill op: op: 11417.0
m31000| Fri Feb 22 11:56:59.601 [conn1] going to kill op: op: 11418.0
m31000| Fri Feb 22 11:56:59.604 [conn565] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.604 [conn565] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|48 } } cursorid:286968669451702 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:59.604 [conn565] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31000| Fri Feb 22 11:56:59.604 [conn566] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.604 [conn566] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|48 } } cursorid:286974664082488 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:59.604 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.604 [conn565] end connection 165.225.128.186:58172 (7 connections now open)
m31002| Fri Feb 22 11:56:59.604 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.604 [conn566] end connection 165.225.128.186:64691 (6 connections now open)
m31000| Fri Feb 22 11:56:59.604 [initandlisten] connection accepted from 165.225.128.186:39368 #567 (7 connections now open)
m31000| Fri Feb 22 11:56:59.604 [initandlisten] connection accepted from 165.225.128.186:55314 #568 (8 connections now open)
m31000| Fri Feb 22 11:56:59.701 [conn1] going to kill op: op: 11466.0
m31000| Fri Feb 22 11:56:59.702 [conn1] going to kill op: op: 11465.0
m31000| Fri Feb 22 11:56:59.706 [conn568] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.707 [conn568] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|58 } } cursorid:287412066643070 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:59.707 [conn567] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:56:59.707 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.707 [conn567] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|58 } } cursorid:287411400817394 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:56:59.707 [conn568] end connection 165.225.128.186:55314 (7 connections now open)
m31001| Fri Feb 22 11:56:59.707 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.707 [conn567] end connection 165.225.128.186:39368 (6 connections now open)
m31000| Fri Feb 22 11:56:59.707 [initandlisten] connection accepted from 165.225.128.186:42844 #569 (7 connections now open)
m31000| Fri Feb 22 11:56:59.707 [initandlisten] connection accepted from 165.225.128.186:35029 #570 (8 connections now open)
m31000| Fri Feb 22 11:56:59.802 [conn1] going to kill op: op: 11517.0
m31000| Fri Feb 22 11:56:59.802 [conn1] going to kill op: op: 11515.0
m31000| Fri Feb 22 11:56:59.803 [conn1] going to kill op: op: 11516.0
m31000| Fri Feb 22 11:56:59.809 [conn569] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.809 [conn569] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|68 } } cursorid:287850801187025 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:56:59.809 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.809 [conn569] end connection 165.225.128.186:42844 (7 connections now open)
m31000| Fri Feb 22 11:56:59.809 [conn570] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.810 [conn570] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|68 } } cursorid:287850724269164 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:56:59.810 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:56:59.810 [conn570] end connection 165.225.128.186:35029 (6 connections now open)
m31000| Fri Feb 22 11:56:59.810 [initandlisten] connection accepted from 165.225.128.186:35295 #571 (8 connections now open)
m31000| Fri Feb 22 11:56:59.810 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:56:59.810 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|58 } } cursorid:287360253663143 ntoreturn:0
keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:59.810 [initandlisten] connection accepted from 165.225.128.186:59674 #572 (8 connections now open) m31001| Fri Feb 22 11:56:59.811 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:59.903 [conn1] going to kill op: op: 11555.0 m31000| Fri Feb 22 11:56:59.904 [conn1] going to kill op: op: 11556.0 m31000| Fri Feb 22 11:56:59.912 [conn572] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:59.912 [conn572] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|79 } } cursorid:288287315076133 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:56:59.912 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:59.912 [conn572] end connection 165.225.128.186:59674 (7 connections now open) m31000| Fri Feb 22 11:56:59.912 [conn571] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:56:59.912 [conn571] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|79 } } cursorid:288288845762736 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:56:59.912 [initandlisten] connection accepted from 165.225.128.186:50773 #573 (8 connections now open) m31000| Fri Feb 22 11:56:59.912 [conn571] ClientCursor::find(): cursor not found in map '288288845762736' (ok after a drop) m31002| Fri Feb 22 11:56:59.912 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:56:59.913 [conn571] end connection 165.225.128.186:35295 (7 connections now open) m31000| Fri Feb 22 11:56:59.913 [initandlisten] connection accepted from 165.225.128.186:38787 #574 (8 connections now open) m31000| Fri Feb 22 11:57:00.004 [conn1] 
going to kill op: op: 11591.0 m31000| Fri Feb 22 11:57:00.005 [conn574] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.005 [conn574] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|89 } } cursorid:288725467712805 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.005 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.005 [conn574] end connection 165.225.128.186:38787 (7 connections now open) m31000| Fri Feb 22 11:57:00.006 [initandlisten] connection accepted from 165.225.128.186:40397 #575 (8 connections now open) m31000| Fri Feb 22 11:57:00.105 [conn1] going to kill op: op: 11625.0 m31000| Fri Feb 22 11:57:00.105 [conn1] going to kill op: op: 11626.0 m31000| Fri Feb 22 11:57:00.105 [conn573] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.105 [conn573] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|89 } } cursorid:288722304068301 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.105 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.105 [conn573] end connection 165.225.128.186:50773 (7 connections now open) m31000| Fri Feb 22 11:57:00.106 [initandlisten] connection accepted from 165.225.128.186:50849 #576 (8 connections now open) m31000| Fri Feb 22 11:57:00.108 [conn575] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.108 [conn575] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534219000|98 } } cursorid:289116259682079 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.108 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| 
Fri Feb 22 11:57:00.108 [conn575] end connection 165.225.128.186:40397 (7 connections now open) m31000| Fri Feb 22 11:57:00.108 [initandlisten] connection accepted from 165.225.128.186:41717 #577 (8 connections now open) m31000| Fri Feb 22 11:57:00.206 [conn1] going to kill op: op: 11677.0 m31000| Fri Feb 22 11:57:00.206 [conn1] going to kill op: op: 11675.0 m31000| Fri Feb 22 11:57:00.206 [conn1] going to kill op: op: 11676.0 m31000| Fri Feb 22 11:57:00.208 [conn576] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.208 [conn576] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|10 } } cursorid:289507737995149 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.208 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.208 [conn576] end connection 165.225.128.186:50849 (7 connections now open) m31000| Fri Feb 22 11:57:00.208 [initandlisten] connection accepted from 165.225.128.186:63763 #578 (8 connections now open) m31000| Fri Feb 22 11:57:00.210 [conn577] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.210 [conn577] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|10 } } cursorid:289512275750224 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.210 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.210 [conn577] end connection 165.225.128.186:41717 (7 connections now open) m31000| Fri Feb 22 11:57:00.210 [initandlisten] connection accepted from 165.225.128.186:62149 #579 (8 connections now open) m31000| Fri Feb 22 11:57:00.211 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.211 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534220000|10 } } cursorid:289503925518282 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.211 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.307 [conn1] going to kill op: op: 11716.0 m31000| Fri Feb 22 11:57:00.307 [conn1] going to kill op: op: 11717.0 m31000| Fri Feb 22 11:57:00.310 [conn578] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.310 [conn578] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|20 } } cursorid:289903368478611 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:00.311 [conn578] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:00.311 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.311 [conn578] end connection 165.225.128.186:63763 (7 connections now open) m31000| Fri Feb 22 11:57:00.311 [initandlisten] connection accepted from 165.225.128.186:54020 #580 (8 connections now open) m31000| Fri Feb 22 11:57:00.312 [conn579] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.312 [conn579] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|20 } } cursorid:289907225822725 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.312 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.312 [conn579] end connection 165.225.128.186:62149 (7 connections now open) m31000| Fri Feb 22 11:57:00.313 [initandlisten] connection accepted from 165.225.128.186:58413 #581 (8 connections now open) m31000| Fri Feb 22 11:57:00.407 [conn1] going to kill op: op: 11755.0 m31000| Fri Feb 22 11:57:00.408 [conn1] going to 
kill op: op: 11754.0 m31000| Fri Feb 22 11:57:00.413 [conn580] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.413 [conn580] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|30 } } cursorid:290340859741712 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.413 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.413 [conn580] end connection 165.225.128.186:54020 (7 connections now open) m31000| Fri Feb 22 11:57:00.414 [initandlisten] connection accepted from 165.225.128.186:58184 #582 (8 connections now open) m31000| Fri Feb 22 11:57:00.415 [conn581] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.415 [conn581] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|30 } } cursorid:290344260852833 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.415 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.415 [conn581] end connection 165.225.128.186:58413 (7 connections now open) m31000| Fri Feb 22 11:57:00.415 [initandlisten] connection accepted from 165.225.128.186:40968 #583 (8 connections now open) m31000| Fri Feb 22 11:57:00.508 [conn1] going to kill op: op: 11792.0 m31000| Fri Feb 22 11:57:00.509 [conn1] going to kill op: op: 11793.0 m31000| Fri Feb 22 11:57:00.516 [conn582] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.516 [conn582] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|40 } } cursorid:290779934868853 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.516 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:57:00.516 [conn582] end connection 165.225.128.186:58184 (7 connections now open) m31000| Fri Feb 22 11:57:00.517 [initandlisten] connection accepted from 165.225.128.186:37714 #584 (8 connections now open) m31000| Fri Feb 22 11:57:00.518 [conn583] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.518 [conn583] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|40 } } cursorid:290783375426921 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.518 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.518 [conn583] end connection 165.225.128.186:40968 (7 connections now open) m31000| Fri Feb 22 11:57:00.518 [initandlisten] connection accepted from 165.225.128.186:52692 #585 (8 connections now open) m31000| Fri Feb 22 11:57:00.609 [conn1] going to kill op: op: 11829.0 m31000| Fri Feb 22 11:57:00.609 [conn1] going to kill op: op: 11828.0 m31000| Fri Feb 22 11:57:00.610 [conn585] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.610 [conn585] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|50 } } cursorid:291221509971249 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.610 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.610 [conn585] end connection 165.225.128.186:52692 (7 connections now open) m31000| Fri Feb 22 11:57:00.610 [initandlisten] connection accepted from 165.225.128.186:59468 #586 (8 connections now open) m31000| Fri Feb 22 11:57:00.710 [conn1] going to kill op: op: 11864.0 m31000| Fri Feb 22 11:57:00.710 [conn1] going to kill op: op: 11863.0 m31000| Fri Feb 22 11:57:00.711 [conn584] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.711 [conn584] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|50 } } cursorid:291217590279664 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.711 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.711 [conn584] end connection 165.225.128.186:37714 (7 connections now open) m31000| Fri Feb 22 11:57:00.711 [initandlisten] connection accepted from 165.225.128.186:60965 #587 (8 connections now open) m31000| Fri Feb 22 11:57:00.712 [conn586] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.713 [conn586] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|60 } } cursorid:291612188052583 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:00.713 [conn586] ClientCursor::find(): cursor not found in map '291612188052583' (ok after a drop) m31002| Fri Feb 22 11:57:00.713 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.713 [conn586] end connection 165.225.128.186:59468 (7 connections now open) m31000| Fri Feb 22 11:57:00.713 [initandlisten] connection accepted from 165.225.128.186:38422 #588 (8 connections now open) m31000| Fri Feb 22 11:57:00.811 [conn1] going to kill op: op: 11901.0 m31000| Fri Feb 22 11:57:00.811 [conn1] going to kill op: op: 11902.0 m31000| Fri Feb 22 11:57:00.813 [conn587] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.813 [conn587] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|70 } } cursorid:292045271360809 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.814 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.814 [conn587] end connection 
165.225.128.186:60965 (7 connections now open) m31000| Fri Feb 22 11:57:00.814 [initandlisten] connection accepted from 165.225.128.186:47825 #589 (8 connections now open) m31000| Fri Feb 22 11:57:00.815 [conn588] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.815 [conn588] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|70 } } cursorid:292051304511682 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.815 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.815 [conn588] end connection 165.225.128.186:38422 (7 connections now open) m31000| Fri Feb 22 11:57:00.816 [initandlisten] connection accepted from 165.225.128.186:61512 #590 (8 connections now open) m31000| Fri Feb 22 11:57:00.911 [conn1] going to kill op: op: 11948.0 m31000| Fri Feb 22 11:57:00.912 [conn1] going to kill op: op: 11951.0 m31000| Fri Feb 22 11:57:00.912 [conn1] going to kill op: op: 11950.0 m31000| Fri Feb 22 11:57:00.913 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.913 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|80 } } cursorid:292437090598911 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.913 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.916 [conn589] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.916 [conn589] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|80 } } cursorid:292483257276688 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:00.916 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.916 
[conn589] end connection 165.225.128.186:47825 (7 connections now open) m31000| Fri Feb 22 11:57:00.916 [initandlisten] connection accepted from 165.225.128.186:39292 #591 (8 connections now open) m31000| Fri Feb 22 11:57:00.918 [conn590] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:00.918 [conn590] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|80 } } cursorid:292489103561990 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:00.918 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:00.918 [conn590] end connection 165.225.128.186:61512 (7 connections now open) m31000| Fri Feb 22 11:57:00.918 [initandlisten] connection accepted from 165.225.128.186:48158 #592 (8 connections now open) m31000| Fri Feb 22 11:57:01.013 [conn1] going to kill op: op: 11990.0 m31000| Fri Feb 22 11:57:01.013 [conn1] going to kill op: op: 11989.0 m31000| Fri Feb 22 11:57:01.019 [conn591] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.019 [conn591] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|90 } } cursorid:292921539975853 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:01.019 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.019 [conn591] end connection 165.225.128.186:39292 (7 connections now open) m31000| Fri Feb 22 11:57:01.019 [initandlisten] connection accepted from 165.225.128.186:56477 #593 (8 connections now open) m31000| Fri Feb 22 11:57:01.020 [conn592] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.020 [conn592] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534220000|90 } } cursorid:292926780588795 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.020 [conn592] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:01.020 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.020 [conn592] end connection 165.225.128.186:48158 (7 connections now open) m31000| Fri Feb 22 11:57:01.021 [initandlisten] connection accepted from 165.225.128.186:35572 #594 (8 connections now open) m31000| Fri Feb 22 11:57:01.114 [conn1] going to kill op: op: 12029.0 m31000| Fri Feb 22 11:57:01.114 [conn1] going to kill op: op: 12030.0 m31000| Fri Feb 22 11:57:01.122 [conn593] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.122 [conn593] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|1 } } cursorid:293359764233214 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:104 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:01.122 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.122 [conn593] end connection 165.225.128.186:56477 (7 connections now open) m31000| Fri Feb 22 11:57:01.123 [initandlisten] connection accepted from 165.225.128.186:50899 #595 (8 connections now open) m31000| Fri Feb 22 11:57:01.123 [conn594] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.123 [conn594] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|1 } } cursorid:293363585665460 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:01.123 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.123 [conn594] end connection 165.225.128.186:35572 (7 connections now open) m31000| Fri Feb 22 11:57:01.124 [initandlisten] connection accepted from 165.225.128.186:44434 #596 (8 connections now 
open) m31000| Fri Feb 22 11:57:01.215 [conn1] going to kill op: op: 12065.0 m31000| Fri Feb 22 11:57:01.215 [conn1] going to kill op: op: 12064.0 m31000| Fri Feb 22 11:57:01.215 [conn595] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.215 [conn595] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|11 } } cursorid:293797276579507 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:01.215 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.215 [conn595] end connection 165.225.128.186:50899 (7 connections now open) m31000| Fri Feb 22 11:57:01.216 [conn596] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.216 [conn596] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|11 } } cursorid:293801650737787 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.216 [initandlisten] connection accepted from 165.225.128.186:39606 #597 (8 connections now open) m31002| Fri Feb 22 11:57:01.216 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.216 [conn596] end connection 165.225.128.186:44434 (7 connections now open) m31000| Fri Feb 22 11:57:01.216 [initandlisten] connection accepted from 165.225.128.186:37465 #598 (8 connections now open) m31000| Fri Feb 22 11:57:01.315 [conn1] going to kill op: op: 12117.0 m31000| Fri Feb 22 11:57:01.316 [conn1] going to kill op: op: 12114.0 m31000| Fri Feb 22 11:57:01.316 [conn1] going to kill op: op: 12115.0 m31000| Fri Feb 22 11:57:01.319 [conn598] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.319 [conn597] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.319 [conn598] getmore local.oplog.rs query: { ts: { $gte: 
Timestamp 1361534221000|20 } } cursorid:294198472014939 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.319 [conn597] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|20 } } cursorid:294193739936330 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:01.319 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:01.319 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.319 [conn598] end connection 165.225.128.186:37465 (7 connections now open) m31000| Fri Feb 22 11:57:01.319 [conn597] end connection 165.225.128.186:39606 (7 connections now open) m31000| Fri Feb 22 11:57:01.319 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.319 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|20 } } cursorid:294145564417512 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.319 [initandlisten] connection accepted from 165.225.128.186:56436 #599 (7 connections now open) m31000| Fri Feb 22 11:57:01.320 [initandlisten] connection accepted from 165.225.128.186:57600 #600 (8 connections now open) m31002| Fri Feb 22 11:57:01.321 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.417 [conn1] going to kill op: op: 12156.0 m31000| Fri Feb 22 11:57:01.417 [conn1] going to kill op: op: 12155.0 m31000| Fri Feb 22 11:57:01.422 [conn599] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.422 [conn600] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.422 [conn599] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|31 } } 
cursorid:294636856623567 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.422 [conn600] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|31 } } cursorid:294635217277207 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.422 [conn599] ClientCursor::find(): cursor not found in map '294636856623567' (ok after a drop) m31002| Fri Feb 22 11:57:01.423 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:01.423 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:01.423 [conn599] end connection 165.225.128.186:56436 (7 connections now open) m31000| Fri Feb 22 11:57:01.423 [conn600] end connection 165.225.128.186:57600 (7 connections now open) m31000| Fri Feb 22 11:57:01.423 [initandlisten] connection accepted from 165.225.128.186:61209 #601 (7 connections now open) m31000| Fri Feb 22 11:57:01.423 [initandlisten] connection accepted from 165.225.128.186:62696 #602 (8 connections now open) m31000| Fri Feb 22 11:57:01.518 [conn1] going to kill op: op: 12193.0 m31000| Fri Feb 22 11:57:01.518 [conn1] going to kill op: op: 12194.0 m31000| Fri Feb 22 11:57:01.526 [conn601] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.526 [conn601] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|41 } } cursorid:295073473318908 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:01.526 [conn602] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:01.526 [conn602] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|41 } } cursorid:295074416242179 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 
locks(micros) r:81 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:01.526 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.526 [conn601] end connection 165.225.128.186:61209 (7 connections now open)
m31001| Fri Feb 22 11:57:01.526 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.526 [conn602] end connection 165.225.128.186:62696 (6 connections now open)
m31000| Fri Feb 22 11:57:01.527 [initandlisten] connection accepted from 165.225.128.186:47975 #603 (7 connections now open)
m31000| Fri Feb 22 11:57:01.527 [initandlisten] connection accepted from 165.225.128.186:48137 #604 (8 connections now open)
m31000| Fri Feb 22 11:57:01.619 [conn1] going to kill op: op: 12228.0
m31000| Fri Feb 22 11:57:01.619 [conn1] going to kill op: op: 12229.0
m31000| Fri Feb 22 11:57:01.619 [conn603] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.619 [conn603] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|51 } } cursorid:295511583649458 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:01.619 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.619 [conn604] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.619 [conn604] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|51 } } cursorid:295512173938553 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:01.619 [conn603] end connection 165.225.128.186:47975 (7 connections now open)
m31001| Fri Feb 22 11:57:01.620 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.620 [conn604] end connection 165.225.128.186:48137 (6 connections now open)
m31000| Fri Feb 22 11:57:01.620 [initandlisten] connection accepted from 165.225.128.186:48702 #605 (7 connections now open)
m31000| Fri Feb 22 11:57:01.620 [initandlisten] connection accepted from 165.225.128.186:38193 #606 (8 connections now open)
m31000| Fri Feb 22 11:57:01.720 [conn1] going to kill op: op: 12266.0
m31000| Fri Feb 22 11:57:01.720 [conn1] going to kill op: op: 12267.0
m31000| Fri Feb 22 11:57:01.722 [conn605] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.723 [conn605] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|60 } } cursorid:295906853404874 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:110 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:01.723 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.723 [conn606] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.723 [conn606] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|60 } } cursorid:295907571656892 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:01.723 [conn605] end connection 165.225.128.186:48702 (7 connections now open)
m31000| Fri Feb 22 11:57:01.723 [conn606] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:01.723 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.723 [conn606] end connection 165.225.128.186:38193 (6 connections now open)
m31000| Fri Feb 22 11:57:01.723 [initandlisten] connection accepted from 165.225.128.186:54351 #607 (7 connections now open)
m31000| Fri Feb 22 11:57:01.726 [initandlisten] connection accepted from 165.225.128.186:36882 #608 (8 connections now open)
m31000| Fri Feb 22 11:57:01.821 [conn1] going to kill op: op: 12304.0
m31000| Fri Feb 22 11:57:01.821 [conn1] going to kill op: op: 12306.0
m31000| Fri Feb 22 11:57:01.826 [conn607] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.826 [conn607] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|70 } } cursorid:296341959751125 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:01.826 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.826 [conn607] end connection 165.225.128.186:54351 (7 connections now open)
m31000| Fri Feb 22 11:57:01.826 [initandlisten] connection accepted from 165.225.128.186:43325 #609 (8 connections now open)
m31000| Fri Feb 22 11:57:01.828 [conn608] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.828 [conn608] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|70 } } cursorid:296345815693422 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:01.828 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.828 [conn608] end connection 165.225.128.186:36882 (7 connections now open)
m31000| Fri Feb 22 11:57:01.828 [initandlisten] connection accepted from 165.225.128.186:41271 #610 (8 connections now open)
m31002| Fri Feb 22 11:57:01.834 [conn5] end connection 165.225.128.186:36015 (2 connections now open)
m31002| Fri Feb 22 11:57:01.834 [initandlisten] connection accepted from 165.225.128.186:63700 #7 (3 connections now open)
m31000| Fri Feb 22 11:57:01.921 [conn1] going to kill op: op: 12344.0
m31000| Fri Feb 22 11:57:01.922 [conn1] going to kill op: op: 12346.0
m31000| Fri Feb 22 11:57:01.922 [conn1] going to kill op: op: 12347.0
m31000| Fri Feb 22 11:57:01.923 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.923 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|90 } } cursorid:297127067381462 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:01.924 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.928 [conn609] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.928 [conn609] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|80 } } cursorid:296778112025105 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:01.929 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.929 [conn609] end connection 165.225.128.186:43325 (7 connections now open)
m31000| Fri Feb 22 11:57:01.929 [initandlisten] connection accepted from 165.225.128.186:46790 #611 (8 connections now open)
m31000| Fri Feb 22 11:57:01.930 [conn610] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:01.930 [conn610] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|81 } } cursorid:296784305883111 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:01.930 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:01.930 [conn610] end connection 165.225.128.186:41271 (7 connections now open)
m31000| Fri Feb 22 11:57:01.930 [initandlisten] connection accepted from 165.225.128.186:64184 #612 (8 connections now open)
m31000| Fri Feb 22 11:57:02.023 [conn1] going to kill op: op: 12383.0
m31000| Fri Feb 22 11:57:02.023 [conn1] going to kill op: op: 12385.0
m31000| Fri Feb 22 11:57:02.031 [conn611] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.031 [conn611] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|91 } } cursorid:297216314375262 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.031 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.031 [conn611] end connection 165.225.128.186:46790 (7 connections now open)
m31000| Fri Feb 22 11:57:02.032 [initandlisten] connection accepted from 165.225.128.186:40191 #613 (8 connections now open)
m31002| Fri Feb 22 11:57:02.049 [conn6] end connection 165.225.128.186:53085 (2 connections now open)
m31002| Fri Feb 22 11:57:02.049 [initandlisten] connection accepted from 165.225.128.186:55951 #8 (3 connections now open)
m31000| Fri Feb 22 11:57:02.123 [conn1] going to kill op: op: 12420.0
m31000| Fri Feb 22 11:57:02.124 [conn1] going to kill op: op: 12419.0
m31000| Fri Feb 22 11:57:02.124 [conn613] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.124 [conn613] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|2 } } cursorid:297656169914469 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.124 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:02.124 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.124 [conn612] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.124 [conn612] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534221000|91 } } cursorid:297220857978045 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.124 [conn613] end connection 165.225.128.186:40191 (7 connections now open)
m31000| Fri Feb 22 11:57:02.124 [conn612] ClientCursor::find(): cursor not found in map '297220857978045' (ok after a drop)
m31000| Fri Feb 22 11:57:02.124 [conn612] end connection 165.225.128.186:64184 (6 connections now open)
m31000| Fri Feb 22 11:57:02.124 [initandlisten] connection accepted from 165.225.128.186:50269 #614 (7 connections now open)
m31000| Fri Feb 22 11:57:02.125 [initandlisten] connection accepted from 165.225.128.186:34091 #615 (8 connections now open)
m31000| Fri Feb 22 11:57:02.224 [conn1] going to kill op: op: 12457.0
m31000| Fri Feb 22 11:57:02.224 [conn1] going to kill op: op: 12458.0
m31000| Fri Feb 22 11:57:02.227 [conn614] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.227 [conn614] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|11 } } cursorid:298051165634336 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.227 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.227 [conn615] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.227 [conn615] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|11 } } cursorid:298051281116909 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.227 [conn614] end connection 165.225.128.186:50269 (7 connections now open)
m31001| Fri Feb 22 11:57:02.227 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.227 [conn615] end connection 165.225.128.186:34091 (6 connections now open)
m31000| Fri Feb 22 11:57:02.227 [initandlisten] connection accepted from 165.225.128.186:62311 #616 (7 connections now open)
m31000| Fri Feb 22 11:57:02.227 [initandlisten] connection accepted from 165.225.128.186:53017 #617 (8 connections now open)
m31000| Fri Feb 22 11:57:02.325 [conn1] going to kill op: op: 12499.0
m31000| Fri Feb 22 11:57:02.325 [conn1] going to kill op: op: 12496.0
m31000| Fri Feb 22 11:57:02.325 [conn1] going to kill op: op: 12497.0
m31000| Fri Feb 22 11:57:02.329 [conn616] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.329 [conn617] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.330 [conn616] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|21 } } cursorid:298488202176201 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.330 [conn617] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|21 } } cursorid:298487515200168 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:02.330 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:02.330 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.330 [conn617] end connection 165.225.128.186:53017 (7 connections now open)
m31000| Fri Feb 22 11:57:02.330 [conn616] end connection 165.225.128.186:62311 (7 connections now open)
m31000| Fri Feb 22 11:57:02.330 [initandlisten] connection accepted from 165.225.128.186:61314 #618 (7 connections now open)
m31000| Fri Feb 22 11:57:02.330 [initandlisten] connection accepted from 165.225.128.186:54544 #619 (8 connections now open)
m31000| Fri Feb 22 11:57:02.331 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.331 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|31 } } cursorid:298874793123194 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.331 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.426 [conn1] going to kill op: op: 12537.0
m31000| Fri Feb 22 11:57:02.426 [conn1] going to kill op: op: 12538.0
m31000| Fri Feb 22 11:57:02.433 [conn618] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.433 [conn619] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.433 [conn618] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|31 } } cursorid:298927545017478 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.433 [conn619] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|31 } } cursorid:298925642110501 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.433 [conn619] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:02.433 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:02.433 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.433 [conn618] end connection 165.225.128.186:61314 (7 connections now open)
m31000| Fri Feb 22 11:57:02.433 [conn619] end connection 165.225.128.186:54544 (7 connections now open)
m31000| Fri Feb 22 11:57:02.433 [initandlisten] connection accepted from 165.225.128.186:53121 #620 (7 connections now open)
m31000| Fri Feb 22 11:57:02.433 [initandlisten] connection accepted from 165.225.128.186:60491 #621 (8 connections now open)
m31000| Fri Feb 22 11:57:02.527 [conn1] going to kill op: op: 12575.0
m31000| Fri Feb 22 11:57:02.527 [conn1] going to kill op: op: 12576.0
m31000| Fri Feb 22 11:57:02.536 [conn620] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.536 [conn621] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.536 [conn620] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|41 } } cursorid:299365373265326 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.536 [conn621] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|41 } } cursorid:299364498515406 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:02.536 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:02.536 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.536 [conn621] end connection 165.225.128.186:60491 (7 connections now open)
m31000| Fri Feb 22 11:57:02.536 [conn620] end connection 165.225.128.186:53121 (7 connections now open)
m31000| Fri Feb 22 11:57:02.536 [initandlisten] connection accepted from 165.225.128.186:37684 #622 (7 connections now open)
m31000| Fri Feb 22 11:57:02.536 [initandlisten] connection accepted from 165.225.128.186:64923 #623 (8 connections now open)
m31000| Fri Feb 22 11:57:02.627 [conn1] going to kill op: op: 12614.0
m31000| Fri Feb 22 11:57:02.628 [conn1] going to kill op: op: 12613.0
m31000| Fri Feb 22 11:57:02.628 [conn622] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.628 [conn622] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|51 } } cursorid:299803676576398 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.628 [conn623] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.628 [conn623] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|51 } } cursorid:299803552870494 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.628 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:02.628 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.629 [conn622] end connection 165.225.128.186:37684 (7 connections now open)
m31000| Fri Feb 22 11:57:02.629 [conn623] end connection 165.225.128.186:64923 (6 connections now open)
m31000| Fri Feb 22 11:57:02.629 [initandlisten] connection accepted from 165.225.128.186:33979 #624 (7 connections now open)
m31000| Fri Feb 22 11:57:02.629 [initandlisten] connection accepted from 165.225.128.186:56687 #625 (8 connections now open)
m31000| Fri Feb 22 11:57:02.728 [conn1] going to kill op: op: 12652.0
m31000| Fri Feb 22 11:57:02.728 [conn1] going to kill op: op: 12651.0
m31000| Fri Feb 22 11:57:02.731 [conn625] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.731 [conn625] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|61 } } cursorid:300197637913973 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.731 [conn624] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.731 [conn624] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|61 } } cursorid:300198213280866 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:02.731 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.731 [conn625] end connection 165.225.128.186:56687 (7 connections now open)
m31002| Fri Feb 22 11:57:02.731 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.731 [conn624] end connection 165.225.128.186:33979 (6 connections now open)
m31000| Fri Feb 22 11:57:02.731 [initandlisten] connection accepted from 165.225.128.186:46804 #626 (7 connections now open)
m31000| Fri Feb 22 11:57:02.732 [initandlisten] connection accepted from 165.225.128.186:55960 #627 (8 connections now open)
m31000| Fri Feb 22 11:57:02.829 [conn1] going to kill op: op: 12689.0
m31000| Fri Feb 22 11:57:02.829 [conn1] going to kill op: op: 12690.0
m31000| Fri Feb 22 11:57:02.834 [conn627] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.834 [conn627] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|71 } } cursorid:300636663892011 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:02.834 [conn626] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.834 [conn626] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|71 } } cursorid:300636249108193 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.834 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:02.834 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.834 [conn627] end connection 165.225.128.186:55960 (7 connections now open)
m31000| Fri Feb 22 11:57:02.834 [conn626] ClientCursor::find(): cursor not found in map '300636249108193' (ok after a drop)
m31000| Fri Feb 22 11:57:02.834 [conn626] end connection 165.225.128.186:46804 (6 connections now open)
m31000| Fri Feb 22 11:57:02.834 [initandlisten] connection accepted from 165.225.128.186:64883 #628 (7 connections now open)
m31000| Fri Feb 22 11:57:02.834 [initandlisten] connection accepted from 165.225.128.186:64326 #629 (8 connections now open)
m31000| Fri Feb 22 11:57:02.930 [conn1] going to kill op: op: 12730.0
m31000| Fri Feb 22 11:57:02.930 [conn1] going to kill op: op: 12728.0
m31000| Fri Feb 22 11:57:02.930 [conn1] going to kill op: op: 12729.0
m31000| Fri Feb 22 11:57:02.936 [conn628] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.936 [conn628] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|81 } } cursorid:301074090840160 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:02.937 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.937 [conn629] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.937 [conn628] end connection 165.225.128.186:64883 (7 connections now open)
m31000| Fri Feb 22 11:57:02.937 [conn629] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|81 } } cursorid:301073504229588 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:02.937 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:02.937 [conn629] end connection 165.225.128.186:64326 (6 connections now open)
m31000| Fri Feb 22 11:57:02.937 [initandlisten] connection accepted from 165.225.128.186:43601 #630 (8 connections now open)
m31000| Fri Feb 22 11:57:02.937 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:02.937 [initandlisten] connection accepted from 165.225.128.186:56759 #631 (8 connections now open)
m31000| Fri Feb 22 11:57:02.937 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|90 } } cursorid:301416693523951 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:02.938 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.031 [conn1] going to kill op: op: 12768.0
m31000| Fri Feb 22 11:57:03.031 [conn1] going to kill op: op: 12769.0
m31000| Fri Feb 22 11:57:03.040 [conn630] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.040 [conn630] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|91 } } cursorid:301512894548218 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.040 [conn631] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.040 [conn631] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534222000|91 } } cursorid:301511105799865 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:03.040 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.040 [conn630] end connection 165.225.128.186:43601 (7 connections now open)
m31001| Fri Feb 22 11:57:03.040 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.040 [conn631] end connection 165.225.128.186:56759 (6 connections now open)
m31000| Fri Feb 22 11:57:03.040 [initandlisten] connection accepted from 165.225.128.186:54172 #632 (7 connections now open)
m31000| Fri Feb 22 11:57:03.040 [initandlisten] connection accepted from 165.225.128.186:48815 #633 (8 connections now open)
m31000| Fri Feb 22 11:57:03.131 [conn1] going to kill op: op: 12806.0
m31000| Fri Feb 22 11:57:03.132 [conn1] going to kill op: op: 12805.0
m31000| Fri Feb 22 11:57:03.133 [conn632] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.133 [conn632] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|3 } } cursorid:301949809391366 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.133 [conn633] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.133 [conn633] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|3 } } cursorid:301950837990240 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.133 [conn633] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:03.133 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:03.133 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.133 [conn632] end connection 165.225.128.186:54172 (7 connections now open)
m31000| Fri Feb 22 11:57:03.133 [conn633] end connection 165.225.128.186:48815 (7 connections now open)
m31000| Fri Feb 22 11:57:03.133 [initandlisten] connection accepted from 165.225.128.186:59517 #634 (7 connections now open)
m31000| Fri Feb 22 11:57:03.133 [initandlisten] connection accepted from 165.225.128.186:63207 #635 (8 connections now open)
m31000| Fri Feb 22 11:57:03.232 [conn1] going to kill op: op: 12844.0
m31000| Fri Feb 22 11:57:03.232 [conn1] going to kill op: op: 12843.0
m31000| Fri Feb 22 11:57:03.235 [conn634] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.235 [conn634] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|12 } } cursorid:302345166806324 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:03.236 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.236 [conn634] end connection 165.225.128.186:59517 (7 connections now open)
m31000| Fri Feb 22 11:57:03.236 [conn635] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.236 [conn635] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|12 } } cursorid:302345224402227 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:03.236 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.236 [conn635] end connection 165.225.128.186:63207 (6 connections now open)
m31000| Fri Feb 22 11:57:03.236 [initandlisten] connection accepted from 165.225.128.186:65105 #636 (8 connections now open)
m31000| Fri Feb 22 11:57:03.236 [initandlisten] connection accepted from 165.225.128.186:61997 #637 (8 connections now open)
m31000| Fri Feb 22 11:57:03.333 [conn1] going to kill op: op: 12886.0
m31000| Fri Feb 22 11:57:03.333 [conn1] going to kill op: op: 12883.0
m31000| Fri Feb 22 11:57:03.333 [conn1] going to kill op: op: 12884.0
m31000| Fri Feb 22 11:57:03.339 [conn637] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.339 [conn636] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.339 [conn637] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|22 } } cursorid:302783666365669 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.339 [conn636] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|22 } } cursorid:302783639060043 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:03.339 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:03.339 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.339 [conn636] end connection 165.225.128.186:65105 (7 connections now open)
m31000| Fri Feb 22 11:57:03.339 [conn637] end connection 165.225.128.186:61997 (7 connections now open)
m31000| Fri Feb 22 11:57:03.339 [initandlisten] connection accepted from 165.225.128.186:37676 #638 (7 connections now open)
m31000| Fri Feb 22 11:57:03.339 [initandlisten] connection accepted from 165.225.128.186:33612 #639 (8 connections now open)
m31000| Fri Feb 22 11:57:03.342 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.342 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|33 } } cursorid:303169310616067 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:03.342 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.434 [conn1] going to kill op: op: 12925.0
m31000| Fri Feb 22 11:57:03.434 [conn1] going to kill op: op: 12924.0
m31000| Fri Feb 22 11:57:03.442 [conn638] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.442 [conn639] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.442 [conn638] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|33 } } cursorid:303221426761600 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.442 [conn639] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|33 } } cursorid:303221851937194 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:03.442 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:03.442 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.442 [conn639] end connection 165.225.128.186:33612 (7 connections now open)
m31000| Fri Feb 22 11:57:03.442 [conn638] end connection 165.225.128.186:37676 (7 connections now open)
m31000| Fri Feb 22 11:57:03.442 [initandlisten] connection accepted from 165.225.128.186:53210 #640 (7 connections now open)
m31000| Fri Feb 22 11:57:03.442 [initandlisten] connection accepted from 165.225.128.186:44059 #641 (8 connections now open)
m31000| Fri Feb 22 11:57:03.535 [conn1] going to kill op: op: 12962.0
m31000| Fri Feb 22 11:57:03.535 [conn1] going to kill op: op: 12960.0
m31000| Fri Feb 22 11:57:03.544 [conn640] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.544 [conn640] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|43 } } cursorid:303659524503250 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.544 [conn640] ClientCursor::find(): cursor not found in map '303659524503250' (ok after a drop)
m31001| Fri Feb 22 11:57:03.544 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.544 [conn640] end connection 165.225.128.186:53210 (7 connections now open)
m31000| Fri Feb 22 11:57:03.544 [initandlisten] connection accepted from 165.225.128.186:37208 #642 (8 connections now open)
m31000| Fri Feb 22 11:57:03.636 [conn1] going to kill op: op: 12993.0
m31000| Fri Feb 22 11:57:03.636 [conn1] going to kill op: op: 12994.0
m31000| Fri Feb 22 11:57:03.637 [conn642] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.637 [conn642] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|53 } } cursorid:304092717667529 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.637 [conn641] { $err: "operation was interrupted", code: 11601 }
m31001| Fri Feb 22 11:57:03.637 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.637 [conn641] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|43 } } cursorid:303660157023343 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.637 [conn642] end connection 165.225.128.186:37208 (7 connections now open)
m31002| Fri Feb 22 11:57:03.637 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.637 [conn641] end connection 165.225.128.186:44059 (6 connections now open)
m31000| Fri Feb 22 11:57:03.637 [initandlisten] connection accepted from 165.225.128.186:56260 #643 (7 connections now open)
m31000| Fri Feb 22 11:57:03.637 [initandlisten] connection accepted from 165.225.128.186:57089 #644 (8 connections now open)
m31000| Fri Feb 22 11:57:03.736 [conn1] going to kill op: op: 13032.0
m31000| Fri Feb 22 11:57:03.737 [conn1] going to kill op: op: 13031.0
m31000| Fri Feb 22 11:57:03.740 [conn644] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.740 [conn644] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|62 } } cursorid:304488219305440 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.740 [conn643] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:57:03.740 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.740 [conn643] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|62 } } cursorid:304488988783200 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.740 [conn644] end connection 165.225.128.186:57089 (7 connections now open)
m31001| Fri Feb 22 11:57:03.740 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.740 [conn643] end connection 165.225.128.186:56260 (6 connections now open)
m31000| Fri Feb 22 11:57:03.740 [initandlisten] connection accepted from 165.225.128.186:59990 #645 (7 connections now open)
m31000| Fri Feb 22 11:57:03.740 [initandlisten] connection accepted from 165.225.128.186:48592 #646 (8 connections now open)
m31000| Fri Feb 22 11:57:03.837 [conn1] going to kill op: op: 13069.0
m31000| Fri Feb 22 11:57:03.837 [conn1] going to kill op: op: 13070.0
m31000| Fri Feb 22 11:57:03.843 [conn645] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.843 [conn645] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|72 } } cursorid:304925618039689 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.843 [conn646] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.843 [conn646] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|72 } } cursorid:304925884632408 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:03.843 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:03.843 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.843 [conn645] end connection 165.225.128.186:59990 (7 connections now open)
m31000| Fri Feb 22 11:57:03.843 [conn646] end connection 165.225.128.186:48592 (6 connections now open)
m31000| Fri Feb 22 11:57:03.843 [initandlisten] connection accepted from 165.225.128.186:47735 #647 (7 connections now open)
m31000| Fri Feb 22 11:57:03.843 [initandlisten] connection accepted from 165.225.128.186:57732 #648 (8 connections now open)
m31000| Fri Feb 22 11:57:03.938 [conn1] going to kill op: op: 13107.0
m31000| Fri Feb 22 11:57:03.939 [conn1] going to kill op: op: 13108.0
m31000| Fri Feb 22 11:57:03.945 [conn647] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.945 [conn647] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|82 } } cursorid:305365098043825 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:03.945 [conn647] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:03.946 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.946 [conn647] end connection 165.225.128.186:47735 (7 connections now open)
m31000| Fri Feb 22 11:57:03.946 [conn648] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:03.946 [conn648] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|82 } } cursorid:305365143226089 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:90 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:03.946 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:03.946 [conn648] end connection 165.225.128.186:57732 (6 connections now open)
m31000| Fri Feb 22 11:57:03.946 [initandlisten] connection accepted from 165.225.128.186:65065 #649 (8 connections now open)
m31000| Fri Feb 22 11:57:03.946 [initandlisten] connection accepted from 165.225.128.186:34607 #650 (8 connections now open)
m31000| Fri Feb 22 11:57:04.039 [conn1] going to kill op: op: 13153.0
m31000| Fri Feb 22 11:57:04.040 [conn1] going to kill op: op: 13155.0
m31000| Fri Feb 22 11:57:04.040 [conn1] going to kill op: op: 13156.0
m31000| Fri Feb 22 11:57:04.049 [conn649] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:04.049 [conn650] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:04.049 [conn649] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|92 } } cursorid:305802943845087 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:04.049 [conn650] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|92 } } cursorid:305801893507040 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:04.049 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:04.049 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:04.049 [conn650] end connection 165.225.128.186:34607 (7 connections now open)
m31000| Fri Feb 22 11:57:04.049 [conn649] end connection 165.225.128.186:65065 (7 connections now open)
m31000| Fri Feb 22 11:57:04.049 [initandlisten] connection accepted from 165.225.128.186:62453 #651 (7 connections now open)
m31000| Fri Feb 22 11:57:04.049 [initandlisten] connection accepted from 165.225.128.186:40164 #652 (8 connections now open)
m31000| Fri Feb 22 11:57:04.141 [conn1] going to kill op: op: 13206.0
m31000| Fri Feb 22 11:57:04.141 [conn1] going to kill op: op: 13204.0
m31000| Fri Feb 22 11:57:04.141 [conn1] going to kill op: op: 13205.0
m31000| Fri Feb 22 11:57:04.142 [conn651] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:04.142 [conn651] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|3 } } cursorid:306241515793140 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:04.142 [conn652] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:04.142 [conn652] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|3 } } cursorid:306241566021053 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:04.142 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:04.142 [conn651] end connection 165.225.128.186:62453 (7 connections now open)
m31002| Fri Feb 22 11:57:04.142 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:04.142 [conn652] end connection 165.225.128.186:40164 (6 connections now open)
m31000| Fri Feb 22 11:57:04.142 [initandlisten] connection accepted from 165.225.128.186:52745 #653 (7 connections now open)
m31000| Fri Feb 22
11:57:04.143 [initandlisten] connection accepted from 165.225.128.186:44270 #654 (8 connections now open) m31000| Fri Feb 22 11:57:04.143 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.143 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534223000|92 } } cursorid:305750572546859 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.143 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.242 [conn1] going to kill op: op: 13244.0 m31000| Fri Feb 22 11:57:04.242 [conn1] going to kill op: op: 13245.0 m31000| Fri Feb 22 11:57:04.245 [conn653] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.245 [conn653] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|13 } } cursorid:306636322429861 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.245 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.245 [conn653] end connection 165.225.128.186:52745 (7 connections now open) m31000| Fri Feb 22 11:57:04.245 [conn654] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.245 [conn654] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|13 } } cursorid:306636286576415 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:04.245 [conn654] ClientCursor::find(): cursor not found in map '306636286576415' (ok after a drop) m31000| Fri Feb 22 11:57:04.245 [initandlisten] connection accepted from 165.225.128.186:33830 #655 (8 connections now open) m31002| Fri Feb 22 11:57:04.245 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.245 [conn654] 
end connection 165.225.128.186:44270 (7 connections now open) m31000| Fri Feb 22 11:57:04.246 [initandlisten] connection accepted from 165.225.128.186:33154 #656 (8 connections now open) m31000| Fri Feb 22 11:57:04.343 [conn1] going to kill op: op: 13286.0 m31000| Fri Feb 22 11:57:04.343 [conn1] going to kill op: op: 13284.0 m31000| Fri Feb 22 11:57:04.343 [conn1] going to kill op: op: 13283.0 m31000| Fri Feb 22 11:57:04.348 [conn655] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.348 [conn655] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|23 } } cursorid:307073868983504 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:04.348 [conn656] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.348 [conn656] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|23 } } cursorid:307073264298909 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.348 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:04.348 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.348 [conn655] end connection 165.225.128.186:33830 (7 connections now open) m31000| Fri Feb 22 11:57:04.348 [conn656] end connection 165.225.128.186:33154 (7 connections now open) m31000| Fri Feb 22 11:57:04.348 [initandlisten] connection accepted from 165.225.128.186:62199 #657 (7 connections now open) m31000| Fri Feb 22 11:57:04.348 [initandlisten] connection accepted from 165.225.128.186:54665 #658 (8 connections now open) m31000| Fri Feb 22 11:57:04.352 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.352 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|33 } } cursorid:307459703584091 
ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:04.352 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.444 [conn1] going to kill op: op: 13325.0 m31000| Fri Feb 22 11:57:04.444 [conn1] going to kill op: op: 13324.0 m31000| Fri Feb 22 11:57:04.451 [conn658] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.451 [conn658] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|33 } } cursorid:307512749581514 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:04.451 [conn657] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.451 [conn657] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|33 } } cursorid:307513214266694 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:04.451 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:04.451 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.451 [conn658] end connection 165.225.128.186:54665 (7 connections now open) m31000| Fri Feb 22 11:57:04.451 [conn657] end connection 165.225.128.186:62199 (7 connections now open) m31000| Fri Feb 22 11:57:04.451 [initandlisten] connection accepted from 165.225.128.186:60786 #659 (7 connections now open) m31000| Fri Feb 22 11:57:04.451 [initandlisten] connection accepted from 165.225.128.186:46781 #660 (8 connections now open) m31000| Fri Feb 22 11:57:04.545 [conn1] going to kill op: op: 13362.0 m31000| Fri Feb 22 11:57:04.545 [conn1] going to kill op: op: 13363.0 m31000| Fri Feb 22 11:57:04.553 [conn659] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.553 [conn659] 
getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|43 } } cursorid:307949980311195 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.554 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.554 [conn659] end connection 165.225.128.186:60786 (7 connections now open) m31000| Fri Feb 22 11:57:04.554 [initandlisten] connection accepted from 165.225.128.186:60498 #661 (8 connections now open) m31000| Fri Feb 22 11:57:04.554 [conn660] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.554 [conn660] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|43 } } cursorid:307951203424259 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:04.554 [conn660] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:04.554 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.554 [conn660] end connection 165.225.128.186:46781 (7 connections now open) m31000| Fri Feb 22 11:57:04.555 [initandlisten] connection accepted from 165.225.128.186:61178 #662 (8 connections now open) m31000| Fri Feb 22 11:57:04.645 [conn1] going to kill op: op: 13398.0 m31000| Fri Feb 22 11:57:04.646 [conn1] going to kill op: op: 13397.0 m31000| Fri Feb 22 11:57:04.646 [conn661] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.646 [conn661] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|53 } } cursorid:308383413943939 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.646 [conn661] end connection 
165.225.128.186:60498 (7 connections now open) m31000| Fri Feb 22 11:57:04.646 [conn662] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.647 [conn662] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|53 } } cursorid:308389172187178 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:04.647 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.647 [conn662] end connection 165.225.128.186:61178 (6 connections now open) m31000| Fri Feb 22 11:57:04.647 [initandlisten] connection accepted from 165.225.128.186:36293 #663 (7 connections now open) m31000| Fri Feb 22 11:57:04.647 [initandlisten] connection accepted from 165.225.128.186:58248 #664 (8 connections now open) m31000| Fri Feb 22 11:57:04.746 [conn1] going to kill op: op: 13435.0 m31000| Fri Feb 22 11:57:04.747 [conn1] going to kill op: op: 13436.0 m31000| Fri Feb 22 11:57:04.749 [conn663] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.749 [conn663] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|62 } } cursorid:308784194117726 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.749 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.749 [conn663] end connection 165.225.128.186:36293 (7 connections now open) m31000| Fri Feb 22 11:57:04.749 [conn664] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.749 [conn664] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|62 } } cursorid:308783564842787 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:04.750 [rsBackgroundSync] repl: old cursor isDead, will initiate a 
new one m31000| Fri Feb 22 11:57:04.750 [conn664] end connection 165.225.128.186:58248 (6 connections now open) m31000| Fri Feb 22 11:57:04.750 [initandlisten] connection accepted from 165.225.128.186:46951 #665 (8 connections now open) m31000| Fri Feb 22 11:57:04.750 [initandlisten] connection accepted from 165.225.128.186:61996 #666 (8 connections now open) m31000| Fri Feb 22 11:57:04.847 [conn1] going to kill op: op: 13474.0 m31000| Fri Feb 22 11:57:04.847 [conn1] going to kill op: op: 13475.0 m31000| Fri Feb 22 11:57:04.852 [conn665] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.852 [conn665] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|72 } } cursorid:309220930534089 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:04.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.852 [conn666] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.852 [conn666] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|72 } } cursorid:309221310160882 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:04.852 [conn665] end connection 165.225.128.186:46951 (7 connections now open) m31002| Fri Feb 22 11:57:04.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.853 [conn666] end connection 165.225.128.186:61996 (6 connections now open) m31000| Fri Feb 22 11:57:04.853 [initandlisten] connection accepted from 165.225.128.186:36666 #667 (7 connections now open) m31000| Fri Feb 22 11:57:04.853 [initandlisten] connection accepted from 165.225.128.186:51151 #668 (8 connections now open) m31000| Fri Feb 22 11:57:04.948 [conn1] going to kill op: op: 13513.0 m31000| Fri Feb 22 11:57:04.948 [conn1] going to kill op: op: 
13512.0 m31000| Fri Feb 22 11:57:04.955 [conn668] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.955 [conn668] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|83 } } cursorid:309660073647503 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:04.955 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.955 [conn667] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:04.955 [conn667] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|83 } } cursorid:309659974773676 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:04.955 [conn668] end connection 165.225.128.186:51151 (7 connections now open) m31000| Fri Feb 22 11:57:04.955 [conn667] ClientCursor::find(): cursor not found in map '309659974773676' (ok after a drop) m31001| Fri Feb 22 11:57:04.956 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:04.956 [conn667] end connection 165.225.128.186:36666 (6 connections now open) m31000| Fri Feb 22 11:57:04.956 [initandlisten] connection accepted from 165.225.128.186:43523 #669 (7 connections now open) m31000| Fri Feb 22 11:57:04.956 [initandlisten] connection accepted from 165.225.128.186:54391 #670 (8 connections now open) m31000| Fri Feb 22 11:57:05.049 [conn1] going to kill op: op: 13550.0 m31000| Fri Feb 22 11:57:05.049 [conn1] going to kill op: op: 13551.0 m31000| Fri Feb 22 11:57:05.058 [conn669] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.058 [conn669] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|93 } } cursorid:310097139049802 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| 
Fri Feb 22 11:57:05.058 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:05.058 [conn670] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.059 [conn670] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534224000|93 } } cursorid:310098180087193 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:05.059 [conn669] end connection 165.225.128.186:43523 (7 connections now open) m31001| Fri Feb 22 11:57:05.059 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:05.059 [conn670] end connection 165.225.128.186:54391 (6 connections now open) m31000| Fri Feb 22 11:57:05.059 [initandlisten] connection accepted from 165.225.128.186:36023 #671 (7 connections now open) m31000| Fri Feb 22 11:57:05.059 [initandlisten] connection accepted from 165.225.128.186:64222 #672 (8 connections now open) m31000| Fri Feb 22 11:57:05.150 [conn1] going to kill op: op: 13590.0 m31000| Fri Feb 22 11:57:05.150 [conn1] going to kill op: op: 13587.0 m31000| Fri Feb 22 11:57:05.150 [conn1] going to kill op: op: 13588.0 m31000| Fri Feb 22 11:57:05.151 [conn671] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.151 [conn671] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|5 } } cursorid:310534814603861 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:05.151 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:05.152 [conn671] end connection 165.225.128.186:36023 (7 connections now open) m31000| Fri Feb 22 11:57:05.152 [conn672] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.152 [conn672] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|5 } } 
cursorid:310535622643519 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:05.152 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:05.152 [conn672] end connection 165.225.128.186:64222 (6 connections now open) m31000| Fri Feb 22 11:57:05.152 [initandlisten] connection accepted from 165.225.128.186:52936 #673 (8 connections now open) m31000| Fri Feb 22 11:57:05.152 [initandlisten] connection accepted from 165.225.128.186:53455 #674 (8 connections now open) m31000| Fri Feb 22 11:57:05.154 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.154 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|14 } } cursorid:310880433027540 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:05.154 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:05.251 [conn1] going to kill op: op: 13629.0 m31000| Fri Feb 22 11:57:05.251 [conn1] going to kill op: op: 13628.0 m31000| Fri Feb 22 11:57:05.255 [conn673] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.255 [conn673] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|14 } } cursorid:310931287702489 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:104 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:05.255 [conn674] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:05.255 [conn674] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|14 } } cursorid:310930235081787 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:05.255 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one 
m31000| Fri Feb 22 11:57:05.256 [conn673] end connection 165.225.128.186:52936 (7 connections now open)
m31000| Fri Feb 22 11:57:05.256 [conn674] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:05.256 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.256 [conn674] end connection 165.225.128.186:53455 (6 connections now open)
m31000| Fri Feb 22 11:57:05.256 [initandlisten] connection accepted from 165.225.128.186:58865 #675 (7 connections now open)
m31000| Fri Feb 22 11:57:05.256 [initandlisten] connection accepted from 165.225.128.186:33339 #676 (8 connections now open)
m31000| Fri Feb 22 11:57:05.352 [conn1] going to kill op: op: 13669.0
m31000| Fri Feb 22 11:57:05.352 [conn1] going to kill op: op: 13668.0
m31000| Fri Feb 22 11:57:05.359 [conn676] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.359 [conn676] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|24 } } cursorid:311369558899368 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:05.359 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.359 [conn676] end connection 165.225.128.186:33339 (7 connections now open)
m31000| Fri Feb 22 11:57:05.359 [conn675] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.359 [conn675] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|24 } } cursorid:311369686198477 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.359 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.359 [conn675] end connection 165.225.128.186:58865 (6 connections now open)
m31000| Fri Feb 22 11:57:05.359 [initandlisten] connection accepted from 165.225.128.186:37594 #677 (7 connections now open)
m31000| Fri Feb 22 11:57:05.360 [initandlisten] connection accepted from 165.225.128.186:33523 #678 (8 connections now open)
m31000| Fri Feb 22 11:57:05.452 [conn1] going to kill op: op: 13715.0
m31000| Fri Feb 22 11:57:05.453 [conn1] going to kill op: op: 13718.0
m31000| Fri Feb 22 11:57:05.453 [conn1] going to kill op: op: 13717.0
m31000| Fri Feb 22 11:57:05.455 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.455 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|35 } } cursorid:311754511779547 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.455 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.462 [conn678] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.462 [conn678] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|35 } } cursorid:311807124430345 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.462 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.462 [conn678] end connection 165.225.128.186:33523 (7 connections now open)
m31000| Fri Feb 22 11:57:05.462 [conn677] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.462 [conn677] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|34 } } cursorid:311807025890631 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:05.462 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.463 [conn677] end connection 165.225.128.186:37594 (6 connections now open)
m31000| Fri Feb 22 11:57:05.463 [initandlisten] connection accepted from 165.225.128.186:51514 #679 (7 connections now open)
m31000| Fri Feb 22 11:57:05.470 [initandlisten] connection accepted from 165.225.128.186:60810 #680 (8 connections now open)
m31000| Fri Feb 22 11:57:05.554 [conn1] going to kill op: op: 13753.0
m31000| Fri Feb 22 11:57:05.554 [conn1] going to kill op: op: 13755.0
m31000| Fri Feb 22 11:57:05.555 [conn679] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.555 [conn679] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|45 } } cursorid:312240147635239 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.555 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.555 [conn679] end connection 165.225.128.186:51514 (7 connections now open)
m31000| Fri Feb 22 11:57:05.555 [initandlisten] connection accepted from 165.225.128.186:43404 #681 (8 connections now open)
m31000| Fri Feb 22 11:57:05.562 [conn680] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.562 [conn680] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|45 } } cursorid:312246067404132 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:05.562 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.563 [conn680] end connection 165.225.128.186:60810 (7 connections now open)
m31000| Fri Feb 22 11:57:05.563 [initandlisten] connection accepted from 165.225.128.186:53302 #682 (8 connections now open)
m31000| Fri Feb 22 11:57:05.654 [conn1] going to kill op: op: 13791.0
m31000| Fri Feb 22 11:57:05.655 [conn1] going to kill op: op: 13790.0
m31000| Fri Feb 22 11:57:05.656 [conn682] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.656 [conn682] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|55 } } cursorid:312597914698967 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:05.656 [conn682] ClientCursor::find(): cursor not found in map '312597914698967' (ok after a drop)
m31001| Fri Feb 22 11:57:05.656 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.656 [conn682] end connection 165.225.128.186:53302 (7 connections now open)
m31000| Fri Feb 22 11:57:05.656 [initandlisten] connection accepted from 165.225.128.186:62939 #683 (8 connections now open)
m31000| Fri Feb 22 11:57:05.658 [conn681] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.658 [conn681] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|54 } } cursorid:312592383609134 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.658 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.658 [conn681] end connection 165.225.128.186:43404 (7 connections now open)
m31000| Fri Feb 22 11:57:05.658 [initandlisten] connection accepted from 165.225.128.186:60925 #684 (8 connections now open)
m31000| Fri Feb 22 11:57:05.755 [conn1] going to kill op: op: 13829.0
m31000| Fri Feb 22 11:57:05.755 [conn1] going to kill op: op: 13828.0
m31000| Fri Feb 22 11:57:05.759 [conn683] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.759 [conn683] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|64 } } cursorid:312988750068306 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:05.759 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.759 [conn683] end connection 165.225.128.186:62939 (7 connections now open)
m31000| Fri Feb 22 11:57:05.760 [initandlisten] connection accepted from 165.225.128.186:60243 #685 (8 connections now open)
m31000| Fri Feb 22 11:57:05.761 [conn684] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.761 [conn684] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|64 } } cursorid:312992172102059 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.761 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.761 [conn684] end connection 165.225.128.186:60925 (7 connections now open)
m31000| Fri Feb 22 11:57:05.761 [initandlisten] connection accepted from 165.225.128.186:61829 #686 (8 connections now open)
m31000| Fri Feb 22 11:57:05.856 [conn1] going to kill op: op: 13867.0
m31000| Fri Feb 22 11:57:05.856 [conn1] going to kill op: op: 13866.0
m31000| Fri Feb 22 11:57:05.862 [conn685] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.862 [conn685] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|74 } } cursorid:313425441203777 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:05.862 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.862 [conn685] end connection 165.225.128.186:60243 (7 connections now open)
m31000| Fri Feb 22 11:57:05.863 [initandlisten] connection accepted from 165.225.128.186:62156 #687 (8 connections now open)
m31000| Fri Feb 22 11:57:05.863 [conn686] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.863 [conn686] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|74 } } cursorid:313429822921235 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:05.863 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.863 [conn686] end connection 165.225.128.186:61829 (7 connections now open)
m31000| Fri Feb 22 11:57:05.864 [initandlisten] connection accepted from 165.225.128.186:58138 #688 (8 connections now open)
m31000| Fri Feb 22 11:57:05.957 [conn1] going to kill op: op: 13905.0
m31000| Fri Feb 22 11:57:05.957 [conn1] going to kill op: op: 13904.0
m31000| Fri Feb 22 11:57:05.965 [conn687] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.965 [conn687] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|84 } } cursorid:313863934850220 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:05.966 [conn688] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:05.966 [conn688] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|84 } } cursorid:313868237056430 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:05.966 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.966 [conn688] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:05.966 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:05.966 [conn687] end connection 165.225.128.186:62156 (7 connections now open)
m31000| Fri Feb 22 11:57:05.966 [conn688] end connection 165.225.128.186:58138 (7 connections now open)
m31000| Fri Feb 22 11:57:05.966 [initandlisten] connection accepted from 165.225.128.186:37130 #689 (7 connections now open)
m31000| Fri Feb 22 11:57:05.966 [initandlisten] connection accepted from 165.225.128.186:60806 #690 (8 connections now open)
m31000| Fri Feb 22 11:57:06.058 [conn1] going to kill op: op: 13942.0
m31000| Fri Feb 22 11:57:06.058 [conn1] going to kill op: op: 13941.0
m31000| Fri Feb 22 11:57:06.059 [conn689] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:06.059 [conn689] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|94 } } cursorid:314305772397290 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:06.059 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:06.059 [conn690] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:06.059 [conn689] end connection 165.225.128.186:37130 (7 connections now open)
m31000| Fri Feb 22 11:57:06.059 [conn690] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534225000|94 } } cursorid:314305860778564 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:06.059 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:06.059 [conn690] end connection 165.225.128.186:60806 (6 connections now open)
m31000| Fri Feb 22 11:57:06.059 [initandlisten] connection accepted from 165.225.128.186:42080 #691 (7 connections now open)
m31000| Fri Feb 22 11:57:06.060 [initandlisten] connection accepted from 165.225.128.186:46724 #692 (8 connections now open)
m31000| Fri Feb 22 11:57:06.158 [conn1] going to kill op: op: 13985.0
m31000| Fri Feb 22 11:57:06.159 [conn1] going to kill op: op: 13982.0
m31000| Fri Feb 22 11:57:06.159 [conn1] going to kill op: op: 13983.0
m31000| Fri Feb 22 11:57:06.162 [conn692] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:06.162 [conn692] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|5 } } cursorid:314702760922771 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:06.162 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:06.162 [conn691] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:06.162 [conn691] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|4 } } cursorid:314702386534529 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:06.162 [conn692] end connection 165.225.128.186:46724 (7 connections now open)
m31001| Fri Feb 22 11:57:06.162 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:06.162 [conn691] end connection 165.225.128.186:42080 (6 connections now open)
m31000| Fri Feb 22 11:57:06.162 [initandlisten] connection accepted from 165.225.128.186:38220 #693 (8 connections now open)
m31000| Fri Feb 22 11:57:06.163 [initandlisten] connection accepted from 165.225.128.186:39578 #694 (8 connections now open)
m31000| Fri Feb 22 11:57:06.165 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:06.165 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|15 } } cursorid:315087926508495 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:06.165 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:06.259 [conn1] going to kill op: op: 14024.0
m31000| Fri Feb 22 11:57:06.260 [conn1] going to kill op: op: 14023.0
m31000| Fri Feb 22 11:57:06.264 [conn693] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:06.264 [conn693] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|15 } } cursorid:315140760205623 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64
nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.264 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.265 [conn693] end connection 165.225.128.186:38220 (7 connections now open) m31000| Fri Feb 22 11:57:06.265 [initandlisten] connection accepted from 165.225.128.186:44759 #695 (8 connections now open) m31000| Fri Feb 22 11:57:06.265 [conn694] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.265 [conn694] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|15 } } cursorid:315139817195076 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:06.265 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.265 [conn694] end connection 165.225.128.186:39578 (7 connections now open) m31000| Fri Feb 22 11:57:06.266 [initandlisten] connection accepted from 165.225.128.186:52517 #696 (8 connections now open) m31000| Fri Feb 22 11:57:06.360 [conn1] going to kill op: op: 14063.0 m31000| Fri Feb 22 11:57:06.360 [conn1] going to kill op: op: 14062.0 m31000| Fri Feb 22 11:57:06.367 [conn695] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.367 [conn695] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|25 } } cursorid:315574496943938 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:06.367 [conn695] ClientCursor::find(): cursor not found in map '315574496943938' (ok after a drop) m31002| Fri Feb 22 11:57:06.368 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.368 [conn695] end connection 165.225.128.186:44759 (7 connections now open) m31000| Fri Feb 22 11:57:06.368 [initandlisten] connection accepted from 165.225.128.186:59104 #697 (8 connections now open) m31000| Fri 
Feb 22 11:57:06.368 [conn696] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.368 [conn696] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|25 } } cursorid:315577160709629 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:87 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:06.368 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.369 [conn696] end connection 165.225.128.186:52517 (7 connections now open) m31000| Fri Feb 22 11:57:06.369 [initandlisten] connection accepted from 165.225.128.186:62846 #698 (8 connections now open) m31000| Fri Feb 22 11:57:06.461 [conn1] going to kill op: op: 14102.0 m31000| Fri Feb 22 11:57:06.461 [conn1] going to kill op: op: 14101.0 m31000| Fri Feb 22 11:57:06.461 [conn1] going to kill op: op: 14103.0 m31000| Fri Feb 22 11:57:06.470 [conn697] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.470 [conn697] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|35 } } cursorid:316012096865850 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.470 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.470 [conn697] end connection 165.225.128.186:59104 (7 connections now open) m31000| Fri Feb 22 11:57:06.470 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.470 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|44 } } cursorid:316360313538391 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:06.471 [conn698] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.471 [conn698] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|35 } 
} cursorid:316015800268649 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:06.471 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.471 [conn698] end connection 165.225.128.186:62846 (6 connections now open) m31000| Fri Feb 22 11:57:06.471 [initandlisten] connection accepted from 165.225.128.186:56482 #699 (8 connections now open) m31000| Fri Feb 22 11:57:06.471 [initandlisten] connection accepted from 165.225.128.186:54476 #700 (8 connections now open) m31002| Fri Feb 22 11:57:06.471 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.562 [conn1] going to kill op: op: 14139.0 m31000| Fri Feb 22 11:57:06.562 [conn1] going to kill op: op: 14138.0 m31000| Fri Feb 22 11:57:06.563 [conn699] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.563 [conn699] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|45 } } cursorid:316454084366701 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.563 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.563 [conn699] end connection 165.225.128.186:56482 (7 connections now open) m31000| Fri Feb 22 11:57:06.563 [conn700] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.563 [conn700] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|45 } } cursorid:316453627486556 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:06.563 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.563 [conn700] end connection 165.225.128.186:54476 (6 connections now open) m31000| Fri Feb 22 11:57:06.563 
[initandlisten] connection accepted from 165.225.128.186:55690 #701 (8 connections now open) m31000| Fri Feb 22 11:57:06.564 [initandlisten] connection accepted from 165.225.128.186:59649 #702 (8 connections now open) m31000| Fri Feb 22 11:57:06.663 [conn1] going to kill op: op: 14176.0 m31000| Fri Feb 22 11:57:06.663 [conn1] going to kill op: op: 14177.0 m31000| Fri Feb 22 11:57:06.666 [conn701] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.666 [conn701] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|54 } } cursorid:316848987326464 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:178 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:06.666 [conn702] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.666 [conn701] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31000| Fri Feb 22 11:57:06.666 [conn702] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|54 } } cursorid:316849019589149 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.666 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.666 [conn701] end connection 165.225.128.186:55690 (7 connections now open) m31001| Fri Feb 22 11:57:06.666 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.666 [conn702] end connection 165.225.128.186:59649 (6 connections now open) m31000| Fri Feb 22 11:57:06.666 [initandlisten] connection accepted from 165.225.128.186:44999 #703 (7 connections now open) m31000| Fri Feb 22 11:57:06.667 [initandlisten] connection accepted from 165.225.128.186:48052 #704 (8 connections now open) m31000| Fri Feb 22 11:57:06.764 [conn1] going to kill op: op: 14218.0 m31000| Fri Feb 22 11:57:06.764 [conn1] going to kill op: op: 14217.0 m31000| Fri Feb 22 
11:57:06.769 [conn703] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.769 [conn703] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|64 } } cursorid:317288048162518 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.769 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.769 [conn703] end connection 165.225.128.186:44999 (7 connections now open) m31000| Fri Feb 22 11:57:06.769 [conn704] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.769 [conn704] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|64 } } cursorid:317286978338114 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:06.770 [initandlisten] connection accepted from 165.225.128.186:60823 #705 (8 connections now open) m31001| Fri Feb 22 11:57:06.770 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.770 [conn704] end connection 165.225.128.186:48052 (7 connections now open) m31000| Fri Feb 22 11:57:06.770 [initandlisten] connection accepted from 165.225.128.186:32913 #706 (8 connections now open) m31000| Fri Feb 22 11:57:06.865 [conn1] going to kill op: op: 14256.0 m31000| Fri Feb 22 11:57:06.865 [conn1] going to kill op: op: 14255.0 m31000| Fri Feb 22 11:57:06.872 [conn705] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.872 [conn705] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|75 } } cursorid:317726622817765 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.872 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.872 [conn705] end connection 
165.225.128.186:60823 (7 connections now open) m31000| Fri Feb 22 11:57:06.872 [conn706] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.872 [conn706] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|75 } } cursorid:317725321168595 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:06.873 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.873 [conn706] end connection 165.225.128.186:32913 (6 connections now open) m31000| Fri Feb 22 11:57:06.873 [initandlisten] connection accepted from 165.225.128.186:48874 #707 (8 connections now open) m31000| Fri Feb 22 11:57:06.873 [initandlisten] connection accepted from 165.225.128.186:36331 #708 (8 connections now open) m31000| Fri Feb 22 11:57:06.966 [conn1] going to kill op: op: 14293.0 m31000| Fri Feb 22 11:57:06.966 [conn1] going to kill op: op: 14294.0 m31000| Fri Feb 22 11:57:06.975 [conn707] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.975 [conn707] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|85 } } cursorid:318163393123128 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:06.975 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:06.975 [conn708] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:06.975 [conn707] end connection 165.225.128.186:48874 (7 connections now open) m31000| Fri Feb 22 11:57:06.975 [conn708] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|85 } } cursorid:318163740677239 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:06.975 [rsBackgroundSync] repl: old cursor isDead, will initiate a 
new one m31000| Fri Feb 22 11:57:06.975 [conn708] end connection 165.225.128.186:36331 (6 connections now open) m31000| Fri Feb 22 11:57:06.976 [initandlisten] connection accepted from 165.225.128.186:35498 #709 (7 connections now open) m31000| Fri Feb 22 11:57:06.976 [initandlisten] connection accepted from 165.225.128.186:63424 #710 (8 connections now open) m31000| Fri Feb 22 11:57:07.067 [conn1] going to kill op: op: 14328.0 m31000| Fri Feb 22 11:57:07.067 [conn1] going to kill op: op: 14329.0 m31000| Fri Feb 22 11:57:07.068 [conn710] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.068 [conn710] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|95 } } cursorid:318602203722919 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.068 [conn709] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.068 [conn709] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534226000|95 } } cursorid:318602493854727 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.068 [conn710] ClientCursor::find(): cursor not found in map '318602203722919' (ok after a drop) m31001| Fri Feb 22 11:57:07.068 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:07.068 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.068 [conn710] end connection 165.225.128.186:63424 (7 connections now open) m31000| Fri Feb 22 11:57:07.068 [conn709] end connection 165.225.128.186:35498 (7 connections now open) m31000| Fri Feb 22 11:57:07.068 [initandlisten] connection accepted from 165.225.128.186:34291 #711 (7 connections now open) m31000| Fri Feb 22 11:57:07.068 [initandlisten] connection accepted from 165.225.128.186:59187 #712 (8 connections now open) m31000| 
Fri Feb 22 11:57:07.168 [conn1] going to kill op: op: 14368.0 m31000| Fri Feb 22 11:57:07.168 [conn1] going to kill op: op: 14369.0 m31000| Fri Feb 22 11:57:07.171 [conn712] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.171 [conn712] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|6 } } cursorid:318996089848147 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.171 [conn711] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.171 [conn711] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|6 } } cursorid:318995905640984 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.171 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.171 [conn712] end connection 165.225.128.186:59187 (7 connections now open) m31002| Fri Feb 22 11:57:07.171 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.171 [conn711] end connection 165.225.128.186:34291 (6 connections now open) m31000| Fri Feb 22 11:57:07.171 [initandlisten] connection accepted from 165.225.128.186:49466 #713 (7 connections now open) m31000| Fri Feb 22 11:57:07.171 [initandlisten] connection accepted from 165.225.128.186:33078 #714 (8 connections now open) m31000| Fri Feb 22 11:57:07.268 [conn1] going to kill op: op: 14418.0 m31000| Fri Feb 22 11:57:07.269 [conn1] going to kill op: op: 14417.0 m31000| Fri Feb 22 11:57:07.269 [conn1] going to kill op: op: 14416.0 m31000| Fri Feb 22 11:57:07.273 [conn713] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.273 [conn713] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|16 } } cursorid:319434811665796 ntoreturn:0 keyUpdates:0 exception: operation was 
interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.274 [conn714] { $err: "operation was interrupted", code: 11601 } m31001| Fri Feb 22 11:57:07.274 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.274 [conn714] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|16 } } cursorid:319435573880589 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.274 [conn713] end connection 165.225.128.186:49466 (7 connections now open) m31002| Fri Feb 22 11:57:07.274 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.274 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.274 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|16 } } cursorid:319383958223760 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.274 [conn714] end connection 165.225.128.186:33078 (6 connections now open) m31000| Fri Feb 22 11:57:07.274 [initandlisten] connection accepted from 165.225.128.186:55706 #715 (7 connections now open) m31000| Fri Feb 22 11:57:07.274 [initandlisten] connection accepted from 165.225.128.186:62279 #716 (8 connections now open) m31001| Fri Feb 22 11:57:07.275 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.369 [conn1] going to kill op: op: 14456.0 m31000| Fri Feb 22 11:57:07.370 [conn1] going to kill op: op: 14457.0 m31000| Fri Feb 22 11:57:07.376 [conn715] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.376 [conn715] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|26 } } cursorid:319872028885605 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 
nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.376 [conn716] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.376 [conn716] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|26 } } cursorid:319873161054903 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:07.376 [conn715] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:07.376 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:07.376 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.377 [conn715] end connection 165.225.128.186:55706 (7 connections now open) m31000| Fri Feb 22 11:57:07.377 [conn716] end connection 165.225.128.186:62279 (7 connections now open) m31000| Fri Feb 22 11:57:07.377 [initandlisten] connection accepted from 165.225.128.186:56816 #717 (7 connections now open) m31000| Fri Feb 22 11:57:07.377 [initandlisten] connection accepted from 165.225.128.186:35734 #718 (8 connections now open) m31000| Fri Feb 22 11:57:07.470 [conn1] going to kill op: op: 14495.0 m31000| Fri Feb 22 11:57:07.470 [conn1] going to kill op: op: 14496.0 m31000| Fri Feb 22 11:57:07.479 [conn717] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.479 [conn717] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|36 } } cursorid:320310659066393 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:07.479 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.479 [conn717] end connection 165.225.128.186:56816 (7 connections now open) m31000| Fri Feb 22 11:57:07.479 [conn718] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.479 [conn718] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|36 } } cursorid:320311858925472 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.480 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.480 [initandlisten] connection accepted from 165.225.128.186:53413 #719 (8 connections now open) m31000| Fri Feb 22 11:57:07.480 [conn718] end connection 165.225.128.186:35734 (7 connections now open) m31000| Fri Feb 22 11:57:07.480 [initandlisten] connection accepted from 165.225.128.186:52172 #720 (8 connections now open) m31000| Fri Feb 22 11:57:07.571 [conn1] going to kill op: op: 14542.0 m31000| Fri Feb 22 11:57:07.571 [conn1] going to kill op: op: 14541.0 m31000| Fri Feb 22 11:57:07.572 [conn1] going to kill op: op: 14540.0 m31000| Fri Feb 22 11:57:07.572 [conn720] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.572 [conn720] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|47 } } cursorid:320750117486860 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.572 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.572 [conn720] end connection 165.225.128.186:52172 (7 connections now open) m31000| Fri Feb 22 11:57:07.572 [initandlisten] connection accepted from 165.225.128.186:44033 #721 (8 connections now open) m31000| Fri Feb 22 11:57:07.573 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.573 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|47 } } cursorid:320698149151716 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:07.573 [rsSyncNotifier] repl: old cursor isDead, will initiate a 
new one m31000| Fri Feb 22 11:57:07.672 [conn1] going to kill op: op: 14577.0 m31000| Fri Feb 22 11:57:07.672 [conn1] going to kill op: op: 14576.0 m31000| Fri Feb 22 11:57:07.673 [conn719] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.673 [conn719] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|47 } } cursorid:320749464985977 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:07.673 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.673 [conn719] end connection 165.225.128.186:53413 (7 connections now open) m31000| Fri Feb 22 11:57:07.673 [initandlisten] connection accepted from 165.225.128.186:55826 #722 (8 connections now open) m31000| Fri Feb 22 11:57:07.674 [conn721] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.674 [conn721] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|56 } } cursorid:321139422129419 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.674 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.674 [conn721] end connection 165.225.128.186:44033 (7 connections now open) m31000| Fri Feb 22 11:57:07.675 [initandlisten] connection accepted from 165.225.128.186:33862 #723 (8 connections now open) m31000| Fri Feb 22 11:57:07.773 [conn1] going to kill op: op: 14614.0 m31000| Fri Feb 22 11:57:07.773 [conn1] going to kill op: op: 14615.0 m31000| Fri Feb 22 11:57:07.776 [conn722] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.776 [conn722] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|66 } } cursorid:321573078476118 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 
reslen:20 10ms m31000| Fri Feb 22 11:57:07.776 [conn722] ClientCursor::find(): cursor not found in map '321573078476118' (ok after a drop) m31002| Fri Feb 22 11:57:07.776 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.776 [conn722] end connection 165.225.128.186:55826 (7 connections now open) m31000| Fri Feb 22 11:57:07.776 [initandlisten] connection accepted from 165.225.128.186:40420 #724 (8 connections now open) m31000| Fri Feb 22 11:57:07.777 [conn723] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.777 [conn723] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|66 } } cursorid:321578195208060 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.777 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.777 [conn723] end connection 165.225.128.186:33862 (7 connections now open) m31000| Fri Feb 22 11:57:07.777 [initandlisten] connection accepted from 165.225.128.186:39909 #725 (8 connections now open) m31000| Fri Feb 22 11:57:07.874 [conn1] going to kill op: op: 14652.0 m31000| Fri Feb 22 11:57:07.874 [conn1] going to kill op: op: 14653.0 m31000| Fri Feb 22 11:57:07.878 [conn724] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.878 [conn724] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|76 } } cursorid:322011755787844 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:07.878 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.879 [conn724] end connection 165.225.128.186:40420 (7 connections now open) m31000| Fri Feb 22 11:57:07.879 [initandlisten] connection accepted from 165.225.128.186:38160 #726 (8 connections now open) m31000| Fri Feb 22 
11:57:07.879 [conn725] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.879 [conn725] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|76 } } cursorid:322015679385488 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.879 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.879 [conn725] end connection 165.225.128.186:39909 (7 connections now open) m31000| Fri Feb 22 11:57:07.880 [initandlisten] connection accepted from 165.225.128.186:57909 #727 (8 connections now open) m31000| Fri Feb 22 11:57:07.975 [conn1] going to kill op: op: 14691.0 m31000| Fri Feb 22 11:57:07.975 [conn1] going to kill op: op: 14690.0 m31000| Fri Feb 22 11:57:07.981 [conn726] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.981 [conn726] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|86 } } cursorid:322449511059385 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:07.981 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.981 [conn726] end connection 165.225.128.186:38160 (7 connections now open) m31000| Fri Feb 22 11:57:07.981 [conn727] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:07.981 [conn727] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|86 } } cursorid:322454160748808 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:07.981 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:07.981 [conn727] end connection 165.225.128.186:57909 (6 connections now open) m31000| Fri Feb 22 11:57:07.981 [initandlisten] connection accepted from 
165.225.128.186:38940 #728 (7 connections now open)
m31000| Fri Feb 22 11:57:07.982 [initandlisten] connection accepted from 165.225.128.186:56887 #729 (8 connections now open)
m31000| Fri Feb 22 11:57:08.076 [conn1] going to kill op: op: 14730.0
m31000| Fri Feb 22 11:57:08.076 [conn1] going to kill op: op: 14729.0
m31000| Fri Feb 22 11:57:08.083 [conn729] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.083 [conn729] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|96 } } cursorid:322893132601831 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.083 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.083 [conn729] end connection 165.225.128.186:56887 (7 connections now open)
m31000| Fri Feb 22 11:57:08.084 [initandlisten] connection accepted from 165.225.128.186:64095 #730 (8 connections now open)
m31000| Fri Feb 22 11:57:08.084 [conn728] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.084 [conn728] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534227000|96 } } cursorid:322891578651462 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.084 [conn728] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:08.084 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.084 [conn728] end connection 165.225.128.186:38940 (7 connections now open)
m31000| Fri Feb 22 11:57:08.085 [initandlisten] connection accepted from 165.225.128.186:43618 #731 (8 connections now open)
m31000| Fri Feb 22 11:57:08.177 [conn1] going to kill op: op: 14770.0
m31000| Fri Feb 22 11:57:08.177 [conn1] going to kill op: op: 14769.0
m31000| Fri Feb 22 11:57:08.186 [conn730] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.186 [conn730] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|7 } } cursorid:323326057699694 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.186 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.186 [conn730] end connection 165.225.128.186:64095 (7 connections now open)
m31000| Fri Feb 22 11:57:08.186 [initandlisten] connection accepted from 165.225.128.186:49546 #732 (8 connections now open)
m31000| Fri Feb 22 11:57:08.187 [conn731] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.187 [conn731] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|7 } } cursorid:323330229844611 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.187 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.187 [conn731] end connection 165.225.128.186:43618 (7 connections now open)
m31000| Fri Feb 22 11:57:08.187 [initandlisten] connection accepted from 165.225.128.186:39376 #733 (8 connections now open)
m31000| Fri Feb 22 11:57:08.278 [conn1] going to kill op: op: 14805.0
m31000| Fri Feb 22 11:57:08.278 [conn1] going to kill op: op: 14806.0
m31000| Fri Feb 22 11:57:08.278 [conn732] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.278 [conn732] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|17 } } cursorid:323764284721853 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.278 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.278 [conn732] end connection 165.225.128.186:49546 (7 connections now open)
m31000| Fri Feb 22 11:57:08.279 [initandlisten] connection accepted from 165.225.128.186:45274 #734 (8 connections now open)
m31000| Fri Feb 22 11:57:08.279 [conn733] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.279 [conn733] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|17 } } cursorid:323768163442051 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.279 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.279 [conn733] end connection 165.225.128.186:39376 (7 connections now open)
m31000| Fri Feb 22 11:57:08.279 [initandlisten] connection accepted from 165.225.128.186:41233 #735 (8 connections now open)
m31000| Fri Feb 22 11:57:08.378 [conn1] going to kill op: op: 14858.0
m31000| Fri Feb 22 11:57:08.379 [conn1] going to kill op: op: 14857.0
m31000| Fri Feb 22 11:57:08.379 [conn1] going to kill op: op: 14856.0
m31000| Fri Feb 22 11:57:08.381 [conn734] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.381 [conn734] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|26 } } cursorid:324158877139096 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.381 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.381 [conn734] end connection 165.225.128.186:45274 (7 connections now open)
m31000| Fri Feb 22 11:57:08.381 [conn735] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.381 [conn735] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|26 } } cursorid:324164461199978 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.381 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.381 [initandlisten] connection accepted from 165.225.128.186:34434 #736 (8 connections now open)
m31000| Fri Feb 22 11:57:08.382 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.382 [conn735] end connection 165.225.128.186:41233 (7 connections now open)
m31000| Fri Feb 22 11:57:08.382 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|26 } } cursorid:324111638994779 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.382 [initandlisten] connection accepted from 165.225.128.186:43858 #737 (8 connections now open)
m31001| Fri Feb 22 11:57:08.382 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.479 [conn1] going to kill op: op: 14896.0
m31000| Fri Feb 22 11:57:08.480 [conn1] going to kill op: op: 14897.0
m31000| Fri Feb 22 11:57:08.483 [conn736] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.484 [conn736] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|37 } } cursorid:324601529930335 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.484 [conn736] ClientCursor::find(): cursor not found in map '324601529930335' (ok after a drop)
m31001| Fri Feb 22 11:57:08.484 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.484 [conn737] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.484 [conn737] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|37 } } cursorid:324602833873130 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.484 [conn736] end connection 165.225.128.186:34434 (7 connections now open)
m31002| Fri Feb 22 11:57:08.484 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.484 [conn737] end connection 165.225.128.186:43858 (6 connections now open)
m31000| Fri Feb 22 11:57:08.484 [initandlisten] connection accepted from 165.225.128.186:44569 #738 (7 connections now open)
m31000| Fri Feb 22 11:57:08.484 [initandlisten] connection accepted from 165.225.128.186:47593 #739 (8 connections now open)
m31000| Fri Feb 22 11:57:08.580 [conn1] going to kill op: op: 14934.0
m31000| Fri Feb 22 11:57:08.581 [conn1] going to kill op: op: 14937.0
m31000| Fri Feb 22 11:57:08.581 [conn1] going to kill op: op: 14936.0
m31000| Fri Feb 22 11:57:08.584 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.584 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|56 } } cursorid:325383430902879 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.584 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.586 [conn739] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.586 [conn739] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|47 } } cursorid:325039766502191 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.586 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.586 [conn739] end connection 165.225.128.186:47593 (7 connections now open)
m31000| Fri Feb 22 11:57:08.586 [conn738] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.586 [conn738] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|47 } } cursorid:325038952861555 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.586 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.586 [initandlisten] connection accepted from 165.225.128.186:61546 #740 (8 connections now open)
m31000| Fri Feb 22 11:57:08.586 [conn738] end connection 165.225.128.186:44569 (7 connections now open)
m31000| Fri Feb 22 11:57:08.587 [initandlisten] connection accepted from 165.225.128.186:37772 #741 (8 connections now open)
m31000| Fri Feb 22 11:57:08.681 [conn1] going to kill op: op: 14975.0
m31000| Fri Feb 22 11:57:08.682 [conn1] going to kill op: op: 14976.0
m31000| Fri Feb 22 11:57:08.688 [conn740] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.689 [conn740] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|57 } } cursorid:325477884682338 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.689 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.689 [conn740] end connection 165.225.128.186:61546 (7 connections now open)
m31000| Fri Feb 22 11:57:08.689 [conn741] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.689 [conn741] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|57 } } cursorid:325477325976953 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.689 [initandlisten] connection accepted from 165.225.128.186:54024 #742 (8 connections now open)
m31001| Fri Feb 22 11:57:08.689 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.689 [conn741] end connection 165.225.128.186:37772 (7 connections now open)
m31000| Fri Feb 22 11:57:08.689 [initandlisten] connection accepted from 165.225.128.186:55312 #743 (8 connections now open)
m31000| Fri Feb 22 11:57:08.782 [conn1] going to kill op: op: 15014.0
m31000| Fri Feb 22 11:57:08.782 [conn1] going to kill op: op: 15013.0
m31000| Fri Feb 22 11:57:08.791 [conn742] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.791 [conn742] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|67 } } cursorid:325911404244843 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.791 [conn742] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:08.791 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.791 [conn742] end connection 165.225.128.186:54024 (7 connections now open)
m31000| Fri Feb 22 11:57:08.792 [conn743] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.792 [initandlisten] connection accepted from 165.225.128.186:47338 #744 (8 connections now open)
m31000| Fri Feb 22 11:57:08.792 [conn743] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|67 } } cursorid:325915605933304 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.792 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.792 [conn743] end connection 165.225.128.186:55312 (7 connections now open)
m31000| Fri Feb 22 11:57:08.792 [initandlisten] connection accepted from 165.225.128.186:52810 #745 (8 connections now open)
m31000| Fri Feb 22 11:57:08.883 [conn1] going to kill op: op: 15049.0
m31000| Fri Feb 22 11:57:08.883 [conn1] going to kill op: op: 15048.0
m31000| Fri Feb 22 11:57:08.884 [conn744] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.884 [conn744] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|77 } } cursorid:326349366233069 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:08.884 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.884 [conn744] end connection 165.225.128.186:47338 (7 connections now open)
m31000| Fri Feb 22 11:57:08.884 [conn745] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.884 [conn745] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|77 } } cursorid:326353630240559 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.884 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.884 [initandlisten] connection accepted from 165.225.128.186:58293 #746 (8 connections now open)
m31000| Fri Feb 22 11:57:08.884 [conn745] end connection 165.225.128.186:52810 (6 connections now open)
m31000| Fri Feb 22 11:57:08.884 [initandlisten] connection accepted from 165.225.128.186:43823 #747 (8 connections now open)
m31000| Fri Feb 22 11:57:08.984 [conn1] going to kill op: op: 15086.0
m31000| Fri Feb 22 11:57:08.984 [conn1] going to kill op: op: 15087.0
m31000| Fri Feb 22 11:57:08.986 [conn746] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.986 [conn747] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:08.986 [conn746] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|86 } } cursorid:326749250344759 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:08.986 [conn747] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|86 } } cursorid:326748669290964 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:08.986 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:08.986 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:08.986 [conn747] end connection 165.225.128.186:43823 (7 connections now open)
m31000| Fri Feb 22 11:57:08.987 [conn746] end connection 165.225.128.186:58293 (7 connections now open)
m31000| Fri Feb 22 11:57:08.987 [initandlisten] connection accepted from 165.225.128.186:38252 #748 (7 connections now open)
m31000| Fri Feb 22 11:57:08.987 [initandlisten] connection accepted from 165.225.128.186:47662 #749 (8 connections now open)
m31000| Fri Feb 22 11:57:09.085 [conn1] going to kill op: op: 15124.0
m31000| Fri Feb 22 11:57:09.085 [conn1] going to kill op: op: 15125.0
m31000| Fri Feb 22 11:57:09.090 [conn749] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.090 [conn749] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|96 } } cursorid:327186484037901 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.090 [conn748] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.090 [conn748] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534228000|96 } } cursorid:327186395499461 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:09.090 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.090 [conn749] end connection 165.225.128.186:47662 (7 connections now open)
m31001| Fri Feb 22 11:57:09.090 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.090 [conn748] end connection 165.225.128.186:38252 (6 connections now open)
m31000| Fri Feb 22 11:57:09.090 [initandlisten] connection accepted from 165.225.128.186:53507 #750 (7 connections now open)
m31000| Fri Feb 22 11:57:09.090 [initandlisten] connection accepted from 165.225.128.186:55530 #751 (8 connections now open)
m31000| Fri Feb 22 11:57:09.186 [conn1] going to kill op: op: 15166.0
m31000| Fri Feb 22 11:57:09.186 [conn1] going to kill op: op: 15167.0
m31000| Fri Feb 22 11:57:09.193 [conn750] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.193 [conn750] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|7 } } cursorid:327626073117691 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.193 [conn750] ClientCursor::find(): cursor not found in map '327626073117691' (ok after a drop)
m31002| Fri Feb 22 11:57:09.193 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.193 [conn751] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.193 [conn751] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|7 } } cursorid:327626265685363 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.193 [conn750] end connection 165.225.128.186:53507 (7 connections now open)
m31001| Fri Feb 22 11:57:09.193 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.193 [conn751] end connection 165.225.128.186:55530 (6 connections now open)
m31000| Fri Feb 22 11:57:09.193 [initandlisten] connection accepted from 165.225.128.186:64837 #752 (7 connections now open)
m31000| Fri Feb 22 11:57:09.194 [initandlisten] connection accepted from 165.225.128.186:57261 #753 (8 connections now open)
m31000| Fri Feb 22 11:57:09.287 [conn1] going to kill op: op: 15204.0
m31000| Fri Feb 22 11:57:09.287 [conn1] going to kill op: op: 15205.0
m31000| Fri Feb 22 11:57:09.296 [conn752] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.296 [conn752] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|18 } } cursorid:328062497035719 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:09.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.296 [conn753] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.296 [conn753] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|18 } } cursorid:328064062640568 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.296 [conn752] end connection 165.225.128.186:64837 (7 connections now open)
m31001| Fri Feb 22 11:57:09.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.296 [conn753] end connection 165.225.128.186:57261 (6 connections now open)
m31000| Fri Feb 22 11:57:09.296 [initandlisten] connection accepted from 165.225.128.186:64922 #754 (7 connections now open)
m31000| Fri Feb 22 11:57:09.296 [initandlisten] connection accepted from 165.225.128.186:59481 #755 (8 connections now open)
m31000| Fri Feb 22 11:57:09.388 [conn1] going to kill op: op: 15242.0
m31000| Fri Feb 22 11:57:09.388 [conn1] going to kill op: op: 15239.0
m31000| Fri Feb 22 11:57:09.388 [conn1] going to kill op: op: 15240.0
m31000| Fri Feb 22 11:57:09.388 [conn754] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.388 [conn754] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|28 } } cursorid:328500591586069 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:09.389 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.389 [conn754] end connection 165.225.128.186:64922 (7 connections now open)
m31000| Fri Feb 22 11:57:09.389 [initandlisten] connection accepted from 165.225.128.186:47111 #756 (8 connections now open)
m31000| Fri Feb 22 11:57:09.389 [conn755] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.389 [conn755] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|28 } } cursorid:328501822673849 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:09.389 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.389 [conn755] end connection 165.225.128.186:59481 (7 connections now open)
m31000| Fri Feb 22 11:57:09.390 [initandlisten] connection accepted from 165.225.128.186:35241 #757 (8 connections now open)
m31000| Fri Feb 22 11:57:09.393 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.393 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|37 } } cursorid:328844539932060 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:09.393 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.489 [conn1] going to kill op: op: 15281.0
m31000| Fri Feb 22 11:57:09.489 [conn1] going to kill op: op: 15280.0
m31000| Fri Feb 22 11:57:09.492 [conn757] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.492 [conn756] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.492 [conn757] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|37 } } cursorid:328895753938462 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.492 [conn756] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|37 } } cursorid:328892200648992 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.492 [conn757] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:09.492 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:09.492 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.492 [conn757] end connection 165.225.128.186:35241 (7 connections now open)
m31000| Fri Feb 22 11:57:09.492 [conn756] end connection 165.225.128.186:47111 (7 connections now open)
m31000| Fri Feb 22 11:57:09.493 [initandlisten] connection accepted from 165.225.128.186:55142 #758 (7 connections now open)
m31000| Fri Feb 22 11:57:09.493 [initandlisten] connection accepted from 165.225.128.186:33343 #759 (8 connections now open)
m31000| Fri Feb 22 11:57:09.590 [conn1] going to kill op: op: 15321.0
m31000| Fri Feb 22 11:57:09.590 [conn1] going to kill op: op: 15320.0
m31000| Fri Feb 22 11:57:09.591 [conn1] going to kill op: op: 15319.0
m31000| Fri Feb 22 11:57:09.595 [conn759] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.595 [conn759] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|47 } } cursorid:329335724400142 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:09.595 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.595 [conn759] end connection 165.225.128.186:33343 (7 connections now open)
m31000| Fri Feb 22 11:57:09.596 [initandlisten] connection accepted from 165.225.128.186:46247 #760 (8 connections now open)
m31000| Fri Feb 22 11:57:09.596 [conn758] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.596 [conn758] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|47 } } cursorid:329334419523853 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.596 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.596 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|56 } } cursorid:329721813701443 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:104 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:09.596 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.596 [conn758] end connection 165.225.128.186:55142 (7 connections now open)
m31000| Fri Feb 22 11:57:09.597 [initandlisten] connection accepted from 165.225.128.186:60298 #761 (8 connections now open)
m31002| Fri Feb 22 11:57:09.597 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.691 [conn1] going to kill op: op: 15363.0
m31000| Fri Feb 22 11:57:09.692 [conn1] going to kill op: op: 15362.0
m31000| Fri Feb 22 11:57:09.699 [conn760] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.699 [conn761] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.700 [conn761] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|57 } } cursorid:329772484806759 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.700 [conn760] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|57 } } cursorid:329767683655951 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:103 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:09.700 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:09.700 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.700 [conn761] end connection 165.225.128.186:60298 (7 connections now open)
m31000| Fri Feb 22 11:57:09.700 [conn760] end connection 165.225.128.186:46247 (7 connections now open)
m31000| Fri Feb 22 11:57:09.700 [initandlisten] connection accepted from 165.225.128.186:54873 #762 (7 connections now open)
m31000| Fri Feb 22 11:57:09.700 [initandlisten] connection accepted from 165.225.128.186:55947 #763 (8 connections now open)
m31000| Fri Feb 22 11:57:09.792 [conn1] going to kill op: op: 15397.0
m31000| Fri Feb 22 11:57:09.793 [conn1] going to kill op: op: 15398.0
m31000| Fri Feb 22 11:57:09.793 [conn763] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.793 [conn763] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|68 } } cursorid:330211779013131 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:09.793 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.793 [conn762] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.793 [conn763] end connection 165.225.128.186:55947 (7 connections now open)
m31000| Fri Feb 22 11:57:09.793 [conn762] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|68 } } cursorid:330210540559182 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.793 [conn762] ClientCursor::find(): cursor not found in map '330210540559182' (ok after a drop)
m31002| Fri Feb 22 11:57:09.794 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.794 [conn762] end connection 165.225.128.186:54873 (6 connections now open)
m31000| Fri Feb 22 11:57:09.794 [initandlisten] connection accepted from 165.225.128.186:43154 #764 (7 connections now open)
m31000| Fri Feb 22 11:57:09.794 [initandlisten] connection accepted from 165.225.128.186:62859 #765 (8 connections now open)
m31000| Fri Feb 22 11:57:09.893 [conn1] going to kill op: op: 15435.0
m31000| Fri Feb 22 11:57:09.894 [conn1] going to kill op: op: 15436.0
m31000| Fri Feb 22 11:57:09.896 [conn764] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.896 [conn764] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|77 } } cursorid:330605293689840 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:09.896 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.897 [conn764] end connection 165.225.128.186:43154 (7 connections now open)
m31000| Fri Feb 22 11:57:09.897 [conn765] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.897 [conn765] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|77 } } cursorid:330606051361523 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:09.897 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:09.897 [conn765] end connection 165.225.128.186:62859 (6 connections now open)
m31000| Fri Feb 22 11:57:09.897 [initandlisten] connection accepted from 165.225.128.186:49169 #766 (8 connections now open)
m31000| Fri Feb 22 11:57:09.897 [initandlisten] connection accepted from 165.225.128.186:54935 #767 (8 connections now open)
m31000| Fri Feb 22 11:57:09.994 [conn1] going to kill op: op: 15474.0
m31000| Fri Feb 22 11:57:09.995 [conn1] going to kill op: op: 15473.0
m31000| Fri Feb 22 11:57:09.999 [conn767] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:09.999 [conn767] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|87 } } cursorid:331045284304215 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:09.999 [conn766] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.000 [conn766] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|87 } } cursorid:331044793883442 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:10.000 [conn767] getMore: cursorid not found local.oplog.rs 331045284304215
m31002| Fri Feb 22 11:57:10.000 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:10.000 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.000 [conn767] end connection 165.225.128.186:54935 (7 connections now open)
m31000| Fri Feb 22 11:57:10.000 [conn766] end connection 165.225.128.186:49169 (7 connections now open)
m31000| Fri Feb 22 11:57:10.000 [initandlisten] connection accepted from 165.225.128.186:40114 #768 (7 connections now open)
m31000| Fri Feb 22 11:57:10.000 [initandlisten] connection accepted from 165.225.128.186:50802 #769 (8 connections now open)
m31000| Fri Feb 22 11:57:10.095 [conn1] going to kill op: op: 15512.0
m31000| Fri Feb 22 11:57:10.095 [conn1] going to kill op: op: 15513.0
m31000| Fri Feb 22 11:57:10.103 [conn768] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.103 [conn768] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|97 } } cursorid:331481494074736 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:10.103 [conn769] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.103 [conn769] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534229000|97 } } cursorid:331483131505784 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:10.103 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.103 [conn768] end connection 165.225.128.186:40114 (7 connections now open)
m31002| Fri Feb 22 11:57:10.103 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.103 [conn769] end connection 165.225.128.186:50802 (6 connections now open)
m31000| Fri Feb 22 11:57:10.104 [initandlisten] connection accepted from 165.225.128.186:33785 #770 (7 connections now open)
m31000| Fri Feb 22 11:57:10.104 [initandlisten] connection accepted from 165.225.128.186:39534 #771 (8 connections now open)
m31000| Fri Feb 22 11:57:10.196 [conn1] going to kill op: op: 15553.0
m31000| Fri Feb 22 11:57:10.196 [conn770] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.196 [conn1] going to kill op: op: 15552.0
m31000| Fri Feb 22 11:57:10.196 [conn770] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|9 } } cursorid:331919682342127 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:10.197 [conn770] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:10.197 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.197 [conn770] end connection 165.225.128.186:33785 (7 connections now open)
m31000| Fri Feb 22 11:57:10.197 [initandlisten] connection accepted from 165.225.128.186:63049 #772 (8 connections now open)
m31000| Fri Feb 22 11:57:10.297 [conn1] going to kill op: op: 15588.0
m31000| Fri Feb 22 11:57:10.297 [conn1] going to kill op: op: 15587.0
m31000| Fri Feb 22 11:57:10.298 [conn771] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.298 [conn771] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|9 } } cursorid:331921424622013 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:10.298 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.298 [conn771] end connection 165.225.128.186:39534 (7 connections now open)
m31000| Fri Feb 22 11:57:10.298 [initandlisten] connection accepted from 165.225.128.186:52714 #773 (8 connections now open)
m31000| Fri Feb 22 11:57:10.299 [conn772] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.299 [conn772] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|19 } } cursorid:332311862248408 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:10.299 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.299 [conn772] end connection 165.225.128.186:63049 (7 connections now open)
m31000| Fri Feb 22 11:57:10.300 [initandlisten] connection accepted from 165.225.128.186:36126 #774 (8 connections now open)
m31000| Fri Feb 22 11:57:10.398 [conn1] going to kill op: op: 15628.0
m31000| Fri Feb 22 11:57:10.398 [conn1] going to kill op: op: 15625.0
m31000| Fri Feb 22 11:57:10.399 [conn1] going to kill op: op: 15626.0
m31000| Fri Feb 22 11:57:10.401 [conn773] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.401 [conn773] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|29 } } cursorid:332745650318849 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:10.401 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.401 [conn773] end connection 165.225.128.186:52714 (7 connections now open)
m31000| Fri Feb 22 11:57:10.401 [conn774] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.401 [initandlisten] connection accepted from 165.225.128.186:61802 #775 (8 connections now open)
m31000| Fri Feb 22 11:57:10.401 [conn774] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|29 } } cursorid:332748744027696 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:10.401 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.402 [conn774] end connection 165.225.128.186:36126 (7 connections now open)
m31000| Fri Feb 22 11:57:10.402 [initandlisten] connection accepted from 165.225.128.186:58174 #776 (8 connections now open)
m31000| Fri Feb 22 11:57:10.403 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.403 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|39 } } cursorid:333136621021841 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:10.404 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.499 [conn1] going to kill op: op: 15666.0
m31000| Fri Feb 22 11:57:10.499 [conn1] going to kill op: op: 15667.0
m31000| Fri Feb 22 11:57:10.503 [conn775] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:10.503 [conn775] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|39 } } cursorid:333184086504591 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:10.503 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:10.503 [conn775] end connection 165.225.128.186:61802 (7 connections now open)
m31000| Fri Feb 22 11:57:10.504 [initandlisten] connection accepted from 165.225.128.186:59719
#777 (8 connections now open) m31000| Fri Feb 22 11:57:10.504 [conn776] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.504 [conn776] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|39 } } cursorid:333187499200721 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:10.504 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.504 [conn776] end connection 165.225.128.186:58174 (7 connections now open) m31000| Fri Feb 22 11:57:10.505 [initandlisten] connection accepted from 165.225.128.186:60278 #778 (8 connections now open) m31000| Fri Feb 22 11:57:10.600 [conn1] going to kill op: op: 15707.0 m31000| Fri Feb 22 11:57:10.600 [conn1] going to kill op: op: 15705.0 m31000| Fri Feb 22 11:57:10.601 [conn1] going to kill op: op: 15704.0 m31000| Fri Feb 22 11:57:10.606 [conn777] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.606 [conn777] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|49 } } cursorid:333621593202378 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:10.606 [conn777] ClientCursor::find(): cursor not found in map '333621593202378' (ok after a drop) m31002| Fri Feb 22 11:57:10.606 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.606 [conn777] end connection 165.225.128.186:59719 (7 connections now open) m31000| Fri Feb 22 11:57:10.607 [initandlisten] connection accepted from 165.225.128.186:45970 #779 (8 connections now open) m31000| Fri Feb 22 11:57:10.607 [conn778] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.607 [conn778] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|49 } } cursorid:333625088184766 ntoreturn:0 keyUpdates:0 
exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:10.607 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.607 [conn778] end connection 165.225.128.186:60278 (7 connections now open) m31000| Fri Feb 22 11:57:10.607 [initandlisten] connection accepted from 165.225.128.186:57651 #780 (8 connections now open) m31000| Fri Feb 22 11:57:10.608 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.608 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|59 } } cursorid:334012490698214 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:10.608 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.701 [conn1] going to kill op: op: 15745.0 m31000| Fri Feb 22 11:57:10.701 [conn1] going to kill op: op: 15746.0 m31000| Fri Feb 22 11:57:10.709 [conn779] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.709 [conn779] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|59 } } cursorid:334058563412641 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:10.709 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.709 [conn779] end connection 165.225.128.186:45970 (7 connections now open) m31000| Fri Feb 22 11:57:10.709 [initandlisten] connection accepted from 165.225.128.186:53069 #781 (8 connections now open) m31000| Fri Feb 22 11:57:10.710 [conn780] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.710 [conn780] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|59 } } cursorid:334062750181743 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:10.710 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.710 [conn780] end connection 165.225.128.186:57651 (7 connections now open) m31000| Fri Feb 22 11:57:10.710 [initandlisten] connection accepted from 165.225.128.186:60696 #782 (8 connections now open) m31000| Fri Feb 22 11:57:10.803 [conn1] going to kill op: op: 15783.0 m31000| Fri Feb 22 11:57:10.803 [conn1] going to kill op: op: 15781.0 m31000| Fri Feb 22 11:57:10.812 [conn781] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.812 [conn781] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|69 } } cursorid:334497120240971 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:10.812 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.812 [conn781] end connection 165.225.128.186:53069 (7 connections now open) m31000| Fri Feb 22 11:57:10.812 [initandlisten] connection accepted from 165.225.128.186:38906 #783 (8 connections now open) m31000| Fri Feb 22 11:57:10.903 [conn1] going to kill op: op: 15815.0 m31000| Fri Feb 22 11:57:10.904 [conn1] going to kill op: op: 15814.0 m31000| Fri Feb 22 11:57:10.904 [conn782] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.904 [conn782] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|69 } } cursorid:334501363597895 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:10.904 [conn783] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:10.904 [conn783] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|79 } } cursorid:334934453053321 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:10.904 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.904 [conn783] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31000| Fri Feb 22 11:57:10.904 [conn782] end connection 165.225.128.186:60696 (7 connections now open) m31002| Fri Feb 22 11:57:10.904 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:10.904 [conn783] end connection 165.225.128.186:38906 (6 connections now open) m31000| Fri Feb 22 11:57:10.905 [initandlisten] connection accepted from 165.225.128.186:41257 #784 (7 connections now open) m31000| Fri Feb 22 11:57:10.905 [initandlisten] connection accepted from 165.225.128.186:57771 #785 (8 connections now open) m31000| Fri Feb 22 11:57:11.004 [conn1] going to kill op: op: 15855.0 m31000| Fri Feb 22 11:57:11.005 [conn1] going to kill op: op: 15856.0 m31000| Fri Feb 22 11:57:11.007 [conn785] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.007 [conn785] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|88 } } cursorid:335330183790689 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:11.007 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.007 [conn785] end connection 165.225.128.186:57771 (7 connections now open) m31000| Fri Feb 22 11:57:11.007 [conn784] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.007 [conn784] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|88 } } cursorid:335330628716860 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.008 [initandlisten] connection accepted from 165.225.128.186:44705 #786 (8 connections now 
open) m31001| Fri Feb 22 11:57:11.008 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.008 [conn784] end connection 165.225.128.186:41257 (7 connections now open) m31000| Fri Feb 22 11:57:11.008 [initandlisten] connection accepted from 165.225.128.186:33716 #787 (8 connections now open) m31000| Fri Feb 22 11:57:11.105 [conn1] going to kill op: op: 15893.0 m31000| Fri Feb 22 11:57:11.106 [conn1] going to kill op: op: 15894.0 m31000| Fri Feb 22 11:57:11.110 [conn786] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.110 [conn786] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|99 } } cursorid:335769073689704 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:11.110 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.110 [conn786] end connection 165.225.128.186:44705 (7 connections now open) m31000| Fri Feb 22 11:57:11.110 [conn787] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.110 [conn787] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534230000|99 } } cursorid:335768356620659 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.110 [initandlisten] connection accepted from 165.225.128.186:49683 #788 (8 connections now open) m31001| Fri Feb 22 11:57:11.110 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.111 [conn787] end connection 165.225.128.186:33716 (7 connections now open) m31000| Fri Feb 22 11:57:11.111 [initandlisten] connection accepted from 165.225.128.186:62345 #789 (8 connections now open) m31000| Fri Feb 22 11:57:11.206 [conn1] going to kill op: op: 15933.0 m31000| Fri Feb 22 11:57:11.206 [conn1] going to kill op: op: 15934.0 m31000| Fri Feb 22 
11:57:11.212 [conn788] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.212 [conn788] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|10 } } cursorid:336202855918629 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:11.212 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.213 [conn788] end connection 165.225.128.186:49683 (7 connections now open) m31000| Fri Feb 22 11:57:11.213 [conn789] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.213 [conn789] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|10 } } cursorid:336206448294236 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.213 [initandlisten] connection accepted from 165.225.128.186:38467 #790 (8 connections now open) m31001| Fri Feb 22 11:57:11.213 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.213 [conn789] end connection 165.225.128.186:62345 (7 connections now open) m31000| Fri Feb 22 11:57:11.213 [initandlisten] connection accepted from 165.225.128.186:47587 #791 (8 connections now open) m31000| Fri Feb 22 11:57:11.307 [conn1] going to kill op: op: 15971.0 m31000| Fri Feb 22 11:57:11.307 [conn1] going to kill op: op: 15972.0 m31000| Fri Feb 22 11:57:11.315 [conn790] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.315 [conn790] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|20 } } cursorid:336641279533032 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:11.315 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.315 [conn790] end connection 
165.225.128.186:38467 (7 connections now open) m31000| Fri Feb 22 11:57:11.315 [conn791] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.315 [conn791] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|20 } } cursorid:336645809771545 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:32 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.315 [conn791] ClientCursor::find(): cursor not found in map '336645809771545' (ok after a drop) m31001| Fri Feb 22 11:57:11.315 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.315 [initandlisten] connection accepted from 165.225.128.186:56516 #792 (8 connections now open) m31000| Fri Feb 22 11:57:11.315 [conn791] end connection 165.225.128.186:47587 (6 connections now open) m31000| Fri Feb 22 11:57:11.316 [initandlisten] connection accepted from 165.225.128.186:65161 #793 (8 connections now open) m31000| Fri Feb 22 11:57:11.408 [conn1] going to kill op: op: 16012.0 m31000| Fri Feb 22 11:57:11.409 [conn1] going to kill op: op: 16010.0 m31000| Fri Feb 22 11:57:11.409 [conn1] going to kill op: op: 16011.0 m31000| Fri Feb 22 11:57:11.418 [conn793] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.418 [conn793] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|30 } } cursorid:337083011977258 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:11.418 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.418 [conn792] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.418 [conn792] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|30 } } cursorid:337083651394310 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms 
m31000| Fri Feb 22 11:57:11.418 [conn793] end connection 165.225.128.186:65161 (7 connections now open) m31002| Fri Feb 22 11:57:11.418 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.418 [conn792] end connection 165.225.128.186:56516 (6 connections now open) m31000| Fri Feb 22 11:57:11.418 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.418 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|39 } } cursorid:337425879693818 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.418 [initandlisten] connection accepted from 165.225.128.186:47745 #794 (7 connections now open) m31000| Fri Feb 22 11:57:11.418 [initandlisten] connection accepted from 165.225.128.186:37820 #795 (8 connections now open) m31001| Fri Feb 22 11:57:11.419 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.509 [conn1] going to kill op: op: 16048.0 m31000| Fri Feb 22 11:57:11.510 [conn1] going to kill op: op: 16047.0 m31000| Fri Feb 22 11:57:11.510 [conn794] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.510 [conn794] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|40 } } cursorid:337521912115486 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.510 [conn795] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.510 [conn795] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|40 } } cursorid:337521249028036 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:11.510 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.510 [conn794] end 
connection 165.225.128.186:47745 (7 connections now open) m31002| Fri Feb 22 11:57:11.510 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.510 [conn795] end connection 165.225.128.186:37820 (6 connections now open) m31000| Fri Feb 22 11:57:11.511 [initandlisten] connection accepted from 165.225.128.186:64980 #796 (7 connections now open) m31000| Fri Feb 22 11:57:11.511 [initandlisten] connection accepted from 165.225.128.186:46419 #797 (8 connections now open) m31000| Fri Feb 22 11:57:11.610 [conn1] going to kill op: op: 16086.0 m31000| Fri Feb 22 11:57:11.611 [conn1] going to kill op: op: 16085.0 m31000| Fri Feb 22 11:57:11.613 [conn797] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.613 [conn797] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|49 } } cursorid:337917206679910 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:11.613 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.613 [conn797] end connection 165.225.128.186:46419 (7 connections now open) m31000| Fri Feb 22 11:57:11.613 [conn796] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.613 [conn796] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|49 } } cursorid:337915577148762 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.613 [conn796] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:11.613 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.613 [initandlisten] connection accepted from 165.225.128.186:41985 #798 (8 connections now open) m31000| Fri Feb 22 11:57:11.614 [conn796] end connection 165.225.128.186:64980 (7 connections now 
open) m31000| Fri Feb 22 11:57:11.614 [initandlisten] connection accepted from 165.225.128.186:34636 #799 (8 connections now open) m31000| Fri Feb 22 11:57:11.711 [conn1] going to kill op: op: 16135.0 m31000| Fri Feb 22 11:57:11.711 [conn1] going to kill op: op: 16134.0 m31000| Fri Feb 22 11:57:11.712 [conn1] going to kill op: op: 16133.0 m31000| Fri Feb 22 11:57:11.716 [conn798] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.716 [conn799] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.716 [conn798] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|59 } } cursorid:338353489247096 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.716 [conn799] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|59 } } cursorid:338355100545641 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:11.716 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:11.716 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.716 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.716 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|59 } } cursorid:338302092635233 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:44 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.716 [conn799] end connection 165.225.128.186:34636 (7 connections now open) m31000| Fri Feb 22 11:57:11.716 [conn798] end connection 165.225.128.186:41985 (7 connections now open) m31000| Fri Feb 22 11:57:11.717 [initandlisten] connection accepted from 165.225.128.186:64602 #800 (7 connections now open) m31000| Fri Feb 22 11:57:11.717 [initandlisten] connection 
accepted from 165.225.128.186:45466 #801 (8 connections now open) m31002| Fri Feb 22 11:57:11.717 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.812 [conn1] going to kill op: op: 16174.0 m31000| Fri Feb 22 11:57:11.813 [conn1] going to kill op: op: 16175.0 m31000| Fri Feb 22 11:57:11.818 [conn801] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.818 [conn801] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|69 } } cursorid:338792631033191 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:11.819 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.819 [conn801] end connection 165.225.128.186:45466 (7 connections now open) m31000| Fri Feb 22 11:57:11.819 [conn800] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.819 [conn800] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|69 } } cursorid:338791780292358 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:11.819 [initandlisten] connection accepted from 165.225.128.186:46414 #802 (8 connections now open) m31002| Fri Feb 22 11:57:11.819 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.819 [conn800] end connection 165.225.128.186:64602 (7 connections now open) m31000| Fri Feb 22 11:57:11.819 [initandlisten] connection accepted from 165.225.128.186:37232 #803 (8 connections now open) m31000| Fri Feb 22 11:57:11.914 [conn1] going to kill op: op: 16212.0 m31000| Fri Feb 22 11:57:11.914 [conn1] going to kill op: op: 16213.0 m31000| Fri Feb 22 11:57:11.921 [conn802] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.921 [conn802] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534231000|80 } } cursorid:339229914452065 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:11.921 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.921 [conn802] end connection 165.225.128.186:46414 (7 connections now open) m31000| Fri Feb 22 11:57:11.922 [conn803] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:11.922 [conn803] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|80 } } cursorid:339230753862353 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:11.922 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:11.922 [conn803] end connection 165.225.128.186:37232 (6 connections now open) m31000| Fri Feb 22 11:57:11.922 [initandlisten] connection accepted from 165.225.128.186:43046 #804 (8 connections now open) m31000| Fri Feb 22 11:57:11.922 [initandlisten] connection accepted from 165.225.128.186:64216 #805 (8 connections now open) m31000| Fri Feb 22 11:57:12.014 [conn1] going to kill op: op: 16247.0 m31000| Fri Feb 22 11:57:12.015 [conn1] going to kill op: op: 16248.0 m31000| Fri Feb 22 11:57:12.115 [conn1] going to kill op: op: 16278.0 m31000| Fri Feb 22 11:57:12.116 [conn1] going to kill op: op: 16279.0 m31000| Fri Feb 22 11:57:12.116 [conn805] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:12.116 [conn805] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|90 } } cursorid:339668097437768 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:12.116 [conn805] ClientCursor::find(): cursor not found in map '339668097437768' (ok after a drop) m31002| Fri Feb 22 11:57:12.116 [rsBackgroundSync] repl: old 
cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:12.116 [conn805] end connection 165.225.128.186:64216 (7 connections now open) m31000| Fri Feb 22 11:57:12.117 [initandlisten] connection accepted from 165.225.128.186:50146 #806 (8 connections now open) m31000| Fri Feb 22 11:57:12.117 [conn804] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:12.117 [conn804] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534231000|90 } } cursorid:339669095042030 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:91 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:12.117 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:12.117 [conn804] end connection 165.225.128.186:43046 (7 connections now open) m31000| Fri Feb 22 11:57:12.118 [initandlisten] connection accepted from 165.225.128.186:65371 #807 (8 connections now open) m31000| Fri Feb 22 11:57:12.216 [conn1] going to kill op: op: 16318.0 m31000| Fri Feb 22 11:57:12.217 [conn1] going to kill op: op: 16319.0 m31000| Fri Feb 22 11:57:12.219 [conn807] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:12.219 [conn807] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|11 } } cursorid:340494225929058 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:12.219 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:12.219 [conn806] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:12.219 [conn807] end connection 165.225.128.186:65371 (7 connections now open) m31000| Fri Feb 22 11:57:12.219 [conn806] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|11 } } cursorid:340489885995161 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 
10ms m31002| Fri Feb 22 11:57:12.220 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:12.220 [conn806] end connection 165.225.128.186:50146 (6 connections now open) m31000| Fri Feb 22 11:57:12.220 [initandlisten] connection accepted from 165.225.128.186:48717 #808 (8 connections now open) m31000| Fri Feb 22 11:57:12.220 [initandlisten] connection accepted from 165.225.128.186:48292 #809 (8 connections now open) m31001| Fri Feb 22 11:57:12.260 [conn6] end connection 165.225.128.186:38049 (2 connections now open) m31001| Fri Feb 22 11:57:12.260 [initandlisten] connection accepted from 165.225.128.186:53796 #8 (3 connections now open) m31000| Fri Feb 22 11:57:12.317 [conn1] going to kill op: op: 16357.0 m31000| Fri Feb 22 11:57:12.318 [conn1] going to kill op: op: 16358.0 m31000| Fri Feb 22 11:57:12.322 [conn808] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:12.322 [conn808] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|21 } } cursorid:340931472999472 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:12.322 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:12.322 [conn808] end connection 165.225.128.186:48717 (7 connections now open) m31000| Fri Feb 22 11:57:12.322 [initandlisten] connection accepted from 165.225.128.186:53890 #810 (8 connections now open) m31000| Fri Feb 22 11:57:12.323 [conn809] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:12.323 [conn809] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|21 } } cursorid:340932140262640 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:12.323 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:12.323 
[conn809] end connection 165.225.128.186:48292 (7 connections now open)
m31000| Fri Feb 22 11:57:12.323 [initandlisten] connection accepted from 165.225.128.186:61257 #811 (8 connections now open)
m31000| Fri Feb 22 11:57:12.418 [conn1] going to kill op: op: 16395.0
m31000| Fri Feb 22 11:57:12.419 [conn1] going to kill op: op: 16396.0
m31000| Fri Feb 22 11:57:12.425 [conn810] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.425 [conn810] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|31 } } cursorid:341364229609702 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:12.425 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.425 [conn810] end connection 165.225.128.186:53890 (7 connections now open)
m31000| Fri Feb 22 11:57:12.425 [initandlisten] connection accepted from 165.225.128.186:41370 #812 (8 connections now open)
m31000| Fri Feb 22 11:57:12.425 [conn811] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.425 [conn811] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|31 } } cursorid:341369967838646 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:12.426 [conn811] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:12.426 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.426 [conn811] end connection 165.225.128.186:61257 (7 connections now open)
m31000| Fri Feb 22 11:57:12.426 [initandlisten] connection accepted from 165.225.128.186:48216 #813 (8 connections now open)
m31000| Fri Feb 22 11:57:12.519 [conn1] going to kill op: op: 16444.0
m31000| Fri Feb 22 11:57:12.520 [conn1] going to kill op: op: 16443.0
m31000| Fri Feb 22 11:57:12.520 [conn1] going to kill op: op: 16445.0
m31000| Fri Feb 22 11:57:12.528 [conn812] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.528 [conn812] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|41 } } cursorid:341803365267727 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:12.528 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.528 [conn812] end connection 165.225.128.186:41370 (7 connections now open)
m31000| Fri Feb 22 11:57:12.528 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.528 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|41 } } cursorid:341756012631496 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:12.528 [conn813] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.528 [conn813] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|41 } } cursorid:341806865899517 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:12.528 [initandlisten] connection accepted from 165.225.128.186:43181 #814 (8 connections now open)
m31002| Fri Feb 22 11:57:12.528 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.528 [conn813] end connection 165.225.128.186:48216 (7 connections now open)
m31000| Fri Feb 22 11:57:12.528 [initandlisten] connection accepted from 165.225.128.186:34831 #815 (8 connections now open)
m31001| Fri Feb 22 11:57:12.529 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.620 [conn1] going to kill op: op: 16481.0
m31000| Fri Feb 22 11:57:12.621 [conn1] going to kill op: op: 16483.0
m31000| Fri Feb 22 11:57:12.630 [conn814] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.630 [conn814] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|51 } } cursorid:342241802798505 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:12.630 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.630 [conn814] end connection 165.225.128.186:43181 (7 connections now open)
m31000| Fri Feb 22 11:57:12.630 [initandlisten] connection accepted from 165.225.128.186:64971 #816 (8 connections now open)
m31000| Fri Feb 22 11:57:12.721 [conn1] going to kill op: op: 16515.0
m31000| Fri Feb 22 11:57:12.722 [conn1] going to kill op: op: 16514.0
m31000| Fri Feb 22 11:57:12.722 [conn816] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.722 [conn816] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|61 } } cursorid:342679741871682 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:12.722 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.722 [conn815] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.722 [conn816] end connection 165.225.128.186:64971 (7 connections now open)
m31000| Fri Feb 22 11:57:12.722 [conn815] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|51 } } cursorid:342245991700915 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:12.722 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.723 [initandlisten] connection accepted from 165.225.128.186:65111 #817 (8 connections now open)
m31000| Fri Feb 22 11:57:12.723 [conn815] end connection 165.225.128.186:34831 (7 connections now open)
m31000| Fri Feb 22 11:57:12.723 [initandlisten] connection accepted from 165.225.128.186:49872 #818 (8 connections now open)
m31000| Fri Feb 22 11:57:12.822 [conn1] going to kill op: op: 16566.0
m31000| Fri Feb 22 11:57:12.823 [conn1] going to kill op: op: 16564.0
m31000| Fri Feb 22 11:57:12.823 [conn1] going to kill op: op: 16565.0
m31000| Fri Feb 22 11:57:12.825 [conn817] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.825 [conn817] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|70 } } cursorid:343073981955450 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:12.825 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.825 [conn817] end connection 165.225.128.186:65111 (7 connections now open)
m31000| Fri Feb 22 11:57:12.825 [conn818] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.825 [conn818] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|70 } } cursorid:343073673106149 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:12.825 [initandlisten] connection accepted from 165.225.128.186:52095 #819 (8 connections now open)
m31000| Fri Feb 22 11:57:12.825 [conn818] ClientCursor::find(): cursor not found in map '343073673106149' (ok after a drop)
m31002| Fri Feb 22 11:57:12.825 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.825 [conn818] end connection 165.225.128.186:49872 (7 connections now open)
m31000| Fri Feb 22 11:57:12.825 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.826 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|70 } } cursorid:343022149430880 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:12.826 [initandlisten] connection accepted from 165.225.128.186:36259 #820 (8 connections now open)
m31002| Fri Feb 22 11:57:12.826 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.923 [conn1] going to kill op: op: 16605.0
m31000| Fri Feb 22 11:57:12.924 [conn1] going to kill op: op: 16604.0
m31000| Fri Feb 22 11:57:12.927 [conn819] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.927 [conn819] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|81 } } cursorid:343507771215178 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:12.927 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.927 [conn819] end connection 165.225.128.186:52095 (7 connections now open)
m31000| Fri Feb 22 11:57:12.928 [conn820] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:12.928 [conn820] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|81 } } cursorid:343512939528561 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:12.928 [initandlisten] connection accepted from 165.225.128.186:43757 #821 (8 connections now open)
m31002| Fri Feb 22 11:57:12.928 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:12.928 [conn820] end connection 165.225.128.186:36259 (7 connections now open)
m31000| Fri Feb 22 11:57:12.928 [initandlisten] connection accepted from 165.225.128.186:46312 #822 (8 connections now open)
m31000| Fri Feb 22 11:57:13.024 [conn1] going to kill op: op: 16643.0
m31000| Fri Feb 22 11:57:13.025 [conn1] going to kill op: op: 16642.0
m31000| Fri Feb 22 11:57:13.030 [conn821] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.030 [conn821] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|91 } } cursorid:343947247731974 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.030 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.030 [conn821] end connection 165.225.128.186:43757 (7 connections now open)
m31000| Fri Feb 22 11:57:13.030 [conn822] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.030 [conn822] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534232000|91 } } cursorid:343950390398945 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.031 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.031 [initandlisten] connection accepted from 165.225.128.186:56899 #823 (8 connections now open)
m31000| Fri Feb 22 11:57:13.031 [conn822] end connection 165.225.128.186:46312 (6 connections now open)
m31000| Fri Feb 22 11:57:13.031 [initandlisten] connection accepted from 165.225.128.186:49878 #824 (8 connections now open)
m31000| Fri Feb 22 11:57:13.125 [conn1] going to kill op: op: 16683.0
m31000| Fri Feb 22 11:57:13.126 [conn1] going to kill op: op: 16682.0
m31000| Fri Feb 22 11:57:13.134 [conn824] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.134 [conn824] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|2 } } cursorid:344389646485116 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.134 [conn823] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.134 [conn823] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|2 } } cursorid:344389555297474 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.134 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.134 [conn823] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31000| Fri Feb 22 11:57:13.134 [conn824] end connection 165.225.128.186:49878 (7 connections now open)
m31001| Fri Feb 22 11:57:13.134 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.134 [conn823] end connection 165.225.128.186:56899 (6 connections now open)
m31000| Fri Feb 22 11:57:13.134 [initandlisten] connection accepted from 165.225.128.186:60066 #825 (7 connections now open)
m31000| Fri Feb 22 11:57:13.135 [initandlisten] connection accepted from 165.225.128.186:55589 #826 (8 connections now open)
m31000| Fri Feb 22 11:57:13.226 [conn1] going to kill op: op: 16717.0
m31000| Fri Feb 22 11:57:13.226 [conn1] going to kill op: op: 16718.0
m31000| Fri Feb 22 11:57:13.227 [conn825] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.227 [conn825] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|12 } } cursorid:344826120891402 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.227 [conn826] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.227 [conn826] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|12 } } cursorid:344826833291489 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.227 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.227 [conn825] end connection 165.225.128.186:60066 (7 connections now open)
m31001| Fri Feb 22 11:57:13.227 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.227 [conn826] end connection 165.225.128.186:55589 (6 connections now open)
m31000| Fri Feb 22 11:57:13.227 [initandlisten] connection accepted from 165.225.128.186:52296 #827 (7 connections now open)
m31000| Fri Feb 22 11:57:13.227 [initandlisten] connection accepted from 165.225.128.186:53567 #828 (8 connections now open)
m31000| Fri Feb 22 11:57:13.327 [conn1] going to kill op: op: 16756.0
m31000| Fri Feb 22 11:57:13.327 [conn1] going to kill op: op: 16755.0
m31000| Fri Feb 22 11:57:13.329 [conn827] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.329 [conn827] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|21 } } cursorid:345222749387911 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.329 [conn828] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.330 [conn828] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|21 } } cursorid:345222467182884 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.330 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:13.330 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.330 [conn827] end connection 165.225.128.186:52296 (7 connections now open)
m31000| Fri Feb 22 11:57:13.330 [conn828] end connection 165.225.128.186:53567 (6 connections now open)
m31000| Fri Feb 22 11:57:13.330 [initandlisten] connection accepted from 165.225.128.186:58439 #829 (7 connections now open)
m31000| Fri Feb 22 11:57:13.330 [initandlisten] connection accepted from 165.225.128.186:46214 #830 (8 connections now open)
m31000| Fri Feb 22 11:57:13.428 [conn1] going to kill op: op: 16795.0
m31000| Fri Feb 22 11:57:13.428 [conn1] going to kill op: op: 16793.0
m31000| Fri Feb 22 11:57:13.432 [conn830] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.432 [conn830] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|31 } } cursorid:345660817831055 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.432 [conn829] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.432 [conn829] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|31 } } cursorid:345660370238259 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.432 [conn830] end connection 165.225.128.186:46214 (7 connections now open)
m31002| Fri Feb 22 11:57:13.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.433 [conn829] end connection 165.225.128.186:58439 (6 connections now open)
m31000| Fri Feb 22 11:57:13.433 [initandlisten] connection accepted from 165.225.128.186:41680 #831 (8 connections now open)
m31000| Fri Feb 22 11:57:13.433 [initandlisten] connection accepted from 165.225.128.186:52564 #832 (8 connections now open)
m31000| Fri Feb 22 11:57:13.529 [conn1] going to kill op: op: 16832.0
m31000| Fri Feb 22 11:57:13.529 [conn1] going to kill op: op: 16833.0
m31000| Fri Feb 22 11:57:13.535 [conn831] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.535 [conn832] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.535 [conn831] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|41 } } cursorid:346097671905795 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.535 [conn832] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|41 } } cursorid:346097742977694 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.535 [conn832] ClientCursor::find(): cursor not found in map '346097742977694' (ok after a drop)
m31001| Fri Feb 22 11:57:13.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:13.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.535 [conn831] end connection 165.225.128.186:41680 (7 connections now open)
m31000| Fri Feb 22 11:57:13.535 [conn832] end connection 165.225.128.186:52564 (7 connections now open)
m31000| Fri Feb 22 11:57:13.536 [initandlisten] connection accepted from 165.225.128.186:43504 #833 (7 connections now open)
m31000| Fri Feb 22 11:57:13.536 [initandlisten] connection accepted from 165.225.128.186:44656 #834 (8 connections now open)
m31000| Fri Feb 22 11:57:13.629 [conn1] going to kill op: op: 16879.0
m31000| Fri Feb 22 11:57:13.629 [conn1] going to kill op: op: 16881.0
m31000| Fri Feb 22 11:57:13.630 [conn1] going to kill op: op: 16882.0
m31000| Fri Feb 22 11:57:13.630 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.630 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|52 } } cursorid:346483723039302 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.630 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.637 [conn833] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.637 [conn833] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|52 } } cursorid:346535638155255 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.638 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.638 [conn833] end connection 165.225.128.186:43504 (7 connections now open)
m31000| Fri Feb 22 11:57:13.638 [conn834] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.638 [conn834] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|52 } } cursorid:346535585008971 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.638 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.638 [conn834] end connection 165.225.128.186:44656 (6 connections now open)
m31000| Fri Feb 22 11:57:13.639 [initandlisten] connection accepted from 165.225.128.186:52870 #835 (7 connections now open)
m31000| Fri Feb 22 11:57:13.642 [initandlisten] connection accepted from 165.225.128.186:54378 #836 (8 connections now open)
m31000| Fri Feb 22 11:57:13.730 [conn1] going to kill op: op: 16917.0
m31000| Fri Feb 22 11:57:13.730 [conn1] going to kill op: op: 16918.0
m31000| Fri Feb 22 11:57:13.731 [conn835] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.731 [conn835] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|62 } } cursorid:346970421504647 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.731 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.731 [conn835] end connection 165.225.128.186:52870 (7 connections now open)
m31000| Fri Feb 22 11:57:13.731 [initandlisten] connection accepted from 165.225.128.186:50402 #837 (8 connections now open)
m31000| Fri Feb 22 11:57:13.734 [conn836] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.734 [conn836] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|62 } } cursorid:346974859660047 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.734 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.735 [conn836] end connection 165.225.128.186:54378 (7 connections now open)
m31000| Fri Feb 22 11:57:13.735 [initandlisten] connection accepted from 165.225.128.186:38528 #838 (8 connections now open)
m31000| Fri Feb 22 11:57:13.831 [conn1] going to kill op: op: 16958.0
m31000| Fri Feb 22 11:57:13.831 [conn1] going to kill op: op: 16955.0
m31000| Fri Feb 22 11:57:13.831 [conn1] going to kill op: op: 16957.0
m31000| Fri Feb 22 11:57:13.834 [conn837] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.834 [conn837] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|71 } } cursorid:347321272211850 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.834 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.834 [conn837] end connection 165.225.128.186:50402 (7 connections now open)
m31000| Fri Feb 22 11:57:13.834 [initandlisten] connection accepted from 165.225.128.186:36712 #839 (8 connections now open)
m31000| Fri Feb 22 11:57:13.837 [conn838] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.837 [conn838] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|71 } } cursorid:347326315135273 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:86 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.837 [conn838] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:13.837 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.837 [conn838] end connection 165.225.128.186:38528 (7 connections now open)
m31000| Fri Feb 22 11:57:13.838 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.838 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|80 } } cursorid:347713737779854 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:13.838 [initandlisten] connection accepted from 165.225.128.186:41481 #840 (8 connections now open)
m31002| Fri Feb 22 11:57:13.839 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.932 [conn1] going to kill op: op: 16996.0
m31000| Fri Feb 22 11:57:13.932 [conn1] going to kill op: op: 16998.0
m31000| Fri Feb 22 11:57:13.936 [conn839] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.936 [conn839] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|81 } } cursorid:347717009324029 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:13.936 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.936 [conn839] end connection 165.225.128.186:36712 (7 connections now open)
m31000| Fri Feb 22 11:57:13.937 [initandlisten] connection accepted from 165.225.128.186:55217 #841 (8 connections now open)
m31000| Fri Feb 22 11:57:13.939 [conn840] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:13.939 [conn840] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|81 } } cursorid:347721341768096 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:13.939 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:13.939 [conn840] end connection 165.225.128.186:41481 (7 connections now open)
m31000| Fri Feb 22 11:57:13.940 [initandlisten] connection accepted from 165.225.128.186:42995 #842 (8 connections now open)
m31000| Fri Feb 22 11:57:14.033 [conn1] going to kill op: op: 17034.0
m31000| Fri Feb 22 11:57:14.033 [conn1] going to kill op: op: 17036.0
m31000| Fri Feb 22 11:57:14.039 [conn841] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.039 [conn841] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|91 } } cursorid:348112209551606 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.039 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.039 [conn841] end connection 165.225.128.186:55217 (7 connections now open)
m31000| Fri Feb 22 11:57:14.040 [initandlisten] connection accepted from 165.225.128.186:40658 #843 (8 connections now open)
m31000| Fri Feb 22 11:57:14.042 [conn842] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.042 [conn842] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534233000|92 } } cursorid:348116741644284 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.042 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.042 [conn842] end connection 165.225.128.186:42995 (7 connections now open)
m31000| Fri Feb 22 11:57:14.043 [initandlisten] connection accepted from 165.225.128.186:44867 #844 (8 connections now open)
m31000| Fri Feb 22 11:57:14.134 [conn1] going to kill op: op: 17075.0
m31000| Fri Feb 22 11:57:14.134 [conn1] going to kill op: op: 17074.0
m31000| Fri Feb 22 11:57:14.135 [conn844] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.135 [conn844] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|4 } } cursorid:348511844313392 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.135 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.135 [conn844] end connection 165.225.128.186:44867 (7 connections now open)
m31000| Fri Feb 22 11:57:14.135 [initandlisten] connection accepted from 165.225.128.186:37508 #845 (8 connections now open)
m31000| Fri Feb 22 11:57:14.142 [conn843] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.142 [conn843] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|3 } } cursorid:348507223847360 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.142 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.142 [conn843] end connection 165.225.128.186:40658 (7 connections now open)
m31000| Fri Feb 22 11:57:14.142 [initandlisten] connection accepted from 165.225.128.186:44960 #846 (8 connections now open)
m31000| Fri Feb 22 11:57:14.235 [conn1] going to kill op: op: 17111.0
m31000| Fri Feb 22 11:57:14.235 [conn1] going to kill op: op: 17112.0
m31000| Fri Feb 22 11:57:14.238 [conn845] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.238 [conn845] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|13 } } cursorid:348902022241775 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:14.238 [conn845] ClientCursor::find(): cursor not found in map '348902022241775' (ok after a drop)
m31002| Fri Feb 22 11:57:14.238 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.238 [conn845] end connection 165.225.128.186:37508 (7 connections now open)
m31000| Fri Feb 22 11:57:14.239 [initandlisten] connection accepted from 165.225.128.186:38826 #847 (8 connections now open)
m31000| Fri Feb 22 11:57:14.336 [conn1] going to kill op: op: 17146.0
m31000| Fri Feb 22 11:57:14.336 [conn1] going to kill op: op: 17147.0
m31000| Fri Feb 22 11:57:14.336 [conn846] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.336 [conn846] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|13 } } cursorid:348907807707514 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.336 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.337 [conn846] end connection 165.225.128.186:44960 (7 connections now open)
m31000| Fri Feb 22 11:57:14.337 [initandlisten] connection accepted from 165.225.128.186:52223 #848 (8 connections now open)
m31000| Fri Feb 22 11:57:14.341 [conn847] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.341 [conn847] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|23 } } cursorid:349298407681657 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.341 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.341 [conn847] end connection 165.225.128.186:38826 (7 connections now open)
m31000| Fri Feb 22 11:57:14.342 [initandlisten] connection accepted from 165.225.128.186:61612 #849 (8 connections now open)
m31000| Fri Feb 22 11:57:14.436 [conn1] going to kill op: op: 17184.0
m31000| Fri Feb 22 11:57:14.437 [conn1] going to kill op: op: 17185.0
m31000| Fri Feb 22 11:57:14.439 [conn848] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.439 [conn848] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|33 } } cursorid:349689208898343 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.439 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.440 [conn848] end connection 165.225.128.186:52223 (7 connections now open)
m31000| Fri Feb 22 11:57:14.440 [initandlisten] connection accepted from 165.225.128.186:34315 #850 (8 connections now open)
m31000| Fri Feb 22 11:57:14.443 [conn849] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.443 [conn849] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|33 } } cursorid:349692371021059 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.444 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.444 [conn849] end connection 165.225.128.186:61612 (7 connections now open)
m31000| Fri Feb 22 11:57:14.444 [initandlisten] connection accepted from 165.225.128.186:52582 #851 (8 connections now open)
m31000| Fri Feb 22 11:57:14.537 [conn1] going to kill op: op: 17223.0
m31000| Fri Feb 22 11:57:14.538 [conn1] going to kill op: op: 17222.0
m31000| Fri Feb 22 11:57:14.542 [conn850] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.542 [conn850] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|43 } } cursorid:350083082432702 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.542 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.542 [conn850] end connection 165.225.128.186:34315 (7 connections now open)
m31000| Fri Feb 22 11:57:14.542 [initandlisten] connection accepted from 165.225.128.186:65081 #852 (8 connections now open)
m31000| Fri Feb 22 11:57:14.546 [conn851] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.546 [conn851] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|43 } } cursorid:350088907936688 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.546 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.546 [conn851] end connection 165.225.128.186:52582 (7 connections now open)
m31000| Fri Feb 22 11:57:14.547 [initandlisten] connection accepted from 165.225.128.186:45540 #853 (8 connections now open)
m31000| Fri Feb 22 11:57:14.638 [conn1] going to kill op: op: 17262.0
m31000| Fri Feb 22 11:57:14.638 [conn1] going to kill op: op: 17261.0
m31000| Fri Feb 22 11:57:14.639 [conn1] going to kill op: op: 17258.0
m31000| Fri Feb 22 11:57:14.644 [conn852] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.644 [conn852] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|53 } } cursorid:350478480779444 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:14.644 [conn852] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:14.644 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.644 [conn852] end connection 165.225.128.186:65081 (7 connections now open)
m31000| Fri Feb 22 11:57:14.644 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.644 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|62 } } cursorid:350827026321462 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:14.645 [initandlisten] connection accepted from 165.225.128.186:44622 #854 (8 connections now open)
m31001| Fri Feb 22 11:57:14.645 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.739 [conn1] going to kill op: op: 17297.0
m31000| Fri Feb 22 11:57:14.740 [conn1] going to kill op: op: 17295.0
m31000| Fri Feb 22 11:57:14.740 [conn853] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.740 [conn853] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|53 } } cursorid:350483857727844 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.740 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.740 [conn853] end connection 165.225.128.186:45540 (7 connections now open)
m31000| Fri Feb 22 11:57:14.741 [initandlisten] connection accepted from 165.225.128.186:40969 #855 (8 connections now open)
m31000| Fri Feb 22 11:57:14.747 [conn854] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.747 [conn854] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|63 } } cursorid:350875115143298 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.747 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.747 [conn854] end connection 165.225.128.186:44622 (7 connections now open)
m31000| Fri Feb 22 11:57:14.748 [initandlisten] connection accepted from 165.225.128.186:37620 #856 (8 connections now open)
m31000| Fri Feb 22 11:57:14.840 [conn1] going to kill op: op: 17336.0
m31000| Fri Feb 22 11:57:14.840 [conn1] going to kill op: op: 17333.0
m31000| Fri Feb 22 11:57:14.843 [conn855] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.843 [conn855] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|72 } } cursorid:351264148278955 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:14.843 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.843 [conn855] end connection 165.225.128.186:40969 (7 connections now open)
m31000| Fri Feb 22 11:57:14.843 [initandlisten] connection accepted from 165.225.128.186:39347 #857 (8 connections now open)
m31000| Fri Feb 22 11:57:14.850 [conn856] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.850 [conn856] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|73 } } cursorid:351270291076369 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.850 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.850 [conn856] end connection 165.225.128.186:37620 (7 connections now open)
m31000| Fri Feb 22 11:57:14.850 [initandlisten] connection accepted from 165.225.128.186:38411 #858 (8 connections now open)
m31000| Fri Feb 22 11:57:14.941 [conn1] going to kill op: op: 17385.0
m31000| Fri Feb 22 11:57:14.941 [conn1] going to kill op: op: 17382.0
m31000| Fri Feb 22 11:57:14.941 [conn1] going to kill op: op: 17384.0
m31000| Fri Feb 22 11:57:14.942 [conn858] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:14.942 [conn858] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|83 } } cursorid:351665345121841 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:14.942 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:14.942 [conn858] end connection
165.225.128.186:38411 (7 connections now open) m31000| Fri Feb 22 11:57:14.943 [initandlisten] connection accepted from 165.225.128.186:43624 #859 (8 connections now open) m31000| Fri Feb 22 11:57:14.946 [conn857] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:14.946 [conn857] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|82 } } cursorid:351659628131189 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:14.946 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:14.946 [conn857] end connection 165.225.128.186:39347 (7 connections now open) m31000| Fri Feb 22 11:57:14.946 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:14.946 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|82 } } cursorid:351655475478601 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:14.946 [initandlisten] connection accepted from 165.225.128.186:39742 #860 (8 connections now open) m31000| Fri Feb 22 11:57:14.947 [conn12] ClientCursor::find(): cursor not found in map '351655475478601' (ok after a drop) m31002| Fri Feb 22 11:57:14.947 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.042 [conn1] going to kill op: op: 17422.0 m31000| Fri Feb 22 11:57:15.042 [conn1] going to kill op: op: 17424.0 m31000| Fri Feb 22 11:57:15.045 [conn859] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.045 [conn859] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|92 } } cursorid:352055428450709 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.045 [rsBackgroundSync] repl: old cursor isDead, 
will initiate a new one m31000| Fri Feb 22 11:57:15.045 [conn859] end connection 165.225.128.186:43624 (7 connections now open) m31000| Fri Feb 22 11:57:15.045 [initandlisten] connection accepted from 165.225.128.186:44760 #861 (8 connections now open) m31000| Fri Feb 22 11:57:15.048 [conn860] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.048 [conn860] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534234000|93 } } cursorid:352058576765033 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.048 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.049 [conn860] end connection 165.225.128.186:39742 (7 connections now open) m31000| Fri Feb 22 11:57:15.049 [initandlisten] connection accepted from 165.225.128.186:62909 #862 (8 connections now open) m31000| Fri Feb 22 11:57:15.143 [conn1] going to kill op: op: 17462.0 m31000| Fri Feb 22 11:57:15.143 [conn1] going to kill op: op: 17464.0 m31000| Fri Feb 22 11:57:15.148 [conn861] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.148 [conn861] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|3 } } cursorid:352450462601501 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.148 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.148 [conn861] end connection 165.225.128.186:44760 (7 connections now open) m31000| Fri Feb 22 11:57:15.148 [initandlisten] connection accepted from 165.225.128.186:42262 #863 (8 connections now open) m31000| Fri Feb 22 11:57:15.151 [conn862] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.151 [conn862] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|4 } } cursorid:352455500926019 
ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.151 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.151 [conn862] end connection 165.225.128.186:62909 (7 connections now open) m31000| Fri Feb 22 11:57:15.151 [initandlisten] connection accepted from 165.225.128.186:50323 #864 (8 connections now open) m31000| Fri Feb 22 11:57:15.244 [conn1] going to kill op: op: 17501.0 m31000| Fri Feb 22 11:57:15.244 [conn1] going to kill op: op: 17502.0 m31000| Fri Feb 22 11:57:15.251 [conn863] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.251 [conn863] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|13 } } cursorid:352846409827197 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.251 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.251 [conn863] end connection 165.225.128.186:42262 (7 connections now open) m31000| Fri Feb 22 11:57:15.251 [initandlisten] connection accepted from 165.225.128.186:34076 #865 (8 connections now open) m31000| Fri Feb 22 11:57:15.253 [conn864] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.253 [conn864] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|14 } } cursorid:352849912205222 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.253 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.253 [conn864] end connection 165.225.128.186:50323 (7 connections now open) m31000| Fri Feb 22 11:57:15.254 [initandlisten] connection accepted from 165.225.128.186:41382 #866 (8 connections now open) m31000| Fri Feb 22 11:57:15.345 [conn1] going 
to kill op: op: 17539.0 m31000| Fri Feb 22 11:57:15.345 [conn1] going to kill op: op: 17537.0 m31000| Fri Feb 22 11:57:15.346 [conn866] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.346 [conn866] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|24 } } cursorid:353244725668477 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:15.346 [conn866] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:15.346 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.346 [conn866] end connection 165.225.128.186:41382 (7 connections now open) m31000| Fri Feb 22 11:57:15.347 [initandlisten] connection accepted from 165.225.128.186:63783 #867 (8 connections now open) m31000| Fri Feb 22 11:57:15.354 [conn865] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.354 [conn865] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|24 } } cursorid:353241017772090 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.354 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.354 [conn865] end connection 165.225.128.186:34076 (7 connections now open) m31000| Fri Feb 22 11:57:15.354 [initandlisten] connection accepted from 165.225.128.186:46015 #868 (8 connections now open) m31000| Fri Feb 22 11:57:15.446 [conn1] going to kill op: op: 17574.0 m31000| Fri Feb 22 11:57:15.446 [conn1] going to kill op: op: 17575.0 m31000| Fri Feb 22 11:57:15.449 [conn867] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.449 [conn867] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|33 } } cursorid:353635348287459 ntoreturn:0 keyUpdates:0 exception: operation was 
interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.449 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.449 [conn867] end connection 165.225.128.186:63783 (7 connections now open) m31000| Fri Feb 22 11:57:15.450 [initandlisten] connection accepted from 165.225.128.186:33682 #869 (8 connections now open) m31000| Fri Feb 22 11:57:15.547 [conn1] going to kill op: op: 17608.0 m31000| Fri Feb 22 11:57:15.547 [conn1] going to kill op: op: 17609.0 m31000| Fri Feb 22 11:57:15.547 [conn868] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.547 [conn868] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|34 } } cursorid:353640285884812 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.548 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.548 [conn868] end connection 165.225.128.186:46015 (7 connections now open) m31000| Fri Feb 22 11:57:15.548 [initandlisten] connection accepted from 165.225.128.186:64811 #870 (8 connections now open) m31000| Fri Feb 22 11:57:15.552 [conn869] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.552 [conn869] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|43 } } cursorid:354031336271074 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.552 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.552 [conn869] end connection 165.225.128.186:33682 (7 connections now open) m31000| Fri Feb 22 11:57:15.552 [initandlisten] connection accepted from 165.225.128.186:62175 #871 (8 connections now open) m31000| Fri Feb 22 11:57:15.648 [conn1] going to kill op: op: 17649.0 m31000| Fri Feb 22 
11:57:15.648 [conn1] going to kill op: op: 17647.0 m31000| Fri Feb 22 11:57:15.648 [conn1] going to kill op: op: 17646.0 m31000| Fri Feb 22 11:57:15.651 [conn870] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.651 [conn870] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|53 } } cursorid:354422715923761 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.651 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.651 [conn870] end connection 165.225.128.186:64811 (7 connections now open) m31000| Fri Feb 22 11:57:15.652 [initandlisten] connection accepted from 165.225.128.186:47974 #872 (8 connections now open) m31000| Fri Feb 22 11:57:15.654 [conn871] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.654 [conn871] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|53 } } cursorid:354425105564796 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.654 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.655 [conn871] end connection 165.225.128.186:62175 (7 connections now open) m31000| Fri Feb 22 11:57:15.655 [initandlisten] connection accepted from 165.225.128.186:39270 #873 (8 connections now open) m31000| Fri Feb 22 11:57:15.656 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.656 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|63 } } cursorid:354813257250718 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.656 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.749 [conn1] going to kill op: op: 
17688.0 m31000| Fri Feb 22 11:57:15.749 [conn1] going to kill op: op: 17687.0 m31000| Fri Feb 22 11:57:15.754 [conn872] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.754 [conn872] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|63 } } cursorid:354815968981926 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:15.754 [conn872] ClientCursor::find(): cursor not found in map '354815968981926' (ok after a drop) m31001| Fri Feb 22 11:57:15.754 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.755 [conn872] end connection 165.225.128.186:47974 (7 connections now open) m31000| Fri Feb 22 11:57:15.755 [initandlisten] connection accepted from 165.225.128.186:35679 #874 (8 connections now open) m31000| Fri Feb 22 11:57:15.757 [conn873] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.757 [conn873] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|63 } } cursorid:354821934929543 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.758 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.758 [conn873] end connection 165.225.128.186:39270 (7 connections now open) m31000| Fri Feb 22 11:57:15.758 [initandlisten] connection accepted from 165.225.128.186:52758 #875 (8 connections now open) m31001| Fri Feb 22 11:57:15.836 [conn7] end connection 165.225.128.186:60247 (2 connections now open) m31001| Fri Feb 22 11:57:15.836 [initandlisten] connection accepted from 165.225.128.186:56023 #9 (3 connections now open) m31000| Fri Feb 22 11:57:15.850 [conn1] going to kill op: op: 17723.0 m31000| Fri Feb 22 11:57:15.850 [conn1] going to kill op: op: 17725.0 m31000| Fri Feb 22 11:57:15.858 [conn874] { $err: "operation was 
interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.858 [conn874] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|73 } } cursorid:355211598146657 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.858 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.858 [conn874] end connection 165.225.128.186:35679 (7 connections now open) m31000| Fri Feb 22 11:57:15.858 [initandlisten] connection accepted from 165.225.128.186:54949 #876 (8 connections now open) m31000| Fri Feb 22 11:57:15.951 [conn1] going to kill op: op: 17757.0 m31000| Fri Feb 22 11:57:15.952 [conn1] going to kill op: op: 17760.0 m31000| Fri Feb 22 11:57:15.952 [conn875] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.952 [conn875] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|73 } } cursorid:355216627242347 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:15.952 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.952 [conn875] end connection 165.225.128.186:52758 (7 connections now open) m31000| Fri Feb 22 11:57:15.953 [initandlisten] connection accepted from 165.225.128.186:50917 #877 (8 connections now open) m31000| Fri Feb 22 11:57:15.960 [conn876] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:15.961 [conn876] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|83 } } cursorid:355650743168931 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:15.961 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:15.961 [conn876] end connection 165.225.128.186:54949 (7 connections now open) 
m31000| Fri Feb 22 11:57:15.961 [initandlisten] connection accepted from 165.225.128.186:52103 #878 (8 connections now open) m31000| Fri Feb 22 11:57:16.052 [conn1] going to kill op: op: 17808.0 m31000| Fri Feb 22 11:57:16.052 [conn1] going to kill op: op: 17807.0 m31000| Fri Feb 22 11:57:16.053 [conn1] going to kill op: op: 17805.0 m31000| Fri Feb 22 11:57:16.054 [conn878] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.054 [conn878] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|93 } } cursorid:356046292655546 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:16.054 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.054 [conn878] end connection 165.225.128.186:52103 (7 connections now open) m31000| Fri Feb 22 11:57:16.054 [initandlisten] connection accepted from 165.225.128.186:46374 #879 (8 connections now open) m31000| Fri Feb 22 11:57:16.055 [conn877] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.055 [conn877] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|92 } } cursorid:356040809090167 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:16.055 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.056 [conn877] end connection 165.225.128.186:50917 (7 connections now open) m31000| Fri Feb 22 11:57:16.056 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.056 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534235000|92 } } cursorid:355993100659822 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:16.056 [initandlisten] connection accepted 
from 165.225.128.186:39906 #880 (8 connections now open) m31000| Fri Feb 22 11:57:16.057 [conn12] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:16.057 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.153 [conn1] going to kill op: op: 17851.0 m31000| Fri Feb 22 11:57:16.154 [conn1] going to kill op: op: 17850.0 m31000| Fri Feb 22 11:57:16.157 [conn879] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.157 [conn879] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|4 } } cursorid:356436127259181 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:16.157 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.158 [conn879] end connection 165.225.128.186:46374 (7 connections now open) m31000| Fri Feb 22 11:57:16.158 [initandlisten] connection accepted from 165.225.128.186:39904 #881 (8 connections now open) m31000| Fri Feb 22 11:57:16.158 [conn880] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.158 [conn880] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|5 } } cursorid:356439684997800 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:16.158 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.158 [conn880] end connection 165.225.128.186:39906 (7 connections now open) m31000| Fri Feb 22 11:57:16.159 [initandlisten] connection accepted from 165.225.128.186:64450 #882 (8 connections now open) m31000| Fri Feb 22 11:57:16.254 [conn1] going to kill op: op: 17888.0 m31000| Fri Feb 22 11:57:16.255 [conn1] going to kill op: op: 17889.0 m31000| Fri Feb 22 11:57:16.260 [conn881] { $err: "operation was interrupted", code: 11601 
} m31000| Fri Feb 22 11:57:16.260 [conn881] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|15 } } cursorid:356873553821740 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:16.260 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.260 [conn881] end connection 165.225.128.186:39904 (7 connections now open) m31000| Fri Feb 22 11:57:16.260 [conn882] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.261 [conn882] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|15 } } cursorid:356878226806828 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:16.261 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.261 [conn882] end connection 165.225.128.186:64450 (6 connections now open) m31000| Fri Feb 22 11:57:16.261 [initandlisten] connection accepted from 165.225.128.186:42471 #883 (8 connections now open) m31000| Fri Feb 22 11:57:16.261 [initandlisten] connection accepted from 165.225.128.186:43959 #884 (8 connections now open) m31000| Fri Feb 22 11:57:16.355 [conn1] going to kill op: op: 17927.0 m31000| Fri Feb 22 11:57:16.356 [conn1] going to kill op: op: 17928.0 m31000| Fri Feb 22 11:57:16.364 [conn883] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.364 [conn884] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.364 [conn883] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|25 } } cursorid:357317013134632 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:96 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:16.364 [conn884] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|25 } } 
cursorid:357316584637261 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:16.364 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:16.364 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.364 [conn883] end connection 165.225.128.186:42471 (7 connections now open) m31000| Fri Feb 22 11:57:16.364 [conn884] end connection 165.225.128.186:43959 (7 connections now open) m31000| Fri Feb 22 11:57:16.364 [initandlisten] connection accepted from 165.225.128.186:33122 #885 (7 connections now open) m31000| Fri Feb 22 11:57:16.364 [initandlisten] connection accepted from 165.225.128.186:62169 #886 (8 connections now open) m31000| Fri Feb 22 11:57:16.456 [conn1] going to kill op: op: 17963.0 m31000| Fri Feb 22 11:57:16.457 [conn1] going to kill op: op: 17962.0 m31000| Fri Feb 22 11:57:16.457 [conn886] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.457 [conn886] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|35 } } cursorid:357754137576451 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:89 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:16.457 [conn885] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.457 [conn885] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|35 } } cursorid:357753635122807 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:16.457 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.457 [conn885] ClientCursor::find(): cursor not found in map '357753635122807' (ok after a drop) m31000| Fri Feb 22 11:57:16.457 [conn886] end connection 165.225.128.186:62169 (7 connections now open) m31002| Fri Feb 22 
11:57:16.457 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.457 [conn885] end connection 165.225.128.186:33122 (6 connections now open) m31000| Fri Feb 22 11:57:16.458 [initandlisten] connection accepted from 165.225.128.186:50800 #887 (7 connections now open) m31000| Fri Feb 22 11:57:16.458 [initandlisten] connection accepted from 165.225.128.186:41876 #888 (8 connections now open) m31000| Fri Feb 22 11:57:16.557 [conn1] going to kill op: op: 18001.0 m31000| Fri Feb 22 11:57:16.558 [conn1] going to kill op: op: 18000.0 m31000| Fri Feb 22 11:57:16.560 [conn888] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.560 [conn888] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|44 } } cursorid:358148749957649 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:16.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.561 [conn888] end connection 165.225.128.186:41876 (7 connections now open) m31000| Fri Feb 22 11:57:16.561 [conn887] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:16.561 [conn887] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|44 } } cursorid:358148764044548 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:16.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:16.561 [initandlisten] connection accepted from 165.225.128.186:49079 #889 (8 connections now open) m31000| Fri Feb 22 11:57:16.561 [conn887] end connection 165.225.128.186:50800 (6 connections now open) m31000| Fri Feb 22 11:57:16.561 [initandlisten] connection accepted from 165.225.128.186:49666 #890 (8 connections now open) m31000| Fri Feb 22 11:57:16.658 [conn1] going to kill op: 
op: 18044.0
m31000| Fri Feb 22 11:57:16.659 [conn1] going to kill op: op: 18042.0
m31000| Fri Feb 22 11:57:16.659 [conn1] going to kill op: op: 18041.0
m31000| Fri Feb 22 11:57:16.664 [conn889] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.664 [conn889] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|54 } } cursorid:358587872679086 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:16.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.664 [conn890] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.664 [conn889] end connection 165.225.128.186:49079 (7 connections now open)
m31000| Fri Feb 22 11:57:16.664 [conn890] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|54 } } cursorid:358588560721131 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:16.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.664 [conn890] end connection 165.225.128.186:49666 (6 connections now open)
m31000| Fri Feb 22 11:57:16.665 [initandlisten] connection accepted from 165.225.128.186:64052 #891 (7 connections now open)
m31000| Fri Feb 22 11:57:16.665 [initandlisten] connection accepted from 165.225.128.186:38025 #892 (8 connections now open)
m31000| Fri Feb 22 11:57:16.667 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.667 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|65 } } cursorid:358973860691152 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:16.667 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.760 [conn1] going to kill op: op: 18082.0
m31000| Fri Feb 22 11:57:16.760 [conn1] going to kill op: op: 18083.0
m31000| Fri Feb 22 11:57:16.767 [conn891] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.767 [conn891] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|65 } } cursorid:359025173258782 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:16.767 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.767 [conn892] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.767 [conn891] end connection 165.225.128.186:64052 (7 connections now open)
m31000| Fri Feb 22 11:57:16.767 [conn892] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|65 } } cursorid:359026893726206 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:16.767 [conn892] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:16.767 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.767 [conn892] end connection 165.225.128.186:38025 (6 connections now open)
m31000| Fri Feb 22 11:57:16.767 [initandlisten] connection accepted from 165.225.128.186:36715 #893 (8 connections now open)
m31000| Fri Feb 22 11:57:16.768 [initandlisten] connection accepted from 165.225.128.186:45802 #894 (8 connections now open)
m31000| Fri Feb 22 11:57:16.860 [conn1] going to kill op: op: 18120.0
m31000| Fri Feb 22 11:57:16.861 [conn1] going to kill op: op: 18121.0
m31000| Fri Feb 22 11:57:16.870 [conn893] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.870 [conn893] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|75 } } cursorid:359464835472713 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:16.870 [conn894] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:57:16.870 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.870 [conn894] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|75 } } cursorid:359465066634455 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:16.870 [conn893] end connection 165.225.128.186:36715 (7 connections now open)
m31001| Fri Feb 22 11:57:16.870 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.870 [conn894] end connection 165.225.128.186:45802 (6 connections now open)
m31000| Fri Feb 22 11:57:16.870 [initandlisten] connection accepted from 165.225.128.186:44366 #895 (7 connections now open)
m31000| Fri Feb 22 11:57:16.871 [initandlisten] connection accepted from 165.225.128.186:58895 #896 (8 connections now open)
m31000| Fri Feb 22 11:57:16.961 [conn1] going to kill op: op: 18155.0
m31000| Fri Feb 22 11:57:16.962 [conn1] going to kill op: op: 18156.0
m31000| Fri Feb 22 11:57:16.962 [conn895] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.963 [conn895] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|85 } } cursorid:359902826508732 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:16.963 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.963 [conn895] end connection 165.225.128.186:44366 (7 connections now open)
m31000| Fri Feb 22 11:57:16.963 [conn896] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:16.963 [conn896] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|85 } } cursorid:359902699191287 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:16.963 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:16.963 [initandlisten] connection accepted from 165.225.128.186:41398 #897 (8 connections now open)
m31000| Fri Feb 22 11:57:16.963 [conn896] end connection 165.225.128.186:58895 (7 connections now open)
m31000| Fri Feb 22 11:57:16.963 [initandlisten] connection accepted from 165.225.128.186:46698 #898 (8 connections now open)
m31000| Fri Feb 22 11:57:17.062 [conn1] going to kill op: op: 18196.0
m31000| Fri Feb 22 11:57:17.063 [conn1] going to kill op: op: 18193.0
m31000| Fri Feb 22 11:57:17.063 [conn1] going to kill op: op: 18194.0
m31000| Fri Feb 22 11:57:17.065 [conn898] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.065 [conn898] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|94 } } cursorid:360296856382044 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.066 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.066 [conn898] end connection 165.225.128.186:46698 (7 connections now open)
m31000| Fri Feb 22 11:57:17.066 [conn897] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.066 [conn897] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534236000|94 } } cursorid:360297697850305 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:101 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:17.066 [initandlisten] connection accepted from 165.225.128.186:60369 #899 (8 connections now open)
m31002| Fri Feb 22 11:57:17.066 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.066 [conn897] end connection 165.225.128.186:41398 (7 connections now open)
m31000| Fri Feb 22 11:57:17.067 [initandlisten] connection accepted from 165.225.128.186:36262 #900 (8 connections now open)
m31000| Fri Feb 22 11:57:17.067 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.067 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|5 } } cursorid:360684358367529 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:17.067 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.163 [conn1] going to kill op: op: 18236.0
m31000| Fri Feb 22 11:57:17.164 [conn1] going to kill op: op: 18237.0
m31000| Fri Feb 22 11:57:17.168 [conn899] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.168 [conn899] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|5 } } cursorid:360731049612205 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:17.168 [conn899] ClientCursor::find(): cursor not found in map '360731049612205' (ok after a drop)
m31001| Fri Feb 22 11:57:17.168 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.168 [conn899] end connection 165.225.128.186:60369 (7 connections now open)
m31000| Fri Feb 22 11:57:17.169 [initandlisten] connection accepted from 165.225.128.186:52253 #901 (8 connections now open)
m31000| Fri Feb 22 11:57:17.169 [conn900] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.169 [conn900] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|5 } } cursorid:360734817741061 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:17.169 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.169 [conn900] end connection 165.225.128.186:36262 (7 connections now open)
m31000| Fri Feb 22 11:57:17.170 [initandlisten] connection accepted from 165.225.128.186:53479 #902 (8 connections now open)
m31000| Fri Feb 22 11:57:17.264 [conn1] going to kill op: op: 18274.0
m31000| Fri Feb 22 11:57:17.265 [conn1] going to kill op: op: 18275.0
m31000| Fri Feb 22 11:57:17.271 [conn901] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.271 [conn901] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|15 } } cursorid:361170048861324 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.271 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.271 [conn901] end connection 165.225.128.186:52253 (7 connections now open)
m31000| Fri Feb 22 11:57:17.272 [initandlisten] connection accepted from 165.225.128.186:47475 #903 (8 connections now open)
m31000| Fri Feb 22 11:57:17.272 [conn902] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.272 [conn902] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|15 } } cursorid:361173943704612 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:17.272 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.272 [conn902] end connection 165.225.128.186:53479 (7 connections now open)
m31000| Fri Feb 22 11:57:17.272 [initandlisten] connection accepted from 165.225.128.186:37861 #904 (8 connections now open)
m31000| Fri Feb 22 11:57:17.365 [conn1] going to kill op: op: 18313.0
m31000| Fri Feb 22 11:57:17.366 [conn1] going to kill op: op: 18312.0
m31000| Fri Feb 22 11:57:17.373 [conn903] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.373 [conn903] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|25 } } cursorid:361607186224357 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.374 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.374 [conn903] end connection 165.225.128.186:47475 (7 connections now open)
m31000| Fri Feb 22 11:57:17.374 [initandlisten] connection accepted from 165.225.128.186:54373 #905 (8 connections now open)
m31000| Fri Feb 22 11:57:17.374 [conn904] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.374 [conn904] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|25 } } cursorid:361610558145485 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:39 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:17.374 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.374 [conn904] end connection 165.225.128.186:37861 (7 connections now open)
m31000| Fri Feb 22 11:57:17.375 [initandlisten] connection accepted from 165.225.128.186:41188 #906 (8 connections now open)
m31000| Fri Feb 22 11:57:17.466 [conn1] going to kill op: op: 18350.0
m31000| Fri Feb 22 11:57:17.467 [conn1] going to kill op: op: 18349.0
m31000| Fri Feb 22 11:57:17.467 [conn906] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.467 [conn906] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|35 } } cursorid:362049238327755 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:17.467 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.467 [conn906] end connection 165.225.128.186:41188 (7 connections now open)
m31000| Fri Feb 22 11:57:17.467 [initandlisten] connection accepted from 165.225.128.186:43778 #907 (8 connections now open)
m31000| Fri Feb 22 11:57:17.476 [conn905] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.476 [conn905] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|35 } } cursorid:362045834744680 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:17.476 [conn905] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:17.476 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.476 [conn905] end connection 165.225.128.186:54373 (7 connections now open)
m31000| Fri Feb 22 11:57:17.476 [initandlisten] connection accepted from 165.225.128.186:45445 #908 (8 connections now open)
m31000| Fri Feb 22 11:57:17.567 [conn1] going to kill op: op: 18386.0
m31000| Fri Feb 22 11:57:17.569 [conn1] going to kill op: op: 18387.0
m31000| Fri Feb 22 11:57:17.569 [conn908] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.569 [conn908] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|45 } } cursorid:362444833800007 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:98 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.569 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.569 [conn908] end connection 165.225.128.186:45445 (7 connections now open)
m31000| Fri Feb 22 11:57:17.570 [initandlisten] connection accepted from 165.225.128.186:38529 #909 (8 connections now open)
m31000| Fri Feb 22 11:57:17.570 [conn907] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.570 [conn907] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|45 } } cursorid:362439881694855 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:17.570 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.570 [conn907] end connection 165.225.128.186:43778 (7 connections now open)
m31000| Fri Feb 22 11:57:17.570 [initandlisten] connection accepted from 165.225.128.186:53843 #910 (8 connections now open)
m31000| Fri Feb 22 11:57:17.670 [conn1] going to kill op: op: 18427.0
m31000| Fri Feb 22 11:57:17.670 [conn1] going to kill op: op: 18424.0
m31000| Fri Feb 22 11:57:17.670 [conn1] going to kill op: op: 18425.0
m31000| Fri Feb 22 11:57:17.673 [conn909] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.673 [conn909] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|55 } } cursorid:362835467423978 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:121 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:17.673 [conn910] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.673 [conn910] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|55 } } cursorid:362840800032274 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.673 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.673 [conn909] end connection 165.225.128.186:38529 (7 connections now open)
m31002| Fri Feb 22 11:57:17.673 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.673 [conn910] end connection 165.225.128.186:53843 (6 connections now open)
m31000| Fri Feb 22 11:57:17.673 [initandlisten] connection accepted from 165.225.128.186:52476 #911 (7 connections now open)
m31000| Fri Feb 22 11:57:17.673 [initandlisten] connection accepted from 165.225.128.186:46481 #912 (8 connections now open)
m31000| Fri Feb 22 11:57:17.677 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.677 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|65 } } cursorid:363227272078104 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.677 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.771 [conn1] going to kill op: op: 18466.0
m31000| Fri Feb 22 11:57:17.771 [conn1] going to kill op: op: 18465.0
m31000| Fri Feb 22 11:57:17.776 [conn911] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.776 [conn911] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|65 } } cursorid:363278709608286 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:17.776 [conn912] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.776 [conn912] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|65 } } cursorid:363278586018089 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.777 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.777 [conn911] end connection 165.225.128.186:52476 (7 connections now open)
m31002| Fri Feb 22 11:57:17.777 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.777 [conn912] end connection 165.225.128.186:46481 (6 connections now open)
m31000| Fri Feb 22 11:57:17.777 [initandlisten] connection accepted from 165.225.128.186:58961 #913 (7 connections now open)
m31000| Fri Feb 22 11:57:17.777 [initandlisten] connection accepted from 165.225.128.186:52694 #914 (8 connections now open)
m31000| Fri Feb 22 11:57:17.872 [conn1] going to kill op: op: 18504.0
m31000| Fri Feb 22 11:57:17.872 [conn1] going to kill op: op: 18503.0
m31000| Fri Feb 22 11:57:17.880 [conn914] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.880 [conn914] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|75 } } cursorid:363715539190078 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:17.880 [conn914] ClientCursor::find(): cursor not found in map '363715539190078' (ok after a drop)
m31002| Fri Feb 22 11:57:17.880 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.880 [conn914] end connection 165.225.128.186:52694 (7 connections now open)
m31000| Fri Feb 22 11:57:17.880 [conn913] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:17.880 [conn913] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|75 } } cursorid:363715792721309 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:17.880 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:17.880 [conn913] end connection 165.225.128.186:58961 (6 connections now open)
m31000| Fri Feb 22 11:57:17.880 [initandlisten] connection accepted from 165.225.128.186:42823 #915 (8 connections now open)
m31000| Fri Feb 22 11:57:17.880 [initandlisten] connection accepted from 165.225.128.186:63478 #916 (8 connections now open)
m31000| Fri Feb 22 11:57:17.973 [conn1] going to kill op: op: 18538.0
m31000| Fri Feb 22 11:57:17.973 [conn1] going to kill op: op: 18539.0
m31000| Fri Feb 22 11:57:18.051 [conn347] end connection 165.225.128.186:51293 (7 connections now open)
m31000| Fri Feb 22 11:57:18.052 [initandlisten] connection accepted from 165.225.128.186:58824 #917 (8 connections now open)
m31000| Fri Feb 22 11:57:18.074 [conn1] going to kill op: op: 18571.0
m31000| Fri Feb 22 11:57:18.074 [conn1] going to kill op: op: 18573.0
m31000| Fri Feb 22 11:57:18.075 [conn915] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.075 [conn915] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|85 } } cursorid:364154896824335 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.075 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.075 [conn916] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.075 [conn916] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534237000|85 } } cursorid:364154182269933 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.075 [conn915] end connection 165.225.128.186:42823 (7 connections now open)
m31001| Fri Feb 22 11:57:18.076 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.076 [conn916] end connection 165.225.128.186:63478 (6 connections now open)
m31000| Fri Feb 22 11:57:18.076 [initandlisten] connection accepted from 165.225.128.186:65525 #918 (8 connections now open)
m31000| Fri Feb 22 11:57:18.076 [initandlisten] connection accepted from 165.225.128.186:48775 #919 (8 connections now open)
m31000| Fri Feb 22 11:57:18.175 [conn1] going to kill op: op: 18625.0
m31000| Fri Feb 22 11:57:18.175 [conn1] going to kill op: op: 18623.0
m31000| Fri Feb 22 11:57:18.175 [conn1] going to kill op: op: 18624.0
m31000| Fri Feb 22 11:57:18.178 [conn918] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.178 [conn918] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|6 } } cursorid:364982619608542 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.178 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.178 [conn918] end connection 165.225.128.186:65525 (7 connections now open)
m31000| Fri Feb 22 11:57:18.178 [initandlisten] connection accepted from 165.225.128.186:36855 #920 (8 connections now open)
m31000| Fri Feb 22 11:57:18.178 [conn919] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.178 [conn919] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|7 } } cursorid:364982104737366 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:18.179 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.179 [conn919] end connection 165.225.128.186:48775 (7 connections now open)
m31000| Fri Feb 22 11:57:18.179 [initandlisten] connection accepted from 165.225.128.186:47762 #921 (8 connections now open)
m31000| Fri Feb 22 11:57:18.180 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.180 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|6 } } cursorid:364932503335813 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:41 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.180 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.276 [conn1] going to kill op: op: 18665.0
m31000| Fri Feb 22 11:57:18.276 [conn1] going to kill op: op: 18664.0
m31000| Fri Feb 22 11:57:18.280 [conn920] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.281 [conn920] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|17 } } cursorid:365416835271016 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.281 [conn920] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:18.281 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.281 [conn920] end connection 165.225.128.186:36855 (7 connections now open)
m31000| Fri Feb 22 11:57:18.281 [conn921] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.281 [conn921] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|17 } } cursorid:365420937431068 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:18.281 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.281 [conn921] end connection 165.225.128.186:47762 (6 connections now open)
m31000| Fri Feb 22 11:57:18.281 [initandlisten] connection accepted from 165.225.128.186:57300 #922 (7 connections now open)
m31000| Fri Feb 22 11:57:18.281 [initandlisten] connection accepted from 165.225.128.186:39356 #923 (8 connections now open)
m31000| Fri Feb 22 11:57:18.377 [conn1] going to kill op: op: 18703.0
m31000| Fri Feb 22 11:57:18.377 [conn1] going to kill op: op: 18702.0
m31000| Fri Feb 22 11:57:18.383 [conn923] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.383 [conn922] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.383 [conn923] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|27 } } cursorid:365859298718457 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.384 [conn922] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|27 } } cursorid:365858382198558 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:18.384 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:18.384 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.384 [conn923] end connection 165.225.128.186:39356 (7 connections now open)
m31000| Fri Feb 22 11:57:18.384 [conn922] end connection 165.225.128.186:57300 (7 connections now open)
m31000| Fri Feb 22 11:57:18.384 [initandlisten] connection accepted from 165.225.128.186:52307 #924 (7 connections now open)
m31000| Fri Feb 22 11:57:18.384 [initandlisten] connection accepted from 165.225.128.186:55837 #925 (8 connections now open)
m31000| Fri Feb 22 11:57:18.478 [conn1] going to kill op: op: 18741.0
m31000| Fri Feb 22 11:57:18.478 [conn1] going to kill op: op: 18740.0
m31000| Fri Feb 22 11:57:18.486 [conn925] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.486 [conn925] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|37 } } cursorid:366296750872036 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.487 [conn924] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.487 [conn924] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|37 } } cursorid:366298285761221 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.487 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.487 [conn925] end connection 165.225.128.186:55837 (7 connections now open)
m31001| Fri Feb 22 11:57:18.487 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.487 [conn924] end connection 165.225.128.186:52307 (6 connections now open)
m31000| Fri Feb 22 11:57:18.487 [initandlisten] connection accepted from 165.225.128.186:50609 #926 (7 connections now open)
m31000| Fri Feb 22 11:57:18.487 [initandlisten] connection accepted from 165.225.128.186:62266 #927 (8 connections now open)
m31000| Fri Feb 22 11:57:18.578 [conn1] going to kill op: op: 18776.0
m31000| Fri Feb 22 11:57:18.579 [conn1] going to kill op: op: 18775.0
m31000| Fri Feb 22 11:57:18.579 [conn926] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.579 [conn926] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|47 } } cursorid:366736142582613 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.579 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.579 [conn926] end connection 165.225.128.186:50609 (7 connections now open)
m31000| Fri Feb 22 11:57:18.579 [conn927] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.579 [conn927] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|47 } } cursorid:366736273942610 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:18.580 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.580 [initandlisten] connection accepted from 165.225.128.186:48683 #928 (8 connections now open)
m31000| Fri Feb 22 11:57:18.580 [conn927] end connection 165.225.128.186:62266 (7 connections now open)
m31000| Fri Feb 22 11:57:18.580 [initandlisten] connection accepted from 165.225.128.186:57394 #929 (8 connections now open)
m31000| Fri Feb 22 11:57:18.679 [conn1] going to kill op: op: 18813.0
m31000| Fri Feb 22 11:57:18.680 [conn1] going to kill op: op: 18814.0
m31000| Fri Feb 22 11:57:18.682 [conn928] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.682 [conn928] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|56 } } cursorid:367130886480685 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.682 [conn928] ClientCursor::find(): cursor not found in map '367130886480685' (ok after a drop)
m31002| Fri Feb 22 11:57:18.682 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.682 [conn928] end connection 165.225.128.186:48683 (7 connections now open)
m31000| Fri Feb 22 11:57:18.682 [conn929] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.682 [conn929] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|56 } } cursorid:367130535855421 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:18.682 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.682 [conn929] end connection 165.225.128.186:57394 (6 connections now open)
m31000| Fri Feb 22 11:57:18.682 [initandlisten] connection accepted from 165.225.128.186:53092 #930 (8 connections now open)
m31000| Fri Feb 22 11:57:18.683 [initandlisten] connection accepted from 165.225.128.186:46235 #931 (8 connections now open)
m31000| Fri Feb 22 11:57:18.780 [conn1] going to kill op: op: 18863.0
m31000| Fri Feb 22 11:57:18.781 [conn1] going to kill op: op: 18862.0
m31000| Fri Feb 22 11:57:18.781 [conn1] going to kill op: op: 18861.0
m31000| Fri Feb 22 11:57:18.785 [conn930] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.785 [conn930] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|66 } } cursorid:367568756241567 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.785 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.785 [conn931] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.785 [conn931] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|66 } } cursorid:367568047360068 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.785 [conn930] end connection 165.225.128.186:53092 (7 connections now open)
m31001| Fri Feb 22 11:57:18.785 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.785 [conn931] end connection 165.225.128.186:46235 (6 connections now open)
m31000| Fri Feb 22 11:57:18.785 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.785 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|66 } } cursorid:367516771502398 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.785 [initandlisten] connection accepted from 165.225.128.186:48086 #932 (7 connections now open)
m31000| Fri Feb 22 11:57:18.785 [initandlisten] connection accepted from 165.225.128.186:45295 #933 (8 connections now open)
m31001| Fri Feb 22 11:57:18.786 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.881 [conn1] going to kill op: op: 18903.0
m31000| Fri Feb 22 11:57:18.882 [conn1] going to kill op: op: 18902.0
m31000| Fri Feb 22 11:57:18.887 [conn932] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.887 [conn932] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|76 } } cursorid:368005901319917 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.888 [conn933] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.888 [conn933] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|76 } } cursorid:368006949179801 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.888 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.888 [conn932] end connection 165.225.128.186:48086 (7 connections now open)
m31001| Fri Feb 22 11:57:18.888 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.888 [conn933] end connection 165.225.128.186:45295 (6 connections now open)
m31000| Fri Feb 22 11:57:18.888 [initandlisten] connection accepted from 165.225.128.186:61100 #934 (7 connections now open)
m31000| Fri Feb 22 11:57:18.888 [initandlisten] connection accepted from 165.225.128.186:64626 #935 (8 connections now open)
m31000| Fri Feb 22 11:57:18.982 [conn1] going to kill op: op: 18941.0
m31000| Fri Feb 22 11:57:18.983 [conn1] going to kill op: op: 18940.0
m31000| Fri Feb 22 11:57:18.990 [conn934] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.990 [conn934] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|87 } } cursorid:368444420272354 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:18.990 [conn935] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:18.990 [conn934] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31000| Fri Feb 22 11:57:18.991 [conn935] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|87 } } cursorid:368445213630302 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:18.991 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:18.991 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:18.991 [conn934] end connection 165.225.128.186:61100 (7 connections now open)
m31000| Fri Feb 22 11:57:18.991 [conn935] end connection 165.225.128.186:64626 (7 connections now open)
m31000| Fri Feb 22 11:57:18.991 [initandlisten] connection accepted from 165.225.128.186:49106 #936 (7 connections now open)
m31000| Fri Feb 22 11:57:18.991 [initandlisten] connection accepted from 165.225.128.186:63634 #937 (8 connections now open)
m31000| Fri Feb 22 11:57:19.083 [conn1] going to kill op: op: 18976.0
m31000| Fri Feb 22 11:57:19.084 [conn1] going to kill op: op: 18975.0
m31000| Fri Feb 22 11:57:19.184 [conn1] going to kill op: op: 19010.0
m31000| Fri Feb 22 11:57:19.185 [conn1] going to kill op: op: 19008.0
m31000| Fri Feb 22 11:57:19.185 [conn1] going to kill op: op: 19007.0
m31000| Fri Feb 22 11:57:19.185 [conn937] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:19.185 [conn937] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|97 } } cursorid:368881989359384 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:19.185 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:19.186 [conn937] end connection 165.225.128.186:63634 (7 connections now open)
m31000| Fri Feb 22 11:57:19.186 [initandlisten] connection accepted from 165.225.128.186:33860 #938 (8 connections now open)
m31000| Fri Feb 22 11:57:19.190 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:19.190 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|17 } } cursorid:369656390644291 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:19.190 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:19.285 [conn1] going to kill op: op: 19045.0
m31000| Fri Feb 22 11:57:19.286 [conn1] going to kill op: op: 19044.0
m31000| Fri Feb 22 11:57:19.287 [conn936] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:19.287 [conn936] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534238000|97 } } cursorid:368883830086879 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:19.287 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:19.287 [conn936] end connection 165.225.128.186:49106 (7 connections now open)
m31000| Fri Feb 22 11:57:19.287 [initandlisten] connection accepted from 165.225.128.186:64313 #939 (8 connections now open)
m31000| Fri Feb 22 11:57:19.288 [conn938] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:19.288 [conn938] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|17 } } cursorid:369703156909213 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:19.288 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:19.288 [conn938] end connection 165.225.128.186:33860 (7 connections now open)
m31000| Fri Feb 22 11:57:19.288 [initandlisten] connection accepted from 165.225.128.186:55957 #940 (8 connections now open)
m31000| Fri Feb 22 11:57:19.386 [conn1] going to kill op: op: 19082.0
m31000| Fri Feb 22 11:57:19.387 [conn1] going to kill op: op: 19083.0
m31000| Fri Feb 22 11:57:19.389 [conn939] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:19.389 [conn939] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|27 } } cursorid:370136548073816 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:19.389 [rsBackgroundSync] repl: old cursor
isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.389 [conn939] end connection 165.225.128.186:64313 (7 connections now open) m31000| Fri Feb 22 11:57:19.390 [initandlisten] connection accepted from 165.225.128.186:35796 #941 (8 connections now open) m31000| Fri Feb 22 11:57:19.390 [conn940] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.390 [conn940] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|27 } } cursorid:370141517111545 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.390 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.390 [conn940] end connection 165.225.128.186:55957 (7 connections now open) m31000| Fri Feb 22 11:57:19.391 [initandlisten] connection accepted from 165.225.128.186:56828 #942 (8 connections now open) m31000| Fri Feb 22 11:57:19.487 [conn1] going to kill op: op: 19121.0 m31000| Fri Feb 22 11:57:19.487 [conn1] going to kill op: op: 19120.0 m31000| Fri Feb 22 11:57:19.492 [conn941] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.492 [conn941] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|37 } } cursorid:370574543804873 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:19.492 [conn941] ClientCursor::find(): cursor not found in map '370574543804873' (ok after a drop) m31002| Fri Feb 22 11:57:19.492 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.492 [conn941] end connection 165.225.128.186:35796 (7 connections now open) m31000| Fri Feb 22 11:57:19.492 [initandlisten] connection accepted from 165.225.128.186:56680 #943 (8 connections now open) m31000| Fri Feb 22 11:57:19.493 [conn942] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 
22 11:57:19.493 [conn942] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|37 } } cursorid:370580230718762 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.493 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.493 [conn942] end connection 165.225.128.186:56828 (7 connections now open) m31000| Fri Feb 22 11:57:19.493 [initandlisten] connection accepted from 165.225.128.186:37731 #944 (8 connections now open) m31000| Fri Feb 22 11:57:19.588 [conn1] going to kill op: op: 19158.0 m31000| Fri Feb 22 11:57:19.588 [conn1] going to kill op: op: 19159.0 m31000| Fri Feb 22 11:57:19.595 [conn943] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.595 [conn943] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|47 } } cursorid:371012886804825 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:19.595 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.595 [conn943] end connection 165.225.128.186:56680 (7 connections now open) m31000| Fri Feb 22 11:57:19.595 [initandlisten] connection accepted from 165.225.128.186:36986 #945 (8 connections now open) m31000| Fri Feb 22 11:57:19.595 [conn944] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.596 [conn944] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|47 } } cursorid:371016892010147 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.596 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.596 [conn944] end connection 165.225.128.186:37731 (7 connections now open) m31000| Fri Feb 22 11:57:19.596 
[initandlisten] connection accepted from 165.225.128.186:43884 #946 (8 connections now open) m31000| Fri Feb 22 11:57:19.689 [conn1] going to kill op: op: 19199.0 m31000| Fri Feb 22 11:57:19.689 [conn1] going to kill op: op: 19198.0 m31000| Fri Feb 22 11:57:19.698 [conn945] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.698 [conn945] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|57 } } cursorid:371451575400148 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:19.698 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.698 [conn945] end connection 165.225.128.186:36986 (7 connections now open) m31000| Fri Feb 22 11:57:19.698 [conn946] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.698 [conn946] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|57 } } cursorid:371455973271312 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.698 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.698 [initandlisten] connection accepted from 165.225.128.186:60741 #947 (8 connections now open) m31000| Fri Feb 22 11:57:19.698 [conn946] end connection 165.225.128.186:43884 (6 connections now open) m31000| Fri Feb 22 11:57:19.699 [initandlisten] connection accepted from 165.225.128.186:49709 #948 (8 connections now open) m31000| Fri Feb 22 11:57:19.790 [conn1] going to kill op: op: 19236.0 m31000| Fri Feb 22 11:57:19.790 [conn1] going to kill op: op: 19233.0 m31000| Fri Feb 22 11:57:19.790 [conn947] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.790 [conn947] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|68 } } cursorid:371892678238149 ntoreturn:0 keyUpdates:0 
exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:19.790 [conn1] going to kill op: op: 19234.0 m31002| Fri Feb 22 11:57:19.791 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.791 [conn947] end connection 165.225.128.186:60741 (7 connections now open) m31000| Fri Feb 22 11:57:19.791 [initandlisten] connection accepted from 165.225.128.186:57291 #949 (8 connections now open) m31000| Fri Feb 22 11:57:19.791 [conn948] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.791 [conn948] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|68 } } cursorid:371894733697074 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:96 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:19.791 [conn948] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:19.791 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.791 [conn948] end connection 165.225.128.186:49709 (7 connections now open) m31000| Fri Feb 22 11:57:19.792 [initandlisten] connection accepted from 165.225.128.186:45803 #950 (8 connections now open) m31000| Fri Feb 22 11:57:19.797 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.797 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|77 } } cursorid:372236405180895 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.797 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.892 [conn1] going to kill op: op: 19275.0 m31000| Fri Feb 22 11:57:19.892 [conn1] going to kill op: op: 19274.0 m31000| Fri Feb 22 11:57:19.893 [conn949] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.893 
[conn949] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|77 } } cursorid:372284589497408 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:19.893 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.893 [conn949] end connection 165.225.128.186:57291 (7 connections now open) m31000| Fri Feb 22 11:57:19.894 [initandlisten] connection accepted from 165.225.128.186:36837 #951 (8 connections now open) m31000| Fri Feb 22 11:57:19.894 [conn950] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.894 [conn950] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|77 } } cursorid:372288376885814 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:96 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.894 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.895 [conn950] end connection 165.225.128.186:45803 (7 connections now open) m31000| Fri Feb 22 11:57:19.895 [initandlisten] connection accepted from 165.225.128.186:42401 #952 (8 connections now open) m31000| Fri Feb 22 11:57:19.992 [conn1] going to kill op: op: 19312.0 m31000| Fri Feb 22 11:57:19.993 [conn1] going to kill op: op: 19313.0 m31000| Fri Feb 22 11:57:19.996 [conn951] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.996 [conn951] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|87 } } cursorid:372721844298596 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:19.996 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.996 [conn951] end connection 165.225.128.186:36837 (7 connections now open) m31000| Fri Feb 22 11:57:19.996 [conn952] { $err: "operation 
was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:19.996 [conn952] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|87 } } cursorid:372727806783273 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:19.997 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:19.997 [initandlisten] connection accepted from 165.225.128.186:51703 #953 (8 connections now open) m31000| Fri Feb 22 11:57:19.997 [conn952] end connection 165.225.128.186:42401 (6 connections now open) m31000| Fri Feb 22 11:57:19.997 [initandlisten] connection accepted from 165.225.128.186:35317 #954 (8 connections now open) m31000| Fri Feb 22 11:57:20.093 [conn1] going to kill op: op: 19352.0 m31000| Fri Feb 22 11:57:20.094 [conn1] going to kill op: op: 19351.0 m31000| Fri Feb 22 11:57:20.100 [conn953] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.100 [conn954] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.100 [conn953] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|97 } } cursorid:373165517124548 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:86 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.100 [conn954] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534239000|97 } } cursorid:373165566514784 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:20.100 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:20.100 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.100 [conn954] end connection 165.225.128.186:35317 (7 connections now open) m31000| Fri Feb 22 11:57:20.100 [conn953] end connection 165.225.128.186:51703 (7 connections now 
open) m31000| Fri Feb 22 11:57:20.100 [initandlisten] connection accepted from 165.225.128.186:63524 #955 (7 connections now open) m31000| Fri Feb 22 11:57:20.101 [initandlisten] connection accepted from 165.225.128.186:52323 #956 (8 connections now open) m31000| Fri Feb 22 11:57:20.194 [conn1] going to kill op: op: 19394.0 m31000| Fri Feb 22 11:57:20.195 [conn1] going to kill op: op: 19393.0 m31000| Fri Feb 22 11:57:20.195 [conn1] going to kill op: op: 19392.0 m31000| Fri Feb 22 11:57:20.203 [conn955] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.203 [conn955] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|9 } } cursorid:373604143351451 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.203 [conn955] ClientCursor::find(): cursor not found in map '373604143351451' (ok after a drop) m31001| Fri Feb 22 11:57:20.203 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.203 [conn955] end connection 165.225.128.186:63524 (7 connections now open) m31000| Fri Feb 22 11:57:20.203 [initandlisten] connection accepted from 165.225.128.186:62904 #957 (8 connections now open) m31000| Fri Feb 22 11:57:20.203 [conn956] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.204 [conn956] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|9 } } cursorid:373603863690690 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:20.204 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.204 [conn956] end connection 165.225.128.186:52323 (7 connections now open) m31000| Fri Feb 22 11:57:20.204 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.204 [conn12] getmore local.oplog.rs query: { ts: { 
$gte: Timestamp 1361534240000|18 } } cursorid:373947404661895 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.204 [initandlisten] connection accepted from 165.225.128.186:37031 #958 (8 connections now open) m31002| Fri Feb 22 11:57:20.205 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.296 [conn1] going to kill op: op: 19431.0 m31000| Fri Feb 22 11:57:20.296 [conn1] going to kill op: op: 19430.0 m31000| Fri Feb 22 11:57:20.296 [conn957] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.296 [conn957] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|19 } } cursorid:374037475334573 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:89 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.296 [conn958] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.296 [conn958] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|19 } } cursorid:374042045610027 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:20.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:20.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.296 [conn958] end connection 165.225.128.186:37031 (7 connections now open) m31000| Fri Feb 22 11:57:20.296 [conn957] end connection 165.225.128.186:62904 (7 connections now open) m31000| Fri Feb 22 11:57:20.297 [initandlisten] connection accepted from 165.225.128.186:60202 #959 (7 connections now open) m31000| Fri Feb 22 11:57:20.297 [initandlisten] connection accepted from 165.225.128.186:59098 #960 (8 connections now open) m31000| Fri Feb 22 11:57:20.397 [conn1] going to kill op: op: 19468.0 m31000| Fri Feb 22 
11:57:20.397 [conn1] going to kill op: op: 19469.0 m31000| Fri Feb 22 11:57:20.399 [conn959] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.399 [conn960] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.399 [conn959] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|28 } } cursorid:374436471843777 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:90 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.399 [conn960] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|28 } } cursorid:374436836893669 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:20.399 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:20.399 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.400 [conn960] end connection 165.225.128.186:59098 (7 connections now open) m31000| Fri Feb 22 11:57:20.400 [conn959] end connection 165.225.128.186:60202 (7 connections now open) m31000| Fri Feb 22 11:57:20.400 [initandlisten] connection accepted from 165.225.128.186:65370 #961 (7 connections now open) m31000| Fri Feb 22 11:57:20.400 [initandlisten] connection accepted from 165.225.128.186:44657 #962 (8 connections now open) m31000| Fri Feb 22 11:57:20.498 [conn1] going to kill op: op: 19510.0 m31000| Fri Feb 22 11:57:20.498 [conn1] going to kill op: op: 19509.0 m31000| Fri Feb 22 11:57:20.503 [conn962] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.503 [conn962] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|38 } } cursorid:374873857431821 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.503 [conn962] ClientCursor::find(): cursor not found in map 
'-1' (ok after a drop) m31001| Fri Feb 22 11:57:20.503 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.503 [conn961] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.503 [conn961] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|38 } } cursorid:374873818239563 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.503 [conn962] end connection 165.225.128.186:44657 (7 connections now open) m31002| Fri Feb 22 11:57:20.503 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.503 [conn961] end connection 165.225.128.186:65370 (6 connections now open) m31000| Fri Feb 22 11:57:20.503 [initandlisten] connection accepted from 165.225.128.186:55680 #963 (7 connections now open) m31000| Fri Feb 22 11:57:20.504 [initandlisten] connection accepted from 165.225.128.186:57604 #964 (8 connections now open) m31000| Fri Feb 22 11:57:20.599 [conn1] going to kill op: op: 19547.0 m31000| Fri Feb 22 11:57:20.599 [conn1] going to kill op: op: 19548.0 m31000| Fri Feb 22 11:57:20.606 [conn963] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.606 [conn963] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|49 } } cursorid:375313524384015 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:99 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.606 [conn964] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.606 [conn964] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|49 } } cursorid:375311514449648 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:20.606 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 
22 11:57:20.607 [conn963] end connection 165.225.128.186:55680 (7 connections now open) m31002| Fri Feb 22 11:57:20.607 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.607 [conn964] end connection 165.225.128.186:57604 (6 connections now open) m31000| Fri Feb 22 11:57:20.607 [initandlisten] connection accepted from 165.225.128.186:34907 #965 (7 connections now open) m31000| Fri Feb 22 11:57:20.607 [initandlisten] connection accepted from 165.225.128.186:60262 #966 (8 connections now open) m31000| Fri Feb 22 11:57:20.700 [conn1] going to kill op: op: 19582.0 m31000| Fri Feb 22 11:57:20.700 [conn1] going to kill op: op: 19583.0 m31000| Fri Feb 22 11:57:20.820 [conn1] going to kill op: op: 19612.0 m31000| Fri Feb 22 11:57:20.820 [conn1] going to kill op: op: 19613.0 m31000| Fri Feb 22 11:57:20.921 [conn1] going to kill op: op: 19661.0 m31000| Fri Feb 22 11:57:20.922 [conn1] going to kill op: op: 19660.0 m31000| Fri Feb 22 11:57:20.922 [conn1] going to kill op: op: 19662.0 m31000| Fri Feb 22 11:57:20.923 [conn965] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.923 [conn965] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|59 } } cursorid:375750245458784 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:20.923 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.923 [conn965] end connection 165.225.128.186:34907 (7 connections now open) m31000| Fri Feb 22 11:57:20.924 [initandlisten] connection accepted from 165.225.128.186:55245 #967 (8 connections now open) m31000| Fri Feb 22 11:57:20.924 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.924 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|78 } } cursorid:376523541970037 ntoreturn:0 keyUpdates:0 exception: operation was 
interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:20.924 [conn966] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:20.924 [conn966] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|59 } } cursorid:375750736582382 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:20.924 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.924 [conn966] end connection 165.225.128.186:60262 (7 connections now open) m31001| Fri Feb 22 11:57:20.925 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:20.925 [initandlisten] connection accepted from 165.225.128.186:43639 #968 (8 connections now open) m31000| Fri Feb 22 11:57:21.023 [conn1] going to kill op: op: 19700.0 m31000| Fri Feb 22 11:57:21.023 [conn1] going to kill op: op: 19701.0 m31000| Fri Feb 22 11:57:21.026 [conn967] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:21.026 [conn967] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|90 } } cursorid:377086561886121 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:21.026 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:21.026 [conn967] end connection 165.225.128.186:55245 (7 connections now open) m31000| Fri Feb 22 11:57:21.027 [initandlisten] connection accepted from 165.225.128.186:58745 #969 (8 connections now open) m31000| Fri Feb 22 11:57:21.027 [conn968] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:21.027 [conn968] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534240000|90 } } cursorid:377091543022295 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 
nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:21.027 [conn968] ClientCursor::find(): cursor not found in map '377091543022295' (ok after a drop) m31002| Fri Feb 22 11:57:21.027 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:21.028 [conn968] end connection 165.225.128.186:43639 (7 connections now open) m31000| Fri Feb 22 11:57:21.028 [initandlisten] connection accepted from 165.225.128.186:48497 #970 (8 connections now open) m31000| Fri Feb 22 11:57:21.124 [conn1] going to kill op: op: 19741.0 m31000| Fri Feb 22 11:57:21.124 [conn1] going to kill op: op: 19740.0 m31000| Fri Feb 22 11:57:21.129 [conn969] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:21.129 [conn969] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|2 } } cursorid:377524733990562 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:21.129 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:21.129 [conn969] end connection 165.225.128.186:58745 (7 connections now open) m31000| Fri Feb 22 11:57:21.130 [initandlisten] connection accepted from 165.225.128.186:59037 #971 (8 connections now open) m31000| Fri Feb 22 11:57:21.131 [conn970] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:21.131 [conn970] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|2 } } cursorid:377529248617162 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:21.131 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:21.131 [conn970] end connection 165.225.128.186:48497 (7 connections now open) m31000| Fri Feb 22 11:57:21.132 [initandlisten] connection accepted from 165.225.128.186:44789 #972 (8 connections now open) m31000| Fri Feb 
22 11:57:21.225 [conn1] going to kill op: op: 19782.0
m31000| Fri Feb 22 11:57:21.225 [conn1] going to kill op: op: 19780.0
m31000| Fri Feb 22 11:57:21.225 [conn1] going to kill op: op: 19781.0
m31000| Fri Feb 22 11:57:21.232 [conn971] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.232 [conn971] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|12 } } cursorid:377961440004867 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:21.232 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.233 [conn971] end connection 165.225.128.186:59037 (7 connections now open)
m31000| Fri Feb 22 11:57:21.233 [initandlisten] connection accepted from 165.225.128.186:47694 #973 (8 connections now open)
m31000| Fri Feb 22 11:57:21.234 [conn972] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.234 [conn972] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|12 } } cursorid:377966877598835 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:21.234 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.234 [conn972] end connection 165.225.128.186:44789 (7 connections now open)
m31000| Fri Feb 22 11:57:21.234 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.234 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|20 } } cursorid:378266935747761 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:21.234 [initandlisten] connection accepted from 165.225.128.186:54318 #974 (8 connections now open)
m31002| Fri Feb 22 11:57:21.235 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.325 [conn1] going to kill op: op: 19820.0
m31000| Fri Feb 22 11:57:21.326 [conn1] going to kill op: op: 19819.0
m31000| Fri Feb 22 11:57:21.326 [conn974] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.326 [conn974] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|22 } } cursorid:378405921606661 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:21.326 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.326 [conn974] end connection 165.225.128.186:54318 (7 connections now open)
m31000| Fri Feb 22 11:57:21.326 [initandlisten] connection accepted from 165.225.128.186:52858 #975 (8 connections now open)
m31000| Fri Feb 22 11:57:21.335 [conn973] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.335 [conn973] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|22 } } cursorid:378400349713297 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:21.335 [conn973] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:21.335 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.335 [conn973] end connection 165.225.128.186:47694 (7 connections now open)
m31000| Fri Feb 22 11:57:21.335 [initandlisten] connection accepted from 165.225.128.186:45095 #976 (8 connections now open)
m31000| Fri Feb 22 11:57:21.426 [conn1] going to kill op: op: 19857.0
m31000| Fri Feb 22 11:57:21.426 [conn1] going to kill op: op: 19858.0
m31000| Fri Feb 22 11:57:21.427 [conn976] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.427 [conn976] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|32 } } cursorid:378800164429642 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:21.428 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.428 [conn976] end connection 165.225.128.186:45095 (7 connections now open)
m31000| Fri Feb 22 11:57:21.428 [initandlisten] connection accepted from 165.225.128.186:42088 #977 (8 connections now open)
m31000| Fri Feb 22 11:57:21.428 [conn975] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.428 [conn975] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|31 } } cursorid:378795805316124 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:21.428 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.428 [conn975] end connection 165.225.128.186:52858 (7 connections now open)
m31000| Fri Feb 22 11:57:21.429 [initandlisten] connection accepted from 165.225.128.186:45073 #978 (8 connections now open)
m31000| Fri Feb 22 11:57:21.527 [conn1] going to kill op: op: 19895.0
m31000| Fri Feb 22 11:57:21.527 [conn1] going to kill op: op: 19896.0
m31000| Fri Feb 22 11:57:21.530 [conn977] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.530 [conn977] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|42 } } cursorid:379189839289279 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:21.530 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.530 [conn977] end connection 165.225.128.186:42088 (7 connections now open)
m31000| Fri Feb 22 11:57:21.531 [initandlisten] connection accepted from 165.225.128.186:42550 #979 (8 connections now open)
m31000| Fri Feb 22 11:57:21.531 [conn978] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.531 [conn978] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|42 } } cursorid:379194336828897 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:21.531 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.531 [conn978] end connection 165.225.128.186:45073 (7 connections now open)
m31000| Fri Feb 22 11:57:21.531 [initandlisten] connection accepted from 165.225.128.186:55756 #980 (8 connections now open)
m31000| Fri Feb 22 11:57:21.628 [conn1] going to kill op: op: 19934.0
m31000| Fri Feb 22 11:57:21.628 [conn1] going to kill op: op: 19933.0
m31000| Fri Feb 22 11:57:21.633 [conn979] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.633 [conn979] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|52 } } cursorid:379628444111846 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:21.633 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.633 [conn979] end connection 165.225.128.186:42550 (7 connections now open)
m31000| Fri Feb 22 11:57:21.633 [conn980] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.633 [initandlisten] connection accepted from 165.225.128.186:62273 #981 (8 connections now open)
m31000| Fri Feb 22 11:57:21.633 [conn980] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|52 } } cursorid:379632187797563 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:21.634 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.634 [conn980] end connection 165.225.128.186:55756 (7 connections now open)
m31000| Fri Feb 22 11:57:21.634 [initandlisten] connection accepted from 165.225.128.186:50139 #982 (8 connections now open)
m31000| Fri Feb 22 11:57:21.728 [conn1] going to kill op: op: 19971.0
m31000| Fri Feb 22 11:57:21.729 [conn1] going to kill op: op: 19972.0
m31000| Fri Feb 22 11:57:21.736 [conn981] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.736 [conn982] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.736 [conn981] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|62 } } cursorid:380067118325883 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:21.736 [conn982] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|62 } } cursorid:380071361882594 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:21.736 [conn981] ClientCursor::find(): cursor not found in map '380067118325883' (ok after a drop)
m31002| Fri Feb 22 11:57:21.736 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:21.736 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.736 [conn982] end connection 165.225.128.186:50139 (7 connections now open)
m31000| Fri Feb 22 11:57:21.736 [conn981] end connection 165.225.128.186:62273 (7 connections now open)
m31000| Fri Feb 22 11:57:21.737 [initandlisten] connection accepted from 165.225.128.186:52570 #983 (7 connections now open)
m31000| Fri Feb 22 11:57:21.737 [initandlisten] connection accepted from 165.225.128.186:51651 #984 (8 connections now open)
m31000| Fri Feb 22 11:57:21.829 [conn1] going to kill op: op: 20006.0
m31000| Fri Feb 22 11:57:21.829 [conn1] going to kill op: op: 20007.0
m31000| Fri Feb 22 11:57:21.930 [conn1] going to kill op: op: 20036.0
m31000| Fri Feb 22 11:57:21.930 [conn1] going to kill op: op: 20037.0
m31000| Fri Feb 22 11:57:21.931 [conn983] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.931 [conn983] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|72 } } cursorid:380508719916191 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:21.931 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.931 [conn983] end connection 165.225.128.186:52570 (7 connections now open)
m31000| Fri Feb 22 11:57:21.931 [initandlisten] connection accepted from 165.225.128.186:45519 #985 (8 connections now open)
m31000| Fri Feb 22 11:57:21.931 [conn984] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:21.931 [conn984] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|72 } } cursorid:380509086684762 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:21.932 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:21.932 [conn984] end connection 165.225.128.186:51651 (7 connections now open)
m31000| Fri Feb 22 11:57:21.932 [initandlisten] connection accepted from 165.225.128.186:49586 #986 (8 connections now open)
m31000| Fri Feb 22 11:57:22.031 [conn1] going to kill op: op: 20086.0
m31000| Fri Feb 22 11:57:22.031 [conn1] going to kill op: op: 20084.0
m31000| Fri Feb 22 11:57:22.032 [conn1] going to kill op: op: 20085.0
m31000| Fri Feb 22 11:57:22.033 [conn985] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.034 [conn985] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|91 } } cursorid:381329755622983 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.034 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.034 [conn985] end connection 165.225.128.186:45519 (7 connections now open)
m31000| Fri Feb 22 11:57:22.034 [initandlisten] connection accepted from 165.225.128.186:54508 #987 (8 connections now open)
m31000| Fri Feb 22 11:57:22.034 [conn986] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.034 [conn986] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|91 } } cursorid:381334528005045 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:22.035 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:22.035 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:57:22.035 [conn986] end connection 165.225.128.186:49586 (7 connections now open)
m31000| Fri Feb 22 11:57:22.035 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.035 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534241000|91 } } cursorid:381282100164391 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.035 [initandlisten] connection accepted from 165.225.128.186:35948 #988 (8 connections now open)
m31001| Fri Feb 22 11:57:22.036 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.132 [conn1] going to kill op: op: 20129.0
m31000| Fri Feb 22 11:57:22.132 [conn1] going to kill op: op: 20127.0
m31000| Fri Feb 22 11:57:22.137 [conn987] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.137 [conn987] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|2 } } cursorid:381768472025569 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:115 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.137 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.137 [conn987] end connection 165.225.128.186:54508 (7 connections now open)
m31000| Fri Feb 22 11:57:22.137 [initandlisten] connection accepted from 165.225.128.186:37963 #989 (8 connections now open)
m31000| Fri Feb 22 11:57:22.137 [conn988] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.138 [conn988] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|2 } } cursorid:381772656797293 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.138 [conn988] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:22.138 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.138 [conn988] end connection 165.225.128.186:35948 (7 connections now open)
m31000| Fri Feb 22 11:57:22.138 [initandlisten] connection accepted from 165.225.128.186:37171 #990 (8 connections now open)
m31000| Fri Feb 22 11:57:22.233 [conn1] going to kill op: op: 20167.0
m31000| Fri Feb 22 11:57:22.233 [conn1] going to kill op: op: 20166.0
m31000| Fri Feb 22 11:57:22.240 [conn989] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.240 [conn989] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|12 } } cursorid:382205617520032 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.240 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.240 [conn989] end connection 165.225.128.186:37963 (7 connections now open)
m31000| Fri Feb 22 11:57:22.240 [conn990] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.240 [conn990] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|13 } } cursorid:382209517959931 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.240 [initandlisten] connection accepted from 165.225.128.186:53405 #991 (8 connections now open)
m31001| Fri Feb 22 11:57:22.241 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.241 [conn990] end connection 165.225.128.186:37171 (7 connections now open)
m31000| Fri Feb 22 11:57:22.241 [initandlisten] connection accepted from 165.225.128.186:58876 #992 (8 connections now open)
m31000| Fri Feb 22 11:57:22.334 [conn1] going to kill op: op: 20214.0
m31000| Fri Feb 22 11:57:22.334 [conn1] going to kill op: op: 20217.0
m31000| Fri Feb 22 11:57:22.334 [conn1] going to kill op: op: 20216.0
m31000| Fri Feb 22 11:57:22.337 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.337 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|23 } } cursorid:382596385089227 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.338 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.344 [conn992] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.344 [conn991] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.344 [conn992] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|23 } } cursorid:382648632285439 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.344 [conn991] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|23 } } cursorid:382644574918468 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.344 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:22.344 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.344 [conn991] end connection 165.225.128.186:53405 (7 connections now open)
m31000| Fri Feb 22 11:57:22.344 [conn992] end connection 165.225.128.186:58876 (7 connections now open)
m31000| Fri Feb 22 11:57:22.344 [initandlisten] connection accepted from 165.225.128.186:60073 #993 (7 connections now open)
m31000| Fri Feb 22 11:57:22.344 [initandlisten] connection accepted from 165.225.128.186:41379 #994 (8 connections now open)
m31000| Fri Feb 22 11:57:22.435 [conn1] going to kill op: op: 20253.0
m31000| Fri Feb 22 11:57:22.435 [conn1] going to kill op: op: 20252.0
m31000| Fri Feb 22 11:57:22.437 [conn994] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.437 [conn994] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|33 } } cursorid:383085852208145 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:22.437 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.437 [conn994] end connection 165.225.128.186:41379 (7 connections now open)
m31000| Fri Feb 22 11:57:22.437 [conn993] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.437 [conn993] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|33 } } cursorid:383086189246843 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.437 [initandlisten] connection accepted from 165.225.128.186:34521 #995 (8 connections now open)
m31002| Fri Feb 22 11:57:22.437 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.437 [conn993] end connection 165.225.128.186:60073 (7 connections now open)
m31000| Fri Feb 22 11:57:22.438 [initandlisten] connection accepted from 165.225.128.186:53717 #996 (8 connections now open)
m31000| Fri Feb 22 11:57:22.536 [conn1] going to kill op: op: 20290.0
m31000| Fri Feb 22 11:57:22.536 [conn1] going to kill op: op: 20291.0
m31000| Fri Feb 22 11:57:22.539 [conn995] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.539 [conn995] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|42 } } cursorid:383482130849782 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.539 [conn995] ClientCursor::find(): cursor not found in map '383482130849782' (ok after a drop)
m31001| Fri Feb 22 11:57:22.539 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.540 [conn995] end connection 165.225.128.186:34521 (7 connections now open)
m31000| Fri Feb 22 11:57:22.540 [initandlisten] connection accepted from 165.225.128.186:42787 #997 (8 connections now open)
m31000| Fri Feb 22 11:57:22.540 [conn996] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.540 [conn996] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|42 } } cursorid:383481463279113 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.540 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.541 [conn996] end connection 165.225.128.186:53717 (7 connections now open)
m31000| Fri Feb 22 11:57:22.541 [initandlisten] connection accepted from 165.225.128.186:50105 #998 (8 connections now open)
m31000| Fri Feb 22 11:57:22.637 [conn1] going to kill op: op: 20329.0
m31000| Fri Feb 22 11:57:22.637 [conn1] going to kill op: op: 20328.0
m31000| Fri Feb 22 11:57:22.643 [conn997] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.643 [conn997] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|52 } } cursorid:383916259081666 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:22.643 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.643 [conn998] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.643 [conn997] end connection 165.225.128.186:42787 (7 connections now open)
m31000| Fri Feb 22 11:57:22.643 [conn998] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|52 } } cursorid:383919052160600 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:22.643 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.644 [conn998] end connection 165.225.128.186:50105 (6 connections now open)
m31000| Fri Feb 22 11:57:22.644 [initandlisten] connection accepted from 165.225.128.186:40809 #999 (8 connections now open)
m31000| Fri Feb 22 11:57:22.644 [initandlisten] connection accepted from 165.225.128.186:44751 #1000 (8 connections now open)
m31000| Fri Feb 22 11:57:22.737 [conn1] going to kill op: op: 20367.0
m31000| Fri Feb 22 11:57:22.738 [conn1] going to kill op: op: 20366.0
m31000| Fri Feb 22 11:57:22.747 [conn999] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.747 [conn1000] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.747 [conn999] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|62 } } cursorid:384358068335122 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:95 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.747 [conn1000] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|62 } } cursorid:384358580126964 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:22.747 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:22.747 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.747 [conn999] end connection 165.225.128.186:40809 (7 connections now open)
m31000| Fri Feb 22 11:57:22.747 [conn1000] end connection 165.225.128.186:44751 (7 connections now open)
m31000| Fri Feb 22 11:57:22.747 [initandlisten] connection accepted from 165.225.128.186:57547 #1001 (7 connections now open)
m31000| Fri Feb 22 11:57:22.747 [initandlisten] connection accepted from 165.225.128.186:39975 #1002 (8 connections now open)
m31000| Fri Feb 22 11:57:22.838 [conn1] going to kill op: op: 20403.0
m31000| Fri Feb 22 11:57:22.839 [conn1] going to kill op: op: 20404.0
m31000| Fri Feb 22 11:57:22.840 [conn1002] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.840 [conn1001] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.840 [conn1002] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|72 } } cursorid:384796461279317 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.840 [conn1001] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|72 } } cursorid:384796617170984 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.840 [conn1002] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:22.840 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:22.840 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.840 [conn1001] end connection 165.225.128.186:57547 (7 connections now open)
m31000| Fri Feb 22 11:57:22.840 [conn1002] end connection 165.225.128.186:39975 (6 connections now open)
m31000| Fri Feb 22 11:57:22.840 [initandlisten] connection accepted from 165.225.128.186:40186 #1003 (7 connections now open)
m31000| Fri Feb 22 11:57:22.840 [initandlisten] connection accepted from 165.225.128.186:50363 #1004 (8 connections now open)
m31000| Fri Feb 22 11:57:22.939 [conn1] going to kill op: op: 20441.0
m31000| Fri Feb 22 11:57:22.939 [conn1] going to kill op: op: 20442.0
m31000| Fri Feb 22 11:57:22.942 [conn1003] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.942 [conn1003] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|82 } } cursorid:385191971215383 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:22.943 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.943 [conn1003] end connection 165.225.128.186:40186 (7 connections now open)
m31000| Fri Feb 22 11:57:22.943 [conn1004] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:22.943 [conn1004] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|82 } } cursorid:385190204543932 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:22.943 [initandlisten] connection accepted from 165.225.128.186:40953 #1005 (8 connections now open)
m31002| Fri Feb 22 11:57:22.943 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:22.943 [conn1004] end connection 165.225.128.186:50363 (7 connections now open)
m31000| Fri Feb 22 11:57:22.944 [initandlisten] connection accepted from 165.225.128.186:56742 #1006 (8 connections now open)
m31000| Fri Feb 22 11:57:23.040 [conn1] going to kill op: op: 20482.0
m31000| Fri Feb 22 11:57:23.040 [conn1] going to kill op: op: 20480.0
m31000| Fri Feb 22 11:57:23.041 [conn1] going to kill op: op: 20479.0
m31000| Fri Feb 22 11:57:23.046 [conn1005] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.046 [conn1005] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|92 } } cursorid:385629197086318 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.046 [conn1006] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.046 [conn1006] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534242000|92 } } cursorid:385628890238466 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:91 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:23.046 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:23.046 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.046 [conn1005] end connection 165.225.128.186:40953 (7 connections now open)
m31000| Fri Feb 22 11:57:23.046 [conn1006] end connection 165.225.128.186:56742 (7 connections now open)
m31000| Fri Feb 22 11:57:23.046 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.046 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|3 } } cursorid:386015220244034 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.047 [initandlisten] connection accepted from 165.225.128.186:62402 #1007 (7 connections now open)
m31000| Fri Feb 22 11:57:23.047 [initandlisten] connection accepted from 165.225.128.186:33550 #1008 (8 connections now open)
m31001| Fri Feb 22 11:57:23.048 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.141 [conn1] going to kill op: op: 20522.0
m31000| Fri Feb 22 11:57:23.141 [conn1] going to kill op: op: 20523.0
m31000| Fri Feb 22 11:57:23.149 [conn1007] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.149 [conn1008] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.149 [conn1007] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|4 } } cursorid:386066296765953 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.150 [conn1008] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|4 } } cursorid:386067998805802 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:23.150 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:23.150 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.150 [conn1007] end connection 165.225.128.186:62402 (7 connections now open)
m31000| Fri Feb 22 11:57:23.150 [conn1008] end connection 165.225.128.186:33550 (7 connections now open)
m31000| Fri Feb 22 11:57:23.150 [initandlisten] connection accepted from 165.225.128.186:35349 #1009 (7 connections now open)
m31000| Fri Feb 22 11:57:23.150 [initandlisten] connection accepted from 165.225.128.186:45186 #1010 (8 connections now open)
m31000| Fri Feb 22 11:57:23.242 [conn1] going to kill op: op: 20557.0
m31000| Fri Feb 22 11:57:23.242 [conn1] going to kill op: op: 20558.0
m31000| Fri Feb 22 11:57:23.243 [conn1009] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.243 [conn1009] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|14 } } cursorid:386505086524764 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.243 [conn1010] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.243 [conn1010] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|14 } } cursorid:386505344481184 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.243 [conn1009] ClientCursor::find(): cursor not found in map '386505086524764' (ok after a drop)
m31001| Fri Feb 22 11:57:23.243 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:23.243 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.243 [conn1009] end connection 165.225.128.186:35349 (7 connections now open)
m31000| Fri Feb 22 11:57:23.243 [conn1010] end connection 165.225.128.186:45186 (7 connections now open)
m31000| Fri Feb 22 11:57:23.243 [initandlisten] connection accepted from 165.225.128.186:36429 #1011 (7 connections now open)
m31000| Fri Feb 22 11:57:23.243 [initandlisten] connection accepted from 165.225.128.186:53655 #1012 (8 connections now open)
m31000| Fri Feb 22 11:57:23.343 [conn1] going to kill op: op: 20598.0
m31000| Fri Feb 22 11:57:23.343 [conn1] going to kill op: op: 20596.0
m31000| Fri Feb 22 11:57:23.343 [conn1] going to kill op: op: 20595.0
m31000| Fri Feb 22 11:57:23.346 [conn1012] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.346 [conn1011] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.346 [conn1012] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|23 } } cursorid:386900001093595 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:97 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.346 [conn1011] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|23 } } cursorid:386899917106334 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:89 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:23.346 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:23.346 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.346 [conn1011] end connection 165.225.128.186:36429 (7 connections now open)
m31000| Fri Feb 22 11:57:23.346 [conn1012] end connection 165.225.128.186:53655 (7 connections now open)
m31000| Fri Feb 22 11:57:23.347 [initandlisten] connection accepted from 165.225.128.186:51018 #1013 (7 connections now open)
m31000| Fri Feb 22 11:57:23.347 [initandlisten] connection accepted from 165.225.128.186:32965 #1014 (8 connections now open)
m31000| Fri Feb 22 11:57:23.348 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.348 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|33 } } cursorid:387285882493671 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:23.348 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.444 [conn1] going to kill op: op: 20637.0
m31000| Fri Feb 22 11:57:23.444 [conn1] going to kill op: op: 20636.0
m31000| Fri Feb 22 11:57:23.450 [conn1014] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.450 [conn1014] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|33 } } cursorid:387337884259308 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.450 [conn1013] { $err: "operation was interrupted", code: 11601 }
m31001| Fri Feb 22 11:57:23.450 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.450 [conn1013] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|33 } } cursorid:387338422977967 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.450 [conn1014] end connection 165.225.128.186:32965 (7 connections now open)
m31002| Fri Feb 22 11:57:23.450 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.450 [conn1013] end connection 165.225.128.186:51018 (6 connections now open)
m31000| Fri Feb 22 11:57:23.451 [initandlisten] connection accepted from 165.225.128.186:60463 #1015 (7 connections now open)
m31000| Fri Feb 22 11:57:23.451 [initandlisten] connection accepted from 165.225.128.186:60406 #1016 (8 connections now open)
m31000| Fri Feb 22 11:57:23.545 [conn1] going to kill op: op: 20676.0
m31000| Fri Feb 22 11:57:23.545 [conn1] going to kill op: op: 20675.0
m31000| Fri Feb 22 11:57:23.553 [conn1016] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.553 [conn1016] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|43 } } cursorid:387776272366058 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.553 [conn1016] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:23.553 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.554 [conn1015] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.554 [conn1015] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|43 } } cursorid:387775855382867 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:117 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:23.554 [conn1016] end connection 165.225.128.186:60406 (7 connections now open)
m31001| Fri Feb 22 11:57:23.554 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.554 [conn1015] end connection 165.225.128.186:60463 (6 connections now open)
m31000| Fri Feb 22 11:57:23.554 [initandlisten] connection accepted from 165.225.128.186:57531 #1017 (7 connections now open)
m31000| Fri Feb 22 11:57:23.562 [initandlisten] connection accepted from 165.225.128.186:39986 #1018 (8 connections now open)
m31000| Fri Feb 22 11:57:23.646 [conn1] going to kill op: op: 20711.0
m31000| Fri Feb 22 11:57:23.646 [conn1017] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.646 [conn1] going to kill op: op: 20713.0
m31000| Fri Feb 22 11:57:23.646 [conn1017] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|54 } } cursorid:388209591234693 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:23.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.646 [conn1017] end connection 165.225.128.186:57531 (7 connections now open)
m31000| Fri Feb 22 11:57:23.647 [initandlisten] connection accepted from 165.225.128.186:60598 #1019 (8 connections now open)
m31000| Fri Feb 22 11:57:23.654 [conn1018] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.654 [conn1018] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|54 } } cursorid:388214620731793 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:23.654 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.654 [conn1018] end connection 165.225.128.186:39986 (7 connections now open)
m31000| Fri Feb 22 11:57:23.655 [initandlisten] connection accepted from 165.225.128.186:35259 #1020 (8 connections now open)
m31000| Fri Feb 22 11:57:23.747 [conn1] going to kill op: op: 20748.0
m31000| Fri Feb 22 11:57:23.747 [conn1] going to kill op: op: 20749.0
m31000| Fri Feb 22 11:57:23.747 [conn1020] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.747 [conn1020] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|64 } } cursorid:388566122531568 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:23.747 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.747 [conn1020] end connection 165.225.128.186:35259 (7 connections now open)
m31000| Fri Feb 22 11:57:23.747 [initandlisten] connection accepted from 165.225.128.186:63482 #1021 (8 connections now open)
m31000| Fri Feb 22 11:57:23.749 [conn1019] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.749 [conn1019] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|63 } } cursorid:388563116499461 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:23.749 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:23.749 [conn1019] end connection 165.225.128.186:60598 (7 connections now open)
m31000| Fri Feb 22 11:57:23.749 [initandlisten] connection accepted from 165.225.128.186:38089 #1022 (8 connections now open)
m31000| Fri Feb 22 11:57:23.847 [conn1] going to kill op: op: 20787.0
m31000| Fri Feb 22 11:57:23.848 [conn1] going to kill op: op: 20786.0
m31000| Fri Feb 22 11:57:23.850 [conn1021] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:23.850 [conn1021] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|73 } } cursorid:388958361242547 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:23.850
[rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:23.850 [conn1021] end connection 165.225.128.186:63482 (7 connections now open) m31000| Fri Feb 22 11:57:23.850 [initandlisten] connection accepted from 165.225.128.186:42971 #1023 (8 connections now open) m31000| Fri Feb 22 11:57:23.851 [conn1022] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:23.851 [conn1022] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|73 } } cursorid:388961873539516 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:23.851 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:23.851 [conn1022] end connection 165.225.128.186:38089 (7 connections now open) m31000| Fri Feb 22 11:57:23.852 [initandlisten] connection accepted from 165.225.128.186:41175 #1024 (8 connections now open) m31000| Fri Feb 22 11:57:23.948 [conn1] going to kill op: op: 20825.0 m31000| Fri Feb 22 11:57:23.948 [conn1] going to kill op: op: 20824.0 m31000| Fri Feb 22 11:57:23.952 [conn1023] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:23.952 [conn1023] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|83 } } cursorid:389395750753946 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:23.953 [conn1023] ClientCursor::find(): cursor not found in map '389395750753946' (ok after a drop) m31001| Fri Feb 22 11:57:23.953 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:23.953 [conn1023] end connection 165.225.128.186:42971 (7 connections now open) m31000| Fri Feb 22 11:57:23.953 [initandlisten] connection accepted from 165.225.128.186:51737 #1025 (8 connections now open) m31000| Fri Feb 22 11:57:23.953 [conn1024] { $err: "operation 
was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:23.953 [conn1024] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|83 } } cursorid:389399896676539 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:23.954 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:23.954 [conn1024] end connection 165.225.128.186:41175 (7 connections now open) m31000| Fri Feb 22 11:57:23.954 [initandlisten] connection accepted from 165.225.128.186:35011 #1026 (8 connections now open) m31000| Fri Feb 22 11:57:24.049 [conn1] going to kill op: op: 20865.0 m31000| Fri Feb 22 11:57:24.049 [conn1] going to kill op: op: 20862.0 m31000| Fri Feb 22 11:57:24.049 [conn1] going to kill op: op: 20863.0 m31000| Fri Feb 22 11:57:24.055 [conn1025] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.056 [conn1025] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|93 } } cursorid:389833212944596 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.056 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.056 [conn1025] end connection 165.225.128.186:51737 (7 connections now open) m31000| Fri Feb 22 11:57:24.056 [initandlisten] connection accepted from 165.225.128.186:37661 #1027 (8 connections now open) m31000| Fri Feb 22 11:57:24.056 [conn1026] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.056 [conn1026] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534243000|93 } } cursorid:389838374760228 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.056 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:57:24.056 [conn1026] end connection 165.225.128.186:35011 (7 connections now open) m31000| Fri Feb 22 11:57:24.057 [initandlisten] connection accepted from 165.225.128.186:57585 #1028 (8 connections now open) m31000| Fri Feb 22 11:57:24.058 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.058 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|4 } } cursorid:390224332884963 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.058 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.150 [conn1] going to kill op: op: 20907.0 m31000| Fri Feb 22 11:57:24.150 [conn1] going to kill op: op: 20906.0 m31000| Fri Feb 22 11:57:24.159 [conn1027] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.159 [conn1027] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|4 } } cursorid:390270901992165 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:24.159 [conn1028] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.159 [conn1028] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|4 } } cursorid:390276898650141 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.159 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:24.159 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.159 [conn1027] end connection 165.225.128.186:37661 (7 connections now open) m31000| Fri Feb 22 11:57:24.159 [conn1028] end connection 165.225.128.186:57585 (7 connections now open) m31000| Fri Feb 22 11:57:24.159 [initandlisten] connection accepted from 
165.225.128.186:59357 #1029 (7 connections now open) m31000| Fri Feb 22 11:57:24.159 [initandlisten] connection accepted from 165.225.128.186:43653 #1030 (8 connections now open) m31000| Fri Feb 22 11:57:24.251 [conn1] going to kill op: op: 20942.0 m31000| Fri Feb 22 11:57:24.251 [conn1] going to kill op: op: 20941.0 m31000| Fri Feb 22 11:57:24.251 [conn1030] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.251 [conn1030] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|14 } } cursorid:390714951561287 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:24.251 [conn1029] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.252 [conn1030] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31000| Fri Feb 22 11:57:24.252 [conn1029] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|14 } } cursorid:390714509381977 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.252 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.252 [conn1030] end connection 165.225.128.186:43653 (7 connections now open) m31002| Fri Feb 22 11:57:24.252 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.252 [conn1029] end connection 165.225.128.186:59357 (6 connections now open) m31000| Fri Feb 22 11:57:24.252 [initandlisten] connection accepted from 165.225.128.186:60726 #1031 (7 connections now open) m31000| Fri Feb 22 11:57:24.252 [initandlisten] connection accepted from 165.225.128.186:58663 #1032 (8 connections now open) m31000| Fri Feb 22 11:57:24.351 [conn1] going to kill op: op: 20984.0 m31000| Fri Feb 22 11:57:24.352 [conn1] going to kill op: op: 20982.0 m31000| Fri Feb 22 11:57:24.352 [conn1] going to kill 
op: op: 20981.0 m31000| Fri Feb 22 11:57:24.354 [conn1032] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.354 [conn1032] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|23 } } cursorid:391108333731450 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:40 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.354 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.354 [conn1032] end connection 165.225.128.186:58663 (7 connections now open) m31000| Fri Feb 22 11:57:24.354 [conn1031] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.354 [conn1031] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|23 } } cursorid:391110096409338 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:24.354 [initandlisten] connection accepted from 165.225.128.186:33494 #1033 (8 connections now open) m31001| Fri Feb 22 11:57:24.355 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.355 [conn1031] end connection 165.225.128.186:60726 (7 connections now open) m31000| Fri Feb 22 11:57:24.355 [initandlisten] connection accepted from 165.225.128.186:49135 #1034 (8 connections now open) m31000| Fri Feb 22 11:57:24.359 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.359 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|34 } } cursorid:391496200903411 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.359 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.453 [conn1] going to kill op: op: 21022.0 m31000| Fri Feb 22 11:57:24.453 [conn1] going to kill op: op: 21023.0 m31000| Fri Feb 22 
11:57:24.457 [conn1033] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.457 [conn1033] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|34 } } cursorid:391547331711507 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:96 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.457 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.457 [conn1033] end connection 165.225.128.186:33494 (7 connections now open) m31000| Fri Feb 22 11:57:24.457 [conn1034] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.457 [conn1034] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|34 } } cursorid:391547777921683 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.457 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.457 [conn1034] end connection 165.225.128.186:49135 (6 connections now open) m31000| Fri Feb 22 11:57:24.457 [initandlisten] connection accepted from 165.225.128.186:64582 #1035 (7 connections now open) m31000| Fri Feb 22 11:57:24.457 [initandlisten] connection accepted from 165.225.128.186:39751 #1036 (8 connections now open) m31000| Fri Feb 22 11:57:24.553 [conn1] going to kill op: op: 21060.0 m31000| Fri Feb 22 11:57:24.553 [conn1] going to kill op: op: 21061.0 m31000| Fri Feb 22 11:57:24.559 [conn1036] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.559 [conn1035] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.559 [conn1036] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|44 } } cursorid:391985910332835 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:24.559 [conn1035] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|44 } } cursorid:391986107364579 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:44 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:24.559 [conn1035] ClientCursor::find(): cursor not found in map '391986107364579' (ok after a drop) m31001| Fri Feb 22 11:57:24.559 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:24.559 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.560 [conn1036] end connection 165.225.128.186:39751 (7 connections now open) m31000| Fri Feb 22 11:57:24.560 [conn1035] end connection 165.225.128.186:64582 (7 connections now open) m31000| Fri Feb 22 11:57:24.560 [initandlisten] connection accepted from 165.225.128.186:49693 #1037 (7 connections now open) m31000| Fri Feb 22 11:57:24.560 [initandlisten] connection accepted from 165.225.128.186:44266 #1038 (8 connections now open) m31000| Fri Feb 22 11:57:24.654 [conn1] going to kill op: op: 21099.0 m31000| Fri Feb 22 11:57:24.654 [conn1] going to kill op: op: 21098.0 m31000| Fri Feb 22 11:57:24.662 [conn1037] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.662 [conn1037] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|54 } } cursorid:392424245664984 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.662 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.662 [conn1037] end connection 165.225.128.186:49693 (7 connections now open) m31000| Fri Feb 22 11:57:24.662 [initandlisten] connection accepted from 165.225.128.186:46651 #1039 (8 connections now open) m31000| Fri Feb 22 11:57:24.663 [conn1038] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.663 [conn1038] getmore local.oplog.rs query: { ts: { 
$gte: Timestamp 1361534244000|54 } } cursorid:392423310515452 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.663 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.663 [conn1038] end connection 165.225.128.186:44266 (7 connections now open) m31000| Fri Feb 22 11:57:24.664 [initandlisten] connection accepted from 165.225.128.186:52452 #1040 (8 connections now open) m31000| Fri Feb 22 11:57:24.755 [conn1] going to kill op: op: 21136.0 m31000| Fri Feb 22 11:57:24.755 [conn1] going to kill op: op: 21134.0 m31000| Fri Feb 22 11:57:24.756 [conn1040] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.756 [conn1040] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|64 } } cursorid:392862699451625 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.756 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.756 [conn1040] end connection 165.225.128.186:52452 (7 connections now open) m31000| Fri Feb 22 11:57:24.756 [initandlisten] connection accepted from 165.225.128.186:59373 #1041 (8 connections now open) m31000| Fri Feb 22 11:57:24.764 [conn1039] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.764 [conn1039] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|64 } } cursorid:392858173235055 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:39 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.764 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.764 [conn1039] end connection 165.225.128.186:46651 (7 connections now open) m31000| Fri Feb 22 11:57:24.764 [initandlisten] connection accepted from 165.225.128.186:35251 
#1042 (8 connections now open) m31000| Fri Feb 22 11:57:24.855 [conn1] going to kill op: op: 21172.0 m31000| Fri Feb 22 11:57:24.856 [conn1] going to kill op: op: 21171.0 m31000| Fri Feb 22 11:57:24.856 [conn1042] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.856 [conn1042] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|74 } } cursorid:393256467488221 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:24.857 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.857 [conn1042] end connection 165.225.128.186:35251 (7 connections now open) m31000| Fri Feb 22 11:57:24.857 [initandlisten] connection accepted from 165.225.128.186:64817 #1043 (8 connections now open) m31000| Fri Feb 22 11:57:24.858 [conn1041] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.858 [conn1041] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|73 } } cursorid:393252654985121 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.858 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.858 [conn1041] end connection 165.225.128.186:59373 (7 connections now open) m31000| Fri Feb 22 11:57:24.859 [initandlisten] connection accepted from 165.225.128.186:44741 #1044 (8 connections now open) m31000| Fri Feb 22 11:57:24.956 [conn1] going to kill op: op: 21209.0 m31000| Fri Feb 22 11:57:24.956 [conn1] going to kill op: op: 21210.0 m31000| Fri Feb 22 11:57:24.959 [conn1043] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.959 [conn1043] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|83 } } cursorid:393648187401144 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 
locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:24.959 [conn1043] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:24.959 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.959 [conn1043] end connection 165.225.128.186:64817 (7 connections now open) m31000| Fri Feb 22 11:57:24.959 [initandlisten] connection accepted from 165.225.128.186:50925 #1045 (8 connections now open) m31000| Fri Feb 22 11:57:24.961 [conn1044] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:24.961 [conn1044] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|83 } } cursorid:393651219837463 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:24.961 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:24.961 [conn1044] end connection 165.225.128.186:44741 (7 connections now open) m31000| Fri Feb 22 11:57:24.961 [initandlisten] connection accepted from 165.225.128.186:55548 #1046 (8 connections now open) m31000| Fri Feb 22 11:57:25.057 [conn1] going to kill op: op: 21250.0 m31000| Fri Feb 22 11:57:25.057 [conn1] going to kill op: op: 21248.0 m31000| Fri Feb 22 11:57:25.061 [conn1045] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.061 [conn1045] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|93 } } cursorid:394085713475067 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:25.061 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.062 [conn1045] end connection 165.225.128.186:50925 (7 connections now open) m31000| Fri Feb 22 11:57:25.062 [initandlisten] connection accepted from 165.225.128.186:49684 #1047 (8 connections now 
open) m31000| Fri Feb 22 11:57:25.063 [conn1046] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.063 [conn1046] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534244000|93 } } cursorid:394091093402071 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:25.063 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.063 [conn1046] end connection 165.225.128.186:55548 (7 connections now open) m31000| Fri Feb 22 11:57:25.064 [initandlisten] connection accepted from 165.225.128.186:41049 #1048 (8 connections now open) m31000| Fri Feb 22 11:57:25.158 [conn1] going to kill op: op: 21301.0 m31000| Fri Feb 22 11:57:25.158 [conn1] going to kill op: op: 21298.0 m31000| Fri Feb 22 11:57:25.164 [conn1047] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.164 [conn1047] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|5 } } cursorid:394524415883407 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:25.164 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.164 [conn1047] end connection 165.225.128.186:49684 (7 connections now open) m31000| Fri Feb 22 11:57:25.164 [initandlisten] connection accepted from 165.225.128.186:45206 #1049 (8 connections now open) m31000| Fri Feb 22 11:57:25.166 [conn1048] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.166 [conn1048] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|6 } } cursorid:394528612017443 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:25.166 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:57:25.166 [conn1048] end connection 165.225.128.186:41049 (7 connections now open) m31000| Fri Feb 22 11:57:25.166 [initandlisten] connection accepted from 165.225.128.186:50278 #1050 (8 connections now open) m31000| Fri Feb 22 11:57:25.259 [conn1] going to kill op: op: 21349.0 m31000| Fri Feb 22 11:57:25.259 [conn1] going to kill op: op: 21350.0 m31000| Fri Feb 22 11:57:25.259 [conn1] going to kill op: op: 21348.0 m31000| Fri Feb 22 11:57:25.266 [conn1049] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.266 [conn1049] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|15 } } cursorid:394962631787362 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:25.266 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.266 [conn1049] end connection 165.225.128.186:45206 (7 connections now open) m31000| Fri Feb 22 11:57:25.267 [initandlisten] connection accepted from 165.225.128.186:38062 #1051 (8 connections now open) m31000| Fri Feb 22 11:57:25.267 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.267 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|5 } } cursorid:394476860250070 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:25.267 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.268 [conn1050] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.268 [conn1050] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|16 } } cursorid:394965795690083 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:25.268 [conn1050] ClientCursor::find(): cursor not found in 
map '394965795690083' (ok after a drop) m31002| Fri Feb 22 11:57:25.268 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.268 [conn1050] end connection 165.225.128.186:50278 (7 connections now open) m31000| Fri Feb 22 11:57:25.268 [initandlisten] connection accepted from 165.225.128.186:59492 #1052 (8 connections now open) m31000| Fri Feb 22 11:57:25.360 [conn1] going to kill op: op: 21388.0 m31000| Fri Feb 22 11:57:25.360 [conn1] going to kill op: op: 21386.0 m31000| Fri Feb 22 11:57:25.360 [conn1052] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.360 [conn1052] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|26 } } cursorid:395404003331361 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:25.360 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.360 [conn1052] end connection 165.225.128.186:59492 (7 connections now open) m31000| Fri Feb 22 11:57:25.361 [initandlisten] connection accepted from 165.225.128.186:56696 #1053 (8 connections now open) m31000| Fri Feb 22 11:57:25.369 [conn1051] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.369 [conn1051] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|26 } } cursorid:395400276768563 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:25.369 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.369 [conn1051] end connection 165.225.128.186:38062 (7 connections now open) m31000| Fri Feb 22 11:57:25.369 [initandlisten] connection accepted from 165.225.128.186:43109 #1054 (8 connections now open) m31000| Fri Feb 22 11:57:25.460 [conn1] going to kill op: op: 21435.0 m31000| Fri Feb 22 11:57:25.461 [conn1] 
going to kill op: op: 21434.0 m31000| Fri Feb 22 11:57:25.461 [conn1] going to kill op: op: 21433.0 m31000| Fri Feb 22 11:57:25.461 [conn1054] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.461 [conn1054] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|36 } } cursorid:395798648740014 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:25.461 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.461 [conn1054] end connection 165.225.128.186:43109 (7 connections now open) m31000| Fri Feb 22 11:57:25.462 [initandlisten] connection accepted from 165.225.128.186:54997 #1055 (8 connections now open) m31000| Fri Feb 22 11:57:25.463 [conn1053] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.463 [conn1053] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|35 } } cursorid:395796054410986 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:25.463 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.463 [conn1053] end connection 165.225.128.186:56696 (7 connections now open) m31000| Fri Feb 22 11:57:25.463 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:25.463 [initandlisten] connection accepted from 165.225.128.186:62155 #1056 (8 connections now open) m31000| Fri Feb 22 11:57:25.463 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|35 } } cursorid:395790731250651 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:25.464 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:25.561 [conn1] going to kill op: op: 21474.0 
m31000| Fri Feb 22 11:57:25.562 [conn1] going to kill op: op: 21473.0
m31000| Fri Feb 22 11:57:25.564 [conn1055] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.564 [conn1055] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|45 } } cursorid:396189918933829 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:25.564 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.564 [conn1055] end connection 165.225.128.186:54997 (7 connections now open)
m31000| Fri Feb 22 11:57:25.565 [initandlisten] connection accepted from 165.225.128.186:58883 #1057 (8 connections now open)
m31000| Fri Feb 22 11:57:25.565 [conn1056] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.565 [conn1056] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|45 } } cursorid:396193891002438 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:25.565 [conn1056] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:25.565 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.566 [conn1056] end connection 165.225.128.186:62155 (7 connections now open)
m31000| Fri Feb 22 11:57:25.566 [initandlisten] connection accepted from 165.225.128.186:42958 #1058 (8 connections now open)
m31000| Fri Feb 22 11:57:25.662 [conn1] going to kill op: op: 21512.0
m31000| Fri Feb 22 11:57:25.663 [conn1] going to kill op: op: 21511.0
m31000| Fri Feb 22 11:57:25.667 [conn1057] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.667 [conn1057] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|55 } } cursorid:396628694456157 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:25.667 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.667 [conn1057] end connection 165.225.128.186:58883 (7 connections now open)
m31000| Fri Feb 22 11:57:25.668 [initandlisten] connection accepted from 165.225.128.186:37168 #1059 (8 connections now open)
m31000| Fri Feb 22 11:57:25.668 [conn1058] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.668 [conn1058] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|55 } } cursorid:396632348170499 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:93 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:25.668 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.669 [conn1058] end connection 165.225.128.186:42958 (7 connections now open)
m31000| Fri Feb 22 11:57:25.669 [initandlisten] connection accepted from 165.225.128.186:63156 #1060 (8 connections now open)
m31000| Fri Feb 22 11:57:25.763 [conn1] going to kill op: op: 21550.0
m31000| Fri Feb 22 11:57:25.764 [conn1] going to kill op: op: 21549.0
m31000| Fri Feb 22 11:57:25.770 [conn1059] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.770 [conn1059] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|65 } } cursorid:397066374689992 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:25.771 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.771 [conn1059] end connection 165.225.128.186:37168 (7 connections now open)
m31000| Fri Feb 22 11:57:25.771 [conn1060] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.771 [conn1060] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|65 } } cursorid:397071839959692 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:25.771 [initandlisten] connection accepted from 165.225.128.186:60888 #1061 (8 connections now open)
m31002| Fri Feb 22 11:57:25.771 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.771 [conn1060] end connection 165.225.128.186:63156 (7 connections now open)
m31000| Fri Feb 22 11:57:25.771 [initandlisten] connection accepted from 165.225.128.186:64821 #1062 (8 connections now open)
m31000| Fri Feb 22 11:57:25.864 [conn1] going to kill op: op: 21588.0
m31000| Fri Feb 22 11:57:25.864 [conn1] going to kill op: op: 21587.0
m31000| Fri Feb 22 11:57:25.874 [conn1061] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.874 [conn1061] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|75 } } cursorid:397507973343462 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:25.874 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.874 [conn1062] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.874 [conn1062] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|75 } } cursorid:397508888230745 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:25.874 [conn1061] end connection 165.225.128.186:60888 (7 connections now open)
m31002| Fri Feb 22 11:57:25.874 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.874 [conn1062] end connection 165.225.128.186:64821 (6 connections now open)
m31000| Fri Feb 22 11:57:25.874 [initandlisten] connection accepted from 165.225.128.186:60896 #1063 (7 connections now open)
m31000| Fri Feb 22 11:57:25.874 [initandlisten] connection accepted from 165.225.128.186:52762 #1064 (8 connections now open)
m31000| Fri Feb 22 11:57:25.965 [conn1] going to kill op: op: 21623.0
m31000| Fri Feb 22 11:57:25.965 [conn1] going to kill op: op: 21624.0
m31000| Fri Feb 22 11:57:25.967 [conn1063] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.967 [conn1064] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:25.967 [conn1063] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|85 } } cursorid:397946722463325 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:87 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:25.967 [conn1064] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|85 } } cursorid:397946200077480 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:25.967 [conn1064] ClientCursor::find(): cursor not found in map '397946200077480' (ok after a drop)
m31001| Fri Feb 22 11:57:25.967 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:25.967 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:25.967 [conn1063] end connection 165.225.128.186:60896 (7 connections now open)
m31000| Fri Feb 22 11:57:25.967 [conn1064] end connection 165.225.128.186:52762 (7 connections now open)
m31000| Fri Feb 22 11:57:25.967 [initandlisten] connection accepted from 165.225.128.186:54600 #1065 (7 connections now open)
m31000| Fri Feb 22 11:57:25.970 [initandlisten] connection accepted from 165.225.128.186:38717 #1066 (8 connections now open)
m31000| Fri Feb 22 11:57:26.066 [conn1] going to kill op: op: 21662.0
m31000| Fri Feb 22 11:57:26.066 [conn1] going to kill op: op: 21663.0
m31000| Fri Feb 22 11:57:26.070 [conn1065] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.070 [conn1065] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|95 } } cursorid:398338261948730 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.070 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.070 [conn1065] end connection 165.225.128.186:54600 (7 connections now open)
m31000| Fri Feb 22 11:57:26.070 [initandlisten] connection accepted from 165.225.128.186:42248 #1067 (8 connections now open)
m31000| Fri Feb 22 11:57:26.073 [conn1066] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.073 [conn1066] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534245000|95 } } cursorid:398342772435066 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.073 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.073 [conn1066] end connection 165.225.128.186:38717 (7 connections now open)
m31000| Fri Feb 22 11:57:26.073 [initandlisten] connection accepted from 165.225.128.186:36067 #1068 (8 connections now open)
m31000| Fri Feb 22 11:57:26.167 [conn1] going to kill op: op: 21702.0
m31000| Fri Feb 22 11:57:26.167 [conn1] going to kill op: op: 21703.0
m31000| Fri Feb 22 11:57:26.172 [conn1067] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.172 [conn1067] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|6 } } cursorid:398733292645669 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.172 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.172 [conn1067] end connection 165.225.128.186:42248 (7 connections now open)
m31000| Fri Feb 22 11:57:26.173 [initandlisten] connection accepted from 165.225.128.186:55357 #1069 (8 connections now open)
m31000| Fri Feb 22 11:57:26.175 [conn1068] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.175 [conn1068] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|6 } } cursorid:398738050598405 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.175 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.175 [conn1068] end connection 165.225.128.186:36067 (7 connections now open)
m31000| Fri Feb 22 11:57:26.176 [initandlisten] connection accepted from 165.225.128.186:43687 #1070 (8 connections now open)
m31000| Fri Feb 22 11:57:26.268 [conn1] going to kill op: op: 21741.0
m31000| Fri Feb 22 11:57:26.268 [conn1] going to kill op: op: 21738.0
m31000| Fri Feb 22 11:57:26.275 [conn1069] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.275 [conn1069] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|16 } } cursorid:399128827771741 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.275 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.275 [conn1069] end connection 165.225.128.186:55357 (7 connections now open)
m31000| Fri Feb 22 11:57:26.275 [initandlisten] connection accepted from 165.225.128.186:64333 #1071 (8 connections now open)
m31000| Fri Feb 22 11:57:26.368 [conn1] going to kill op: op: 21784.0
m31000| Fri Feb 22 11:57:26.369 [conn1] going to kill op: op: 21786.0
m31000| Fri Feb 22 11:57:26.369 [conn1] going to kill op: op: 21783.0
m31000| Fri Feb 22 11:57:26.369 [conn1070] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.369 [conn1070] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|16 } } cursorid:399131618270343 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.369 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.369 [conn1070] end connection 165.225.128.186:43687 (7 connections now open)
m31000| Fri Feb 22 11:57:26.370 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.370 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|25 } } cursorid:399517868631339 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:26.370 [initandlisten] connection accepted from 165.225.128.186:56185 #1072 (8 connections now open)
m31000| Fri Feb 22 11:57:26.370 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:26.371 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.377 [conn1071] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.377 [conn1071] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|26 } } cursorid:399523766241546 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.377 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.378 [conn1071] end connection 165.225.128.186:64333 (7 connections now open)
m31000| Fri Feb 22 11:57:26.378 [initandlisten] connection accepted from 165.225.128.186:52828 #1073 (8 connections now open)
m31000| Fri Feb 22 11:57:26.469 [conn1] going to kill op: op: 21823.0
m31000| Fri Feb 22 11:57:26.470 [conn1] going to kill op: op: 21822.0
m31000| Fri Feb 22 11:57:26.470 [conn1073] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.470 [conn1073] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|36 } } cursorid:399918686365073 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.470 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.470 [conn1073] end connection 165.225.128.186:52828 (7 connections now open)
m31000| Fri Feb 22 11:57:26.470 [initandlisten] connection accepted from 165.225.128.186:43381 #1074 (8 connections now open)
m31000| Fri Feb 22 11:57:26.472 [conn1072] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.472 [conn1072] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|35 } } cursorid:399913138683305 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.472 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.472 [conn1072] end connection 165.225.128.186:56185 (7 connections now open)
m31000| Fri Feb 22 11:57:26.472 [initandlisten] connection accepted from 165.225.128.186:47176 #1075 (8 connections now open)
m31000| Fri Feb 22 11:57:26.570 [conn1] going to kill op: op: 21871.0
m31000| Fri Feb 22 11:57:26.570 [conn1] going to kill op: op: 21870.0
m31000| Fri Feb 22 11:57:26.570 [conn1] going to kill op: op: 21873.0
m31000| Fri Feb 22 11:57:26.572 [conn1074] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.572 [conn1074] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|45 } } cursorid:400308389535891 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.572 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.572 [conn1074] end connection 165.225.128.186:43381 (7 connections now open)
m31000| Fri Feb 22 11:57:26.573 [initandlisten] connection accepted from 165.225.128.186:57482 #1076 (8 connections now open)
m31000| Fri Feb 22 11:57:26.573 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.573 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|45 } } cursorid:400262796881258 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.573 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.574 [conn1075] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.574 [conn1075] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|45 } } cursorid:400312518285103 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.574 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.575 [conn1075] end connection 165.225.128.186:47176 (7 connections now open)
m31000| Fri Feb 22 11:57:26.575 [initandlisten] connection accepted from 165.225.128.186:58145 #1077 (8 connections now open)
m31000| Fri Feb 22 11:57:26.671 [conn1] going to kill op: op: 21912.0
m31000| Fri Feb 22 11:57:26.671 [conn1] going to kill op: op: 21913.0
m31000| Fri Feb 22 11:57:26.675 [conn1076] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.675 [conn1076] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|55 } } cursorid:400746302971565 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.675 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.675 [conn1076] end connection 165.225.128.186:57482 (7 connections now open)
m31000| Fri Feb 22 11:57:26.675 [initandlisten] connection accepted from 165.225.128.186:43935 #1078 (8 connections now open)
m31000| Fri Feb 22 11:57:26.677 [conn1077] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.677 [conn1077] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|56 } } cursorid:400751448363514 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:26.677 [conn1077] ClientCursor::find(): cursor not found in map '400751448363514' (ok after a drop)
m31001| Fri Feb 22 11:57:26.677 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.677 [conn1077] end connection 165.225.128.186:58145 (7 connections now open)
m31000| Fri Feb 22 11:57:26.677 [initandlisten] connection accepted from 165.225.128.186:43752 #1079 (8 connections now open)
m31000| Fri Feb 22 11:57:26.772 [conn1] going to kill op: op: 21950.0
m31000| Fri Feb 22 11:57:26.772 [conn1] going to kill op: op: 21951.0
m31000| Fri Feb 22 11:57:26.778 [conn1078] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.778 [conn1078] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|66 } } cursorid:401185810023922 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.778 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.778 [conn1078] end connection 165.225.128.186:43935 (7 connections now open)
m31000| Fri Feb 22 11:57:26.779 [initandlisten] connection accepted from 165.225.128.186:62747 #1080 (8 connections now open)
m31000| Fri Feb 22 11:57:26.780 [conn1079] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.780 [conn1079] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|66 } } cursorid:401188668844212 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.780 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.780 [conn1079] end connection 165.225.128.186:43752 (7 connections now open)
m31000| Fri Feb 22 11:57:26.780 [initandlisten] connection accepted from 165.225.128.186:53174 #1081 (8 connections now open)
m31000| Fri Feb 22 11:57:26.873 [conn1] going to kill op: op: 21988.0
m31000| Fri Feb 22 11:57:26.873 [conn1] going to kill op: op: 21986.0
m31000| Fri Feb 22 11:57:26.881 [conn1080] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.881 [conn1080] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|76 } } cursorid:401623940746794 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.881 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.881 [conn1080] end connection 165.225.128.186:62747 (7 connections now open)
m31000| Fri Feb 22 11:57:26.882 [initandlisten] connection accepted from 165.225.128.186:57659 #1082 (8 connections now open)
m31000| Fri Feb 22 11:57:26.974 [conn1] going to kill op: op: 22019.0
m31000| Fri Feb 22 11:57:26.974 [conn1082] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.974 [conn1] going to kill op: op: 22020.0
m31000| Fri Feb 22 11:57:26.974 [conn1082] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|86 } } cursorid:402061281628880 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:26.974 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.974 [conn1082] end connection 165.225.128.186:57659 (7 connections now open)
m31000| Fri Feb 22 11:57:26.975 [initandlisten] connection accepted from 165.225.128.186:47422 #1083 (8 connections now open)
m31000| Fri Feb 22 11:57:26.975 [conn1081] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:26.975 [conn1081] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|76 } } cursorid:401628520025023 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:26.975 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:26.975 [conn1081] end connection 165.225.128.186:53174 (7 connections now open)
m31000| Fri Feb 22 11:57:26.975 [initandlisten] connection accepted from 165.225.128.186:46276 #1084 (8 connections now open)
m31000| Fri Feb 22 11:57:27.074 [conn1] going to kill op: op: 22057.0
m31000| Fri Feb 22 11:57:27.075 [conn1] going to kill op: op: 22058.0
m31000| Fri Feb 22 11:57:27.077 [conn1083] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.077 [conn1083] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|95 } } cursorid:402452151458383 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.077 [conn1084] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.077 [conn1084] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534246000|95 } } cursorid:402457267108667 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:27.077 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.077 [conn1083] end connection 165.225.128.186:47422 (7 connections now open)
m31000| Fri Feb 22 11:57:27.077 [conn1084] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:27.078 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.078 [conn1084] end connection 165.225.128.186:46276 (6 connections now open)
m31000| Fri Feb 22 11:57:27.078 [initandlisten] connection accepted from 165.225.128.186:38239 #1085 (7 connections now open)
m31000| Fri Feb 22 11:57:27.078 [initandlisten] connection accepted from 165.225.128.186:62712 #1086 (8 connections now open)
m31000| Fri Feb 22 11:57:27.175 [conn1] going to kill op: op: 22098.0
m31000| Fri Feb 22 11:57:27.176 [conn1] going to kill op: op: 22097.0
m31000| Fri Feb 22 11:57:27.180 [conn1085] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.180 [conn1085] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|6 } } cursorid:402894551396663 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:27.180 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.180 [conn1086] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.180 [conn1085] end connection 165.225.128.186:38239 (7 connections now open)
m31000| Fri Feb 22 11:57:27.180 [conn1086] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|6 } } cursorid:402895415083165 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.181 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.181 [conn1086] end connection 165.225.128.186:62712 (6 connections now open)
m31000| Fri Feb 22 11:57:27.181 [initandlisten] connection accepted from 165.225.128.186:40023 #1087 (7 connections now open)
m31000| Fri Feb 22 11:57:27.181 [initandlisten] connection accepted from 165.225.128.186:38812 #1088 (8 connections now open)
m31000| Fri Feb 22 11:57:27.276 [conn1] going to kill op: op: 22136.0
m31000| Fri Feb 22 11:57:27.276 [conn1] going to kill op: op: 22135.0
m31000| Fri Feb 22 11:57:27.283 [conn1087] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.283 [conn1087] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|16 } } cursorid:403332672491408 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.283 [conn1088] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.283 [conn1088] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|16 } } cursorid:403333591933922 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:27.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.283 [conn1087] end connection 165.225.128.186:40023 (7 connections now open)
m31001| Fri Feb 22 11:57:27.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.283 [conn1088] end connection 165.225.128.186:38812 (6 connections now open)
m31000| Fri Feb 22 11:57:27.283 [initandlisten] connection accepted from 165.225.128.186:60380 #1089 (7 connections now open)
m31000| Fri Feb 22 11:57:27.284 [initandlisten] connection accepted from 165.225.128.186:34119 #1090 (8 connections now open)
m31000| Fri Feb 22 11:57:27.377 [conn1] going to kill op: op: 22174.0
m31000| Fri Feb 22 11:57:27.377 [conn1] going to kill op: op: 22176.0
m31000| Fri Feb 22 11:57:27.377 [conn1] going to kill op: op: 22177.0
m31000| Fri Feb 22 11:57:27.381 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.381 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|36 } } cursorid:404114534014546 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.381 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.386 [conn1090] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.386 [conn1090] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|26 } } cursorid:403770735709759 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.386 [conn1089] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.386 [conn1089] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|26 } } cursorid:403771408834525 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.386 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:27.386 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.386 [conn1090] end connection 165.225.128.186:34119 (7 connections now open)
m31000| Fri Feb 22 11:57:27.386 [conn1089] end connection 165.225.128.186:60380 (7 connections now open)
m31000| Fri Feb 22 11:57:27.386 [initandlisten] connection accepted from 165.225.128.186:41745 #1091 (7 connections now open)
m31000| Fri Feb 22 11:57:27.387 [initandlisten] connection accepted from 165.225.128.186:55216 #1092 (8 connections now open)
m31000| Fri Feb 22 11:57:27.478 [conn1] going to kill op: op: 22213.0
m31000| Fri Feb 22 11:57:27.478 [conn1] going to kill op: op: 22212.0
m31000| Fri Feb 22 11:57:27.478 [conn1092] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.478 [conn1092] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|37 } } cursorid:404209246124906 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.478 [conn1092] ClientCursor::find(): cursor not found in map '404209246124906' (ok after a drop)
m31002| Fri Feb 22 11:57:27.478 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.479 [conn1092] end connection 165.225.128.186:55216 (7 connections now open)
m31000| Fri Feb 22 11:57:27.479 [conn1091] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.479 [conn1091] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|37 } } cursorid:404209836396879 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.479 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.479 [conn1091] end connection 165.225.128.186:41745 (6 connections now open)
m31000| Fri Feb 22 11:57:27.479 [initandlisten] connection accepted from 165.225.128.186:61616 #1093 (8 connections now open)
m31000| Fri Feb 22 11:57:27.479 [initandlisten] connection accepted from 165.225.128.186:61296 #1094 (8 connections now open)
m31000| Fri Feb 22 11:57:27.578 [conn1] going to kill op: op: 22253.0
m31000| Fri Feb 22 11:57:27.579 [conn1] going to kill op: op: 22250.0
m31000| Fri Feb 22 11:57:27.579 [conn1] going to kill op: op: 22251.0
m31000| Fri Feb 22 11:57:27.581 [conn1093] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.581 [conn1093] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|46 } } cursorid:404603964077881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:27.581 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.581 [conn1093] end connection 165.225.128.186:61616 (7 connections now open)
m31000| Fri Feb 22 11:57:27.581 [conn1094] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.581 [conn1094] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|46 } } cursorid:404605054810295 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.581 [initandlisten] connection accepted from 165.225.128.186:34281 #1095 (8 connections now open)
m31001| Fri Feb 22 11:57:27.581 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.582 [conn1094] end connection 165.225.128.186:61296 (7 connections now open)
m31000| Fri Feb 22 11:57:27.582 [initandlisten] connection accepted from 165.225.128.186:58659 #1096 (8 connections now open)
m31000| Fri Feb 22 11:57:27.584 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.584 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|56 } } cursorid:404990905638662 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:27.584 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.679 [conn1] going to kill op: op: 22291.0
m31000| Fri Feb 22 11:57:27.680 [conn1] going to kill op: op: 22292.0
m31000| Fri Feb 22 11:57:27.683 [conn1095] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.683 [conn1095] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|56 } } cursorid:405038666447341 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:27.683 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.683 [conn1095] end connection 165.225.128.186:34281 (7 connections now open)
m31000| Fri Feb 22 11:57:27.684 [conn1096] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.684 [initandlisten] connection accepted from 165.225.128.186:39121 #1097 (8 connections now open)
m31000| Fri Feb 22 11:57:27.684 [conn1096] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|56 } } cursorid:405041425066171 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.684 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.684 [conn1096] end connection 165.225.128.186:58659 (7 connections now open)
m31000| Fri Feb 22 11:57:27.684 [initandlisten] connection accepted from 165.225.128.186:34088 #1098 (8 connections now open)
m31000| Fri Feb 22 11:57:27.780 [conn1] going to kill op: op: 22330.0
m31000| Fri Feb 22 11:57:27.780 [conn1] going to kill op: op: 22329.0
m31000| Fri Feb 22 11:57:27.786 [conn1097] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.786 [conn1097] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|66 } } cursorid:405476369923118 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.786 [conn1097] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:27.786 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.786 [conn1097] end connection 165.225.128.186:39121 (7 connections now open)
m31000| Fri Feb 22 11:57:27.786 [conn1098] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.786 [conn1098] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|66 } } cursorid:405481194268225 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.787 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.787 [conn1098] end connection 165.225.128.186:34088 (6 connections now open)
m31000| Fri Feb 22 11:57:27.787 [initandlisten] connection accepted from 165.225.128.186:58918 #1099 (7 connections now open)
m31000| Fri Feb 22 11:57:27.787 [initandlisten] connection accepted from 165.225.128.186:60846 #1100 (8 connections now open)
m31000| Fri Feb 22 11:57:27.881 [conn1] going to kill op: op: 22367.0
m31000| Fri Feb 22 11:57:27.881 [conn1] going to kill op: op: 22368.0
m31000| Fri Feb 22 11:57:27.889 [conn1100] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.889 [conn1100] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|76 } } cursorid:405918143582381 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:27.889 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.889 [conn1099] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:27.889 [conn1099] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|76 } } cursorid:405917636465040 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:27.889 [conn1100] end connection 165.225.128.186:60846 (7 connections now open)
m31002| Fri Feb 22 11:57:27.889 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:27.889 [conn1099] end connection 165.225.128.186:58918 (6 connections now open)
m31000| Fri Feb 22 11:57:27.890 [initandlisten] connection accepted from 165.225.128.186:62947 #1101 (7 connections now open)
m31000| Fri Feb 22 11:57:27.890 [initandlisten] connection accepted from 165.225.128.186:41826 #1102 (8 connections now open)
m31000| Fri Feb 22 11:57:27.982 [conn1] going to kill op: op: 22402.0
m31000| Fri Feb 22 11:57:27.982 [conn1] going to kill op: op: 22403.0
m31000|
Fri Feb 22 11:57:27.983 [conn1102] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:27.983 [conn1102] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|86 } } cursorid:406355724269284 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:27.983 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:27.983 [conn1102] end connection 165.225.128.186:41826 (7 connections now open) m31000| Fri Feb 22 11:57:27.983 [initandlisten] connection accepted from 165.225.128.186:60843 #1103 (8 connections now open) m31000| Fri Feb 22 11:57:28.083 [conn1] going to kill op: op: 22440.0 m31000| Fri Feb 22 11:57:28.083 [conn1] going to kill op: op: 22438.0 m31000| Fri Feb 22 11:57:28.084 [conn1101] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.084 [conn1101] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|86 } } cursorid:406355492753881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.084 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.084 [conn1101] end connection 165.225.128.186:62947 (7 connections now open) m31000| Fri Feb 22 11:57:28.085 [initandlisten] connection accepted from 165.225.128.186:64706 #1104 (8 connections now open) m31000| Fri Feb 22 11:57:28.086 [conn1103] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.086 [conn1103] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534247000|95 } } cursorid:406748156873952 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.086 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.087 
[conn1103] end connection 165.225.128.186:60843 (7 connections now open) m31000| Fri Feb 22 11:57:28.087 [initandlisten] connection accepted from 165.225.128.186:54712 #1105 (8 connections now open) m31000| Fri Feb 22 11:57:28.184 [conn1] going to kill op: op: 22480.0 m31000| Fri Feb 22 11:57:28.184 [conn1] going to kill op: op: 22478.0 m31000| Fri Feb 22 11:57:28.187 [conn1104] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.187 [conn1104] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|7 } } cursorid:407181736873169 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.187 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.188 [conn1104] end connection 165.225.128.186:64706 (7 connections now open) m31000| Fri Feb 22 11:57:28.188 [initandlisten] connection accepted from 165.225.128.186:59996 #1106 (8 connections now open) m31000| Fri Feb 22 11:57:28.189 [conn1105] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.189 [conn1105] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|8 } } cursorid:407185758372675 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:28.189 [conn1105] ClientCursor::find(): cursor not found in map '407185758372675' (ok after a drop) m31002| Fri Feb 22 11:57:28.189 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.189 [conn1105] end connection 165.225.128.186:54712 (7 connections now open) m31000| Fri Feb 22 11:57:28.190 [initandlisten] connection accepted from 165.225.128.186:61974 #1107 (8 connections now open) m31000| Fri Feb 22 11:57:28.261 [conn542] end connection 165.225.128.186:34688 (7 connections now open) m31000| Fri Feb 22 11:57:28.261 [initandlisten] connection 
accepted from 165.225.128.186:51780 #1108 (8 connections now open) m31000| Fri Feb 22 11:57:28.284 [conn1] going to kill op: op: 22520.0 m31000| Fri Feb 22 11:57:28.285 [conn1] going to kill op: op: 22519.0 m31000| Fri Feb 22 11:57:28.290 [conn1106] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.290 [conn1106] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|17 } } cursorid:407619367637512 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.290 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.290 [conn1106] end connection 165.225.128.186:59996 (7 connections now open) m31000| Fri Feb 22 11:57:28.291 [initandlisten] connection accepted from 165.225.128.186:54889 #1109 (8 connections now open) m31000| Fri Feb 22 11:57:28.292 [conn1107] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.292 [conn1107] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|18 } } cursorid:407623602613539 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.292 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.293 [conn1107] end connection 165.225.128.186:61974 (7 connections now open) m31000| Fri Feb 22 11:57:28.293 [initandlisten] connection accepted from 165.225.128.186:50837 #1110 (8 connections now open) m31000| Fri Feb 22 11:57:28.385 [conn1] going to kill op: op: 22557.0 m31000| Fri Feb 22 11:57:28.386 [conn1] going to kill op: op: 22559.0 m31000| Fri Feb 22 11:57:28.386 [conn1] going to kill op: op: 22560.0 m31000| Fri Feb 22 11:57:28.392 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.392 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|37 } } 
cursorid:408367021590694 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.392 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.393 [conn1109] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.393 [conn1109] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|28 } } cursorid:408018665792112 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.393 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.393 [conn1109] end connection 165.225.128.186:54889 (7 connections now open) m31000| Fri Feb 22 11:57:28.394 [initandlisten] connection accepted from 165.225.128.186:59764 #1111 (8 connections now open) m31000| Fri Feb 22 11:57:28.395 [conn1110] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.395 [conn1110] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|28 } } cursorid:408023241714107 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.395 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.395 [conn1110] end connection 165.225.128.186:50837 (7 connections now open) m31000| Fri Feb 22 11:57:28.396 [initandlisten] connection accepted from 165.225.128.186:46693 #1112 (8 connections now open) m31000| Fri Feb 22 11:57:28.486 [conn1] going to kill op: op: 22596.0 m31000| Fri Feb 22 11:57:28.487 [conn1] going to kill op: op: 22598.0 m31000| Fri Feb 22 11:57:28.487 [conn1112] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.487 [conn1112] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|38 } } cursorid:408460023124576 
ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.488 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.488 [conn1112] end connection 165.225.128.186:46693 (7 connections now open) m31000| Fri Feb 22 11:57:28.488 [initandlisten] connection accepted from 165.225.128.186:47669 #1113 (8 connections now open) m31000| Fri Feb 22 11:57:28.496 [conn1111] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.496 [conn1111] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|38 } } cursorid:408457771187722 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:28.496 [conn1111] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:28.496 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.496 [conn1111] end connection 165.225.128.186:59764 (7 connections now open) m31000| Fri Feb 22 11:57:28.496 [initandlisten] connection accepted from 165.225.128.186:35848 #1114 (8 connections now open) m31000| Fri Feb 22 11:57:28.587 [conn1] going to kill op: op: 22636.0 m31000| Fri Feb 22 11:57:28.587 [conn1] going to kill op: op: 22634.0 m31000| Fri Feb 22 11:57:28.587 [conn1] going to kill op: op: 22633.0 m31000| Fri Feb 22 11:57:28.588 [conn1114] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.588 [conn1114] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|48 } } cursorid:408856968503416 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.588 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.588 [conn1114] end connection 165.225.128.186:35848 (7 
connections now open) m31000| Fri Feb 22 11:57:28.589 [initandlisten] connection accepted from 165.225.128.186:48149 #1115 (8 connections now open) m31000| Fri Feb 22 11:57:28.590 [conn1113] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.590 [conn1113] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|47 } } cursorid:408852328520019 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.590 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.590 [conn1113] end connection 165.225.128.186:47669 (7 connections now open) m31000| Fri Feb 22 11:57:28.591 [initandlisten] connection accepted from 165.225.128.186:45501 #1116 (8 connections now open) m31000| Fri Feb 22 11:57:28.594 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.594 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|57 } } cursorid:409198829996449 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.595 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.688 [conn1] going to kill op: op: 22675.0 m31000| Fri Feb 22 11:57:28.688 [conn1] going to kill op: op: 22674.0 m31000| Fri Feb 22 11:57:28.691 [conn1115] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.691 [conn1115] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|57 } } cursorid:409246267161477 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.691 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.691 [conn1115] end connection 165.225.128.186:48149 (7 connections now open) m31000| Fri 
Feb 22 11:57:28.691 [initandlisten] connection accepted from 165.225.128.186:58967 #1117 (8 connections now open) m31000| Fri Feb 22 11:57:28.693 [conn1116] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.693 [conn1116] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|57 } } cursorid:409251618220026 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.693 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.693 [conn1116] end connection 165.225.128.186:45501 (7 connections now open) m31000| Fri Feb 22 11:57:28.693 [initandlisten] connection accepted from 165.225.128.186:44946 #1118 (8 connections now open) m31000| Fri Feb 22 11:57:28.789 [conn1] going to kill op: op: 22712.0 m31000| Fri Feb 22 11:57:28.789 [conn1] going to kill op: op: 22713.0 m31000| Fri Feb 22 11:57:28.793 [conn1117] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.794 [conn1117] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|67 } } cursorid:409685463925539 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.794 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.794 [conn1117] end connection 165.225.128.186:58967 (7 connections now open) m31000| Fri Feb 22 11:57:28.794 [initandlisten] connection accepted from 165.225.128.186:51095 #1119 (8 connections now open) m31000| Fri Feb 22 11:57:28.795 [conn1118] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.795 [conn1118] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|67 } } cursorid:409689531805245 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 
22 11:57:28.795 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.795 [conn1118] end connection 165.225.128.186:44946 (7 connections now open) m31000| Fri Feb 22 11:57:28.796 [initandlisten] connection accepted from 165.225.128.186:34674 #1120 (8 connections now open) m31000| Fri Feb 22 11:57:28.890 [conn1] going to kill op: op: 22750.0 m31000| Fri Feb 22 11:57:28.890 [conn1] going to kill op: op: 22751.0 m31000| Fri Feb 22 11:57:28.896 [conn1119] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.896 [conn1119] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|77 } } cursorid:410123468025903 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:28.897 [conn1119] ClientCursor::find(): cursor not found in map '410123468025903' (ok after a drop) m31001| Fri Feb 22 11:57:28.897 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.897 [conn1119] end connection 165.225.128.186:51095 (7 connections now open) m31000| Fri Feb 22 11:57:28.897 [initandlisten] connection accepted from 165.225.128.186:53987 #1121 (8 connections now open) m31000| Fri Feb 22 11:57:28.898 [conn1120] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.898 [conn1120] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|77 } } cursorid:410127145055553 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:28.898 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.898 [conn1120] end connection 165.225.128.186:34674 (7 connections now open) m31000| Fri Feb 22 11:57:28.898 [initandlisten] connection accepted from 165.225.128.186:51746 #1122 (8 connections now open) m31000| Fri Feb 22 11:57:28.991 [conn1] going 
to kill op: op: 22789.0 m31000| Fri Feb 22 11:57:28.991 [conn1] going to kill op: op: 22791.0 m31000| Fri Feb 22 11:57:28.999 [conn1121] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:28.999 [conn1121] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|87 } } cursorid:410561631118231 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:28.999 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:28.999 [conn1121] end connection 165.225.128.186:53987 (7 connections now open) m31000| Fri Feb 22 11:57:29.000 [initandlisten] connection accepted from 165.225.128.186:36549 #1123 (8 connections now open) m31000| Fri Feb 22 11:57:29.000 [conn1122] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.001 [conn1122] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|87 } } cursorid:410565508593179 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:29.001 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.001 [conn1122] end connection 165.225.128.186:51746 (7 connections now open) m31000| Fri Feb 22 11:57:29.001 [initandlisten] connection accepted from 165.225.128.186:34385 #1124 (8 connections now open) m31000| Fri Feb 22 11:57:29.092 [conn1] going to kill op: op: 22827.0 m31000| Fri Feb 22 11:57:29.092 [conn1] going to kill op: op: 22826.0 m31000| Fri Feb 22 11:57:29.092 [conn1123] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.092 [conn1123] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|97 } } cursorid:410999513285278 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.092 
[rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.093 [conn1123] end connection 165.225.128.186:36549 (7 connections now open) m31000| Fri Feb 22 11:57:29.093 [initandlisten] connection accepted from 165.225.128.186:44846 #1125 (8 connections now open) m31000| Fri Feb 22 11:57:29.093 [conn1124] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.093 [conn1124] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534248000|98 } } cursorid:411003472834831 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:29.093 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.093 [conn1124] end connection 165.225.128.186:34385 (7 connections now open) m31000| Fri Feb 22 11:57:29.094 [initandlisten] connection accepted from 165.225.128.186:43892 #1126 (8 connections now open) m31000| Fri Feb 22 11:57:29.192 [conn1] going to kill op: op: 22866.0 m31000| Fri Feb 22 11:57:29.193 [conn1] going to kill op: op: 22867.0 m31000| Fri Feb 22 11:57:29.195 [conn1125] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.195 [conn1125] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|9 } } cursorid:411394497346751 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.195 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.195 [conn1125] end connection 165.225.128.186:44846 (7 connections now open) m31000| Fri Feb 22 11:57:29.196 [initandlisten] connection accepted from 165.225.128.186:51301 #1127 (8 connections now open) m31000| Fri Feb 22 11:57:29.196 [conn1126] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.196 [conn1126] getmore local.oplog.rs query: { ts: { $gte: 
Timestamp 1361534249000|9 } } cursorid:411398614267091 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:29.196 [conn1126] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:29.196 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.196 [conn1126] end connection 165.225.128.186:43892 (7 connections now open) m31000| Fri Feb 22 11:57:29.197 [initandlisten] connection accepted from 165.225.128.186:53518 #1128 (8 connections now open) m31000| Fri Feb 22 11:57:29.293 [conn1] going to kill op: op: 22905.0 m31000| Fri Feb 22 11:57:29.293 [conn1] going to kill op: op: 22904.0 m31000| Fri Feb 22 11:57:29.298 [conn1127] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.298 [conn1127] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|19 } } cursorid:411831710908671 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.298 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.298 [conn1127] end connection 165.225.128.186:51301 (7 connections now open) m31000| Fri Feb 22 11:57:29.298 [initandlisten] connection accepted from 165.225.128.186:39535 #1129 (8 connections now open) m31000| Fri Feb 22 11:57:29.298 [conn1128] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.299 [conn1128] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|19 } } cursorid:411836532013143 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:29.299 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.299 [conn1128] end connection 165.225.128.186:53518 (7 connections now 
open) m31000| Fri Feb 22 11:57:29.299 [initandlisten] connection accepted from 165.225.128.186:64533 #1130 (8 connections now open) m31000| Fri Feb 22 11:57:29.394 [conn1] going to kill op: op: 22945.0 m31000| Fri Feb 22 11:57:29.394 [conn1] going to kill op: op: 22942.0 m31000| Fri Feb 22 11:57:29.394 [conn1] going to kill op: op: 22943.0 m31000| Fri Feb 22 11:57:29.400 [conn1129] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.400 [conn1129] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|29 } } cursorid:412270295504757 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.400 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.401 [conn1129] end connection 165.225.128.186:39535 (7 connections now open) m31000| Fri Feb 22 11:57:29.401 [initandlisten] connection accepted from 165.225.128.186:34351 #1131 (8 connections now open) m31000| Fri Feb 22 11:57:29.401 [conn1130] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.401 [conn1130] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|29 } } cursorid:412274927855581 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:29.401 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.401 [conn1130] end connection 165.225.128.186:64533 (7 connections now open) m31000| Fri Feb 22 11:57:29.401 [initandlisten] connection accepted from 165.225.128.186:60676 #1132 (8 connections now open) m31000| Fri Feb 22 11:57:29.403 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.403 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|39 } } cursorid:412660794171455 ntoreturn:0 keyUpdates:0 exception: operation was 
interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.403 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.495 [conn1] going to kill op: op: 22984.0 m31000| Fri Feb 22 11:57:29.495 [conn1] going to kill op: op: 22983.0 m31000| Fri Feb 22 11:57:29.503 [conn1131] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.503 [conn1131] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|39 } } cursorid:412709400920007 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.503 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.503 [conn1131] end connection 165.225.128.186:34351 (7 connections now open) m31000| Fri Feb 22 11:57:29.503 [initandlisten] connection accepted from 165.225.128.186:36114 #1133 (8 connections now open) m31000| Fri Feb 22 11:57:29.504 [conn1132] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.504 [conn1132] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|39 } } cursorid:412713935067787 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:29.504 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.504 [conn1132] end connection 165.225.128.186:60676 (7 connections now open) m31000| Fri Feb 22 11:57:29.504 [initandlisten] connection accepted from 165.225.128.186:58602 #1134 (8 connections now open) m31000| Fri Feb 22 11:57:29.596 [conn1] going to kill op: op: 23019.0 m31000| Fri Feb 22 11:57:29.596 [conn1134] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.596 [conn1134] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|49 } } cursorid:413151779914796 ntoreturn:0 
keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:29.596 [conn1134] ClientCursor::find(): cursor not found in map '413151779914796' (ok after a drop) m31002| Fri Feb 22 11:57:29.596 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.596 [conn1134] end connection 165.225.128.186:58602 (7 connections now open) m31000| Fri Feb 22 11:57:29.597 [initandlisten] connection accepted from 165.225.128.186:40975 #1135 (8 connections now open) m31000| Fri Feb 22 11:57:29.696 [conn1] going to kill op: op: 23064.0 m31000| Fri Feb 22 11:57:29.696 [conn1] going to kill op: op: 23063.0 m31000| Fri Feb 22 11:57:29.697 [conn1] going to kill op: op: 23062.0 m31000| Fri Feb 22 11:57:29.697 [conn1133] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.697 [conn1133] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|49 } } cursorid:413145908319554 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:29.697 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.697 [conn1133] end connection 165.225.128.186:36114 (7 connections now open) m31000| Fri Feb 22 11:57:29.698 [initandlisten] connection accepted from 165.225.128.186:49735 #1136 (8 connections now open) m31000| Fri Feb 22 11:57:29.699 [conn1135] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:29.699 [conn1135] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|58 } } cursorid:413542496597642 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:29.699 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:29.699 [conn1135] end connection 165.225.128.186:40975 (7 
connections now open)
m31000| Fri Feb 22 11:57:29.699 [initandlisten] connection accepted from 165.225.128.186:64938 #1137 (8 connections now open)
m31000| Fri Feb 22 11:57:29.699 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:29.699 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|58 } } cursorid:413537973393471 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:29.700 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:29.797 [conn1] going to kill op: op: 23105.0
m31000| Fri Feb 22 11:57:29.797 [conn1] going to kill op: op: 23103.0
m31000| Fri Feb 22 11:57:29.800 [conn1136] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:29.800 [conn1136] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|68 } } cursorid:413976334921821 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:29.800 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:29.800 [conn1136] end connection 165.225.128.186:49735 (7 connections now open)
m31000| Fri Feb 22 11:57:29.801 [initandlisten] connection accepted from 165.225.128.186:36270 #1138 (8 connections now open)
m31000| Fri Feb 22 11:57:29.802 [conn1137] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:29.802 [conn1137] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|68 } } cursorid:413979111201733 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:29.802 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:29.802 [conn1137] end connection 165.225.128.186:64938 (7 connections now open)
m31000| Fri Feb 22 11:57:29.802 [initandlisten] connection accepted from 165.225.128.186:58156 #1139 (8 connections now open)
m31000| Fri Feb 22 11:57:29.898 [conn1] going to kill op: op: 23144.0
m31000| Fri Feb 22 11:57:29.898 [conn1] going to kill op: op: 23143.0
m31000| Fri Feb 22 11:57:29.903 [conn1138] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:29.903 [conn1138] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|78 } } cursorid:414413217159042 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:92 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:29.903 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:29.903 [conn1138] end connection 165.225.128.186:36270 (7 connections now open)
m31000| Fri Feb 22 11:57:29.904 [initandlisten] connection accepted from 165.225.128.186:38625 #1140 (8 connections now open)
m31000| Fri Feb 22 11:57:29.904 [conn1139] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:29.904 [conn1139] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|79 } } cursorid:414417326681056 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:29.905 [conn1139] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:29.905 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:29.905 [conn1139] end connection 165.225.128.186:58156 (7 connections now open)
m31000| Fri Feb 22 11:57:29.905 [initandlisten] connection accepted from 165.225.128.186:45051 #1141 (8 connections now open)
m31000| Fri Feb 22 11:57:29.999 [conn1] going to kill op: op: 23181.0
m31000| Fri Feb 22 11:57:29.999 [conn1] going to kill op: op: 23182.0
m31000| Fri Feb 22 11:57:30.006 [conn1140] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.006 [conn1140] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|89 } } cursorid:414852927508900 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.006 [conn1140] getMore: cursorid not found local.oplog.rs 414852927508900
m31001| Fri Feb 22 11:57:30.006 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.006 [conn1140] end connection 165.225.128.186:38625 (7 connections now open)
m31000| Fri Feb 22 11:57:30.007 [initandlisten] connection accepted from 165.225.128.186:53050 #1142 (8 connections now open)
m31000| Fri Feb 22 11:57:30.007 [conn1141] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.007 [conn1141] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|89 } } cursorid:414855296393857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:30.007 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:30.007 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 11:57:30.007 [conn1141] end connection 165.225.128.186:45051 (7 connections now open)
m31000| Fri Feb 22 11:57:30.008 [initandlisten] connection accepted from 165.225.128.186:34036 #1143 (8 connections now open)
m31000| Fri Feb 22 11:57:30.100 [conn1] going to kill op: op: 23220.0
m31000| Fri Feb 22 11:57:30.100 [conn1] going to kill op: op: 23218.0
m31000| Fri Feb 22 11:57:30.109 [conn1142] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.109 [conn1142] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|99 } } cursorid:415289369425163 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:30.110 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.110 [conn1142] end connection 165.225.128.186:53050 (7 connections now open)
m31000| Fri Feb 22 11:57:30.110 [initandlisten] connection accepted from 165.225.128.186:40717 #1144 (8 connections now open)
m31000| Fri Feb 22 11:57:30.201 [conn1] going to kill op: op: 23254.0
m31000| Fri Feb 22 11:57:30.201 [conn1] going to kill op: op: 23253.0
m31000| Fri Feb 22 11:57:30.202 [conn1143] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.202 [conn1143] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534249000|99 } } cursorid:415294066026412 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:130 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:30.202 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.202 [conn1143] end connection 165.225.128.186:34036 (7 connections now open)
m31000| Fri Feb 22 11:57:30.202 [conn1144] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.202 [conn1144] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|10 } } cursorid:415727663776860 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:30.202 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.202 [initandlisten] connection accepted from 165.225.128.186:61700 #1145 (8 connections now open)
m31000| Fri Feb 22 11:57:30.202 [conn1144] end connection 165.225.128.186:40717 (7 connections now open)
m31000| Fri Feb 22 11:57:30.203 [initandlisten] connection accepted from 165.225.128.186:39529 #1146 (8 connections now open)
m31000| Fri Feb 22 11:57:30.301 [conn1] going to kill op: op: 23292.0
m31000| Fri Feb 22 11:57:30.301 [conn1] going to kill op: op: 23293.0
m31000| Fri Feb 22 11:57:30.305 [conn1145] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.305 [conn1145] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|19 } } cursorid:416123603554504 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.305 [conn1146] { $err: "operation was interrupted", code: 11601 }
m31002| Fri Feb 22 11:57:30.305 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.305 [conn1146] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|19 } } cursorid:416122823497598 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.305 [conn1145] end connection 165.225.128.186:61700 (7 connections now open)
m31001| Fri Feb 22 11:57:30.305 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.305 [conn1146] end connection 165.225.128.186:39529 (6 connections now open)
m31000| Fri Feb 22 11:57:30.305 [initandlisten] connection accepted from 165.225.128.186:40735 #1147 (7 connections now open)
m31000| Fri Feb 22 11:57:30.306 [initandlisten] connection accepted from 165.225.128.186:58074 #1148 (8 connections now open)
m31000| Fri Feb 22 11:57:30.402 [conn1] going to kill op: op: 23331.0
m31000| Fri Feb 22 11:57:30.402 [conn1] going to kill op: op: 23330.0
m31000| Fri Feb 22 11:57:30.408 [conn1147] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.408 [conn1147] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|29 } } cursorid:416560631103164 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.408 [conn1148] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.408 [conn1148] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|29 } } cursorid:416562235554967 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.408 [conn1147] ClientCursor::find(): cursor not found in map '416560631103164' (ok after a drop)
m31002| Fri Feb 22 11:57:30.408 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:30.408 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.408 [conn1147] end connection 165.225.128.186:40735 (7 connections now open)
m31000| Fri Feb 22 11:57:30.408 [conn1148] end connection 165.225.128.186:58074 (7 connections now open)
m31000| Fri Feb 22 11:57:30.408 [initandlisten] connection accepted from 165.225.128.186:55164 #1149 (7 connections now open)
m31000| Fri Feb 22 11:57:30.408 [initandlisten] connection accepted from 165.225.128.186:53976 #1150 (8 connections now open)
m31000| Fri Feb 22 11:57:30.503 [conn1] going to kill op: op: 23380.0
m31000| Fri Feb 22 11:57:30.503 [conn1] going to kill op: op: 23379.0
m31000| Fri Feb 22 11:57:30.503 [conn1] going to kill op: op: 23378.0
m31000| Fri Feb 22 11:57:30.510 [conn1149] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.511 [conn1149] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|39 } } cursorid:416999903427773 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:30.511 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.511 [conn1149] end connection 165.225.128.186:55164 (7 connections now open)
m31000| Fri Feb 22 11:57:30.511 [conn1150] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.511 [conn1150] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|39 } } cursorid:417000025751333 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:30.511 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.511 [initandlisten] connection accepted from 165.225.128.186:35306 #1151 (8 connections now open)
m31000| Fri Feb 22 11:57:30.511 [conn1150] end connection 165.225.128.186:53976 (7 connections now open)
m31000| Fri Feb 22 11:57:30.511 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.511 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|39 } } cursorid:416948055328748 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.511 [initandlisten] connection accepted from 165.225.128.186:37940 #1152 (8 connections now open)
m31001| Fri Feb 22 11:57:30.512 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.604 [conn1] going to kill op: op: 23421.0
m31000| Fri Feb 22 11:57:30.604 [conn1] going to kill op: op: 23420.0
m31000| Fri Feb 22 11:57:30.613 [conn1151] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.613 [conn1151] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|49 } } cursorid:417438266121791 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:30.613 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.613 [conn1151] end connection 165.225.128.186:35306 (7 connections now open)
m31000| Fri Feb 22 11:57:30.614 [conn1152] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.614 [conn1152] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|49 } } cursorid:417436654122507 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.614 [initandlisten] connection accepted from 165.225.128.186:40618 #1153 (8 connections now open)
m31001| Fri Feb 22 11:57:30.614 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.614 [conn1152] end connection 165.225.128.186:37940 (7 connections now open)
m31000| Fri Feb 22 11:57:30.614 [initandlisten] connection accepted from 165.225.128.186:54398 #1154 (8 connections now open)
m31000| Fri Feb 22 11:57:30.704 [conn1] going to kill op: op: 23458.0
m31000| Fri Feb 22 11:57:30.705 [conn1] going to kill op: op: 23456.0
m31000| Fri Feb 22 11:57:30.705 [conn1] going to kill op: op: 23455.0
m31000| Fri Feb 22 11:57:30.706 [conn1153] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.706 [conn1154] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.706 [conn1153] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|60 } } cursorid:417875387378394 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.706 [conn1154] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|60 } } cursorid:417876095133751 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.706 [conn1153] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:30.706 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:30.706 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.706 [conn1154] end connection 165.225.128.186:54398 (7 connections now open)
m31000| Fri Feb 22 11:57:30.706 [conn1153] end connection 165.225.128.186:40618 (7 connections now open)
m31000| Fri Feb 22 11:57:30.706 [initandlisten] connection accepted from 165.225.128.186:52449 #1155 (7 connections now open)
m31000| Fri Feb 22 11:57:30.706 [initandlisten] connection accepted from 165.225.128.186:57943 #1156 (8 connections now open)
m31000| Fri Feb 22 11:57:30.710 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.710 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|69 } } cursorid:418219070350546 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:30.711 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.805 [conn1] going to kill op: op: 23496.0
m31000| Fri Feb 22 11:57:30.805 [conn1] going to kill op: op: 23497.0
m31000| Fri Feb 22 11:57:30.808 [conn1156] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.808 [conn1155] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.808 [conn1156] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|69 } } cursorid:418269709983827 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:30.808 [conn1155] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|69 } } cursorid:418270947474522 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:30.808 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:30.808 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.808 [conn1156] end connection 165.225.128.186:57943 (7 connections now open)
m31000| Fri Feb 22 11:57:30.808 [conn1155] end connection 165.225.128.186:52449 (7 connections now open)
m31000| Fri Feb 22 11:57:30.809 [initandlisten] connection accepted from 165.225.128.186:55755 #1157 (7 connections now open)
m31000| Fri Feb 22 11:57:30.809 [initandlisten] connection accepted from 165.225.128.186:46441 #1158 (8 connections now open)
m31000| Fri Feb 22 11:57:30.906 [conn1] going to kill op: op: 23535.0
m31000| Fri Feb 22 11:57:30.906 [conn1] going to kill op: op: 23534.0
m31000| Fri Feb 22 11:57:30.911 [conn1158] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.911 [conn1158] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|79 } } cursorid:418708302590784 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:30.911 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.911 [conn1157] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:30.911 [conn1158] end connection 165.225.128.186:46441 (7 connections now open)
m31000| Fri Feb 22 11:57:30.911 [conn1157] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|79 } } cursorid:418707946318680 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:30.911 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:30.911 [conn1157] end connection 165.225.128.186:55755 (6 connections now open)
m31000| Fri Feb 22 11:57:30.911 [initandlisten] connection accepted from 165.225.128.186:37663 #1159 (8 connections now open)
m31000| Fri Feb 22 11:57:30.911 [initandlisten] connection accepted from 165.225.128.186:33657 #1160 (8 connections now open)
m31000| Fri Feb 22 11:57:31.007 [conn1] going to kill op: op: 23573.0
m31000| Fri Feb 22 11:57:31.007 [conn1] going to kill op: op: 23572.0
m31000| Fri Feb 22 11:57:31.013 [conn1159] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.013 [conn1159] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|89 } } cursorid:419146168478869 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.013 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.013 [conn1159] end connection 165.225.128.186:37663 (7 connections now open)
m31000| Fri Feb 22 11:57:31.014 [initandlisten] connection accepted from 165.225.128.186:56521 #1161 (8 connections now open)
m31000| Fri Feb 22 11:57:31.014 [conn1160] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.014 [conn1160] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|89 } } cursorid:419147633833251 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:31.014 [conn1160] ClientCursor::find(): cursor not found in map '419147633833251' (ok after a drop)
m31002| Fri Feb 22 11:57:31.014 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.014 [conn1160] end connection 165.225.128.186:33657 (7 connections now open)
m31000| Fri Feb 22 11:57:31.014 [initandlisten] connection accepted from 165.225.128.186:40064 #1162 (8 connections now open)
m31000| Fri Feb 22 11:57:31.108 [conn1] going to kill op: op: 23610.0
m31000| Fri Feb 22 11:57:31.108 [conn1] going to kill op: op: 23611.0
m31000| Fri Feb 22 11:57:31.116 [conn1161] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.116 [conn1161] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|99 } } cursorid:419580215399866 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:31.116 [conn1162] { $err: "operation was interrupted", code: 11601 }
m31001| Fri Feb 22 11:57:31.116 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.116 [conn1162] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534250000|99 } } cursorid:419585798719213 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:31.116 [conn1161] end connection 165.225.128.186:56521 (7 connections now open)
m31002| Fri Feb 22 11:57:31.116 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.116 [conn1162] end connection 165.225.128.186:40064 (6 connections now open)
m31000| Fri Feb 22 11:57:31.116 [initandlisten] connection accepted from 165.225.128.186:45866 #1163 (7 connections now open)
m31000| Fri Feb 22 11:57:31.117 [initandlisten] connection accepted from 165.225.128.186:42597 #1164 (8 connections now open)
m31000| Fri Feb 22 11:57:31.208 [conn1] going to kill op: op: 23648.0
m31000| Fri Feb 22 11:57:31.208 [conn1] going to kill op: op: 23647.0
m31000| Fri Feb 22 11:57:31.209 [conn1163] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.209 [conn1163] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|10 } } cursorid:420023844675571 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.209 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.209 [conn1163] end connection 165.225.128.186:45866 (7 connections now open)
m31000| Fri Feb 22 11:57:31.209 [initandlisten] connection accepted from 165.225.128.186:36417 #1165 (8 connections now open)
m31000| Fri Feb 22 11:57:31.309 [conn1] going to kill op: op: 23682.0
m31000| Fri Feb 22 11:57:31.309 [conn1] going to kill op: op: 23681.0
m31000| Fri Feb 22 11:57:31.310 [conn1164] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.310 [conn1164] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|10 } } cursorid:420023590455095 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.310 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.310 [conn1164] end connection 165.225.128.186:42597 (7 connections now open)
m31000| Fri Feb 22 11:57:31.310 [initandlisten] connection accepted from 165.225.128.186:43532 #1166 (8 connections now open)
m31000| Fri Feb 22 11:57:31.311 [conn1165] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.311 [conn1165] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|19 } } cursorid:420414457005860 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.311 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.311 [conn1165] end connection 165.225.128.186:36417 (7 connections now open)
m31000| Fri Feb 22 11:57:31.312 [initandlisten] connection accepted from 165.225.128.186:48872 #1167 (8 connections now open)
m31000| Fri Feb 22 11:57:31.410 [conn1] going to kill op: op: 23721.0
m31000| Fri Feb 22 11:57:31.410 [conn1] going to kill op: op: 23719.0
m31000| Fri Feb 22 11:57:31.413 [conn1166] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.413 [conn1166] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|29 } } cursorid:420847352476047 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.413 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.413 [conn1166] end connection 165.225.128.186:43532 (7 connections now open)
m31000| Fri Feb 22 11:57:31.413 [initandlisten] connection accepted from 165.225.128.186:39479 #1168 (8 connections now open)
m31000| Fri Feb 22 11:57:31.414 [conn1167] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.414 [conn1167] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|29 } } cursorid:420852096879015 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:31.414 [conn1167] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:31.414 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.414 [conn1167] end connection 165.225.128.186:48872 (7 connections now open)
m31000| Fri Feb 22 11:57:31.414 [initandlisten] connection accepted from 165.225.128.186:41919 #1169 (8 connections now open)
m31000| Fri Feb 22 11:57:31.511 [conn1] going to kill op: op: 23758.0
m31000| Fri Feb 22 11:57:31.511 [conn1] going to kill op: op: 23759.0
m31000| Fri Feb 22 11:57:31.515 [conn1168] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.515 [conn1168] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|39 } } cursorid:421286288760616 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.515 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.515 [conn1168] end connection 165.225.128.186:39479 (7 connections now open)
m31000| Fri Feb 22 11:57:31.516 [initandlisten] connection accepted from 165.225.128.186:39948 #1170 (8 connections now open)
m31000| Fri Feb 22 11:57:31.516 [conn1169] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.516 [conn1169] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|40 } } cursorid:421290953280849 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.516 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.516 [conn1169] end connection 165.225.128.186:41919 (7 connections now open)
m31000| Fri Feb 22 11:57:31.517 [initandlisten] connection accepted from 165.225.128.186:56389 #1171 (8 connections now open)
m31000| Fri Feb 22 11:57:31.611 [conn1] going to kill op: op: 23805.0
m31000| Fri Feb 22 11:57:31.612 [conn1] going to kill op: op: 23807.0
m31000| Fri Feb 22 11:57:31.612 [conn1] going to kill op: op: 23808.0
m31000| Fri Feb 22 11:57:31.615 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.615 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|50 } } cursorid:421677367459283 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.615 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.618 [conn1170] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.618 [conn1170] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|50 } } cursorid:421724306178594 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.618 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.618 [conn1170] end connection 165.225.128.186:39948 (7 connections now open)
m31000| Fri Feb 22 11:57:31.618 [initandlisten] connection accepted from 165.225.128.186:54430 #1172 (8 connections now open)
m31000| Fri Feb 22 11:57:31.619 [conn1171] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.619 [conn1171] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|50 } } cursorid:421728595227256 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.619 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.619 [conn1171] end connection 165.225.128.186:56389 (7 connections now open)
m31000| Fri Feb 22 11:57:31.619 [initandlisten] connection accepted from 165.225.128.186:48524 #1173 (8 connections now open)
m31000| Fri Feb 22 11:57:31.712 [conn1] going to kill op: op: 23848.0
m31000| Fri Feb 22 11:57:31.713 [conn1] going to kill op: op: 23846.0
m31000| Fri Feb 22 11:57:31.713 [conn1] going to kill op: op: 23849.0
m31000| Fri Feb 22 11:57:31.720 [conn1172] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.720 [conn1172] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|60 } } cursorid:422162055662610 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.720 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.720 [conn1172] end connection 165.225.128.186:54430 (7 connections now open)
m31000| Fri Feb 22 11:57:31.720 [initandlisten] connection accepted from 165.225.128.186:43802 #1174 (8 connections now open)
m31000| Fri Feb 22 11:57:31.721 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.721 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|70 } } cursorid:422552098653472 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.721 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.721 [conn1173] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.721 [conn1173] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|60 } } cursorid:422166332875817 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:31.721 [conn1173] ClientCursor::find(): cursor not found in map '422166332875817' (ok after a drop)
m31001| Fri Feb 22 11:57:31.721 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.721 [conn1173] end connection 165.225.128.186:48524 (7 connections now open)
m31000| Fri Feb 22 11:57:31.722 [initandlisten] connection accepted from 165.225.128.186:35646 #1175 (8 connections now open)
m31000| Fri Feb 22 11:57:31.813 [conn1] going to kill op: op: 23885.0
m31000| Fri Feb 22 11:57:31.813 [conn1175] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.813 [conn1175] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|70 } } cursorid:422603480184312 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:31.813 [conn1] going to kill op: op: 23887.0
m31001| Fri Feb 22 11:57:31.814 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.814 [conn1175] end connection 165.225.128.186:35646 (7 connections now open)
m31000| Fri Feb 22 11:57:31.814 [initandlisten] connection accepted from 165.225.128.186:34680 #1176 (8 connections now open)
m31000| Fri Feb 22 11:57:31.822 [conn1174] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.822 [conn1174] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|70 } } cursorid:422600363955807 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.822 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.822 [conn1174] end connection 165.225.128.186:43802 (7 connections now open)
m31000| Fri Feb 22 11:57:31.823 [initandlisten] connection accepted from 165.225.128.186:63963 #1177 (8 connections now open)
m31002| Fri Feb 22 11:57:31.838 [conn7] end connection 165.225.128.186:63700 (2 connections now open)
m31002| Fri Feb 22 11:57:31.838 [initandlisten] connection accepted from 165.225.128.186:57914 #9 (3 connections now open)
m31000| Fri Feb 22 11:57:31.914 [conn1] going to kill op: op: 23923.0
m31000| Fri Feb 22 11:57:31.914 [conn1] going to kill op: op: 23922.0
m31000| Fri Feb 22 11:57:31.914 [conn1177] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.914 [conn1177] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|80 } } cursorid:422999134382270 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:31.915 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.915 [conn1177] end connection 165.225.128.186:63963 (7 connections now open)
m31000| Fri Feb 22 11:57:31.915 [initandlisten] connection accepted from 165.225.128.186:64209 #1178 (8 connections now open)
m31000| Fri Feb 22 11:57:31.916 [conn1176] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:31.916 [conn1176] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|79 } } cursorid:422994930562696 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:31.916 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:31.916 [conn1176] end connection 165.225.128.186:34680 (7 connections now open)
m31000| Fri Feb 22 11:57:31.916 [initandlisten] connection accepted from 165.225.128.186:33390 #1179 (8 connections now open)
m31000| Fri Feb 22 11:57:32.015 [conn1] going to kill op: op: 23961.0
m31000| Fri Feb 22 11:57:32.015 [conn1] going to kill op: op: 23960.0
m31000| Fri Feb 22 11:57:32.017 [conn1178] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.017 [conn1178] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|89 } } cursorid:423390217232318 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:32.017 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.017 [conn1178] end connection 165.225.128.186:64209 (7 connections now open)
m31000| Fri Feb 22 11:57:32.018 [initandlisten] connection accepted from 165.225.128.186:53442 #1180 (8 connections now open)
m31000| Fri Feb 22 11:57:32.018 [conn1179] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.018 [conn1179] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534251000|89 } } cursorid:423395201002830 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:32.018 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.018 [conn1179] end connection 165.225.128.186:33390 (7 connections now open)
m31000| Fri Feb 22 11:57:32.019 [initandlisten] connection accepted from 165.225.128.186:47661 #1181 (8 connections now open)
m31002| Fri Feb 22 11:57:32.053 [conn8] end connection 165.225.128.186:55951 (2 connections now open)
m31002| Fri Feb 22 11:57:32.053 [initandlisten] connection accepted from 165.225.128.186:53592 #10 (3 connections now open)
m31000| Fri Feb 22 11:57:32.115 [conn1] going to kill op: op: 24000.0
m31000| Fri Feb 22 11:57:32.116 [conn1] going to kill op: op: 23999.0
m31000| Fri Feb 22 11:57:32.120 [conn1180] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.120 [conn1180] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|1 } } cursorid:423829350174986 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:32.120 [conn1180] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:32.120 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.120 [conn1180] end connection 165.225.128.186:53442 (7 connections now open)
m31000| Fri Feb 22 11:57:32.120 [initandlisten] connection accepted from 165.225.128.186:38617 #1182 (8 connections now open)
m31000| Fri Feb 22 11:57:32.121 [conn1181] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.121 [conn1181] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|1 } } cursorid:423831798216177 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:32.121 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.121 [conn1181] end connection 165.225.128.186:47661 (7 connections now open)
m31000| Fri Feb 22 11:57:32.121 [initandlisten] connection accepted from 165.225.128.186:43662 #1183 (8 connections now open)
m31000| Fri Feb 22 11:57:32.216 [conn1] going to kill op: op: 24040.0
m31000| Fri Feb 22 11:57:32.216 [conn1] going to kill op: op: 24039.0
m31000| Fri Feb 22 11:57:32.222 [conn1182] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.222 [conn1182] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|11 } } cursorid:424265746526797 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:32.222 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.222 [conn1182] end connection 165.225.128.186:38617 (7 connections now open)
m31000| Fri Feb 22 11:57:32.222 [initandlisten] connection accepted from 165.225.128.186:40827 #1184 (8 connections now open)
m31000| Fri Feb 22 11:57:32.223 [conn1183] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.223 [conn1183] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|11 } } cursorid:424270933776133 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:37 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:32.223 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.223 [conn1183] end connection 165.225.128.186:43662 (7 connections now open)
m31000| Fri Feb 22 11:57:32.223 [initandlisten] connection accepted from 165.225.128.186:59787 #1185 (8 connections now open)
m31000| Fri Feb 22 11:57:32.317 [conn1] going to kill op: op: 24080.0
m31000| Fri Feb 22 11:57:32.317 [conn1] going to kill op: op: 24078.0
m31000| Fri Feb 22 11:57:32.325 [conn1184] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.325 [conn1184] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|21 } } cursorid:424704341566088 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:32.325 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:32.325 [conn1184] end connection 165.225.128.186:40827 (7 connections now open)
m31000| Fri Feb 22 11:57:32.325 [initandlisten] connection accepted from 165.225.128.186:59432 #1186 (8 connections now open)
m31000| Fri Feb 22 11:57:32.326 [conn1185] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:32.326 [conn1185] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|21 } } cursorid:424709678788533 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:32.326 [rsBackgroundSync] repl: old
cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.326 [conn1185] end connection 165.225.128.186:59787 (7 connections now open) m31000| Fri Feb 22 11:57:32.326 [initandlisten] connection accepted from 165.225.128.186:57960 #1187 (8 connections now open) m31000| Fri Feb 22 11:57:32.418 [conn1] going to kill op: op: 24117.0 m31000| Fri Feb 22 11:57:32.418 [conn1] going to kill op: op: 24115.0 m31000| Fri Feb 22 11:57:32.418 [conn1187] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.418 [conn1187] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|32 } } cursorid:425147148169652 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:32.418 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.418 [conn1187] end connection 165.225.128.186:57960 (7 connections now open) m31000| Fri Feb 22 11:57:32.419 [initandlisten] connection accepted from 165.225.128.186:46153 #1188 (8 connections now open) m31000| Fri Feb 22 11:57:32.427 [conn1186] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.427 [conn1186] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|31 } } cursorid:425143148328427 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:32.427 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.428 [conn1186] end connection 165.225.128.186:59432 (7 connections now open) m31000| Fri Feb 22 11:57:32.428 [initandlisten] connection accepted from 165.225.128.186:52611 #1189 (8 connections now open) m31000| Fri Feb 22 11:57:32.519 [conn1] going to kill op: op: 24152.0 m31000| Fri Feb 22 11:57:32.519 [conn1] going to kill op: op: 24153.0 m31000| Fri Feb 22 11:57:32.520 [conn1189] { $err: "operation was 
interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.520 [conn1189] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|42 } } cursorid:425541351060910 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:32.520 [conn1189] ClientCursor::find(): cursor not found in map '425541351060910' (ok after a drop) m31002| Fri Feb 22 11:57:32.520 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.520 [conn1189] end connection 165.225.128.186:52611 (7 connections now open) m31000| Fri Feb 22 11:57:32.521 [initandlisten] connection accepted from 165.225.128.186:38069 #1190 (8 connections now open) m31000| Fri Feb 22 11:57:32.525 [conn1188] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.525 [conn1188] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|41 } } cursorid:425537066230912 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:32.525 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.525 [conn1188] end connection 165.225.128.186:46153 (7 connections now open) m31000| Fri Feb 22 11:57:32.526 [initandlisten] connection accepted from 165.225.128.186:50238 #1191 (8 connections now open) m31000| Fri Feb 22 11:57:32.619 [conn1] going to kill op: op: 24193.0 m31000| Fri Feb 22 11:57:32.620 [conn1] going to kill op: op: 24190.0 m31000| Fri Feb 22 11:57:32.620 [conn1] going to kill op: op: 24192.0 m31000| Fri Feb 22 11:57:32.622 [conn1190] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.622 [conn1190] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|51 } } cursorid:425933283748990 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms 
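Every interrupted getmore above is logged in the same machine-parseable shape: a `[connNNNN]` tag, the namespace (`local.oplog.rs`), a `cursorid`, and the error `code`. When triaging a long run like this one, a small regex helper (the names below are my own, not part of any MongoDB tooling) can pull those fields out so interruptions can be counted or grouped per connection:

```python
import re

# Matches the interrupted-getmore lines in this log, e.g.:
# m31000| Fri Feb 22 11:57:32.622 [conn1190] getmore local.oplog.rs query:
#   { ts: { $gte: Timestamp 1361534252000|51 } } cursorid:425933283748990 ...
#   exception: operation was interrupted code:11601 ...
GETMORE_ERR = re.compile(
    r"\[(?P<conn>conn\d+)\] getmore (?P<ns>\S+) .*"
    r"cursorid:(?P<cursorid>\d+) .*code:(?P<code>\d+)"
)

def parse_interrupted_getmore(line):
    """Return (conn, namespace, cursorid, code), or None if no match."""
    m = GETMORE_ERR.search(line)
    if m is None:
        return None
    return m["conn"], m["ns"], int(m["cursorid"]), int(m["code"])

sample = ('m31000| Fri Feb 22 11:57:32.622 [conn1190] getmore local.oplog.rs '
          'query: { ts: { $gte: Timestamp 1361534252000|51 } } '
          'cursorid:425933283748990 ntoreturn:0 keyUpdates:0 '
          'exception: operation was interrupted code:11601 '
          'locks(micros) r:58 nreturned:0 reslen:20 10ms')
print(parse_interrupted_getmore(sample))
# → ('conn1190', 'local.oplog.rs', 425933283748990, 11601)
```

Applied across the log, this confirms that every single interruption is code 11601 on `local.oplog.rs` — i.e. only the secondaries' oplog-tailing getmores are being killed, which is the behaviour the test exercises.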
m31002| Fri Feb 22 11:57:32.622 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.623 [conn1190] end connection 165.225.128.186:38069 (7 connections now open) m31000| Fri Feb 22 11:57:32.623 [initandlisten] connection accepted from 165.225.128.186:56086 #1192 (8 connections now open) m31000| Fri Feb 22 11:57:32.627 [conn1191] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.627 [conn1191] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|51 } } cursorid:425936632061782 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:32.627 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.627 [conn1191] end connection 165.225.128.186:50238 (7 connections now open) m31000| Fri Feb 22 11:57:32.628 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.628 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|60 } } cursorid:426280334423221 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:32.628 [initandlisten] connection accepted from 165.225.128.186:53129 #1193 (8 connections now open) m31001| Fri Feb 22 11:57:32.628 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.720 [conn1] going to kill op: op: 24231.0 m31000| Fri Feb 22 11:57:32.721 [conn1] going to kill op: op: 24232.0 m31000| Fri Feb 22 11:57:32.725 [conn1192] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.725 [conn1192] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|61 } } cursorid:426327679476777 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:32.725 
[rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.725 [conn1192] end connection 165.225.128.186:56086 (7 connections now open) m31000| Fri Feb 22 11:57:32.726 [initandlisten] connection accepted from 165.225.128.186:36379 #1194 (8 connections now open) m31000| Fri Feb 22 11:57:32.730 [conn1193] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.730 [conn1193] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|61 } } cursorid:426333022166592 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:32.730 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.730 [conn1193] end connection 165.225.128.186:53129 (7 connections now open) m31000| Fri Feb 22 11:57:32.730 [initandlisten] connection accepted from 165.225.128.186:36361 #1195 (8 connections now open) m31000| Fri Feb 22 11:57:32.821 [conn1] going to kill op: op: 24281.0 m31000| Fri Feb 22 11:57:32.822 [conn1] going to kill op: op: 24280.0 m31000| Fri Feb 22 11:57:32.822 [conn1] going to kill op: op: 24279.0 m31000| Fri Feb 22 11:57:32.822 [conn1195] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.822 [conn1195] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|71 } } cursorid:426726892371107 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:32.822 [conn1195] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:32.823 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.823 [conn1195] end connection 165.225.128.186:36361 (7 connections now open) m31000| Fri Feb 22 11:57:32.823 [initandlisten] connection accepted from 165.225.128.186:48371 #1196 (8 connections now open) 
m31000| Fri Feb 22 11:57:32.828 [conn1194] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.828 [conn1194] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|71 } } cursorid:426723411265859 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:32.828 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.828 [conn1194] end connection 165.225.128.186:36379 (7 connections now open) m31000| Fri Feb 22 11:57:32.829 [initandlisten] connection accepted from 165.225.128.186:47094 #1197 (8 connections now open) m31000| Fri Feb 22 11:57:32.829 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.829 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|71 } } cursorid:426718271943156 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:32.829 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.923 [conn1] going to kill op: op: 24319.0 m31000| Fri Feb 22 11:57:32.923 [conn1] going to kill op: op: 24320.0 m31000| Fri Feb 22 11:57:32.926 [conn1196] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.926 [conn1196] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|81 } } cursorid:427117661864098 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:32.926 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.926 [conn1196] end connection 165.225.128.186:48371 (7 connections now open) m31000| Fri Feb 22 11:57:32.926 [initandlisten] connection accepted from 165.225.128.186:57181 #1198 (8 connections now open) m31000| Fri Feb 22 11:57:32.931 
[conn1197] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:32.931 [conn1197] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|81 } } cursorid:427123241935521 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:32.931 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:32.931 [conn1197] end connection 165.225.128.186:47094 (7 connections now open) m31000| Fri Feb 22 11:57:32.932 [initandlisten] connection accepted from 165.225.128.186:40410 #1199 (8 connections now open) m31000| Fri Feb 22 11:57:33.024 [conn1] going to kill op: op: 24355.0 m31000| Fri Feb 22 11:57:33.024 [conn1199] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.024 [conn1199] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|91 } } cursorid:427517419694791 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:33.024 [conn1] going to kill op: op: 24357.0 m31002| Fri Feb 22 11:57:33.024 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.024 [conn1199] end connection 165.225.128.186:40410 (7 connections now open) m31000| Fri Feb 22 11:57:33.025 [initandlisten] connection accepted from 165.225.128.186:56580 #1200 (8 connections now open) m31000| Fri Feb 22 11:57:33.029 [conn1198] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.029 [conn1198] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534252000|91 } } cursorid:427513695623439 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:33.029 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.029 [conn1198] end connection 
165.225.128.186:57181 (7 connections now open) m31000| Fri Feb 22 11:57:33.029 [initandlisten] connection accepted from 165.225.128.186:44737 #1201 (8 connections now open) m31000| Fri Feb 22 11:57:33.125 [conn1] going to kill op: op: 24395.0 m31000| Fri Feb 22 11:57:33.125 [conn1] going to kill op: op: 24396.0 m31000| Fri Feb 22 11:57:33.127 [conn1200] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.127 [conn1200] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|1 } } cursorid:427909042805857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:33.128 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.128 [conn1200] end connection 165.225.128.186:56580 (7 connections now open) m31000| Fri Feb 22 11:57:33.128 [initandlisten] connection accepted from 165.225.128.186:65415 #1202 (8 connections now open) m31000| Fri Feb 22 11:57:33.131 [conn1201] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.131 [conn1201] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|2 } } cursorid:427913788816479 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:33.131 [conn1201] ClientCursor::find(): cursor not found in map '427913788816479' (ok after a drop) m31001| Fri Feb 22 11:57:33.132 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.132 [conn1201] end connection 165.225.128.186:44737 (7 connections now open) m31000| Fri Feb 22 11:57:33.132 [initandlisten] connection accepted from 165.225.128.186:57562 #1203 (8 connections now open) m31000| Fri Feb 22 11:57:33.226 [conn1] going to kill op: op: 24435.0 m31000| Fri Feb 22 11:57:33.226 [conn1] going to kill op: op: 24436.0 m31000| Fri Feb 22 11:57:33.230 [conn1202] { 
$err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.230 [conn1202] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|12 } } cursorid:428304818698919 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:33.230 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.231 [conn1202] end connection 165.225.128.186:65415 (7 connections now open) m31000| Fri Feb 22 11:57:33.231 [initandlisten] connection accepted from 165.225.128.186:38984 #1204 (8 connections now open) m31000| Fri Feb 22 11:57:33.234 [conn1203] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.234 [conn1203] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|12 } } cursorid:428308451622340 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:33.234 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.234 [conn1203] end connection 165.225.128.186:57562 (7 connections now open) m31000| Fri Feb 22 11:57:33.235 [initandlisten] connection accepted from 165.225.128.186:41770 #1205 (8 connections now open) m31000| Fri Feb 22 11:57:33.326 [conn1] going to kill op: op: 24471.0 m31000| Fri Feb 22 11:57:33.327 [conn1] going to kill op: op: 24473.0 m31000| Fri Feb 22 11:57:33.333 [conn1204] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.333 [conn1204] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|22 } } cursorid:428698100619730 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:33.333 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.333 [conn1204] end connection 
165.225.128.186:38984 (7 connections now open) m31000| Fri Feb 22 11:57:33.334 [initandlisten] connection accepted from 165.225.128.186:38806 #1206 (8 connections now open) m31000| Fri Feb 22 11:57:33.427 [conn1] going to kill op: op: 24505.0 m31000| Fri Feb 22 11:57:33.428 [conn1] going to kill op: op: 24507.0 m31000| Fri Feb 22 11:57:33.428 [conn1205] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.428 [conn1205] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|22 } } cursorid:428702947093188 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:33.428 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.429 [conn1205] end connection 165.225.128.186:41770 (7 connections now open) m31000| Fri Feb 22 11:57:33.429 [initandlisten] connection accepted from 165.225.128.186:46175 #1207 (8 connections now open) m31000| Fri Feb 22 11:57:33.436 [conn1206] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.436 [conn1206] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|32 } } cursorid:429094360326778 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:33.436 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.436 [conn1206] end connection 165.225.128.186:38806 (7 connections now open) m31000| Fri Feb 22 11:57:33.436 [initandlisten] connection accepted from 165.225.128.186:40249 #1208 (8 connections now open) m31000| Fri Feb 22 11:57:33.528 [conn1] going to kill op: op: 24543.0 m31000| Fri Feb 22 11:57:33.528 [conn1] going to kill op: op: 24542.0 m31000| Fri Feb 22 11:57:33.531 [conn1207] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.531 [conn1207] getmore local.oplog.rs query: { ts: 
{ $gte: Timestamp 1361534253000|41 } } cursorid:429485547804258 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:33.531 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.531 [conn1207] end connection 165.225.128.186:46175 (7 connections now open) m31000| Fri Feb 22 11:57:33.532 [initandlisten] connection accepted from 165.225.128.186:36947 #1209 (8 connections now open) m31000| Fri Feb 22 11:57:33.630 [conn1] going to kill op: op: 24579.0 m31000| Fri Feb 22 11:57:33.630 [conn1] going to kill op: op: 24577.0 m31000| Fri Feb 22 11:57:33.634 [conn1209] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.634 [conn1209] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|51 } } cursorid:429879503890357 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:33.634 [conn1209] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:33.634 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.634 [conn1209] end connection 165.225.128.186:36947 (7 connections now open) m31000| Fri Feb 22 11:57:33.635 [initandlisten] connection accepted from 165.225.128.186:33442 #1210 (8 connections now open) m31000| Fri Feb 22 11:57:33.730 [conn1] going to kill op: op: 24621.0 m31000| Fri Feb 22 11:57:33.731 [conn1] going to kill op: op: 24624.0 m31000| Fri Feb 22 11:57:33.731 [conn1] going to kill op: op: 24622.0 m31000| Fri Feb 22 11:57:33.731 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.731 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|62 } } cursorid:430265643875616 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 
reslen:20 10ms m31001| Fri Feb 22 11:57:33.731 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.731 [conn1208] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.731 [conn1208] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|42 } } cursorid:429489117570359 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:33.731 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.732 [conn1208] end connection 165.225.128.186:40249 (7 connections now open) m31000| Fri Feb 22 11:57:33.732 [initandlisten] connection accepted from 165.225.128.186:53627 #1211 (8 connections now open) m31000| Fri Feb 22 11:57:33.737 [conn1210] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.737 [conn1210] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|62 } } cursorid:430313695599331 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:33.738 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.738 [conn1210] end connection 165.225.128.186:33442 (7 connections now open) m31000| Fri Feb 22 11:57:33.738 [initandlisten] connection accepted from 165.225.128.186:48634 #1212 (8 connections now open) m31000| Fri Feb 22 11:57:33.831 [conn1] going to kill op: op: 24664.0 m31000| Fri Feb 22 11:57:33.832 [conn1] going to kill op: op: 24661.0 m31000| Fri Feb 22 11:57:33.834 [conn1211] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.834 [conn1211] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|71 } } cursorid:430705058365511 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 
22 11:57:33.834 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.834 [conn1211] end connection 165.225.128.186:53627 (7 connections now open) m31000| Fri Feb 22 11:57:33.834 [initandlisten] connection accepted from 165.225.128.186:58743 #1213 (8 connections now open) m31000| Fri Feb 22 11:57:33.840 [conn1212] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.840 [conn1212] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|72 } } cursorid:430709991635833 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:33.840 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.840 [conn1212] end connection 165.225.128.186:48634 (7 connections now open) m31000| Fri Feb 22 11:57:33.841 [initandlisten] connection accepted from 165.225.128.186:42625 #1214 (8 connections now open) m31000| Fri Feb 22 11:57:33.932 [conn1] going to kill op: op: 24710.0 m31000| Fri Feb 22 11:57:33.932 [conn1] going to kill op: op: 24709.0 m31000| Fri Feb 22 11:57:33.933 [conn1] going to kill op: op: 24708.0 m31000| Fri Feb 22 11:57:33.936 [conn1213] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.936 [conn1213] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|81 } } cursorid:431100727661464 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:33.936 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:33.937 [conn1213] end connection 165.225.128.186:58743 (7 connections now open) m31000| Fri Feb 22 11:57:33.937 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:33.937 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|81 } } 
cursorid:431095119767890 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:33.937 [initandlisten] connection accepted from 165.225.128.186:35207 #1215 (8 connections now open) m31002| Fri Feb 22 11:57:33.938 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:34.033 [conn1] going to kill op: op: 24747.0 m31000| Fri Feb 22 11:57:34.033 [conn1] going to kill op: op: 24745.0 m31000| Fri Feb 22 11:57:34.035 [conn1214] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:34.035 [conn1214] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|82 } } cursorid:431105161217639 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:34.035 [conn1214] ClientCursor::find(): cursor not found in map '431105161217639' (ok after a drop) m31001| Fri Feb 22 11:57:34.035 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:34.035 [conn1214] end connection 165.225.128.186:42625 (7 connections now open) m31000| Fri Feb 22 11:57:34.035 [initandlisten] connection accepted from 165.225.128.186:35836 #1216 (8 connections now open) m31000| Fri Feb 22 11:57:34.040 [conn1215] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:34.040 [conn1215] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534253000|91 } } cursorid:431494722634649 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:34.040 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:34.040 [conn1215] end connection 165.225.128.186:35207 (7 connections now open) m31000| Fri Feb 22 11:57:34.040 [initandlisten] connection accepted from 165.225.128.186:59406 #1217 (8 connections 
now open) m31000| Fri Feb 22 11:57:34.134 [conn1] going to kill op: op: 24786.0 m31000| Fri Feb 22 11:57:34.134 [conn1] going to kill op: op: 24788.0 m31000| Fri Feb 22 11:57:34.138 [conn1216] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:34.138 [conn1216] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|2 } } cursorid:431886084016593 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:34.138 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:34.138 [conn1216] end connection 165.225.128.186:35836 (7 connections now open) m31000| Fri Feb 22 11:57:34.138 [initandlisten] connection accepted from 165.225.128.186:61024 #1218 (8 connections now open) m31000| Fri Feb 22 11:57:34.142 [conn1217] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:34.142 [conn1217] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|3 } } cursorid:431889350448159 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:34.143 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:34.143 [conn1217] end connection 165.225.128.186:59406 (7 connections now open) m31000| Fri Feb 22 11:57:34.143 [initandlisten] connection accepted from 165.225.128.186:43036 #1219 (8 connections now open) m31000| Fri Feb 22 11:57:34.235 [conn1] going to kill op: op: 24824.0 m31000| Fri Feb 22 11:57:34.235 [conn1] going to kill op: op: 24823.0 m31000| Fri Feb 22 11:57:34.241 [conn1218] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:34.241 [conn1218] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|12 } } cursorid:432280273220235 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 
nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:34.241 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.241 [conn1218] end connection 165.225.128.186:61024 (7 connections now open)
m31000| Fri Feb 22 11:57:34.242 [initandlisten] connection accepted from 165.225.128.186:34795 #1220 (8 connections now open)
m31000| Fri Feb 22 11:57:34.336 [conn1] going to kill op: op: 24861.0
m31000| Fri Feb 22 11:57:34.336 [conn1] going to kill op: op: 24859.0
m31000| Fri Feb 22 11:57:34.336 [conn1219] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.336 [conn1219] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|13 } } cursorid:432285098892847 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:34.336 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.337 [conn1219] end connection 165.225.128.186:43036 (7 connections now open)
m31000| Fri Feb 22 11:57:34.337 [initandlisten] connection accepted from 165.225.128.186:46560 #1221 (8 connections now open)
m31000| Fri Feb 22 11:57:34.344 [conn1220] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.344 [conn1220] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|22 } } cursorid:432676550398700 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:34.344 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.344 [conn1220] end connection 165.225.128.186:34795 (7 connections now open)
m31000| Fri Feb 22 11:57:34.345 [initandlisten] connection accepted from 165.225.128.186:48474 #1222 (8 connections now open)
m31000| Fri Feb 22 11:57:34.436 [conn1] going to kill op: op: 24897.0
m31000| Fri Feb 22 11:57:34.437 [conn1] going to kill op: op: 24896.0
m31000| Fri Feb 22 11:57:34.439 [conn1221] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.439 [conn1221] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|32 } } cursorid:433066990906077 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:34.439 [conn1221] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:34.439 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.439 [conn1221] end connection 165.225.128.186:46560 (7 connections now open)
m31000| Fri Feb 22 11:57:34.439 [initandlisten] connection accepted from 165.225.128.186:33943 #1223 (8 connections now open)
m31000| Fri Feb 22 11:57:34.537 [conn1] going to kill op: op: 24931.0
m31000| Fri Feb 22 11:57:34.537 [conn1] going to kill op: op: 24930.0
m31000| Fri Feb 22 11:57:34.538 [conn1222] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.538 [conn1222] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|33 } } cursorid:433070662537778 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:34.538 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.538 [conn1222] end connection 165.225.128.186:48474 (7 connections now open)
m31000| Fri Feb 22 11:57:34.539 [initandlisten] connection accepted from 165.225.128.186:54230 #1224 (8 connections now open)
m31000| Fri Feb 22 11:57:34.541 [conn1223] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.541 [conn1223] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|42 } } cursorid:433461086515849 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:34.541 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.542 [conn1223] end connection 165.225.128.186:33943 (7 connections now open)
m31000| Fri Feb 22 11:57:34.542 [initandlisten] connection accepted from 165.225.128.186:46978 #1225 (8 connections now open)
m31000| Fri Feb 22 11:57:34.638 [conn1] going to kill op: op: 24968.0
m31000| Fri Feb 22 11:57:34.638 [conn1] going to kill op: op: 24969.0
m31000| Fri Feb 22 11:57:34.641 [conn1224] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.641 [conn1224] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|52 } } cursorid:433852209969402 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:34.641 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.641 [conn1224] end connection 165.225.128.186:54230 (7 connections now open)
m31000| Fri Feb 22 11:57:34.641 [initandlisten] connection accepted from 165.225.128.186:55934 #1226 (8 connections now open)
m31000| Fri Feb 22 11:57:34.644 [conn1225] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.644 [conn1225] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|52 } } cursorid:433856258023831 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:34.644 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.644 [conn1225] end connection 165.225.128.186:46978 (7 connections now open)
m31000| Fri Feb 22 11:57:34.644 [initandlisten] connection accepted from 165.225.128.186:47677 #1227 (8 connections now open)
m31000| Fri Feb 22 11:57:34.739 [conn1] going to kill op: op: 25008.0
m31000| Fri Feb 22 11:57:34.739 [conn1] going to kill op: op: 25009.0
m31000| Fri Feb 22 11:57:34.739 [conn1] going to kill op: op: 25007.0
m31000| Fri Feb 22 11:57:34.744 [conn1226] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.744 [conn1226] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|62 } } cursorid:434247058477077 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:34.744 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.744 [conn1226] end connection 165.225.128.186:55934 (7 connections now open)
m31000| Fri Feb 22 11:57:34.744 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.744 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|71 } } cursorid:434595800773044 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:34.744 [initandlisten] connection accepted from 165.225.128.186:59622 #1228 (8 connections now open)
m31001| Fri Feb 22 11:57:34.745 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.746 [conn1227] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.746 [conn1227] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|62 } } cursorid:434253179673620 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:34.746 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.746 [conn1227] end connection 165.225.128.186:47677 (7 connections now open)
m31000| Fri Feb 22 11:57:34.747 [initandlisten] connection accepted from 165.225.128.186:57327 #1229 (8 connections now open)
m31000| Fri Feb 22 11:57:34.839 [conn1] going to kill op: op: 25048.0
m31000| Fri Feb 22 11:57:34.840 [conn1] going to kill op: op: 25047.0
m31000| Fri Feb 22 11:57:34.847 [conn1228] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.847 [conn1228] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|72 } } cursorid:434644040058397 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:34.847 [conn1228] ClientCursor::find(): cursor not found in map '434644040058397' (ok after a drop)
m31001| Fri Feb 22 11:57:34.847 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.847 [conn1228] end connection 165.225.128.186:59622 (7 connections now open)
m31000| Fri Feb 22 11:57:34.847 [initandlisten] connection accepted from 165.225.128.186:46005 #1230 (8 connections now open)
m31000| Fri Feb 22 11:57:34.849 [conn1229] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.849 [conn1229] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|72 } } cursorid:434647097356140 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:34.849 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.849 [conn1229] end connection 165.225.128.186:57327 (7 connections now open)
m31000| Fri Feb 22 11:57:34.849 [initandlisten] connection accepted from 165.225.128.186:63978 #1231 (8 connections now open)
m31000| Fri Feb 22 11:57:34.940 [conn1] going to kill op: op: 25083.0
m31000| Fri Feb 22 11:57:34.940 [conn1] going to kill op: op: 25086.0
m31000| Fri Feb 22 11:57:34.941 [conn1231] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.941 [conn1231] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|82 } } cursorid:435085051739938 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:34.941 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.941 [conn1231] end connection 165.225.128.186:63978 (7 connections now open)
m31000| Fri Feb 22 11:57:34.941 [initandlisten] connection accepted from 165.225.128.186:33309 #1232 (8 connections now open)
m31000| Fri Feb 22 11:57:34.949 [conn1230] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:34.949 [conn1230] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|82 } } cursorid:435082223639528 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:34.949 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:34.950 [conn1230] end connection 165.225.128.186:46005 (7 connections now open)
m31000| Fri Feb 22 11:57:34.950 [initandlisten] connection accepted from 165.225.128.186:56220 #1233 (8 connections now open)
m31000| Fri Feb 22 11:57:35.041 [conn1] going to kill op: op: 25134.0
m31000| Fri Feb 22 11:57:35.041 [conn1] going to kill op: op: 25131.0
m31000| Fri Feb 22 11:57:35.041 [conn1233] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.041 [conn1233] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|92 } } cursorid:435480678755320 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.041 [conn1] going to kill op: op: 25133.0
m31001| Fri Feb 22 11:57:35.042 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.042 [conn1233] end connection 165.225.128.186:56220 (7 connections now open)
m31000| Fri Feb 22 11:57:35.042 [initandlisten] connection accepted from 165.225.128.186:53563 #1234 (8 connections now open)
m31000| Fri Feb 22 11:57:35.043 [conn1232] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.043 [conn1232] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|91 } } cursorid:435477023303441 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:33 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:35.043 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.043 [conn1232] end connection 165.225.128.186:33309 (7 connections now open)
m31000| Fri Feb 22 11:57:35.043 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.043 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534254000|91 } } cursorid:435429737830726 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.044 [initandlisten] connection accepted from 165.225.128.186:34939 #1235 (8 connections now open)
m31002| Fri Feb 22 11:57:35.044 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.142 [conn1] going to kill op: op: 25175.0
m31000| Fri Feb 22 11:57:35.142 [conn1] going to kill op: op: 25176.0
m31000| Fri Feb 22 11:57:35.144 [conn1234] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.144 [conn1234] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|3 } } cursorid:435871950393273 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.145 [conn1234] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:35.145 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.145 [conn1234] end connection 165.225.128.186:53563 (7 connections now open)
m31000| Fri Feb 22 11:57:35.145 [initandlisten] connection accepted from 165.225.128.186:45532 #1236 (8 connections now open)
m31000| Fri Feb 22 11:57:35.146 [conn1235] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.146 [conn1235] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|4 } } cursorid:435875046355737 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:35.146 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.146 [conn1235] end connection 165.225.128.186:34939 (7 connections now open)
m31000| Fri Feb 22 11:57:35.146 [initandlisten] connection accepted from 165.225.128.186:58221 #1237 (8 connections now open)
m31000| Fri Feb 22 11:57:35.243 [conn1] going to kill op: op: 25213.0
m31000| Fri Feb 22 11:57:35.243 [conn1] going to kill op: op: 25214.0
m31000| Fri Feb 22 11:57:35.247 [conn1236] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.247 [conn1236] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|14 } } cursorid:436310507886954 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.247 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.247 [conn1236] end connection 165.225.128.186:45532 (7 connections now open)
m31000| Fri Feb 22 11:57:35.247 [initandlisten] connection accepted from 165.225.128.186:55451 #1238 (8 connections now open)
m31000| Fri Feb 22 11:57:35.248 [conn1237] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.248 [conn1237] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|14 } } cursorid:436313430245547 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:35.248 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.248 [conn1237] end connection 165.225.128.186:58221 (7 connections now open)
m31000| Fri Feb 22 11:57:35.249 [initandlisten] connection accepted from 165.225.128.186:33273 #1239 (8 connections now open)
m31000| Fri Feb 22 11:57:35.343 [conn1] going to kill op: op: 25251.0
m31000| Fri Feb 22 11:57:35.343 [conn1] going to kill op: op: 25252.0
m31000| Fri Feb 22 11:57:35.349 [conn1238] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.349 [conn1238] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|24 } } cursorid:436746897706266 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.350 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.350 [conn1238] end connection 165.225.128.186:55451 (7 connections now open)
m31000| Fri Feb 22 11:57:35.350 [initandlisten] connection accepted from 165.225.128.186:40810 #1240 (8 connections now open)
m31000| Fri Feb 22 11:57:35.350 [conn1239] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.350 [conn1239] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|24 } } cursorid:436752013635879 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:35.350 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.350 [conn1239] end connection 165.225.128.186:33273 (7 connections now open)
m31000| Fri Feb 22 11:57:35.351 [initandlisten] connection accepted from 165.225.128.186:33103 #1241 (8 connections now open)
m31000| Fri Feb 22 11:57:35.444 [conn1] going to kill op: op: 25289.0
m31000| Fri Feb 22 11:57:35.444 [conn1] going to kill op: op: 25290.0
m31000| Fri Feb 22 11:57:35.452 [conn1240] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.452 [conn1240] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|34 } } cursorid:437186733942499 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.452 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.452 [conn1240] end connection 165.225.128.186:40810 (7 connections now open)
m31000| Fri Feb 22 11:57:35.453 [initandlisten] connection accepted from 165.225.128.186:57800 #1242 (8 connections now open)
m31002| Fri Feb 22 11:57:35.453 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.453 [conn1241] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.453 [conn1241] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|34 } } cursorid:437190380839184 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.453 [conn1241] end connection 165.225.128.186:33103 (7 connections now open)
m31000| Fri Feb 22 11:57:35.453 [initandlisten] connection accepted from 165.225.128.186:61717 #1243 (8 connections now open)
m31000| Fri Feb 22 11:57:35.545 [conn1] going to kill op: op: 25327.0
m31000| Fri Feb 22 11:57:35.545 [conn1] going to kill op: op: 25325.0
m31000| Fri Feb 22 11:57:35.545 [conn1243] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.545 [conn1243] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|44 } } cursorid:437628790767978 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.545 [conn1243] ClientCursor::find(): cursor not found in map '437628790767978' (ok after a drop)
m31002| Fri Feb 22 11:57:35.545 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.545 [conn1243] end connection 165.225.128.186:61717 (7 connections now open)
m31000| Fri Feb 22 11:57:35.545 [initandlisten] connection accepted from 165.225.128.186:46988 #1244 (8 connections now open)
m31000| Fri Feb 22 11:57:35.554 [conn1242] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.554 [conn1242] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|44 } } cursorid:437623442928719 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.555 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.555 [conn1242] end connection 165.225.128.186:57800 (7 connections now open)
m31000| Fri Feb 22 11:57:35.555 [initandlisten] connection accepted from 165.225.128.186:59944 #1245 (8 connections now open)
m31000| Fri Feb 22 11:57:35.645 [conn1] going to kill op: op: 25362.0
m31000| Fri Feb 22 11:57:35.646 [conn1] going to kill op: op: 25363.0
m31000| Fri Feb 22 11:57:35.647 [conn1245] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.647 [conn1245] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|54 } } cursorid:438024175808140 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.647 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.647 [conn1245] end connection 165.225.128.186:59944 (7 connections now open)
m31000| Fri Feb 22 11:57:35.647 [initandlisten] connection accepted from 165.225.128.186:44558 #1246 (8 connections now open)
m31000| Fri Feb 22 11:57:35.647 [conn1244] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.648 [conn1244] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|53 } } cursorid:438019127828946 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:35.648 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.648 [conn1244] end connection 165.225.128.186:46988 (7 connections now open)
m31000| Fri Feb 22 11:57:35.648 [initandlisten] connection accepted from 165.225.128.186:41866 #1247 (8 connections now open)
m31000| Fri Feb 22 11:57:35.746 [conn1] going to kill op: op: 25401.0
m31000| Fri Feb 22 11:57:35.746 [conn1] going to kill op: op: 25400.0
m31000| Fri Feb 22 11:57:35.749 [conn1246] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.749 [conn1246] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|63 } } cursorid:438413991893743 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.750 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.750 [conn1246] end connection 165.225.128.186:44558 (7 connections now open)
m31000| Fri Feb 22 11:57:35.750 [conn1247] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.750 [conn1247] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|63 } } cursorid:438419485976735 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.750 [initandlisten] connection accepted from 165.225.128.186:46846 #1248 (8 connections now open)
m31002| Fri Feb 22 11:57:35.750 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.750 [conn1247] end connection 165.225.128.186:41866 (7 connections now open)
m31000| Fri Feb 22 11:57:35.750 [initandlisten] connection accepted from 165.225.128.186:42990 #1249 (8 connections now open)
m31000| Fri Feb 22 11:57:35.847 [conn1] going to kill op: op: 25450.0
m31000| Fri Feb 22 11:57:35.847 [conn1] going to kill op: op: 25448.0
m31000| Fri Feb 22 11:57:35.847 [conn1] going to kill op: op: 25449.0
m31000| Fri Feb 22 11:57:35.852 [conn1248] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.852 [conn1248] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|73 } } cursorid:438851331543149 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.852 [conn1249] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.852 [conn1249] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|73 } } cursorid:438855994244718 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.852 [conn1249] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:35.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:35.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.852 [conn1248] end connection 165.225.128.186:46846 (7 connections now open)
m31000| Fri Feb 22 11:57:35.852 [conn1249] end connection 165.225.128.186:42990 (7 connections now open)
m31000| Fri Feb 22 11:57:35.853 [initandlisten] connection accepted from 165.225.128.186:34095 #1250 (7 connections now open)
m31000| Fri Feb 22 11:57:35.853 [initandlisten] connection accepted from 165.225.128.186:41550 #1251 (8 connections now open)
m31000| Fri Feb 22 11:57:35.853 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.853 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|73 } } cursorid:438804492781787 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:35.854 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.948 [conn1] going to kill op: op: 25492.0
m31000| Fri Feb 22 11:57:35.948 [conn1] going to kill op: op: 25491.0
m31000| Fri Feb 22 11:57:35.955 [conn1250] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.955 [conn1251] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:35.955 [conn1250] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|83 } } cursorid:439294677372796 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:35.956 [conn1251] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|83 } } cursorid:439295135007816 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:35.956 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:35.956 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:35.956 [conn1251] end connection 165.225.128.186:41550 (7 connections now open)
m31000| Fri Feb 22 11:57:35.956 [conn1250] end connection 165.225.128.186:34095 (7 connections now open)
m31000| Fri Feb 22 11:57:35.956 [initandlisten] connection accepted from 165.225.128.186:49263 #1252 (7 connections now open)
m31000| Fri Feb 22 11:57:35.956 [initandlisten] connection accepted from 165.225.128.186:33297 #1253 (8 connections now open)
m31000| Fri Feb 22 11:57:36.049 [conn1] going to kill op: op: 25532.0
m31000| Fri Feb 22 11:57:36.049 [conn1] going to kill op: op: 25529.0
m31000| Fri Feb 22 11:57:36.049 [conn1] going to kill op: op: 25531.0
m31000| Fri Feb 22 11:57:36.055 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.055 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|4 } } cursorid:440075959579575 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:36.055 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.058 [conn1253] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.058 [conn1253] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|94 } } cursorid:439732488693270 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:37 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:36.058 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.058 [conn1253] end connection 165.225.128.186:33297 (7 connections now open)
m31000| Fri Feb 22 11:57:36.059 [conn1252] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.059 [conn1252] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534255000|94 } } cursorid:439732353137714 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:36.059 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.059 [conn1252] end connection 165.225.128.186:49263 (6 connections now open)
m31000| Fri Feb 22 11:57:36.059 [initandlisten] connection accepted from 165.225.128.186:58668 #1254 (7 connections now open)
m31000| Fri Feb 22 11:57:36.059 [initandlisten] connection accepted from 165.225.128.186:54839 #1255 (8 connections now open)
m31000| Fri Feb 22 11:57:36.150 [conn1] going to kill op: op: 25570.0
m31000| Fri Feb 22 11:57:36.150 [conn1] going to kill op: op: 25571.0
m31000| Fri Feb 22 11:57:36.151 [conn1255] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.151 [conn1255] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|5 } } cursorid:440171172652728 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:36.151 [conn1254] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.151 [conn1254] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|5 } } cursorid:440170976823088 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:36.151 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:36.151 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.151 [conn1255] end connection 165.225.128.186:54839 (7 connections now open)
m31000| Fri Feb 22 11:57:36.151 [conn1254] ClientCursor::find(): cursor not found in map '440170976823088' (ok after a drop)
m31000| Fri Feb 22 11:57:36.151 [conn1254] end connection 165.225.128.186:58668 (6 connections now open)
m31000| Fri Feb 22 11:57:36.151 [initandlisten] connection accepted from 165.225.128.186:39469 #1256 (7 connections now open)
m31000| Fri Feb 22 11:57:36.151 [initandlisten] connection accepted from 165.225.128.186:41413 #1257 (8 connections now open)
m31000| Fri Feb 22 11:57:36.250 [conn1] going to kill op: op: 25609.0
m31000| Fri Feb 22 11:57:36.251 [conn1] going to kill op: op: 25608.0
m31000| Fri Feb 22 11:57:36.253 [conn1256] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.253 [conn1256] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|14 } } cursorid:440566973728945 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:36.253 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.253 [conn1256] end connection 165.225.128.186:39469 (7 connections now open)
m31000| Fri Feb 22 11:57:36.254 [conn1257] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.254 [conn1257] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|14 } } cursorid:440566545632949 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:36.254 [initandlisten] connection accepted from 165.225.128.186:64525 #1258 (8 connections now open)
m31002| Fri Feb 22 11:57:36.254 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.254 [conn1257] end connection 165.225.128.186:41413 (7 connections now open)
m31000| Fri Feb 22 11:57:36.254 [initandlisten] connection accepted from 165.225.128.186:35267 #1259 (8 connections now open)
m31000| Fri Feb 22 11:57:36.351 [conn1] going to kill op: op: 25648.0
m31000| Fri Feb 22 11:57:36.351 [conn1] going to kill op: op: 25647.0
m31000| Fri Feb 22 11:57:36.355 [conn1258] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.355 [conn1258] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|24 } } cursorid:440999042260579 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:36.356 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.356 [conn1258] end connection 165.225.128.186:64525 (7 connections now open)
m31000| Fri Feb 22 11:57:36.356 [initandlisten] connection accepted from 165.225.128.186:52106 #1260 (8 connections now open)
m31000| Fri Feb 22 11:57:36.356 [conn1259] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.356 [conn1259] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|24 } } cursorid:441004697370289 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:36.356 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.356 [conn1259] end connection 165.225.128.186:35267 (7 connections now open)
m31000| Fri Feb 22 11:57:36.357 [initandlisten] connection accepted from 165.225.128.186:47784 #1261 (8 connections now open)
m31000| Fri Feb 22 11:57:36.452 [conn1] going to kill op: op: 25685.0
m31000| Fri Feb 22 11:57:36.452 [conn1] going to kill op: op: 25686.0
m31000| Fri Feb 22 11:57:36.458 [conn1260] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.458 [conn1260] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|34 } } cursorid:441438626621844 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:36.458 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.458 [conn1260] end connection 165.225.128.186:52106 (7 connections now open)
m31000| Fri Feb 22 11:57:36.458 [initandlisten] connection accepted from 165.225.128.186:60008 #1262 (8 connections now open)
m31000| Fri Feb 22 11:57:36.459 [conn1261] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.459 [conn1261] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|34 } } cursorid:441442907104537 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:36.459 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.459 [conn1261] end connection 165.225.128.186:47784 (7 connections now open)
m31000| Fri Feb 22 11:57:36.459 [initandlisten] connection accepted from 165.225.128.186:44752 #1263 (8 connections now open)
m31000| Fri Feb 22 11:57:36.553 [conn1] going to kill op: op: 25724.0
m31000| Fri Feb 22 11:57:36.553 [conn1] going to kill op: op: 25723.0
m31000| Fri Feb 22 11:57:36.560 [conn1262] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.560 [conn1262] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|44 } } cursorid:441876226887729 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:36.561 [conn1262] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:36.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.561 [conn1262] end connection 165.225.128.186:60008 (7 connections now open)
m31000| Fri Feb 22 11:57:36.561 [conn1263] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.561 [conn1263] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|44 } } cursorid:441880119045021 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:36.561 [initandlisten] connection accepted from 165.225.128.186:51854 #1264 (8 connections now open)
m31002| Fri Feb 22 11:57:36.561 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.561 [conn1263] end connection 165.225.128.186:44752 (7 connections now open)
m31000| Fri Feb 22 11:57:36.561 [initandlisten] connection accepted from 165.225.128.186:46109 #1265 (8 connections now open)
m31000| Fri Feb 22 11:57:36.654 [conn1] going to kill op: op: 25761.0
m31000| Fri Feb 22 11:57:36.654 [conn1] going to kill op: op: 25762.0
m31000| Fri Feb 22 11:57:36.663 [conn1265] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.663 [conn1265] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|54 } } cursorid:442317846507396 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:36.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.664 [conn1264] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.664 [conn1264] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|54 } } cursorid:442313756734649 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:36.664 [conn1265] end connection 165.225.128.186:46109 (7 connections now open)
m31001| Fri Feb 22 11:57:36.664 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:36.664 [conn1264] end connection 165.225.128.186:51854 (6 connections now open)
m31000| Fri Feb 22 11:57:36.664 [initandlisten] connection accepted from 165.225.128.186:40419 #1266 (7 connections now open)
m31000| Fri Feb 22 11:57:36.664 [initandlisten] connection accepted from 165.225.128.186:59235 #1267 (8 connections now open)
m31000| Fri Feb 22 11:57:36.754 [conn1] going to kill op: op: 25797.0
m31000| Fri Feb 22 11:57:36.755 [conn1] going to kill op: op: 25798.0
m31000| Fri Feb 22 11:57:36.756 [conn1267] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.756 [conn1266] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:36.756 [conn1267] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|64 } } cursorid:442756817092454 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:36.756 [conn1266] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|64 } } cursorid:442755407856544 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:36.756 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:36.756 [rsBackgroundSync] repl: old
cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:36.756 [conn1267] end connection 165.225.128.186:59235 (7 connections now open) m31000| Fri Feb 22 11:57:36.756 [conn1266] end connection 165.225.128.186:40419 (7 connections now open) m31000| Fri Feb 22 11:57:36.756 [initandlisten] connection accepted from 165.225.128.186:37309 #1268 (7 connections now open) m31000| Fri Feb 22 11:57:36.756 [initandlisten] connection accepted from 165.225.128.186:45607 #1269 (8 connections now open) m31000| Fri Feb 22 11:57:36.855 [conn1] going to kill op: op: 25838.0 m31000| Fri Feb 22 11:57:36.855 [conn1] going to kill op: op: 25835.0 m31000| Fri Feb 22 11:57:36.856 [conn1] going to kill op: op: 25836.0 m31000| Fri Feb 22 11:57:36.858 [conn1268] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:36.858 [conn1268] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|74 } } cursorid:443152421581816 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:36.858 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:36.858 [conn1268] end connection 165.225.128.186:37309 (7 connections now open) m31000| Fri Feb 22 11:57:36.859 [conn1269] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:36.859 [conn1269] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|74 } } cursorid:443152562215565 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:36.859 [initandlisten] connection accepted from 165.225.128.186:34770 #1270 (8 connections now open) m31002| Fri Feb 22 11:57:36.859 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:36.859 [conn1269] end connection 165.225.128.186:45607 (7 connections now open) m31000| Fri Feb 22 11:57:36.859 
[initandlisten] connection accepted from 165.225.128.186:46272 #1271 (8 connections now open) m31000| Fri Feb 22 11:57:36.864 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:36.864 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|84 } } cursorid:443537477044920 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:36.864 [conn8] ClientCursor::find(): cursor not found in map '443537477044920' (ok after a drop) m31001| Fri Feb 22 11:57:36.864 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:36.956 [conn1] going to kill op: op: 25876.0 m31000| Fri Feb 22 11:57:36.956 [conn1] going to kill op: op: 25877.0 m31000| Fri Feb 22 11:57:36.961 [conn1270] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:36.961 [conn1270] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|84 } } cursorid:443585674373631 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:36.961 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:36.961 [conn1270] end connection 165.225.128.186:34770 (7 connections now open) m31000| Fri Feb 22 11:57:36.961 [initandlisten] connection accepted from 165.225.128.186:63034 #1272 (8 connections now open) m31000| Fri Feb 22 11:57:36.962 [conn1271] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:36.962 [conn1271] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|84 } } cursorid:443590288546226 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:97 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:36.962 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:36.962 [conn1271] end 
connection 165.225.128.186:46272 (7 connections now open) m31000| Fri Feb 22 11:57:36.963 [initandlisten] connection accepted from 165.225.128.186:50239 #1273 (8 connections now open) m31000| Fri Feb 22 11:57:37.057 [conn1] going to kill op: op: 25917.0 m31000| Fri Feb 22 11:57:37.057 [conn1] going to kill op: op: 25914.0 m31000| Fri Feb 22 11:57:37.057 [conn1] going to kill op: op: 25915.0 m31000| Fri Feb 22 11:57:37.064 [conn1272] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.064 [conn1272] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|94 } } cursorid:444024084346763 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.064 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.064 [conn1272] end connection 165.225.128.186:63034 (7 connections now open) m31000| Fri Feb 22 11:57:37.064 [initandlisten] connection accepted from 165.225.128.186:54992 #1274 (8 connections now open) m31000| Fri Feb 22 11:57:37.064 [conn1273] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.065 [conn1273] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534256000|94 } } cursorid:444027529553594 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.065 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.065 [conn1273] end connection 165.225.128.186:50239 (7 connections now open) m31000| Fri Feb 22 11:57:37.065 [initandlisten] connection accepted from 165.225.128.186:33111 #1275 (8 connections now open) m31000| Fri Feb 22 11:57:37.065 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.065 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|5 } } cursorid:444413621211473 
ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.066 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.158 [conn1] going to kill op: op: 25958.0 m31000| Fri Feb 22 11:57:37.158 [conn1] going to kill op: op: 25957.0 m31000| Fri Feb 22 11:57:37.166 [conn1274] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.166 [conn1274] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|5 } } cursorid:444460805146283 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.166 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.166 [conn1274] end connection 165.225.128.186:54992 (7 connections now open) m31000| Fri Feb 22 11:57:37.166 [initandlisten] connection accepted from 165.225.128.186:32863 #1276 (8 connections now open) m31000| Fri Feb 22 11:57:37.167 [conn1275] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.167 [conn1275] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|5 } } cursorid:444464731339064 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:37.167 [conn1275] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:37.167 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.167 [conn1275] end connection 165.225.128.186:33111 (7 connections now open) m31000| Fri Feb 22 11:57:37.168 [initandlisten] connection accepted from 165.225.128.186:35145 #1277 (8 connections now open) m31000| Fri Feb 22 11:57:37.259 [conn1] going to kill op: op: 25993.0 m31000| Fri Feb 22 11:57:37.259 [conn1] going to kill op: op: 25995.0 m31000| Fri Feb 22 
11:57:37.259 [conn1277] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.259 [conn1277] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|15 } } cursorid:444904894324665 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.259 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.260 [conn1277] end connection 165.225.128.186:35145 (7 connections now open) m31000| Fri Feb 22 11:57:37.260 [initandlisten] connection accepted from 165.225.128.186:38345 #1278 (8 connections now open) m31000| Fri Feb 22 11:57:37.269 [conn1276] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.269 [conn1276] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|15 } } cursorid:444900546602920 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.269 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.269 [conn1276] end connection 165.225.128.186:32863 (7 connections now open) m31000| Fri Feb 22 11:57:37.269 [initandlisten] connection accepted from 165.225.128.186:58879 #1279 (8 connections now open) m31000| Fri Feb 22 11:57:37.360 [conn1] going to kill op: op: 26030.0 m31000| Fri Feb 22 11:57:37.360 [conn1] going to kill op: op: 26031.0 m31000| Fri Feb 22 11:57:37.361 [conn1279] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.361 [conn1279] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|25 } } cursorid:445299715970195 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.361 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.361 [conn1279] end 
connection 165.225.128.186:58879 (7 connections now open) m31000| Fri Feb 22 11:57:37.361 [initandlisten] connection accepted from 165.225.128.186:37689 #1280 (8 connections now open) m31000| Fri Feb 22 11:57:37.362 [conn1278] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.362 [conn1278] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|24 } } cursorid:445295743296684 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.362 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.362 [conn1278] end connection 165.225.128.186:38345 (7 connections now open) m31000| Fri Feb 22 11:57:37.362 [initandlisten] connection accepted from 165.225.128.186:52794 #1281 (8 connections now open) m31000| Fri Feb 22 11:57:37.460 [conn1] going to kill op: op: 26069.0 m31000| Fri Feb 22 11:57:37.461 [conn1] going to kill op: op: 26068.0 m31000| Fri Feb 22 11:57:37.463 [conn1280] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.464 [conn1280] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|34 } } cursorid:445689786725892 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.464 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.464 [conn1280] end connection 165.225.128.186:37689 (7 connections now open) m31000| Fri Feb 22 11:57:37.464 [initandlisten] connection accepted from 165.225.128.186:49396 #1282 (8 connections now open) m31000| Fri Feb 22 11:57:37.465 [conn1281] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.465 [conn1281] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|34 } } cursorid:445693533356687 ntoreturn:0 keyUpdates:0 exception: operation was interrupted 
code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.465 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.465 [conn1281] end connection 165.225.128.186:52794 (7 connections now open) m31000| Fri Feb 22 11:57:37.465 [initandlisten] connection accepted from 165.225.128.186:50991 #1283 (8 connections now open) m31000| Fri Feb 22 11:57:37.561 [conn1] going to kill op: op: 26109.0 m31000| Fri Feb 22 11:57:37.561 [conn1] going to kill op: op: 26107.0 m31000| Fri Feb 22 11:57:37.566 [conn1282] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.566 [conn1282] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|44 } } cursorid:446128976736312 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.566 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.566 [conn1282] end connection 165.225.128.186:49396 (7 connections now open) m31000| Fri Feb 22 11:57:37.567 [initandlisten] connection accepted from 165.225.128.186:51182 #1284 (8 connections now open) m31000| Fri Feb 22 11:57:37.567 [conn1283] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.567 [conn1283] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|44 } } cursorid:446131154216925 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:37.567 [conn1283] ClientCursor::find(): cursor not found in map '446131154216925' (ok after a drop) m31002| Fri Feb 22 11:57:37.568 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.568 [conn1283] end connection 165.225.128.186:50991 (7 connections now open) m31000| Fri Feb 22 11:57:37.568 [initandlisten] connection accepted from 165.225.128.186:54291 
#1285 (8 connections now open) m31000| Fri Feb 22 11:57:37.662 [conn1] going to kill op: op: 26147.0 m31000| Fri Feb 22 11:57:37.662 [conn1] going to kill op: op: 26145.0 m31000| Fri Feb 22 11:57:37.669 [conn1284] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.669 [conn1284] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|54 } } cursorid:446565517401458 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.669 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.669 [conn1284] end connection 165.225.128.186:51182 (7 connections now open) m31000| Fri Feb 22 11:57:37.669 [initandlisten] connection accepted from 165.225.128.186:39136 #1286 (8 connections now open) m31000| Fri Feb 22 11:57:37.670 [conn1285] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.670 [conn1285] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|55 } } cursorid:446569757743063 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.670 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.670 [conn1285] end connection 165.225.128.186:54291 (7 connections now open) m31000| Fri Feb 22 11:57:37.670 [initandlisten] connection accepted from 165.225.128.186:60671 #1287 (8 connections now open) m31000| Fri Feb 22 11:57:37.763 [conn1] going to kill op: op: 26185.0 m31000| Fri Feb 22 11:57:37.763 [conn1] going to kill op: op: 26184.0 m31000| Fri Feb 22 11:57:37.771 [conn1286] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.771 [conn1286] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|64 } } cursorid:447004334344361 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 
locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.771 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.772 [conn1286] end connection 165.225.128.186:39136 (7 connections now open) m31000| Fri Feb 22 11:57:37.772 [initandlisten] connection accepted from 165.225.128.186:53204 #1288 (8 connections now open) m31000| Fri Feb 22 11:57:37.772 [conn1287] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.772 [conn1287] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|65 } } cursorid:447008921544011 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.773 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.773 [conn1287] end connection 165.225.128.186:60671 (7 connections now open) m31000| Fri Feb 22 11:57:37.773 [initandlisten] connection accepted from 165.225.128.186:45158 #1289 (8 connections now open) m31000| Fri Feb 22 11:57:37.864 [conn1] going to kill op: op: 26220.0 m31000| Fri Feb 22 11:57:37.864 [conn1] going to kill op: op: 26219.0 m31000| Fri Feb 22 11:57:37.865 [conn1289] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.865 [conn1289] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|75 } } cursorid:447446475352206 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.865 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.865 [conn1289] end connection 165.225.128.186:45158 (7 connections now open) m31000| Fri Feb 22 11:57:37.866 [initandlisten] connection accepted from 165.225.128.186:42781 #1290 (8 connections now open) m31000| Fri Feb 22 11:57:37.964 [conn1] going to kill op: op: 26264.0 m31000| Fri Feb 22 11:57:37.965 
[conn1] going to kill op: op: 26263.0 m31000| Fri Feb 22 11:57:37.965 [conn1] going to kill op: op: 26265.0 m31000| Fri Feb 22 11:57:37.965 [conn1288] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.965 [conn1288] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|75 } } cursorid:447442593358673 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:37.965 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.965 [conn1288] end connection 165.225.128.186:53204 (7 connections now open) m31000| Fri Feb 22 11:57:37.966 [initandlisten] connection accepted from 165.225.128.186:64393 #1291 (8 connections now open) m31000| Fri Feb 22 11:57:37.967 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.967 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|85 } } cursorid:447832592715677 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:37.967 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:37.967 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.968 [conn1290] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:37.968 [conn1290] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|84 } } cursorid:447836956902713 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:37.968 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:37.968 [conn1290] end connection 165.225.128.186:42781 (7 connections now open) m31000| Fri Feb 22 11:57:37.968 [initandlisten] connection accepted from 
165.225.128.186:64569 #1292 (8 connections now open) m31000| Fri Feb 22 11:57:38.065 [conn1] going to kill op: op: 26304.0 m31000| Fri Feb 22 11:57:38.066 [conn1] going to kill op: op: 26305.0 m31000| Fri Feb 22 11:57:38.068 [conn1291] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.068 [conn1291] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|94 } } cursorid:448228557595796 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:38.069 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.069 [conn1291] end connection 165.225.128.186:64393 (7 connections now open) m31000| Fri Feb 22 11:57:38.069 [initandlisten] connection accepted from 165.225.128.186:49060 #1293 (8 connections now open) m31000| Fri Feb 22 11:57:38.071 [conn1292] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.071 [conn1292] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534257000|94 } } cursorid:448232657939742 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:38.071 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.071 [conn1292] end connection 165.225.128.186:64569 (7 connections now open) m31000| Fri Feb 22 11:57:38.071 [initandlisten] connection accepted from 165.225.128.186:43508 #1294 (8 connections now open) m31000| Fri Feb 22 11:57:38.166 [conn1] going to kill op: op: 26355.0 m31000| Fri Feb 22 11:57:38.167 [conn1] going to kill op: op: 26356.0 m31000| Fri Feb 22 11:57:38.167 [conn1] going to kill op: op: 26354.0 m31000| Fri Feb 22 11:57:38.172 [conn1293] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.172 [conn1293] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|6 } } 
cursorid:448665925098720 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:38.172 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.172 [conn1293] end connection 165.225.128.186:49060 (7 connections now open) m31000| Fri Feb 22 11:57:38.172 [initandlisten] connection accepted from 165.225.128.186:60004 #1295 (8 connections now open) m31000| Fri Feb 22 11:57:38.173 [conn1294] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.173 [conn1294] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|6 } } cursorid:448671492140470 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:38.173 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.173 [conn1294] end connection 165.225.128.186:43508 (7 connections now open) m31000| Fri Feb 22 11:57:38.174 [initandlisten] connection accepted from 165.225.128.186:41131 #1296 (8 connections now open) m31000| Fri Feb 22 11:57:38.174 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.174 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|6 } } cursorid:448619792709955 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:38.175 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.268 [conn1] going to kill op: op: 26396.0 m31000| Fri Feb 22 11:57:38.268 [conn1] going to kill op: op: 26395.0 m31000| Fri Feb 22 11:57:38.275 [conn1295] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.275 [conn1295] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|16 } } cursorid:449104493945655 ntoreturn:0 
keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:38.275 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.275 [conn1295] end connection 165.225.128.186:60004 (7 connections now open) m31000| Fri Feb 22 11:57:38.275 [initandlisten] connection accepted from 165.225.128.186:43794 #1297 (8 connections now open) m31000| Fri Feb 22 11:57:38.276 [conn1296] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.276 [conn1296] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|16 } } cursorid:449108496737870 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:38.276 [conn1296] ClientCursor::find(): cursor not found in map '449108496737870' (ok after a drop) m31002| Fri Feb 22 11:57:38.276 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.276 [conn1296] end connection 165.225.128.186:41131 (7 connections now open) m31000| Fri Feb 22 11:57:38.276 [initandlisten] connection accepted from 165.225.128.186:42490 #1298 (8 connections now open) m31000| Fri Feb 22 11:57:38.368 [conn1] going to kill op: op: 26431.0 m31000| Fri Feb 22 11:57:38.369 [conn1] going to kill op: op: 26433.0 m31000| Fri Feb 22 11:57:38.378 [conn1297] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:38.378 [conn1297] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|26 } } cursorid:449541544303360 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:38.378 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:38.378 [conn1297] end connection 165.225.128.186:43794 (7 connections now open) m31000| Fri Feb 22 11:57:38.378 [initandlisten] 
connection accepted from 165.225.128.186:41411 #1299 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.469 [conn1] going to kill op: op: 26464.0
 m31000| Fri Feb 22 11:57:38.469 [conn1] going to kill op: op: 26466.0
 m31000| Fri Feb 22 11:57:38.470 [conn1298] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.470 [conn1298] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|26 } } cursorid:449547396309937 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:38.470 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.470 [conn1298] end connection 165.225.128.186:42490 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.471 [initandlisten] connection accepted from 165.225.128.186:37157 #1300 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.471 [conn1299] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.471 [conn1299] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|36 } } cursorid:449980787233357 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:38.471 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.471 [conn1299] end connection 165.225.128.186:41411 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.471 [initandlisten] connection accepted from 165.225.128.186:42966 #1301 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.570 [conn1] going to kill op: op: 26503.0
 m31000| Fri Feb 22 11:57:38.570 [conn1] going to kill op: op: 26504.0
 m31000| Fri Feb 22 11:57:38.573 [conn1300] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.573 [conn1300] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|45 } } cursorid:450370651620063 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:38.573 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.573 [conn1300] end connection 165.225.128.186:37157 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.573 [initandlisten] connection accepted from 165.225.128.186:51616 #1302 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.574 [conn1301] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.574 [conn1301] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|46 } } cursorid:450374632420715 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:38.574 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.574 [conn1301] end connection 165.225.128.186:42966 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.574 [initandlisten] connection accepted from 165.225.128.186:42068 #1303 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.671 [conn1] going to kill op: op: 26542.0
 m31000| Fri Feb 22 11:57:38.671 [conn1] going to kill op: op: 26541.0
 m31000| Fri Feb 22 11:57:38.676 [conn1302] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.676 [conn1302] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|56 } } cursorid:450810092039744 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:38.676 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.676 [conn1302] end connection 165.225.128.186:51616 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.676 [conn1303] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.676 [conn1303] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|56 } } cursorid:450814533342277 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:38.676 [conn1303] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:57:38.676 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.676 [conn1303] end connection 165.225.128.186:42068 (6 connections now open)
 m31000| Fri Feb 22 11:57:38.676 [initandlisten] connection accepted from 165.225.128.186:62309 #1304 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.677 [initandlisten] connection accepted from 165.225.128.186:50064 #1305 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.772 [conn1] going to kill op: op: 26579.0
 m31000| Fri Feb 22 11:57:38.772 [conn1] going to kill op: op: 26580.0
 m31000| Fri Feb 22 11:57:38.779 [conn1304] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.779 [conn1305] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.779 [conn1304] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|66 } } cursorid:451252061232440 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:38.779 [conn1305] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|66 } } cursorid:451251645239766 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:38.779 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:38.779 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.779 [conn1304] end connection 165.225.128.186:62309 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.779 [conn1305] end connection 165.225.128.186:50064 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.779 [initandlisten] connection accepted from 165.225.128.186:43422 #1306 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.779 [initandlisten] connection accepted from 165.225.128.186:49941 #1307 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.872 [conn1] going to kill op: op: 26617.0
 m31000| Fri Feb 22 11:57:38.873 [conn1] going to kill op: op: 26618.0
 m31000| Fri Feb 22 11:57:38.881 [conn1306] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.882 [conn1306] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|76 } } cursorid:451690889502168 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:38.882 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.882 [conn1307] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.882 [conn1306] end connection 165.225.128.186:43422 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.882 [conn1307] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|76 } } cursorid:451688834855396 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:38.882 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.882 [conn1307] end connection 165.225.128.186:49941 (6 connections now open)
 m31000| Fri Feb 22 11:57:38.882 [initandlisten] connection accepted from 165.225.128.186:37618 #1308 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.882 [initandlisten] connection accepted from 165.225.128.186:33425 #1309 (8 connections now open)
 m31000| Fri Feb 22 11:57:38.973 [conn1] going to kill op: op: 26653.0
 m31000| Fri Feb 22 11:57:38.973 [conn1] going to kill op: op: 26652.0
 m31000| Fri Feb 22 11:57:38.974 [conn1309] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.974 [conn1309] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|86 } } cursorid:452127506558210 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:38.974 [conn1308] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:38.974 [conn1308] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|86 } } cursorid:452127755279925 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:38.974 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:38.975 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:38.975 [conn1309] end connection 165.225.128.186:33425 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.975 [conn1308] end connection 165.225.128.186:37618 (6 connections now open)
 m31000| Fri Feb 22 11:57:38.975 [initandlisten] connection accepted from 165.225.128.186:36316 #1310 (7 connections now open)
 m31000| Fri Feb 22 11:57:38.975 [initandlisten] connection accepted from 165.225.128.186:52777 #1311 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.074 [conn1] going to kill op: op: 26702.0
 m31000| Fri Feb 22 11:57:39.074 [conn1] going to kill op: op: 26701.0
 m31000| Fri Feb 22 11:57:39.074 [conn1] going to kill op: op: 26700.0
 m31000| Fri Feb 22 11:57:39.077 [conn1310] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.077 [conn1310] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|95 } } cursorid:452523286825338 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:39.077 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.077 [conn1310] end connection 165.225.128.186:36316 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.078 [conn1311] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.078 [conn1311] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|95 } } cursorid:452522920675941 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.078 [conn1311] ClientCursor::find(): cursor not found in map '452522920675941' (ok after a drop)
 m31000| Fri Feb 22 11:57:39.078 [initandlisten] connection accepted from 165.225.128.186:42188 #1312 (8 connections now open)
 m31002| Fri Feb 22 11:57:39.078 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.078 [conn1311] end connection 165.225.128.186:52777 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.078 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.078 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534258000|95 } } cursorid:452472387155739 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.078 [initandlisten] connection accepted from 165.225.128.186:61543 #1313 (8 connections now open)
 m31001| Fri Feb 22 11:57:39.079 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.175 [conn1] going to kill op: op: 26742.0
 m31000| Fri Feb 22 11:57:39.175 [conn1] going to kill op: op: 26743.0
 m31000| Fri Feb 22 11:57:39.180 [conn1312] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.180 [conn1312] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|6 } } cursorid:452961239464053 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.180 [conn1313] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.180 [conn1313] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|6 } } cursorid:452961396781274 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:39.180 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:39.180 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.180 [conn1312] end connection 165.225.128.186:42188 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.180 [conn1313] end connection 165.225.128.186:61543 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.180 [initandlisten] connection accepted from 165.225.128.186:62320 #1314 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.181 [initandlisten] connection accepted from 165.225.128.186:53480 #1315 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.276 [conn1] going to kill op: op: 26794.0
 m31000| Fri Feb 22 11:57:39.276 [conn1] going to kill op: op: 26792.0
 m31000| Fri Feb 22 11:57:39.276 [conn1] going to kill op: op: 26793.0
 m31000| Fri Feb 22 11:57:39.283 [conn1315] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.283 [conn1315] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|16 } } cursorid:453399504234380 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:39.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.283 [conn1315] end connection 165.225.128.186:53480 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.283 [conn1314] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.283 [conn1314] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|16 } } cursorid:453400065541895 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.283 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.283 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|16 } } cursorid:453348628758531 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:44 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:39.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.284 [initandlisten] connection accepted from 165.225.128.186:61826 #1316 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.284 [conn1314] end connection 165.225.128.186:62320 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.284 [initandlisten] connection accepted from 165.225.128.186:59409 #1317 (8 connections now open)
 m31002| Fri Feb 22 11:57:39.285 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.377 [conn1] going to kill op: op: 26833.0
 m31000| Fri Feb 22 11:57:39.377 [conn1] going to kill op: op: 26832.0
 m31000| Fri Feb 22 11:57:39.386 [conn1316] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.386 [conn1317] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.386 [conn1316] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|27 } } cursorid:453837206293160 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.386 [conn1317] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|27 } } cursorid:453836455418323 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.386 [conn1316] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:57:39.386 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:39.386 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.386 [conn1316] end connection 165.225.128.186:61826 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.386 [conn1317] end connection 165.225.128.186:59409 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.387 [initandlisten] connection accepted from 165.225.128.186:37014 #1318 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.387 [initandlisten] connection accepted from 165.225.128.186:54916 #1319 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.478 [conn1] going to kill op: op: 26868.0
 m31000| Fri Feb 22 11:57:39.478 [conn1] going to kill op: op: 26867.0
 m31000| Fri Feb 22 11:57:39.478 [conn1318] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.478 [conn1318] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|37 } } cursorid:454275698172098 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:39.479 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.479 [conn1318] end connection 165.225.128.186:37014 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.479 [conn1319] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.479 [conn1319] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|37 } } cursorid:454275281105318 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.479 [initandlisten] connection accepted from 165.225.128.186:49499 #1320 (8 connections now open)
 m31001| Fri Feb 22 11:57:39.479 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.479 [conn1319] end connection 165.225.128.186:54916 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.479 [initandlisten] connection accepted from 165.225.128.186:60149 #1321 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.578 [conn1] going to kill op: op: 26905.0
 m31000| Fri Feb 22 11:57:39.578 [conn1] going to kill op: op: 26906.0
 m31000| Fri Feb 22 11:57:39.581 [conn1320] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.581 [conn1320] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|46 } } cursorid:454667023871146 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:39.581 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.581 [conn1320] end connection 165.225.128.186:49499 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.581 [initandlisten] connection accepted from 165.225.128.186:44683 #1322 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.581 [conn1321] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.581 [conn1321] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|46 } } cursorid:454671071367431 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:39.581 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.582 [conn1321] end connection 165.225.128.186:60149 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.582 [initandlisten] connection accepted from 165.225.128.186:54819 #1323 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.679 [conn1] going to kill op: op: 26943.0
 m31000| Fri Feb 22 11:57:39.679 [conn1] going to kill op: op: 26944.0
 m31000| Fri Feb 22 11:57:39.684 [conn1322] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.684 [conn1322] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|56 } } cursorid:455103612101152 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:39.684 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.684 [conn1322] end connection 165.225.128.186:44683 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.684 [conn1323] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.684 [conn1323] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|56 } } cursorid:455109094550694 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:39.684 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.684 [initandlisten] connection accepted from 165.225.128.186:44687 #1324 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.684 [conn1323] end connection 165.225.128.186:54819 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.684 [initandlisten] connection accepted from 165.225.128.186:50065 #1325 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.780 [conn1] going to kill op: op: 26981.0
 m31000| Fri Feb 22 11:57:39.780 [conn1] going to kill op: op: 26982.0
 m31000| Fri Feb 22 11:57:39.787 [conn1325] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.787 [conn1325] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|66 } } cursorid:455546660007383 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.787 [conn1324] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.787 [conn1324] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|66 } } cursorid:455547333747679 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.787 [conn1325] ClientCursor::find(): cursor not found in map '455546660007383' (ok after a drop)
 m31001| Fri Feb 22 11:57:39.787 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:39.787 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.787 [conn1325] end connection 165.225.128.186:50065 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.787 [conn1324] end connection 165.225.128.186:44687 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.787 [initandlisten] connection accepted from 165.225.128.186:40421 #1326 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.787 [initandlisten] connection accepted from 165.225.128.186:63064 #1327 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.880 [conn1] going to kill op: op: 27019.0
 m31000| Fri Feb 22 11:57:39.881 [conn1] going to kill op: op: 27020.0
 m31000| Fri Feb 22 11:57:39.889 [conn1326] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.890 [conn1326] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|76 } } cursorid:455984965026164 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:39.890 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.890 [conn1326] end connection 165.225.128.186:40421 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.890 [conn1327] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.890 [conn1327] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|76 } } cursorid:455984138760043 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:87 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:39.890 [initandlisten] connection accepted from 165.225.128.186:57200 #1328 (8 connections now open)
 m31001| Fri Feb 22 11:57:39.890 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.890 [conn1327] end connection 165.225.128.186:63064 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.891 [initandlisten] connection accepted from 165.225.128.186:43003 #1329 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.981 [conn1] going to kill op: op: 27055.0
 m31000| Fri Feb 22 11:57:39.981 [conn1] going to kill op: op: 27056.0
 m31000| Fri Feb 22 11:57:39.982 [conn1328] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.982 [conn1328] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|86 } } cursorid:456419389122955 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:39.982 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.982 [conn1328] end connection 165.225.128.186:57200 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.982 [initandlisten] connection accepted from 165.225.128.186:54151 #1330 (8 connections now open)
 m31000| Fri Feb 22 11:57:39.983 [conn1329] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:39.983 [conn1329] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|86 } } cursorid:456423489205612 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:39.983 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:39.983 [conn1329] end connection 165.225.128.186:43003 (7 connections now open)
 m31000| Fri Feb 22 11:57:39.983 [initandlisten] connection accepted from 165.225.128.186:35199 #1331 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.082 [conn1] going to kill op: op: 27097.0
 m31000| Fri Feb 22 11:57:40.082 [conn1] going to kill op: op: 27094.0
 m31000| Fri Feb 22 11:57:40.082 [conn1] going to kill op: op: 27095.0
 m31000| Fri Feb 22 11:57:40.085 [conn1330] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.085 [conn1330] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|95 } } cursorid:456812916042520 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:40.085 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.085 [conn1330] end connection 165.225.128.186:54151 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.086 [initandlisten] connection accepted from 165.225.128.186:52465 #1332 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.086 [conn1331] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.086 [conn1331] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534259000|96 } } cursorid:456818778099352 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.086 [conn1331] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:57:40.086 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.086 [conn1331] end connection 165.225.128.186:35199 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.087 [initandlisten] connection accepted from 165.225.128.186:47013 #1333 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.089 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.089 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|8 } } cursorid:457204139492062 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:40.089 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.183 [conn1] going to kill op: op: 27138.0
 m31000| Fri Feb 22 11:57:40.183 [conn1] going to kill op: op: 27137.0
 m31000| Fri Feb 22 11:57:40.188 [conn1332] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.188 [conn1332] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|8 } } cursorid:457252061117616 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:40.188 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.188 [conn1332] end connection 165.225.128.186:52465 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.189 [initandlisten] connection accepted from 165.225.128.186:38328 #1334 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.189 [conn1333] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.189 [conn1333] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|8 } } cursorid:457255312522125 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:40.189 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.189 [conn1333] end connection 165.225.128.186:47013 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.189 [initandlisten] connection accepted from 165.225.128.186:52590 #1335 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.284 [conn1] going to kill op: op: 27177.0
 m31000| Fri Feb 22 11:57:40.284 [conn1] going to kill op: op: 27176.0
 m31000| Fri Feb 22 11:57:40.291 [conn1334] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.291 [conn1334] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|18 } } cursorid:457689344198410 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:40.291 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.291 [conn1334] end connection 165.225.128.186:38328 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.291 [conn1335] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.291 [conn1335] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|18 } } cursorid:457694955485998 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.291 [initandlisten] connection accepted from 165.225.128.186:49032 #1336 (8 connections now open)
 m31001| Fri Feb 22 11:57:40.291 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.291 [conn1335] end connection 165.225.128.186:52590 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.292 [initandlisten] connection accepted from 165.225.128.186:60863 #1337 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.384 [conn1] going to kill op: op: 27225.0
 m31000| Fri Feb 22 11:57:40.385 [conn1] going to kill op: op: 27223.0
 m31000| Fri Feb 22 11:57:40.385 [conn1] going to kill op: op: 27226.0
 m31000| Fri Feb 22 11:57:40.386 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.386 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|28 } } cursorid:458081525827231 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:40.386 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.393 [conn1336] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.394 [conn1337] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.394 [conn1336] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|28 } } cursorid:458128362503560 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.394 [conn1337] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|28 } } cursorid:458131443454926 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.394 [conn1337] ClientCursor::find(): cursor not found in map '458131443454926' (ok after a drop)
 m31002| Fri Feb 22 11:57:40.394 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:40.394 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.394 [conn1336] end connection 165.225.128.186:49032 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.394 [conn1337] end connection 165.225.128.186:60863 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.394 [initandlisten] connection accepted from 165.225.128.186:46012 #1338 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.394 [initandlisten] connection accepted from 165.225.128.186:59165 #1339 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.485 [conn1] going to kill op: op: 27261.0
 m31000| Fri Feb 22 11:57:40.486 [conn1] going to kill op: op: 27262.0
 m31000| Fri Feb 22 11:57:40.486 [conn1339] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.486 [conn1339] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|38 } } cursorid:458569596953891 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.486 [conn1338] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.486 [conn1338] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|38 } } cursorid:458569842947556 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:40.486 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:40.487 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.487 [conn1339] end connection 165.225.128.186:59165 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.487 [conn1338] end connection 165.225.128.186:46012 (6 connections now open)
 m31000| Fri Feb 22 11:57:40.487 [initandlisten] connection accepted from 165.225.128.186:59608 #1340 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.487 [initandlisten] connection accepted from 165.225.128.186:45340 #1341 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.586 [conn1] going to kill op: op: 27300.0
 m31000| Fri Feb 22 11:57:40.586 [conn1] going to kill op: op: 27299.0
 m31000| Fri Feb 22 11:57:40.589 [conn1341] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.589 [conn1341] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|47 } } cursorid:458965331501983 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.589 [conn1340] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.589 [conn1340] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|47 } } cursorid:458966600243208 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:40.589 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:40.590 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.589 [conn1341] end connection 165.225.128.186:45340 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.590 [conn1340] end connection 165.225.128.186:59608 (6 connections now open)
 m31000| Fri Feb 22 11:57:40.590 [initandlisten] connection accepted from 165.225.128.186:49170 #1342 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.590 [initandlisten] connection accepted from 165.225.128.186:55679 #1343 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.687 [conn1] going to kill op: op: 27339.0
 m31000| Fri Feb 22 11:57:40.687 [conn1] going to kill op: op: 27337.0
 m31000| Fri Feb 22 11:57:40.692 [conn1343] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.692 [conn1343] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|57 } } cursorid:459403596191820 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:40.692 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.692 [conn1342] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.692 [conn1342] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|57 } } cursorid:459403255653184 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.692 [conn1343] end connection 165.225.128.186:55679 (7 connections now open)
 m31002| Fri Feb 22 11:57:40.693 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.693 [conn1342] end connection 165.225.128.186:49170 (6 connections now open)
 m31000| Fri Feb 22 11:57:40.693 [initandlisten] connection accepted from 165.225.128.186:52930 #1344 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.693 [initandlisten] connection accepted from 165.225.128.186:42437 #1345 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.788 [conn1] going to kill op: op: 27376.0
 m31000| Fri Feb 22 11:57:40.788 [conn1] going to kill op: op: 27377.0
 m31000| Fri Feb 22 11:57:40.795 [conn1345] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.795 [conn1345] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|68 } } cursorid:459841520633954 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:40.795 [conn1345] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:40.795 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.795 [conn1345] end connection 165.225.128.186:42437 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.795 [conn1344] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.795 [initandlisten] connection accepted from 165.225.128.186:33469 #1346 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.795 [conn1344] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|67 } } cursorid:459841971111405 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:40.795 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:40.795 [conn1344] end connection 165.225.128.186:52930 (7 connections now open)
 m31000| Fri Feb 22 11:57:40.796 [initandlisten] connection accepted from 165.225.128.186:53464 #1347 (8 connections now open)
 m31000| Fri Feb 22 11:57:40.888 [conn1] going to kill op: op: 27414.0
 m31000| Fri Feb 22 11:57:40.889 [conn1] going to kill op: op: 27415.0
 m31000| Fri Feb 22 11:57:40.897 [conn1346] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:40.897 [conn1346] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|78 } } cursorid:460275343260189 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:40.897 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri
Feb 22 11:57:40.897 [conn1346] end connection 165.225.128.186:33469 (7 connections now open) m31000| Fri Feb 22 11:57:40.897 [initandlisten] connection accepted from 165.225.128.186:58921 #1348 (8 connections now open) m31000| Fri Feb 22 11:57:40.897 [conn1347] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:40.897 [conn1347] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|78 } } cursorid:460279336705279 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:40.897 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:40.898 [conn1347] end connection 165.225.128.186:53464 (7 connections now open) m31000| Fri Feb 22 11:57:40.898 [initandlisten] connection accepted from 165.225.128.186:58182 #1349 (8 connections now open) m31000| Fri Feb 22 11:57:40.989 [conn1] going to kill op: op: 27450.0 m31000| Fri Feb 22 11:57:40.989 [conn1] going to kill op: op: 27449.0 m31000| Fri Feb 22 11:57:40.989 [conn1349] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:40.989 [conn1349] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|88 } } cursorid:460717718230393 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:40.990 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:40.990 [conn1349] end connection 165.225.128.186:58182 (7 connections now open) m31000| Fri Feb 22 11:57:40.990 [initandlisten] connection accepted from 165.225.128.186:41660 #1350 (8 connections now open) m31000| Fri Feb 22 11:57:41.090 [conn1] going to kill op: op: 27484.0 m31000| Fri Feb 22 11:57:41.090 [conn1] going to kill op: op: 27483.0 m31000| Fri Feb 22 11:57:41.091 [conn1348] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.091 
[conn1348] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|88 } } cursorid:460713461479472 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.091 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.091 [conn1348] end connection 165.225.128.186:58921 (7 connections now open) m31000| Fri Feb 22 11:57:41.092 [initandlisten] connection accepted from 165.225.128.186:42572 #1351 (8 connections now open) m31000| Fri Feb 22 11:57:41.092 [conn1350] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.092 [conn1350] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534260000|97 } } cursorid:461109776852209 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.092 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.092 [conn1350] end connection 165.225.128.186:41660 (7 connections now open) m31000| Fri Feb 22 11:57:41.092 [initandlisten] connection accepted from 165.225.128.186:62497 #1352 (8 connections now open) m31000| Fri Feb 22 11:57:41.190 [conn1] going to kill op: op: 27535.0 m31000| Fri Feb 22 11:57:41.191 [conn1] going to kill op: op: 27533.0 m31000| Fri Feb 22 11:57:41.191 [conn1] going to kill op: op: 27534.0 m31000| Fri Feb 22 11:57:41.194 [conn1351] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.194 [conn1351] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|8 } } cursorid:461542506419404 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.194 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.194 [conn1351] end connection 165.225.128.186:42572 (7 
connections now open) m31000| Fri Feb 22 11:57:41.194 [initandlisten] connection accepted from 165.225.128.186:53596 #1353 (8 connections now open) m31000| Fri Feb 22 11:57:41.194 [conn1352] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.194 [conn1352] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|8 } } cursorid:461546332486817 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:41.195 [conn1352] ClientCursor::find(): cursor not found in map '461546332486817' (ok after a drop) m31000| Fri Feb 22 11:57:41.195 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.195 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|8 } } cursorid:461495229796126 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.195 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.195 [conn1352] end connection 165.225.128.186:62497 (7 connections now open) m31000| Fri Feb 22 11:57:41.195 [initandlisten] connection accepted from 165.225.128.186:55890 #1354 (8 connections now open) m31001| Fri Feb 22 11:57:41.196 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.291 [conn1] going to kill op: op: 27573.0 m31000| Fri Feb 22 11:57:41.291 [conn1] going to kill op: op: 27574.0 m31000| Fri Feb 22 11:57:41.296 [conn1353] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.296 [conn1353] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|18 } } cursorid:461981031418108 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new 
one m31000| Fri Feb 22 11:57:41.296 [conn1353] end connection 165.225.128.186:53596 (7 connections now open) m31000| Fri Feb 22 11:57:41.296 [initandlisten] connection accepted from 165.225.128.186:54023 #1355 (8 connections now open) m31000| Fri Feb 22 11:57:41.297 [conn1354] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.297 [conn1354] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|18 } } cursorid:461983873527946 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.297 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.297 [conn1354] end connection 165.225.128.186:55890 (7 connections now open) m31000| Fri Feb 22 11:57:41.297 [initandlisten] connection accepted from 165.225.128.186:60078 #1356 (8 connections now open) m31000| Fri Feb 22 11:57:41.392 [conn1] going to kill op: op: 27613.0 m31000| Fri Feb 22 11:57:41.392 [conn1] going to kill op: op: 27612.0 m31000| Fri Feb 22 11:57:41.392 [conn1] going to kill op: op: 27614.0 m31000| Fri Feb 22 11:57:41.398 [conn1355] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.398 [conn1355] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|28 } } cursorid:462419433480157 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.398 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.398 [conn1355] end connection 165.225.128.186:54023 (7 connections now open) m31000| Fri Feb 22 11:57:41.398 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.398 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|37 } } cursorid:462767094307842 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 
locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:41.398 [initandlisten] connection accepted from 165.225.128.186:63841 #1357 (8 connections now open) m31002| Fri Feb 22 11:57:41.399 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.399 [conn1356] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.399 [conn1356] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|28 } } cursorid:462422935937842 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.399 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.399 [conn1356] end connection 165.225.128.186:60078 (7 connections now open) m31000| Fri Feb 22 11:57:41.399 [initandlisten] connection accepted from 165.225.128.186:63032 #1358 (8 connections now open) m31000| Fri Feb 22 11:57:41.492 [conn1] going to kill op: op: 27652.0 m31000| Fri Feb 22 11:57:41.492 [conn1] going to kill op: op: 27654.0 m31000| Fri Feb 22 11:57:41.500 [conn1357] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.500 [conn1357] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|38 } } cursorid:462857608670940 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:37 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:41.500 [conn1357] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:41.500 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.500 [conn1357] end connection 165.225.128.186:63841 (7 connections now open) m31000| Fri Feb 22 11:57:41.500 [initandlisten] connection accepted from 165.225.128.186:50077 #1359 (8 connections now open) m31000| Fri Feb 22 11:57:41.501 [conn1358] { $err: "operation was interrupted", code: 11601 } m31000| 
Fri Feb 22 11:57:41.501 [conn1358] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|38 } } cursorid:462861767236237 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:31 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.501 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.501 [conn1358] end connection 165.225.128.186:63032 (7 connections now open) m31000| Fri Feb 22 11:57:41.501 [initandlisten] connection accepted from 165.225.128.186:62304 #1360 (8 connections now open) m31000| Fri Feb 22 11:57:41.593 [conn1] going to kill op: op: 27691.0 m31000| Fri Feb 22 11:57:41.593 [conn1] going to kill op: op: 27689.0 m31000| Fri Feb 22 11:57:41.593 [conn1360] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.593 [conn1360] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|49 } } cursorid:463299178674241 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:93 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.594 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.594 [conn1360] end connection 165.225.128.186:62304 (7 connections now open) m31000| Fri Feb 22 11:57:41.594 [initandlisten] connection accepted from 165.225.128.186:50022 #1361 (8 connections now open) m31000| Fri Feb 22 11:57:41.602 [conn1359] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.602 [conn1359] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|48 } } cursorid:463295595714856 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.602 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.602 [conn1359] end connection 165.225.128.186:50077 (7 connections now open) m31000| Fri Feb 22 
11:57:41.602 [initandlisten] connection accepted from 165.225.128.186:57125 #1362 (8 connections now open) m31000| Fri Feb 22 11:57:41.693 [conn1] going to kill op: op: 27727.0 m31000| Fri Feb 22 11:57:41.694 [conn1] going to kill op: op: 27726.0 m31000| Fri Feb 22 11:57:41.694 [conn1362] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.694 [conn1362] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|59 } } cursorid:463693437044792 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.694 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.694 [conn1362] end connection 165.225.128.186:57125 (7 connections now open) m31000| Fri Feb 22 11:57:41.694 [initandlisten] connection accepted from 165.225.128.186:62360 #1363 (8 connections now open) m31000| Fri Feb 22 11:57:41.696 [conn1361] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.696 [conn1361] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|58 } } cursorid:463690148023508 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.696 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.696 [conn1361] end connection 165.225.128.186:50022 (7 connections now open) m31000| Fri Feb 22 11:57:41.696 [initandlisten] connection accepted from 165.225.128.186:61521 #1364 (8 connections now open) m31000| Fri Feb 22 11:57:41.794 [conn1] going to kill op: op: 27765.0 m31000| Fri Feb 22 11:57:41.794 [conn1] going to kill op: op: 27764.0 m31000| Fri Feb 22 11:57:41.796 [conn1363] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.796 [conn1363] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|68 } } cursorid:464086003043809 
ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:41.796 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.796 [conn1363] end connection 165.225.128.186:62360 (7 connections now open) m31000| Fri Feb 22 11:57:41.797 [initandlisten] connection accepted from 165.225.128.186:38090 #1365 (8 connections now open) m31000| Fri Feb 22 11:57:41.798 [conn1364] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.798 [conn1364] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|68 } } cursorid:464089297186089 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.799 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.799 [conn1364] end connection 165.225.128.186:61521 (7 connections now open) m31000| Fri Feb 22 11:57:41.799 [initandlisten] connection accepted from 165.225.128.186:46301 #1366 (8 connections now open) m31000| Fri Feb 22 11:57:41.895 [conn1] going to kill op: op: 27802.0 m31000| Fri Feb 22 11:57:41.895 [conn1] going to kill op: op: 27803.0 m31000| Fri Feb 22 11:57:41.898 [conn1365] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.899 [conn1365] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|78 } } cursorid:464523430359147 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:41.899 [conn1365] ClientCursor::find(): cursor not found in map '464523430359147' (ok after a drop) m31002| Fri Feb 22 11:57:41.899 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.899 [conn1365] end connection 165.225.128.186:38090 (7 connections now open) m31000| Fri Feb 22 11:57:41.899 
[initandlisten] connection accepted from 165.225.128.186:38464 #1367 (8 connections now open) m31000| Fri Feb 22 11:57:41.901 [conn1366] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:41.901 [conn1366] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|78 } } cursorid:464526671649426 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:41.901 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:41.901 [conn1366] end connection 165.225.128.186:46301 (7 connections now open) m31000| Fri Feb 22 11:57:41.901 [initandlisten] connection accepted from 165.225.128.186:47288 #1368 (8 connections now open) m31000| Fri Feb 22 11:57:41.995 [conn1] going to kill op: op: 27841.0 m31000| Fri Feb 22 11:57:41.996 [conn1] going to kill op: op: 27840.0 m31000| Fri Feb 22 11:57:42.001 [conn1367] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.001 [conn1367] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|88 } } cursorid:464918612681919 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.001 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.001 [conn1367] end connection 165.225.128.186:38464 (7 connections now open) m31000| Fri Feb 22 11:57:42.001 [initandlisten] connection accepted from 165.225.128.186:41101 #1369 (8 connections now open) m31000| Fri Feb 22 11:57:42.003 [conn1368] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.003 [conn1368] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|88 } } cursorid:464921965168510 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.003 
[rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.003 [conn1368] end connection 165.225.128.186:47288 (7 connections now open) m31000| Fri Feb 22 11:57:42.003 [initandlisten] connection accepted from 165.225.128.186:35772 #1370 (8 connections now open) m31000| Fri Feb 22 11:57:42.096 [conn1] going to kill op: op: 27880.0 m31000| Fri Feb 22 11:57:42.096 [conn1] going to kill op: op: 27879.0 m31000| Fri Feb 22 11:57:42.103 [conn1369] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.103 [conn1369] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|98 } } cursorid:465312924202344 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.103 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.103 [conn1369] end connection 165.225.128.186:41101 (7 connections now open) m31000| Fri Feb 22 11:57:42.103 [initandlisten] connection accepted from 165.225.128.186:52565 #1371 (8 connections now open) m31000| Fri Feb 22 11:57:42.105 [conn1370] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.105 [conn1370] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534261000|98 } } cursorid:465318119735158 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.106 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.106 [conn1370] end connection 165.225.128.186:35772 (7 connections now open) m31000| Fri Feb 22 11:57:42.106 [initandlisten] connection accepted from 165.225.128.186:61399 #1372 (8 connections now open) m31000| Fri Feb 22 11:57:42.197 [conn1] going to kill op: op: 27917.0 m31000| Fri Feb 22 11:57:42.197 [conn1] going to kill op: op: 27919.0 m31000| Fri Feb 22 11:57:42.198 [conn1372] { 
$err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.198 [conn1372] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|9 } } cursorid:465712049368196 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.198 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.198 [conn1372] end connection 165.225.128.186:61399 (7 connections now open) m31000| Fri Feb 22 11:57:42.198 [initandlisten] connection accepted from 165.225.128.186:59198 #1373 (8 connections now open) m31000| Fri Feb 22 11:57:42.205 [conn1371] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.206 [conn1371] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|9 } } cursorid:465709208766760 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:42.206 [conn1371] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:42.206 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.206 [conn1371] end connection 165.225.128.186:52565 (7 connections now open) m31000| Fri Feb 22 11:57:42.206 [initandlisten] connection accepted from 165.225.128.186:45484 #1374 (8 connections now open) m31001| Fri Feb 22 11:57:42.264 [conn8] end connection 165.225.128.186:53796 (2 connections now open) m31001| Fri Feb 22 11:57:42.264 [initandlisten] connection accepted from 165.225.128.186:38255 #10 (3 connections now open) m31000| Fri Feb 22 11:57:42.298 [conn1] going to kill op: op: 27968.0 m31000| Fri Feb 22 11:57:42.298 [conn1] going to kill op: op: 27969.0 m31000| Fri Feb 22 11:57:42.298 [conn1] going to kill op: op: 27966.0 m31000| Fri Feb 22 11:57:42.300 [conn1373] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:57:42.300 [conn1373] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|18 } } cursorid:466103854325838 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.300 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.301 [conn1373] end connection 165.225.128.186:59198 (7 connections now open) m31000| Fri Feb 22 11:57:42.301 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.301 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|18 } } cursorid:466056587912431 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:42.301 [initandlisten] connection accepted from 165.225.128.186:63932 #1375 (8 connections now open) m31001| Fri Feb 22 11:57:42.302 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.399 [conn1] going to kill op: op: 28004.0 m31000| Fri Feb 22 11:57:42.399 [conn1] going to kill op: op: 28002.0 m31000| Fri Feb 22 11:57:42.399 [conn1374] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.400 [conn1374] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|19 } } cursorid:466107860539667 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.400 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.400 [conn1374] end connection 165.225.128.186:45484 (7 connections now open) m31000| Fri Feb 22 11:57:42.400 [initandlisten] connection accepted from 165.225.128.186:55022 #1376 (8 connections now open) m31000| Fri Feb 22 11:57:42.403 [conn1375] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.403 [conn1375] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|29 } } cursorid:466498265330345 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.403 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.404 [conn1375] end connection 165.225.128.186:63932 (7 connections now open) m31000| Fri Feb 22 11:57:42.404 [initandlisten] connection accepted from 165.225.128.186:46867 #1377 (8 connections now open) m31000| Fri Feb 22 11:57:42.499 [conn1] going to kill op: op: 28050.0 m31000| Fri Feb 22 11:57:42.500 [conn1] going to kill op: op: 28053.0 m31000| Fri Feb 22 11:57:42.502 [conn1376] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.502 [conn1376] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|38 } } cursorid:466889423301654 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.502 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.502 [conn1376] end connection 165.225.128.186:55022 (7 connections now open) m31000| Fri Feb 22 11:57:42.503 [initandlisten] connection accepted from 165.225.128.186:52067 #1378 (8 connections now open) m31000| Fri Feb 22 11:57:42.506 [conn1377] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.506 [conn1377] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|39 } } cursorid:466894974329031 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.506 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.506 [conn1377] end connection 165.225.128.186:46867 (7 connections now open) m31000| Fri Feb 22 11:57:42.507 [initandlisten] connection accepted 
from 165.225.128.186:61013 #1379 (8 connections now open) m31000| Fri Feb 22 11:57:42.600 [conn1] going to kill op: op: 28101.0 m31000| Fri Feb 22 11:57:42.601 [conn1] going to kill op: op: 28102.0 m31000| Fri Feb 22 11:57:42.601 [conn1] going to kill op: op: 28100.0 m31000| Fri Feb 22 11:57:42.605 [conn1378] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.605 [conn1378] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|48 } } cursorid:467284246715657 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.605 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.605 [conn1378] end connection 165.225.128.186:52067 (7 connections now open) m31000| Fri Feb 22 11:57:42.606 [initandlisten] connection accepted from 165.225.128.186:49710 #1380 (8 connections now open) m31000| Fri Feb 22 11:57:42.606 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.606 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|38 } } cursorid:466884750129938 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:42.606 [conn12] ClientCursor::find(): cursor not found in map '466884750129938' (ok after a drop) m31002| Fri Feb 22 11:57:42.606 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.609 [conn1379] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.609 [conn1379] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|49 } } cursorid:467289027985237 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.609 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 
22 11:57:42.609 [conn1379] end connection 165.225.128.186:61013 (7 connections now open) m31000| Fri Feb 22 11:57:42.609 [initandlisten] connection accepted from 165.225.128.186:52649 #1381 (8 connections now open) m31000| Fri Feb 22 11:57:42.701 [conn1] going to kill op: op: 28138.0 m31000| Fri Feb 22 11:57:42.702 [conn1] going to kill op: op: 28140.0 m31000| Fri Feb 22 11:57:42.708 [conn1380] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.708 [conn1380] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|59 } } cursorid:467679248149856 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.708 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.708 [conn1380] end connection 165.225.128.186:49710 (7 connections now open) m31000| Fri Feb 22 11:57:42.708 [initandlisten] connection accepted from 165.225.128.186:65339 #1382 (8 connections now open) m31000| Fri Feb 22 11:57:42.802 [conn1] going to kill op: op: 28172.0 m31000| Fri Feb 22 11:57:42.802 [conn1] going to kill op: op: 28174.0 m31000| Fri Feb 22 11:57:42.803 [conn1381] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.803 [conn1381] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|59 } } cursorid:467684250117529 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.804 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.804 [conn1381] end connection 165.225.128.186:52649 (7 connections now open) m31000| Fri Feb 22 11:57:42.804 [initandlisten] connection accepted from 165.225.128.186:42674 #1383 (8 connections now open) m31000| Fri Feb 22 11:57:42.810 [conn1382] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.811 
[conn1382] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|69 } } cursorid:468074242979617 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:42.811 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.811 [conn1382] end connection 165.225.128.186:65339 (7 connections now open) m31000| Fri Feb 22 11:57:42.811 [initandlisten] connection accepted from 165.225.128.186:50523 #1384 (8 connections now open) m31000| Fri Feb 22 11:57:42.903 [conn1] going to kill op: op: 28209.0 m31000| Fri Feb 22 11:57:42.903 [conn1] going to kill op: op: 28210.0 m31000| Fri Feb 22 11:57:42.906 [conn1383] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:42.906 [conn1383] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|78 } } cursorid:468466070518082 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:42.907 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:42.907 [conn1383] end connection 165.225.128.186:42674 (7 connections now open) m31000| Fri Feb 22 11:57:42.907 [initandlisten] connection accepted from 165.225.128.186:45810 #1385 (8 connections now open) m31000| Fri Feb 22 11:57:43.004 [conn1] going to kill op: op: 28244.0 m31000| Fri Feb 22 11:57:43.004 [conn1] going to kill op: op: 28246.0 m31000| Fri Feb 22 11:57:43.005 [conn1384] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.005 [conn1384] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|79 } } cursorid:468470908074071 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.005 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 
22 11:57:43.005 [conn1384] end connection 165.225.128.186:50523 (7 connections now open) m31000| Fri Feb 22 11:57:43.005 [initandlisten] connection accepted from 165.225.128.186:38573 #1386 (8 connections now open) m31000| Fri Feb 22 11:57:43.009 [conn1385] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.009 [conn1385] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|88 } } cursorid:468860648220372 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:43.009 [conn1385] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:43.010 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.010 [conn1385] end connection 165.225.128.186:45810 (7 connections now open) m31000| Fri Feb 22 11:57:43.010 [initandlisten] connection accepted from 165.225.128.186:59651 #1387 (8 connections now open) m31000| Fri Feb 22 11:57:43.105 [conn1] going to kill op: op: 28284.0 m31000| Fri Feb 22 11:57:43.105 [conn1] going to kill op: op: 28282.0 m31000| Fri Feb 22 11:57:43.108 [conn1386] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.108 [conn1386] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|98 } } cursorid:469252333071297 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.108 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.108 [conn1386] end connection 165.225.128.186:38573 (7 connections now open) m31000| Fri Feb 22 11:57:43.108 [initandlisten] connection accepted from 165.225.128.186:51636 #1388 (8 connections now open) m31000| Fri Feb 22 11:57:43.112 [conn1387] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.112 [conn1387] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534262000|99 } } cursorid:469255620475802 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.113 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.113 [conn1387] end connection 165.225.128.186:59651 (7 connections now open) m31000| Fri Feb 22 11:57:43.113 [initandlisten] connection accepted from 165.225.128.186:37444 #1389 (8 connections now open) m31000| Fri Feb 22 11:57:43.205 [conn1] going to kill op: op: 28321.0 m31000| Fri Feb 22 11:57:43.205 [conn1] going to kill op: op: 28322.0 m31000| Fri Feb 22 11:57:43.211 [conn1388] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.211 [conn1388] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|9 } } cursorid:469647022408829 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.211 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.211 [conn1388] end connection 165.225.128.186:51636 (7 connections now open) m31000| Fri Feb 22 11:57:43.211 [initandlisten] connection accepted from 165.225.128.186:35706 #1390 (8 connections now open) m31000| Fri Feb 22 11:57:43.306 [conn1] going to kill op: op: 28358.0 m31000| Fri Feb 22 11:57:43.306 [conn1] going to kill op: op: 28356.0 m31000| Fri Feb 22 11:57:43.306 [conn1] going to kill op: op: 28360.0 m31000| Fri Feb 22 11:57:43.307 [conn1389] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.307 [conn1389] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|10 } } cursorid:469651332659099 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.307 [rsBackgroundSync] repl: old 
cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.307 [conn1389] end connection 165.225.128.186:37444 (7 connections now open) m31000| Fri Feb 22 11:57:43.308 [initandlisten] connection accepted from 165.225.128.186:60334 #1391 (8 connections now open) m31000| Fri Feb 22 11:57:43.312 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.312 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|29 } } cursorid:470427980376319 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.313 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.313 [conn1390] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.313 [conn1390] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|19 } } cursorid:470042539465428 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.313 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.314 [conn1390] end connection 165.225.128.186:35706 (7 connections now open) m31000| Fri Feb 22 11:57:43.314 [initandlisten] connection accepted from 165.225.128.186:47132 #1392 (8 connections now open) m31000| Fri Feb 22 11:57:43.407 [conn1] going to kill op: op: 28399.0 m31000| Fri Feb 22 11:57:43.407 [conn1] going to kill op: op: 28397.0 m31000| Fri Feb 22 11:57:43.410 [conn1391] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.410 [conn1391] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|29 } } cursorid:470433391022647 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.410 [rsBackgroundSync] repl: old cursor isDead, will initiate a new 
one m31000| Fri Feb 22 11:57:43.410 [conn1391] end connection 165.225.128.186:60334 (7 connections now open) m31000| Fri Feb 22 11:57:43.410 [initandlisten] connection accepted from 165.225.128.186:58665 #1393 (8 connections now open) m31000| Fri Feb 22 11:57:43.416 [conn1392] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.416 [conn1392] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|30 } } cursorid:470437123292897 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:43.416 [conn1392] ClientCursor::find(): cursor not found in map '470437123292897' (ok after a drop) m31002| Fri Feb 22 11:57:43.416 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.416 [conn1392] end connection 165.225.128.186:47132 (7 connections now open) m31000| Fri Feb 22 11:57:43.416 [initandlisten] connection accepted from 165.225.128.186:51686 #1394 (8 connections now open) m31000| Fri Feb 22 11:57:43.508 [conn1] going to kill op: op: 28434.0 m31000| Fri Feb 22 11:57:43.508 [conn1] going to kill op: op: 28435.0 m31000| Fri Feb 22 11:57:43.508 [conn1394] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.508 [conn1394] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|40 } } cursorid:470832320758220 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.508 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.508 [conn1394] end connection 165.225.128.186:51686 (7 connections now open) m31000| Fri Feb 22 11:57:43.509 [initandlisten] connection accepted from 165.225.128.186:48728 #1395 (8 connections now open) m31000| Fri Feb 22 11:57:43.513 [conn1393] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.513 
[conn1393] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|39 } } cursorid:470827810710282 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.513 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.513 [conn1393] end connection 165.225.128.186:58665 (7 connections now open) m31000| Fri Feb 22 11:57:43.513 [initandlisten] connection accepted from 165.225.128.186:56998 #1396 (8 connections now open) m31000| Fri Feb 22 11:57:43.608 [conn1] going to kill op: op: 28472.0 m31000| Fri Feb 22 11:57:43.609 [conn1] going to kill op: op: 28473.0 m31000| Fri Feb 22 11:57:43.611 [conn1395] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.611 [conn1395] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|49 } } cursorid:471224134442299 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.611 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.611 [conn1395] end connection 165.225.128.186:48728 (7 connections now open) m31000| Fri Feb 22 11:57:43.611 [initandlisten] connection accepted from 165.225.128.186:32771 #1397 (8 connections now open) m31000| Fri Feb 22 11:57:43.615 [conn1396] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.615 [conn1396] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|49 } } cursorid:471226937350040 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.615 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.615 [conn1396] end connection 165.225.128.186:56998 (7 connections now open) m31000| Fri Feb 22 11:57:43.616 [initandlisten] 
connection accepted from 165.225.128.186:58694 #1398 (8 connections now open) m31000| Fri Feb 22 11:57:43.709 [conn1] going to kill op: op: 28522.0 m31000| Fri Feb 22 11:57:43.709 [conn1] going to kill op: op: 28523.0 m31000| Fri Feb 22 11:57:43.709 [conn1] going to kill op: op: 28521.0 m31000| Fri Feb 22 11:57:43.713 [conn1397] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.713 [conn1397] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|59 } } cursorid:471619193680113 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.713 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.713 [conn1397] end connection 165.225.128.186:32771 (7 connections now open) m31000| Fri Feb 22 11:57:43.713 [initandlisten] connection accepted from 165.225.128.186:37423 #1399 (8 connections now open) m31000| Fri Feb 22 11:57:43.714 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.714 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|59 } } cursorid:471614260574370 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.714 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.718 [conn1398] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.718 [conn1398] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|59 } } cursorid:471623751933743 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:43.718 [conn1398] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:43.718 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one 
m31000| Fri Feb 22 11:57:43.718 [conn1398] end connection 165.225.128.186:58694 (7 connections now open) m31000| Fri Feb 22 11:57:43.718 [initandlisten] connection accepted from 165.225.128.186:49496 #1400 (8 connections now open) m31000| Fri Feb 22 11:57:43.810 [conn1] going to kill op: op: 28560.0 m31000| Fri Feb 22 11:57:43.810 [conn1] going to kill op: op: 28561.0 m31000| Fri Feb 22 11:57:43.815 [conn1399] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.815 [conn1399] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|69 } } cursorid:472013240107621 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.816 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.816 [conn1399] end connection 165.225.128.186:37423 (7 connections now open) m31000| Fri Feb 22 11:57:43.816 [initandlisten] connection accepted from 165.225.128.186:46712 #1401 (8 connections now open) m31000| Fri Feb 22 11:57:43.911 [conn1] going to kill op: op: 28594.0 m31000| Fri Feb 22 11:57:43.911 [conn1] going to kill op: op: 28595.0 m31000| Fri Feb 22 11:57:43.911 [conn1400] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:43.911 [conn1400] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|69 } } cursorid:472017730540408 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:43.911 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.912 [conn1400] end connection 165.225.128.186:49496 (7 connections now open) m31000| Fri Feb 22 11:57:43.912 [initandlisten] connection accepted from 165.225.128.186:45075 #1402 (8 connections now open) m31000| Fri Feb 22 11:57:43.918 [conn1401] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:57:43.918 [conn1401] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|79 } } cursorid:472409817989590 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:43.918 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:43.918 [conn1401] end connection 165.225.128.186:46712 (7 connections now open) m31000| Fri Feb 22 11:57:43.919 [initandlisten] connection accepted from 165.225.128.186:57432 #1403 (8 connections now open) m31000| Fri Feb 22 11:57:44.011 [conn1] going to kill op: op: 28632.0 m31000| Fri Feb 22 11:57:44.012 [conn1] going to kill op: op: 28633.0 m31000| Fri Feb 22 11:57:44.013 [conn1402] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.014 [conn1402] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|89 } } cursorid:472799462859914 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.014 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.014 [conn1402] end connection 165.225.128.186:45075 (7 connections now open) m31000| Fri Feb 22 11:57:44.014 [initandlisten] connection accepted from 165.225.128.186:62245 #1404 (8 connections now open) m31000| Fri Feb 22 11:57:44.021 [conn1403] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.021 [conn1403] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534263000|89 } } cursorid:472804366418890 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.021 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.021 [conn1403] end connection 165.225.128.186:57432 (7 connections now open) m31000| Fri Feb 22 11:57:44.021 
[initandlisten] connection accepted from 165.225.128.186:48821 #1405 (8 connections now open) m31000| Fri Feb 22 11:57:44.112 [conn1] going to kill op: op: 28671.0 m31000| Fri Feb 22 11:57:44.112 [conn1] going to kill op: op: 28669.0 m31000| Fri Feb 22 11:57:44.113 [conn1405] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.113 [conn1405] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|1 } } cursorid:473199138042613 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.114 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.114 [conn1405] end connection 165.225.128.186:48821 (7 connections now open) m31000| Fri Feb 22 11:57:44.114 [initandlisten] connection accepted from 165.225.128.186:35996 #1406 (8 connections now open) m31000| Fri Feb 22 11:57:44.116 [conn1404] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.116 [conn1404] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|1 } } cursorid:473194290850353 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.117 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.117 [conn1404] end connection 165.225.128.186:62245 (7 connections now open) m31000| Fri Feb 22 11:57:44.117 [initandlisten] connection accepted from 165.225.128.186:43434 #1407 (8 connections now open) m31000| Fri Feb 22 11:57:44.213 [conn1] going to kill op: op: 28711.0 m31000| Fri Feb 22 11:57:44.213 [conn1] going to kill op: op: 28710.0 m31000| Fri Feb 22 11:57:44.215 [conn1406] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.215 [conn1406] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|10 } } cursorid:473590583338730 ntoreturn:0 
keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:44.216 [conn1406] ClientCursor::find(): cursor not found in map '473590583338730' (ok after a drop) m31002| Fri Feb 22 11:57:44.216 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.216 [conn1406] end connection 165.225.128.186:35996 (7 connections now open) m31000| Fri Feb 22 11:57:44.216 [initandlisten] connection accepted from 165.225.128.186:53499 #1408 (8 connections now open) m31000| Fri Feb 22 11:57:44.218 [conn1407] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.218 [conn1407] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|11 } } cursorid:473593609463483 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:33 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.218 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.219 [conn1407] end connection 165.225.128.186:43434 (7 connections now open) m31000| Fri Feb 22 11:57:44.219 [initandlisten] connection accepted from 165.225.128.186:35596 #1409 (8 connections now open) m31000| Fri Feb 22 11:57:44.314 [conn1] going to kill op: op: 28752.0 m31000| Fri Feb 22 11:57:44.314 [conn1] going to kill op: op: 28750.0 m31000| Fri Feb 22 11:57:44.314 [conn1] going to kill op: op: 28749.0 m31000| Fri Feb 22 11:57:44.318 [conn1408] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.318 [conn1408] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|21 } } cursorid:473984818262238 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.318 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.318 [conn1408] end connection 165.225.128.186:53499 (7 
connections now open) m31000| Fri Feb 22 11:57:44.318 [initandlisten] connection accepted from 165.225.128.186:41869 #1410 (8 connections now open) m31000| Fri Feb 22 11:57:44.321 [conn1409] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.321 [conn1409] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|21 } } cursorid:473988721073496 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.321 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.321 [conn1409] end connection 165.225.128.186:35596 (7 connections now open) m31000| Fri Feb 22 11:57:44.321 [initandlisten] connection accepted from 165.225.128.186:60892 #1411 (8 connections now open) m31000| Fri Feb 22 11:57:44.323 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.323 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|31 } } cursorid:474376577942127 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.323 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.414 [conn1] going to kill op: op: 28791.0 m31000| Fri Feb 22 11:57:44.415 [conn1] going to kill op: op: 28790.0 m31000| Fri Feb 22 11:57:44.420 [conn1410] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.420 [conn1410] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|31 } } cursorid:474380615691327 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.421 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.421 [conn1410] end connection 165.225.128.186:41869 (7 connections now open) m31000| Fri Feb 
22 11:57:44.421 [initandlisten] connection accepted from 165.225.128.186:59943 #1412 (8 connections now open) m31000| Fri Feb 22 11:57:44.423 [conn1411] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.423 [conn1411] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|31 } } cursorid:474384576814631 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.423 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.423 [conn1411] end connection 165.225.128.186:60892 (7 connections now open) m31000| Fri Feb 22 11:57:44.423 [initandlisten] connection accepted from 165.225.128.186:62770 #1413 (8 connections now open) m31000| Fri Feb 22 11:57:44.515 [conn1] going to kill op: op: 28828.0 m31000| Fri Feb 22 11:57:44.515 [conn1] going to kill op: op: 28826.0 m31000| Fri Feb 22 11:57:44.523 [conn1412] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.523 [conn1412] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|41 } } cursorid:474776057036032 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:44.523 [conn1412] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:44.523 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.523 [conn1412] end connection 165.225.128.186:59943 (7 connections now open) m31000| Fri Feb 22 11:57:44.524 [initandlisten] connection accepted from 165.225.128.186:55096 #1414 (8 connections now open) m31000| Fri Feb 22 11:57:44.616 [conn1] going to kill op: op: 28862.0 m31000| Fri Feb 22 11:57:44.616 [conn1] going to kill op: op: 28860.0 m31000| Fri Feb 22 11:57:44.616 [conn1413] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:57:44.616 [conn1413] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|41 } } cursorid:474780002169402 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.617 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.617 [conn1413] end connection 165.225.128.186:62770 (7 connections now open) m31000| Fri Feb 22 11:57:44.617 [initandlisten] connection accepted from 165.225.128.186:45381 #1415 (8 connections now open) m31000| Fri Feb 22 11:57:44.626 [conn1414] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.626 [conn1414] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|51 } } cursorid:475169602597776 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.626 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.626 [conn1414] end connection 165.225.128.186:55096 (7 connections now open) m31000| Fri Feb 22 11:57:44.626 [initandlisten] connection accepted from 165.225.128.186:34276 #1416 (8 connections now open) m31000| Fri Feb 22 11:57:44.717 [conn1] going to kill op: op: 28897.0 m31000| Fri Feb 22 11:57:44.717 [conn1] going to kill op: op: 28898.0 m31000| Fri Feb 22 11:57:44.718 [conn1416] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.718 [conn1416] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|61 } } cursorid:475565752932319 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.718 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.718 [conn1416] end connection 165.225.128.186:34276 (7 connections now open) m31000| Fri Feb 22 11:57:44.718 
[initandlisten] connection accepted from 165.225.128.186:59239 #1417 (8 connections now open) m31000| Fri Feb 22 11:57:44.719 [conn1415] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.719 [conn1415] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|60 } } cursorid:475562069064653 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.719 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.719 [conn1415] end connection 165.225.128.186:45381 (7 connections now open) m31000| Fri Feb 22 11:57:44.719 [initandlisten] connection accepted from 165.225.128.186:57583 #1418 (8 connections now open) m31000| Fri Feb 22 11:57:44.817 [conn1] going to kill op: op: 28946.0 m31000| Fri Feb 22 11:57:44.818 [conn1] going to kill op: op: 28945.0 m31000| Fri Feb 22 11:57:44.818 [conn1] going to kill op: op: 28947.0 m31000| Fri Feb 22 11:57:44.820 [conn1417] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.820 [conn1417] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|70 } } cursorid:475957025886011 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:44.820 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.821 [conn1417] end connection 165.225.128.186:59239 (7 connections now open) m31000| Fri Feb 22 11:57:44.821 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.821 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|70 } } cursorid:475909694871118 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:42 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:44.821 [initandlisten] connection accepted from 165.225.128.186:47259 
#1419 (8 connections now open) m31000| Fri Feb 22 11:57:44.821 [conn1418] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.821 [conn1418] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|70 } } cursorid:475961020817885 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.821 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.822 [conn1418] end connection 165.225.128.186:57583 (7 connections now open) m31002| Fri Feb 22 11:57:44.822 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.822 [initandlisten] connection accepted from 165.225.128.186:63312 #1420 (8 connections now open) m31000| Fri Feb 22 11:57:44.918 [conn1] going to kill op: op: 28988.0 m31000| Fri Feb 22 11:57:44.919 [conn1] going to kill op: op: 28986.0 m31000| Fri Feb 22 11:57:44.923 [conn1419] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.923 [conn1419] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|80 } } cursorid:476393982967504 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:44.923 [conn1419] ClientCursor::find(): cursor not found in map '476393982967504' (ok after a drop) m31002| Fri Feb 22 11:57:44.923 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.923 [conn1419] end connection 165.225.128.186:47259 (7 connections now open) m31000| Fri Feb 22 11:57:44.924 [initandlisten] connection accepted from 165.225.128.186:33884 #1421 (8 connections now open) m31000| Fri Feb 22 11:57:44.924 [conn1420] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:44.924 [conn1420] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|80 } } 
cursorid:476398995654267 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:44.924 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:44.924 [conn1420] end connection 165.225.128.186:63312 (7 connections now open) m31000| Fri Feb 22 11:57:44.925 [initandlisten] connection accepted from 165.225.128.186:48687 #1422 (8 connections now open) m31000| Fri Feb 22 11:57:45.019 [conn1] going to kill op: op: 29026.0 m31000| Fri Feb 22 11:57:45.020 [conn1] going to kill op: op: 29025.0 m31000| Fri Feb 22 11:57:45.026 [conn1421] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:45.026 [conn1421] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|90 } } cursorid:476832137194230 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:45.026 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:45.026 [conn1422] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:45.026 [conn1422] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534264000|91 } } cursorid:476837377134048 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:45.026 [conn1421] end connection 165.225.128.186:33884 (7 connections now open) m31001| Fri Feb 22 11:57:45.026 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:45.026 [conn1422] end connection 165.225.128.186:48687 (6 connections now open) m31000| Fri Feb 22 11:57:45.027 [initandlisten] connection accepted from 165.225.128.186:47112 #1423 (7 connections now open) m31000| Fri Feb 22 11:57:45.027 [initandlisten] connection accepted from 165.225.128.186:62542 #1424 (8 connections now open) m31000| 
Fri Feb 22 11:57:45.120 [conn1] going to kill op: op: 29063.0 m31000| Fri Feb 22 11:57:45.120 [conn1] going to kill op: op: 29064.0 m31000| Fri Feb 22 11:57:45.129 [conn1423] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:45.129 [conn1424] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:45.129 [conn1423] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|2 } } cursorid:477274509458881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:45.129 [conn1424] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|2 } } cursorid:477275608722222 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:45.129 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:45.129 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:45.129 [conn1423] end connection 165.225.128.186:47112 (7 connections now open) m31000| Fri Feb 22 11:57:45.129 [conn1424] end connection 165.225.128.186:62542 (7 connections now open) m31000| Fri Feb 22 11:57:45.129 [initandlisten] connection accepted from 165.225.128.186:40180 #1425 (7 connections now open) m31000| Fri Feb 22 11:57:45.129 [initandlisten] connection accepted from 165.225.128.186:48846 #1426 (8 connections now open) m31000| Fri Feb 22 11:57:45.221 [conn1] going to kill op: op: 29100.0 m31000| Fri Feb 22 11:57:45.221 [conn1] going to kill op: op: 29101.0 m31000| Fri Feb 22 11:57:45.222 [conn1426] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:45.222 [conn1426] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|12 } } cursorid:477713631700987 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms 
m31002| Fri Feb 22 11:57:45.222 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.222 [conn1426] end connection 165.225.128.186:48846 (7 connections now open)
m31000| Fri Feb 22 11:57:45.222 [initandlisten] connection accepted from 165.225.128.186:52481 #1427 (8 connections now open)
m31000| Fri Feb 22 11:57:45.322 [conn1] going to kill op: op: 29134.0
m31000| Fri Feb 22 11:57:45.322 [conn1] going to kill op: op: 29135.0
m31000| Fri Feb 22 11:57:45.323 [conn1425] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.323 [conn1425] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|12 } } cursorid:477712106889263 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:45.323 [conn1425] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:45.323 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.323 [conn1425] end connection 165.225.128.186:40180 (7 connections now open)
m31000| Fri Feb 22 11:57:45.323 [initandlisten] connection accepted from 165.225.128.186:44876 #1428 (8 connections now open)
m31000| Fri Feb 22 11:57:45.324 [conn1427] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.324 [conn1427] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|21 } } cursorid:478104398214951 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.325 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.325 [conn1427] end connection 165.225.128.186:52481 (7 connections now open)
m31000| Fri Feb 22 11:57:45.325 [initandlisten] connection accepted from 165.225.128.186:43657 #1429 (8 connections now open)
m31000| Fri Feb 22 11:57:45.423 [conn1] going to kill op: op: 29183.0
m31000| Fri Feb 22 11:57:45.423 [conn1] going to kill op: op: 29182.0
m31000| Fri Feb 22 11:57:45.423 [conn1] going to kill op: op: 29184.0
m31000| Fri Feb 22 11:57:45.425 [conn1428] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.425 [conn1428] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|31 } } cursorid:478537271520515 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:45.426 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.426 [conn1428] end connection 165.225.128.186:44876 (7 connections now open)
m31000| Fri Feb 22 11:57:45.426 [initandlisten] connection accepted from 165.225.128.186:48741 #1430 (8 connections now open)
m31000| Fri Feb 22 11:57:45.426 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.426 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|31 } } cursorid:478537548006757 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:45.427 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.427 [conn1429] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.427 [conn1429] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|31 } } cursorid:478542009334553 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.427 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.427 [conn1429] end connection 165.225.128.186:43657 (7 connections now open)
m31000| Fri Feb 22 11:57:45.428 [initandlisten] connection accepted from 165.225.128.186:37881 #1431 (8 connections now open)
m31000| Fri Feb 22 11:57:45.524 [conn1] going to kill op: op: 29222.0
m31000| Fri Feb 22 11:57:45.524 [conn1] going to kill op: op: 29223.0
m31000| Fri Feb 22 11:57:45.528 [conn1430] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.528 [conn1430] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|41 } } cursorid:478975650142790 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:45.528 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.528 [conn1430] end connection 165.225.128.186:48741 (7 connections now open)
m31000| Fri Feb 22 11:57:45.529 [initandlisten] connection accepted from 165.225.128.186:42630 #1432 (8 connections now open)
m31000| Fri Feb 22 11:57:45.530 [conn1431] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.530 [conn1431] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|41 } } cursorid:478979873107569 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.530 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.530 [conn1431] end connection 165.225.128.186:37881 (7 connections now open)
m31000| Fri Feb 22 11:57:45.530 [initandlisten] connection accepted from 165.225.128.186:40652 #1433 (8 connections now open)
m31000| Fri Feb 22 11:57:45.624 [conn1] going to kill op: op: 29260.0
m31000| Fri Feb 22 11:57:45.625 [conn1] going to kill op: op: 29261.0
m31000| Fri Feb 22 11:57:45.630 [conn1432] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.630 [conn1432] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|51 } } cursorid:479414893859545 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:45.631 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.631 [conn1432] end connection 165.225.128.186:42630 (7 connections now open)
m31000| Fri Feb 22 11:57:45.631 [initandlisten] connection accepted from 165.225.128.186:47629 #1434 (8 connections now open)
m31000| Fri Feb 22 11:57:45.632 [conn1433] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.632 [conn1433] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|51 } } cursorid:479418531999100 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:45.632 [conn1433] ClientCursor::find(): cursor not found in map '479418531999100' (ok after a drop)
m31002| Fri Feb 22 11:57:45.632 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.632 [conn1433] end connection 165.225.128.186:40652 (7 connections now open)
m31000| Fri Feb 22 11:57:45.633 [initandlisten] connection accepted from 165.225.128.186:38664 #1435 (8 connections now open)
m31000| Fri Feb 22 11:57:45.725 [conn1] going to kill op: op: 29299.0
m31000| Fri Feb 22 11:57:45.725 [conn1] going to kill op: op: 29298.0
m31000| Fri Feb 22 11:57:45.733 [conn1434] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.733 [conn1434] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|61 } } cursorid:479852384909703 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:45.733 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.733 [conn1434] end connection 165.225.128.186:47629 (7 connections now open)
m31000| Fri Feb 22 11:57:45.733 [initandlisten] connection accepted from 165.225.128.186:58334 #1436 (8 connections now open)
m31000| Fri Feb 22 11:57:45.734 [conn1435] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.735 [conn1435] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|61 } } cursorid:479856085546263 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.735 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.735 [conn1435] end connection 165.225.128.186:38664 (7 connections now open)
m31000| Fri Feb 22 11:57:45.735 [initandlisten] connection accepted from 165.225.128.186:55858 #1437 (8 connections now open)
m31000| Fri Feb 22 11:57:45.826 [conn1] going to kill op: op: 29334.0
m31000| Fri Feb 22 11:57:45.826 [conn1] going to kill op: op: 29338.0
m31000| Fri Feb 22 11:57:45.826 [conn1] going to kill op: op: 29336.0
m31000| Fri Feb 22 11:57:45.827 [conn1437] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.827 [conn1437] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|71 } } cursorid:480293778401732 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.827 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.827 [conn1437] end connection 165.225.128.186:55858 (7 connections now open)
m31000| Fri Feb 22 11:57:45.827 [initandlisten] connection accepted from 165.225.128.186:53975 #1438 (8 connections now open)
m31000| Fri Feb 22 11:57:45.832 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.832 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|81 } } cursorid:480638841677962 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.832 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:45.840 [conn9] end connection 165.225.128.186:56023 (2 connections now open)
m31001| Fri Feb 22 11:57:45.840 [initandlisten] connection accepted from 165.225.128.186:41757 #11 (3 connections now open)
m31000| Fri Feb 22 11:57:45.927 [conn1] going to kill op: op: 29371.0
m31000| Fri Feb 22 11:57:45.927 [conn1] going to kill op: op: 29373.0
m31000| Fri Feb 22 11:57:45.928 [conn1436] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.928 [conn1436] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|71 } } cursorid:480291055014108 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:45.928 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.928 [conn1436] end connection 165.225.128.186:58334 (7 connections now open)
m31000| Fri Feb 22 11:57:45.928 [initandlisten] connection accepted from 165.225.128.186:54415 #1439 (8 connections now open)
m31000| Fri Feb 22 11:57:45.930 [conn1438] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:45.930 [conn1438] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|81 } } cursorid:480685758289069 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:45.930 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:45.930 [conn1438] end connection 165.225.128.186:53975 (7 connections now open)
m31000| Fri Feb 22 11:57:45.930 [initandlisten] connection accepted from 165.225.128.186:34300 #1440 (8 connections now open)
m31000| Fri Feb 22 11:57:46.028 [conn1] going to kill op: op: 29411.0
m31000| Fri Feb 22 11:57:46.028 [conn1] going to kill op: op: 29410.0
m31000| Fri Feb 22 11:57:46.031 [conn1439] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.031 [conn1439] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|90 } } cursorid:481118272289759 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:162 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.031 [conn1439] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:46.031 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.031 [conn1439] end connection 165.225.128.186:54415 (7 connections now open)
m31000| Fri Feb 22 11:57:46.031 [initandlisten] connection accepted from 165.225.128.186:59446 #1441 (8 connections now open)
m31000| Fri Feb 22 11:57:46.032 [conn1440] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.032 [conn1440] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534265000|91 } } cursorid:481123357313365 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:46.032 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.032 [conn1440] end connection 165.225.128.186:34300 (7 connections now open)
m31000| Fri Feb 22 11:57:46.033 [initandlisten] connection accepted from 165.225.128.186:63401 #1442 (8 connections now open)
m31000| Fri Feb 22 11:57:46.129 [conn1] going to kill op: op: 29449.0
m31000| Fri Feb 22 11:57:46.129 [conn1] going to kill op: op: 29450.0
m31000| Fri Feb 22 11:57:46.134 [conn1441] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.134 [conn1441] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|2 } } cursorid:481556972195924 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.134 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.134 [conn1441] end connection 165.225.128.186:59446 (7 connections now open)
m31000| Fri Feb 22 11:57:46.134 [initandlisten] connection accepted from 165.225.128.186:63442 #1443 (8 connections now open)
m31000| Fri Feb 22 11:57:46.135 [conn1442] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.135 [conn1442] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|2 } } cursorid:481562181905594 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:46.135 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.135 [conn1442] end connection 165.225.128.186:63401 (7 connections now open)
m31000| Fri Feb 22 11:57:46.136 [initandlisten] connection accepted from 165.225.128.186:59589 #1444 (8 connections now open)
m31000| Fri Feb 22 11:57:46.229 [conn1] going to kill op: op: 29490.0
m31000| Fri Feb 22 11:57:46.230 [conn1] going to kill op: op: 29489.0
m31000| Fri Feb 22 11:57:46.237 [conn1443] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.237 [conn1443] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|12 } } cursorid:481994948524403 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.237 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.237 [conn1443] end connection 165.225.128.186:63442 (7 connections now open)
m31000| Fri Feb 22 11:57:46.237 [initandlisten] connection accepted from 165.225.128.186:51516 #1445 (8 connections now open)
m31000| Fri Feb 22 11:57:46.238 [conn1444] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.238 [conn1444] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|12 } } cursorid:481999588759391 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:46.238 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.238 [conn1444] end connection 165.225.128.186:59589 (7 connections now open)
m31000| Fri Feb 22 11:57:46.238 [initandlisten] connection accepted from 165.225.128.186:38318 #1446 (8 connections now open)
m31000| Fri Feb 22 11:57:46.330 [conn1] going to kill op: op: 29528.0
m31000| Fri Feb 22 11:57:46.330 [conn1] going to kill op: op: 29526.0
m31000| Fri Feb 22 11:57:46.339 [conn1445] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.339 [conn1445] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|22 } } cursorid:482434105088981 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.339 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.339 [conn1445] end connection 165.225.128.186:51516 (7 connections now open)
m31000| Fri Feb 22 11:57:46.340 [initandlisten] connection accepted from 165.225.128.186:61103 #1447 (8 connections now open)
m31000| Fri Feb 22 11:57:46.431 [conn1] going to kill op: op: 29559.0
m31000| Fri Feb 22 11:57:46.431 [conn1] going to kill op: op: 29560.0
m31000| Fri Feb 22 11:57:46.432 [conn1447] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.432 [conn1447] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|32 } } cursorid:482872300332352 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.432 [conn1446] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.432 [conn1446] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|22 } } cursorid:482438437569459 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.432 [conn1446] ClientCursor::find(): cursor not found in map '482438437569459' (ok after a drop)
m31000| Fri Feb 22 11:57:46.432 [conn1447] end connection 165.225.128.186:61103 (7 connections now open)
m31002| Fri Feb 22 11:57:46.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.432 [conn1446] end connection 165.225.128.186:38318 (6 connections now open)
m31000| Fri Feb 22 11:57:46.432 [initandlisten] connection accepted from 165.225.128.186:41733 #1448 (7 connections now open)
m31000| Fri Feb 22 11:57:46.432 [initandlisten] connection accepted from 165.225.128.186:33527 #1449 (8 connections now open)
m31000| Fri Feb 22 11:57:46.532 [conn1] going to kill op: op: 29607.0
m31000| Fri Feb 22 11:57:46.532 [conn1] going to kill op: op: 29609.0
m31000| Fri Feb 22 11:57:46.532 [conn1] going to kill op: op: 29608.0
m31000| Fri Feb 22 11:57:46.535 [conn1448] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.535 [conn1448] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|41 } } cursorid:483267346910295 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.535 [conn1449] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.535 [conn1449] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|41 } } cursorid:483266306877521 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.535 [conn1448] end connection 165.225.128.186:41733 (7 connections now open)
m31002| Fri Feb 22 11:57:46.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.535 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.535 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|41 } } cursorid:483214734092588 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.535 [conn1449] end connection 165.225.128.186:33527 (6 connections now open)
m31000| Fri Feb 22 11:57:46.535 [initandlisten] connection accepted from 165.225.128.186:46638 #1450 (7 connections now open)
m31000| Fri Feb 22 11:57:46.536 [initandlisten] connection accepted from 165.225.128.186:57549 #1451 (8 connections now open)
m31001| Fri Feb 22 11:57:46.536 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.633 [conn1] going to kill op: op: 29650.0
m31000| Fri Feb 22 11:57:46.633 [conn1] going to kill op: op: 29649.0
m31000| Fri Feb 22 11:57:46.638 [conn1450] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.638 [conn1450] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|51 } } cursorid:483703533541687 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.638 [conn1451] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.638 [conn1451] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|51 } } cursorid:483705053925776 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:100 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.638 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.638 [conn1450] end connection 165.225.128.186:46638 (7 connections now open)
m31002| Fri Feb 22 11:57:46.639 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.639 [conn1451] end connection 165.225.128.186:57549 (6 connections now open)
m31000| Fri Feb 22 11:57:46.639 [initandlisten] connection accepted from 165.225.128.186:56836 #1452 (7 connections now open)
m31000| Fri Feb 22 11:57:46.639 [initandlisten] connection accepted from 165.225.128.186:57183 #1453 (8 connections now open)
m31000| Fri Feb 22 11:57:46.733 [conn1] going to kill op: op: 29687.0
m31000| Fri Feb 22 11:57:46.734 [conn1] going to kill op: op: 29688.0
m31000| Fri Feb 22 11:57:46.741 [conn1452] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.741 [conn1452] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|62 } } cursorid:484142252872865 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.741 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.741 [conn1453] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.742 [conn1452] end connection 165.225.128.186:56836 (7 connections now open)
m31000| Fri Feb 22 11:57:46.742 [conn1453] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|62 } } cursorid:484142776580518 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.742 [conn1453] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:57:46.742 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.742 [conn1453] end connection 165.225.128.186:57183 (6 connections now open)
m31000| Fri Feb 22 11:57:46.742 [initandlisten] connection accepted from 165.225.128.186:49623 #1454 (8 connections now open)
m31000| Fri Feb 22 11:57:46.742 [initandlisten] connection accepted from 165.225.128.186:64669 #1455 (8 connections now open)
m31000| Fri Feb 22 11:57:46.834 [conn1] going to kill op: op: 29722.0
m31000| Fri Feb 22 11:57:46.834 [conn1] going to kill op: op: 29723.0
m31000| Fri Feb 22 11:57:46.935 [conn1] going to kill op: op: 29764.0
m31000| Fri Feb 22 11:57:46.935 [conn1] going to kill op: op: 29762.0
m31000| Fri Feb 22 11:57:46.935 [conn1] going to kill op: op: 29763.0
m31000| Fri Feb 22 11:57:46.936 [conn1454] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.936 [conn1454] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|72 } } cursorid:484580916721834 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.936 [conn1455] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.936 [conn1455] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|72 } } cursorid:484581352964750 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:46.936 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.936 [conn1454] end connection 165.225.128.186:49623 (7 connections now open)
m31002| Fri Feb 22 11:57:46.936 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:46.936 [conn1455] end connection 165.225.128.186:64669 (6 connections now open)
m31000| Fri Feb 22 11:57:46.937 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:46.937 [initandlisten] connection accepted from 165.225.128.186:36902 #1456 (7 connections now open)
m31000| Fri Feb 22 11:57:46.937 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|81 } } cursorid:484966465098476 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:46.937 [initandlisten] connection accepted from 165.225.128.186:46406 #1457 (8 connections now open)
m31002| Fri Feb 22 11:57:46.938 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.036 [conn1] going to kill op: op: 29802.0
m31000| Fri Feb 22 11:57:47.036 [conn1] going to kill op: op: 29803.0
m31000| Fri Feb 22 11:57:47.040 [conn1456] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.040 [conn1456] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|91 } } cursorid:485405003927785 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:47.040 [conn1457] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.040 [conn1457] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534266000|91 } } cursorid:485404897065132 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:47.040 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:57:47.040 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.040 [conn1456] end connection 165.225.128.186:36902 (7 connections now open)
m31000| Fri Feb 22 11:57:47.040 [conn1457] end connection 165.225.128.186:46406 (7 connections now open)
m31000| Fri Feb 22 11:57:47.040 [initandlisten] connection accepted from 165.225.128.186:46591 #1458 (7 connections now open)
m31000| Fri Feb 22 11:57:47.040 [initandlisten] connection accepted from 165.225.128.186:63851 #1459 (8 connections now open)
m31000| Fri Feb 22 11:57:47.137 [conn1] going to kill op: op: 29843.0
m31000| Fri Feb 22 11:57:47.137 [conn1] going to kill op: op: 29842.0
m31000| Fri Feb 22 11:57:47.143 [conn1458] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.143 [conn1458] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|3 } } cursorid:485843762635153 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:47.143 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.143 [conn1458] end connection 165.225.128.186:46591 (7 connections now open)
m31000| Fri Feb 22 11:57:47.143 [conn1459] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.143 [conn1459] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|3 } } cursorid:485843406886584 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:47.143 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.143 [conn1459] end connection 165.225.128.186:63851 (6 connections now open)
m31000| Fri Feb 22 11:57:47.143 [initandlisten] connection accepted from 165.225.128.186:38909 #1460 (8 connections now open)
m31000| Fri Feb 22 11:57:47.144 [initandlisten] connection accepted from 165.225.128.186:47565 #1461 (8 connections now open)
m31000| Fri Feb 22 11:57:47.238 [conn1] going to kill op: op: 29880.0
m31000| Fri Feb 22 11:57:47.238 [conn1] going to kill op: op: 29881.0
m31000| Fri Feb 22 11:57:47.246 [conn1460] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.246 [conn1460] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|13 } } cursorid:486281262518371 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:47.246 [conn1460] ClientCursor::find(): cursor not found in map '486281262518371' (ok after a drop)
m31002| Fri Feb 22 11:57:47.246 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.246 [conn1460] end connection 165.225.128.186:38909 (7 connections now open)
m31000| Fri Feb 22 11:57:47.246 [conn1461] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.246 [conn1461] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|13 } } cursorid:486281374455748 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:47.246 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.246 [initandlisten] connection accepted from 165.225.128.186:59715 #1462 (8 connections now open)
m31000| Fri Feb 22 11:57:47.247 [conn1461] end connection 165.225.128.186:47565 (7 connections now open)
m31000| Fri Feb 22 11:57:47.247 [initandlisten] connection accepted from 165.225.128.186:38756 #1463 (8 connections now open)
m31000| Fri Feb 22 11:57:47.339 [conn1] going to kill op: op: 29916.0
m31000| Fri Feb 22 11:57:47.339 [conn1463] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.339 [conn1463] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|23 } } cursorid:486718731012816 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:47.339 [conn1] going to kill op: op: 29917.0
m31001| Fri Feb 22 11:57:47.339 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.339 [conn1463] end connection 165.225.128.186:38756 (7 connections now open)
m31000| Fri Feb 22 11:57:47.339 [initandlisten] connection accepted from 165.225.128.186:65175 #1464 (8 connections now open)
m31000| Fri Feb 22 11:57:47.439 [conn1] going to kill op: op: 29950.0
m31000| Fri Feb 22 11:57:47.440 [conn1] going to kill op: op: 29951.0
m31000| Fri Feb 22 11:57:47.440 [conn1462] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.440 [conn1462] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|23 } } cursorid:486720049049759 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:47.441 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.441 [conn1462] end connection 165.225.128.186:59715 (7 connections now open)
m31000| Fri Feb 22 11:57:47.441 [initandlisten] connection accepted from 165.225.128.186:62387 #1465 (8 connections now open)
m31000| Fri Feb 22 11:57:47.441 [conn1464] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.441 [conn1464] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|33 } } cursorid:487111364867305 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:47.442 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.442 [conn1464] end connection 165.225.128.186:65175 (7 connections now open)
m31000| Fri Feb 22 11:57:47.442 [initandlisten] connection accepted from 165.225.128.186:55202 #1466 (8 connections now open)
m31000| Fri Feb 22 11:57:47.540 [conn1] going to kill op: op: 29991.0
m31000| Fri Feb 22 11:57:47.540 [conn1] going to kill op: op: 29989.0
m31000| Fri Feb 22 11:57:47.541 [conn1] going to kill op: op: 29988.0
m31000| Fri Feb 22 11:57:47.543 [conn1465] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.543 [conn1465] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|43 } } cursorid:487544107681418 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:47.543 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.543 [conn1465] end connection 165.225.128.186:62387 (7 connections now open)
m31000| Fri Feb 22 11:57:47.544 [initandlisten] connection accepted from 165.225.128.186:33748 #1467 (8 connections now open)
m31000| Fri Feb 22 11:57:47.544 [conn1466] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.544 [conn1466] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|43 } } cursorid:487549365390950 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:47.544 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.544 [conn1466] end connection 165.225.128.186:55202 (7 connections now open)
m31000| Fri Feb 22 11:57:47.544 [initandlisten] connection accepted from 165.225.128.186:36435 #1468 (8 connections now open)
m31000| Fri Feb 22 11:57:47.547 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.547 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|53 } } cursorid:487935408664414 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:47.547 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:57:47.547 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.641 [conn1] going to kill op: op: 30030.0
m31000| Fri Feb 22 11:57:47.641 [conn1] going to kill op: op: 30029.0
m31000| Fri Feb 22 11:57:47.646 [conn1467] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.646 [conn1467] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|53 } } cursorid:487983390406676 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:47.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.646 [conn1467] end connection 165.225.128.186:33748 (7 connections now open)
m31000| Fri Feb 22 11:57:47.646 [conn1468] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.646 [conn1468] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|53 } } cursorid:487987196712497 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:57:47.646 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:57:47.646 [conn1468] end connection 165.225.128.186:36435 (6 connections now open)
m31000| Fri Feb 22 11:57:47.646 [initandlisten] connection accepted from 165.225.128.186:53292 #1469 (8 connections now open)
m31000| Fri Feb 22 11:57:47.647 [initandlisten] connection accepted from 165.225.128.186:63558 #1470 (8 connections now open)
m31000| Fri Feb 22 11:57:47.742 [conn1] going to kill op: op: 30068.0
m31000| Fri Feb 22 11:57:47.742 [conn1] going to kill op: op: 30067.0
m31000| Fri Feb 22 11:57:47.749 [conn1469] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.749 [conn1469] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|63 } } cursorid:488424476369447 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:57:47.749 [conn1470] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:57:47.749 [conn1470] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|63 } } cursorid:488424181896336 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:57:47.749 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:57:47.749 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000|
Fri Feb 22 11:57:47.749 [conn1469] end connection 165.225.128.186:53292 (7 connections now open) m31000| Fri Feb 22 11:57:47.749 [conn1470] end connection 165.225.128.186:63558 (7 connections now open) m31000| Fri Feb 22 11:57:47.749 [initandlisten] connection accepted from 165.225.128.186:51007 #1471 (7 connections now open) m31000| Fri Feb 22 11:57:47.749 [initandlisten] connection accepted from 165.225.128.186:59842 #1472 (8 connections now open) m31000| Fri Feb 22 11:57:47.843 [conn1] going to kill op: op: 30106.0 m31000| Fri Feb 22 11:57:47.843 [conn1] going to kill op: op: 30105.0 m31000| Fri Feb 22 11:57:47.851 [conn1471] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:47.851 [conn1471] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|73 } } cursorid:488863253729679 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:47.851 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:47.852 [conn1472] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:47.852 [conn1471] end connection 165.225.128.186:51007 (7 connections now open) m31000| Fri Feb 22 11:57:47.852 [conn1472] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|73 } } cursorid:488863055867089 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:47.852 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:47.852 [conn1472] end connection 165.225.128.186:59842 (6 connections now open) m31000| Fri Feb 22 11:57:47.852 [initandlisten] connection accepted from 165.225.128.186:46777 #1473 (8 connections now open) m31000| Fri Feb 22 11:57:47.852 [initandlisten] connection accepted from 165.225.128.186:54572 #1474 (8 connections now open) m31000| Fri Feb 22 11:57:47.943 
[conn1] going to kill op: op: 30141.0 m31000| Fri Feb 22 11:57:47.943 [conn1] going to kill op: op: 30140.0 m31000| Fri Feb 22 11:57:47.944 [conn1474] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:47.944 [conn1474] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|83 } } cursorid:489300733465934 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:47.944 [conn1473] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:47.944 [conn1473] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|83 } } cursorid:489300165681805 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:47.944 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:47.944 [conn1473] ClientCursor::find(): cursor not found in map '489300165681805' (ok after a drop) m31000| Fri Feb 22 11:57:47.944 [conn1474] end connection 165.225.128.186:54572 (7 connections now open) m31002| Fri Feb 22 11:57:47.944 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:47.944 [conn1473] end connection 165.225.128.186:46777 (6 connections now open) m31000| Fri Feb 22 11:57:47.944 [initandlisten] connection accepted from 165.225.128.186:54773 #1475 (7 connections now open) m31000| Fri Feb 22 11:57:47.944 [initandlisten] connection accepted from 165.225.128.186:49413 #1476 (8 connections now open) m31000| Fri Feb 22 11:57:48.044 [conn1] going to kill op: op: 30190.0 m31000| Fri Feb 22 11:57:48.044 [conn1] going to kill op: op: 30188.0 m31000| Fri Feb 22 11:57:48.044 [conn1] going to kill op: op: 30189.0 m31000| Fri Feb 22 11:57:48.046 [conn1475] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.046 [conn1475] getmore local.oplog.rs query: { ts: { $gte: 
Timestamp 1361534267000|92 } } cursorid:489696084345576 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:48.046 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.046 [conn1475] end connection 165.225.128.186:54773 (7 connections now open) m31000| Fri Feb 22 11:57:48.047 [initandlisten] connection accepted from 165.225.128.186:44016 #1477 (8 connections now open) m31000| Fri Feb 22 11:57:48.047 [conn1476] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.047 [conn1476] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|92 } } cursorid:489695221171148 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.047 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.047 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534267000|92 } } cursorid:489644703791246 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.047 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.047 [conn1476] end connection 165.225.128.186:49413 (7 connections now open) m31000| Fri Feb 22 11:57:48.047 [initandlisten] connection accepted from 165.225.128.186:61073 #1478 (8 connections now open) m31002| Fri Feb 22 11:57:48.048 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.055 [conn917] end connection 165.225.128.186:58824 (7 connections now open) m31000| Fri Feb 22 11:57:48.055 [initandlisten] connection accepted from 165.225.128.186:60474 #1479 (8 connections now open) m31000| Fri Feb 22 11:57:48.145 [conn1] going to kill op: op: 30233.0 m31000| Fri Feb 22 11:57:48.145 [conn1] going to kill 
op: op: 30232.0 m31000| Fri Feb 22 11:57:48.149 [conn1477] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.149 [conn1477] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|3 } } cursorid:490128784401377 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:48.149 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.149 [conn1477] end connection 165.225.128.186:44016 (7 connections now open) m31000| Fri Feb 22 11:57:48.149 [initandlisten] connection accepted from 165.225.128.186:58309 #1480 (8 connections now open) m31000| Fri Feb 22 11:57:48.149 [conn1478] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.149 [conn1478] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|3 } } cursorid:490133968775327 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.150 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.150 [conn1478] end connection 165.225.128.186:61073 (7 connections now open) m31000| Fri Feb 22 11:57:48.150 [initandlisten] connection accepted from 165.225.128.186:35056 #1481 (8 connections now open) m31000| Fri Feb 22 11:57:48.246 [conn1] going to kill op: op: 30273.0 m31000| Fri Feb 22 11:57:48.246 [conn1] going to kill op: op: 30272.0 m31000| Fri Feb 22 11:57:48.251 [conn1480] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.252 [conn1480] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|13 } } cursorid:490529699978851 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:48.252 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 
22 11:57:48.252 [conn1480] end connection 165.225.128.186:58309 (7 connections now open) m31000| Fri Feb 22 11:57:48.252 [conn1481] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.252 [conn1481] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|13 } } cursorid:490533900713969 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.252 [initandlisten] connection accepted from 165.225.128.186:45376 #1482 (8 connections now open) m31000| Fri Feb 22 11:57:48.252 [conn1481] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:48.252 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.252 [conn1481] end connection 165.225.128.186:35056 (7 connections now open) m31000| Fri Feb 22 11:57:48.253 [initandlisten] connection accepted from 165.225.128.186:55331 #1483 (8 connections now open) m31000| Fri Feb 22 11:57:48.346 [conn1] going to kill op: op: 30311.0 m31000| Fri Feb 22 11:57:48.347 [conn1] going to kill op: op: 30312.0 m31000| Fri Feb 22 11:57:48.355 [conn1482] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.355 [conn1482] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|24 } } cursorid:490971823446226 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:48.355 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.355 [conn1483] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.355 [conn1482] end connection 165.225.128.186:45376 (7 connections now open) m31000| Fri Feb 22 11:57:48.355 [conn1483] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|24 } } cursorid:490970933438092 ntoreturn:0 keyUpdates:0 exception: operation 
was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.355 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.355 [conn1483] end connection 165.225.128.186:55331 (6 connections now open) m31000| Fri Feb 22 11:57:48.356 [initandlisten] connection accepted from 165.225.128.186:36048 #1484 (7 connections now open) m31000| Fri Feb 22 11:57:48.356 [initandlisten] connection accepted from 165.225.128.186:60555 #1485 (8 connections now open) m31000| Fri Feb 22 11:57:48.447 [conn1] going to kill op: op: 30347.0 m31000| Fri Feb 22 11:57:48.448 [conn1] going to kill op: op: 30346.0 m31000| Fri Feb 22 11:57:48.448 [conn1485] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.448 [conn1485] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|34 } } cursorid:491408853971701 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.448 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.448 [conn1484] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.448 [conn1484] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|34 } } cursorid:491410576930162 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.448 [conn1485] end connection 165.225.128.186:60555 (7 connections now open) m31001| Fri Feb 22 11:57:48.449 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.449 [conn1484] end connection 165.225.128.186:36048 (6 connections now open) m31000| Fri Feb 22 11:57:48.449 [initandlisten] connection accepted from 165.225.128.186:48763 #1486 (7 connections now open) m31000| Fri Feb 22 11:57:48.449 [initandlisten] connection accepted from 
165.225.128.186:47487 #1487 (8 connections now open) m31000| Fri Feb 22 11:57:48.548 [conn1] going to kill op: op: 30384.0 m31000| Fri Feb 22 11:57:48.548 [conn1] going to kill op: op: 30385.0 m31000| Fri Feb 22 11:57:48.551 [conn1486] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.551 [conn1486] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|43 } } cursorid:491805315682504 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.551 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.551 [conn1487] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.551 [conn1487] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|43 } } cursorid:491805815084965 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.551 [conn1486] end connection 165.225.128.186:48763 (7 connections now open) m31001| Fri Feb 22 11:57:48.551 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.552 [conn1487] end connection 165.225.128.186:47487 (6 connections now open) m31000| Fri Feb 22 11:57:48.552 [initandlisten] connection accepted from 165.225.128.186:34450 #1488 (7 connections now open) m31000| Fri Feb 22 11:57:48.552 [initandlisten] connection accepted from 165.225.128.186:50454 #1489 (8 connections now open) m31000| Fri Feb 22 11:57:48.649 [conn1] going to kill op: op: 30434.0 m31000| Fri Feb 22 11:57:48.649 [conn1] going to kill op: op: 30433.0 m31000| Fri Feb 22 11:57:48.650 [conn1] going to kill op: op: 30432.0 m31000| Fri Feb 22 11:57:48.654 [conn1488] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.654 [conn1488] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|53 } } 
cursorid:492243249314818 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.654 [conn1489] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.654 [conn1489] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|53 } } cursorid:492243807599019 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.654 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.654 [conn1488] end connection 165.225.128.186:34450 (7 connections now open) m31000| Fri Feb 22 11:57:48.654 [conn1489] ClientCursor::find(): cursor not found in map '492243807599019' (ok after a drop) m31001| Fri Feb 22 11:57:48.654 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.654 [conn1489] end connection 165.225.128.186:50454 (6 connections now open) m31000| Fri Feb 22 11:57:48.654 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.654 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|53 } } cursorid:492191006150686 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.655 [initandlisten] connection accepted from 165.225.128.186:50531 #1490 (7 connections now open) m31000| Fri Feb 22 11:57:48.655 [initandlisten] connection accepted from 165.225.128.186:44151 #1491 (8 connections now open) m31001| Fri Feb 22 11:57:48.656 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.750 [conn1] going to kill op: op: 30472.0 m31000| Fri Feb 22 11:57:48.750 [conn1] going to kill op: op: 30473.0 m31000| Fri Feb 22 11:57:48.757 [conn1491] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:57:48.757 [conn1490] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.757 [conn1491] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|63 } } cursorid:492681810267668 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:102 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.757 [conn1490] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|63 } } cursorid:492680150383577 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.757 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:48.757 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.758 [conn1490] end connection 165.225.128.186:50531 (7 connections now open) m31000| Fri Feb 22 11:57:48.758 [conn1491] end connection 165.225.128.186:44151 (7 connections now open) m31000| Fri Feb 22 11:57:48.758 [initandlisten] connection accepted from 165.225.128.186:55469 #1492 (7 connections now open) m31000| Fri Feb 22 11:57:48.758 [initandlisten] connection accepted from 165.225.128.186:36860 #1493 (8 connections now open) m31000| Fri Feb 22 11:57:48.851 [conn1] going to kill op: op: 30514.0 m31000| Fri Feb 22 11:57:48.851 [conn1] going to kill op: op: 30513.0 m31000| Fri Feb 22 11:57:48.860 [conn1493] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.861 [conn1493] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|73 } } cursorid:493119671130902 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.861 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.861 [conn1492] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.861 [conn1493] end 
connection 165.225.128.186:36860 (7 connections now open) m31000| Fri Feb 22 11:57:48.861 [conn1492] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|73 } } cursorid:493119344185775 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:48.861 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.861 [conn1492] end connection 165.225.128.186:55469 (6 connections now open) m31000| Fri Feb 22 11:57:48.861 [initandlisten] connection accepted from 165.225.128.186:62851 #1494 (7 connections now open) m31000| Fri Feb 22 11:57:48.861 [initandlisten] connection accepted from 165.225.128.186:33732 #1495 (8 connections now open) m31000| Fri Feb 22 11:57:48.952 [conn1] going to kill op: op: 30548.0 m31000| Fri Feb 22 11:57:48.952 [conn1] going to kill op: op: 30549.0 m31000| Fri Feb 22 11:57:48.954 [conn1494] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.954 [conn1494] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|84 } } cursorid:493557573074023 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:48.954 [conn1495] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:48.954 [conn1495] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|84 } } cursorid:493558158949301 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:48.954 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.954 [conn1495] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31000| Fri Feb 22 11:57:48.954 [conn1494] end connection 165.225.128.186:62851 (7 connections now open) m31001| Fri Feb 22 11:57:48.954 [rsBackgroundSync] 
repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:48.954 [conn1495] end connection 165.225.128.186:33732 (6 connections now open) m31000| Fri Feb 22 11:57:48.954 [initandlisten] connection accepted from 165.225.128.186:61518 #1496 (7 connections now open) m31000| Fri Feb 22 11:57:48.954 [initandlisten] connection accepted from 165.225.128.186:33973 #1497 (8 connections now open) m31000| Fri Feb 22 11:57:49.053 [conn1] going to kill op: op: 30589.0 m31000| Fri Feb 22 11:57:49.053 [conn1] going to kill op: op: 30586.0 m31000| Fri Feb 22 11:57:49.053 [conn1] going to kill op: op: 30587.0 m31000| Fri Feb 22 11:57:49.057 [conn1496] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.057 [conn1496] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|93 } } cursorid:493952071261829 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:49.057 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.057 [conn1496] end connection 165.225.128.186:61518 (7 connections now open) m31000| Fri Feb 22 11:57:49.057 [conn1497] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.057 [conn1497] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534268000|93 } } cursorid:493952871154883 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:49.057 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.057 [initandlisten] connection accepted from 165.225.128.186:63306 #1498 (8 connections now open) m31000| Fri Feb 22 11:57:49.058 [conn1497] end connection 165.225.128.186:33973 (6 connections now open) m31000| Fri Feb 22 11:57:49.058 [initandlisten] connection accepted from 165.225.128.186:40322 #1499 (8 connections now open) m31000| 
Fri Feb 22 11:57:49.059 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.059 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|5 } } cursorid:494338296627214 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:49.059 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.154 [conn1] going to kill op: op: 30629.0 m31000| Fri Feb 22 11:57:49.154 [conn1] going to kill op: op: 30630.0 m31000| Fri Feb 22 11:57:49.160 [conn1498] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.160 [conn1498] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|5 } } cursorid:494389805480352 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:49.160 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.160 [conn1499] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.160 [conn1498] end connection 165.225.128.186:63306 (7 connections now open) m31000| Fri Feb 22 11:57:49.160 [conn1499] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|5 } } cursorid:494390115893485 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:49.160 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.160 [conn1499] end connection 165.225.128.186:40322 (6 connections now open) m31000| Fri Feb 22 11:57:49.161 [initandlisten] connection accepted from 165.225.128.186:63010 #1500 (7 connections now open) m31000| Fri Feb 22 11:57:49.161 [initandlisten] connection accepted from 165.225.128.186:39704 #1501 (8 connections now open) m31000| Fri Feb 22 11:57:49.255 [conn1] going 
to kill op: op: 30667.0 m31000| Fri Feb 22 11:57:49.256 [conn1] going to kill op: op: 30668.0 m31000| Fri Feb 22 11:57:49.263 [conn1500] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.263 [conn1500] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|15 } } cursorid:494828113216986 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:49.263 [conn1501] { $err: "operation was interrupted", code: 11601 } m31002| Fri Feb 22 11:57:49.263 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.263 [conn1501] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|15 } } cursorid:494828486026033 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:49.263 [conn1500] end connection 165.225.128.186:63010 (7 connections now open) m31001| Fri Feb 22 11:57:49.263 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:49.263 [conn1501] end connection 165.225.128.186:39704 (6 connections now open) m31000| Fri Feb 22 11:57:49.263 [initandlisten] connection accepted from 165.225.128.186:48235 #1502 (7 connections now open) m31000| Fri Feb 22 11:57:49.263 [initandlisten] connection accepted from 165.225.128.186:56133 #1503 (8 connections now open) m31000| Fri Feb 22 11:57:49.357 [conn1] going to kill op: op: 30705.0 m31000| Fri Feb 22 11:57:49.357 [conn1] going to kill op: op: 30706.0 m31000| Fri Feb 22 11:57:49.366 [conn1502] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:49.366 [conn1502] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|25 } } cursorid:495265944003882 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:49.366 
[conn1503] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.366 [conn1502] ClientCursor::find(): cursor not found in map '495265944003882' (ok after a drop)
 m31000| Fri Feb 22 11:57:49.366 [conn1503] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|25 } } cursorid:495266059880410 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.366 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.366 [conn1502] end connection 165.225.128.186:48235 (7 connections now open)
 m31001| Fri Feb 22 11:57:49.366 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.366 [conn1503] end connection 165.225.128.186:56133 (6 connections now open)
 m31000| Fri Feb 22 11:57:49.366 [initandlisten] connection accepted from 165.225.128.186:58119 #1504 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.366 [initandlisten] connection accepted from 165.225.128.186:42819 #1505 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.458 [conn1] going to kill op: op: 30743.0
 m31000| Fri Feb 22 11:57:49.458 [conn1] going to kill op: op: 30744.0
 m31000| Fri Feb 22 11:57:49.459 [conn1504] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.459 [conn1504] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|35 } } cursorid:495704988077138 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:49.459 [conn1505] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.459 [conn1505] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|35 } } cursorid:495704831611976 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.459 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:49.459 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.459 [conn1504] end connection 165.225.128.186:58119 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.459 [conn1505] end connection 165.225.128.186:42819 (6 connections now open)
 m31000| Fri Feb 22 11:57:49.459 [initandlisten] connection accepted from 165.225.128.186:46085 #1506 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.459 [initandlisten] connection accepted from 165.225.128.186:53329 #1507 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.559 [conn1] going to kill op: op: 30782.0
 m31000| Fri Feb 22 11:57:49.559 [conn1] going to kill op: op: 30781.0
 m31000| Fri Feb 22 11:57:49.562 [conn1506] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.562 [conn1506] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|45 } } cursorid:496099478651304 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.562 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.562 [conn1507] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.562 [conn1506] end connection 165.225.128.186:46085 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.562 [conn1507] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|45 } } cursorid:496100284750928 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:49.562 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.562 [conn1507] end connection 165.225.128.186:53329 (6 connections now open)
 m31000| Fri Feb 22 11:57:49.562 [initandlisten] connection accepted from 165.225.128.186:35513 #1508 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.562 [initandlisten] connection accepted from 165.225.128.186:51904 #1509 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.660 [conn1] going to kill op: op: 30822.0
 m31000| Fri Feb 22 11:57:49.660 [conn1] going to kill op: op: 30819.0
 m31000| Fri Feb 22 11:57:49.660 [conn1] going to kill op: op: 30820.0
 m31000| Fri Feb 22 11:57:49.665 [conn1508] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.665 [conn1508] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|55 } } cursorid:496537408956419 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.665 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.665 [conn1508] end connection 165.225.128.186:35513 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.665 [conn1509] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.665 [conn1509] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|55 } } cursorid:496537234579899 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:49.665 [conn1509] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:57:49.665 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.665 [initandlisten] connection accepted from 165.225.128.186:53891 #1510 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.665 [conn1509] end connection 165.225.128.186:51904 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.665 [initandlisten] connection accepted from 165.225.128.186:47808 #1511 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.666 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.666 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|65 } } cursorid:496924251154710 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:49.666 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.761 [conn1] going to kill op: op: 30860.0
 m31000| Fri Feb 22 11:57:49.761 [conn1] going to kill op: op: 30861.0
 m31000| Fri Feb 22 11:57:49.767 [conn1510] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.767 [conn1510] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|65 } } cursorid:496975554807061 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:49.767 [conn1511] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.767 [conn1511] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|65 } } cursorid:496976749992523 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.767 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.768 [conn1510] end connection 165.225.128.186:53891 (7 connections now open)
 m31001| Fri Feb 22 11:57:49.768 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.768 [conn1511] end connection 165.225.128.186:47808 (6 connections now open)
 m31000| Fri Feb 22 11:57:49.768 [initandlisten] connection accepted from 165.225.128.186:54702 #1512 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.768 [initandlisten] connection accepted from 165.225.128.186:44211 #1513 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.861 [conn1] going to kill op: op: 30899.0
 m31000| Fri Feb 22 11:57:49.862 [conn1] going to kill op: op: 30898.0
 m31000| Fri Feb 22 11:57:49.870 [conn1512] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.870 [conn1513] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.870 [conn1512] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|75 } } cursorid:497414805554041 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:49.870 [conn1513] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|75 } } cursorid:497415142004743 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.870 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:49.870 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.870 [conn1512] end connection 165.225.128.186:54702 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.870 [conn1513] end connection 165.225.128.186:44211 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.871 [initandlisten] connection accepted from 165.225.128.186:41484 #1514 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.871 [initandlisten] connection accepted from 165.225.128.186:46549 #1515 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.962 [conn1] going to kill op: op: 30934.0
 m31000| Fri Feb 22 11:57:49.962 [conn1] going to kill op: op: 30933.0
 m31000| Fri Feb 22 11:57:49.963 [conn1515] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.963 [conn1515] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|85 } } cursorid:497852883352288 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:49.963 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.963 [conn1515] end connection 165.225.128.186:46549 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.963 [conn1514] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:49.963 [conn1514] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|85 } } cursorid:497852816323857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:91 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:49.963 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:49.963 [initandlisten] connection accepted from 165.225.128.186:46468 #1516 (8 connections now open)
 m31000| Fri Feb 22 11:57:49.963 [conn1514] end connection 165.225.128.186:41484 (7 connections now open)
 m31000| Fri Feb 22 11:57:49.964 [initandlisten] connection accepted from 165.225.128.186:65285 #1517 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.063 [conn1] going to kill op: op: 30972.0
 m31000| Fri Feb 22 11:57:50.063 [conn1] going to kill op: op: 30975.0
 m31000| Fri Feb 22 11:57:50.066 [conn1516] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.066 [conn1516] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|94 } } cursorid:498247812165666 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.066 [conn1516] ClientCursor::find(): cursor not found in map '498247812165666' (ok after a drop)
 m31000| Fri Feb 22 11:57:50.066 [conn1516] getMore: cursorid not found local.oplog.rs 498247812165666
 m31002| Fri Feb 22 11:57:50.066 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.066 [conn1516] end connection 165.225.128.186:46468 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.066 [conn1517] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.066 [conn1517] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534269000|94 } } cursorid:498247966165066 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:50.067 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.067 [initandlisten] connection accepted from 165.225.128.186:32879 #1518 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.067 [conn1517] end connection 165.225.128.186:65285 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.067 [initandlisten] connection accepted from 165.225.128.186:46306 #1519 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.164 [conn1] going to kill op: op: 31026.0
 m31000| Fri Feb 22 11:57:50.164 [conn1] going to kill op: op: 31027.0
 m31000| Fri Feb 22 11:57:50.164 [conn1] going to kill op: op: 31025.0
 m31000| Fri Feb 22 11:57:50.169 [conn1518] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.169 [conn1519] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.169 [conn1518] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|5 } } cursorid:498684537361023 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.169 [conn1519] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|6 } } cursorid:498686382140972 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:50.169 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:50.169 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.169 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.169 [conn1519] end connection 165.225.128.186:46306 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.169 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|5 } } cursorid:498634457511970 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.169 [conn1518] end connection 165.225.128.186:32879 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.169 [initandlisten] connection accepted from 165.225.128.186:34634 #1520 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.170 [initandlisten] connection accepted from 165.225.128.186:54677 #1521 (8 connections now open)
 m31002| Fri Feb 22 11:57:50.170 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.265 [conn1] going to kill op: op: 31066.0
 m31000| Fri Feb 22 11:57:50.265 [conn1] going to kill op: op: 31065.0
 m31000| Fri Feb 22 11:57:50.272 [conn1521] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.272 [conn1521] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|16 } } cursorid:499123698037571 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.272 [conn1520] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.272 [conn1520] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|16 } } cursorid:499124569972208 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:50.272 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.272 [conn1521] end connection 165.225.128.186:54677 (7 connections now open)
 m31001| Fri Feb 22 11:57:50.272 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.272 [conn1520] end connection 165.225.128.186:34634 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.272 [initandlisten] connection accepted from 165.225.128.186:39785 #1522 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.272 [initandlisten] connection accepted from 165.225.128.186:37269 #1523 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.366 [conn1] going to kill op: op: 31105.0
 m31000| Fri Feb 22 11:57:50.366 [conn1] going to kill op: op: 31104.0
 m31000| Fri Feb 22 11:57:50.375 [conn1523] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.375 [conn1523] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|26 } } cursorid:499560557057289 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.375 [conn1522] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.375 [conn1523] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31000| Fri Feb 22 11:57:50.375 [conn1522] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|26 } } cursorid:499561418093403 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:50.375 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.375 [conn1523] end connection 165.225.128.186:37269 (7 connections now open)
 m31002| Fri Feb 22 11:57:50.375 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.375 [conn1522] end connection 165.225.128.186:39785 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.375 [initandlisten] connection accepted from 165.225.128.186:58517 #1524 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.375 [initandlisten] connection accepted from 165.225.128.186:47187 #1525 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.467 [conn1] going to kill op: op: 31139.0
 m31000| Fri Feb 22 11:57:50.467 [conn1] going to kill op: op: 31140.0
 m31000| Fri Feb 22 11:57:50.467 [conn1524] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.467 [conn1524] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|36 } } cursorid:499998964154852 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.468 [conn1525] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.468 [conn1525] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|36 } } cursorid:500000391664753 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:50.468 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:50.468 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.468 [conn1524] end connection 165.225.128.186:58517 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.468 [conn1525] end connection 165.225.128.186:47187 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.468 [initandlisten] connection accepted from 165.225.128.186:54898 #1526 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.468 [initandlisten] connection accepted from 165.225.128.186:36496 #1527 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.567 [conn1] going to kill op: op: 31178.0
 m31000| Fri Feb 22 11:57:50.568 [conn1] going to kill op: op: 31177.0
 m31000| Fri Feb 22 11:57:50.570 [conn1527] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.570 [conn1527] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|45 } } cursorid:500395107045878 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:50.570 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.571 [conn1526] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.571 [conn1526] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|45 } } cursorid:500394469137955 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.571 [conn1527] end connection 165.225.128.186:36496 (7 connections now open)
 m31001| Fri Feb 22 11:57:50.571 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.571 [conn1526] end connection 165.225.128.186:54898 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.571 [initandlisten] connection accepted from 165.225.128.186:44001 #1528 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.571 [initandlisten] connection accepted from 165.225.128.186:53844 #1529 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.668 [conn1] going to kill op: op: 31215.0
 m31000| Fri Feb 22 11:57:50.668 [conn1] going to kill op: op: 31216.0
 m31000| Fri Feb 22 11:57:50.674 [conn1528] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.674 [conn1529] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.675 [conn1528] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|55 } } cursorid:500832611959825 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.675 [conn1529] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|55 } } cursorid:500833899374892 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:50.675 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:50.675 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.675 [conn1528] end connection 165.225.128.186:44001 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.675 [conn1529] end connection 165.225.128.186:53844 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.675 [initandlisten] connection accepted from 165.225.128.186:56902 #1530 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.675 [initandlisten] connection accepted from 165.225.128.186:56460 #1531 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.769 [conn1] going to kill op: op: 31265.0
 m31000| Fri Feb 22 11:57:50.769 [conn1] going to kill op: op: 31267.0
 m31000| Fri Feb 22 11:57:50.770 [conn1] going to kill op: op: 31266.0
 m31000| Fri Feb 22 11:57:50.778 [conn1530] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.778 [conn1530] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|65 } } cursorid:501271524243458 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.778 [conn1530] ClientCursor::find(): cursor not found in map '501271524243458' (ok after a drop)
 m31002| Fri Feb 22 11:57:50.778 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.778 [conn1531] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.778 [conn1530] end connection 165.225.128.186:56902 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.778 [conn1531] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|65 } } cursorid:501270384366301 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:50.778 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.778 [conn1531] end connection 165.225.128.186:56460 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.778 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.778 [initandlisten] connection accepted from 165.225.128.186:48696 #1532 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.778 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|65 } } cursorid:501220221476476 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.779 [initandlisten] connection accepted from 165.225.128.186:53718 #1533 (8 connections now open)
 m31001| Fri Feb 22 11:57:50.780 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.870 [conn1] going to kill op: op: 31302.0
 m31000| Fri Feb 22 11:57:50.870 [conn1] going to kill op: op: 31303.0
 m31000| Fri Feb 22 11:57:50.871 [conn1532] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.871 [conn1532] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|76 } } cursorid:501708474139728 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.871 [conn1533] { $err: "operation was interrupted", code: 11601 }
 m31002| Fri Feb 22 11:57:50.871 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.871 [conn1533] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|76 } } cursorid:501708108098098 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.871 [conn1532] end connection 165.225.128.186:48696 (7 connections now open)
 m31001| Fri Feb 22 11:57:50.871 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.871 [conn1533] end connection 165.225.128.186:53718 (6 connections now open)
 m31000| Fri Feb 22 11:57:50.871 [initandlisten] connection accepted from 165.225.128.186:42376 #1534 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.872 [initandlisten] connection accepted from 165.225.128.186:58877 #1535 (8 connections now open)
 m31000| Fri Feb 22 11:57:50.971 [conn1] going to kill op: op: 31340.0
 m31000| Fri Feb 22 11:57:50.971 [conn1] going to kill op: op: 31341.0
 m31000| Fri Feb 22 11:57:50.974 [conn1534] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.974 [conn1534] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|85 } } cursorid:502104325570872 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:50.974 [conn1535] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:50.974 [conn1535] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|85 } } cursorid:502104274682057 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:50.974 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:50.974 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:50.975 [conn1534] end connection 165.225.128.186:42376 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.975 [conn1535] end connection 165.225.128.186:58877 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.975 [initandlisten] connection accepted from 165.225.128.186:42538 #1536 (7 connections now open)
 m31000| Fri Feb 22 11:57:50.975 [initandlisten] connection accepted from 165.225.128.186:62709 #1537 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.072 [conn1] going to kill op: op: 31379.0
 m31000| Fri Feb 22 11:57:51.072 [conn1] going to kill op: op: 31378.0
 m31000| Fri Feb 22 11:57:51.078 [conn1536] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.078 [conn1536] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|95 } } cursorid:502542886478690 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.078 [conn1537] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.078 [conn1537] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534270000|95 } } cursorid:502542475301949 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:106 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.078 [conn1536] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:57:51.078 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:51.078 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.078 [conn1536] end connection 165.225.128.186:42538 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.078 [conn1537] end connection 165.225.128.186:62709 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.079 [initandlisten] connection accepted from 165.225.128.186:65038 #1538 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.079 [initandlisten] connection accepted from 165.225.128.186:39501 #1539 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.173 [conn1] going to kill op: op: 31421.0
 m31000| Fri Feb 22 11:57:51.173 [conn1] going to kill op: op: 31420.0
 m31000| Fri Feb 22 11:57:51.173 [conn1] going to kill op: op: 31419.0
 m31000| Fri Feb 22 11:57:51.181 [conn1539] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.181 [conn1538] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.181 [conn1539] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|7 } } cursorid:502980864276567 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.181 [conn1538] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|7 } } cursorid:502979477167115 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:51.182 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:51.182 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.182 [conn1538] end connection 165.225.128.186:65038 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.182 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.182 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|16 } } cursorid:503367055206642 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.182 [conn1539] end connection 165.225.128.186:39501 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.182 [initandlisten] connection accepted from 165.225.128.186:49538 #1540 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.182 [initandlisten] connection accepted from 165.225.128.186:32976 #1541 (8 connections now open)
 m31002| Fri Feb 22 11:57:51.183 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.274 [conn1] going to kill op: op: 31459.0
 m31000| Fri Feb 22 11:57:51.274 [conn1] going to kill op: op: 31460.0
 m31000| Fri Feb 22 11:57:51.274 [conn1540] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.274 [conn1540] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|17 } } cursorid:503418938326103 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:51.274 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.274 [conn1540] end connection 165.225.128.186:49538 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.274 [conn1541] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.274 [conn1541] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|17 } } cursorid:503418597510024 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:51.275 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.275 [conn1541] end connection 165.225.128.186:32976 (6 connections now open)
 m31000| Fri Feb 22 11:57:51.275 [initandlisten] connection accepted from 165.225.128.186:58463 #1542 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.275 [initandlisten] connection accepted from 165.225.128.186:45661 #1543 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.375 [conn1] going to kill op: op: 31498.0
 m31000| Fri Feb 22 11:57:51.375 [conn1] going to kill op: op: 31497.0
 m31000| Fri Feb 22 11:57:51.377 [conn1543] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.377 [conn1543] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|27 } } cursorid:503814687549625 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:51.377 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.377 [conn1542] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.377 [conn1542] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|27 } } cursorid:503813358653751 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.377 [conn1543] end connection 165.225.128.186:45661 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.377 [conn1542] ClientCursor::find(): cursor not found in map '503813358653751' (ok after a drop)
 m31001| Fri Feb 22 11:57:51.377 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.377 [conn1542] end connection 165.225.128.186:58463 (6 connections now open)
 m31000| Fri Feb 22 11:57:51.378 [initandlisten] connection accepted from 165.225.128.186:63173 #1544 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.378 [initandlisten] connection accepted from 165.225.128.186:42363 #1545 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.475 [conn1] going to kill op: op: 31535.0
 m31000| Fri Feb 22 11:57:51.476 [conn1] going to kill op: op: 31536.0
 m31000| Fri Feb 22 11:57:51.479 [conn1544] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.479 [conn1544] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|37 } } cursorid:504250799301294 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:51.479 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.480 [conn1544] end connection 165.225.128.186:63173 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.480 [conn1545] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.480 [conn1545] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|37 } } cursorid:504252522711148 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.480 [initandlisten] connection accepted from 165.225.128.186:43873 #1546 (8 connections now open)
 m31001| Fri Feb 22 11:57:51.480 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.480 [conn1545] end connection 165.225.128.186:42363 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.480 [initandlisten] connection accepted from 165.225.128.186:56600 #1547 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.576 [conn1] going to kill op: op: 31573.0
 m31000| Fri Feb 22 11:57:51.576 [conn1] going to kill op: op: 31574.0
 m31000| Fri Feb 22 11:57:51.582 [conn1547] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.582 [conn1547] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|47 } } cursorid:504690341389644 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:51.582 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.582 [conn1547] end connection 165.225.128.186:56600 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.582 [conn1546] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.583 [conn1546] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|47 } } cursorid:504690594265405 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.583 [initandlisten] connection accepted from 165.225.128.186:58604 #1548 (8 connections now open)
 m31002| Fri Feb 22 11:57:51.583 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.583 [conn1546] end connection 165.225.128.186:43873 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.583 [initandlisten] connection accepted from 165.225.128.186:57765 #1549 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.677 [conn1] going to kill op: op: 31611.0
 m31000| Fri Feb 22 11:57:51.677 [conn1] going to kill op: op: 31612.0
 m31000| Fri Feb 22 11:57:51.685 [conn1548] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.685 [conn1548] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|57 } } cursorid:505128443683756 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.685 [conn1549] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.685 [conn1549] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|57 } } cursorid:505126997906497 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:51.685 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:51.685 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.685 [conn1548] end connection 165.225.128.186:58604 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.685 [conn1549] end connection 165.225.128.186:57765 (6 connections now open)
 m31000| Fri Feb 22 11:57:51.685 [initandlisten] connection accepted from 165.225.128.186:40999 #1550 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.685 [initandlisten] connection accepted from 165.225.128.186:47341 #1551 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.778 [conn1] going to kill op: op: 31646.0
 m31000| Fri Feb 22 11:57:51.778 [conn1] going to kill op: op: 31647.0
 m31000| Fri Feb 22 11:57:51.778 [conn1550] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.778 [conn1551] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.778 [conn1550] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|67 } } cursorid:505565621706000 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:86 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.778 [conn1551] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|67 } } cursorid:505565753711479 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.778 [conn1550] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:51.778 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:51.778 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.778 [conn1550] end connection 165.225.128.186:40999 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.778 [conn1551] end connection 165.225.128.186:47341 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.778 [initandlisten] connection accepted from 165.225.128.186:48087 #1552 (7 connections now open)
 m31000| Fri Feb 22 11:57:51.779 [initandlisten] connection accepted from 165.225.128.186:43670 #1553 (8 connections now open)
 m31000| Fri Feb 22 11:57:51.878 [conn1] going to kill op: op: 31696.0
 m31000| Fri Feb 22 11:57:51.879 [conn1] going to kill op: op: 31698.0
 m31000| Fri Feb 22 11:57:51.879 [conn1] going to kill op: op: 31697.0
 m31000| Fri Feb 22 11:57:51.881 [conn1553] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.881 [conn1553] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|76 } } cursorid:505961427928643 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:51.881 [conn1552] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.882 [conn1552] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|76 } } cursorid:505961304198115 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:51.882 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.882 [conn1553] end connection 165.225.128.186:43670 (7 connections now open)
 m31002| Fri Feb 22 11:57:51.882 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:51.882 [conn1552] end connection 165.225.128.186:48087 (6 connections now open)
 m31000| Fri Feb 22 11:57:51.882 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:51.882 [conn8]
getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|77 } } cursorid:505961862650592 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:51.882 [initandlisten] connection accepted from 165.225.128.186:50075 #1554 (7 connections now open) m31000| Fri Feb 22 11:57:51.882 [initandlisten] connection accepted from 165.225.128.186:50505 #1555 (8 connections now open) m31001| Fri Feb 22 11:57:51.883 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:51.979 [conn1] going to kill op: op: 31737.0 m31000| Fri Feb 22 11:57:51.979 [conn1] going to kill op: op: 31738.0 m31000| Fri Feb 22 11:57:51.984 [conn1555] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:51.984 [conn1554] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:51.984 [conn1555] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|86 } } cursorid:506399001009270 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:51.984 [conn1554] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|86 } } cursorid:506399234957624 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:51.984 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:51.985 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:51.985 [conn1555] end connection 165.225.128.186:50505 (7 connections now open) m31000| Fri Feb 22 11:57:51.985 [conn1554] end connection 165.225.128.186:50075 (7 connections now open) m31000| Fri Feb 22 11:57:51.985 [initandlisten] connection accepted from 165.225.128.186:36091 #1556 (7 connections now open) m31000| Fri Feb 22 11:57:51.985 
[initandlisten] connection accepted from 165.225.128.186:34262 #1557 (8 connections now open) m31000| Fri Feb 22 11:57:52.080 [conn1] going to kill op: op: 31777.0 m31000| Fri Feb 22 11:57:52.080 [conn1] going to kill op: op: 31776.0 m31000| Fri Feb 22 11:57:52.087 [conn1557] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.087 [conn1556] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.087 [conn1557] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|97 } } cursorid:506836812173939 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.087 [conn1556] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534271000|97 } } cursorid:506838008775652 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.088 [conn1557] ClientCursor::find(): cursor not found in map '506836812173939' (ok after a drop) m31002| Fri Feb 22 11:57:52.088 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:52.088 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.088 [conn1556] end connection 165.225.128.186:36091 (7 connections now open) m31000| Fri Feb 22 11:57:52.088 [conn1557] end connection 165.225.128.186:34262 (7 connections now open) m31000| Fri Feb 22 11:57:52.088 [initandlisten] connection accepted from 165.225.128.186:41524 #1558 (7 connections now open) m31000| Fri Feb 22 11:57:52.088 [initandlisten] connection accepted from 165.225.128.186:58304 #1559 (8 connections now open) m31000| Fri Feb 22 11:57:52.181 [conn1] going to kill op: op: 31816.0 m31000| Fri Feb 22 11:57:52.181 [conn1] going to kill op: op: 31817.0 m31000| Fri Feb 22 11:57:52.190 [conn1559] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.190 
[conn1559] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|8 } } cursorid:507276151812720 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.191 [conn1558] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.191 [conn1558] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|8 } } cursorid:507276236566716 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.191 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.191 [conn1559] end connection 165.225.128.186:58304 (7 connections now open) m31001| Fri Feb 22 11:57:52.191 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.191 [conn1558] end connection 165.225.128.186:41524 (6 connections now open) m31000| Fri Feb 22 11:57:52.191 [initandlisten] connection accepted from 165.225.128.186:64475 #1560 (7 connections now open) m31000| Fri Feb 22 11:57:52.191 [initandlisten] connection accepted from 165.225.128.186:51789 #1561 (8 connections now open) m31000| Fri Feb 22 11:57:52.282 [conn1] going to kill op: op: 31864.0 m31000| Fri Feb 22 11:57:52.282 [conn1] going to kill op: op: 31863.0 m31000| Fri Feb 22 11:57:52.282 [conn1] going to kill op: op: 31862.0 m31000| Fri Feb 22 11:57:52.283 [conn1561] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.283 [conn1561] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|18 } } cursorid:507713679179994 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:52.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.283 [conn1561] end connection 165.225.128.186:51789 (7 
connections now open) m31000| Fri Feb 22 11:57:52.283 [conn1560] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.283 [conn1560] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|18 } } cursorid:507714207154199 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.283 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.283 [initandlisten] connection accepted from 165.225.128.186:61299 #1562 (8 connections now open) m31000| Fri Feb 22 11:57:52.283 [conn1560] end connection 165.225.128.186:64475 (6 connections now open) m31000| Fri Feb 22 11:57:52.284 [initandlisten] connection accepted from 165.225.128.186:56554 #1563 (8 connections now open) m31000| Fri Feb 22 11:57:52.285 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.285 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|18 } } cursorid:507662935818857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:41 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.285 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.383 [conn1] going to kill op: op: 31903.0 m31000| Fri Feb 22 11:57:52.383 [conn1] going to kill op: op: 31902.0 m31000| Fri Feb 22 11:57:52.385 [conn1562] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.385 [conn1562] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|27 } } cursorid:508108934598209 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:52.385 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.385 [conn1562] end connection 165.225.128.186:61299 (7 connections now open) m31000| Fri 
Feb 22 11:57:52.386 [conn1563] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.386 [initandlisten] connection accepted from 165.225.128.186:55952 #1564 (8 connections now open) m31000| Fri Feb 22 11:57:52.386 [conn1563] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|27 } } cursorid:508108491447058 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.386 [conn1563] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:52.386 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.386 [conn1563] end connection 165.225.128.186:56554 (7 connections now open) m31000| Fri Feb 22 11:57:52.386 [initandlisten] connection accepted from 165.225.128.186:44806 #1565 (8 connections now open) m31000| Fri Feb 22 11:57:52.483 [conn1] going to kill op: op: 31941.0 m31000| Fri Feb 22 11:57:52.484 [conn1] going to kill op: op: 31940.0 m31000| Fri Feb 22 11:57:52.488 [conn1564] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.488 [conn1565] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.488 [conn1564] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|37 } } cursorid:508543320138182 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.488 [conn1565] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|37 } } cursorid:508547069037064 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:52.488 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:52.488 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.488 
[conn1564] end connection 165.225.128.186:55952 (7 connections now open) m31000| Fri Feb 22 11:57:52.488 [conn1565] end connection 165.225.128.186:44806 (7 connections now open) m31000| Fri Feb 22 11:57:52.489 [initandlisten] connection accepted from 165.225.128.186:65297 #1566 (7 connections now open) m31000| Fri Feb 22 11:57:52.489 [initandlisten] connection accepted from 165.225.128.186:33300 #1567 (8 connections now open) m31000| Fri Feb 22 11:57:52.584 [conn1] going to kill op: op: 31979.0 m31000| Fri Feb 22 11:57:52.584 [conn1] going to kill op: op: 31978.0 m31000| Fri Feb 22 11:57:52.591 [conn1567] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.591 [conn1567] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|47 } } cursorid:508985708083182 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.591 [conn1566] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.591 [conn1566] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|47 } } cursorid:508985222714337 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.591 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.591 [conn1567] end connection 165.225.128.186:33300 (7 connections now open) m31001| Fri Feb 22 11:57:52.591 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.591 [conn1566] end connection 165.225.128.186:65297 (6 connections now open) m31000| Fri Feb 22 11:57:52.591 [initandlisten] connection accepted from 165.225.128.186:34545 #1568 (7 connections now open) m31000| Fri Feb 22 11:57:52.593 [initandlisten] connection accepted from 165.225.128.186:60729 #1569 (8 connections now open) m31000| Fri Feb 22 11:57:52.685 [conn1] going to kill 
op: op: 32017.0 m31000| Fri Feb 22 11:57:52.685 [conn1] going to kill op: op: 32018.0 m31000| Fri Feb 22 11:57:52.693 [conn1568] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.693 [conn1568] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|57 } } cursorid:509418918169875 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.693 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.694 [conn1568] end connection 165.225.128.186:34545 (7 connections now open) m31000| Fri Feb 22 11:57:52.694 [initandlisten] connection accepted from 165.225.128.186:59301 #1570 (8 connections now open) m31000| Fri Feb 22 11:57:52.694 [conn1569] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.694 [conn1569] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|57 } } cursorid:509422215843537 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:52.694 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.694 [conn1569] end connection 165.225.128.186:60729 (7 connections now open) m31000| Fri Feb 22 11:57:52.695 [initandlisten] connection accepted from 165.225.128.186:41480 #1571 (8 connections now open) m31000| Fri Feb 22 11:57:52.786 [conn1] going to kill op: op: 32053.0 m31000| Fri Feb 22 11:57:52.786 [conn1] going to kill op: op: 32052.0 m31000| Fri Feb 22 11:57:52.786 [conn1570] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.786 [conn1570] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|68 } } cursorid:509856912726343 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.786 
[rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.786 [conn1570] end connection 165.225.128.186:59301 (7 connections now open) m31000| Fri Feb 22 11:57:52.786 [conn1571] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.786 [conn1571] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|68 } } cursorid:509861931920421 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.786 [conn1571] ClientCursor::find(): cursor not found in map '509861931920421' (ok after a drop) m31001| Fri Feb 22 11:57:52.786 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.786 [conn1571] end connection 165.225.128.186:41480 (6 connections now open) m31000| Fri Feb 22 11:57:52.787 [initandlisten] connection accepted from 165.225.128.186:52185 #1572 (7 connections now open) m31000| Fri Feb 22 11:57:52.787 [initandlisten] connection accepted from 165.225.128.186:55421 #1573 (8 connections now open) m31000| Fri Feb 22 11:57:52.886 [conn1] going to kill op: op: 32093.0 m31000| Fri Feb 22 11:57:52.887 [conn1] going to kill op: op: 32090.0 m31000| Fri Feb 22 11:57:52.887 [conn1] going to kill op: op: 32091.0 m31000| Fri Feb 22 11:57:52.888 [conn1572] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.888 [conn1572] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|77 } } cursorid:510255930295211 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.889 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.889 [conn1572] end connection 165.225.128.186:52185 (7 connections now open) m31000| Fri Feb 22 11:57:52.889 [conn1573] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:57:52.889 [conn1573] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|77 } } cursorid:510257135073370 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:52.889 [initandlisten] connection accepted from 165.225.128.186:38041 #1574 (8 connections now open) m31001| Fri Feb 22 11:57:52.889 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.889 [conn1573] end connection 165.225.128.186:55421 (7 connections now open) m31000| Fri Feb 22 11:57:52.889 [initandlisten] connection accepted from 165.225.128.186:45913 #1575 (8 connections now open) m31000| Fri Feb 22 11:57:52.894 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.894 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|87 } } cursorid:510641995911429 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:52.894 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.987 [conn1] going to kill op: op: 32132.0 m31000| Fri Feb 22 11:57:52.988 [conn1] going to kill op: op: 32131.0 m31000| Fri Feb 22 11:57:52.991 [conn1574] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.992 [conn1574] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|87 } } cursorid:510689932095255 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:131 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:52.992 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.992 [conn1574] end connection 165.225.128.186:38041 (7 connections now open) m31000| Fri Feb 22 11:57:52.992 [conn1575] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:52.992 [conn1575] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|87 } } cursorid:510694205686464 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:52.992 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:52.992 [conn1575] end connection 165.225.128.186:45913 (6 connections now open) m31000| Fri Feb 22 11:57:52.992 [initandlisten] connection accepted from 165.225.128.186:63110 #1576 (7 connections now open) m31000| Fri Feb 22 11:57:52.993 [initandlisten] connection accepted from 165.225.128.186:65061 #1577 (8 connections now open) m31000| Fri Feb 22 11:57:53.088 [conn1] going to kill op: op: 32170.0 m31000| Fri Feb 22 11:57:53.088 [conn1] going to kill op: op: 32169.0 m31000| Fri Feb 22 11:57:53.095 [conn1577] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.095 [conn1577] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|97 } } cursorid:511132045651402 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:53.095 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.095 [conn1576] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.095 [conn1576] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534272000|97 } } cursorid:511131271361571 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:53.095 [conn1577] end connection 165.225.128.186:65061 (7 connections now open) m31000| Fri Feb 22 11:57:53.095 [conn1576] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:53.095 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.095 [conn1576] end connection 
165.225.128.186:63110 (6 connections now open) m31000| Fri Feb 22 11:57:53.096 [initandlisten] connection accepted from 165.225.128.186:48585 #1578 (7 connections now open) m31000| Fri Feb 22 11:57:53.096 [initandlisten] connection accepted from 165.225.128.186:54012 #1579 (8 connections now open) m31000| Fri Feb 22 11:57:53.189 [conn1] going to kill op: op: 32210.0 m31000| Fri Feb 22 11:57:53.189 [conn1] going to kill op: op: 32209.0 m31000| Fri Feb 22 11:57:53.198 [conn1579] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.198 [conn1579] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|8 } } cursorid:511569483217155 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:53.198 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.198 [conn1579] end connection 165.225.128.186:54012 (7 connections now open) m31000| Fri Feb 22 11:57:53.198 [conn1578] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.198 [conn1578] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|8 } } cursorid:511571236785036 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:35 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:53.198 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.198 [conn1578] end connection 165.225.128.186:48585 (6 connections now open) m31000| Fri Feb 22 11:57:53.198 [initandlisten] connection accepted from 165.225.128.186:37656 #1580 (7 connections now open) m31000| Fri Feb 22 11:57:53.199 [initandlisten] connection accepted from 165.225.128.186:64566 #1581 (8 connections now open) m31000| Fri Feb 22 11:57:53.290 [conn1] going to kill op: op: 32244.0 m31000| Fri Feb 22 11:57:53.290 [conn1580] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:57:53.290 [conn1580] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|18 } } cursorid:512007543664745 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:53.290 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.290 [conn1] going to kill op: op: 32245.0 m31000| Fri Feb 22 11:57:53.290 [conn1580] end connection 165.225.128.186:37656 (7 connections now open) m31000| Fri Feb 22 11:57:53.290 [initandlisten] connection accepted from 165.225.128.186:64411 #1582 (8 connections now open) m31000| Fri Feb 22 11:57:53.291 [conn1581] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.291 [conn1581] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|18 } } cursorid:512007816694587 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:53.291 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.291 [conn1581] end connection 165.225.128.186:64566 (7 connections now open) m31000| Fri Feb 22 11:57:53.291 [initandlisten] connection accepted from 165.225.128.186:36116 #1583 (8 connections now open) m31000| Fri Feb 22 11:57:53.391 [conn1] going to kill op: op: 32294.0 m31000| Fri Feb 22 11:57:53.391 [conn1] going to kill op: op: 32293.0 m31000| Fri Feb 22 11:57:53.391 [conn1] going to kill op: op: 32292.0 m31000| Fri Feb 22 11:57:53.392 [conn1582] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.392 [conn1582] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|27 } } cursorid:512399785534816 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:53.392 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one 
m31000| Fri Feb 22 11:57:53.392 [conn1582] end connection 165.225.128.186:64411 (7 connections now open) m31000| Fri Feb 22 11:57:53.392 [initandlisten] connection accepted from 165.225.128.186:61663 #1584 (8 connections now open) m31000| Fri Feb 22 11:57:53.394 [conn1583] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.394 [conn1583] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|27 } } cursorid:512404446110213 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:53.394 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.394 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.394 [conn1583] end connection 165.225.128.186:36116 (7 connections now open) m31000| Fri Feb 22 11:57:53.394 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|27 } } cursorid:512351419334906 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:40 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:53.394 [initandlisten] connection accepted from 165.225.128.186:61050 #1585 (8 connections now open) m31002| Fri Feb 22 11:57:53.395 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:53.492 [conn1] going to kill op: op: 32332.0 m31000| Fri Feb 22 11:57:53.492 [conn1] going to kill op: op: 32334.0 m31000| Fri Feb 22 11:57:53.495 [conn1584] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:53.495 [conn1584] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|37 } } cursorid:512836754049464 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:53.495 [conn1584] ClientCursor::find(): cursor not found in map '512836754049464' (ok after a drop) m31001| 
Fri Feb 22 11:57:53.495 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.495 [conn1584] end connection 165.225.128.186:61663 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.495 [initandlisten] connection accepted from 165.225.128.186:62983 #1586 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.496 [conn1585] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.496 [conn1585] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|37 } } cursorid:512841701886359 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:53.496 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.496 [conn1585] end connection 165.225.128.186:61050 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.496 [initandlisten] connection accepted from 165.225.128.186:50185 #1587 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.592 [conn1] going to kill op: op: 32372.0
 m31000| Fri Feb 22 11:57:53.593 [conn1] going to kill op: op: 32373.0
 m31000| Fri Feb 22 11:57:53.598 [conn1586] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.598 [conn1586] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|47 } } cursorid:513276307828415 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:53.598 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.598 [conn1586] end connection 165.225.128.186:62983 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.598 [initandlisten] connection accepted from 165.225.128.186:64865 #1588 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.598 [conn1587] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.598 [conn1587] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|48 } } cursorid:513280160617807 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:53.599 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.599 [conn1587] end connection 165.225.128.186:50185 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.599 [initandlisten] connection accepted from 165.225.128.186:54331 #1589 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.693 [conn1] going to kill op: op: 32410.0
 m31000| Fri Feb 22 11:57:53.693 [conn1] going to kill op: op: 32411.0
 m31000| Fri Feb 22 11:57:53.701 [conn1588] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.701 [conn1588] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|58 } } cursorid:513714502249954 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:53.701 [conn1589] { $err: "operation was interrupted", code: 11601 }
 m31001| Fri Feb 22 11:57:53.701 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.701 [conn1589] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|58 } } cursorid:513717668276420 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:53.701 [conn1588] end connection 165.225.128.186:64865 (7 connections now open)
 m31002| Fri Feb 22 11:57:53.701 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.701 [conn1589] end connection 165.225.128.186:54331 (6 connections now open)
 m31000| Fri Feb 22 11:57:53.701 [initandlisten] connection accepted from 165.225.128.186:60661 #1590 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.702 [initandlisten] connection accepted from 165.225.128.186:57571 #1591 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.794 [conn1] going to kill op: op: 32448.0
 m31000| Fri Feb 22 11:57:53.794 [conn1] going to kill op: op: 32449.0
 m31000| Fri Feb 22 11:57:53.804 [conn1591] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.804 [conn1590] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.804 [conn1591] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|68 } } cursorid:514156560127174 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:53.804 [conn1590] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|68 } } cursorid:514156331794948 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:53.804 [conn1590] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:53.804 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:53.804 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.804 [conn1591] end connection 165.225.128.186:57571 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.804 [conn1590] end connection 165.225.128.186:60661 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.805 [initandlisten] connection accepted from 165.225.128.186:53876 #1592 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.805 [initandlisten] connection accepted from 165.225.128.186:51572 #1593 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.895 [conn1] going to kill op: op: 32484.0
 m31000| Fri Feb 22 11:57:53.895 [conn1] going to kill op: op: 32483.0
 m31000| Fri Feb 22 11:57:53.897 [conn1593] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.897 [conn1593] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|78 } } cursorid:514593799911462 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:53.897 [conn1592] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:53.897 [conn1592] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|78 } } cursorid:514594498372460 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:53.897 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:53.897 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:53.897 [conn1593] end connection 165.225.128.186:51572 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.897 [conn1592] end connection 165.225.128.186:53876 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.897 [initandlisten] connection accepted from 165.225.128.186:63390 #1594 (7 connections now open)
 m31000| Fri Feb 22 11:57:53.898 [initandlisten] connection accepted from 165.225.128.186:59345 #1595 (8 connections now open)
 m31000| Fri Feb 22 11:57:53.996 [conn1] going to kill op: op: 32533.0
 m31000| Fri Feb 22 11:57:53.996 [conn1] going to kill op: op: 32531.0
 m31000| Fri Feb 22 11:57:53.996 [conn1] going to kill op: op: 32532.0
 m31000| Fri Feb 22 11:57:54.000 [conn1594] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.000 [conn1594] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|87 } } cursorid:514988139462830 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.000 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.000 [conn1595] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.000 [conn1595] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|87 } } cursorid:514988104544218 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.000 [conn1594] end connection 165.225.128.186:63390 (7 connections now open)
 m31002| Fri Feb 22 11:57:54.000 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.000 [conn1595] end connection 165.225.128.186:59345 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.000 [initandlisten] connection accepted from 165.225.128.186:52639 #1596 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.000 [initandlisten] connection accepted from 165.225.128.186:41305 #1597 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.001 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.001 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|87 } } cursorid:514937939829774 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:89 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.001 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.097 [conn1] going to kill op: op: 32572.0
 m31000| Fri Feb 22 11:57:54.097 [conn1] going to kill op: op: 32573.0
 m31000| Fri Feb 22 11:57:54.103 [conn1597] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.103 [conn1597] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|97 } } cursorid:515428103511623 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:54.103 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.103 [conn1597] end connection 165.225.128.186:41305 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.103 [conn1596] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.103 [conn1596] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534273000|97 } } cursorid:515426799459151 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.103 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.103 [conn1596] end connection 165.225.128.186:52639 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.103 [initandlisten] connection accepted from 165.225.128.186:39283 #1598 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.104 [initandlisten] connection accepted from 165.225.128.186:58392 #1599 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.198 [conn1] going to kill op: op: 32615.0
 m31000| Fri Feb 22 11:57:54.198 [conn1] going to kill op: op: 32616.0
 m31000| Fri Feb 22 11:57:54.207 [conn1598] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.207 [conn1599] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.207 [conn1598] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|9 } } cursorid:515866028882411 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.207 [conn1599] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|9 } } cursorid:515865161245530 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.207 [conn1598] ClientCursor::find(): cursor not found in map '515866028882411' (ok after a drop)
 m31002| Fri Feb 22 11:57:54.207 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:54.207 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.207 [conn1599] end connection 165.225.128.186:58392 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.207 [conn1598] end connection 165.225.128.186:39283 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.207 [initandlisten] connection accepted from 165.225.128.186:54747 #1600 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.207 [initandlisten] connection accepted from 165.225.128.186:44178 #1601 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.299 [conn1] going to kill op: op: 32651.0
 m31000| Fri Feb 22 11:57:54.299 [conn1] going to kill op: op: 32652.0
 m31000| Fri Feb 22 11:57:54.299 [conn1600] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.299 [conn1600] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|20 } } cursorid:516304320299921 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.300 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.300 [conn1601] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.300 [conn1601] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|20 } } cursorid:516302910369380 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.300 [conn1600] end connection 165.225.128.186:54747 (7 connections now open)
 m31002| Fri Feb 22 11:57:54.300 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.300 [conn1601] end connection 165.225.128.186:44178 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.300 [initandlisten] connection accepted from 165.225.128.186:39113 #1602 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.300 [initandlisten] connection accepted from 165.225.128.186:62269 #1603 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.400 [conn1] going to kill op: op: 32692.0
 m31000| Fri Feb 22 11:57:54.400 [conn1] going to kill op: op: 32689.0
 m31000| Fri Feb 22 11:57:54.400 [conn1] going to kill op: op: 32690.0
 m31000| Fri Feb 22 11:57:54.402 [conn1602] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.402 [conn1602] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|29 } } cursorid:516698268289554 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.402 [conn1603] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.402 [conn1603] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|29 } } cursorid:516699324695686 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.403 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:54.403 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.403 [conn1602] end connection 165.225.128.186:39113 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.403 [conn1603] end connection 165.225.128.186:62269 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.403 [initandlisten] connection accepted from 165.225.128.186:47736 #1604 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.403 [initandlisten] connection accepted from 165.225.128.186:48504 #1605 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.405 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.405 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|39 } } cursorid:517085545013709 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:54.405 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.501 [conn1] going to kill op: op: 32730.0
 m31000| Fri Feb 22 11:57:54.501 [conn1] going to kill op: op: 32731.0
 m31000| Fri Feb 22 11:57:54.505 [conn1605] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.505 [conn1605] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|39 } } cursorid:517137414181577 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.505 [conn1604] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.505 [conn1604] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|39 } } cursorid:517135688909250 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.505 [conn1605] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:54.506 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:54.506 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.506 [conn1605] end connection 165.225.128.186:48504 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.506 [conn1604] end connection 165.225.128.186:47736 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.506 [initandlisten] connection accepted from 165.225.128.186:52056 #1606 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.506 [initandlisten] connection accepted from 165.225.128.186:60618 #1607 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.601 [conn1] going to kill op: op: 32768.0
 m31000| Fri Feb 22 11:57:54.602 [conn1] going to kill op: op: 32769.0
 m31000| Fri Feb 22 11:57:54.608 [conn1606] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.608 [conn1606] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|49 } } cursorid:517574904896276 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:54.608 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.608 [conn1606] end connection 165.225.128.186:52056 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.609 [conn1607] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.609 [conn1607] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|49 } } cursorid:517575243247822 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.609 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.609 [conn1607] end connection 165.225.128.186:60618 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.609 [initandlisten] connection accepted from 165.225.128.186:43120 #1608 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.609 [initandlisten] connection accepted from 165.225.128.186:39775 #1609 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.702 [conn1] going to kill op: op: 32806.0
 m31000| Fri Feb 22 11:57:54.702 [conn1] going to kill op: op: 32807.0
 m31000| Fri Feb 22 11:57:54.711 [conn1608] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.711 [conn1608] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|59 } } cursorid:518012955398990 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.711 [conn1609] { $err: "operation was interrupted", code: 11601 }
 m31002| Fri Feb 22 11:57:54.711 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.711 [conn1609] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|59 } } cursorid:518013108922529 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.711 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.711 [conn1608] end connection 165.225.128.186:43120 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.712 [conn1609] end connection 165.225.128.186:39775 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.712 [initandlisten] connection accepted from 165.225.128.186:49013 #1610 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.712 [initandlisten] connection accepted from 165.225.128.186:42400 #1611 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.803 [conn1] going to kill op: op: 32844.0
 m31000| Fri Feb 22 11:57:54.803 [conn1] going to kill op: op: 32842.0
 m31000| Fri Feb 22 11:57:54.805 [conn1611] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.805 [conn1611] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|69 } } cursorid:518451483134263 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:54.805 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.805 [conn1611] end connection 165.225.128.186:42400 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.805 [conn1610] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.805 [conn1610] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|69 } } cursorid:518450791918948 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:54.805 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.805 [conn1610] end connection 165.225.128.186:49013 (6 connections now open)
 m31000| Fri Feb 22 11:57:54.805 [initandlisten] connection accepted from 165.225.128.186:59723 #1612 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.805 [initandlisten] connection accepted from 165.225.128.186:57466 #1613 (8 connections now open)
 m31000| Fri Feb 22 11:57:54.904 [conn1] going to kill op: op: 32881.0
 m31000| Fri Feb 22 11:57:54.904 [conn1] going to kill op: op: 32882.0
 m31000| Fri Feb 22 11:57:54.908 [conn1612] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.908 [conn1612] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|78 } } cursorid:518845982718235 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.908 [conn1613] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:54.908 [conn1613] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|79 } } cursorid:518847078003489 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:54.908 [conn1612] ClientCursor::find(): cursor not found in map '518845982718235' (ok after a drop)
 m31001| Fri Feb 22 11:57:54.908 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:54.908 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:54.908 [conn1612] end connection 165.225.128.186:59723 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.908 [conn1613] end connection 165.225.128.186:57466 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.908 [initandlisten] connection accepted from 165.225.128.186:50551 #1614 (7 connections now open)
 m31000| Fri Feb 22 11:57:54.908 [initandlisten] connection accepted from 165.225.128.186:59335 #1615 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.005 [conn1] going to kill op: op: 32919.0
 m31000| Fri Feb 22 11:57:55.005 [conn1] going to kill op: op: 32922.0
 m31000| Fri Feb 22 11:57:55.005 [conn1] going to kill op: op: 32920.0
 m31000| Fri Feb 22 11:57:55.011 [conn1614] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.011 [conn1615] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.011 [conn1614] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|89 } } cursorid:519283142120576 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:86 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.011 [conn1615] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|89 } } cursorid:519283251517054 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:55.011 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:55.011 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.011 [conn1614] end connection 165.225.128.186:50551 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.011 [conn1615] end connection 165.225.128.186:59335 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.012 [initandlisten] connection accepted from 165.225.128.186:52057 #1616 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.012 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.012 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|99 } } cursorid:519671079828086 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.012 [initandlisten] connection accepted from 165.225.128.186:40282 #1617 (8 connections now open)
 m31001| Fri Feb 22 11:57:55.013 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.106 [conn1] going to kill op: op: 32960.0
 m31000| Fri Feb 22 11:57:55.106 [conn1] going to kill op: op: 32961.0
 m31000| Fri Feb 22 11:57:55.114 [conn1616] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.114 [conn1616] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|99 } } cursorid:519722401660765 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:55.114 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.114 [conn1617] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.114 [conn1617] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534274000|99 } } cursorid:519721592605340 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.114 [conn1616] end connection 165.225.128.186:52057 (7 connections now open)
 m31001| Fri Feb 22 11:57:55.115 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.115 [conn1617] end connection 165.225.128.186:40282 (6 connections now open)
 m31000| Fri Feb 22 11:57:55.115 [initandlisten] connection accepted from 165.225.128.186:57065 #1618 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.115 [initandlisten] connection accepted from 165.225.128.186:33714 #1619 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.207 [conn1] going to kill op: op: 32997.0
 m31000| Fri Feb 22 11:57:55.207 [conn1] going to kill op: op: 32998.0
 m31000| Fri Feb 22 11:57:55.207 [conn1618] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.207 [conn1618] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|10 } } cursorid:520160598754222 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.207 [conn1618] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:55.207 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.208 [conn1618] end connection 165.225.128.186:57065 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.208 [initandlisten] connection accepted from 165.225.128.186:37916 #1620 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.208 [conn1619] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.208 [conn1619] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|10 } } cursorid:520161030879148 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:55.208 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.208 [conn1619] end connection 165.225.128.186:33714 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.208 [initandlisten] connection accepted from 165.225.128.186:46173 #1621 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.307 [conn1] going to kill op: op: 33035.0
 m31000| Fri Feb 22 11:57:55.307 [conn1] going to kill op: op: 33036.0
 m31000| Fri Feb 22 11:57:55.310 [conn1621] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.310 [conn1620] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.310 [conn1621] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|19 } } cursorid:520555190110922 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.310 [conn1620] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|19 } } cursorid:520551023293650 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:55.311 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:55.311 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.311 [conn1621] end connection 165.225.128.186:46173 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.311 [conn1620] end connection 165.225.128.186:37916 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.311 [initandlisten] connection accepted from 165.225.128.186:33701 #1622 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.311 [initandlisten] connection accepted from 165.225.128.186:36120 #1623 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.408 [conn1] going to kill op: op: 33077.0
 m31000| Fri Feb 22 11:57:55.408 [conn1] going to kill op: op: 33079.0
 m31000| Fri Feb 22 11:57:55.408 [conn1] going to kill op: op: 33076.0
 m31000| Fri Feb 22 11:57:55.413 [conn1623] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.413 [conn1623] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|29 } } cursorid:520994395025129 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:55.413 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.413 [conn1622] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.413 [conn1623] end connection 165.225.128.186:36120 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.413 [conn1622] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|29 } } cursorid:520994082324792 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:55.413 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.414 [conn1622] end connection 165.225.128.186:33701 (6 connections now open)
 m31000| Fri Feb 22 11:57:55.414 [initandlisten] connection accepted from 165.225.128.186:35480 #1624 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.414 [initandlisten] connection accepted from 165.225.128.186:33597 #1625 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.416 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.416 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|40 } } cursorid:521380517290113 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:55.416 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.509 [conn1] going to kill op: op: 33118.0
 m31000| Fri Feb 22 11:57:55.509 [conn1] going to kill op: op: 33117.0
 m31000| Fri Feb 22 11:57:55.516 [conn1624] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.516 [conn1625] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.516 [conn1624] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|40 } } cursorid:521430889976302 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.516 [conn1625] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|40 } } cursorid:521430763534701 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.516 [conn1624] ClientCursor::find(): cursor not found in map '521430889976302' (ok after a drop)
 m31002| Fri Feb 22 11:57:55.516 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:55.516 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.516 [conn1625] end connection 165.225.128.186:33597 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.516 [conn1624] end connection 165.225.128.186:35480 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.516 [initandlisten] connection accepted from 165.225.128.186:46367 #1626 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.516 [initandlisten] connection accepted from 165.225.128.186:42035 #1627 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.610 [conn1] going to kill op: op: 33155.0
 m31000| Fri Feb 22 11:57:55.610 [conn1] going to kill op: op: 33156.0
 m31000| Fri Feb 22 11:57:55.618 [conn1626] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.618 [conn1626] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|50 } } cursorid:521868981139634 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:55.618 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.618 [conn1626] end connection 165.225.128.186:46367 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.619 [conn1627] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.619 [conn1627] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|50 } } cursorid:521869364193137 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.619 [initandlisten] connection accepted from 165.225.128.186:36113 #1628 (8 connections now open)
 m31001| Fri Feb 22 11:57:55.619 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.619 [conn1627] end connection 165.225.128.186:42035 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.619 [initandlisten] connection accepted from 165.225.128.186:41057 #1629 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.710 [conn1] going to kill op: op: 33191.0
 m31000| Fri Feb 22 11:57:55.710 [conn1] going to kill op: op: 33190.0
 m31000| Fri Feb 22 11:57:55.711 [conn1628] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.711 [conn1629] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.711 [conn1628] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|60 } } cursorid:522302948146514 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.711 [conn1629] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|60 } } cursorid:522306912928447 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:55.711 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:55.711 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.711 [conn1628] end connection 165.225.128.186:36113 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.711 [conn1629] end connection 165.225.128.186:41057 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.711 [initandlisten] connection accepted from 165.225.128.186:39055 #1630 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.711 [initandlisten] connection accepted from 165.225.128.186:62477 #1631 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.811 [conn1] going to kill op: op: 33228.0
 m31000| Fri Feb 22 11:57:55.811 [conn1] going to kill op: op: 33229.0
 m31000| Fri Feb 22 11:57:55.813 [conn1630] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.813 [conn1630] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|69 } } cursorid:522703594827496 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:55.813 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.813 [conn1631] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.814 [conn1631] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|69 } } cursorid:522702184049043 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.814 [conn1630] end connection 165.225.128.186:39055 (7 connections now open)
 m31002| Fri Feb 22 11:57:55.814 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.814 [conn1631] end connection 165.225.128.186:62477 (6 connections now open)
 m31000| Fri Feb 22 11:57:55.814 [initandlisten] connection accepted from 165.225.128.186:42037 #1632 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.814 [initandlisten] connection accepted from 165.225.128.186:57464 #1633 (8 connections now open)
 m31000| Fri Feb 22 11:57:55.912 [conn1] going to kill op: op: 33266.0
 m31000| Fri Feb 22 11:57:55.912 [conn1] going to kill op: op: 33267.0
 m31000| Fri Feb 22 11:57:55.916 [conn1632] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.916 [conn1632] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|79 } } cursorid:523140304772771 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.916 [conn1632] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:57:55.916 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.916 [conn1632] end connection 165.225.128.186:42037 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.916 [conn1633] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:55.916 [conn1633] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|79 } } cursorid:523141166136257 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:55.916 [initandlisten] connection accepted from 165.225.128.186:63513 #1634 (8 connections now open)
 m31002| Fri Feb 22 11:57:55.916 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:55.916 [conn1633] end connection 165.225.128.186:57464 (7 connections now open)
 m31000| Fri Feb 22 11:57:55.917 [initandlisten] connection accepted from 165.225.128.186:56461 #1635 (8 connections now open)
 m31000| Fri Feb 22 11:57:56.013 [conn1] going to kill op: op: 33305.0
 m31000| Fri Feb 22 11:57:56.013 [conn1] going to kill op: op: 33304.0
 m31000| Fri Feb 22 11:57:56.018 [conn1634] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:56.018 [conn1635] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:56.018 [conn1635] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|89 } } cursorid:523579434678205 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:38 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:56.018 [conn1634] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534275000|89 } } cursorid:523575135273489 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:56.018 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:56.018 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:56.018 [conn1634] end connection 165.225.128.186:63513 (7 connections now open)
 m31000| Fri Feb 22 11:57:56.019 [conn1635] end connection 165.225.128.186:56461 (7 connections now open)
 m31000| Fri Feb 22 11:57:56.019 [initandlisten] connection accepted from 165.225.128.186:46176 #1636 (7 connections now open)
 m31000| Fri Feb 22 11:57:56.019 [initandlisten] connection accepted from 165.225.128.186:37717 #1637 (8 connections now open)
 m31000| Fri Feb 22 11:57:56.113 [conn1] going to kill op: op: 33355.0
 m31000| Fri Feb 22 11:57:56.114 [conn1] going to kill op: op: 33354.0
 m31000|
Fri Feb 22 11:57:56.114 [conn1] going to kill op: op: 33353.0 m31000| Fri Feb 22 11:57:56.121 [conn1637] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.121 [conn1637] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|1 } } cursorid:524017535727209 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:89 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:56.121 [conn1636] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.121 [conn1636] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|1 } } cursorid:524017020016589 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.121 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31001| Fri Feb 22 11:57:56.122 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.122 [conn1637] end connection 165.225.128.186:37717 (7 connections now open) m31000| Fri Feb 22 11:57:56.122 [conn1636] end connection 165.225.128.186:46176 (6 connections now open) m31000| Fri Feb 22 11:57:56.122 [initandlisten] connection accepted from 165.225.128.186:46962 #1638 (7 connections now open) m31000| Fri Feb 22 11:57:56.122 [initandlisten] connection accepted from 165.225.128.186:51052 #1639 (8 connections now open) m31000| Fri Feb 22 11:57:56.122 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.122 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|1 } } cursorid:523964986958505 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:56.123 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.214 [conn1] going to kill op: op: 33395.0 m31000| Fri Feb 22 11:57:56.214 [conn1] going to 
kill op: op: 33396.0 m31000| Fri Feb 22 11:57:56.315 [conn1] going to kill op: op: 33426.0 m31000| Fri Feb 22 11:57:56.315 [conn1] going to kill op: op: 33427.0 m31000| Fri Feb 22 11:57:56.316 [conn1639] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.316 [conn1639] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|11 } } cursorid:524455847547103 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.316 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.316 [conn1639] end connection 165.225.128.186:51052 (7 connections now open) m31000| Fri Feb 22 11:57:56.316 [conn1638] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.316 [conn1638] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|11 } } cursorid:524454670103641 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:56.316 [initandlisten] connection accepted from 165.225.128.186:57932 #1640 (8 connections now open) m31000| Fri Feb 22 11:57:56.316 [conn1638] ClientCursor::find(): cursor not found in map '524454670103641' (ok after a drop) m31001| Fri Feb 22 11:57:56.316 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.316 [conn1638] end connection 165.225.128.186:46962 (7 connections now open) m31000| Fri Feb 22 11:57:56.317 [initandlisten] connection accepted from 165.225.128.186:56697 #1641 (8 connections now open) m31000| Fri Feb 22 11:57:56.416 [conn1] going to kill op: op: 33464.0 m31000| Fri Feb 22 11:57:56.416 [conn1] going to kill op: op: 33465.0 m31000| Fri Feb 22 11:57:56.418 [conn1640] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.418 [conn1640] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534276000|31 } } cursorid:525274521281114 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.418 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.419 [conn1640] end connection 165.225.128.186:57932 (7 connections now open) m31000| Fri Feb 22 11:57:56.419 [initandlisten] connection accepted from 165.225.128.186:39974 #1642 (8 connections now open) m31000| Fri Feb 22 11:57:56.419 [conn1641] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.419 [conn1641] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|31 } } cursorid:525280685478201 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:56.419 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.419 [conn1641] end connection 165.225.128.186:56697 (7 connections now open) m31000| Fri Feb 22 11:57:56.420 [initandlisten] connection accepted from 165.225.128.186:39841 #1643 (8 connections now open) m31000| Fri Feb 22 11:57:56.516 [conn1] going to kill op: op: 33511.0 m31000| Fri Feb 22 11:57:56.517 [conn1] going to kill op: op: 33513.0 m31000| Fri Feb 22 11:57:56.517 [conn1] going to kill op: op: 33514.0 m31000| Fri Feb 22 11:57:56.518 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.518 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|41 } } cursorid:525666275817051 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.518 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.521 [conn1642] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.521 [conn1642] getmore 
local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|41 } } cursorid:525713820402691 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.521 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.521 [conn1642] end connection 165.225.128.186:39974 (7 connections now open) m31000| Fri Feb 22 11:57:56.521 [initandlisten] connection accepted from 165.225.128.186:45809 #1644 (8 connections now open) m31000| Fri Feb 22 11:57:56.522 [conn1643] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.522 [conn1643] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|41 } } cursorid:525717231336151 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:56.522 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.522 [conn1643] end connection 165.225.128.186:39841 (7 connections now open) m31000| Fri Feb 22 11:57:56.522 [initandlisten] connection accepted from 165.225.128.186:40393 #1645 (8 connections now open) m31000| Fri Feb 22 11:57:56.617 [conn1] going to kill op: op: 33553.0 m31000| Fri Feb 22 11:57:56.617 [conn1] going to kill op: op: 33552.0 m31000| Fri Feb 22 11:57:56.623 [conn1644] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.623 [conn1644] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|51 } } cursorid:526152351222118 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.623 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.624 [conn1644] end connection 165.225.128.186:45809 (7 connections now open) m31000| Fri Feb 22 11:57:56.624 [initandlisten] connection accepted 
from 165.225.128.186:55728 #1646 (8 connections now open) m31000| Fri Feb 22 11:57:56.624 [conn1645] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.624 [conn1645] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|51 } } cursorid:526156173627046 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:56.624 [conn1645] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:57:56.624 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.625 [conn1645] end connection 165.225.128.186:40393 (7 connections now open) m31000| Fri Feb 22 11:57:56.625 [initandlisten] connection accepted from 165.225.128.186:47457 #1647 (8 connections now open) m31000| Fri Feb 22 11:57:56.718 [conn1] going to kill op: op: 33591.0 m31000| Fri Feb 22 11:57:56.718 [conn1] going to kill op: op: 33590.0 m31000| Fri Feb 22 11:57:56.726 [conn1646] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.726 [conn1646] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|61 } } cursorid:526588878543907 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.726 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.726 [conn1646] end connection 165.225.128.186:55728 (7 connections now open) m31000| Fri Feb 22 11:57:56.726 [initandlisten] connection accepted from 165.225.128.186:51313 #1648 (8 connections now open) m31000| Fri Feb 22 11:57:56.727 [conn1647] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.727 [conn1647] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|61 } } cursorid:526594323104615 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 
locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:56.727 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.727 [conn1647] end connection 165.225.128.186:47457 (7 connections now open) m31000| Fri Feb 22 11:57:56.727 [initandlisten] connection accepted from 165.225.128.186:42183 #1649 (8 connections now open) m31000| Fri Feb 22 11:57:56.819 [conn1] going to kill op: op: 33628.0 m31000| Fri Feb 22 11:57:56.819 [conn1] going to kill op: op: 33626.0 m31000| Fri Feb 22 11:57:56.828 [conn1648] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.828 [conn1648] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|71 } } cursorid:527028193042591 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.828 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.829 [conn1648] end connection 165.225.128.186:51313 (7 connections now open) m31000| Fri Feb 22 11:57:56.829 [initandlisten] connection accepted from 165.225.128.186:62888 #1650 (8 connections now open) m31000| Fri Feb 22 11:57:56.919 [conn1] going to kill op: op: 33660.0 m31000| Fri Feb 22 11:57:56.920 [conn1] going to kill op: op: 33659.0 m31000| Fri Feb 22 11:57:56.921 [conn1649] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:56.921 [conn1649] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|71 } } cursorid:527032345570177 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:56.921 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.921 [conn1649] end connection 165.225.128.186:42183 (7 connections now open) m31000| Fri Feb 22 11:57:56.921 [conn1650] { $err: "operation was interrupted", code: 11601 } 
m31000| Fri Feb 22 11:57:56.921 [conn1650] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|81 } } cursorid:527466412606540 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:56.921 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:56.921 [initandlisten] connection accepted from 165.225.128.186:40395 #1651 (8 connections now open) m31000| Fri Feb 22 11:57:56.921 [conn1650] end connection 165.225.128.186:62888 (6 connections now open) m31000| Fri Feb 22 11:57:56.921 [initandlisten] connection accepted from 165.225.128.186:38757 #1652 (8 connections now open) m31000| Fri Feb 22 11:57:57.020 [conn1] going to kill op: op: 33698.0 m31000| Fri Feb 22 11:57:57.020 [conn1] going to kill op: op: 33697.0 m31000| Fri Feb 22 11:57:57.024 [conn1652] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.024 [conn1652] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|90 } } cursorid:527861368138693 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.024 [conn1651] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.024 [conn1651] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534276000|90 } } cursorid:527860818111810 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:57.024 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.024 [conn1652] end connection 165.225.128.186:38757 (7 connections now open) m31001| Fri Feb 22 11:57:57.024 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.024 [conn1651] end connection 165.225.128.186:40395 (6 connections now open) m31000| Fri Feb 22 
11:57:57.024 [initandlisten] connection accepted from 165.225.128.186:50593 #1653 (7 connections now open) m31000| Fri Feb 22 11:57:57.024 [initandlisten] connection accepted from 165.225.128.186:62405 #1654 (8 connections now open) m31000| Fri Feb 22 11:57:57.121 [conn1] going to kill op: op: 33738.0 m31000| Fri Feb 22 11:57:57.121 [conn1] going to kill op: op: 33736.0 m31000| Fri Feb 22 11:57:57.126 [conn1654] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.126 [conn1654] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|1 } } cursorid:528299714770097 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.126 [conn1654] ClientCursor::find(): cursor not found in map '528299714770097' (ok after a drop) m31001| Fri Feb 22 11:57:57.127 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.127 [conn1654] end connection 165.225.128.186:62405 (7 connections now open) m31000| Fri Feb 22 11:57:57.127 [conn1653] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.127 [conn1653] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|1 } } cursorid:528298706194527 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.127 [initandlisten] connection accepted from 165.225.128.186:47557 #1655 (8 connections now open) m31002| Fri Feb 22 11:57:57.127 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.127 [conn1653] end connection 165.225.128.186:50593 (7 connections now open) m31000| Fri Feb 22 11:57:57.127 [initandlisten] connection accepted from 165.225.128.186:57112 #1656 (8 connections now open) m31000| Fri Feb 22 11:57:57.222 [conn1] going to kill op: op: 33790.0 m31000| Fri Feb 22 11:57:57.222 [conn1] going to kill op: 
op: 33789.0 m31000| Fri Feb 22 11:57:57.222 [conn1] going to kill op: op: 33788.0 m31000| Fri Feb 22 11:57:57.229 [conn1655] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.229 [conn1655] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|11 } } cursorid:528733205256950 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.229 [conn1656] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.229 [conn1656] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|12 } } cursorid:528738369295164 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:57.229 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:57.229 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.229 [conn1655] end connection 165.225.128.186:47557 (7 connections now open) m31000| Fri Feb 22 11:57:57.229 [conn1656] end connection 165.225.128.186:57112 (7 connections now open) m31000| Fri Feb 22 11:57:57.230 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.230 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|11 } } cursorid:528685127699685 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.230 [initandlisten] connection accepted from 165.225.128.186:61777 #1657 (7 connections now open) m31000| Fri Feb 22 11:57:57.230 [initandlisten] connection accepted from 165.225.128.186:64884 #1658 (8 connections now open) m31001| Fri Feb 22 11:57:57.231 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.323 [conn1] going to kill op: op: 33828.0 m31000| Fri Feb 22 
11:57:57.323 [conn1] going to kill op: op: 33829.0 m31000| Fri Feb 22 11:57:57.332 [conn1657] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.332 [conn1657] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|22 } } cursorid:529174627731844 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:57.332 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.332 [conn1657] end connection 165.225.128.186:61777 (7 connections now open) m31000| Fri Feb 22 11:57:57.332 [conn1658] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.332 [conn1658] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|22 } } cursorid:529174604388027 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:57.332 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.332 [initandlisten] connection accepted from 165.225.128.186:32902 #1659 (8 connections now open) m31000| Fri Feb 22 11:57:57.332 [conn1658] end connection 165.225.128.186:64884 (7 connections now open) m31000| Fri Feb 22 11:57:57.333 [initandlisten] connection accepted from 165.225.128.186:60141 #1660 (8 connections now open) m31000| Fri Feb 22 11:57:57.423 [conn1] going to kill op: op: 33864.0 m31000| Fri Feb 22 11:57:57.424 [conn1] going to kill op: op: 33863.0 m31000| Fri Feb 22 11:57:57.424 [conn1659] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.424 [conn1659] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|32 } } cursorid:529613004669436 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.424 [conn1659] ClientCursor::find(): cursor not 
found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:57:57.424 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.425 [conn1659] end connection 165.225.128.186:32902 (7 connections now open) m31000| Fri Feb 22 11:57:57.425 [conn1660] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.425 [conn1660] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|32 } } cursorid:529614444027930 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:102 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.425 [initandlisten] connection accepted from 165.225.128.186:41689 #1661 (8 connections now open) m31001| Fri Feb 22 11:57:57.425 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.425 [conn1660] end connection 165.225.128.186:60141 (7 connections now open) m31000| Fri Feb 22 11:57:57.425 [initandlisten] connection accepted from 165.225.128.186:59308 #1662 (8 connections now open) m31000| Fri Feb 22 11:57:57.524 [conn1] going to kill op: op: 33904.0 m31000| Fri Feb 22 11:57:57.524 [conn1] going to kill op: op: 33901.0 m31000| Fri Feb 22 11:57:57.525 [conn1] going to kill op: op: 33902.0 m31000| Fri Feb 22 11:57:57.527 [conn1661] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.527 [conn1661] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|41 } } cursorid:530004203290550 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:57.527 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.527 [conn1661] end connection 165.225.128.186:41689 (7 connections now open) m31000| Fri Feb 22 11:57:57.527 [conn1662] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.527 [conn1662] getmore local.oplog.rs query: { ts: 
{ $gte: Timestamp 1361534277000|41 } } cursorid:530009193031703 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:57.528 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.528 [initandlisten] connection accepted from 165.225.128.186:40473 #1663 (8 connections now open) m31000| Fri Feb 22 11:57:57.528 [conn1662] end connection 165.225.128.186:59308 (6 connections now open) m31000| Fri Feb 22 11:57:57.528 [initandlisten] connection accepted from 165.225.128.186:36543 #1664 (8 connections now open) m31000| Fri Feb 22 11:57:57.528 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.528 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|51 } } cursorid:530394293366626 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:42 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:57:57.529 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.625 [conn1] going to kill op: op: 33942.0 m31000| Fri Feb 22 11:57:57.625 [conn1] going to kill op: op: 33943.0 m31000| Fri Feb 22 11:57:57.630 [conn1663] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.630 [conn1664] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.630 [conn1663] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|51 } } cursorid:530447225144223 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.630 [conn1664] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|51 } } cursorid:530446581703610 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:57.630 [rsBackgroundSync] repl: 
old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:57:57.630 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.630 [conn1663] end connection 165.225.128.186:40473 (7 connections now open) m31000| Fri Feb 22 11:57:57.630 [conn1664] end connection 165.225.128.186:36543 (7 connections now open) m31000| Fri Feb 22 11:57:57.631 [initandlisten] connection accepted from 165.225.128.186:56315 #1665 (7 connections now open) m31000| Fri Feb 22 11:57:57.631 [initandlisten] connection accepted from 165.225.128.186:58652 #1666 (8 connections now open) m31000| Fri Feb 22 11:57:57.726 [conn1] going to kill op: op: 33981.0 m31000| Fri Feb 22 11:57:57.726 [conn1] going to kill op: op: 33980.0 m31000| Fri Feb 22 11:57:57.732 [conn1666] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.732 [conn1666] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|61 } } cursorid:530884195708374 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:57:57.733 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.733 [conn1666] end connection 165.225.128.186:58652 (7 connections now open) m31000| Fri Feb 22 11:57:57.733 [conn1665] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:57:57.733 [conn1665] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|61 } } cursorid:530884778902026 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:57:57.733 [conn1665] ClientCursor::find(): cursor not found in map '530884778902026' (ok after a drop) m31002| Fri Feb 22 11:57:57.733 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:57:57.733 [initandlisten] connection accepted from 165.225.128.186:40192 #1667 (8 
connections now open)
 m31000| Fri Feb 22 11:57:57.733 [conn1665] end connection 165.225.128.186:56315 (6 connections now open)
 m31000| Fri Feb 22 11:57:57.733 [initandlisten] connection accepted from 165.225.128.186:61193 #1668 (8 connections now open)
 m31000| Fri Feb 22 11:57:57.827 [conn1] going to kill op: op: 34018.0
 m31000| Fri Feb 22 11:57:57.827 [conn1] going to kill op: op: 34019.0
 m31000| Fri Feb 22 11:57:57.835 [conn1667] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:57.835 [conn1668] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:57.835 [conn1667] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|71 } } cursorid:531323940320813 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:57.835 [conn1668] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|71 } } cursorid:531323344965707 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:57.835 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31002| Fri Feb 22 11:57:57.835 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:57.835 [conn1668] end connection 165.225.128.186:61193 (7 connections now open)
 m31000| Fri Feb 22 11:57:57.835 [conn1667] end connection 165.225.128.186:40192 (7 connections now open)
 m31000| Fri Feb 22 11:57:57.836 [initandlisten] connection accepted from 165.225.128.186:55281 #1669 (7 connections now open)
 m31000| Fri Feb 22 11:57:57.836 [initandlisten] connection accepted from 165.225.128.186:48161 #1670 (8 connections now open)
 m31000| Fri Feb 22 11:57:57.927 [conn1] going to kill op: op: 34057.0
 m31000| Fri Feb 22 11:57:57.928 [conn1] going to kill op: op: 34056.0
 m31000| Fri Feb 22 11:57:57.928 [conn1670] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:57.928 [conn1670] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|81 } } cursorid:531760322582294 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:57.928 [conn1669] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:57.928 [conn1669] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|81 } } cursorid:531760931569610 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:57.928 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:57.928 [conn1670] end connection 165.225.128.186:48161 (7 connections now open)
 m31002| Fri Feb 22 11:57:57.928 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:57.928 [conn1669] end connection 165.225.128.186:55281 (6 connections now open)
 m31000| Fri Feb 22 11:57:57.929 [initandlisten] connection accepted from 165.225.128.186:39655 #1671 (7 connections now open)
 m31000| Fri Feb 22 11:57:57.929 [initandlisten] connection accepted from 165.225.128.186:40014 #1672 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.028 [conn1] going to kill op: op: 34094.0
 m31000| Fri Feb 22 11:57:58.028 [conn1] going to kill op: op: 34095.0
 m31000| Fri Feb 22 11:57:58.031 [conn1672] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.031 [conn1671] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.031 [conn1672] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|91 } } cursorid:532155775876491 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:58.031 [conn1671] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534277000|91 } } cursorid:532156069845996 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:105 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.031 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31001| Fri Feb 22 11:57:58.031 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.031 [conn1671] end connection 165.225.128.186:39655 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.031 [conn1672] end connection 165.225.128.186:40014 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.032 [initandlisten] connection accepted from 165.225.128.186:36230 #1673 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.032 [initandlisten] connection accepted from 165.225.128.186:37892 #1674 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.129 [conn1] going to kill op: op: 34134.0
 m31000| Fri Feb 22 11:57:58.129 [conn1] going to kill op: op: 34133.0
 m31000| Fri Feb 22 11:57:58.134 [conn1674] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.134 [conn1674] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|2 } } cursorid:532593962349588 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:58.134 [conn1674] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:58.134 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.134 [conn1674] end connection 165.225.128.186:37892 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.134 [conn1673] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.135 [conn1673] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|2 } } cursorid:532594069059710 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.135 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.135 [conn1673] end connection 165.225.128.186:36230 (6 connections now open)
 m31000| Fri Feb 22 11:57:58.135 [initandlisten] connection accepted from 165.225.128.186:44369 #1675 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.135 [initandlisten] connection accepted from 165.225.128.186:56958 #1676 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.230 [conn1] going to kill op: op: 34173.0
 m31000| Fri Feb 22 11:57:58.230 [conn1] going to kill op: op: 34174.0
 m31000| Fri Feb 22 11:57:58.237 [conn1675] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.237 [conn1675] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|12 } } cursorid:533031942284360 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.237 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.237 [conn1675] end connection 165.225.128.186:44369 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.237 [initandlisten] connection accepted from 165.225.128.186:42752 #1677 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.238 [conn1676] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.238 [conn1676] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|12 } } cursorid:533031841596245 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.238 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.238 [conn1676] end connection 165.225.128.186:56958 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.238 [initandlisten] connection accepted from 165.225.128.186:57921 #1678 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.265 [conn1108] end connection 165.225.128.186:51780 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.265 [initandlisten] connection accepted from 165.225.128.186:36214 #1679 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.331 [conn1] going to kill op: op: 34224.0
 m31000| Fri Feb 22 11:57:58.331 [conn1] going to kill op: op: 34222.0
 m31000| Fri Feb 22 11:57:58.331 [conn1] going to kill op: op: 34225.0
 m31000| Fri Feb 22 11:57:58.333 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.333 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|22 } } cursorid:533417833824590 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.333 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.340 [conn1677] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.340 [conn1677] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|22 } } cursorid:533465589204639 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.340 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.340 [conn1677] end connection 165.225.128.186:42752 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.340 [initandlisten] connection accepted from 165.225.128.186:58962 #1680 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.340 [conn1678] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.340 [conn1678] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|22 } } cursorid:533470466838519 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.341 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.341 [conn1678] end connection 165.225.128.186:57921 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.341 [initandlisten] connection accepted from 165.225.128.186:42270 #1681 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.431 [conn1] going to kill op: op: 34260.0
 m31000| Fri Feb 22 11:57:58.432 [conn1] going to kill op: op: 34261.0
 m31000| Fri Feb 22 11:57:58.432 [conn1680] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.432 [conn1680] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|32 } } cursorid:533865724760351 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.432 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.432 [conn1680] end connection 165.225.128.186:58962 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.433 [conn1681] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.433 [conn1681] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|32 } } cursorid:533870240290175 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:58.433 [initandlisten] connection accepted from 165.225.128.186:58415 #1682 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.433 [conn1681] ClientCursor::find(): cursor not found in map '533870240290175' (ok after a drop)
 m31001| Fri Feb 22 11:57:58.433 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.433 [conn1681] end connection 165.225.128.186:42270 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.433 [initandlisten] connection accepted from 165.225.128.186:52393 #1683 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.532 [conn1] going to kill op: op: 34299.0
 m31000| Fri Feb 22 11:57:58.532 [conn1] going to kill op: op: 34298.0
 m31000| Fri Feb 22 11:57:58.535 [conn1682] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.535 [conn1682] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|41 } } cursorid:534260613662118 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.535 [conn1682] end connection 165.225.128.186:58415 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.535 [conn1683] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.535 [conn1683] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|41 } } cursorid:534265735676114 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:58.535 [initandlisten] connection accepted from 165.225.128.186:41917 #1684 (8 connections now open)
 m31001| Fri Feb 22 11:57:58.535 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.535 [conn1683] end connection 165.225.128.186:52393 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.536 [initandlisten] connection accepted from 165.225.128.186:42744 #1685 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.633 [conn1] going to kill op: op: 34350.0
 m31000| Fri Feb 22 11:57:58.633 [conn1] going to kill op: op: 34347.0
 m31000| Fri Feb 22 11:57:58.638 [conn1684] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.638 [conn1684] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|51 } } cursorid:534699637111939 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.638 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.638 [conn1684] end connection 165.225.128.186:41917 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.638 [initandlisten] connection accepted from 165.225.128.186:37725 #1686 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.638 [conn1685] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.638 [conn1685] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|51 } } cursorid:534702644620094 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.638 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.639 [conn1685] end connection 165.225.128.186:42744 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.639 [initandlisten] connection accepted from 165.225.128.186:41121 #1687 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.734 [conn1] going to kill op: op: 34398.0
 m31000| Fri Feb 22 11:57:58.734 [conn1] going to kill op: op: 34399.0
 m31000| Fri Feb 22 11:57:58.734 [conn1] going to kill op: op: 34397.0
 m31000| Fri Feb 22 11:57:58.741 [conn1686] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.741 [conn1686] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|61 } } cursorid:535136403574980 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.741 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.741 [conn1686] end connection 165.225.128.186:37725 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.741 [conn1687] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.741 [conn1687] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|62 } } cursorid:535141183301454 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:58.741 [initandlisten] connection accepted from 165.225.128.186:40361 #1688 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.741 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.741 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|51 } } cursorid:534650452983777 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.741 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.742 [conn1687] end connection 165.225.128.186:41121 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.742 [initandlisten] connection accepted from 165.225.128.186:47760 #1689 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.742 [conn12] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:58.742 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.835 [conn1] going to kill op: op: 34437.0
 m31000| Fri Feb 22 11:57:58.835 [conn1] going to kill op: op: 34438.0
 m31000| Fri Feb 22 11:57:58.844 [conn1689] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.844 [conn1689] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|72 } } cursorid:535579570644199 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.844 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.844 [conn1688] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.844 [conn1688] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|72 } } cursorid:535575368221454 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:58.844 [conn1689] end connection 165.225.128.186:47760 (7 connections now open)
 m31002| Fri Feb 22 11:57:58.845 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.845 [conn1688] end connection 165.225.128.186:40361 (6 connections now open)
 m31000| Fri Feb 22 11:57:58.845 [initandlisten] connection accepted from 165.225.128.186:37710 #1690 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.845 [initandlisten] connection accepted from 165.225.128.186:33702 #1691 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.936 [conn1] going to kill op: op: 34473.0
 m31000| Fri Feb 22 11:57:58.936 [conn1] going to kill op: op: 34472.0
 m31000| Fri Feb 22 11:57:58.937 [conn1691] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.937 [conn1691] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|82 } } cursorid:536017242103388 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:58.937 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.937 [conn1691] end connection 165.225.128.186:33702 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.937 [initandlisten] connection accepted from 165.225.128.186:36380 #1692 (8 connections now open)
 m31000| Fri Feb 22 11:57:58.937 [conn1690] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:58.937 [conn1690] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|82 } } cursorid:536016517669580 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:58.938 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:58.938 [conn1690] end connection 165.225.128.186:37710 (7 connections now open)
 m31000| Fri Feb 22 11:57:58.938 [initandlisten] connection accepted from 165.225.128.186:45600 #1693 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.036 [conn1] going to kill op: op: 34511.0
 m31000| Fri Feb 22 11:57:59.037 [conn1] going to kill op: op: 34510.0
 m31000| Fri Feb 22 11:57:59.040 [conn1692] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.040 [conn1692] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|91 } } cursorid:536408783351046 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.040 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.040 [conn1692] end connection 165.225.128.186:36380 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.040 [initandlisten] connection accepted from 165.225.128.186:38992 #1694 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.041 [conn1693] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.041 [conn1693] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534278000|91 } } cursorid:536411586819561 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.041 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.041 [conn1693] end connection 165.225.128.186:45600 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.041 [initandlisten] connection accepted from 165.225.128.186:38654 #1695 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.137 [conn1] going to kill op: op: 34548.0
 m31000| Fri Feb 22 11:57:59.137 [conn1] going to kill op: op: 34549.0
 m31000| Fri Feb 22 11:57:59.143 [conn1694] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.143 [conn1694] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|3 } } cursorid:536845259470371 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.143 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.143 [conn1694] end connection 165.225.128.186:38992 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.143 [initandlisten] connection accepted from 165.225.128.186:52613 #1696 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.143 [conn1695] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.143 [conn1695] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|3 } } cursorid:536850790216048 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:59.144 [conn1695] ClientCursor::find(): cursor not found in map '536850790216048' (ok after a drop)
 m31001| Fri Feb 22 11:57:59.144 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.144 [conn1695] end connection 165.225.128.186:38654 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.144 [initandlisten] connection accepted from 165.225.128.186:60933 #1697 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.238 [conn1] going to kill op: op: 34589.0
 m31000| Fri Feb 22 11:57:59.238 [conn1] going to kill op: op: 34591.0
 m31000| Fri Feb 22 11:57:59.246 [conn1696] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.246 [conn1696] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|13 } } cursorid:537284642680305 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.246 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.246 [conn1696] end connection 165.225.128.186:52613 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.246 [conn1697] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.246 [conn1697] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|13 } } cursorid:537288461273523 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:36 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.246 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.246 [conn1697] end connection 165.225.128.186:60933 (6 connections now open)
 m31000| Fri Feb 22 11:57:59.246 [initandlisten] connection accepted from 165.225.128.186:44994 #1698 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.253 [initandlisten] connection accepted from 165.225.128.186:34937 #1699 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.339 [conn1] going to kill op: op: 34627.0
 m31000| Fri Feb 22 11:57:59.339 [conn1] going to kill op: op: 34630.0
 m31000| Fri Feb 22 11:57:59.339 [conn1] going to kill op: op: 34628.0
 m31000| Fri Feb 22 11:57:59.343 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.343 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|33 } } cursorid:538070555125360 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.343 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.344 [conn1699] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.344 [conn1699] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|23 } } cursorid:537726921434857 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.344 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.345 [conn1699] end connection 165.225.128.186:34937 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.345 [initandlisten] connection accepted from 165.225.128.186:47217 #1700 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.348 [conn1698] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.348 [conn1698] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|24 } } cursorid:537722745683151 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.348 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.348 [conn1698] end connection 165.225.128.186:44994 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.348 [initandlisten] connection accepted from 165.225.128.186:43054 #1701 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.440 [conn1] going to kill op: op: 34666.0
 m31000| Fri Feb 22 11:57:59.440 [conn1] going to kill op: op: 34667.0
 m31000| Fri Feb 22 11:57:59.440 [conn1701] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.440 [conn1701] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|34 } } cursorid:538120774475440 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.440 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.440 [conn1701] end connection 165.225.128.186:43054 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.441 [initandlisten] connection accepted from 165.225.128.186:56004 #1702 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.447 [conn1700] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.447 [conn1700] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|33 } } cursorid:538117412219405 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:59.447 [conn1700] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31002| Fri Feb 22 11:57:59.447 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.447 [conn1700] end connection 165.225.128.186:47217 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.447 [initandlisten] connection accepted from 165.225.128.186:62739 #1703 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.541 [conn1] going to kill op: op: 34704.0
 m31000| Fri Feb 22 11:57:59.541 [conn1] going to kill op: op: 34705.0
 m31000| Fri Feb 22 11:57:59.543 [conn1702] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.543 [conn1702] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|43 } } cursorid:538512264211298 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.543 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.543 [conn1702] end connection 165.225.128.186:56004 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.543 [initandlisten] connection accepted from 165.225.128.186:59859 #1704 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.549 [conn1703] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.549 [conn1703] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|43 } } cursorid:538516512315676 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.549 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.549 [conn1703] end connection 165.225.128.186:62739 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.550 [initandlisten] connection accepted from 165.225.128.186:41632 #1705 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.641 [conn1] going to kill op: op: 34741.0
 m31000| Fri Feb 22 11:57:59.642 [conn1] going to kill op: op: 34742.0
 m31000| Fri Feb 22 11:57:59.645 [conn1704] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.645 [conn1704] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|53 } } cursorid:538908101377786 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.645 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.645 [conn1704] end connection 165.225.128.186:59859 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.645 [initandlisten] connection accepted from 165.225.128.186:55950 #1706 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.743 [conn1] going to kill op: op: 34776.0
 m31000| Fri Feb 22 11:57:59.743 [conn1] going to kill op: op: 34775.0
 m31000| Fri Feb 22 11:57:59.743 [conn1705] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.743 [conn1705] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|53 } } cursorid:538912115984567 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.744 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.744 [conn1705] end connection 165.225.128.186:41632 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.744 [initandlisten] connection accepted from 165.225.128.186:50343 #1707 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.748 [conn1706] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.748 [conn1706] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|63 } } cursorid:539303509685454 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.748 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.748 [conn1706] end connection 165.225.128.186:55950 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.749 [initandlisten] connection accepted from 165.225.128.186:45350 #1708 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.843 [conn1] going to kill op: op: 34824.0
 m31000| Fri Feb 22 11:57:59.844 [conn1] going to kill op: op: 34822.0
 m31000| Fri Feb 22 11:57:59.844 [conn1] going to kill op: op: 34825.0
 m31000| Fri Feb 22 11:57:59.844 [conn12] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.844 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|73 } } cursorid:539690189246991 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.845 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.846 [conn1707] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.846 [conn1707] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|73 } } cursorid:539693487380882 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.846 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.846 [conn1707] end connection 165.225.128.186:50343 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.847 [initandlisten] connection accepted from 165.225.128.186:47452 #1709 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.851 [conn1708] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.851 [conn1708] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|73 } } cursorid:539697539066910 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:57:59.851 [conn1708] ClientCursor::find(): cursor not found in map '539697539066910' (ok after a drop)
 m31001| Fri Feb 22 11:57:59.851 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.851 [conn1708] end connection 165.225.128.186:45350 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.851 [initandlisten] connection accepted from 165.225.128.186:35348 #1710 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.944 [conn1] going to kill op: op: 34864.0
 m31000| Fri Feb 22 11:57:59.945 [conn1] going to kill op: op: 34863.0
 m31000| Fri Feb 22 11:57:59.948 [conn1709] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.948 [conn1709] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|83 } } cursorid:540089442488997 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:57:59.949 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.949 [conn1709] end connection 165.225.128.186:47452 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.949 [initandlisten] connection accepted from 165.225.128.186:59305 #1711 (8 connections now open)
 m31000| Fri Feb 22 11:57:59.953 [conn1710] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:57:59.953 [conn1710] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|83 } } cursorid:540092745266001 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:41 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:57:59.953 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:57:59.953 [conn1710] end connection 165.225.128.186:35348 (7 connections now open)
 m31000| Fri Feb 22 11:57:59.953 [initandlisten] connection accepted from 165.225.128.186:45244 #1712 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.045 [conn1] going to kill op: op: 34899.0
 m31000| Fri Feb 22 11:58:00.045 [conn1] going to kill op: op: 34901.0
 m31000| Fri Feb 22 11:58:00.046 [conn1712] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.046 [conn1712] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|93 } } cursorid:540487416070100 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:58:00.046 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.046 [conn1712] end connection 165.225.128.186:45244 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.046 [initandlisten] connection accepted from 165.225.128.186:57191 #1713 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.051 [conn1711] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.051 [conn1711] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534279000|93 } } cursorid:540483288140393 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:58:00.051 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.051 [conn1711] end connection 165.225.128.186:59305 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.051 [initandlisten] connection accepted from 165.225.128.186:56430 #1714 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.146 [conn1] going to kill op: op: 34940.0
 m31000| Fri Feb 22 11:58:00.146 [conn1] going to kill op: op: 34942.0
 m31000| Fri Feb 22 11:58:00.148 [conn1713] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.148 [conn1713] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|3 } } cursorid:540878194202082 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:58:00.149 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.149 [conn1713] end connection 165.225.128.186:57191 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.149 [initandlisten] connection accepted from 165.225.128.186:46027 #1715 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.154 [conn1714] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.154 [conn1714] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|4 } } cursorid:540883302920579 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:58:00.154 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.154 [conn1714] end connection 165.225.128.186:56430 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.154 [initandlisten] connection accepted from 165.225.128.186:43253 #1716 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.247 [conn1] going to kill op: op: 34979.0
 m31000| Fri Feb 22 11:58:00.247 [conn1] going to kill op: op: 34980.0
 m31000| Fri Feb 22 11:58:00.251 [conn1715] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.251 [conn1715] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|13 } } cursorid:541273842314483 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
 m31000| Fri Feb 22 11:58:00.251 [conn1715] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
 m31001| Fri Feb 22 11:58:00.251 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.251 [conn1715] end connection 165.225.128.186:46027 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.252 [initandlisten] connection accepted from 165.225.128.186:64476 #1717 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.256 [conn1716] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.256 [conn1716] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|14 } } cursorid:541279292467034 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
 m31002| Fri Feb 22 11:58:00.256 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.256 [conn1716] end connection 165.225.128.186:43253 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.256 [initandlisten] connection accepted from 165.225.128.186:33667 #1718 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.348 [conn1] going to kill op: op: 35020.0
 m31000| Fri Feb 22 11:58:00.348 [conn1] going to kill op: op: 35018.0
 m31000| Fri Feb 22 11:58:00.348 [conn1] going to kill op: op: 35016.0
 m31000| Fri Feb 22 11:58:00.353 [conn1717] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.353 [conn1717] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|24 } } cursorid:541670130816294 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:58:00.354 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.354 [conn1717] end connection 165.225.128.186:64476 (7 connections now open)
 m31000| Fri Feb 22 11:58:00.354 [initandlisten] connection accepted from 165.225.128.186:34722 #1719 (8 connections now open)
 m31000| Fri Feb 22 11:58:00.354 [conn8] { $err: "operation was interrupted", code: 11601 }
 m31000| Fri Feb 22 11:58:00.354 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|34 } } cursorid:542017558322425 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
 m31001| Fri Feb 22 11:58:00.355 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
 m31000| Fri Feb 22 11:58:00.449
[conn1] going to kill op: op: 35055.0 m31000| Fri Feb 22 11:58:00.449 [conn1] going to kill op: op: 35053.0 m31000| Fri Feb 22 11:58:00.450 [conn1718] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.450 [conn1718] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|24 } } cursorid:541672996435037 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:00.450 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.450 [conn1718] end connection 165.225.128.186:33667 (7 connections now open) m31000| Fri Feb 22 11:58:00.450 [initandlisten] connection accepted from 165.225.128.186:38231 #1720 (8 connections now open) m31000| Fri Feb 22 11:58:00.456 [conn1719] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.456 [conn1719] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|34 } } cursorid:542064956971512 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:00.456 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.456 [conn1719] end connection 165.225.128.186:34722 (7 connections now open) m31000| Fri Feb 22 11:58:00.457 [initandlisten] connection accepted from 165.225.128.186:47415 #1721 (8 connections now open) m31000| Fri Feb 22 11:58:00.550 [conn1] going to kill op: op: 35091.0 m31000| Fri Feb 22 11:58:00.550 [conn1] going to kill op: op: 35093.0 m31000| Fri Feb 22 11:58:00.552 [conn1720] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.552 [conn1720] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|43 } } cursorid:542455276101931 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 
11:58:00.552 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.553 [conn1720] end connection 165.225.128.186:38231 (7 connections now open) m31000| Fri Feb 22 11:58:00.553 [initandlisten] connection accepted from 165.225.128.186:35801 #1722 (8 connections now open) m31000| Fri Feb 22 11:58:00.559 [conn1721] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.559 [conn1721] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|44 } } cursorid:542459401709426 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:00.559 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.559 [conn1721] end connection 165.225.128.186:47415 (7 connections now open) m31000| Fri Feb 22 11:58:00.559 [initandlisten] connection accepted from 165.225.128.186:38176 #1723 (8 connections now open) m31000| Fri Feb 22 11:58:00.650 [conn1] going to kill op: op: 35129.0 m31000| Fri Feb 22 11:58:00.651 [conn1] going to kill op: op: 35128.0 m31000| Fri Feb 22 11:58:00.651 [conn1723] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.651 [conn1723] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|54 } } cursorid:542855463621246 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:00.651 [conn1723] ClientCursor::find(): cursor not found in map '542855463621246' (ok after a drop) m31001| Fri Feb 22 11:58:00.651 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.651 [conn1723] end connection 165.225.128.186:38176 (7 connections now open) m31000| Fri Feb 22 11:58:00.651 [initandlisten] connection accepted from 165.225.128.186:35588 #1724 (8 connections now open) m31000| Fri Feb 22 11:58:00.655 [conn1722] { 
$err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.655 [conn1722] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|53 } } cursorid:542850905440097 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:40 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:00.655 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.655 [conn1722] end connection 165.225.128.186:35801 (7 connections now open) m31000| Fri Feb 22 11:58:00.655 [initandlisten] connection accepted from 165.225.128.186:44002 #1725 (8 connections now open) m31000| Fri Feb 22 11:58:00.751 [conn1] going to kill op: op: 35168.0 m31000| Fri Feb 22 11:58:00.751 [conn1] going to kill op: op: 35166.0 m31000| Fri Feb 22 11:58:00.753 [conn1724] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.753 [conn1724] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|63 } } cursorid:543245363252403 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:00.753 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.753 [conn1724] end connection 165.225.128.186:35588 (7 connections now open) m31000| Fri Feb 22 11:58:00.754 [initandlisten] connection accepted from 165.225.128.186:39127 #1726 (8 connections now open) m31000| Fri Feb 22 11:58:00.758 [conn1725] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.758 [conn1725] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|63 } } cursorid:543250224736433 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:00.758 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.758 [conn1725] end connection 
165.225.128.186:44002 (7 connections now open) m31000| Fri Feb 22 11:58:00.758 [initandlisten] connection accepted from 165.225.128.186:58541 #1727 (8 connections now open) m31000| Fri Feb 22 11:58:00.852 [conn1] going to kill op: op: 35205.0 m31000| Fri Feb 22 11:58:00.852 [conn1] going to kill op: op: 35206.0 m31000| Fri Feb 22 11:58:00.852 [conn1] going to kill op: op: 35208.0 m31000| Fri Feb 22 11:58:00.855 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.855 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|83 } } cursorid:543988951785414 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:00.855 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.856 [conn1726] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.856 [conn1726] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|73 } } cursorid:543640143523836 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:31 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:00.856 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.856 [conn1726] end connection 165.225.128.186:39127 (7 connections now open) m31000| Fri Feb 22 11:58:00.856 [initandlisten] connection accepted from 165.225.128.186:34648 #1728 (8 connections now open) m31000| Fri Feb 22 11:58:00.860 [conn1727] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.860 [conn1727] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|74 } } cursorid:543644818912703 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:00.861 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:58:00.861 [conn1727] end connection 165.225.128.186:58541 (7 connections now open) m31000| Fri Feb 22 11:58:00.861 [initandlisten] connection accepted from 165.225.128.186:57369 #1729 (8 connections now open) m31000| Fri Feb 22 11:58:00.953 [conn1] going to kill op: op: 35245.0 m31000| Fri Feb 22 11:58:00.953 [conn1] going to kill op: op: 35244.0 m31000| Fri Feb 22 11:58:00.959 [conn1728] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:00.959 [conn1728] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|83 } } cursorid:544036700533124 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:00.959 [conn1728] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:58:00.959 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:00.959 [conn1728] end connection 165.225.128.186:34648 (7 connections now open) m31000| Fri Feb 22 11:58:00.959 [initandlisten] connection accepted from 165.225.128.186:42082 #1730 (8 connections now open) m31000| Fri Feb 22 11:58:01.054 [conn1] going to kill op: op: 35279.0 m31000| Fri Feb 22 11:58:01.054 [conn1] going to kill op: op: 35278.0 m31000| Fri Feb 22 11:58:01.055 [conn1729] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.055 [conn1729] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|84 } } cursorid:544040014221378 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.055 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.055 [conn1729] end connection 165.225.128.186:57369 (7 connections now open) m31000| Fri Feb 22 11:58:01.055 [initandlisten] connection accepted from 165.225.128.186:35594 #1731 (8 connections now open) m31000| Fri Feb 22 
11:58:01.061 [conn1730] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.061 [conn1730] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534280000|93 } } cursorid:544430459037635 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.061 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.061 [conn1730] end connection 165.225.128.186:42082 (7 connections now open) m31000| Fri Feb 22 11:58:01.062 [initandlisten] connection accepted from 165.225.128.186:39558 #1732 (8 connections now open) m31000| Fri Feb 22 11:58:01.155 [conn1] going to kill op: op: 35320.0 m31000| Fri Feb 22 11:58:01.155 [conn1] going to kill op: op: 35318.0 m31000| Fri Feb 22 11:58:01.157 [conn1731] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.157 [conn1731] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|5 } } cursorid:544821668434140 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.157 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.157 [conn1731] end connection 165.225.128.186:35594 (7 connections now open) m31000| Fri Feb 22 11:58:01.158 [initandlisten] connection accepted from 165.225.128.186:56719 #1733 (8 connections now open) m31000| Fri Feb 22 11:58:01.164 [conn1732] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.164 [conn1732] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|5 } } cursorid:544826431765532 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.164 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.164 [conn1732] end 
connection 165.225.128.186:39558 (7 connections now open) m31000| Fri Feb 22 11:58:01.164 [initandlisten] connection accepted from 165.225.128.186:34046 #1734 (8 connections now open) m31000| Fri Feb 22 11:58:01.255 [conn1] going to kill op: op: 35355.0 m31000| Fri Feb 22 11:58:01.256 [conn1] going to kill op: op: 35356.0 m31000| Fri Feb 22 11:58:01.256 [conn1734] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.256 [conn1734] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|16 } } cursorid:545221188661562 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.256 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.256 [conn1734] end connection 165.225.128.186:34046 (7 connections now open) m31000| Fri Feb 22 11:58:01.257 [initandlisten] connection accepted from 165.225.128.186:52865 #1735 (8 connections now open) m31000| Fri Feb 22 11:58:01.260 [conn1733] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.260 [conn1733] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|15 } } cursorid:545217798716664 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.260 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.260 [conn1733] end connection 165.225.128.186:56719 (7 connections now open) m31000| Fri Feb 22 11:58:01.260 [initandlisten] connection accepted from 165.225.128.186:47256 #1736 (8 connections now open) m31000| Fri Feb 22 11:58:01.356 [conn1] going to kill op: op: 35395.0 m31000| Fri Feb 22 11:58:01.356 [conn1] going to kill op: op: 35393.0 m31000| Fri Feb 22 11:58:01.357 [conn1] going to kill op: op: 35392.0 m31000| Fri Feb 22 11:58:01.357 [conn1736] { $err: "operation was interrupted", code: 11601 
} m31000| Fri Feb 22 11:58:01.357 [conn1736] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|25 } } cursorid:545615671454898 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.357 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.357 [conn1736] end connection 165.225.128.186:47256 (7 connections now open) m31000| Fri Feb 22 11:58:01.358 [initandlisten] connection accepted from 165.225.128.186:47414 #1737 (8 connections now open) m31000| Fri Feb 22 11:58:01.359 [conn1735] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.359 [conn1735] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|25 } } cursorid:545612583911943 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:01.359 [conn1735] ClientCursor::find(): cursor not found in map '545612583911943' (ok after a drop) m31001| Fri Feb 22 11:58:01.359 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.359 [conn1735] end connection 165.225.128.186:52865 (7 connections now open) m31000| Fri Feb 22 11:58:01.359 [initandlisten] connection accepted from 165.225.128.186:60788 #1738 (8 connections now open) m31000| Fri Feb 22 11:58:01.365 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.365 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|35 } } cursorid:545960575788813 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.365 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.457 [conn1] going to kill op: op: 35434.0 m31000| Fri Feb 22 11:58:01.457 [conn1] going to kill op: op: 35433.0 
m31000| Fri Feb 22 11:58:01.459 [conn1737] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.459 [conn1737] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|35 } } cursorid:546007280543345 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.460 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.460 [conn1737] end connection 165.225.128.186:47414 (7 connections now open) m31000| Fri Feb 22 11:58:01.460 [initandlisten] connection accepted from 165.225.128.186:61370 #1739 (8 connections now open) m31000| Fri Feb 22 11:58:01.461 [conn1738] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.461 [conn1738] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|35 } } cursorid:546011982594796 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.461 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.461 [conn1738] end connection 165.225.128.186:60788 (7 connections now open) m31000| Fri Feb 22 11:58:01.462 [initandlisten] connection accepted from 165.225.128.186:43352 #1740 (8 connections now open) m31000| Fri Feb 22 11:58:01.558 [conn1] going to kill op: op: 35472.0 m31000| Fri Feb 22 11:58:01.558 [conn1] going to kill op: op: 35471.0 m31000| Fri Feb 22 11:58:01.562 [conn1739] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.562 [conn1739] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|45 } } cursorid:546445541902571 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.562 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 
11:58:01.562 [conn1739] end connection 165.225.128.186:61370 (7 connections now open) m31000| Fri Feb 22 11:58:01.563 [initandlisten] connection accepted from 165.225.128.186:55958 #1741 (8 connections now open) m31000| Fri Feb 22 11:58:01.563 [conn1740] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.563 [conn1740] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|45 } } cursorid:546448738645897 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.563 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.564 [conn1740] end connection 165.225.128.186:43352 (7 connections now open) m31000| Fri Feb 22 11:58:01.564 [initandlisten] connection accepted from 165.225.128.186:58175 #1742 (8 connections now open) m31000| Fri Feb 22 11:58:01.659 [conn1] going to kill op: op: 35509.0 m31000| Fri Feb 22 11:58:01.659 [conn1] going to kill op: op: 35510.0 m31000| Fri Feb 22 11:58:01.665 [conn1741] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.665 [conn1741] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|55 } } cursorid:546883056532515 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.665 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.665 [conn1742] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.665 [conn1742] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|55 } } cursorid:546887645523951 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:01.665 [conn1741] end connection 165.225.128.186:55958 (7 connections now open) m31000| Fri Feb 22 11:58:01.665 [conn1742] 
ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:58:01.666 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.666 [conn1742] end connection 165.225.128.186:58175 (6 connections now open) m31000| Fri Feb 22 11:58:01.666 [initandlisten] connection accepted from 165.225.128.186:49414 #1743 (7 connections now open) m31000| Fri Feb 22 11:58:01.666 [initandlisten] connection accepted from 165.225.128.186:41628 #1744 (8 connections now open) m31000| Fri Feb 22 11:58:01.760 [conn1] going to kill op: op: 35547.0 m31000| Fri Feb 22 11:58:01.760 [conn1] going to kill op: op: 35548.0 m31000| Fri Feb 22 11:58:01.768 [conn1743] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.768 [conn1743] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|65 } } cursorid:547325795780044 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.768 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.768 [conn1743] end connection 165.225.128.186:49414 (7 connections now open) m31000| Fri Feb 22 11:58:01.768 [conn1744] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.768 [conn1744] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|65 } } cursorid:547325577670465 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:01.768 [initandlisten] connection accepted from 165.225.128.186:53845 #1745 (8 connections now open) m31001| Fri Feb 22 11:58:01.768 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.768 [conn1744] end connection 165.225.128.186:41628 (7 connections now open) m31000| Fri Feb 22 11:58:01.769 [initandlisten] connection accepted from 
165.225.128.186:41306 #1746 (8 connections now open) m31002| Fri Feb 22 11:58:01.841 [conn9] end connection 165.225.128.186:57914 (2 connections now open) m31002| Fri Feb 22 11:58:01.842 [initandlisten] connection accepted from 165.225.128.186:48725 #11 (3 connections now open) m31000| Fri Feb 22 11:58:01.860 [conn1] going to kill op: op: 35583.0 m31000| Fri Feb 22 11:58:01.861 [conn1746] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.861 [conn1746] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|75 } } cursorid:547763260545812 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.861 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.861 [conn1746] end connection 165.225.128.186:41306 (7 connections now open) m31000| Fri Feb 22 11:58:01.861 [initandlisten] connection accepted from 165.225.128.186:57644 #1747 (8 connections now open) m31000| Fri Feb 22 11:58:01.961 [conn1] going to kill op: op: 35627.0 m31000| Fri Feb 22 11:58:01.961 [conn1] going to kill op: op: 35626.0 m31000| Fri Feb 22 11:58:01.961 [conn1] going to kill op: op: 35628.0 m31000| Fri Feb 22 11:58:01.962 [conn1745] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.962 [conn1745] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|75 } } cursorid:547759256098743 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:01.962 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.962 [conn1745] end connection 165.225.128.186:53845 (7 connections now open) m31000| Fri Feb 22 11:58:01.963 [initandlisten] connection accepted from 165.225.128.186:60744 #1748 (8 connections now open) m31000| Fri Feb 22 11:58:01.963 [conn12] { $err: "operation was 
interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.963 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|84 } } cursorid:548106788607321 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:93 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:01.963 [conn1747] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:01.963 [conn1747] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|84 } } cursorid:548154607714240 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:101 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:01.963 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:01.964 [conn1747] end connection 165.225.128.186:57644 (7 connections now open) m31000| Fri Feb 22 11:58:01.964 [initandlisten] connection accepted from 165.225.128.186:45362 #1749 (8 connections now open) m31002| Fri Feb 22 11:58:01.964 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:02.062 [conn1] going to kill op: op: 35668.0 m31000| Fri Feb 22 11:58:02.062 [conn1] going to kill op: op: 35669.0 m31000| Fri Feb 22 11:58:02.066 [conn1748] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:02.066 [conn1748] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|94 } } cursorid:548588133563431 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:02.066 [conn10] end connection 165.225.128.186:53592 (2 connections now open) m31002| Fri Feb 22 11:58:02.066 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:02.066 [conn1748] end connection 165.225.128.186:60744 (7 connections now open) m31002| Fri Feb 22 11:58:02.066 [initandlisten] connection accepted from 165.225.128.186:61788 #12 (3 connections now 
open)
m31000| Fri Feb 22 11:58:02.066 [initandlisten] connection accepted from 165.225.128.186:33450 #1750 (8 connections now open)
m31000| Fri Feb 22 11:58:02.066 [conn1749] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.067 [conn1749] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534281000|94 } } cursorid:548592028086543 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.067 [conn1749] ClientCursor::find(): cursor not found in map '548592028086543' (ok after a drop)
m31001| Fri Feb 22 11:58:02.067 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.067 [conn1749] end connection 165.225.128.186:45362 (7 connections now open)
m31000| Fri Feb 22 11:58:02.067 [initandlisten] connection accepted from 165.225.128.186:36257 #1751 (8 connections now open)
m31000| Fri Feb 22 11:58:02.163 [conn1] going to kill op: op: 35709.0
m31000| Fri Feb 22 11:58:02.163 [conn1] going to kill op: op: 35710.0
m31000| Fri Feb 22 11:58:02.168 [conn1750] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.168 [conn1750] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|6 } } cursorid:549026456089983 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:02.168 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.169 [conn1750] end connection 165.225.128.186:33450 (7 connections now open)
m31000| Fri Feb 22 11:58:02.169 [initandlisten] connection accepted from 165.225.128.186:43110 #1752 (8 connections now open)
m31000| Fri Feb 22 11:58:02.169 [conn1751] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.169 [conn1751] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|6 } } cursorid:549030927211896 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:02.169 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.169 [conn1751] end connection 165.225.128.186:36257 (7 connections now open)
m31000| Fri Feb 22 11:58:02.170 [initandlisten] connection accepted from 165.225.128.186:58637 #1753 (8 connections now open)
m31000| Fri Feb 22 11:58:02.263 [conn1] going to kill op: op: 35747.0
m31000| Fri Feb 22 11:58:02.264 [conn1] going to kill op: op: 35748.0
m31000| Fri Feb 22 11:58:02.271 [conn1752] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.271 [conn1752] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|16 } } cursorid:549463945540595 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:02.271 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.271 [conn1752] end connection 165.225.128.186:43110 (7 connections now open)
m31000| Fri Feb 22 11:58:02.271 [initandlisten] connection accepted from 165.225.128.186:59883 #1754 (8 connections now open)
m31000| Fri Feb 22 11:58:02.272 [conn1753] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.272 [conn1753] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|16 } } cursorid:549469926907461 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:02.272 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.272 [conn1753] end connection 165.225.128.186:58637 (7 connections now open)
m31000| Fri Feb 22 11:58:02.272 [initandlisten] connection accepted from 165.225.128.186:37189 #1755 (8 connections now open)
m31000| Fri Feb 22 11:58:02.364 [conn1] going to kill op: op: 35784.0
m31000| Fri Feb 22 11:58:02.364 [conn1] going to kill op: op: 35786.0
m31000| Fri Feb 22 11:58:02.374 [conn1754] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.374 [conn1754] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|26 } } cursorid:549903491977388 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:02.374 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.374 [conn1754] end connection 165.225.128.186:59883 (7 connections now open)
m31000| Fri Feb 22 11:58:02.374 [initandlisten] connection accepted from 165.225.128.186:42463 #1756 (8 connections now open)
m31000| Fri Feb 22 11:58:02.465 [conn1] going to kill op: op: 35827.0
m31000| Fri Feb 22 11:58:02.466 [conn1] going to kill op: op: 35829.0
m31000| Fri Feb 22 11:58:02.466 [conn1] going to kill op: op: 35828.0
m31000| Fri Feb 22 11:58:02.466 [conn1756] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.466 [conn1755] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.466 [conn1756] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|36 } } cursorid:550341013498184 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.466 [conn1755] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|26 } } cursorid:549907098797437 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.466 [conn1755] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:58:02.466 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:02.466 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.466 [conn1755] end connection 165.225.128.186:37189 (7 connections now open)
m31000| Fri Feb 22 11:58:02.466 [conn1756] end connection 165.225.128.186:42463 (7 connections now open)
m31000| Fri Feb 22 11:58:02.467 [initandlisten] connection accepted from 165.225.128.186:34055 #1757 (7 connections now open)
m31000| Fri Feb 22 11:58:02.467 [initandlisten] connection accepted from 165.225.128.186:46424 #1758 (8 connections now open)
m31000| Fri Feb 22 11:58:02.467 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.468 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|36 } } cursorid:550293427364086 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:02.468 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.566 [conn1] going to kill op: op: 35868.0
m31000| Fri Feb 22 11:58:02.567 [conn1] going to kill op: op: 35867.0
m31000| Fri Feb 22 11:58:02.569 [conn1757] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.569 [conn1757] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|45 } } cursorid:550735446996277 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:02.569 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.569 [conn1758] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.569 [conn1758] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|45 } } cursorid:550735340541864 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.569 [conn1757] end connection 165.225.128.186:34055 (7 connections now open)
m31001| Fri Feb 22 11:58:02.569 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.570 [conn1758] end connection 165.225.128.186:46424 (6 connections now open)
m31000| Fri Feb 22 11:58:02.570 [initandlisten] connection accepted from 165.225.128.186:54018 #1759 (7 connections now open)
m31000| Fri Feb 22 11:58:02.570 [initandlisten] connection accepted from 165.225.128.186:64957 #1760 (8 connections now open)
m31000| Fri Feb 22 11:58:02.667 [conn1] going to kill op: op: 35905.0
m31000| Fri Feb 22 11:58:02.667 [conn1] going to kill op: op: 35906.0
m31000| Fri Feb 22 11:58:02.672 [conn1759] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.672 [conn1759] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|55 } } cursorid:551174215787722 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.672 [conn1760] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.672 [conn1760] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|55 } } cursorid:551173897282702 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:02.672 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:02.672 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.672 [conn1759] end connection 165.225.128.186:54018 (7 connections now open)
m31000| Fri Feb 22 11:58:02.672 [conn1760] end connection 165.225.128.186:64957 (6 connections now open)
m31000| Fri Feb 22 11:58:02.673 [initandlisten] connection accepted from 165.225.128.186:56455 #1761 (7 connections now open)
m31000| Fri Feb 22 11:58:02.673 [initandlisten] connection accepted from 165.225.128.186:35811 #1762 (8 connections now open)
m31000| Fri Feb 22 11:58:02.768 [conn1] going to kill op: op: 35947.0
m31000| Fri Feb 22 11:58:02.768 [conn1] going to kill op: op: 35946.0
m31000| Fri Feb 22 11:58:02.776 [conn1761] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.776 [conn1761] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|65 } } cursorid:551611866333360 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.776 [conn1762] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.776 [conn1762] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|65 } } cursorid:551613096036361 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:02.776 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:58:02.776 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.776 [conn1761] end connection 165.225.128.186:56455 (7 connections now open)
m31000| Fri Feb 22 11:58:02.776 [conn1762] end connection 165.225.128.186:35811 (7 connections now open)
m31000| Fri Feb 22 11:58:02.776 [initandlisten] connection accepted from 165.225.128.186:43944 #1763 (7 connections now open)
m31000| Fri Feb 22 11:58:02.776 [initandlisten] connection accepted from 165.225.128.186:38530 #1764 (8 connections now open)
m31000| Fri Feb 22 11:58:02.869 [conn1] going to kill op: op: 35984.0
m31000| Fri Feb 22 11:58:02.869 [conn1] going to kill op: op: 35985.0
m31000| Fri Feb 22 11:58:02.879 [conn1763] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.879 [conn1764] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.879 [conn1763] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|76 } } cursorid:552050352465937 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.879 [conn1764] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|76 } } cursorid:552049443843777 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.879 [conn1764] ClientCursor::find(): cursor not found in map '552049443843777' (ok after a drop)
m31002| Fri Feb 22 11:58:02.879 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:02.879 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.879 [conn1764] end connection 165.225.128.186:38530 (7 connections now open)
m31000| Fri Feb 22 11:58:02.879 [conn1763] end connection 165.225.128.186:43944 (7 connections now open)
m31000| Fri Feb 22 11:58:02.880 [initandlisten] connection accepted from 165.225.128.186:55666 #1765 (7 connections now open)
m31000| Fri Feb 22 11:58:02.880 [initandlisten] connection accepted from 165.225.128.186:44924 #1766 (8 connections now open)
m31000| Fri Feb 22 11:58:02.970 [conn1] going to kill op: op: 36022.0
m31000| Fri Feb 22 11:58:02.970 [conn1] going to kill op: op: 36019.0
m31000| Fri Feb 22 11:58:02.970 [conn1] going to kill op: op: 36020.0
m31000| Fri Feb 22 11:58:02.972 [conn1766] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.972 [conn1766] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|86 } } cursorid:552489287865393 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:02.972 [conn1765] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.972 [conn1765] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|86 } } cursorid:552487497378029 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:02.972 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:58:02.972 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:02.972 [conn1766] end connection 165.225.128.186:44924 (7 connections now open)
m31000| Fri Feb 22 11:58:02.972 [conn1765] end connection 165.225.128.186:55666 (7 connections now open)
m31000| Fri Feb 22 11:58:02.972 [initandlisten] connection accepted from 165.225.128.186:63394 #1767 (7 connections now open)
m31000| Fri Feb 22 11:58:02.973 [initandlisten] connection accepted from 165.225.128.186:36475 #1768 (8 connections now open)
m31000| Fri Feb 22 11:58:02.974 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:02.975 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|95 } } cursorid:552831039458651 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:02.975 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.071 [conn1] going to kill op: op: 36060.0
m31000| Fri Feb 22 11:58:03.072 [conn1] going to kill op: op: 36061.0
m31000| Fri Feb 22 11:58:03.075 [conn1767] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.075 [conn1767] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|95 } } cursorid:552883892783112 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:03.075 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.075 [conn1768] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.075 [conn1768] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534282000|95 } } cursorid:552882719821775 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.075 [conn1767] end connection 165.225.128.186:63394 (7 connections now open)
m31002| Fri Feb 22 11:58:03.075 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.076 [conn1768] end connection 165.225.128.186:36475 (6 connections now open)
m31000| Fri Feb 22 11:58:03.076 [initandlisten] connection accepted from 165.225.128.186:41082 #1769 (7 connections now open)
m31000| Fri Feb 22 11:58:03.076 [initandlisten] connection accepted from 165.225.128.186:52378 #1770 (8 connections now open)
m31000| Fri Feb 22 11:58:03.172 [conn1] going to kill op: op: 36101.0
m31000| Fri Feb 22 11:58:03.172 [conn1] going to kill op: op: 36100.0
m31000| Fri Feb 22 11:58:03.178 [conn1769] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.178 [conn1769] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|6 } } cursorid:553322656099964 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.178 [conn1769] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:58:03.178 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.178 [conn1769] end connection 165.225.128.186:41082 (7 connections now open)
m31000| Fri Feb 22 11:58:03.179 [initandlisten] connection accepted from 165.225.128.186:65473 #1771 (8 connections now open)
m31000| Fri Feb 22 11:58:03.179 [conn1770] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.179 [conn1770] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|6 } } cursorid:553321407221746 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:77 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.179 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.179 [conn1770] end connection 165.225.128.186:52378 (7 connections now open)
m31000| Fri Feb 22 11:58:03.179 [initandlisten] connection accepted from 165.225.128.186:49612 #1772 (8 connections now open)
m31000| Fri Feb 22 11:58:03.273 [conn1] going to kill op: op: 36138.0
m31000| Fri Feb 22 11:58:03.273 [conn1] going to kill op: op: 36139.0
m31000| Fri Feb 22 11:58:03.281 [conn1771] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.281 [conn1771] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|16 } } cursorid:553755258219920 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:03.281 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.281 [conn1771] end connection 165.225.128.186:65473 (7 connections now open)
m31000| Fri Feb 22 11:58:03.282 [initandlisten] connection accepted from 165.225.128.186:56087 #1773 (8 connections now open)
m31000| Fri Feb 22 11:58:03.282 [conn1772] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.282 [conn1772] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|16 } } cursorid:553759443767032 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:82 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.282 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.282 [conn1772] end connection 165.225.128.186:49612 (7 connections now open)
m31000| Fri Feb 22 11:58:03.282 [initandlisten] connection accepted from 165.225.128.186:39884 #1774 (8 connections now open)
m31000| Fri Feb 22 11:58:03.374 [conn1] going to kill op: op: 36174.0
m31000| Fri Feb 22 11:58:03.374 [conn1] going to kill op: op: 36173.0
m31000| Fri Feb 22 11:58:03.374 [conn1774] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.375 [conn1774] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|26 } } cursorid:554197806004437 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.375 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.375 [conn1774] end connection 165.225.128.186:39884 (7 connections now open)
m31000| Fri Feb 22 11:58:03.375 [initandlisten] connection accepted from 165.225.128.186:59331 #1775 (8 connections now open)
m31000| Fri Feb 22 11:58:03.475 [conn1] going to kill op: op: 36207.0
m31000| Fri Feb 22 11:58:03.475 [conn1] going to kill op: op: 36209.0
m31000| Fri Feb 22 11:58:03.476 [conn1773] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.476 [conn1773] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|26 } } cursorid:554192614404881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:03.476 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.476 [conn1773] end connection 165.225.128.186:56087 (7 connections now open)
m31000| Fri Feb 22 11:58:03.477 [initandlisten] connection accepted from 165.225.128.186:45597 #1776 (8 connections now open)
m31000| Fri Feb 22 11:58:03.477 [conn1775] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.477 [conn1775] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|35 } } cursorid:554588084002944 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.478 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.478 [conn1775] end connection 165.225.128.186:59331 (7 connections now open)
m31000| Fri Feb 22 11:58:03.478 [initandlisten] connection accepted from 165.225.128.186:37170 #1777 (8 connections now open)
m31000| Fri Feb 22 11:58:03.576 [conn1] going to kill op: op: 36258.0
m31000| Fri Feb 22 11:58:03.576 [conn1] going to kill op: op: 36259.0
m31000| Fri Feb 22 11:58:03.576 [conn1] going to kill op: op: 36257.0
m31000| Fri Feb 22 11:58:03.579 [conn1776] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.579 [conn1776] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|45 } } cursorid:555022199825438 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:03.579 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.579 [conn1776] end connection 165.225.128.186:45597 (7 connections now open)
m31000| Fri Feb 22 11:58:03.579 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.579 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|45 } } cursorid:554974723296452 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.580 [initandlisten] connection accepted from 165.225.128.186:60016 #1778 (8 connections now open)
m31000| Fri Feb 22 11:58:03.580 [conn1777] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.580 [conn1777] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|46 } } cursorid:555026216035544 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.580 [conn1777] ClientCursor::find(): cursor not found in map '555026216035544' (ok after a drop)
m31002| Fri Feb 22 11:58:03.580 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.580 [conn1777] end connection 165.225.128.186:37170 (7 connections now open)
m31000| Fri Feb 22 11:58:03.580 [initandlisten] connection accepted from 165.225.128.186:35237 #1779 (8 connections now open)
m31001| Fri Feb 22 11:58:03.581 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.677 [conn1] going to kill op: op: 36297.0
m31000| Fri Feb 22 11:58:03.677 [conn1] going to kill op: op: 36298.0
m31000| Fri Feb 22 11:58:03.682 [conn1778] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.682 [conn1778] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|56 } } cursorid:555460040003705 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:03.682 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.682 [conn1778] end connection 165.225.128.186:60016 (7 connections now open)
m31000| Fri Feb 22 11:58:03.683 [conn1779] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.683 [conn1779] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|56 } } cursorid:555464865220085 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.683 [initandlisten] connection accepted from 165.225.128.186:37316 #1780 (8 connections now open)
m31002| Fri Feb 22 11:58:03.683 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.683 [conn1779] end connection 165.225.128.186:35237 (7 connections now open)
m31000| Fri Feb 22 11:58:03.683 [initandlisten] connection accepted from 165.225.128.186:52686 #1781 (8 connections now open)
m31000| Fri Feb 22 11:58:03.778 [conn1] going to kill op: op: 36335.0
m31000| Fri Feb 22 11:58:03.778 [conn1] going to kill op: op: 36336.0
m31000| Fri Feb 22 11:58:03.785 [conn1780] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.785 [conn1780] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|66 } } cursorid:555898941383931 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:03.785 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.785 [conn1780] end connection 165.225.128.186:37316 (7 connections now open)
m31000| Fri Feb 22 11:58:03.785 [conn1781] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.785 [conn1781] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|66 } } cursorid:555903043592920 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.785 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.785 [conn1781] end connection 165.225.128.186:52686 (6 connections now open)
m31000| Fri Feb 22 11:58:03.785 [initandlisten] connection accepted from 165.225.128.186:52950 #1782 (7 connections now open)
m31000| Fri Feb 22 11:58:03.786 [initandlisten] connection accepted from 165.225.128.186:63359 #1783 (8 connections now open)
m31000| Fri Feb 22 11:58:03.878 [conn1] going to kill op: op: 36374.0
m31000| Fri Feb 22 11:58:03.879 [conn1] going to kill op: op: 36373.0
m31000| Fri Feb 22 11:58:03.888 [conn1783] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.888 [conn1783] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|76 } } cursorid:556340845140343 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.888 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.888 [conn1782] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.888 [conn1782] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|76 } } cursorid:556340098523367 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.888 [conn1783] end connection 165.225.128.186:63359 (7 connections now open)
m31000| Fri Feb 22 11:58:03.888 [conn1782] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:58:03.888 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.888 [conn1782] end connection 165.225.128.186:52950 (6 connections now open)
m31000| Fri Feb 22 11:58:03.889 [initandlisten] connection accepted from 165.225.128.186:62787 #1784 (7 connections now open)
m31000| Fri Feb 22 11:58:03.889 [initandlisten] connection accepted from 165.225.128.186:54271 #1785 (8 connections now open)
m31000| Fri Feb 22 11:58:03.979 [conn1] going to kill op: op: 36408.0
m31000| Fri Feb 22 11:58:03.980 [conn1] going to kill op: op: 36409.0
m31000| Fri Feb 22 11:58:03.981 [conn1784] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.981 [conn1785] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:03.981 [conn1784] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|86 } } cursorid:556778642232530 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:80 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:03.981 [conn1785] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|86 } } cursorid:556780108220441 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:03.981 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:03.981 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:03.981 [conn1784] end connection 165.225.128.186:62787 (7 connections now open)
m31000| Fri Feb 22 11:58:03.981 [conn1785] end connection 165.225.128.186:54271 (7 connections now open)
m31000| Fri Feb 22 11:58:03.982 [initandlisten] connection accepted from 165.225.128.186:42025 #1786 (7 connections now open)
m31000| Fri Feb 22 11:58:03.982 [initandlisten] connection accepted from 165.225.128.186:35228 #1787 (8 connections now open)
m31000| Fri Feb 22 11:58:04.080 [conn1] going to kill op: op: 36459.0
m31000| Fri Feb 22 11:58:04.080 [conn1] going to kill op: op: 36458.0
m31000| Fri Feb 22 11:58:04.081 [conn1] going to kill op: op: 36457.0
m31000| Fri Feb 22 11:58:04.084 [conn1786] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.084 [conn1787] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.084 [conn1786] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|95 } } cursorid:557174467209522 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:85 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.084 [conn1787] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|95 } } cursorid:557173495348083 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:04.085 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:04.085 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.085 [conn1786] end connection 165.225.128.186:42025 (7 connections now open)
m31000| Fri Feb 22 11:58:04.085 [conn1787] end connection 165.225.128.186:35228 (7 connections now open)
m31000| Fri Feb 22 11:58:04.085 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.085 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534283000|95 } } cursorid:557122444741998 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.085 [initandlisten] connection accepted from 165.225.128.186:42965 #1788 (7 connections now open)
m31000| Fri Feb 22 11:58:04.085 [initandlisten] connection accepted from 165.225.128.186:59744 #1789 (8 connections now open)
m31002| Fri Feb 22 11:58:04.086 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.181 [conn1] going to kill op: op: 36501.0
m31000| Fri Feb 22 11:58:04.181 [conn1] going to kill op: op: 36500.0
m31000| Fri Feb 22 11:58:04.187 [conn1789] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.187 [conn1789] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|7 } } cursorid:557612742100234 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.187 [conn1788] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.187 [conn1788] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|7 } } cursorid:557612159080881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:04.188 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.188 [conn1789] end connection 165.225.128.186:59744 (7 connections now open)
m31002| Fri Feb 22 11:58:04.188 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.188 [conn1788] end connection 165.225.128.186:42965 (6 connections now open)
m31000| Fri Feb 22 11:58:04.188 [initandlisten] connection accepted from 165.225.128.186:34018 #1790 (7 connections now open)
m31000| Fri Feb 22 11:58:04.188 [initandlisten] connection accepted from 165.225.128.186:65030 #1791 (8 connections now open)
m31000| Fri Feb 22 11:58:04.282 [conn1] going to kill op: op: 36539.0
m31000| Fri Feb 22 11:58:04.282 [conn1] going to kill op: op: 36540.0
m31000| Fri Feb 22 11:58:04.290 [conn1790] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.290 [conn1791] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.290 [conn1790] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|18 } } cursorid:558051489487493 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.290 [conn1791] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|18 } } cursorid:558050759999679 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.291 [conn1790] ClientCursor::find(): cursor not found in map '558051489487493' (ok after a drop)
m31002| Fri Feb 22 11:58:04.291 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:04.291 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.291 [conn1791] end connection 165.225.128.186:65030 (7 connections now open)
m31000| Fri Feb 22 11:58:04.291 [conn1790] end connection 165.225.128.186:34018 (7 connections now open)
m31000| Fri Feb 22 11:58:04.291 [initandlisten] connection accepted from 165.225.128.186:59650 #1792 (7 connections now open)
m31000| Fri Feb 22 11:58:04.291 [initandlisten] connection accepted from 165.225.128.186:44658 #1793 (8 connections now open)
m31000| Fri Feb 22 11:58:04.383 [conn1] going to kill op: op: 36575.0
m31000| Fri Feb 22 11:58:04.383 [conn1] going to kill op: op: 36574.0
m31000| Fri Feb 22 11:58:04.383 [conn1792] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.383 [conn1792] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|28 } } cursorid:558488667980036 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:04.383 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.384 [conn1792] end connection 165.225.128.186:59650 (7 connections now open)
m31000| Fri Feb 22 11:58:04.384 [conn1793] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.384 [conn1793] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|28 } } cursorid:558487954617846 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:04.384 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.384 [conn1793] end connection 165.225.128.186:44658 (6 connections now open)
m31000| Fri Feb 22 11:58:04.384 [initandlisten] connection accepted from 165.225.128.186:50844 #1794 (8 connections now open)
m31000| Fri Feb 22 11:58:04.384 [initandlisten] connection accepted from 165.225.128.186:60083 #1795 (8 connections now open)
m31000| Fri Feb 22 11:58:04.484 [conn1] going to kill op: op: 36612.0
m31000| Fri Feb 22 11:58:04.484 [conn1] going to kill op: op: 36613.0
m31000| Fri Feb 22 11:58:04.486 [conn1794] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.486 [conn1794] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|37 } } cursorid:558884525245812 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:04.486 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.486 [conn1794] end connection 165.225.128.186:50844 (7 connections now open)
m31000| Fri Feb 22 11:58:04.486 [initandlisten] connection accepted from 165.225.128.186:50009 #1796 (8 connections now open)
m31000| Fri Feb 22 11:58:04.486 [conn1795] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.486 [conn1795] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|37 } } cursorid:558883876417864 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:04.486 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.486 [conn1795] end connection 165.225.128.186:60083 (7 connections now open)
m31000| Fri Feb 22 11:58:04.487 [initandlisten] connection accepted from 165.225.128.186:44263 #1797 (8 connections now open)
m31000| Fri Feb 22 11:58:04.584 [conn1] going to kill op: op: 36651.0
m31000| Fri Feb 22 11:58:04.585 [conn1] going to kill op: op: 36650.0
m31000| Fri Feb 22 11:58:04.588 [conn1796] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.588 [conn1796] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|47 } } cursorid:559317651664843 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:04.588 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.588 [conn1796] end connection 165.225.128.186:50009 (7 connections now open)
m31000| Fri Feb 22 11:58:04.589 [conn1797] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.589 [conn1797] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|47 } } cursorid:559321681691987 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.589 [conn1797] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31002| Fri Feb 22 11:58:04.589 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.589 [conn1797] end connection 165.225.128.186:44263 (6 connections now open)
m31000| Fri Feb 22 11:58:04.589 [initandlisten] connection accepted from 165.225.128.186:33418 #1798 (7 connections now open)
m31000| Fri Feb 22 11:58:04.589 [initandlisten] connection accepted from 165.225.128.186:35350 #1799 (8 connections now open)
m31000| Fri Feb 22 11:58:04.685 [conn1] going to kill op: op: 36701.0
m31000| Fri Feb 22 11:58:04.685 [conn1] going to kill op: op: 36700.0
m31000| Fri Feb 22 11:58:04.686 [conn1] going to kill op: op: 36699.0
m31000| Fri Feb 22 11:58:04.691 [conn1798] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.691 [conn1798] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|57 } } cursorid:559760117248210 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:04.691 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.691 [conn1798] end connection 165.225.128.186:33418 (7 connections now open)
m31000| Fri Feb 22 11:58:04.691 [conn1799] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.691 [conn1799] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|57 } } cursorid:559760048230417 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:63 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:04.691 [initandlisten] connection accepted from 165.225.128.186:44875 #1800 (8 connections now open)
m31002| Fri Feb 22 11:58:04.691 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:04.692 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:04.692 [conn1799] end connection 165.225.128.186:35350 (7 connections now open)
m31000| Fri Feb 22
11:58:04.692 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|57 } } cursorid:559708868525871 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:04.692 [initandlisten] connection accepted from 165.225.128.186:40066 #1801 (8 connections now open) m31001| Fri Feb 22 11:58:04.692 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.786 [conn1] going to kill op: op: 36740.0 m31000| Fri Feb 22 11:58:04.786 [conn1] going to kill op: op: 36739.0 m31000| Fri Feb 22 11:58:04.793 [conn1800] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:04.793 [conn1800] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|67 } } cursorid:560197121267780 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:04.793 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.793 [conn1800] end connection 165.225.128.186:44875 (7 connections now open) m31000| Fri Feb 22 11:58:04.794 [conn1801] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:04.794 [conn1801] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|67 } } cursorid:560198011547487 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:04.794 [initandlisten] connection accepted from 165.225.128.186:61979 #1802 (8 connections now open) m31002| Fri Feb 22 11:58:04.794 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.794 [conn1801] end connection 165.225.128.186:40066 (7 connections now open) m31000| Fri Feb 22 11:58:04.794 [initandlisten] connection accepted from 165.225.128.186:62745 #1803 (8 connections now open) m31000| Fri Feb 22 
11:58:04.887 [conn1] going to kill op: op: 36780.0 m31000| Fri Feb 22 11:58:04.887 [conn1] going to kill op: op: 36779.0 m31000| Fri Feb 22 11:58:04.896 [conn1802] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:04.896 [conn1802] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|77 } } cursorid:560630740366005 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:04.896 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.896 [conn1802] end connection 165.225.128.186:61979 (7 connections now open) m31000| Fri Feb 22 11:58:04.896 [conn1803] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:04.896 [conn1803] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|77 } } cursorid:560636289929531 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:04.896 [initandlisten] connection accepted from 165.225.128.186:59934 #1804 (8 connections now open) m31002| Fri Feb 22 11:58:04.896 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.896 [conn1803] end connection 165.225.128.186:62745 (7 connections now open) m31000| Fri Feb 22 11:58:04.897 [initandlisten] connection accepted from 165.225.128.186:40365 #1805 (8 connections now open) m31000| Fri Feb 22 11:58:04.987 [conn1] going to kill op: op: 36815.0 m31000| Fri Feb 22 11:58:04.988 [conn1] going to kill op: op: 36814.0 m31000| Fri Feb 22 11:58:04.988 [conn1804] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:04.988 [conn1804] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|88 } } cursorid:561074083844636 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31000| 
Fri Feb 22 11:58:04.988 [conn1804] ClientCursor::find(): cursor not found in map '561074083844636' (ok after a drop) m31001| Fri Feb 22 11:58:04.988 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.988 [conn1804] end connection 165.225.128.186:59934 (7 connections now open) m31000| Fri Feb 22 11:58:04.988 [initandlisten] connection accepted from 165.225.128.186:53189 #1806 (8 connections now open) m31000| Fri Feb 22 11:58:04.988 [conn1805] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:04.988 [conn1805] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|88 } } cursorid:561075102313740 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:04.989 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:04.989 [conn1805] end connection 165.225.128.186:40365 (7 connections now open) m31000| Fri Feb 22 11:58:04.989 [initandlisten] connection accepted from 165.225.128.186:54327 #1807 (8 connections now open) m31000| Fri Feb 22 11:58:05.088 [conn1] going to kill op: op: 36855.0 m31000| Fri Feb 22 11:58:05.088 [conn1] going to kill op: op: 36852.0 m31000| Fri Feb 22 11:58:05.089 [conn1] going to kill op: op: 36853.0 m31000| Fri Feb 22 11:58:05.090 [conn1806] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.090 [conn1806] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|97 } } cursorid:561465230318785 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:05.090 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.090 [conn1806] end connection 165.225.128.186:53189 (7 connections now open) m31000| Fri Feb 22 11:58:05.091 [initandlisten] connection accepted from 
165.225.128.186:54998 #1808 (8 connections now open) m31000| Fri Feb 22 11:58:05.091 [conn1807] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.091 [conn1807] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534284000|97 } } cursorid:561468415227152 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.091 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.091 [conn1807] end connection 165.225.128.186:54327 (7 connections now open) m31000| Fri Feb 22 11:58:05.092 [initandlisten] connection accepted from 165.225.128.186:54272 #1809 (8 connections now open) m31000| Fri Feb 22 11:58:05.097 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.097 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|8 } } cursorid:561855420524628 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.097 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.189 [conn1] going to kill op: op: 36896.0 m31000| Fri Feb 22 11:58:05.189 [conn1] going to kill op: op: 36895.0 m31000| Fri Feb 22 11:58:05.193 [conn1808] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.193 [conn1808] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|8 } } cursorid:561902279171282 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:05.193 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.193 [conn1808] end connection 165.225.128.186:54998 (7 connections now open) m31000| Fri Feb 22 11:58:05.193 [initandlisten] connection accepted from 165.225.128.186:52616 #1810 (8 
connections now open) m31000| Fri Feb 22 11:58:05.194 [conn1809] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.194 [conn1809] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|8 } } cursorid:561907874895854 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.194 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.194 [conn1809] end connection 165.225.128.186:54272 (7 connections now open) m31000| Fri Feb 22 11:58:05.194 [initandlisten] connection accepted from 165.225.128.186:34686 #1811 (8 connections now open) m31000| Fri Feb 22 11:58:05.290 [conn1] going to kill op: op: 36933.0 m31000| Fri Feb 22 11:58:05.290 [conn1] going to kill op: op: 36934.0 m31000| Fri Feb 22 11:58:05.295 [conn1810] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.295 [conn1810] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|18 } } cursorid:562341972028938 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:05.295 [conn1810] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:58:05.295 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.295 [conn1810] end connection 165.225.128.186:52616 (7 connections now open) m31000| Fri Feb 22 11:58:05.296 [initandlisten] connection accepted from 165.225.128.186:43441 #1812 (8 connections now open) m31000| Fri Feb 22 11:58:05.296 [conn1811] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.296 [conn1811] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|18 } } cursorid:562345421478544 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 
10ms m31002| Fri Feb 22 11:58:05.296 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.296 [conn1811] end connection 165.225.128.186:34686 (7 connections now open) m31000| Fri Feb 22 11:58:05.297 [initandlisten] connection accepted from 165.225.128.186:42571 #1813 (8 connections now open) m31000| Fri Feb 22 11:58:05.391 [conn1] going to kill op: op: 36972.0 m31000| Fri Feb 22 11:58:05.391 [conn1] going to kill op: op: 36971.0 m31000| Fri Feb 22 11:58:05.398 [conn1812] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.398 [conn1812] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|28 } } cursorid:562779777820285 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:05.398 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.398 [conn1812] end connection 165.225.128.186:43441 (7 connections now open) m31000| Fri Feb 22 11:58:05.398 [initandlisten] connection accepted from 165.225.128.186:39926 #1814 (8 connections now open) m31000| Fri Feb 22 11:58:05.399 [conn1813] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.399 [conn1813] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|28 } } cursorid:562783238480826 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.399 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.399 [conn1813] end connection 165.225.128.186:42571 (7 connections now open) m31000| Fri Feb 22 11:58:05.399 [initandlisten] connection accepted from 165.225.128.186:54546 #1815 (8 connections now open) m31000| Fri Feb 22 11:58:05.492 [conn1] going to kill op: op: 37009.0 m31000| Fri Feb 22 11:58:05.492 [conn1] going to kill op: op: 37007.0 m31000| 
Fri Feb 22 11:58:05.500 [conn1814] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.500 [conn1814] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|38 } } cursorid:563216507511682 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:05.500 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.500 [conn1814] end connection 165.225.128.186:39926 (7 connections now open) m31000| Fri Feb 22 11:58:05.501 [initandlisten] connection accepted from 165.225.128.186:33017 #1816 (8 connections now open) m31000| Fri Feb 22 11:58:05.593 [conn1] going to kill op: op: 37041.0 m31000| Fri Feb 22 11:58:05.593 [conn1] going to kill op: op: 37040.0 m31000| Fri Feb 22 11:58:05.593 [conn1815] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.593 [conn1815] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|38 } } cursorid:563222488172532 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:62 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.593 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.594 [conn1815] end connection 165.225.128.186:54546 (7 connections now open) m31000| Fri Feb 22 11:58:05.594 [initandlisten] connection accepted from 165.225.128.186:43102 #1817 (8 connections now open) m31000| Fri Feb 22 11:58:05.693 [conn1] going to kill op: op: 37075.0 m31000| Fri Feb 22 11:58:05.694 [conn1] going to kill op: op: 37074.0 m31000| Fri Feb 22 11:58:05.694 [conn1816] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.694 [conn1816] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|48 } } cursorid:563655950365326 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 
10ms m31001| Fri Feb 22 11:58:05.694 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.694 [conn1816] end connection 165.225.128.186:33017 (7 connections now open) m31000| Fri Feb 22 11:58:05.695 [initandlisten] connection accepted from 165.225.128.186:51143 #1818 (8 connections now open) m31000| Fri Feb 22 11:58:05.696 [conn1817] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.696 [conn1817] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|57 } } cursorid:564046388442768 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.696 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.696 [conn1817] end connection 165.225.128.186:43102 (7 connections now open) m31000| Fri Feb 22 11:58:05.696 [initandlisten] connection accepted from 165.225.128.186:51224 #1819 (8 connections now open) m31000| Fri Feb 22 11:58:05.794 [conn1] going to kill op: op: 37124.0 m31000| Fri Feb 22 11:58:05.795 [conn1] going to kill op: op: 37126.0 m31000| Fri Feb 22 11:58:05.795 [conn1] going to kill op: op: 37123.0 m31000| Fri Feb 22 11:58:05.797 [conn1818] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.797 [conn1818] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|67 } } cursorid:564480417901870 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:05.797 [conn1818] ClientCursor::find(): cursor not found in map '564480417901870' (ok after a drop) m31001| Fri Feb 22 11:58:05.797 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.797 [conn1818] end connection 165.225.128.186:51143 (7 connections now open) m31000| Fri Feb 22 11:58:05.797 [initandlisten] connection accepted from 
165.225.128.186:42174 #1820 (8 connections now open) m31000| Fri Feb 22 11:58:05.797 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.798 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|67 } } cursorid:564431962267504 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:05.798 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.799 [conn1819] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.799 [conn1819] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|67 } } cursorid:564483738838812 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.799 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.799 [conn1819] end connection 165.225.128.186:51224 (7 connections now open) m31000| Fri Feb 22 11:58:05.799 [initandlisten] connection accepted from 165.225.128.186:51046 #1821 (8 connections now open) m31000| Fri Feb 22 11:58:05.895 [conn1] going to kill op: op: 37163.0 m31000| Fri Feb 22 11:58:05.895 [conn1] going to kill op: op: 37165.0 m31000| Fri Feb 22 11:58:05.900 [conn1820] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.900 [conn1820] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|77 } } cursorid:564917098167554 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:05.900 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.900 [conn1820] end connection 165.225.128.186:42174 (7 connections now open) m31000| Fri Feb 22 11:58:05.900 [initandlisten] connection accepted from 165.225.128.186:36890 #1822 (8 
connections now open) m31000| Fri Feb 22 11:58:05.901 [conn1821] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:05.901 [conn1821] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|78 } } cursorid:564922132049748 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:05.902 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:05.902 [conn1821] end connection 165.225.128.186:51046 (7 connections now open) m31000| Fri Feb 22 11:58:05.902 [initandlisten] connection accepted from 165.225.128.186:59261 #1823 (8 connections now open) m31000| Fri Feb 22 11:58:05.996 [conn1] going to kill op: op: 37202.0 m31000| Fri Feb 22 11:58:05.996 [conn1] going to kill op: op: 37203.0 m31000| Fri Feb 22 11:58:06.002 [conn1822] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.002 [conn1822] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|87 } } cursorid:565356927385680 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:06.003 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.003 [conn1822] end connection 165.225.128.186:36890 (7 connections now open) m31000| Fri Feb 22 11:58:06.003 [initandlisten] connection accepted from 165.225.128.186:35676 #1824 (8 connections now open) m31000| Fri Feb 22 11:58:06.004 [conn1823] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.004 [conn1823] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|88 } } cursorid:565359682891487 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:06.004 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| 
Fri Feb 22 11:58:06.004 [conn1823] end connection 165.225.128.186:59261 (7 connections now open) m31000| Fri Feb 22 11:58:06.005 [initandlisten] connection accepted from 165.225.128.186:50243 #1825 (8 connections now open) m31000| Fri Feb 22 11:58:06.097 [conn1] going to kill op: op: 37241.0 m31000| Fri Feb 22 11:58:06.097 [conn1] going to kill op: op: 37242.0 m31000| Fri Feb 22 11:58:06.106 [conn1824] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.106 [conn1824] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|98 } } cursorid:565794903786724 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:06.106 [conn1824] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:58:06.106 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.106 [conn1824] end connection 165.225.128.186:35676 (7 connections now open) m31000| Fri Feb 22 11:58:06.107 [initandlisten] connection accepted from 165.225.128.186:41612 #1826 (8 connections now open) m31000| Fri Feb 22 11:58:06.107 [conn1825] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.107 [conn1825] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534285000|98 } } cursorid:565797955410215 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:06.107 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.107 [conn1825] end connection 165.225.128.186:50243 (7 connections now open) m31000| Fri Feb 22 11:58:06.107 [initandlisten] connection accepted from 165.225.128.186:64728 #1827 (8 connections now open) m31000| Fri Feb 22 11:58:06.198 [conn1] going to kill op: op: 37290.0 m31000| Fri Feb 22 11:58:06.198 [conn1] going to kill op: op: 37289.0 m31000| 
Fri Feb 22 11:58:06.198 [conn1] going to kill op: op: 37288.0 m31000| Fri Feb 22 11:58:06.199 [conn1826] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.199 [conn1826] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|10 } } cursorid:566231710913962 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:06.199 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.199 [conn1826] end connection 165.225.128.186:41612 (7 connections now open) m31000| Fri Feb 22 11:58:06.199 [conn1827] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.199 [conn1827] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|10 } } cursorid:566237492044767 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:06.199 [initandlisten] connection accepted from 165.225.128.186:58611 #1828 (8 connections now open) m31002| Fri Feb 22 11:58:06.199 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.199 [conn1827] end connection 165.225.128.186:64728 (7 connections now open) m31000| Fri Feb 22 11:58:06.200 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.200 [initandlisten] connection accepted from 165.225.128.186:47820 #1829 (8 connections now open) m31000| Fri Feb 22 11:58:06.200 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|10 } } cursorid:566185832508136 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:06.200 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.299 [conn1] going to kill op: op: 37330.0 m31000| Fri Feb 22 11:58:06.299 [conn1] 
going to kill op: op: 37329.0 m31000| Fri Feb 22 11:58:06.301 [conn1828] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.301 [conn1828] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|19 } } cursorid:566631810415084 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:57 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:06.302 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.302 [conn1828] end connection 165.225.128.186:58611 (7 connections now open) m31000| Fri Feb 22 11:58:06.302 [conn1829] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.302 [conn1829] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|19 } } cursorid:566632384667152 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:06.302 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:06.302 [conn1829] end connection 165.225.128.186:47820 (6 connections now open) m31000| Fri Feb 22 11:58:06.302 [initandlisten] connection accepted from 165.225.128.186:53435 #1830 (8 connections now open) m31000| Fri Feb 22 11:58:06.302 [initandlisten] connection accepted from 165.225.128.186:59279 #1831 (8 connections now open) m31000| Fri Feb 22 11:58:06.399 [conn1] going to kill op: op: 37368.0 m31000| Fri Feb 22 11:58:06.399 [conn1] going to kill op: op: 37367.0 m31000| Fri Feb 22 11:58:06.404 [conn1830] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:06.404 [conn1830] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|29 } } cursorid:567069645318918 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:06.404 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one 
m31000| Fri Feb 22 11:58:06.404 [conn1831] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.404 [conn1830] end connection 165.225.128.186:53435 (7 connections now open)
m31000| Fri Feb 22 11:58:06.404 [conn1831] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|29 } } cursorid:567070264693712 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:06.404 [conn1831] ClientCursor::find(): cursor not found in map '567070264693712' (ok after a drop)
m31002| Fri Feb 22 11:58:06.405 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.405 [conn1831] end connection 165.225.128.186:59279 (6 connections now open)
m31000| Fri Feb 22 11:58:06.405 [initandlisten] connection accepted from 165.225.128.186:64763 #1832 (7 connections now open)
m31000| Fri Feb 22 11:58:06.405 [initandlisten] connection accepted from 165.225.128.186:33806 #1833 (8 connections now open)
m31000| Fri Feb 22 11:58:06.500 [conn1] going to kill op: op: 37405.0
m31000| Fri Feb 22 11:58:06.500 [conn1] going to kill op: op: 37406.0
m31000| Fri Feb 22 11:58:06.507 [conn1832] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.507 [conn1833] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.507 [conn1832] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|39 } } cursorid:567507228348029 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:06.507 [conn1833] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|39 } } cursorid:567507960825467 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:06.507 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:58:06.507 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.507 [conn1832] end connection 165.225.128.186:64763 (7 connections now open)
m31000| Fri Feb 22 11:58:06.507 [conn1833] end connection 165.225.128.186:33806 (7 connections now open)
m31000| Fri Feb 22 11:58:06.508 [initandlisten] connection accepted from 165.225.128.186:62655 #1834 (7 connections now open)
m31000| Fri Feb 22 11:58:06.508 [initandlisten] connection accepted from 165.225.128.186:65076 #1835 (8 connections now open)
m31000| Fri Feb 22 11:58:06.601 [conn1] going to kill op: op: 37444.0
m31000| Fri Feb 22 11:58:06.701 [conn1] going to kill op: op: 37473.0
m31000| Fri Feb 22 11:58:06.702 [conn1] going to kill op: op: 37474.0
m31000| Fri Feb 22 11:58:06.702 [conn1834] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.702 [conn1835] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.702 [conn1835] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|49 } } cursorid:567946090612676 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:06.702 [conn1834] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|49 } } cursorid:567945590111605 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:06.703 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:06.703 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.703 [conn1835] end connection 165.225.128.186:65076 (7 connections now open)
m31000| Fri Feb 22 11:58:06.703 [conn1834] end connection 165.225.128.186:62655 (7 connections now open)
m31000| Fri Feb 22 11:58:06.703 [initandlisten] connection accepted from 165.225.128.186:61198 #1836 (7 connections now open)
m31000| Fri Feb 22 11:58:06.703 [initandlisten] connection accepted from 165.225.128.186:48156 #1837 (8 connections now open)
m31000| Fri Feb 22 11:58:06.802 [conn1] going to kill op: op: 37512.0
m31000| Fri Feb 22 11:58:06.803 [conn1] going to kill op: op: 37514.0
m31000| Fri Feb 22 11:58:06.803 [conn1] going to kill op: op: 37511.0
m31000| Fri Feb 22 11:58:06.805 [conn1836] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.805 [conn1836] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|69 } } cursorid:568769688065674 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:06.805 [conn1837] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.805 [conn1837] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|69 } } cursorid:568769990715920 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:06.805 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.806 [conn1836] end connection 165.225.128.186:61198 (7 connections now open)
m31001| Fri Feb 22 11:58:06.806 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.806 [conn1837] end connection 165.225.128.186:48156 (6 connections now open)
m31000| Fri Feb 22 11:58:06.806 [initandlisten] connection accepted from 165.225.128.186:43890 #1838 (7 connections now open)
m31000| Fri Feb 22 11:58:06.806 [initandlisten] connection accepted from 165.225.128.186:39401 #1839 (8 connections now open)
m31000| Fri Feb 22 11:58:06.809 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.809 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|79 } } cursorid:569156786146881 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:06.809 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:58:06.809 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.904 [conn1] going to kill op: op: 37553.0
m31000| Fri Feb 22 11:58:06.904 [conn1] going to kill op: op: 37552.0
m31000| Fri Feb 22 11:58:06.908 [conn1838] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.908 [conn1838] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|79 } } cursorid:569209273018315 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:06.908 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.908 [conn1838] end connection 165.225.128.186:43890 (7 connections now open)
m31000| Fri Feb 22 11:58:06.908 [conn1839] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:06.908 [conn1839] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|79 } } cursorid:569209237836274 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:70 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:06.909 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:06.909 [conn1839] end connection 165.225.128.186:39401 (6 connections now open)
m31000| Fri Feb 22 11:58:06.909 [initandlisten] connection accepted from 165.225.128.186:55746 #1840 (8 connections now open)
m31000| Fri Feb 22 11:58:06.909 [initandlisten] connection accepted from 165.225.128.186:60991 #1841 (8 connections now open)
m31000| Fri Feb 22 11:58:07.004 [conn1] going to kill op: op: 37590.0
m31000| Fri Feb 22 11:58:07.005 [conn1] going to kill op: op: 37591.0
m31000| Fri Feb 22 11:58:07.011 [conn1840] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.011 [conn1840] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|89 } } cursorid:569646092075993 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.011 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.011 [conn1841] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.011 [conn1841] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|89 } } cursorid:569647311981836 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.011 [conn1840] end connection 165.225.128.186:55746 (7 connections now open)
m31001| Fri Feb 22 11:58:07.012 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.012 [conn1841] end connection 165.225.128.186:60991 (6 connections now open)
m31000| Fri Feb 22 11:58:07.012 [initandlisten] connection accepted from 165.225.128.186:35117 #1842 (7 connections now open)
m31000| Fri Feb 22 11:58:07.012 [initandlisten] connection accepted from 165.225.128.186:42295 #1843 (8 connections now open)
m31000| Fri Feb 22 11:58:07.105 [conn1] going to kill op: op: 37628.0
m31000| Fri Feb 22 11:58:07.106 [conn1] going to kill op: op: 37629.0
m31000| Fri Feb 22 11:58:07.115 [conn1842] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.115 [conn1842] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|99 } } cursorid:570084157111640 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:68 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.115 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.115 [conn1843] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.115 [conn1842] end connection 165.225.128.186:35117 (7 connections now open)
m31000| Fri Feb 22 11:58:07.115 [conn1843] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534286000|99 } } cursorid:570085316139882 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:07.115 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.115 [conn1843] end connection 165.225.128.186:42295 (6 connections now open)
m31000| Fri Feb 22 11:58:07.115 [initandlisten] connection accepted from 165.225.128.186:60543 #1844 (7 connections now open)
m31000| Fri Feb 22 11:58:07.116 [initandlisten] connection accepted from 165.225.128.186:50002 #1845 (8 connections now open)
m31000| Fri Feb 22 11:58:07.206 [conn1] going to kill op: op: 37665.0
m31000| Fri Feb 22 11:58:07.206 [conn1] going to kill op: op: 37666.0
m31000| Fri Feb 22 11:58:07.208 [conn1844] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.208 [conn1844] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|10 } } cursorid:570522699455901 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.208 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.208 [conn1844] end connection 165.225.128.186:60543 (7 connections now open)
m31000| Fri Feb 22 11:58:07.208 [conn1845] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.208 [conn1845] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|10 } } cursorid:570522014264416 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.208 [conn1845] ClientCursor::find(): cursor not found in map '570522014264416' (ok after a drop)
m31000| Fri Feb 22 11:58:07.208 [initandlisten] connection accepted from 165.225.128.186:38498 #1846 (8 connections now open)
m31001| Fri Feb 22 11:58:07.208 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.208 [conn1845] end connection 165.225.128.186:50002 (7 connections now open)
m31000| Fri Feb 22 11:58:07.209 [initandlisten] connection accepted from 165.225.128.186:35123 #1847 (8 connections now open)
m31000| Fri Feb 22 11:58:07.307 [conn1] going to kill op: op: 37717.0
m31000| Fri Feb 22 11:58:07.307 [conn1] going to kill op: op: 37716.0
m31000| Fri Feb 22 11:58:07.307 [conn1] going to kill op: op: 37715.0
m31000| Fri Feb 22 11:58:07.311 [conn1846] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.311 [conn1846] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|19 } } cursorid:570918132074311 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.311 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.311 [conn1846] end connection 165.225.128.186:38498 (7 connections now open)
m31000| Fri Feb 22 11:58:07.311 [conn1847] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.311 [conn1847] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|19 } } cursorid:570918776849228 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:07.311 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.311 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.311 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|19 } } cursorid:570865721183555 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.311 [conn1847] end connection 165.225.128.186:35123 (6 connections now open)
m31000| Fri Feb 22 11:58:07.311 [initandlisten] connection accepted from 165.225.128.186:49439 #1848 (8 connections now open)
m31000| Fri Feb 22 11:58:07.311 [initandlisten] connection accepted from 165.225.128.186:46977 #1849 (8 connections now open)
m31002| Fri Feb 22 11:58:07.312 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.408 [conn1] going to kill op: op: 37755.0
m31000| Fri Feb 22 11:58:07.408 [conn1] going to kill op: op: 37756.0
m31000| Fri Feb 22 11:58:07.413 [conn1848] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.413 [conn1848] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|30 } } cursorid:571356070599392 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:51 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.413 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.414 [conn1848] end connection 165.225.128.186:49439 (7 connections now open)
m31000| Fri Feb 22 11:58:07.414 [conn1849] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.414 [conn1849] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|30 } } cursorid:571355794777136 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:07.414 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.414 [conn1849] end connection 165.225.128.186:46977 (6 connections now open)
m31000| Fri Feb 22 11:58:07.414 [initandlisten] connection accepted from 165.225.128.186:38144 #1850 (8 connections now open)
m31000| Fri Feb 22 11:58:07.414 [initandlisten] connection accepted from 165.225.128.186:47348 #1851 (8 connections now open)
m31000| Fri Feb 22 11:58:07.509 [conn1] going to kill op: op: 37793.0
m31000| Fri Feb 22 11:58:07.509 [conn1] going to kill op: op: 37794.0
m31000| Fri Feb 22 11:58:07.516 [conn1850] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.516 [conn1850] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|40 } } cursorid:571793917012670 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.516 [conn1851] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.516 [conn1851] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|40 } } cursorid:571794561885847 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.516 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.517 [conn1850] end connection 165.225.128.186:38144 (7 connections now open)
m31000| Fri Feb 22 11:58:07.517 [conn1851] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:58:07.517 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.517 [conn1851] end connection 165.225.128.186:47348 (6 connections now open)
m31000| Fri Feb 22 11:58:07.517 [initandlisten] connection accepted from 165.225.128.186:37980 #1852 (7 connections now open)
m31000| Fri Feb 22 11:58:07.517 [initandlisten] connection accepted from 165.225.128.186:48940 #1853 (8 connections now open)
m31000| Fri Feb 22 11:58:07.610 [conn1] going to kill op: op: 37832.0
m31000| Fri Feb 22 11:58:07.610 [conn1] going to kill op: op: 37831.0
m31000| Fri Feb 22 11:58:07.619 [conn1852] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.619 [conn1852] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|50 } } cursorid:572231923735461 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.619 [conn1853] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.619 [conn1853] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|50 } } cursorid:572231570550072 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.619 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:07.620 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.620 [conn1852] end connection 165.225.128.186:37980 (7 connections now open)
m31000| Fri Feb 22 11:58:07.620 [conn1853] end connection 165.225.128.186:48940 (6 connections now open)
m31000| Fri Feb 22 11:58:07.620 [initandlisten] connection accepted from 165.225.128.186:58640 #1854 (7 connections now open)
m31000| Fri Feb 22 11:58:07.620 [initandlisten] connection accepted from 165.225.128.186:60717 #1855 (8 connections now open)
m31000| Fri Feb 22 11:58:07.710 [conn1] going to kill op: op: 37867.0
m31000| Fri Feb 22 11:58:07.711 [conn1] going to kill op: op: 37866.0
m31000| Fri Feb 22 11:58:07.712 [conn1855] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.712 [conn1855] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|60 } } cursorid:572671008718572 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.713 [conn1854] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.713 [conn1854] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|60 } } cursorid:572670430080456 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:07.713 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.713 [conn1855] end connection 165.225.128.186:60717 (7 connections now open)
m31002| Fri Feb 22 11:58:07.713 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.713 [conn1854] end connection 165.225.128.186:58640 (6 connections now open)
m31000| Fri Feb 22 11:58:07.713 [initandlisten] connection accepted from 165.225.128.186:33982 #1856 (7 connections now open)
m31000| Fri Feb 22 11:58:07.713 [initandlisten] connection accepted from 165.225.128.186:48663 #1857 (8 connections now open)
m31000| Fri Feb 22 11:58:07.811 [conn1] going to kill op: op: 37904.0
m31000| Fri Feb 22 11:58:07.811 [conn1] going to kill op: op: 37905.0
m31000| Fri Feb 22 11:58:07.815 [conn1856] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.816 [conn1856] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|69 } } cursorid:573065767472005 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:07.816 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.816 [conn1856] end connection 165.225.128.186:33982 (7 connections now open)
m31000| Fri Feb 22 11:58:07.816 [conn1857] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.816 [conn1857] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|69 } } cursorid:573064797336848 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:07.816 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.816 [initandlisten] connection accepted from 165.225.128.186:34723 #1858 (8 connections now open)
m31000| Fri Feb 22 11:58:07.816 [conn1857] end connection 165.225.128.186:48663 (7 connections now open)
m31000| Fri Feb 22 11:58:07.816 [initandlisten] connection accepted from 165.225.128.186:65331 #1859 (8 connections now open)
m31000| Fri Feb 22 11:58:07.912 [conn1] going to kill op: op: 37958.0
m31000| Fri Feb 22 11:58:07.912 [conn1] going to kill op: op: 37956.0
m31000| Fri Feb 22 11:58:07.912 [conn1] going to kill op: op: 37957.0
m31000| Fri Feb 22 11:58:07.919 [conn1858] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.919 [conn1858] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|79 } } cursorid:573503036181551 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:83 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.919 [conn1859] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.919 [conn1859] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|79 } } cursorid:573504528414471 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:07.919 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.919 [conn1858] end connection 165.225.128.186:34723 (7 connections now open)
m31000| Fri Feb 22 11:58:07.919 [conn1859] ClientCursor::find(): cursor not found in map '573504528414471' (ok after a drop)
m31002| Fri Feb 22 11:58:07.919 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:07.919 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:07.919 [conn1859] end connection 165.225.128.186:65331 (6 connections now open)
m31000| Fri Feb 22 11:58:07.919 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|79 } } cursorid:573452651380385 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:59 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:07.919 [initandlisten] connection accepted from 165.225.128.186:57622 #1860 (7 connections now open)
m31000| Fri Feb 22 11:58:07.920 [initandlisten] connection accepted from 165.225.128.186:45622 #1861 (8 connections now open)
m31001| Fri Feb 22 11:58:07.920 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.013 [conn1] going to kill op: op: 37997.0
m31000| Fri Feb 22 11:58:08.013 [conn1] going to kill op: op: 37996.0
m31000| Fri Feb 22 11:58:08.022 [conn1861] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.022 [conn1861] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|90 } } cursorid:573940883571555 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:08.022 [conn1860] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.022 [conn1860] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534287000|90 } } cursorid:573942886686711 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.022 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:08.022 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.022 [conn1861] end connection 165.225.128.186:45622 (7 connections now open)
m31000| Fri Feb 22 11:58:08.022 [conn1860] end connection 165.225.128.186:57622 (7 connections now open)
m31000| Fri Feb 22 11:58:08.023 [initandlisten] connection accepted from 165.225.128.186:39054 #1862 (7 connections now open)
m31000| Fri Feb 22 11:58:08.023 [initandlisten] connection accepted from 165.225.128.186:64773 #1863 (8 connections now open)
m31000| Fri Feb 22 11:58:08.114 [conn1] going to kill op: op: 38032.0
m31000| Fri Feb 22 11:58:08.114 [conn1] going to kill op: op: 38033.0
m31000| Fri Feb 22 11:58:08.115 [conn1862] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.115 [conn1863] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.115 [conn1862] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|2 } } cursorid:574380505021357 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:08.115 [conn1863] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|2 } } cursorid:574379511676970 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:88 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.115 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:08.116 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.116 [conn1862] end connection 165.225.128.186:39054 (7 connections now open)
m31000| Fri Feb 22 11:58:08.116 [conn1863] end connection 165.225.128.186:64773 (7 connections now open)
m31000| Fri Feb 22 11:58:08.116 [initandlisten] connection accepted from 165.225.128.186:35411 #1864 (7 connections now open)
m31000| Fri Feb 22 11:58:08.116 [initandlisten] connection accepted from 165.225.128.186:55501 #1865 (8 connections now open)
m31000| Fri Feb 22 11:58:08.215 [conn1] going to kill op: op: 38072.0
m31000| Fri Feb 22 11:58:08.215 [conn1] going to kill op: op: 38073.0
m31000| Fri Feb 22 11:58:08.219 [conn1865] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.219 [conn1865] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|11 } } cursorid:574775081452870 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.219 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.219 [conn1865] end connection 165.225.128.186:55501 (7 connections now open)
m31000| Fri Feb 22 11:58:08.219 [conn1864] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.219 [conn1864] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|11 } } cursorid:574775522629130 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:08.219 [conn1864] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31000| Fri Feb 22 11:58:08.219 [initandlisten] connection accepted from 165.225.128.186:63270 #1866 (8 connections now open)
m31001| Fri Feb 22 11:58:08.219 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.219 [conn1864] end connection 165.225.128.186:35411 (7 connections now open)
m31000| Fri Feb 22 11:58:08.220 [initandlisten] connection accepted from 165.225.128.186:38156 #1867 (8 connections now open)
m31000| Fri Feb 22 11:58:08.316 [conn1] going to kill op: op: 38111.0
m31000| Fri Feb 22 11:58:08.316 [conn1] going to kill op: op: 38114.0
m31000| Fri Feb 22 11:58:08.316 [conn1] going to kill op: op: 38112.0
m31000| Fri Feb 22 11:58:08.321 [conn1866] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.321 [conn1866] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|21 } } cursorid:575208904499588 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.322 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.322 [conn1866] end connection 165.225.128.186:63270 (7 connections now open)
m31000| Fri Feb 22 11:58:08.322 [conn1867] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.322 [conn1867] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|21 } } cursorid:575212891564050 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:08.322 [initandlisten] connection accepted from 165.225.128.186:48228 #1868 (8 connections now open)
m31001| Fri Feb 22 11:58:08.322 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.322 [conn1867] end connection 165.225.128.186:38156 (7 connections now open)
m31000| Fri Feb 22 11:58:08.323 [initandlisten] connection accepted from 165.225.128.186:37015 #1869 (8 connections now open)
m31000| Fri Feb 22 11:58:08.323 [conn12] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.323 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|31 } } cursorid:575600424954864 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:66 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.323 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.417 [conn1] going to kill op: op: 38153.0
m31000| Fri Feb 22 11:58:08.417 [conn1] going to kill op: op: 38152.0
m31000| Fri Feb 22 11:58:08.424 [conn1868] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.424 [conn1868] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|31 } } cursorid:575645948265059 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:58 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.424 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.424 [conn1868] end connection 165.225.128.186:48228 (7 connections now open)
m31000| Fri Feb 22 11:58:08.425 [initandlisten] connection accepted from 165.225.128.186:49206 #1870 (8 connections now open)
m31000| Fri Feb 22 11:58:08.425 [conn1869] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.425 [conn1869] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|31 } } cursorid:575652311295863 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:60 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:08.425 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.425 [conn1869] end connection 165.225.128.186:37015 (7 connections now open)
m31000| Fri Feb 22 11:58:08.426 [initandlisten] connection accepted from 165.225.128.186:51770 #1871 (8 connections now open)
m31000| Fri Feb 22 11:58:08.518 [conn1] going to kill op: op: 38191.0
m31000| Fri Feb 22 11:58:08.518 [conn1] going to kill op: op: 38190.0
m31000| Fri Feb 22 11:58:08.527 [conn1870] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.527 [conn1870] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|41 } } cursorid:576084728099752 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.527 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.527 [conn1870] end connection 165.225.128.186:49206 (7 connections now open)
m31000| Fri Feb 22 11:58:08.533 [initandlisten] connection accepted from 165.225.128.186:61841 #1872 (8 connections now open)
m31000| Fri Feb 22 11:58:08.618 [conn1] going to kill op: op: 38223.0
m31000| Fri Feb 22 11:58:08.619 [conn1] going to kill op: op: 38224.0
m31000| Fri Feb 22 11:58:08.620 [conn1871] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.620 [conn1871] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|41 } } cursorid:576088559810398 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:08.620 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.620 [conn1871] end connection 165.225.128.186:51770 (7 connections now open)
m31000| Fri Feb 22 11:58:08.620 [initandlisten] connection accepted from 165.225.128.186:33866 #1873 (8 connections now open)
m31000| Fri Feb 22 11:58:08.625 [conn1872] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.625 [conn1872] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|51 } } cursorid:576523238982002 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:08.625 [conn1872] ClientCursor::find(): cursor not found in map '576523238982002' (ok after a drop)
m31002| Fri Feb 22 11:58:08.625 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.625 [conn1872] end connection 165.225.128.186:61841 (7 connections now open)
m31000| Fri Feb 22 11:58:08.625 [initandlisten] connection accepted from 165.225.128.186:46786 #1874 (8 connections now open)
m31000| Fri Feb 22 11:58:08.719 [conn1] going to kill op: op: 38262.0
m31000| Fri Feb 22 11:58:08.719 [conn1] going to kill op: op: 38261.0
m31000| Fri Feb 22 11:58:08.722 [conn1873] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.722 [conn1873] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|61 } } cursorid:576870464256979 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:08.722 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.723 [conn1873] end connection 165.225.128.186:33866 (7 connections now open)
m31000| Fri Feb 22 11:58:08.723 [initandlisten] connection accepted from 165.225.128.186:53071 #1875 (8 connections now open)
m31000| Fri Feb 22 11:58:08.727 [conn1874] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.727 [conn1874] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|61 } } cursorid:576875331569765 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:52 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.727 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.727 [conn1874] end connection 165.225.128.186:46786 (7 connections now open)
m31000| Fri Feb 22 11:58:08.728 [initandlisten] connection accepted from 165.225.128.186:64154 #1876 (8 connections now open)
m31000| Fri Feb 22 11:58:08.820 [conn1] going to kill op: op: 38299.0
m31000| Fri Feb 22 11:58:08.820 [conn1] going to kill op: op: 38297.0
m31000| Fri Feb 22 11:58:08.820 [conn1876] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.820 [conn1876] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|71 } } cursorid:577270043430582 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:106 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.820 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.821 [conn1876] end connection 165.225.128.186:64154 (7 connections now open)
m31000| Fri Feb 22 11:58:08.821 [initandlisten] connection accepted from 165.225.128.186:46594 #1877 (8 connections now open)
m31000| Fri Feb 22 11:58:08.825 [conn1875] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.825 [conn1875] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|71 } } cursorid:577267086165885 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:48 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:08.825 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.825 [conn1875] end connection 165.225.128.186:53071 (7 connections now open)
m31000| Fri Feb 22 11:58:08.825 [initandlisten] connection accepted from 165.225.128.186:34247 #1878 (8 connections now open)
m31000| Fri Feb 22 11:58:08.921 [conn1] going to kill op: op: 38338.0
m31000| Fri Feb 22 11:58:08.921 [conn1] going to kill op: op: 38337.0
m31000| Fri Feb 22 11:58:08.924 [conn1877] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.924 [conn1877] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|80 } } cursorid:577661841126056 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:08.924 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.924 [conn1877] end connection 165.225.128.186:46594 (7 connections now open)
m31000| Fri Feb 22 11:58:08.924 [initandlisten] connection accepted from 165.225.128.186:35088 #1879 (8 connections now open)
m31000| Fri Feb 22 11:58:08.927 [conn1878] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:08.927 [conn1878] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|81 } } cursorid:577665258131046 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:08.927 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:08.928 [conn1878] end connection 165.225.128.186:34247 (7 connections now open)
m31000| Fri Feb 22 11:58:08.928 [initandlisten] connection accepted from 165.225.128.186:33967 #1880 (8 connections now open)
m31000| Fri Feb 22 11:58:09.022 [conn1] going to kill op: op: 38387.0
m31000| Fri Feb 22 11:58:09.022 [conn1] going to kill op: op: 38384.0
m31000| Fri Feb 22 11:58:09.022 [conn1] going to kill op: op: 38386.0
m31000| Fri Feb 22 11:58:09.023 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:09.023 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|91 } } cursorid:578051173773019 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:79 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:09.023 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:58:09.023 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:09.027 [conn1879] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:09.027 [conn1879] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|91 } } cursorid:578056803314028 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:09.027 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:09.027 [conn1879] end connection 165.225.128.186:35088 (7 connections now open)
m31000| Fri Feb 22 11:58:09.027 [initandlisten] connection accepted from 165.225.128.186:59514 #1881 (8 connections now open)
m31000| Fri Feb 22 11:58:09.030 [conn1880] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:09.030 [conn1880] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534288000|91 } } cursorid:578061256245922 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:09.030 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:09.030 [conn1880] end connection 165.225.128.186:33967 (7 connections now open)
m31000| Fri Feb 22 11:58:09.030 [initandlisten] connection accepted from 165.225.128.186:36614 #1882 (8 connections now open)
m31000| Fri Feb 22 11:58:09.123 [conn1] going to kill op: op: 38426.0
m31000| Fri Feb 22 11:58:09.123 [conn1] going to kill op: op: 38425.0
m31000| Fri Feb 22 11:58:09.129 [conn1881] { $err: "operation was interrupted", code: 11601 }
m31000|
Fri Feb 22 11:58:09.129 [conn1881] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|2 } } cursorid:578452554919698 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:43 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:09.129 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.130 [conn1881] end connection 165.225.128.186:59514 (7 connections now open) m31000| Fri Feb 22 11:58:09.130 [initandlisten] connection accepted from 165.225.128.186:38065 #1883 (8 connections now open) m31000| Fri Feb 22 11:58:09.132 [conn1882] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.132 [conn1882] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|2 } } cursorid:578456769616623 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:34 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.132 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.132 [conn1882] end connection 165.225.128.186:36614 (7 connections now open) m31000| Fri Feb 22 11:58:09.132 [initandlisten] connection accepted from 165.225.128.186:51151 #1884 (8 connections now open) m31000| Fri Feb 22 11:58:09.224 [conn1] going to kill op: op: 38465.0 m31000| Fri Feb 22 11:58:09.224 [conn1] going to kill op: op: 38463.0 m31000| Fri Feb 22 11:58:09.224 [conn1884] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.224 [conn1884] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|12 } } cursorid:578850688281518 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.225 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.225 [conn1884] end connection 165.225.128.186:51151 (7 connections now open) m31000| Fri Feb 22 
11:58:09.225 [initandlisten] connection accepted from 165.225.128.186:39708 #1885 (8 connections now open) m31000| Fri Feb 22 11:58:09.232 [conn1883] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.232 [conn1883] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|12 } } cursorid:578847288341402 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:09.232 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.232 [conn1883] end connection 165.225.128.186:38065 (7 connections now open) m31000| Fri Feb 22 11:58:09.233 [initandlisten] connection accepted from 165.225.128.186:58971 #1886 (8 connections now open) m31000| Fri Feb 22 11:58:09.324 [conn1] going to kill op: op: 38501.0 m31000| Fri Feb 22 11:58:09.325 [conn1] going to kill op: op: 38500.0 m31000| Fri Feb 22 11:58:09.327 [conn1885] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.327 [conn1885] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|21 } } cursorid:579241898620690 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.327 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.327 [conn1885] end connection 165.225.128.186:39708 (7 connections now open) m31000| Fri Feb 22 11:58:09.328 [initandlisten] connection accepted from 165.225.128.186:53343 #1887 (8 connections now open) m31000| Fri Feb 22 11:58:09.425 [conn1] going to kill op: op: 38545.0 m31000| Fri Feb 22 11:58:09.426 [conn1] going to kill op: op: 38546.0 m31000| Fri Feb 22 11:58:09.426 [conn1] going to kill op: op: 38544.0 m31000| Fri Feb 22 11:58:09.427 [conn1886] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.427 [conn1886] getmore local.oplog.rs query: { 
ts: { $gte: Timestamp 1361534289000|22 } } cursorid:579245334163947 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:78 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:09.427 [conn1886] ClientCursor::find(): cursor not found in map '579245334163947' (ok after a drop) m31002| Fri Feb 22 11:58:09.427 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.427 [conn1886] end connection 165.225.128.186:58971 (7 connections now open) m31000| Fri Feb 22 11:58:09.427 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.427 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|31 } } cursorid:579632537746358 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:09.428 [initandlisten] connection accepted from 165.225.128.186:55068 #1888 (8 connections now open) m31002| Fri Feb 22 11:58:09.428 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.430 [conn1887] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.430 [conn1887] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|31 } } cursorid:579637254964212 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:50 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.430 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.430 [conn1887] end connection 165.225.128.186:53343 (7 connections now open) m31000| Fri Feb 22 11:58:09.430 [initandlisten] connection accepted from 165.225.128.186:45820 #1889 (8 connections now open) m31000| Fri Feb 22 11:58:09.526 [conn1] going to kill op: op: 38584.0 m31000| Fri Feb 22 11:58:09.527 [conn1] going to kill op: op: 38585.0 m31000| Fri Feb 22 11:58:09.530 [conn1888] { $err: "operation was interrupted", 
code: 11601 } m31000| Fri Feb 22 11:58:09.530 [conn1888] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|41 } } cursorid:580027622663010 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:09.530 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.530 [conn1888] end connection 165.225.128.186:55068 (7 connections now open) m31000| Fri Feb 22 11:58:09.531 [initandlisten] connection accepted from 165.225.128.186:54444 #1890 (8 connections now open) m31000| Fri Feb 22 11:58:09.533 [conn1889] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.533 [conn1889] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|41 } } cursorid:580032605918366 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.533 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.533 [conn1889] end connection 165.225.128.186:45820 (7 connections now open) m31000| Fri Feb 22 11:58:09.534 [initandlisten] connection accepted from 165.225.128.186:61791 #1891 (8 connections now open) m31000| Fri Feb 22 11:58:09.627 [conn1] going to kill op: op: 38623.0 m31000| Fri Feb 22 11:58:09.628 [conn1] going to kill op: op: 38622.0 m31000| Fri Feb 22 11:58:09.633 [conn1890] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.633 [conn1890] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|51 } } cursorid:580466753281494 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:09.633 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.633 [conn1890] end connection 165.225.128.186:54444 (7 connections now open) 
m31000| Fri Feb 22 11:58:09.634 [initandlisten] connection accepted from 165.225.128.186:48325 #1892 (8 connections now open) m31000| Fri Feb 22 11:58:09.636 [conn1891] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.636 [conn1891] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|51 } } cursorid:580471079176361 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.636 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.636 [conn1891] end connection 165.225.128.186:61791 (7 connections now open) m31000| Fri Feb 22 11:58:09.636 [initandlisten] connection accepted from 165.225.128.186:34116 #1893 (8 connections now open) m31000| Fri Feb 22 11:58:09.728 [conn1] going to kill op: op: 38659.0 m31000| Fri Feb 22 11:58:09.728 [conn1] going to kill op: op: 38660.0 m31000| Fri Feb 22 11:58:09.736 [conn1892] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.736 [conn1892] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|61 } } cursorid:580861523496062 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:09.736 [conn1892] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31002| Fri Feb 22 11:58:09.736 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.736 [conn1892] end connection 165.225.128.186:48325 (7 connections now open) m31000| Fri Feb 22 11:58:09.737 [initandlisten] connection accepted from 165.225.128.186:47098 #1894 (8 connections now open) m31000| Fri Feb 22 11:58:09.829 [conn1] going to kill op: op: 38693.0 m31000| Fri Feb 22 11:58:09.829 [conn1] going to kill op: op: 38694.0 m31000| Fri Feb 22 11:58:09.830 [conn1893] { $err: "operation was interrupted", code: 11601 } m31000| Fri 
Feb 22 11:58:09.830 [conn1893] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|61 } } cursorid:580865551452742 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:72 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.830 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.830 [conn1893] end connection 165.225.128.186:34116 (7 connections now open) m31000| Fri Feb 22 11:58:09.830 [initandlisten] connection accepted from 165.225.128.186:64994 #1895 (8 connections now open) m31000| Fri Feb 22 11:58:09.839 [conn1894] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.839 [conn1894] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|71 } } cursorid:581255772534710 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:84 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:09.839 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.839 [conn1894] end connection 165.225.128.186:47098 (7 connections now open) m31000| Fri Feb 22 11:58:09.840 [initandlisten] connection accepted from 165.225.128.186:61869 #1896 (8 connections now open) m31000| Fri Feb 22 11:58:09.930 [conn1] going to kill op: op: 38731.0 m31000| Fri Feb 22 11:58:09.930 [conn1] going to kill op: op: 38730.0 m31000| Fri Feb 22 11:58:09.932 [conn1896] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.932 [conn1896] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|81 } } cursorid:581651929247542 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:55 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:09.932 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.932 [conn1896] end connection 165.225.128.186:61869 (7 connections now open) m31000| Fri Feb 22 
11:58:09.932 [initandlisten] connection accepted from 165.225.128.186:65225 #1897 (8 connections now open) m31000| Fri Feb 22 11:58:09.933 [conn1895] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:09.933 [conn1895] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|81 } } cursorid:581646056760962 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:75 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:09.934 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:09.934 [conn1895] end connection 165.225.128.186:64994 (7 connections now open) m31000| Fri Feb 22 11:58:09.934 [initandlisten] connection accepted from 165.225.128.186:33239 #1898 (8 connections now open) m31000| Fri Feb 22 11:58:10.031 [conn1] going to kill op: op: 38771.0 m31000| Fri Feb 22 11:58:10.031 [conn1] going to kill op: op: 38770.0 m31000| Fri Feb 22 11:58:10.031 [conn1] going to kill op: op: 38769.0 m31000| Fri Feb 22 11:58:10.035 [conn1897] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.035 [conn1897] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534289000|91 } } cursorid:582041737136670 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:91 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:10.035 [conn1897] getMore: cursorid not found local.oplog.rs 582041737136670 m31002| Fri Feb 22 11:58:10.035 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.035 [conn1897] end connection 165.225.128.186:65225 (7 connections now open) m31000| Fri Feb 22 11:58:10.035 [initandlisten] connection accepted from 165.225.128.186:63142 #1899 (8 connections now open) m31000| Fri Feb 22 11:58:10.036 [conn1898] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.036 [conn1898] getmore local.oplog.rs query: { ts: { $gte: Timestamp 
1361534289000|91 } } cursorid:582047076774910 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:54 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:10.036 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.036 [conn1898] end connection 165.225.128.186:33239 (7 connections now open) m31000| Fri Feb 22 11:58:10.037 [conn8] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.037 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|2 } } cursorid:582391076014293 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:10.037 [initandlisten] connection accepted from 165.225.128.186:62893 #1900 (8 connections now open) m31001| Fri Feb 22 11:58:10.038 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.132 [conn1] going to kill op: op: 38810.0 m31000| Fri Feb 22 11:58:10.132 [conn1] going to kill op: op: 38811.0 m31000| Fri Feb 22 11:58:10.137 [conn1899] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.137 [conn1899] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|3 } } cursorid:582480863836843 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:10.137 [conn1899] ClientCursor::find(): cursor not found in map '582480863836843' (ok after a drop) m31002| Fri Feb 22 11:58:10.137 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.138 [conn1899] end connection 165.225.128.186:63142 (7 connections now open) m31000| Fri Feb 22 11:58:10.138 [initandlisten] connection accepted from 165.225.128.186:55262 #1901 (8 connections now open) m31000| Fri Feb 22 11:58:10.139 [conn1900] { $err: "operation was interrupted", code: 11601 } m31000| Fri 
Feb 22 11:58:10.139 [conn1900] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|3 } } cursorid:582484110277513 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:47 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:10.139 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.139 [conn1900] end connection 165.225.128.186:62893 (7 connections now open) m31000| Fri Feb 22 11:58:10.139 [initandlisten] connection accepted from 165.225.128.186:57262 #1902 (8 connections now open) m31000| Fri Feb 22 11:58:10.233 [conn1] going to kill op: op: 38850.0 m31000| Fri Feb 22 11:58:10.233 [conn1] going to kill op: op: 38851.0 m31000| Fri Feb 22 11:58:10.240 [conn1901] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.240 [conn1901] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|13 } } cursorid:582918670684810 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:49 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:10.240 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.240 [conn1901] end connection 165.225.128.186:55262 (7 connections now open) m31000| Fri Feb 22 11:58:10.241 [initandlisten] connection accepted from 165.225.128.186:43387 #1903 (8 connections now open) m31000| Fri Feb 22 11:58:10.241 [conn1902] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.241 [conn1902] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|13 } } cursorid:582922131576603 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:10.241 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.241 [conn1902] end connection 165.225.128.186:57262 (7 connections now open) m31000| Fri Feb 22 11:58:10.242 
[initandlisten] connection accepted from 165.225.128.186:32892 #1904 (8 connections now open) m31000| Fri Feb 22 11:58:10.334 [conn1] going to kill op: op: 38887.0 m31000| Fri Feb 22 11:58:10.334 [conn1] going to kill op: op: 38889.0 m31000| Fri Feb 22 11:58:10.343 [conn1903] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.343 [conn1903] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|23 } } cursorid:583355466734787 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:10.343 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.343 [conn1903] end connection 165.225.128.186:43387 (7 connections now open) m31000| Fri Feb 22 11:58:10.343 [initandlisten] connection accepted from 165.225.128.186:37156 #1905 (8 connections now open) m31000| Fri Feb 22 11:58:10.435 [conn1] going to kill op: op: 38921.0 m31000| Fri Feb 22 11:58:10.435 [conn1] going to kill op: op: 38920.0 m31000| Fri Feb 22 11:58:10.435 [conn1905] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.435 [conn1905] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|33 } } cursorid:583793617565935 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:10.435 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.436 [conn1905] end connection 165.225.128.186:37156 (7 connections now open) m31000| Fri Feb 22 11:58:10.436 [conn1904] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.436 [conn1904] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|23 } } cursorid:583359674721856 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:69 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 
11:58:10.436 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.436 [initandlisten] connection accepted from 165.225.128.186:34509 #1906 (8 connections now open) m31000| Fri Feb 22 11:58:10.436 [conn1904] end connection 165.225.128.186:32892 (6 connections now open) m31000| Fri Feb 22 11:58:10.436 [initandlisten] connection accepted from 165.225.128.186:49655 #1907 (8 connections now open) m31000| Fri Feb 22 11:58:10.535 [conn1] going to kill op: op: 38970.0 m31000| Fri Feb 22 11:58:10.536 [conn1] going to kill op: op: 38968.0 m31000| Fri Feb 22 11:58:10.536 [conn1] going to kill op: op: 38969.0 m31000| Fri Feb 22 11:58:10.538 [conn1906] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.538 [conn1907] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.538 [conn1906] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|42 } } cursorid:584190292276556 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:76 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:10.538 [conn1907] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|42 } } cursorid:584189365833583 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:67 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:10.539 [conn1906] ClientCursor::find(): cursor not found in map '-1' (ok after a drop) m31001| Fri Feb 22 11:58:10.539 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31002| Fri Feb 22 11:58:10.539 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.539 [conn1907] end connection 165.225.128.186:49655 (7 connections now open) m31000| Fri Feb 22 11:58:10.539 [conn1906] end connection 165.225.128.186:34509 (7 connections now open) m31000| Fri Feb 22 11:58:10.539 [conn12] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 
11:58:10.539 [conn12] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|42 } } cursorid:584139098571889 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:56 nreturned:0 reslen:20 10ms m31000| Fri Feb 22 11:58:10.539 [initandlisten] connection accepted from 165.225.128.186:62271 #1908 (7 connections now open) m31000| Fri Feb 22 11:58:10.539 [initandlisten] connection accepted from 165.225.128.186:57057 #1909 (8 connections now open) m31002| Fri Feb 22 11:58:10.540 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.636 [conn1] going to kill op: op: 39009.0 m31000| Fri Feb 22 11:58:10.637 [conn1] going to kill op: op: 39010.0 m31000| Fri Feb 22 11:58:10.642 [conn1909] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.642 [conn1909] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|52 } } cursorid:584627835451966 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:10.642 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.642 [conn1909] end connection 165.225.128.186:57057 (7 connections now open) m31000| Fri Feb 22 11:58:10.642 [conn1908] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.642 [conn1908] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|52 } } cursorid:584626696935324 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:100 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:10.642 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.642 [conn1908] end connection 165.225.128.186:62271 (6 connections now open) m31000| Fri Feb 22 11:58:10.642 [initandlisten] connection accepted from 165.225.128.186:48426 #1910 (8 connections now open) m31000| Fri Feb 22 
11:58:10.643 [initandlisten] connection accepted from 165.225.128.186:59473 #1911 (8 connections now open) m31000| Fri Feb 22 11:58:10.737 [conn1] going to kill op: op: 39047.0 m31000| Fri Feb 22 11:58:10.737 [conn1] going to kill op: op: 39048.0 m31000| Fri Feb 22 11:58:10.745 [conn1910] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.745 [conn1910] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|63 } } cursorid:585065377557661 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:61 nreturned:0 reslen:20 10ms m31001| Fri Feb 22 11:58:10.745 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.745 [conn1910] end connection 165.225.128.186:48426 (7 connections now open) m31000| Fri Feb 22 11:58:10.745 [conn1911] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.745 [conn1911] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|63 } } cursorid:585065687915852 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:73 nreturned:0 reslen:20 10ms m31002| Fri Feb 22 11:58:10.745 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one m31000| Fri Feb 22 11:58:10.745 [initandlisten] connection accepted from 165.225.128.186:65481 #1912 (8 connections now open) m31000| Fri Feb 22 11:58:10.745 [conn1911] end connection 165.225.128.186:59473 (7 connections now open) m31000| Fri Feb 22 11:58:10.745 [initandlisten] connection accepted from 165.225.128.186:55146 #1913 (8 connections now open) m31000| Fri Feb 22 11:58:10.838 [conn1] going to kill op: op: 39085.0 m31000| Fri Feb 22 11:58:10.838 [conn1] going to kill op: op: 39086.0 m31000| Fri Feb 22 11:58:10.847 [conn1912] { $err: "operation was interrupted", code: 11601 } m31000| Fri Feb 22 11:58:10.847 [conn1912] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|73 } } cursorid:585503240633918 
ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:65 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:10.847 [conn1913] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:10.848 [conn1913] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|73 } } cursorid:585504300612252 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:46 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:10.848 [conn1913] ClientCursor::find(): cursor not found in map '585504300612252' (ok after a drop)
m31001| Fri Feb 22 11:58:10.848 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31002| Fri Feb 22 11:58:10.848 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:10.848 [conn1912] end connection 165.225.128.186:65481 (7 connections now open)
m31000| Fri Feb 22 11:58:10.848 [conn1913] end connection 165.225.128.186:55146 (7 connections now open)
m31000| Fri Feb 22 11:58:10.848 [initandlisten] connection accepted from 165.225.128.186:52119 #1914 (7 connections now open)
m31000| Fri Feb 22 11:58:10.848 [initandlisten] connection accepted from 165.225.128.186:58512 #1915 (8 connections now open)
m31000| Fri Feb 22 11:58:10.939 [conn1] going to kill op: op: 39121.0
m31000| Fri Feb 22 11:58:10.939 [conn1] going to kill op: op: 39120.0
m31000| Fri Feb 22 11:58:10.940 [conn1915] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:10.940 [conn1915] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|83 } } cursorid:585942084562985 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:10.940 [conn1914] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:10.940 [conn1914] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|83 } } cursorid:585942970613690 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:64 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:10.940 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:10.940 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:10.940 [conn1915] end connection 165.225.128.186:58512 (7 connections now open)
m31000| Fri Feb 22 11:58:10.940 [conn1914] end connection 165.225.128.186:52119 (6 connections now open)
m31000| Fri Feb 22 11:58:10.941 [initandlisten] connection accepted from 165.225.128.186:55279 #1916 (7 connections now open)
m31000| Fri Feb 22 11:58:10.941 [initandlisten] connection accepted from 165.225.128.186:34924 #1917 (8 connections now open)
m31000| Fri Feb 22 11:58:11.040 [conn1] going to kill op: op: 39158.0
m31000| Fri Feb 22 11:58:11.040 [conn1] going to kill op: op: 39159.0
m31000| Fri Feb 22 11:58:11.043 [conn1916] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:11.043 [conn1916] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|92 } } cursorid:586336535902285 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:71 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:11.043 [conn1917] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:11.043 [conn1917] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534290000|92 } } cursorid:586336692131457 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:74 nreturned:0 reslen:20 10ms
m31002| Fri Feb 22 11:58:11.043 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:11.043 [conn1916] end connection 165.225.128.186:55279 (7 connections now open)
m31001| Fri Feb 22 11:58:11.043 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:11.043 [conn1917] end connection 165.225.128.186:34924 (6 connections now open)
m31000| Fri Feb 22 11:58:11.044 [initandlisten] connection accepted from 165.225.128.186:52416 #1918 (7 connections now open)
m31000| Fri Feb 22 11:58:11.044 [initandlisten] connection accepted from 165.225.128.186:63044 #1919 (8 connections now open)
m31000| Fri Feb 22 11:58:11.140 [conn1] going to kill op: op: 39207.0
m31000| Fri Feb 22 11:58:11.141 [conn1] going to kill op: op: 39206.0
m31000| Fri Feb 22 11:58:11.141 [conn1] going to kill op: op: 39208.0
m31000| Fri Feb 22 11:58:11.146 [conn1919] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:11.147 [conn1919] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534291000|3 } } cursorid:586776206067191 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:53 nreturned:0 reslen:20 10ms
m31001| Fri Feb 22 11:58:11.147 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:11.147 [conn1919] end connection 165.225.128.186:63044 (7 connections now open)
m31000| Fri Feb 22 11:58:11.147 [conn8] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:11.147 [conn8] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534291000|3 } } cursorid:586722670671210 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:81 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:11.147 [conn1918] { $err: "operation was interrupted", code: 11601 }
m31000| Fri Feb 22 11:58:11.147 [conn1918] getmore local.oplog.rs query: { ts: { $gte: Timestamp 1361534291000|3 } } cursorid:586775159681125 ntoreturn:0 keyUpdates:0 exception: operation was interrupted code:11601 locks(micros) r:144 nreturned:0 reslen:20 10ms
m31000| Fri Feb 22 11:58:11.147 [initandlisten] connection accepted from 165.225.128.186:63852 #1920 (8 connections now open)
m31002| Fri Feb 22 11:58:11.147 [rsBackgroundSync] repl: old cursor isDead, will initiate a new one
m31000| Fri Feb 22 11:58:11.148 [conn1918] end connection 165.225.128.186:52416 (7 connections now open)
m31000| Fri Feb 22 11:58:11.148 [initandlisten] connection accepted from 165.225.128.186:54201 #1921 (8 connections now open)
m31000| Fri Feb 22 11:58:11.148 [conn8] ClientCursor::find(): cursor not found in map '-1' (ok after a drop)
m31001| Fri Feb 22 11:58:11.148 [rsSyncNotifier] repl: old cursor isDead, will initiate a new one
m31001| Fri Feb 22 11:58:12.267 [conn10] end connection 165.225.128.186:38255 (2 connections now open)
m31001| Fri Feb 22 11:58:12.267 [initandlisten] connection accepted from 165.225.128.186:60749 #12 (3 connections now open)
m31001| Fri Feb 22 11:58:15.843 [conn11] end connection 165.225.128.186:41757 (2 connections now open)
m31001| Fri Feb 22 11:58:15.844 [initandlisten] connection accepted from 165.225.128.186:62363 #13 (3 connections now open)
m31000| Fri Feb 22 11:58:18.059 [conn1479] end connection 165.225.128.186:60474 (7 connections now open)
m31000| Fri Feb 22 11:58:18.060 [initandlisten] connection accepted from 165.225.128.186:41762 #1922 (8 connections now open)
m31000| Fri Feb 22 11:58:28.269 [conn1679] end connection 165.225.128.186:36214 (7 connections now open)
m31000| Fri Feb 22 11:58:28.269 [initandlisten] connection accepted from 165.225.128.186:62813 #1923 (8 connections now open)
m31002| Fri Feb 22 11:58:31.846 [conn11] end connection 165.225.128.186:48725 (2 connections now open)
m31002| Fri Feb 22 11:58:31.846 [initandlisten] connection accepted from 165.225.128.186:54817 #13 (3 connections now open)
m31002| Fri Feb 22 11:58:32.069 [conn12] end connection 165.225.128.186:61788 (2 connections now open)
m31002| Fri Feb 22 11:58:32.070 [initandlisten] connection accepted from 165.225.128.186:58861 #14 (3 connections now open)
m31001| Fri Feb 22 11:58:42.271 [conn12] end connection 165.225.128.186:60749 (2 connections now open)
m31001| Fri Feb 22 11:58:42.271 [initandlisten] connection accepted from 165.225.128.186:50989 #14 (3 connections now open)
m31001| Fri Feb 22 11:58:45.847 [conn13] end connection 165.225.128.186:62363 (2 connections now open)
m31001| Fri Feb 22 11:58:45.847 [initandlisten] connection accepted from 165.225.128.186:47276 #15 (3 connections now open)
m31000| Fri Feb 22 11:58:48.064 [conn1922] end connection 165.225.128.186:41762 (7 connections now open)
m31000| Fri Feb 22 11:58:48.064 [initandlisten] connection accepted from 165.225.128.186:44240 #1924 (8 connections now open)
m31000| Fri Feb 22 11:58:58.272 [conn1923] end connection 165.225.128.186:62813 (7 connections now open)
m31000| Fri Feb 22 11:58:58.273 [initandlisten] connection accepted from 165.225.128.186:35887 #1925 (8 connections now open)
m31002| Fri Feb 22 11:59:01.849 [conn13] end connection 165.225.128.186:54817 (2 connections now open)
m31002| Fri Feb 22 11:59:01.849 [initandlisten] connection accepted from 165.225.128.186:62579 #15 (3 connections now open)
m31002| Fri Feb 22 11:59:02.073 [conn14] end connection 165.225.128.186:58861 (2 connections now open)
m31002| Fri Feb 22 11:59:02.074 [initandlisten] connection accepted from 165.225.128.186:65029 #16 (3 connections now open)
m31001| Fri Feb 22 11:59:12.274 [conn14] end connection 165.225.128.186:50989 (2 connections now open)
m31001| Fri Feb 22 11:59:12.275 [initandlisten] connection accepted from 165.225.128.186:53754 #16 (3 connections now open)
m31001| Fri Feb 22 11:59:15.851 [conn15] end connection 165.225.128.186:47276 (2 connections now open)
m31001| Fri Feb 22 11:59:15.851 [initandlisten] connection accepted from 165.225.128.186:58348 #17 (3 connections now open)
m31000| Fri Feb 22 11:59:18.067 [conn1924] end connection 165.225.128.186:44240 (7 connections now open)
m31000| Fri Feb 22 11:59:18.068 [initandlisten] connection accepted from 165.225.128.186:55847 #1926 (8 connections now open)
m31000| Fri Feb 22 11:59:28.276 [conn1925] end connection 165.225.128.186:35887 (7 connections now open)
m31000| Fri Feb 22 11:59:28.277 [initandlisten] connection accepted from 165.225.128.186:57308 #1927 (8 connections now open)
m31002| Fri Feb 22 11:59:31.853 [conn15] end connection 165.225.128.186:62579 (2 connections now open)
m31002| Fri Feb 22 11:59:31.853 [initandlisten] connection accepted from 165.225.128.186:55549 #17 (3 connections now open)
m31002| Fri Feb 22 11:59:32.077 [conn16] end connection 165.225.128.186:65029 (2 connections now open)
m31002| Fri Feb 22 11:59:32.078 [initandlisten] connection accepted from 165.225.128.186:33593 #18 (3 connections now open)
m31001| Fri Feb 22 11:59:42.278 [conn16] end connection 165.225.128.186:53754 (2 connections now open)
m31001| Fri Feb 22 11:59:42.278 [initandlisten] connection accepted from 165.225.128.186:60415 #18 (3 connections now open)
m31001| Fri Feb 22 11:59:45.855 [conn17] end connection 165.225.128.186:58348 (2 connections now open)
m31001| Fri Feb 22 11:59:45.855 [initandlisten] connection accepted from 165.225.128.186:43874 #19 (3 connections now open)
m31000| Fri Feb 22 11:59:48.071 [conn1926] end connection 165.225.128.186:55847 (7 connections now open)
m31000| Fri Feb 22 11:59:48.071 [initandlisten] connection accepted from 165.225.128.186:37743 #1928 (8 connections now open)
m31000| Fri Feb 22 11:59:58.280 [conn1927] end connection 165.225.128.186:57308 (7 connections now open)
m31000| Fri Feb 22 11:59:58.280 [initandlisten] connection accepted from 165.225.128.186:63305 #1929 (8 connections now open)
m31002| Fri Feb 22 12:00:01.856 [conn17] end connection 165.225.128.186:55549 (2 connections now open)
m31002| Fri Feb 22 12:00:01.857 [initandlisten] connection accepted from 165.225.128.186:50432 #19 (3 connections now open)
m31002| Fri Feb 22 12:00:02.081 [conn18] end connection 165.225.128.186:33593 (2 connections now open)
m31002| Fri Feb 22 12:00:02.081 [initandlisten] connection accepted from 165.225.128.186:38364 #20 (3 connections now open)
m31001| Fri Feb 22 12:00:12.286 [conn18] end connection 165.225.128.186:60415 (2 connections now open)
m31001| Fri Feb 22 12:00:12.286 [initandlisten] connection accepted from 165.225.128.186:50904 #20 (3 connections now open)
m31001| Fri Feb 22 12:00:15.858 [conn19] end connection 165.225.128.186:43874 (2 connections now open)
m31001| Fri Feb 22 12:00:15.859 [initandlisten] connection accepted from 165.225.128.186:54313 #21 (3 connections now open)
m31000| Fri Feb 22 12:00:18.075 [conn1928] end connection 165.225.128.186:37743 (7 connections now open)
m31000| Fri Feb 22 12:00:18.075 [initandlisten] connection accepted from 165.225.128.186:49789 #1930 (8 connections now open)
m31000| Fri Feb 22 12:00:28.284 [conn1929] end connection 165.225.128.186:63305 (7 connections now open)
m31000| Fri Feb 22 12:00:28.284 [initandlisten] connection accepted from 165.225.128.186:41834 #1931 (8 connections now open)
m31002| Fri Feb 22 12:00:31.864 [conn19] end connection 165.225.128.186:50432 (2 connections now open)
m31002| Fri Feb 22 12:00:31.864 [initandlisten] connection accepted from 165.225.128.186:52081 #21 (3 connections now open)
m31002| Fri Feb 22 12:00:32.085 [conn20] end connection 165.225.128.186:38364 (2 connections now open)
m31002| Fri Feb 22 12:00:32.085 [initandlisten] connection accepted from 165.225.128.186:45904 #22 (3 connections now open)
m31001| Fri Feb 22 12:00:42.290 [conn20] end connection 165.225.128.186:50904 (2 connections now open)
m31001| Fri Feb 22 12:00:42.290 [initandlisten] connection accepted from 165.225.128.186:42047 #22 (3 connections now open)
m31001| Fri Feb 22 12:00:45.862 [conn21] end connection 165.225.128.186:54313 (2 connections now open)
m31001| Fri Feb 22 12:00:45.862 [initandlisten] connection accepted from 165.225.128.186:50087 #23 (3 connections now open)
m31000| Fri Feb 22 12:00:48.079 [conn1930] end connection 165.225.128.186:49789 (7 connections now open)
m31000| Fri Feb 22 12:00:48.079 [initandlisten] connection accepted from 165.225.128.186:64028 #1932 (8 connections now open)
m31000| Fri Feb 22 12:00:58.288 [conn1931] end connection 165.225.128.186:41834 (7 connections now open)
m31000| Fri Feb 22 12:00:58.288 [initandlisten] connection accepted from 165.225.128.186:37592 #1933 (8 connections now open)
m31002| Fri Feb 22 12:01:01.867 [conn21] end connection 165.225.128.186:52081 (2 connections now open)
m31002| Fri Feb 22 12:01:01.867 [initandlisten] connection accepted from 165.225.128.186:45341 #23 (3 connections now open)
m31002| Fri Feb 22 12:01:02.088 [conn22] end connection 165.225.128.186:45904 (2 connections now open)
m31002| Fri Feb 22 12:01:02.089 [initandlisten] connection accepted from 165.225.128.186:48925 #24 (3 connections now open)
m31001| Fri Feb 22 12:01:12.293 [conn22] end connection 165.225.128.186:42047 (2 connections now open)
m31001| Fri Feb 22 12:01:12.294 [initandlisten] connection accepted from 165.225.128.186:46296 #24 (3 connections now open)
m31001| Fri Feb 22 12:01:15.866 [conn23] end connection 165.225.128.186:50087 (2 connections now open)
m31001| Fri Feb 22 12:01:15.866 [initandlisten] connection accepted from 165.225.128.186:34875 #25 (3 connections now open)
m31000| Fri Feb 22 12:01:18.083 [conn1932] end connection 165.225.128.186:64028 (7 connections now open)
m31000| Fri Feb 22 12:01:18.084 [initandlisten] connection accepted from 165.225.128.186:41821 #1934 (8 connections now open)
m31000| Fri Feb 22 12:01:28.292 [conn1933] end connection 165.225.128.186:37592 (7 connections now open)
m31000| Fri Feb 22 12:01:28.292 [initandlisten] connection accepted from 165.225.128.186:39119 #1935 (8 connections now open)
m31002| Fri Feb 22 12:01:31.871 [conn23] end connection 165.225.128.186:45341 (2 connections now open)
m31002| Fri Feb 22 12:01:31.871 [initandlisten] connection accepted from 165.225.128.186:33477 #25 (3 connections now open)
m31002| Fri Feb 22 12:01:32.092 [conn24] end connection 165.225.128.186:48925 (2 connections now open)
m31002| Fri Feb 22 12:01:32.092 [initandlisten] connection accepted from 165.225.128.186:37779 #26 (3 connections now open)
m31001| Fri Feb 22 12:01:42.297 [conn24] end connection 165.225.128.186:46296 (2 connections now open)
m31001| Fri Feb 22 12:01:42.298 [initandlisten] connection accepted from 165.225.128.186:47940 #26 (3 connections now open)
m31001| Fri Feb 22 12:01:45.870 [conn25] end connection 165.225.128.186:34875 (2 connections now open)
m31001| Fri Feb 22 12:01:45.870 [initandlisten] connection accepted from 165.225.128.186:33994 #27 (3 connections now open)
m31000| Fri Feb 22 12:01:48.087 [conn1934] end connection 165.225.128.186:41821 (7 connections now open)
m31000| Fri Feb 22 12:01:48.087 [initandlisten] connection accepted from 165.225.128.186:42421 #1936 (8 connections now open)
m31000| Fri Feb 22 12:01:58.295 [conn1935] end connection 165.225.128.186:39119 (7 connections now open)
m31000| Fri Feb 22 12:01:58.296 [initandlisten] connection accepted from 165.225.128.186:65133 #1937 (8 connections now open)
m31002| Fri Feb 22 12:02:01.874 [conn25] end connection 165.225.128.186:33477 (2 connections now open)
m31002| Fri Feb 22 12:02:01.874 [initandlisten] connection accepted from 165.225.128.186:38583 #27 (3 connections now open)
m31002| Fri Feb 22 12:02:02.095 [conn26] end connection 165.225.128.186:37779 (2 connections now open)
m31002| Fri Feb 22 12:02:02.095 [initandlisten] connection accepted from 165.225.128.186:39205 #28 (3 connections now open)
m31001| Fri Feb 22 12:02:12.301 [conn26] end connection 165.225.128.186:47940 (2 connections now open)
m31001| Fri Feb 22 12:02:12.301 [initandlisten] connection accepted from 165.225.128.186:47806 #28 (3 connections now open)
m31001| Fri Feb 22 12:02:15.873 [conn27] end connection 165.225.128.186:33994 (2 connections now open)
m31001| Fri Feb 22 12:02:15.874 [initandlisten] connection accepted from 165.225.128.186:47757 #29 (3 connections now open)
m31000| Fri Feb 22 12:02:18.092 [conn1936] end connection 165.225.128.186:42421 (7 connections now open)
m31000| Fri Feb 22 12:02:18.092 [initandlisten] connection accepted from 165.225.128.186:38711 #1938 (8 connections now open)
m31000| Fri Feb 22 12:02:28.299 [conn1937] end connection 165.225.128.186:65133 (7 connections now open)
m31000| Fri Feb 22 12:02:28.300 [initandlisten] connection accepted from 165.225.128.186:59396 #1939 (8 connections now open)
m31002| Fri Feb 22 12:02:31.880 [conn27] end connection 165.225.128.186:38583 (2 connections now open)
m31002| Fri Feb 22 12:02:31.880 [initandlisten] connection accepted from 165.225.128.186:34567 #29 (3 connections now open)
m31002| Fri Feb 22 12:02:32.099 [conn28] end connection 165.225.128.186:39205 (2 connections now open)
m31002| Fri Feb 22 12:02:32.099 [initandlisten] connection accepted from 165.225.128.186:33057 #30 (3 connections now open)
m31001| Fri Feb 22 12:02:42.304 [conn28] end connection 165.225.128.186:47806 (2 connections now open)
m31001| Fri Feb 22 12:02:42.305 [initandlisten] connection accepted from 165.225.128.186:50975 #30 (3 connections now open)
m31001| Fri Feb 22 12:02:45.877 [conn29] end connection 165.225.128.186:47757 (2 connections now open)
m31001| Fri Feb 22 12:02:45.878 [initandlisten] connection accepted from 165.225.128.186:46722 #31 (3 connections now open)
m31000| Fri Feb 22 12:02:48.096 [conn1938] end connection 165.225.128.186:38711 (7 connections now open)
m31000| Fri Feb 22 12:02:48.096 [initandlisten] connection accepted from 165.225.128.186:49564 #1940 (8 connections now open)
m31000| Fri Feb 22 12:02:58.303 [conn1939] end connection 165.225.128.186:59396 (7 connections now open)
m31000| Fri Feb 22 12:02:58.303 [initandlisten] connection accepted from 165.225.128.186:34857 #1941 (8 connections now open)
m31002| Fri Feb 22 12:03:01.884 [conn29] end connection 165.225.128.186:34567 (2 connections now open)
m31002| Fri Feb 22 12:03:01.884 [initandlisten] connection accepted from 165.225.128.186:36510 #31 (3 connections now open)
m31002| Fri Feb 22 12:03:02.103 [conn30] end connection 165.225.128.186:33057 (2 connections now open)
m31002| Fri Feb 22 12:03:02.103 [initandlisten] connection accepted from 165.225.128.186:59703 #32 (3 connections now open)
m31001| Fri Feb 22 12:03:12.315 [conn30] end connection 165.225.128.186:50975 (2 connections now open)
m31001| Fri Feb 22 12:03:12.316 [initandlisten] connection accepted from 165.225.128.186:56435 #32 (3 connections now open)
m31001| Fri Feb 22 12:03:15.881 [conn31] end connection 165.225.128.186:46722 (2 connections now open)
m31001| Fri Feb 22 12:03:15.881 [initandlisten] connection accepted from 165.225.128.186:51078 #33 (3 connections now open)
m31000| Fri Feb 22 12:03:18.100 [conn1940] end connection 165.225.128.186:49564 (7 connections now open)
m31000| Fri Feb 22 12:03:18.100 [initandlisten] connection accepted from 165.225.128.186:54191 #1942 (8 connections now open)
m31000| Fri Feb 22 12:03:28.307 [conn1941] end connection 165.225.128.186:34857 (7 connections now open)
m31000| Fri Feb 22 12:03:28.307 [initandlisten] connection accepted from 165.225.128.186:42644 #1943 (8 connections now open)
m31002| Fri Feb 22 12:03:31.887 [conn31] end connection 165.225.128.186:36510 (2 connections now open)
m31002| Fri Feb 22 12:03:31.888 [initandlisten] connection accepted from 165.225.128.186:47048 #33 (3 connections now open)
m31002| Fri Feb 22 12:03:32.106 [conn32] end connection 165.225.128.186:59703 (2 connections now open)
m31002| Fri Feb 22 12:03:32.107 [initandlisten] connection accepted from 165.225.128.186:38255 #34 (3 connections now open)
m31001| Fri Feb 22 12:03:42.319 [conn32] end connection 165.225.128.186:56435 (2 connections now open)
m31001| Fri Feb 22 12:03:42.319 [initandlisten] connection accepted from 165.225.128.186:54013 #34 (3 connections now open)
m31001| Fri Feb 22 12:03:45.885 [conn33] end connection 165.225.128.186:51078 (2 connections now open)
m31001| Fri Feb 22 12:03:45.885 [initandlisten] connection accepted from 165.225.128.186:49729 #35 (3 connections now open)
m31000| Fri Feb 22 12:03:48.104 [conn1942] end connection 165.225.128.186:54191 (7 connections now open)
m31000| Fri Feb 22 12:03:48.104 [initandlisten] connection accepted from 165.225.128.186:40609 #1944 (8 connections now open)
m31000| Fri Feb 22 12:03:58.310 [conn1943] end connection 165.225.128.186:42644 (7 connections now open)
m31000| Fri Feb 22 12:03:58.310 [initandlisten] connection accepted from 165.225.128.186:52491 #1945 (8 connections now open)
m31002| Fri Feb 22 12:04:01.891 [conn33] end connection 165.225.128.186:47048 (2 connections now open)
m31002| Fri Feb 22 12:04:01.891 [initandlisten] connection accepted from 165.225.128.186:55243 #35 (3 connections now open)
m31002| Fri Feb 22 12:04:02.110 [conn34] end connection 165.225.128.186:38255 (2 connections now open)
m31002| Fri Feb 22 12:04:02.110 [initandlisten] connection accepted from 165.225.128.186:41273 #36 (3 connections now open)
m31001| Fri Feb 22 12:04:12.323 [conn34] end connection 165.225.128.186:54013 (2 connections now open)
m31001| Fri Feb 22 12:04:12.323 [initandlisten] connection accepted from 165.225.128.186:36547 #36 (3 connections now open)
m31001| Fri Feb 22 12:04:15.889 [conn35] end connection 165.225.128.186:49729 (2 connections now open)
m31001| Fri Feb 22 12:04:15.889 [initandlisten] connection accepted from 165.225.128.186:58238 #37 (3 connections now open)
m31000| Fri Feb 22 12:04:18.108 [conn1944] end connection 165.225.128.186:40609 (7 connections now open)
m31000| Fri Feb 22 12:04:18.109 [initandlisten] connection accepted from 165.225.128.186:36316 #1946 (8 connections now open)
m31000| Fri Feb 22 12:04:28.314 [conn1945] end connection 165.225.128.186:52491 (7 connections now open)
m31000| Fri Feb 22 12:04:28.314 [initandlisten] connection accepted from 165.225.128.186:40929 #1947 (8 connections now open)
m31002| Fri Feb 22 12:04:31.895 [conn35] end connection 165.225.128.186:55243 (2 connections now open)
m31002| Fri Feb 22 12:04:31.895 [initandlisten] connection accepted from 165.225.128.186:52860 #37 (3 connections now open)
m31002| Fri Feb 22 12:04:32.114 [conn36] end connection 165.225.128.186:41273 (2 connections now open)
m31002| Fri Feb 22 12:04:32.115 [initandlisten] connection accepted from 165.225.128.186:51590 #38 (3 connections now open)
m31001| Fri Feb 22 12:04:42.327 [conn36] end connection 165.225.128.186:36547 (2 connections now open)
m31001| Fri Feb 22 12:04:42.335 [initandlisten] connection accepted from 165.225.128.186:36478 #38 (3 connections now open)
m31001| Fri Feb 22 12:04:45.892 [conn37] end connection 165.225.128.186:58238 (2 connections now open)
m31001| Fri Feb 22 12:04:45.893 [initandlisten] connection accepted from 165.225.128.186:34628 #39 (3 connections now open)
m31000| Fri Feb 22 12:04:48.112 [conn1946] end connection 165.225.128.186:36316 (7 connections now open)
m31000| Fri Feb 22 12:04:48.113 [initandlisten] connection accepted from 165.225.128.186:44655 #1948 (8 connections now open)
m31000| Fri Feb 22 12:04:58.318 [conn1947] end connection 165.225.128.186:40929 (7 connections now open)
m31000| Fri Feb 22 12:04:58.318 [initandlisten] connection accepted from 165.225.128.186:40495 #1949 (8 connections now open)
m31002| Fri Feb 22 12:05:01.898 [conn37] end connection 165.225.128.186:52860 (2 connections now open)
m31002| Fri Feb 22 12:05:01.898 [initandlisten] connection accepted from 165.225.128.186:36203 #39 (3 connections now open)
m31002| Fri Feb 22 12:05:02.118 [conn38] end connection 165.225.128.186:51590 (2 connections now open)
m31002| Fri Feb 22 12:05:02.118 [initandlisten] connection accepted from 165.225.128.186:53981 #40 (3 connections now open)
m31001| Fri Feb 22 12:05:12.339 [conn38] end connection 165.225.128.186:36478 (2 connections now open)
m31001| Fri Feb 22 12:05:12.339 [initandlisten] connection accepted from 165.225.128.186:38903 #40 (3 connections now open)
m31001| Fri Feb 22 12:05:15.896 [conn39] end connection 165.225.128.186:34628 (2 connections now open)
m31001| Fri Feb 22 12:05:15.896 [initandlisten] connection accepted from 165.225.128.186:63080 #41 (3 connections now open)
m31000| Fri Feb 22 12:05:18.116 [conn1948] end connection 165.225.128.186:44655 (7 connections now open)
m31000| Fri Feb 22 12:05:18.117 [initandlisten] connection accepted from 165.225.128.186:41559 #1950 (8 connections now open)
m31000| Fri Feb 22 12:05:28.321 [conn1949] end connection 165.225.128.186:40495 (7 connections now open)
m31000| Fri Feb 22 12:05:28.321 [initandlisten] connection accepted from 165.225.128.186:42242 #1951 (8 connections now open)
m31002| Fri Feb 22 12:05:31.902 [conn39] end connection 165.225.128.186:36203 (2 connections now open)
m31002| Fri Feb 22 12:05:31.902 [initandlisten] connection accepted from 165.225.128.186:59305 #41 (3 connections now open)
m31002| Fri Feb 22 12:05:32.122 [conn40] end connection 165.225.128.186:53981 (2 connections now open)
m31002| Fri Feb 22 12:05:32.122 [initandlisten] connection accepted from 165.225.128.186:64911 #42 (3 connections now open)
m31001| Fri Feb 22 12:05:42.343 [conn40] end connection 165.225.128.186:38903 (2 connections now open)
m31001| Fri Feb 22 12:05:42.343 [initandlisten] connection accepted from 165.225.128.186:58939 #42 (3 connections now open)
m31001| Fri Feb 22 12:05:45.900 [conn41] end connection 165.225.128.186:63080 (2 connections now open)
m31001| Fri Feb 22 12:05:45.900 [initandlisten] connection accepted from 165.225.128.186:58067 #43 (3 connections now open)
m31000| Fri Feb 22 12:05:48.120 [conn1950] end connection 165.225.128.186:41559 (7 connections now open)
m31000| Fri Feb 22 12:05:48.120 [initandlisten] connection accepted from 165.225.128.186:54551 #1952 (8 connections now open)
m31000| Fri Feb 22 12:05:58.325 [conn1951] end connection 165.225.128.186:42242 (7 connections now open)
m31000| Fri Feb 22 12:05:58.325 [initandlisten] connection accepted from 165.225.128.186:52295 #1953 (8 connections now open)
m31002| Fri Feb 22 12:06:01.905 [conn41] end connection 165.225.128.186:59305 (2 connections now open)
m31002| Fri Feb 22 12:06:01.905 [initandlisten] connection accepted from 165.225.128.186:37301 #43 (3 connections now open)
m31002| Fri Feb 22 12:06:02.125 [conn42] end connection 165.225.128.186:64911 (2 connections now open)
m31002| Fri Feb 22 12:06:02.126 [initandlisten] connection accepted from 165.225.128.186:61157 #44 (3 connections now open)
m31001| Fri Feb 22 12:06:12.347 [conn42] end connection 165.225.128.186:58939 (2 connections now open)
m31001| Fri Feb 22 12:06:12.347 [initandlisten] connection accepted from 165.225.128.186:35783 #44 (3 connections now open)
m31001| Fri Feb 22 12:06:15.904 [conn43] end connection 165.225.128.186:58067 (2 connections now open)
m31001| Fri Feb 22 12:06:15.904 [initandlisten] connection accepted from 165.225.128.186:43357 #45 (3 connections now open)
m31000| Fri Feb 22 12:06:18.124 [conn1952] end connection 165.225.128.186:54551 (7 connections now open)
m31000| Fri Feb 22 12:06:18.124 [initandlisten] connection accepted from 165.225.128.186:37290 #1954 (8 connections now open)
m31000| Fri Feb 22 12:06:28.329 [conn1953] end connection 165.225.128.186:52295 (7 connections now open)
m31000| Fri Feb 22 12:06:28.329 [initandlisten] connection accepted from 165.225.128.186:51245 #1955 (8 connections now open)
m31002| Fri Feb 22 12:06:31.908 [conn43] end connection 165.225.128.186:37301 (2 connections now open)
m31002| Fri Feb 22 12:06:31.909 [initandlisten] connection accepted from 165.225.128.186:52787 #45 (3 connections now open)
m31002| Fri Feb 22 12:06:32.129 [conn44] end connection 165.225.128.186:61157 (2 connections now open)
m31002| Fri Feb 22 12:06:32.129 [initandlisten] connection accepted from 165.225.128.186:45214 #46 (3 connections now open)
m31001| Fri Feb 22 12:06:42.350 [conn44] end connection 165.225.128.186:35783 (2 connections now open)
m31001| Fri Feb 22 12:06:42.351 [initandlisten] connection accepted from 165.225.128.186:47948 #46 (3 connections now open)
m31001| Fri Feb 22 12:06:45.907 [conn45] end connection 165.225.128.186:43357 (2 connections now open)
m31001| Fri Feb 22 12:06:45.908 [initandlisten] connection accepted from 165.225.128.186:54284 #47 (3 connections now open)
m31000| Fri Feb 22 12:06:48.128 [conn1954] end connection 165.225.128.186:37290 (7 connections now open)
m31000| Fri Feb 22 12:06:48.128 [initandlisten] connection accepted from 165.225.128.186:47879 #1956 (8 connections now open)
m31000| Fri Feb 22 12:06:58.332 [conn1955] end connection 165.225.128.186:51245 (7 connections now open)
m31000| Fri Feb 22 12:06:58.333 [initandlisten] connection accepted from 165.225.128.186:33350 #1957 (8 connections now open)
m31002| Fri Feb 22 12:07:01.912 [conn45] end connection 165.225.128.186:52787 (2 connections now open)
m31002| Fri Feb 22 12:07:01.912 [initandlisten] connection accepted from 165.225.128.186:59425 #47 (3 connections now open)
m31002| Fri Feb 22 12:07:02.133 [conn46] end connection 165.225.128.186:45214 (2 connections now open)
m31002| Fri Feb 22 12:07:02.133 [initandlisten] connection accepted from 165.225.128.186:42206 #48 (3 connections now open)
m31001| Fri Feb 22 12:07:12.354 [conn46] end connection 165.225.128.186:47948 (2 connections now open)
m31001| Fri Feb 22 12:07:12.355 [initandlisten] connection accepted from 165.225.128.186:40147 #48 (3 connections now open)
m31001| Fri Feb 22 12:07:15.911 [conn47] end connection 165.225.128.186:54284 (2 connections now open)
m31001| Fri Feb 22 12:07:15.912 [initandlisten] connection accepted from 165.225.128.186:59743 #49 (3 connections now open)
m31000| Fri Feb 22 12:07:18.132 [conn1956] end connection 165.225.128.186:47879 (7 connections now open)
m31000| Fri Feb 22 12:07:18.132 [initandlisten] connection accepted from 165.225.128.186:47518 #1958 (8 connections now open)
m31000| Fri Feb 22 12:07:28.337 [conn1957] end connection 165.225.128.186:33350 (7 connections now open)
m31000| Fri Feb 22 12:07:28.337 [initandlisten] connection accepted from 165.225.128.186:40642 #1959 (8 connections now open)
m31002| Fri Feb 22 12:07:31.915 [conn47] end connection 165.225.128.186:59425 (2 connections now open)
m31002| Fri Feb 22 12:07:31.916 [initandlisten] connection accepted from 165.225.128.186:61970 #49 (3 connections now open)
m31002| Fri Feb 22 12:07:32.136 [conn48] end connection 165.225.128.186:42206 (2 connections now open)
m31002| Fri Feb 22 12:07:32.136 [initandlisten] connection accepted from 165.225.128.186:52844 #50 (3 connections now open)
m31001| Fri Feb 22 12:07:42.358 [conn48] end connection 165.225.128.186:40147 (2 connections now open)
m31001| Fri Feb 22 12:07:42.358 [initandlisten] connection accepted from 165.225.128.186:53182 #50 (3 connections now open)
m31001| Fri Feb 22 12:07:45.915 [conn49] end connection 165.225.128.186:59743 (2 connections now open)
m31001| Fri Feb 22 12:07:45.915 [initandlisten] connection accepted from 165.225.128.186:52920 #51 (3 connections now open)
m31000| Fri Feb 22 12:07:48.136 [conn1958] end connection 165.225.128.186:47518 (7 connections now open)
m31000| Fri Feb 22 12:07:48.136 [initandlisten] connection accepted from 165.225.128.186:55737 #1960 (8 connections now open)
m31000| Fri Feb 22 12:07:58.340 [conn1959] end connection 165.225.128.186:40642 (7 connections now open)
m31000| Fri Feb 22 12:07:58.340 [initandlisten] connection accepted from 165.225.128.186:46188 #1961 (8 connections now open)
m31002| Fri Feb 22 12:08:01.919 [conn49] end connection 165.225.128.186:61970 (2 connections now open)
m31002| Fri Feb 22 12:08:01.919 [initandlisten] connection accepted from 165.225.128.186:39823 #51 (3 connections now open)
m31002| Fri Feb 22 12:08:02.140 [conn50] end connection 165.225.128.186:52844 (2 connections now open)
m31002| Fri Feb 22 12:08:02.140 [initandlisten] connection accepted from 165.225.128.186:57577 #52 (3 connections now open)
m31001| Fri Feb 22 12:08:12.362 [conn50] end connection 165.225.128.186:53182 (2 connections now open)
m31001| Fri Feb 22 12:08:12.362 [initandlisten] connection accepted from 165.225.128.186:47500 #52 (3 connections now open)
m31001| Fri Feb 22 12:08:15.926 [conn51] end connection 165.225.128.186:52920 (2 connections now open)
m31001| Fri Feb 22 12:08:15.927 [initandlisten] connection accepted from 165.225.128.186:54985 #53 (3 connections now open)
m31000| Fri Feb 22 12:08:18.140 [conn1960] end connection 165.225.128.186:55737 (7 connections now open)
m31000| Fri Feb 22 12:08:18.140 [initandlisten] connection accepted from 165.225.128.186:47123 #1962 (8 connections now open)
m31000| Fri Feb 22 12:08:28.344 [conn1961] end connection 165.225.128.186:46188 (7 connections now open)
m31000| Fri Feb 22 12:08:28.344 [initandlisten] connection accepted from 165.225.128.186:51742 #1963 (8 connections now open)
m31002| Fri Feb 22 12:08:31.922 [conn51] end connection 165.225.128.186:39823 (2 connections now open)
m31002| Fri Feb 22 12:08:31.923 [initandlisten] connection accepted from 165.225.128.186:37325 #53 (3 connections now open)
m31002| Fri Feb 22 12:08:32.144 [conn52] end connection 165.225.128.186:57577 (2 connections now open)
m31002| Fri Feb 22 12:08:32.144 [initandlisten] connection accepted from 165.225.128.186:54964 #54 (3 connections now open)
m31001| Fri Feb 22 12:08:42.366 [conn52] end connection 165.225.128.186:47500 (2 connections now open)
m31001| Fri Feb 22 12:08:42.366 [initandlisten] connection accepted from 165.225.128.186:54137 #54 (3 connections now open)
m31001| Fri Feb 22 12:08:45.930 [conn53] end connection 165.225.128.186:54985 (2 connections now open)
m31001| Fri Feb 22 12:08:45.930 [initandlisten] connection accepted from 165.225.128.186:55725 #55 (3 connections now open)
m31000| Fri Feb 22 12:08:48.144 [conn1962] end connection 165.225.128.186:47123 (7 connections now open)
m31000| Fri Feb 22 12:08:48.144 [initandlisten] connection accepted from 165.225.128.186:49269 #1964 (8 connections now open)
m31000| Fri Feb 22 12:08:58.348 [conn1963] end connection 165.225.128.186:51742 (7 connections now open)
m31000| Fri Feb 22 12:08:58.348 [initandlisten] connection accepted from 165.225.128.186:60681 #1965 (8 connections now open)
m31002| Fri Feb 22 12:09:01.926 [conn53] end connection 165.225.128.186:37325 (2 connections now open)
m31002| Fri Feb 22 12:09:01.926 [initandlisten] connection accepted from 165.225.128.186:51263 #55 (3 connections now open)
m31002| Fri Feb 22 12:09:02.147 [conn54] end connection 165.225.128.186:54964 (2 connections now open)
m31002| Fri Feb 22 12:09:02.148 [initandlisten] connection accepted from 165.225.128.186:62684 #56 (3 connections now open)
m31001| Fri Feb 22 12:09:12.370 [conn54] end connection 165.225.128.186:54137 (2 connections now open)
m31001| Fri Feb 22 12:09:12.370 [initandlisten] connection accepted from 165.225.128.186:50254 #56 (3 connections now open)
m31001| Fri Feb 22 12:09:15.934 [conn55] end connection 165.225.128.186:55725 (2 connections now open)
m31001| Fri Feb 22 12:09:15.934 [initandlisten] connection accepted from 165.225.128.186:36132 #57 (3 connections now open)
m31000| Fri Feb 22 12:09:18.148 [conn1964] end connection 165.225.128.186:49269 (7 connections now open)
m31000| Fri Feb 22 12:09:18.148 [initandlisten] connection accepted from 165.225.128.186:63557 #1966 (8 connections now open)
m31000| Fri Feb 22 12:09:28.351 [conn1965] end connection 165.225.128.186:60681 (7 connections now open)
m31000| Fri Feb 22 12:09:28.351 [initandlisten] connection accepted from 165.225.128.186:43256 #1967 (8 connections now open)
m31002| Fri Feb 22 12:09:31.929 [conn55] end connection 165.225.128.186:51263 (2 connections now open)
m31002| Fri Feb 22 12:09:31.929 [initandlisten] connection accepted from 165.225.128.186:61922 #57 (3 connections now open)
m31002| Fri Feb 22 12:09:32.151 [conn56] end connection 165.225.128.186:62684 (2 connections now open)
m31002| Fri Feb 22 12:09:32.151 [initandlisten] connection accepted from 165.225.128.186:59627 #58 (3 connections now open)
m31001| Fri Feb 22 12:09:42.374 [conn56] end connection 165.225.128.186:50254 (2 connections now open)
m31001| Fri Feb 22 12:09:42.374 [initandlisten] connection accepted from 165.225.128.186:63865 #58 (3 connections now open)
m31001| Fri Feb 22 12:09:45.937 [conn57] end connection 165.225.128.186:36132 (2 connections now open)
m31001| Fri Feb 22 12:09:45.938 [initandlisten] connection accepted from 165.225.128.186:61828 #59 (3 connections now open)
m31000| Fri Feb 22 12:09:48.152 [conn1966] end connection 165.225.128.186:63557 (7 connections now open)
m31000| Fri Feb 22 12:09:48.152 [initandlisten] connection accepted from 165.225.128.186:53015 #1968 (8 connections now open)
m31000| Fri Feb 22 12:09:58.355 [conn1967] end connection 165.225.128.186:43256 (7 connections now open)
m31000| Fri Feb 22 12:09:58.355 [initandlisten] connection accepted from 165.225.128.186:52368 #1969 (8 connections now open)
m31002| Fri Feb 22 12:10:01.933 [conn57] end connection 165.225.128.186:61922 (2 connections now open)
m31002| Fri Feb 22 12:10:01.933 [initandlisten] connection accepted from 165.225.128.186:37975 #59 (3 connections now open)
m31002| Fri Feb 22 12:10:02.155 [conn58] end connection 165.225.128.186:59627 (2 connections now open)
m31002| Fri Feb 22 12:10:02.155 [initandlisten] connection accepted from 165.225.128.186:37861 #60 (3 connections now open)
m31001| Fri Feb 22 12:10:12.377 [conn58] end connection 165.225.128.186:63865 (2 connections now open)
m31001| Fri Feb 22 12:10:12.378 [initandlisten] connection accepted from 165.225.128.186:46746 #60 (3 connections now open)
m31001| Fri Feb 22 12:10:15.941 [conn59] end connection 165.225.128.186:61828 (2 connections now open)
m31001| Fri Feb 22 12:10:15.941 [initandlisten] connection accepted from 165.225.128.186:33295 #61 (3 connections now open)
m31000| Fri Feb 22 12:10:18.155 [conn1968] end connection 165.225.128.186:53015 (7 connections now open)
m31000| Fri Feb 22 12:10:18.156 [initandlisten] connection accepted from 165.225.128.186:50405 #1970 (8 connections now open)
m31000| Fri Feb 22 12:10:28.358 [conn1969] end connection 165.225.128.186:52368 (7 connections now open)
m31000| Fri Feb 22 12:10:28.359 [initandlisten] connection accepted from 165.225.128.186:47946 #1971 (8 connections now open)
m31002| Fri Feb 22 12:10:31.936 [conn59] end connection 165.225.128.186:37975 (2 connections now open)
m31002| Fri Feb 22 12:10:31.937 [initandlisten] connection accepted from 165.225.128.186:58844 #61 (3 connections now open)
m31002| Fri Feb 22 12:10:32.158 [conn60] end connection 165.225.128.186:37861 (2 connections now open)
m31002| Fri Feb 22 12:10:32.159 [initandlisten] connection accepted from 165.225.128.186:56130 #62 (3 connections now open)
m31001| Fri Feb 22 12:10:42.381 [conn60] end connection 165.225.128.186:46746 (2 connections now open)
m31001| Fri Feb 22 12:10:42.381 [initandlisten] connection accepted from 165.225.128.186:64212 #62 (3 connections now open)
m31001| Fri Feb 22 12:10:45.945 [conn61] end connection 165.225.128.186:33295 (2 connections now open)
m31001| Fri Feb 22 12:10:45.945 [initandlisten] connection accepted from 165.225.128.186:39927 #63 (3 connections now open)
m31000| Fri Feb 22 12:10:48.159 [conn1970] end connection 165.225.128.186:50405 (7 connections now open)
m31000| Fri Feb 22 12:10:48.160 [initandlisten] connection accepted from 165.225.128.186:39940 #1972 (8 connections now open)
m31000| Fri Feb 22 12:10:58.362 [conn1971] end connection 165.225.128.186:47946 (7 connections now open)
m31000| Fri Feb 22 12:10:58.362 [initandlisten] connection accepted from 165.225.128.186:56460 #1973 (8 connections now open)
m31002| Fri Feb 22 12:11:01.940 [conn61] end connection 165.225.128.186:58844 (2 connections now open)
m31002| Fri Feb 22 12:11:01.940 [initandlisten] connection accepted from 165.225.128.186:65418 #63 (3 connections now open)
m31002| Fri Feb 22 12:11:02.162 [conn62] end connection 165.225.128.186:56130 (2 connections now open)
m31002| Fri Feb 22 12:11:02.162 [initandlisten] connection accepted from 165.225.128.186:57594 #64 (3 connections now open) m31001| Fri Feb 22 12:11:12.385 [conn62] end connection 165.225.128.186:64212 (2 connections now open) m31001| Fri Feb 22 12:11:12.385 [initandlisten] connection accepted from 165.225.128.186:50194 #64 (3 connections now open) m31001| Fri Feb 22 12:11:15.948 [conn63] end connection 165.225.128.186:39927 (2 connections now open) m31001| Fri Feb 22 12:11:15.949 [initandlisten] connection accepted from 165.225.128.186:50255 #65 (3 connections now open) m31000| Fri Feb 22 12:11:18.164 [conn1972] end connection 165.225.128.186:39940 (7 connections now open) m31000| Fri Feb 22 12:11:18.164 [initandlisten] connection accepted from 165.225.128.186:44056 #1974 (8 connections now open) m31000| Fri Feb 22 12:11:28.365 [conn1973] end connection 165.225.128.186:56460 (7 connections now open) m31000| Fri Feb 22 12:11:28.366 [initandlisten] connection accepted from 165.225.128.186:46185 #1975 (8 connections now open) m31002| Fri Feb 22 12:11:31.944 [conn63] end connection 165.225.128.186:65418 (2 connections now open) m31002| Fri Feb 22 12:11:31.944 [initandlisten] connection accepted from 165.225.128.186:34464 #65 (3 connections now open) m31002| Fri Feb 22 12:11:32.166 [conn64] end connection 165.225.128.186:57594 (2 connections now open) m31002| Fri Feb 22 12:11:32.166 [initandlisten] connection accepted from 165.225.128.186:55719 #66 (3 connections now open) m31000| Fri Feb 22 12:11:33.126 [conn13] insert test.test ninserted:1 keyUpdates:0 locks(micros) w:381907 190ms m31001| Fri Feb 22 12:11:42.389 [conn64] end connection 165.225.128.186:50194 (2 connections now open) m31001| Fri Feb 22 12:11:42.389 [initandlisten] connection accepted from 165.225.128.186:50146 #66 (3 connections now open) m31001| Fri Feb 22 12:11:45.952 [conn65] end connection 165.225.128.186:50255 (2 connections now open) m31001| Fri Feb 22 12:11:45.952 [initandlisten] connection 
accepted from 165.225.128.186:49962 #67 (3 connections now open) m31000| Fri Feb 22 12:11:48.168 [conn1974] end connection 165.225.128.186:44056 (7 connections now open) m31000| Fri Feb 22 12:11:48.168 [initandlisten] connection accepted from 165.225.128.186:62348 #1976 (8 connections now open) m31000| Fri Feb 22 12:11:58.378 [conn1975] end connection 165.225.128.186:46185 (7 connections now open) m31000| Fri Feb 22 12:11:58.378 [initandlisten] connection accepted from 165.225.128.186:44249 #1977 (8 connections now open) m31002| Fri Feb 22 12:12:01.948 [conn65] end connection 165.225.128.186:34464 (2 connections now open) m31002| Fri Feb 22 12:12:01.948 [initandlisten] connection accepted from 165.225.128.186:58396 #67 (3 connections now open) m31002| Fri Feb 22 12:12:02.169 [conn66] end connection 165.225.128.186:55719 (2 connections now open) m31002| Fri Feb 22 12:12:02.170 [initandlisten] connection accepted from 165.225.128.186:62275 #68 (3 connections now open) m31001| Fri Feb 22 12:12:12.393 [conn66] end connection 165.225.128.186:50146 (2 connections now open) m31001| Fri Feb 22 12:12:12.393 [initandlisten] connection accepted from 165.225.128.186:59187 #68 (3 connections now open) m31001| Fri Feb 22 12:12:15.960 [conn67] end connection 165.225.128.186:49962 (2 connections now open) m31001| Fri Feb 22 12:12:15.960 [initandlisten] connection accepted from 165.225.128.186:34299 #69 (3 connections now open) m31000| Fri Feb 22 12:12:18.171 [conn1976] end connection 165.225.128.186:62348 (7 connections now open) m31000| Fri Feb 22 12:12:18.172 [initandlisten] connection accepted from 165.225.128.186:43978 #1978 (8 connections now open) m31000| Fri Feb 22 12:12:28.381 [conn1977] end connection 165.225.128.186:44249 (7 connections now open) m31000| Fri Feb 22 12:12:28.382 [initandlisten] connection accepted from 165.225.128.186:59023 #1979 (8 connections now open) m31002| Fri Feb 22 12:12:31.952 [conn67] end connection 165.225.128.186:58396 (2 connections now open) 
m31002| Fri Feb 22 12:12:31.952 [initandlisten] connection accepted from 165.225.128.186:43129 #69 (3 connections now open) m31002| Fri Feb 22 12:12:32.173 [conn68] end connection 165.225.128.186:62275 (2 connections now open) m31002| Fri Feb 22 12:12:32.174 [initandlisten] connection accepted from 165.225.128.186:58365 #70 (3 connections now open) m31000| Fri Feb 22 12:12:35.418 [FileAllocator] allocating new datafile /data/db/test-0/test.1, filling with zeroes... m31000| Fri Feb 22 12:12:35.419 [FileAllocator] done allocating datafile /data/db/test-0/test.1, size: 32MB, took 0 secs m31001| Fri Feb 22 12:12:35.423 [FileAllocator] allocating new datafile /data/db/test-1/test.1, filling with zeroes... m31001| Fri Feb 22 12:12:35.423 [FileAllocator] done allocating datafile /data/db/test-1/test.1, size: 32MB, took 0 secs m31002| Fri Feb 22 12:12:35.424 [FileAllocator] allocating new datafile /data/db/test-2/test.1, filling with zeroes... m31002| Fri Feb 22 12:12:35.424 [FileAllocator] done allocating datafile /data/db/test-2/test.1, size: 32MB, took 0 secs m31001| Fri Feb 22 12:12:42.406 [conn68] end connection 165.225.128.186:59187 (2 connections now open) m31001| Fri Feb 22 12:12:42.407 [initandlisten] connection accepted from 165.225.128.186:46384 #70 (3 connections now open) m31001| Fri Feb 22 12:12:45.973 [conn69] end connection 165.225.128.186:34299 (2 connections now open) m31001| Fri Feb 22 12:12:45.973 [initandlisten] connection accepted from 165.225.128.186:42917 #71 (3 connections now open) m31000| Fri Feb 22 12:12:48.193 [conn1978] end connection 165.225.128.186:43978 (7 connections now open) m31000| Fri Feb 22 12:12:48.194 [initandlisten] connection accepted from 165.225.128.186:57675 #1980 (8 connections now open) m31000| Fri Feb 22 12:12:58.395 [conn1979] end connection 165.225.128.186:59023 (7 connections now open) m31000| Fri Feb 22 12:12:58.395 [initandlisten] connection accepted from 165.225.128.186:49224 #1981 (8 connections now open) m31002| Fri 
Feb 22 12:13:01.965 [conn69] end connection 165.225.128.186:43129 (2 connections now open) m31002| Fri Feb 22 12:13:01.966 [initandlisten] connection accepted from 165.225.128.186:65183 #71 (3 connections now open) m31002| Fri Feb 22 12:13:02.187 [conn70] end connection 165.225.128.186:58365 (2 connections now open) m31002| Fri Feb 22 12:13:02.187 [initandlisten] connection accepted from 165.225.128.186:50096 #72 (3 connections now open) m31001| Fri Feb 22 12:13:12.410 [conn70] end connection 165.225.128.186:46384 (2 connections now open) m31001| Fri Feb 22 12:13:12.411 [initandlisten] connection accepted from 165.225.128.186:55285 #72 (3 connections now open) m31001| Fri Feb 22 12:13:15.977 [conn71] end connection 165.225.128.186:42917 (2 connections now open) m31001| Fri Feb 22 12:13:15.977 [initandlisten] connection accepted from 165.225.128.186:41792 #73 (3 connections now open) m31000| Fri Feb 22 12:13:18.197 [conn1980] end connection 165.225.128.186:57675 (7 connections now open) m31000| Fri Feb 22 12:13:18.198 [initandlisten] connection accepted from 165.225.128.186:38881 #1982 (8 connections now open) sh3029| null m31000| Fri Feb 22 12:13:24.569 [conn13] end connection 165.225.128.186:52721 (7 connections now open) m31000| Fri Feb 22 12:13:24.576 got signal 15 (Terminated), will terminate after current cmd ends m31000| Fri Feb 22 12:13:24.576 [interruptThread] now exiting m31000| Fri Feb 22 12:13:24.576 dbexit: m31000| Fri Feb 22 12:13:24.576 [interruptThread] shutdown: going to close listening sockets... m31000| Fri Feb 22 12:13:24.576 [interruptThread] closing listening socket: 12 m31000| Fri Feb 22 12:13:24.576 [interruptThread] closing listening socket: 13 m31000| Fri Feb 22 12:13:24.577 [interruptThread] closing listening socket: 14 m31000| Fri Feb 22 12:13:24.577 [interruptThread] removing socket file: /tmp/mongodb-31000.sock m31000| Fri Feb 22 12:13:24.577 [interruptThread] shutdown: going to flush diaglog... 
m31000| Fri Feb 22 12:13:24.577 [interruptThread] shutdown: going to close sockets... m31000| Fri Feb 22 12:13:24.577 [interruptThread] shutdown: waiting for fs preallocator... m31000| Fri Feb 22 12:13:24.577 [interruptThread] shutdown: lock for final commit... m31000| Fri Feb 22 12:13:24.577 [interruptThread] shutdown: final commit... m31000| Fri Feb 22 12:13:24.577 [conn1] end connection 127.0.0.1:65197 (6 connections now open) m31001| Fri Feb 22 12:13:24.577 [conn73] end connection 165.225.128.186:41792 (2 connections now open) m31000| Fri Feb 22 12:13:24.577 [conn1981] end connection 165.225.128.186:49224 (6 connections now open) m31000| Fri Feb 22 12:13:24.577 [conn1982] end connection 165.225.128.186:38881 (6 connections now open) m31002| Fri Feb 22 12:13:24.577 [conn71] end connection 165.225.128.186:65183 (2 connections now open) m31001| Fri Feb 22 12:13:24.577 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:13:24.577 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:13:24.577 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:13:24.577 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:13:24.594 [interruptThread] shutdown: closing all files... m31000| Fri Feb 22 12:13:24.600 [slaveTracking] ERROR: Client::shutdown not called: slaveTracking m31000| Fri Feb 22 12:13:24.601 [interruptThread] closeAllFiles() finished m31000| Fri Feb 22 12:13:24.601 [interruptThread] journalCleanup... 
m31000| Fri Feb 22 12:13:24.601 [interruptThread] removeJournalFiles
m31000| Fri Feb 22 12:13:24.603 dbexit: really exiting now
m31001| Fri Feb 22 12:13:25.576 got signal 15 (Terminated), will terminate after current cmd ends
m31001| Fri Feb 22 12:13:25.577 [interruptThread] now exiting
m31001| Fri Feb 22 12:13:25.577 dbexit:
m31001| Fri Feb 22 12:13:25.577 [interruptThread] shutdown: going to close listening sockets...
m31001| Fri Feb 22 12:13:25.577 [interruptThread] closing listening socket: 15
m31001| Fri Feb 22 12:13:25.577 [interruptThread] closing listening socket: 16
m31001| Fri Feb 22 12:13:25.577 [interruptThread] closing listening socket: 17
m31001| Fri Feb 22 12:13:25.577 [interruptThread] removing socket file: /tmp/mongodb-31001.sock
m31001| Fri Feb 22 12:13:25.577 [interruptThread] shutdown: going to flush diaglog...
m31001| Fri Feb 22 12:13:25.577 [interruptThread] shutdown: going to close sockets...
m31001| Fri Feb 22 12:13:25.577 [interruptThread] shutdown: waiting for fs preallocator...
m31001| Fri Feb 22 12:13:25
Fri Feb 22 12:13:26.602 [conn114] end connection 127.0.0.1:38284 (0 connections now open)
17.6015 minutes
Fri Feb 22 12:13:27.606 [initandlisten] connection accepted from 127.0.0.1:61090 #115 (1 connection now open)
Fri Feb 22 12:13:27.607 [conn115] end connection 127.0.0.1:61090 (0 connections now open)
Fri Feb 22 12:13:27.607 got signal 15 (Terminated), will terminate after current cmd ends
Fri Feb 22 12:13:27.607 [interruptThread] now exiting
Fri Feb 22 12:13:27.607 dbexit:
Fri Feb 22 12:13:27.607 [interruptThread] shutdown: going to close listening sockets...
Fri Feb 22 12:13:27.607 [interruptThread] closing listening socket: 8
Fri Feb 22 12:13:27.607 [interruptThread] closing listening socket: 9
Fri Feb 22 12:13:27.607 [interruptThread] closing listening socket: 10
Fri Feb 22 12:13:27.607 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Fri Feb 22 12:13:27.608 [interruptThread] shutdown: going to flush diaglog...
Fri Feb 22 12:13:27.608 [interruptThread] shutdown: going to close sockets... Fri Feb 22 12:13:27.608 [interruptThread] shutdown: waiting for fs preallocator... Fri Feb 22 12:13:27.608 [interruptThread] shutdown: lock for final commit... Fri Feb 22 12:13:27.608 [interruptThread] shutdown: final commit... Fri Feb 22 12:13:27.696 [interruptThread] shutdown: closing all files... Fri Feb 22 12:13:27.698 [interruptThread] closeAllFiles() finished Fri Feb 22 12:13:27.698 [interruptThread] journalCleanup... Fri Feb 22 12:13:27.699 [interruptThread] removeJournalFiles Fri Feb 22 12:13:27.709 dbexit: really exiting now cwd [/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo] num procs:1 removing: /data/db/sconsTests//test.ns removing: /data/db/sconsTests//test.0 removing: /data/db/sconsTests//test.3 removing: /data/db/sconsTests//test.1 removing: /data/db/sconsTests//local.0 removing: /data/db/sconsTests//local.ns removing: /data/db/sconsTests//test.2 buildlogger: could not find or import buildbot.tac for authentication Fri Feb 22 12:13:28.119 [initandlisten] MongoDB starting : pid=8351 port=27999 dbpath=/data/db/sconsTests/ 64-bit host=bs-smartos-x86-64-1.10gen.cc Fri Feb 22 12:13:28.120 [initandlisten] Fri Feb 22 12:13:28.120 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB Fri Feb 22 12:13:28.120 [initandlisten] ** uses to detect impending page faults. 
Fri Feb 22 12:13:28.120 [initandlisten] ** This may result in slower performance for certain use cases Fri Feb 22 12:13:28.120 [initandlisten] Fri Feb 22 12:13:28.120 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 Fri Feb 22 12:13:28.120 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 Fri Feb 22 12:13:28.120 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 Fri Feb 22 12:13:28.120 [initandlisten] allocator: system Fri Feb 22 12:13:28.120 [initandlisten] options: { dbpath: "/data/db/sconsTests/", port: 27999, setParameter: [ "enableTestCommands=1" ] } Fri Feb 22 12:13:28.120 [initandlisten] journal dir=/data/db/sconsTests/journal Fri Feb 22 12:13:28.120 [initandlisten] recover : no journal files present, no recovery needed Fri Feb 22 12:13:28.136 [FileAllocator] allocating new datafile /data/db/sconsTests/local.ns, filling with zeroes... Fri Feb 22 12:13:28.137 [FileAllocator] creating directory /data/db/sconsTests/_tmp Fri Feb 22 12:13:28.137 [FileAllocator] done allocating datafile /data/db/sconsTests/local.ns, size: 16MB, took 0 secs Fri Feb 22 12:13:28.137 [FileAllocator] allocating new datafile /data/db/sconsTests/local.0, filling with zeroes... Fri Feb 22 12:13:28.137 [FileAllocator] done allocating datafile /data/db/sconsTests/local.0, size: 64MB, took 0 secs Fri Feb 22 12:13:28.140 [initandlisten] waiting for connections on port 27999 Fri Feb 22 12:13:28.140 [websvr] admin web console waiting for connections on port 28999 Fri Feb 22 12:13:28.932 [initandlisten] connection accepted from 127.0.0.1:44390 #1 (1 connection now open) running /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 27999 --dbpath /data/db/sconsTests/ --setParameter enableTestCommands=1 ******************************************* Test : replsets_priority1.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replsets_priority1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/replsets_priority1.js";TestData.testFile = "replsets_priority1.js";TestData.testName = "replsets_priority1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:13:28 2013
Fri Feb 22 12:13:28.950 [conn1] end connection 127.0.0.1:44390 (0 connections now open)
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:13:29.109 [initandlisten] connection accepted from 127.0.0.1:39210 #2 (1 connection now open)
null
replsets_priority1.js BEGIN
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31000,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "testSet",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 0,
		"set" : "testSet"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/testSet-0' Fri Feb 22 12:13:29.134 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-0 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Fri Feb 22 12:13:29.226 [initandlisten] MongoDB starting : pid=8355 port=31000 dbpath=/data/db/testSet-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 12:13:29.227 [initandlisten] m31000| Fri Feb 22 12:13:29.227 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 12:13:29.227 [initandlisten] ** uses to detect impending page faults. m31000| Fri Feb 22 12:13:29.227 [initandlisten] ** This may result in slower performance for certain use cases m31000| Fri Feb 22 12:13:29.227 [initandlisten] m31000| Fri Feb 22 12:13:29.227 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31000| Fri Feb 22 12:13:29.227 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31000| Fri Feb 22 12:13:29.227 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31000| Fri Feb 22 12:13:29.227 [initandlisten] allocator: system m31000| Fri Feb 22 12:13:29.227 [initandlisten] options: { dbpath: "/data/db/testSet-0", noprealloc: true, oplogSize: 40, port: 31000, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Fri Feb 22 12:13:29.227 [initandlisten] journal dir=/data/db/testSet-0/journal m31000| Fri Feb 22 12:13:29.227 [initandlisten] recover : no journal files present, no recovery needed m31000| Fri Feb 22 12:13:29.241 [FileAllocator] allocating new datafile /data/db/testSet-0/local.ns, filling with zeroes... 
m31000| Fri Feb 22 12:13:29.241 [FileAllocator] creating directory /data/db/testSet-0/_tmp
m31000| Fri Feb 22 12:13:29.242 [FileAllocator] done allocating datafile /data/db/testSet-0/local.ns, size: 16MB, took 0 secs
m31000| Fri Feb 22 12:13:29.242 [FileAllocator] allocating new datafile /data/db/testSet-0/local.0, filling with zeroes...
m31000| Fri Feb 22 12:13:29.242 [FileAllocator] done allocating datafile /data/db/testSet-0/local.0, size: 16MB, took 0 secs
m31000| Fri Feb 22 12:13:29.245 [initandlisten] waiting for connections on port 31000
m31000| Fri Feb 22 12:13:29.245 [websvr] admin web console waiting for connections on port 32000
m31000| Fri Feb 22 12:13:29.248 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31000| Fri Feb 22 12:13:29.248 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31000| Fri Feb 22 12:13:29.337 [initandlisten] connection accepted from 127.0.0.1:62102 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31001,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "testSet",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 1,
		"set" : "testSet"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/testSet-1' Fri Feb 22 12:13:29.345 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-1 --setParameter enableTestCommands=1 m31001| note: noprealloc may hurt performance in many applications m31001| Fri Feb 22 12:13:29.434 [initandlisten] MongoDB starting : pid=8357 port=31001 dbpath=/data/db/testSet-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31001| Fri Feb 22 12:13:29.434 [initandlisten] m31001| Fri Feb 22 12:13:29.434 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31001| Fri Feb 22 12:13:29.434 [initandlisten] ** uses to detect impending page faults. m31001| Fri Feb 22 12:13:29.434 [initandlisten] ** This may result in slower performance for certain use cases m31001| Fri Feb 22 12:13:29.434 [initandlisten] m31001| Fri Feb 22 12:13:29.434 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31001| Fri Feb 22 12:13:29.434 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31001| Fri Feb 22 12:13:29.434 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31001| Fri Feb 22 12:13:29.434 [initandlisten] allocator: system m31001| Fri Feb 22 12:13:29.434 [initandlisten] options: { dbpath: "/data/db/testSet-1", noprealloc: true, oplogSize: 40, port: 31001, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31001| Fri Feb 22 12:13:29.434 [initandlisten] journal dir=/data/db/testSet-1/journal m31001| Fri Feb 22 12:13:29.435 [initandlisten] recover : no journal files present, no recovery needed m31001| Fri Feb 22 12:13:29.449 [FileAllocator] allocating new datafile /data/db/testSet-1/local.ns, filling with zeroes... 
m31001| Fri Feb 22 12:13:29.449 [FileAllocator] creating directory /data/db/testSet-1/_tmp
m31001| Fri Feb 22 12:13:29.449 [FileAllocator] done allocating datafile /data/db/testSet-1/local.ns, size: 16MB, took 0 secs
m31001| Fri Feb 22 12:13:29.449 [FileAllocator] allocating new datafile /data/db/testSet-1/local.0, filling with zeroes...
m31001| Fri Feb 22 12:13:29.450 [FileAllocator] done allocating datafile /data/db/testSet-1/local.0, size: 16MB, took 0 secs
m31001| Fri Feb 22 12:13:29.453 [initandlisten] waiting for connections on port 31001
m31001| Fri Feb 22 12:13:29.453 [websvr] admin web console waiting for connections on port 32001
m31001| Fri Feb 22 12:13:29.456 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31001| Fri Feb 22 12:13:29.456 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31001| Fri Feb 22 12:13:29.547 [initandlisten] connection accepted from 127.0.0.1:51402 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31000, 31001, 31002 ] 31002 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31002,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "testSet",
	"dbpath" : "$set-$node",
	"restart" : undefined,
	"pathOpts" : {
		"node" : 2,
		"set" : "testSet"
	}
}
ReplSetTest Starting....
Resetting db path '/data/db/testSet-2' Fri Feb 22 12:13:29.551 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31002 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-2 --setParameter enableTestCommands=1 m31002| note: noprealloc may hurt performance in many applications m31002| Fri Feb 22 12:13:29.661 [initandlisten] MongoDB starting : pid=8358 port=31002 dbpath=/data/db/testSet-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31002| Fri Feb 22 12:13:29.661 [initandlisten] m31002| Fri Feb 22 12:13:29.661 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31002| Fri Feb 22 12:13:29.661 [initandlisten] ** uses to detect impending page faults. m31002| Fri Feb 22 12:13:29.661 [initandlisten] ** This may result in slower performance for certain use cases m31002| Fri Feb 22 12:13:29.661 [initandlisten] m31002| Fri Feb 22 12:13:29.661 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31002| Fri Feb 22 12:13:29.661 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31002| Fri Feb 22 12:13:29.661 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31002| Fri Feb 22 12:13:29.661 [initandlisten] allocator: system m31002| Fri Feb 22 12:13:29.661 [initandlisten] options: { dbpath: "/data/db/testSet-2", noprealloc: true, oplogSize: 40, port: 31002, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31002| Fri Feb 22 12:13:29.661 [initandlisten] journal dir=/data/db/testSet-2/journal m31002| Fri Feb 22 12:13:29.662 [initandlisten] recover : no journal files present, no recovery needed m31002| Fri Feb 22 12:13:29.678 [FileAllocator] allocating new datafile /data/db/testSet-2/local.ns, filling with zeroes... 
m31002| Fri Feb 22 12:13:29.678 [FileAllocator] creating directory /data/db/testSet-2/_tmp
m31002| Fri Feb 22 12:13:29.678 [FileAllocator] done allocating datafile /data/db/testSet-2/local.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 12:13:29.679 [FileAllocator] allocating new datafile /data/db/testSet-2/local.0, filling with zeroes...
m31002| Fri Feb 22 12:13:29.679 [FileAllocator] done allocating datafile /data/db/testSet-2/local.0, size: 16MB, took 0 secs
m31002| Fri Feb 22 12:13:29.682 [initandlisten] waiting for connections on port 31002
m31002| Fri Feb 22 12:13:29.682 [websvr] admin web console waiting for connections on port 32002
m31002| Fri Feb 22 12:13:29.685 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31002| Fri Feb 22 12:13:29.685 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31002| Fri Feb 22 12:13:29.753 [initandlisten] connection accepted from 127.0.0.1:64142 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ]
{
	"replSetInitiate" : {
		"_id" : "testSet",
		"members" : [
			{
				"_id" : 0,
				"host" : "bs-smartos-x86-64-1.10gen.cc:31000"
			},
			{
				"_id" : 1,
				"host" : "bs-smartos-x86-64-1.10gen.cc:31001"
			},
			{
				"_id" : 2,
				"host" : "bs-smartos-x86-64-1.10gen.cc:31002"
			}
		]
	}
}
m31000| Fri Feb 22 12:13:29.757 [conn1] replSet replSetInitiate admin command received from client
m31000| Fri Feb 22 12:13:29.759 [conn1] replSet replSetInitiate config object parses ok, 3 members specified
m31000| Fri Feb 22 12:13:29.759 [initandlisten] connection accepted from 165.225.128.186:38224 #2 (2 connections now open)
m31001| Fri Feb 22 12:13:29.760 [initandlisten] connection accepted from 165.225.128.186:42135 #2 (2 connections now open)
m31002| Fri Feb 22 12:13:29.762 [initandlisten] connection accepted from 165.225.128.186:47360 #2 (2
connections now open) m31000| Fri Feb 22 12:13:29.763 [conn1] replSet replSetInitiate all members seem up m31000| Fri Feb 22 12:13:29.763 [conn1] ****** m31000| Fri Feb 22 12:13:29.763 [conn1] creating replication oplog of size: 40MB... m31000| Fri Feb 22 12:13:29.763 [FileAllocator] allocating new datafile /data/db/testSet-0/local.1, filling with zeroes... m31000| Fri Feb 22 12:13:29.763 [FileAllocator] done allocating datafile /data/db/testSet-0/local.1, size: 64MB, took 0 secs m31000| Fri Feb 22 12:13:29.779 [conn1] ****** m31000| Fri Feb 22 12:13:29.780 [conn1] replSet info saving a newer config version to local.system.replset m31000| Fri Feb 22 12:13:29.781 [conn2] end connection 165.225.128.186:38224 (1 connection now open) m31000| Fri Feb 22 12:13:29.797 [conn1] replSet saveConfigLocally done m31000| Fri Feb 22 12:13:29.797 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31000| Fri Feb 22 12:13:39.248 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:13:39.248 [rsStart] replSet STARTUP2 m31000| Fri Feb 22 12:13:39.249 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31001| Fri Feb 22 12:13:39.456 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:13:39.457 [initandlisten] connection accepted from 165.225.128.186:51990 #3 (2 connections now open) m31001| Fri Feb 22 12:13:39.458 [initandlisten] connection accepted from 165.225.128.186:41632 #3 (3 connections now open) m31001| Fri Feb 22 12:13:39.458 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 12:13:39.458 [rsStart] replSet got config version 1 from a remote, saving locally m31001| Fri Feb 22 12:13:39.458 [rsStart] replSet info saving a newer config version to local.system.replset m31001| Fri Feb 22 12:13:39.462 [rsStart] replSet saveConfigLocally done 
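The replSetInitiate document the test sends (shown earlier in this log) is an ordinary admin command. As a sketch, the same three-member config could be built and submitted from a driver; `make_replset_config` is a hypothetical helper, not part of the test harness, and the hosts/ports are taken from this run:

```python
# Sketch: rebuild the replSetInitiate command document the shell test used.
# make_replset_config is a hypothetical helper; hosts come from this log.

def make_replset_config(set_name, hosts):
    """Return a replica-set config in the shape replSetInitiate expects."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

hosts = ["bs-smartos-x86-64-1.10gen.cc:%d" % p for p in (31000, 31001, 31002)]
config = make_replset_config("testSet", hosts)

# A driver would submit this against one member's admin database as
# {"replSetInitiate": config}; rs.initiate(config) in the mongo shell
# wraps the same command, which matches the log's
# "replSetInitiate admin command received from client" entry.
```

As the log shows, the receiving node then saves the config locally and the set comes online roughly a minute later.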
m31001| Fri Feb 22 12:13:39.462 [rsStart] replSet STARTUP2
m31001| Fri Feb 22 12:13:39.462 [rsSync] ******
m31001| Fri Feb 22 12:13:39.462 [rsSync] creating replication oplog of size: 40MB...
m31001| Fri Feb 22 12:13:39.462 [FileAllocator] allocating new datafile /data/db/testSet-1/local.1, filling with zeroes...
m31001| Fri Feb 22 12:13:39.463 [FileAllocator] done allocating datafile /data/db/testSet-1/local.1, size: 64MB, took 0 secs
m31001| Fri Feb 22 12:13:39.475 [rsSync] ******
m31001| Fri Feb 22 12:13:39.475 [rsSync] replSet initial sync pending
m31001| Fri Feb 22 12:13:39.475 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31001| Fri Feb 22 12:13:39.478 [conn3] end connection 165.225.128.186:41632 (2 connections now open)
m31002| Fri Feb 22 12:13:39.686 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31000| Fri Feb 22 12:13:40.250 [rsSync] replSet SECONDARY
m31000| Fri Feb 22 12:13:41.249 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:13:41.249 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 thinks that we are down
m31000| Fri Feb 22 12:13:41.249 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state STARTUP2
m31000| Fri Feb 22 12:13:41.249 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31000 is electable'
m31000| Fri Feb 22 12:13:41.249 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31000 is electable'
m31001| Fri Feb 22 12:13:41.459 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:13:41.459 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 12:13:41.459 [initandlisten] connection accepted from 165.225.128.186:49036 #3 (3 connections now open)
m31001| Fri Feb 22 12:13:41.459 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:13:47.250 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31002| Fri Feb 22 12:13:49.686 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:13:49.687 [initandlisten] connection accepted from 165.225.128.186:63005 #4 (3 connections now open)
m31002| Fri Feb 22 12:13:49.687 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31001
m31001| Fri Feb 22 12:13:49.688 [initandlisten] connection accepted from 165.225.128.186:57751 #4 (3 connections now open)
m31002| Fri Feb 22 12:13:49.688 [initandlisten] connection accepted from 165.225.128.186:61966 #4 (4 connections now open)
m31002| Fri Feb 22 12:13:49.689 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31002
m31002| Fri Feb 22 12:13:49.689 [rsStart] replSet got config version 1 from a remote, saving locally
m31002| Fri Feb 22 12:13:49.689 [rsStart] replSet info saving a newer config version to local.system.replset
m31002| Fri Feb 22 12:13:49.692 [rsStart] replSet saveConfigLocally done
m31002| Fri Feb 22 12:13:49.692 [rsStart] replSet STARTUP2
m31002| Fri Feb 22 12:13:49.693 [rsSync] ******
m31002| Fri Feb 22 12:13:49.693 [rsSync] creating replication oplog of size: 40MB...
m31002| Fri Feb 22 12:13:49.693 [FileAllocator] allocating new datafile /data/db/testSet-2/local.1, filling with zeroes...
m31002| Fri Feb 22 12:13:49.693 [FileAllocator] done allocating datafile /data/db/testSet-2/local.1, size: 64MB, took 0 secs
m31002| Fri Feb 22 12:13:49.705 [rsSync] ******
m31002| Fri Feb 22 12:13:49.705 [rsSync] replSet initial sync pending
m31002| Fri Feb 22 12:13:49.705 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31002| Fri Feb 22 12:13:49.708 [conn4] end connection 165.225.128.186:61966 (3 connections now open)
m31000| Fri Feb 22 12:13:51.250 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 thinks that we are down
m31000| Fri Feb 22 12:13:51.250 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state STARTUP2
m31000| Fri Feb 22 12:13:51.250 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31000 is electable'
m31001| Fri Feb 22 12:13:51.460 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 thinks that we are down
m31001| Fri Feb 22 12:13:51.460 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state STARTUP2
m31002| Fri Feb 22 12:13:51.689 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 12:13:51.689 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 12:13:51.689 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 12:13:51.689 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state STARTUP2
m31001| Fri Feb 22 12:13:53.250 [conn2] end connection 165.225.128.186:42135 (2 connections now open)
m31001| Fri Feb 22 12:13:53.251 [initandlisten] connection accepted from 165.225.128.186:36801 #5 (3 connections now open)
m31000| Fri Feb 22 12:13:55.461 [conn3] end connection 165.225.128.186:51990 (2 connections now open)
m31000| Fri Feb 22 12:13:55.461 [initandlisten] connection accepted from 165.225.128.186:64374 #5 (3 connections now open)
m31001| Fri Feb 22 12:13:55.475 [rsSync] replSet initial sync pending
m31001| Fri Feb 22 12:13:55.475 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:13:55.476 [initandlisten] connection accepted from 165.225.128.186:61763 #6 (4 connections now open)
m31001| Fri Feb 22 12:13:55.482 [rsSync] build index local.me { _id: 1 }
m31001| Fri Feb 22 12:13:55.485 [rsSync] build index done.  scanned 0 total records. 0.002 secs
m31001| Fri Feb 22 12:13:55.486 [rsSync] build index local.replset.minvalid { _id: 1 }
m31001| Fri Feb 22 12:13:55.487 [rsSync] build index done.  scanned 0 total records. 0 secs
m31001| Fri Feb 22 12:13:55.487 [rsSync] replSet initial sync drop all databases
m31001| Fri Feb 22 12:13:55.487 [rsSync] dropAllDatabasesExceptLocal 1
m31001| Fri Feb 22 12:13:55.487 [rsSync] replSet initial sync clone all databases
m31001| Fri Feb 22 12:13:55.487 [rsSync] replSet initial sync data copy, starting syncup
m31001| Fri Feb 22 12:13:55.487 [rsSync] oplog sync 1 of 3
m31001| Fri Feb 22 12:13:55.487 [rsSync] oplog sync 2 of 3
m31001| Fri Feb 22 12:13:55.487 [rsSync] replSet initial sync building indexes
m31001| Fri Feb 22 12:13:55.487 [rsSync] oplog sync 3 of 3
m31001| Fri Feb 22 12:13:55.487 [rsSync] replSet initial sync finishing up
m31001| Fri Feb 22 12:13:55.497 [rsSync] replSet set minValid=512760e9:b
m31001| Fri Feb 22 12:13:55.507 [rsSync] replSet RECOVERING
m31001| Fri Feb 22 12:13:55.507 [rsSync] replSet initial sync done
m31000| Fri Feb 22 12:13:55.507 [conn6] end connection 165.225.128.186:61763 (3 connections now open)
m31002| Fri Feb 22 12:13:55.690 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state RECOVERING
m31000| Fri Feb 22 12:13:57.251 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:13:57.251 [conn2] replSet RECOVERING
m31002| Fri Feb 22 12:13:57.251 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31001| Fri Feb 22 12:13:57.251 [conn5] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31000| Fri Feb 22 12:13:57.251 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state RECOVERING
m31001| Fri Feb 22 12:13:57.461 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state RECOVERING
m31001| Fri Feb 22 12:13:57.507 [rsSync] replSet SECONDARY
m31002| Fri Feb 22 12:13:57.690 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:13:58.250 [rsMgr] replSet PRIMARY
m31000| Fri Feb 22 12:13:59.251 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state RECOVERING
m31000| Fri Feb 22 12:13:59.252 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31001| Fri Feb 22 12:13:59.462 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31001| Fri Feb 22 12:13:59.463 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:13:59.464 [initandlisten] connection accepted from 165.225.128.186:49878 #7 (4 connections now open)
m31001| Fri Feb 22 12:13:59.507 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:13:59.507 [initandlisten] connection accepted from 165.225.128.186:34887 #8 (5 connections now open)
m31002| Fri Feb 22 12:13:59.690 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31000| Fri Feb 22 12:14:00.514 [slaveTracking] build index local.slaves { _id: 1 }
m31000| Fri Feb 22 12:14:00.516 [slaveTracking] build index done.  scanned 0 total records. 0.001 secs
m31000| Fri Feb 22 12:14:05.691 [conn4] end connection 165.225.128.186:63005 (4 connections now open)
m31000| Fri Feb 22 12:14:05.692 [initandlisten] connection accepted from 165.225.128.186:65041 #9 (5 connections now open)
m31002| Fri Feb 22 12:14:05.705 [rsSync] replSet initial sync pending
m31002| Fri Feb 22 12:14:05.705 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:05.706 [initandlisten] connection accepted from 165.225.128.186:41314 #10 (6 connections now open)
m31002| Fri Feb 22 12:14:05.714 [rsSync] build index local.me { _id: 1 }
m31002| Fri Feb 22 12:14:05.718 [rsSync] build index done.  scanned 0 total records. 0.003 secs
m31002| Fri Feb 22 12:14:05.719 [rsSync] build index local.replset.minvalid { _id: 1 }
m31002| Fri Feb 22 12:14:05.720 [rsSync] build index done.  scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 12:14:05.721 [rsSync] replSet initial sync drop all databases
m31002| Fri Feb 22 12:14:05.721 [rsSync] dropAllDatabasesExceptLocal 1
m31002| Fri Feb 22 12:14:05.721 [rsSync] replSet initial sync clone all databases
m31002| Fri Feb 22 12:14:05.721 [rsSync] replSet initial sync data copy, starting syncup
m31002| Fri Feb 22 12:14:05.721 [rsSync] oplog sync 1 of 3
m31002| Fri Feb 22 12:14:05.721 [rsSync] oplog sync 2 of 3
m31002| Fri Feb 22 12:14:05.721 [rsSync] replSet initial sync building indexes
m31002| Fri Feb 22 12:14:05.721 [rsSync] oplog sync 3 of 3
m31002| Fri Feb 22 12:14:05.722 [rsSync] replSet initial sync finishing up
m31002| Fri Feb 22 12:14:05.730 [rsSync] replSet set minValid=512760e9:b
m31002| Fri Feb 22 12:14:05.736 [rsSync] replSet initial sync done
m31000| Fri Feb 22 12:14:05.736 [conn10] end connection 165.225.128.186:41314 (5 connections now open)
m31002| Fri Feb 22 12:14:06.693 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:06.694 [initandlisten] connection accepted from 165.225.128.186:58684 #11 (6 connections now open)
m31002| Fri Feb 22 12:14:06.737 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:06.738 [initandlisten] connection accepted from 165.225.128.186:47114 #12 (7 connections now open)
m31002| Fri Feb 22 12:14:07.736 [rsSync] replSet SECONDARY
m31002| Fri Feb 22 12:14:09.252 [conn2] end connection 165.225.128.186:47360 (2 connections now open)
m31002| Fri Feb 22 12:14:09.253 [initandlisten] connection accepted from 165.225.128.186:51110 #5 (3 connections now open)
m31000| Fri Feb 22 12:14:09.253 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
replsets_priority1.js initial sync
m31000| Fri Feb 22 12:14:09.318 [FileAllocator] allocating new datafile /data/db/testSet-0/foo.ns, filling with zeroes...
m31000| Fri Feb 22 12:14:09.318 [FileAllocator] done allocating datafile /data/db/testSet-0/foo.ns, size: 16MB, took 0 secs
m31000| Fri Feb 22 12:14:09.318 [FileAllocator] allocating new datafile /data/db/testSet-0/foo.0, filling with zeroes...
m31000| Fri Feb 22 12:14:09.318 [FileAllocator] done allocating datafile /data/db/testSet-0/foo.0, size: 16MB, took 0 secs
m31000| Fri Feb 22 12:14:09.322 [conn1] build index foo.bar { _id: 1 }
m31000| Fri Feb 22 12:14:09.323 [conn1] build index done.  scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 12:14:09.325 [FileAllocator] allocating new datafile /data/db/testSet-2/foo.ns, filling with zeroes...
m31001| Fri Feb 22 12:14:09.325 [FileAllocator] allocating new datafile /data/db/testSet-1/foo.ns, filling with zeroes...
m31001| Fri Feb 22 12:14:09.325 [FileAllocator] done allocating datafile /data/db/testSet-1/foo.ns, size: 16MB, took 0 secs
m31002| Fri Feb 22 12:14:09.326 [FileAllocator] done allocating datafile /data/db/testSet-2/foo.ns, size: 16MB, took 0 secs
m31001| Fri Feb 22 12:14:09.326 [FileAllocator] allocating new datafile /data/db/testSet-1/foo.0, filling with zeroes...
m31002| Fri Feb 22 12:14:09.326 [FileAllocator] allocating new datafile /data/db/testSet-2/foo.0, filling with zeroes...
m31001| Fri Feb 22 12:14:09.326 [FileAllocator] done allocating datafile /data/db/testSet-1/foo.0, size: 16MB, took 0 secs
m31002| Fri Feb 22 12:14:09.326 [FileAllocator] done allocating datafile /data/db/testSet-2/foo.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361535249000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361535249000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31001
m31001| Fri Feb 22 12:14:09.329 [repl writer worker 1] build index foo.bar { _id: 1 }
m31002| Fri Feb 22 12:14:09.329 [repl writer worker 1] build index foo.bar { _id: 1 }
m31001| Fri Feb 22 12:14:09.331 [repl writer worker 1] build index done.  scanned 0 total records. 0.001 secs
m31002| Fri Feb 22 12:14:09.331 [repl writer worker 1] build index done.  scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31001, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31002
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31002, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361535249000, "i" : 1 }
replsets_priority1.js starting loop
Round 0: FIGHT!
random priority : 5.149137298576534
random priority : 71.78441060241312
random priority : 87.63530312571675
replsets_priority1.js max is bs-smartos-x86-64-1.10gen.cc:31002 with priority 87.63530312571675, reconfiguring...
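The round logic visible above (each member gets a random priority; the member with the highest priority is the one the test expects to win the next election) can be sketched as follows. This is an illustrative reconstruction, not the actual replsets_priority1.js code; `pick_expected_primary` is a hypothetical name:

```python
import random

# Sketch of one round: assign a random priority in [0, 100) to each host
# and pick the host with the maximum priority as the expected primary.
# The hosts and the "max is ... reconfiguring" line above follow this shape.
def pick_expected_primary(hosts, rng=random.random):
    priorities = {h: rng() * 100 for h in hosts}
    expected = max(priorities, key=priorities.get)
    return expected, priorities
```

With the priorities printed in the log (5.149..., 71.784..., 87.635...), the maximum belongs to port 31002, which matches the member the test reconfigures toward.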
version is 1, trying to update to 2
m31000| Fri Feb 22 12:14:09.337 [conn1] replSet replSetReconfig config object parses ok, 3 members specified
m31000| Fri Feb 22 12:14:09.337 [conn1] replSet replSetReconfig [2]
m31000| Fri Feb 22 12:14:09.337 [conn1] replSet info saving a newer config version to local.system.replset
m31000| Fri Feb 22 12:14:09.354 [conn1] replSet saveConfigLocally done
m31000| Fri Feb 22 12:14:09.354 [conn1] replSet relinquishing primary state
m31000| Fri Feb 22 12:14:09.354 [conn1] replSet SECONDARY
m31000| Fri Feb 22 12:14:09.354 [conn1] replSet closing client sockets after relinquishing primary
m31000| Fri Feb 22 12:14:09.354 [conn1] replSet PRIMARY
m31002| Fri Feb 22 12:14:09.355 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:14:09.355 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:09.355 [conn1] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:14:09.355 [conn5] end connection 165.225.128.186:51110 (2 connections now open)
m31002| Fri Feb 22 12:14:09.355 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:09.355 [conn8] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:34887]
m31001| Fri Feb 22 12:14:09.355 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:09.355 [conn12] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:47114]
m31000| Fri Feb 22 12:14:09.355 [conn7] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:49878]
m31000| Fri Feb 22 12:14:09.355 [conn1] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:62102]
m31000| Fri Feb 22 12:14:09.355 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31000| Fri Feb 22 12:14:09.355 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:14:09.355 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31000 (priority 5.14914), bs-smartos-x86-64-1.10gen.cc:31001 is priority 71.7844 and 0 seconds behind
m31000| Fri Feb 22 12:14:09.355 [rsMgr] replSet relinquishing primary state
m31000| Fri Feb 22 12:14:09.355 [rsMgr] replSet SECONDARY
m31000| Fri Feb 22 12:14:09.355 [rsMgr] replSet closing client sockets after relinquishing primary
m31000| Fri Feb 22 12:14:09.355 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002
m31002| Fri Feb 22 12:14:09.355 [initandlisten] connection accepted from 165.225.128.186:60503 #6 (3 connections now open)
m31002| Fri Feb 22 12:14:09.355 [conn6] end connection 165.225.128.186:60503 (2 connections now open)
m31002| Fri Feb 22 12:14:09.355 [initandlisten] connection accepted from 165.225.128.186:54451 #7 (3 connections now open)
m31002| Fri Feb 22 12:14:09.356 [conn7] end connection 165.225.128.186:54451 (2 connections now open)
m31002| Fri Feb 22 12:14:09.356 [initandlisten] connection accepted from 165.225.128.186:35134 #8 (3 connections now open)
m31000| Fri Feb 22 12:14:09.356 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:14:09.356 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31000| Fri Feb 22 12:14:09.356 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
Fri Feb 22 12:14:09.365 DBClientCursor::init call() failed
replsets_priority1.js Caught exception: Error: error doing query: failed
Fri Feb 22 12:14:09.366 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:14:09.366 reconnect 127.0.0.1:31000 ok
m31000| Fri Feb 22 12:14:09.366 [initandlisten] connection accepted from 127.0.0.1:56613 #13 (4 connections now open)
m31002| Fri Feb 22 12:14:09.462 [conn3] end connection 165.225.128.186:49036 (2 connections now open)
m31002| Fri Feb 22 12:14:09.463 [initandlisten] connection accepted from 165.225.128.186:59332 #9 (3 connections now open)
m31001| Fri Feb 22 12:14:09.463 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:14:09.463 [rsMgr] replset msgReceivedNewConfig version: version: 2
m31001| Fri Feb 22 12:14:09.463 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:14:09.463 [rsMgr] replSet info saving a newer config version to local.system.replset
m31001| Fri Feb 22 12:14:09.477 [rsMgr] replSet saveConfigLocally done
m31001| Fri Feb 22 12:14:09.477 [rsMgr] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:14:09.477 [conn9] end connection 165.225.128.186:59332 (2 connections now open)
m31001| Fri Feb 22 12:14:09.477 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:14:09.477 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 12:14:09.477 [initandlisten] connection accepted from 165.225.128.186:45441 #10 (3 connections now open)
m31001| Fri Feb 22 12:14:09.478 [rsMgr] not electing self, we are not freshest
m31001| Fri Feb 22 12:14:09.478 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31001| Fri Feb 22 12:14:09.478 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:14:09.478 [rsMgr] not electing self, we are not freshest
m31001| Fri Feb 22 12:14:09.478 [rsMgr] not electing self, we are not freshest
m31001| Fri Feb 22 12:14:09.478 [rsMgr] not electing self, we are not freshest
m31002| Fri Feb 22 12:14:09.692 [rsMgr] replset msgReceivedNewConfig version: version: 2
m31002| Fri Feb 22 12:14:09.692 [rsMgr] replSet info saving a newer config version to local.system.replset
m31002| Fri Feb 22 12:14:09.692 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 12:14:09.714 [rsMgr] replSet saveConfigLocally done
m31002| Fri Feb 22 12:14:09.714 [rsMgr] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:14:09.714 [rsMgr] replset msgReceivedNewConfig version: version: 2
m31001| Fri Feb 22 12:14:09.714 [conn4] end connection 165.225.128.186:57751 (2 connections now open)
m31002| Fri Feb 22 12:14:09.714 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 2 2
m31002| Fri Feb 22 12:14:09.714 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 12:14:09.714 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:14:09.715 [initandlisten] connection accepted from 165.225.128.186:44602 #6 (3 connections now open)
m31002| Fri Feb 22 12:14:09.715 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 12:14:09.715 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31002| Fri Feb 22 12:14:09.715 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31002 is electable'
m31002| Fri Feb 22 12:14:09.715 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31002 is electable'
m31002| Fri Feb 22 12:14:09.715 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31002 is electable'
m31002| Fri Feb 22 12:14:10.355 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:10.355 [initandlisten] connection accepted from 165.225.128.186:44053 #14 (5 connections now open)
m31000| Fri Feb 22 12:14:14.405 [conn11] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:58684]
m31000| Fri Feb 22 12:14:15.356 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:14:15.478 [rsMgr] not electing self, we are not freshest
m31002| Fri Feb 22 12:14:15.877 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:14:19.355 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:19.355 [initandlisten] connection accepted from 165.225.128.186:35167 #15 (5 connections now open)
m31001| Fri Feb 22 12:14:19.356 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:14:19.356 [initandlisten] connection accepted from 165.225.128.186:45067 #16 (6 connections now open)
m31000| Fri Feb 22 12:14:21.357 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:14:21.479 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31002| Fri Feb 22 12:14:22.648 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:14:25.357 [conn5] end connection 165.225.128.186:36801 (2 connections now open)
m31001| Fri Feb 22 12:14:25.357 [initandlisten] connection accepted from 165.225.128.186:51761 #7 (3 connections now open)
m31000| Fri Feb 22 12:14:25.479 [conn5] end connection 165.225.128.186:64374 (5 connections now open)
m31000| Fri Feb 22 12:14:25.479 [initandlisten] connection accepted from 165.225.128.186:50332 #17 (6 connections now open)
m31000| Fri Feb 22 12:14:25.717 [conn9] end connection 165.225.128.186:65041 (5 connections now open)
m31000| Fri Feb 22 12:14:25.717 [initandlisten] connection accepted from 165.225.128.186:36675 #18 (6 connections now open)
m31000| Fri Feb 22 12:14:27.358 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:14:27.480 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31002| Fri Feb 22 12:14:28.044 [rsMgr] replSet info electSelf 2
m31001| Fri Feb 22 12:14:28.044 [conn6] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2)
m31000| Fri Feb 22 12:14:28.044 [conn18] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2)
m31002| Fri Feb 22 12:14:28.356 [rsMgr] replSet PRIMARY
m31000| Fri Feb 22 12:14:29.358 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY
replsets_priority1.js wait for 2 slaves
replsets_priority1.js wait for new config version 2
replsets_priority1.js awaitReplication
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31002, is { "t" : 1361535249000, "i" : 2 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361535249000, "i" : 2 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31000
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31000, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31001
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31001, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361535249000, "i" : 2 }
reconfigured. Checking statuses.
replsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31002==1
states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1
rs.stop
ReplSetTest n: 2 ports: [ 31000, 31001, 31002 ] 31002 number
ReplSetTest stop *** Shutting down mongod in port 31002 ***
m31002| Fri Feb 22 12:14:29.394 got signal 15 (Terminated), will terminate after current cmd ends
m31002| Fri Feb 22 12:14:29.394 [interruptThread] now exiting
m31002| Fri Feb 22 12:14:29.394 dbexit:
m31002| Fri Feb 22 12:14:29.394 [interruptThread] shutdown: going to close listening sockets...
m31002| Fri Feb 22 12:14:29.394 [interruptThread] closing listening socket: 18
m31002| Fri Feb 22 12:14:29.394 [interruptThread] closing listening socket: 19
m31002| Fri Feb 22 12:14:29.394 [interruptThread] closing listening socket: 20
m31002| Fri Feb 22 12:14:29.394 [interruptThread] removing socket file: /tmp/mongodb-31002.sock
m31002| Fri Feb 22 12:14:29.394 [interruptThread] shutdown: going to flush diaglog...
m31002| Fri Feb 22 12:14:29.394 [interruptThread] shutdown: going to close sockets...
m31002| Fri Feb 22 12:14:29.394 [interruptThread] shutdown: waiting for fs preallocator...
m31002| Fri Feb 22 12:14:29.394 [interruptThread] shutdown: lock for final commit...
m31002| Fri Feb 22 12:14:29.394 [interruptThread] shutdown: final commit...
m31002| Fri Feb 22 12:14:29.394 [conn1] end connection 127.0.0.1:64142 (2 connections now open)
m31002| Fri Feb 22 12:14:29.394 [conn8] end connection 165.225.128.186:35134 (2 connections now open)
m31002| Fri Feb 22 12:14:29.394 [conn10] end connection 165.225.128.186:45441 (2 connections now open)
m31000| Fri Feb 22 12:14:29.394 [conn18] end connection 165.225.128.186:36675 (5 connections now open)
m31000| Fri Feb 22 12:14:29.394 [conn14] end connection 165.225.128.186:44053 (5 connections now open)
m31001| Fri Feb 22 12:14:29.394 [conn6] end connection 165.225.128.186:44602 (2 connections now open)
m31002| Fri Feb 22 12:14:29.412 [interruptThread] shutdown: closing all files...
m31002| Fri Feb 22 12:14:29.413 [interruptThread] closeAllFiles() finished
m31002| Fri Feb 22 12:14:29.413 [interruptThread] journalCleanup...
m31002| Fri Feb 22 12:14:29.413 [interruptThread] removeJournalFiles
m31002| Fri Feb 22 12:14:29.413 dbexit: really exiting now
m31001| Fri Feb 22 12:14:29.480 [rsHealthPoll] DBClientCursor::init call() failed
m31001| Fri Feb 22 12:14:29.480 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31001| Fri Feb 22 12:14:29.480 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31002 is down (or slow to respond):
m31001| Fri Feb 22 12:14:29.480 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state DOWN
m31001| Fri Feb 22 12:14:29.481 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31000 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31002 is already primary and more up-to-date'
m31000| Fri Feb 22 12:14:30.251 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31002
m31000| Fri Feb 22 12:14:30.251 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002
m31000| Fri Feb 22 12:14:30.251 [rsBackgroundSync] replSet not trying to sync from bs-smartos-x86-64-1.10gen.cc:31002, it is vetoed for 10 more seconds
m31000| Fri Feb 22 12:14:30.251 [rsBackgroundSync] replSet not trying to sync from bs-smartos-x86-64-1.10gen.cc:31002, it is vetoed for 10 more seconds
Fri Feb 22 12:14:30.394 shell: stopped mongo program on port 31002
Fri Feb 22 12:14:30.395 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31002
Fri Feb 22 12:14:30.395 SocketException: remote: 127.0.0.1:31002 error: 9001 socket exception [1] server [127.0.0.1:31002]
Fri Feb 22 12:14:30.395 DBClientCursor::init call() failed
ReplSetTest Could not call ismaster on node 2: Error: error doing query: failed
m31000| Fri Feb 22 12:14:31.358 [rsHealthPoll] DBClientCursor::init call() failed
m31000| Fri Feb 22 12:14:31.359 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31000| Fri Feb 22 12:14:31.359 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31002 is down (or slow to respond):
m31000| Fri Feb 22 12:14:31.359 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state DOWN
m31000| Fri Feb 22 12:14:31.359 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:14:31.481 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
Fri Feb 22 12:14:32.396 trying reconnect to 127.0.0.1:31002
Fri Feb 22 12:14:32.397 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002
ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002
m31000| Fri Feb 22 12:14:33.360 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31001| Fri Feb 22 12:14:33.481 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
Fri Feb 22 12:14:34.398 trying reconnect to 127.0.0.1:31002
Fri Feb 22 12:14:34.398 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002
ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002
m31000| Fri Feb 22 12:14:35.360 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31001| Fri Feb 22 12:14:35.482 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31001| Fri Feb 22 12:14:35.862 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
Fri Feb 22 12:14:36.400 trying reconnect to 127.0.0.1:31002
Fri Feb 22 12:14:36.400 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002
ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002
m31000| Fri Feb 22 12:14:37.359 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31000| Fri Feb 22 12:14:37.361 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31001| Fri Feb 22 12:14:37.483 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
Fri Feb 22 12:14:38.401 trying reconnect to 127.0.0.1:31002
Fri Feb 22 12:14:38.402 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002
ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002
m31000| Fri Feb 22 12:14:39.361 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002
m31000| Fri Feb 22 12:14:39.362 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002
m31000| Fri Feb 22 12:14:39.362 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying
m31000|
Fri Feb 22 12:14:39.362 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:39.362 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:39.363 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:39.483 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:39.483 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:39.484 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:39.484 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:39.484 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:39.484 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:40.403 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:40.403 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:41.363 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:41.363 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:41.364 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:41.364 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:41.364 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:41.364 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:41.485 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:41.485 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:41.485 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:41.485 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:41.486 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:41.486 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:42.129 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:14:42.405 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:42.405 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 
12:14:43.360 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31000| Fri Feb 22 12:14:43.365 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:43.365 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:43.365 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:43.366 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:43.366 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:43.366 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:43.486 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:43.487 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:43.487 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:43.487 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:43.487 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:43.488 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:44.407 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:44.407 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:45.367 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:45.367 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:45.367 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:45.368 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:45.368 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:45.368 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:45.488 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:45.488 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:45.489 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:45.489 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:45.489 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:45.489 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:46.408 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:46.408 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:47.369 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:47.369 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:47.369 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:47.369 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:47.370 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:47.370 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:47.490 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:47.490 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:47.490 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:47.491 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:47.491 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:47.491 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:48.249 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:14:48.410 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:48.410 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:49.361 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31000| Fri Feb 22 12:14:49.370 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:49.371 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:49.371 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:49.371 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:49.371 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:49.372 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:49.492 
[rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:49.492 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:49.492 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:49.492 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:49.493 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:49.493 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:50.412 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:50.412 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:51.372 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:51.372 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:51.373 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:51.373 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:51.373 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:51.373 
[rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:51.493 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:51.494 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:51.494 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:51.494 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:51.494 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:51.495 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:52.413 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:52.413 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:53.374 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:53.374 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:53.375 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:53.375 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:53.375 
[rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:53.375 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:53.495 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:53.495 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:53.495 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:53.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:53.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:53.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:54.415 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:54.415 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31001| Fri Feb 22 12:14:54.447 [rsMgr] replSet not trying to elect self as responded yea to someone else recently m31001| Fri Feb 22 12:14:55.361 [conn7] end connection 165.225.128.186:51761 (1 connection now open) m31001| Fri Feb 22 12:14:55.361 [initandlisten] connection accepted from 165.225.128.186:42289 #8 (2 connections now open) m31000| Fri Feb 22 12:14:55.362 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 
has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31000| Fri Feb 22 12:14:55.376 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:55.376 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:55.376 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:55.376 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:55.377 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:55.377 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:55.484 [conn17] end connection 165.225.128.186:50332 (3 connections now open) m31000| Fri Feb 22 12:14:55.484 [initandlisten] connection accepted from 165.225.128.186:62338 #19 (4 connections now open) m31001| Fri Feb 22 12:14:55.497 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:55.497 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:55.497 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:55.497 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:55.497 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:55.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:56.416 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:56.416 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:57.377 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:57.378 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:57.378 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:57.378 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:57.378 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:57.379 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:57.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:57.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:57.499 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:57.499 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:57.499 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:57.499 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:14:58.418 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:14:58.418 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:14:59.379 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:59.379 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:59.380 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:14:59.380 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:59.380 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:14:59.380 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:59.508 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:59.508 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:59.509 [rsHealthPoll] replset info 
bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:14:59.509 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:59.509 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:59.509 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:14:59.861 [rsMgr] replSet info electSelf 1 m31000| Fri Feb 22 12:14:59.862 [conn19] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31001 (1) Fri Feb 22 12:15:00.420 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:15:00.420 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:15:01.363 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31000| Fri Feb 22 12:15:01.381 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:01.381 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:01.382 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:15:01.382 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:01.382 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri 
Feb 22 12:15:01.383 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:01.510 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:01.510 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:01.510 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:15:01.511 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:01.511 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:01.511 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:15:02.421 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:15:02.421 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:15:03.383 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:03.384 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:03.384 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:15:03.384 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 
22 12:15:03.384 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:03.384 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:03.512 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:03.512 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:03.512 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:15:03.512 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:03.513 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:03.513 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:15:04.423 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:15:04.423 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:15:05.385 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:05.385 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:05.386 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 
12:15:05.386 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:05.386 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:05.386 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:05.513 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:05.514 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:05.514 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:15:05.514 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:05.515 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:05.515 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:15:06.424 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:15:06.424 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:15:07.363 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31000| Fri Feb 22 12:15:07.387 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: 
couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:07.387 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:07.387 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:15:07.388 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:07.388 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:07.388 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:07.515 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:07.516 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:07.516 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:15:07.516 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:07.516 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:07.517 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 Fri Feb 22 12:15:08.426 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:15:08.426 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: 
Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 m31000| Fri Feb 22 12:15:09.389 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:09.389 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:09.389 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:09.390 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31000| Fri Feb 22 12:15:09.390 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:09.390 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:09.390 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:09.517 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:09.518 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:09.518 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:09.518 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31001| Fri Feb 22 12:15:09.519 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:09.519 
[rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:09.519 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002 m31000| Fri Feb 22 12:15:09.864 [conn15] end connection 165.225.128.186:35167 (3 connections now open) m31001| Fri Feb 22 12:15:09.865 [rsMgr] replSet PRIMARY Fri Feb 22 12:15:10.427 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:15:10.428 reconnect 127.0.0.1:31002 failed couldn't connect to server 127.0.0.1:31002 ReplSetTest Could not call ismaster on node 2: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31002 killed max primary. Checking statuses. second is bs-smartos-x86-64-1.10gen.cc:31001 with priority 71.78441060241312 nreplsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 8 restart max 2 ReplSetTest n: 2 ports: [ 31000, 31001, 31002 ] 31002 number ReplSetTest stop *** Shutting down mongod in port 31002 *** Fri Feb 22 12:15:10.430 No db started on port: 31002 Fri Feb 22 12:15:10.430 shell: stopped mongo program on port 31002 ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31000, 31001, 31002 ] 31002 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31002, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testSet", "dbpath" : "$set-$node", "restart" : true, "pathOpts" : { "node" : 2, "set" : "testSet" } } ReplSetTest (Re)Starting.... 
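The restart options logged above include `"dbpath" : "$set-$node"` together with `"pathOpts" : { "node" : 2, "set" : "testSet" }`; the harness expands those `$`-tokens into the real data directory (`/data/db/testSet-2` in the startup command that follows). A minimal sketch of that substitution, assuming a hypothetical helper name (`resolvePath` is not the harness's actual function):

```javascript
// Hypothetical sketch of ReplSetTest's pathOpts substitution: each "$key"
// token in the dbpath template is replaced by the matching pathOpts value.
function resolvePath(template, pathOpts) {
    var path = template;
    for (var key in pathOpts) {
        path = path.replace("$" + key, String(pathOpts[key]));
    }
    return path;
}

// "$set-$node" with { set: "testSet", node: 2 } -> "testSet-2",
// which the harness prefixes with the data directory root.
var dbpath = "/data/db/" + resolvePath("$set-$node", { set: "testSet", node: 2 });
```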
Fri Feb 22 12:15:10.434 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31002 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-2 --setParameter enableTestCommands=1 m31002| note: noprealloc may hurt performance in many applications m31002| Fri Feb 22 12:15:10.527 [initandlisten] MongoDB starting : pid=9032 port=31002 dbpath=/data/db/testSet-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31002| Fri Feb 22 12:15:10.527 [initandlisten] m31002| Fri Feb 22 12:15:10.527 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31002| Fri Feb 22 12:15:10.527 [initandlisten] ** uses to detect impending page faults. m31002| Fri Feb 22 12:15:10.527 [initandlisten] ** This may result in slower performance for certain use cases m31002| Fri Feb 22 12:15:10.527 [initandlisten] m31002| Fri Feb 22 12:15:10.527 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31002| Fri Feb 22 12:15:10.527 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31002| Fri Feb 22 12:15:10.527 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31002| Fri Feb 22 12:15:10.527 [initandlisten] allocator: system m31002| Fri Feb 22 12:15:10.527 [initandlisten] options: { dbpath: "/data/db/testSet-2", noprealloc: true, oplogSize: 40, port: 31002, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31002| Fri Feb 22 12:15:10.528 [initandlisten] journal dir=/data/db/testSet-2/journal m31002| Fri Feb 22 12:15:10.528 [initandlisten] recover : no journal files present, no recovery needed m31002| Fri Feb 22 12:15:10.548 [initandlisten] waiting for connections on port 31002 m31002| Fri Feb 22 12:15:10.548 [websvr] admin web console waiting for connections on port 32002 m31002| Fri Feb 22 12:15:10.569 [initandlisten] connection accepted from 
165.225.128.186:50067 #1 (1 connection now open) m31002| Fri Feb 22 12:15:10.570 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 12:15:10.570 [conn1] end connection 165.225.128.186:50067 (0 connections now open) m31002| Fri Feb 22 12:15:10.570 [rsStart] replSet STARTUP2 m31000| Fri Feb 22 12:15:10.570 [initandlisten] connection accepted from 165.225.128.186:54086 #20 (4 connections now open) m31002| Fri Feb 22 12:15:10.571 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 thinks that we are down m31002| Fri Feb 22 12:15:10.571 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31002| Fri Feb 22 12:15:10.571 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31002| Fri Feb 22 12:15:10.635 [initandlisten] connection accepted from 127.0.0.1:52437 #2 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ] max restarted. Checking statuses. 
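The repeated `goal:`/`states:` lines that follow come from the test's `checkPrimaryIs()` helper, which polls each member's numeric state from `replSetGetStatus` (1 = PRIMARY, 2 = SECONDARY, 5 = STARTUP2, 8 = DOWN/unreachable) until the expected node is primary. A hedged sketch of that check, with hypothetical names and hard-coded sample data taken from the states printed in the log:

```javascript
// Hypothetical sketch of the test's checkPrimaryIs() comparison. The numeric
// states come from replSetGetStatus: 1 = PRIMARY, 2 = SECONDARY,
// 5 = STARTUP2, 8 = DOWN (unreachable).
function primaryIs(status, expectedPrimary) {
    var ok = true;
    status.members.forEach(function (member) {
        if (member.name === expectedPrimary) {
            if (member.state !== 1) ok = false; // goal: expected node is PRIMARY
        } else if (member.state !== 2) {
            ok = false;                         // everyone else must be SECONDARY
        }
    });
    return ok;
}

// States seen right after the restart above: 31000 is SECONDARY (2),
// 31001 is PRIMARY (1), 31002 is still DOWN (8) -> goal not met yet,
// so the test keeps polling.
var status = { members: [
    { name: "bs-smartos-x86-64-1.10gen.cc:31000", state: 2 },
    { name: "bs-smartos-x86-64-1.10gen.cc:31001", state: 1 },
    { name: "bs-smartos-x86-64-1.10gen.cc:31002", state: 8 }
] };
```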
nreplsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 8 m31000| Fri Feb 22 12:15:11.364 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY m31002| Fri Feb 22 12:15:11.391 [initandlisten] connection accepted from 165.225.128.186:43682 #3 (2 connections now open) m31002| Fri Feb 22 12:15:11.391 [conn3] end connection 165.225.128.186:43682 (1 connection now open) m31002| Fri Feb 22 12:15:11.391 [initandlisten] connection accepted from 165.225.128.186:34530 #4 (3 connections now open) m31000| Fri Feb 22 12:15:11.391 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31000| Fri Feb 22 12:15:11.391 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state STARTUP2 m31002| Fri Feb 22 12:15:11.520 [initandlisten] connection accepted from 165.225.128.186:43781 #5 (3 connections now open) m31002| Fri Feb 22 12:15:11.520 [conn5] end connection 165.225.128.186:43781 (2 connections now open) m31002| Fri Feb 22 12:15:11.520 [initandlisten] connection accepted from 165.225.128.186:63746 #6 (3 connections now open) m31001| Fri Feb 22 12:15:11.520 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 thinks that we are down m31001| Fri Feb 22 12:15:11.520 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31001| Fri Feb 22 12:15:11.520 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state STARTUP2 m31002| Fri Feb 22 12:15:11.570 [rsSync] replSet SECONDARY goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 5 m31000| Fri Feb 22 12:15:12.253 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 12:15:12.254 [initandlisten] connection 
accepted from 165.225.128.186:56586 #9 (3 connections now open) m31000| Fri Feb 22 12:15:12.255 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 12:15:12.255 [initandlisten] connection accepted from 165.225.128.186:34596 #10 (4 connections now open) m31000| Fri Feb 22 12:15:12.259 [rsSyncNotifier] build index local.me { _id: 1 } m31000| Fri Feb 22 12:15:12.262 [rsSyncNotifier] build index done. scanned 0 total records. 0.002 secs m31001| Fri Feb 22 12:15:12.570 [initandlisten] connection accepted from 165.225.128.186:64977 #11 (5 connections now open) m31002| Fri Feb 22 12:15:12.571 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31002| Fri Feb 22 12:15:12.571 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 5 m31001| Fri Feb 22 12:15:13.267 [slaveTracking] build index local.slaves { _id: 1 } m31001| Fri Feb 22 12:15:13.269 [slaveTracking] build index done. scanned 0 total records. 
0.002 secs m31000| Fri Feb 22 12:15:13.392 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 12:15:13.392 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31001 (priority 71.7844), bs-smartos-x86-64-1.10gen.cc:31002 is priority 87.6353 and 0 seconds behind m31001| Fri Feb 22 12:15:13.392 [conn8] replSet info stepping down as primary secs=1 m31001| Fri Feb 22 12:15:13.392 [conn8] replSet relinquishing primary state m31001| Fri Feb 22 12:15:13.392 [conn8] replSet SECONDARY m31001| Fri Feb 22 12:15:13.392 [conn8] replSet closing client sockets after relinquishing primary m31001| Fri Feb 22 12:15:13.392 [conn1] end connection 127.0.0.1:51402 (4 connections now open) m31002| Fri Feb 22 12:15:13.392 [conn6] end connection 165.225.128.186:63746 (2 connections now open) m31000| Fri Feb 22 12:15:13.392 [conn16] end connection 165.225.128.186:45067 (3 connections now open) m31000| Fri Feb 22 12:15:13.392 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:15:13.392 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 12:15:13.521 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31002 heartbeat failed, retrying m31002| Fri Feb 22 12:15:13.521 [initandlisten] connection accepted from 165.225.128.186:54271 #7 (3 connections now open) m31001| Fri Feb 22 12:15:13.521 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY Fri Feb 22 12:15:13.640 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31001 Fri Feb 22 12:15:13.640 SocketException: remote: 127.0.0.1:31001 error: 9001 socket exception [1] server [127.0.0.1:31001] Fri Feb 22 12:15:13.640 DBClientCursor::init call() failed Error: error doing query: failed nreplsets_priority1.js checkPrimaryIs reconnecting 
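The `[rsMgr] stepping down ... (priority 71.7844), ... is priority 87.6353 and 0 seconds behind` entries above show the manager asking a primary to relinquish when another member has strictly higher priority and its oplog is close enough to current. A hedged sketch of that decision; the 10-second freshness window is an assumption for illustration (the log only shows "0 seconds behind", not the actual bound):

```javascript
// Hedged sketch of the rsMgr step-down decision seen in the log: ask the
// current primary to step down when another member has strictly higher
// priority and is not too far behind on the oplog. FRESHNESS_WINDOW_SECS
// is an assumed threshold, not a value taken from this log.
var FRESHNESS_WINDOW_SECS = 10;

function shouldStepDown(primaryPriority, candidatePriority, candidateSecsBehind) {
    return candidatePriority > primaryPriority &&
           candidateSecsBehind <= FRESHNESS_WINDOW_SECS;
}

// The case at 12:15:13.392: primary 31001 (priority 71.7844) yields to
// 31002 (priority 87.6353, 0 seconds behind).
var stepDown = shouldStepDown(71.7844, 87.6353, 0);
```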
Fri Feb 22 12:15:13.641 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:15:13.641 reconnect 127.0.0.1:31001 ok m31001| Fri Feb 22 12:15:13.641 [initandlisten] connection accepted from 127.0.0.1:47883 #12 (5 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:15:14.571 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY m31002| Fri Feb 22 12:15:14.571 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31000 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31001 is already primary and more up-to-date' goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:15:15.364 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY m31000| Fri Feb 22 12:15:15.364 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31001 (priority 71.7844), bs-smartos-x86-64-1.10gen.cc:31002 is priority 87.6353 and 0 seconds behind m31000| Fri Feb 22 12:15:15.364 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31001 failed: { ok: 0.0, errmsg: "not primary so can't step down" } m31000| Fri Feb 22 12:15:15.364 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:15:17.310 [conn9] SocketException handling 
request, closing client connection: 9001 socket exception [2] server [165.225.128.186:56586] m31001| Fri Feb 22 12:15:17.313 [conn10] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:34596] goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:15:19.488 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:15:19Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 70, "optime" : { "t" : 1361535249000, "i" : 2 }, "optimeDate" : ISODate("2013-02-22T12:14:09Z"), "lastHeartbeat" : ISODate("2013-02-22T12:15:19Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001" }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 110, "optime" : { "t" : 1361535249000, "i" : 2 }, "optimeDate" : ISODate("2013-02-22T12:14:09Z"), "self" : true }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 8, "optime" : { "t" : 1361535249000, "i" : 2 }, "optimeDate" : ISODate("2013-02-22T12:14:09Z"), 
"lastHeartbeat" : ISODate("2013-02-22T12:15:19Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 } ], "ok" : 1 } m31002| Fri Feb 22 12:15:20.629 [rsMgr] replSet info electSelf 2 m31001| Fri Feb 22 12:15:20.630 [conn11] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31002 already voted for another m31000| Fri Feb 22 12:15:20.630 [conn20] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31002 already voted for another m31002| Fri Feb 22 12:15:20.630 [rsMgr] replSet couldn't elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:15:21.365 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:15:24.572 [conn20] end connection 165.225.128.186:54086 (2 connections now open) m31000| Fri Feb 22 12:15:24.573 [initandlisten] connection accepted from 165.225.128.186:48174 #21 (3 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:15:25.365 [conn8] end connection 165.225.128.186:42289 (2 connections now open) m31001| Fri Feb 22 12:15:25.365 [initandlisten] connection accepted from 165.225.128.186:38383 
#13 (3 connections now open) m31000| Fri Feb 22 12:15:25.488 [conn19] end connection 165.225.128.186:62338 (2 connections now open) m31000| Fri Feb 22 12:15:25.488 [initandlisten] connection accepted from 165.225.128.186:57602 #22 (3 connections now open) m31001| Fri Feb 22 12:15:25.489 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:15:27.366 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31002| Fri Feb 22 12:15:27.591 [rsMgr] replSet info electSelf 2 m31001| Fri Feb 22 12:15:27.591 [conn11] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31002 already voted for another m31000| Fri Feb 22 12:15:27.591 [conn21] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31002 already voted for another m31002| Fri Feb 22 12:15:27.592 [rsMgr] replSet couldn't elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 
bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:15:31.490 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:15:32.998 [rsMgr] replSet info electSelf 2 m31001| Fri Feb 22 12:15:32.998 [conn11] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2) m31000| Fri Feb 22 12:15:32.998 [conn21] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2) m31000| Fri Feb 22 12:15:33.367 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31002| Fri Feb 22 12:15:33.572 [rsMgr] replSet PRIMARY goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:15:34Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 85, "optime" : { "t" : 1361535249000, "i" : 2 }, "optimeDate" : ISODate("2013-02-22T12:14:09Z"), "lastHeartbeat" : ISODate("2013-02-22T12:15:33Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient 
error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001" }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 125, "optime" : { "t" : 1361535249000, "i" : 2 }, "optimeDate" : ISODate("2013-02-22T12:14:09Z"), "self" : true }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 23, "optime" : { "t" : 1361535249000, "i" : 2 }, "optimeDate" : ISODate("2013-02-22T12:14:09Z"), "lastHeartbeat" : ISODate("2013-02-22T12:15:33Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:15:34Z"), "pingMs" : 0 } ], "ok" : 1 } m31000| Fri Feb 22 12:15:35.395 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY m31001| Fri Feb 22 12:15:35.524 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1 Round 1: FIGHT! random priority : 23.432764573954046 random priority : 9.66845361981541 random priority : 13.56648791115731 replsets_priority1.js max is bs-smartos-x86-64-1.10gen.cc:31000 with priority 23.432764573954046, reconfiguring... 
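"Round 1" above assigns a fresh random priority to each member, identifies the max (31000 at 23.43...), and reconfigures; the next entries show the config version moving from 2 to 3, since `replSetReconfig` requires a version higher than the current one. A minimal sketch of that step, assuming a hypothetical helper name and using the priorities printed in the log:

```javascript
// Hypothetical sketch of the test's reconfig round: assign the random
// priorities to the members in order, bump the config version for
// replSetReconfig, and record which member should win the next election.
function reconfigWithPriorities(config, priorities) {
    config.members.forEach(function (member, i) {
        member.priority = priorities[i];
    });
    config.version += 1; // a reconfig must carry a higher version than the current config
    var max = config.members.reduce(function (a, b) {
        return b.priority > a.priority ? b : a;
    });
    return { config: config, expectedPrimary: max.host };
}

// Round 1 values from the log: 31000 gets the max priority, version 2 -> 3.
var round1 = reconfigWithPriorities(
    { version: 2, members: [
        { host: "bs-smartos-x86-64-1.10gen.cc:31000" },
        { host: "bs-smartos-x86-64-1.10gen.cc:31001" },
        { host: "bs-smartos-x86-64-1.10gen.cc:31002" }
    ] },
    [23.432764573954046, 9.66845361981541, 13.56648791115731]);
```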
version is 2, trying to update to 3 m31002| Fri Feb 22 12:15:35.678 [conn2] replSet replSetReconfig config object parses ok, 3 members specified m31002| Fri Feb 22 12:15:35.678 [conn2] replSet replSetReconfig [2] m31002| Fri Feb 22 12:15:35.678 [conn2] replSet info saving a newer config version to local.system.replset m31002| Fri Feb 22 12:15:35.699 [conn2] replSet saveConfigLocally done m31002| Fri Feb 22 12:15:35.699 [conn2] replSet relinquishing primary state m31002| Fri Feb 22 12:15:35.699 [conn2] replSet SECONDARY m31002| Fri Feb 22 12:15:35.699 [conn2] replSet closing client sockets after relinquishing primary m31002| Fri Feb 22 12:15:35.699 [conn2] replSet PRIMARY Fri Feb 22 12:15:35.699 DBClientCursor::init call() failed m31002| Fri Feb 22 12:15:35.699 [conn2] replSet replSetReconfig new config saved locally m31001| Fri Feb 22 12:15:35.699 [conn11] end connection 165.225.128.186:64977 (2 connections now open) nreplsets_priority1.js Caught exception: Error: error doing query: failed m31002| Fri Feb 22 12:15:35.700 [conn2] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:52437] m31002| Fri Feb 22 12:15:35.700 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31002| Fri Feb 22 12:15:35.700 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31002| Fri Feb 22 12:15:35.700 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 (priority 13.5665), bs-smartos-x86-64-1.10gen.cc:31000 is priority 23.4328 and 0 seconds behind m31002| Fri Feb 22 12:15:35.700 [rsMgr] replSet relinquishing primary state m31002| Fri Feb 22 12:15:35.700 [rsMgr] replSet SECONDARY m31002| Fri Feb 22 12:15:35.700 [rsMgr] replSet closing client sockets after relinquishing primary Fri Feb 22 12:15:35.700 trying reconnect to 127.0.0.1:31002 m31001| Fri Feb 22 12:15:35.700 [initandlisten] connection accepted from 165.225.128.186:64044 #14 (3 connections now open) Fri Feb 22 
12:15:35.700 reconnect 127.0.0.1:31002 ok m31002| Fri Feb 22 12:15:35.700 [initandlisten] connection accepted from 127.0.0.1:53526 #8 (3 connections now open) m31002| Fri Feb 22 12:15:35.700 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31002| Fri Feb 22 12:15:35.700 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY version is 2, trying to update to 3 m31002| Fri Feb 22 12:15:35.701 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31002 is already primary and more up-to-date' m31001| Fri Feb 22 12:15:35.866 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 12:15:35.866 [initandlisten] connection accepted from 165.225.128.186:33610 #9 (4 connections now open) m31001| Fri Feb 22 12:15:35.867 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:15:35.867 [rsSyncNotifier] Socket flush send() errno:9 Bad file number 165.225.128.186:31000 m31001| Fri Feb 22 12:15:35.867 [rsSyncNotifier] caught exception (socket exception [SEND_ERROR] for 165.225.128.186:31000) in destructor (~PiggyBackData) m31002| Fri Feb 22 12:15:35.867 [initandlisten] connection accepted from 165.225.128.186:50585 #10 (5 connections now open) m31000| Fri Feb 22 12:15:36.393 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 12:15:36.393 [initandlisten] connection accepted from 165.225.128.186:54178 #11 (6 connections now open) m31000| Fri Feb 22 12:15:36.396 [rsSync] build index local.replset.minvalid { _id: 1 } m31000| Fri Feb 22 12:15:36.397 [rsSync] build index done. scanned 0 total records. 
0.001 secs m31000| Fri Feb 22 12:15:36.398 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 12:15:36.398 [initandlisten] connection accepted from 165.225.128.186:48779 #12 (7 connections now open) m31002| Fri Feb 22 12:15:36.873 [slaveTracking] build index local.slaves { _id: 1 } m31002| Fri Feb 22 12:15:36.876 [slaveTracking] build index done. scanned 0 total records. 0.002 secs m31000| Fri Feb 22 12:15:37.395 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 12:15:37.395 [rsMgr] replset msgReceivedNewConfig version: version: 3 m31000| Fri Feb 22 12:15:37.395 [rsMgr] replSet info saving a newer config version to local.system.replset m31000| Fri Feb 22 12:15:37.408 [rsMgr] replSet saveConfigLocally done m31000| Fri Feb 22 12:15:37.408 [rsMgr] replSet replSetReconfig new config saved locally m31000| Fri Feb 22 12:15:37.408 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31000| Fri Feb 22 12:15:37.408 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY m31000| Fri Feb 22 12:15:37.408 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31000| Fri Feb 22 12:15:37.408 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 12:15:37.408 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31002 is already primary and more up-to-date' m31000| Fri Feb 22 12:15:37.408 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31002 is already primary and more up-to-date' m31000| Fri Feb 22 12:15:37.409 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 
'bs-smartos-x86-64-1.10gen.cc:31000 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31002 is already primary and more up-to-date'
m31001| Fri Feb 22 12:15:37.491 [rsMgr] replset msgReceivedNewConfig version: version: 3
m31001| Fri Feb 22 12:15:37.491 [rsMgr] replSet info saving a newer config version to local.system.replset
m31001| Fri Feb 22 12:15:37.505 [rsMgr] replSet saveConfigLocally done
m31001| Fri Feb 22 12:15:37.506 [rsMgr] replSet replSetReconfig new config saved locally
m31001| Fri Feb 22 12:15:37.506 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31001| Fri Feb 22 12:15:37.506 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:15:37.506 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:15:37.506 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:15:37.506 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 (priority 13.5665), bs-smartos-x86-64-1.10gen.cc:31000 is priority 23.4328 and 0 seconds behind
m31001| Fri Feb 22 12:15:37.506 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 failed: { ok: 0.0, errmsg: "not primary so can't step down" }
m31001| Fri Feb 22 12:15:37.506 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31001| Fri Feb 22 12:15:37.506 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:15:39.408 [conn4] end connection 165.225.128.186:34530 (6 connections now open)
m31002| Fri Feb 22 12:15:39.409 [initandlisten] connection accepted from 165.225.128.186:53425 #13 (7 connections now open)
m31002| Fri Feb 22 12:15:39.506 [conn7] end connection 165.225.128.186:54271 (6 connections now open)
m31002| Fri Feb 22 12:15:39.507 [initandlisten] connection accepted from 165.225.128.186:41249 #14 (7 connections now open)
m31002| Fri Feb 22 12:15:41.701 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:15:43.409 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:15:43.507 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:15:45.969 [conn9] end connection 165.225.128.186:33610 (6 connections now open)
m31002| Fri Feb 22 12:15:46.496 [conn11] end connection 165.225.128.186:54178 (5 connections now open)
m31002| Fri Feb 22 12:15:47.702 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:15:49.410 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:15:49.508 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:15:51.702 [conn21] end connection 165.225.128.186:48174 (2 connections now open)
m31000| Fri Feb 22 12:15:51.703 [initandlisten] connection accepted from 165.225.128.186:62951 #23 (3 connections now open)
m31002| Fri Feb 22 12:15:53.703 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31001| Fri Feb 22 12:15:55.410 [conn13] end connection 165.225.128.186:38383 (2 connections now open)
m31001| Fri Feb 22 12:15:55.410 [initandlisten] connection accepted from 165.225.128.186:60175 #15 (3 connections now open)
m31000| Fri Feb 22 12:15:55.411 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:15:55.508 [conn22] end connection 165.225.128.186:57602 (2 connections now open)
m31000| Fri Feb 22 12:15:55.509 [initandlisten] connection accepted from 165.225.128.186:49465 #24 (3 connections now open)
m31001| Fri Feb 22 12:15:55.509 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:15:59.704 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:16:01.412 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:16:01.510 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31001| Fri Feb 22 12:16:05.704 [conn14] end connection 165.225.128.186:64044 (2 connections now open)
m31001| Fri Feb 22 12:16:05.704 [initandlisten] connection accepted from 165.225.128.186:62189 #16 (3 connections now open)
m31002| Fri Feb 22 12:16:05.705 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:16:07.413 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:16:07.413 [conn13] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31001| Fri Feb 22 12:16:07.413 [conn15] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31000| Fri Feb 22 12:16:07.497 [rsMgr] replSet PRIMARY
m31001| Fri Feb 22 12:16:07.510 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31002| Fri Feb 22 12:16:07.705 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
replsets_priority1.js wait for 2 slaves
replsets_priority1.js wait for new config version 3
replsets_priority1.js awaitReplication
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361535335000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361535335000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31001
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31001, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31002
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31002, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361535335000, "i" : 1 }
reconfigured. Checking statuses.
replsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 1
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
rs.stop
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
ReplSetTest stop *** Shutting down mongod in port 31000 ***
m31000| Fri Feb 22 12:16:07.731 got signal 15 (Terminated), will terminate after current cmd ends
m31000| Fri Feb 22 12:16:07.731 [interruptThread] now exiting
m31000| Fri Feb 22 12:16:07.731 dbexit:
m31000| Fri Feb 22 12:16:07.732 [interruptThread] shutdown: going to close listening sockets...
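The `checkPrimaryIs` output above compares each member's numeric replica-set state (1 = PRIMARY, 2 = SECONDARY, 8 = down/unreachable, as printed later in this run) against a goal host. A minimal sketch of that kind of check, written against a `replSetGetStatus`-shaped document; `checkPrimaryIs` here is a hypothetical helper, not the actual `replsets_priority1.js` implementation:

```javascript
// Hypothetical sketch: verify that exactly one member is PRIMARY (state 1)
// and that it is the expected host, given a replSetGetStatus-style document.
function checkPrimaryIs(status, goalHost) {
  const states = {};
  for (const m of status.members) {
    states[m.name] = m.state; // e.g. 1 = PRIMARY, 2 = SECONDARY, 8 = DOWN
  }
  const primaries = status.members.filter((m) => m.state === 1);
  return states[goalHost] === 1 && primaries.length === 1;
}

// Example mirroring the states printed after the election above:
const status = {
  members: [
    { name: "bs-smartos-x86-64-1.10gen.cc:31000", state: 1 },
    { name: "bs-smartos-x86-64-1.10gen.cc:31001", state: 2 },
    { name: "bs-smartos-x86-64-1.10gen.cc:31002", state: 2 },
  ],
};
```

A real test would poll this predicate in a retry loop, since elections (as the veto messages above show) can take many seconds to converge.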
m31000| Fri Feb 22 12:16:07.732 [interruptThread] closing listening socket: 12
m31000| Fri Feb 22 12:16:07.732 [interruptThread] closing listening socket: 13
m31000| Fri Feb 22 12:16:07.732 [interruptThread] closing listening socket: 14
m31000| Fri Feb 22 12:16:07.732 [interruptThread] removing socket file: /tmp/mongodb-31000.sock
m31000| Fri Feb 22 12:16:07.732 [interruptThread] shutdown: going to flush diaglog...
m31000| Fri Feb 22 12:16:07.732 [interruptThread] shutdown: going to close sockets...
m31000| Fri Feb 22 12:16:07.732 [interruptThread] shutdown: waiting for fs preallocator...
m31000| Fri Feb 22 12:16:07.732 [interruptThread] shutdown: lock for final commit...
m31000| Fri Feb 22 12:16:07.732 [interruptThread] shutdown: final commit...
m31000| Fri Feb 22 12:16:07.732 [conn24] end connection 165.225.128.186:49465 (2 connections now open)
m31002| Fri Feb 22 12:16:07.732 [conn12] end connection 165.225.128.186:48779 (4 connections now open)
m31000| Fri Feb 22 12:16:07.732 [conn23] end connection 165.225.128.186:62951 (2 connections now open)
m31000| Fri Feb 22 12:16:07.732 [conn13] end connection 127.0.0.1:56613 (2 connections now open)
m31001| Fri Feb 22 12:16:07.732 [conn15] end connection 165.225.128.186:60175 (2 connections now open)
m31002| Fri Feb 22 12:16:07.732 [conn13] end connection 165.225.128.186:53425 (3 connections now open)
m31000| Fri Feb 22 12:16:07.747 [interruptThread] shutdown: closing all files...
m31000| Fri Feb 22 12:16:07.747 [interruptThread] closeAllFiles() finished
m31000| Fri Feb 22 12:16:07.747 [interruptThread] journalCleanup...
m31000| Fri Feb 22 12:16:07.747 [interruptThread] removeJournalFiles
m31000| Fri Feb 22 12:16:07.747 dbexit: really exiting now
m31001| Fri Feb 22 12:16:07.970 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:07.970 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:08.573 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:08.574 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:08.731 shell: stopped mongo program on port 31000
Fri Feb 22 12:16:08.732 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31000
Fri Feb 22 12:16:08.732 SocketException: remote: 127.0.0.1:31000 error: 9001 socket exception [1] server [127.0.0.1:31000]
Fri Feb 22 12:16:08.732 DBClientCursor::init call() failed
ReplSetTest Could not call ismaster on node 0: Error: error doing query: failed
m31002| Fri Feb 22 12:16:09.511 [conn14] end connection 165.225.128.186:41249 (2 connections now open)
m31001| Fri Feb 22 12:16:09.511 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Fri Feb 22 12:16:09.511 [initandlisten] connection accepted from 165.225.128.186:52596 #15 (3 connections now open)
m31001| Fri Feb 22 12:16:09.511 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:09.511 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond):
m31001| Fri Feb 22 12:16:09.511 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN
m31001| Fri Feb 22 12:16:09.512 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
m31002| Fri Feb 22 12:16:09.705 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Fri Feb 22 12:16:09.705 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:09.706 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond):
m31002| Fri Feb 22 12:16:09.706 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN
m31002| Fri Feb 22 12:16:09.959 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
Fri Feb 22 12:16:10.734 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:10.734 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:11.512 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:11.706 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:16:12.735 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:12.735 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:13.512 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:13.707 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:16:14.737 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:14.737 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:15.512 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:16:15.513 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:15.707 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:16.550 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
Fri Feb 22 12:16:16.739 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:16.739 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:17.513 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:17.708 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:16:18.741 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:18.741 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:19.514 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:19.709 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:16:20.742 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:20.742 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:21.513 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:16:21.515 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:21.709 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:21.709 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:21.710 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:21.710 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:21.710 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:21.710 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:22.216 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
Fri Feb 22 12:16:22.744 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:22.744 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:23.515 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:23.711 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:23.711 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:23.711 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:23.712 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:23.712 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:23.712 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:24.745 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:24.746 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:25.516 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:25.516 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:25.516 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:25.516 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:25.517 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:25.517 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:25.712 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:25.713 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:25.713 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:25.713 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:25.714 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:25.714 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:26.747 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:26.748 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:27.514 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:16:27.517 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:27.517 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:27.518 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:27.518 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:27.518 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:27.518 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:27.714 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:27.714 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:27.715 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:27.715 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:27.715 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:27.715 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:28.029 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
Fri Feb 22 12:16:28.749 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:28.749 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:29.519 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:29.519 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:29.519 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:29.519 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:29.520 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:29.520 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:29.715 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:29.716 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:29.716 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:29.716 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:29.716 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:29.716 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:30.751 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:30.751 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:31.520 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:31.521 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:31.521 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:31.521 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:31.521 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:31.522 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:31.717 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:31.717 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:31.717 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:31.717 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:31.718 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:31.718 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:32.753 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:32.753 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:33.515 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:16:33.522 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:33.522 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:33.523 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:33.523 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:33.523 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:33.523 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:33.718 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:33.718 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:33.719 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:33.719 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:33.719 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:33.719 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:34.549 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
Fri Feb 22 12:16:34.755 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:34.755 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:35.524 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:35.524 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:35.524 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:35.524 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:35.524 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:35.525 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:35.709 [conn16] end connection 165.225.128.186:62189 (1 connection now open)
m31001| Fri Feb 22 12:16:35.709 [initandlisten] connection accepted from 165.225.128.186:38630 #17 (2 connections now open)
m31002| Fri Feb 22 12:16:35.720 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:35.720 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:35.720 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:35.720 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:35.721 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:35.721 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:36.756 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:36.757 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:16:37.525 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:37.525 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:37.526 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:37.526 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:37.526 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:37.526 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:37.721 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:37.722 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:37.722 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:37.722 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:37.722 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:37.723 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
Fri Feb 22 12:16:38.758 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:38.758 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31002| Fri Feb 22 12:16:39.515 [conn15] end connection 165.225.128.186:52596 (2 connections now open)
m31002| Fri Feb 22 12:16:39.516 [initandlisten] connection accepted from 165.225.128.186:45286 #16 (3 connections now open)
m31001| Fri Feb 22 12:16:39.516 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31001| Fri Feb 22 12:16:39.527 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:39.527 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:39.527 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:16:39.527 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:39.528 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:16:39.528 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:39.723 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:39.723 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:39.723 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:16:39.724 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:39.724 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:39.724 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:16:40.337 [rsMgr] replSet info electSelf 2
m31001| Fri Feb 22 12:16:40.337 [conn17] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2)
m31002| Fri Feb 22 12:16:40.576 [rsMgr] replSet PRIMARY
Fri Feb 22 12:16:40.760 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:16:40.760 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
killed max primary. Checking statuses.
second is bs-smartos-x86-64-1.10gen.cc:31002 with priority 13.56648791115731
nreplsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 8 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1
restart max 0
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
ReplSetTest stop *** Shutting down mongod in port 31000 ***
Fri Feb 22 12:16:40.763 No db started on port: 31000
Fri Feb 22 12:16:40.763 shell: stopped mongo program on port 31000
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
{ "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31000, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testSet", "dbpath" : "$set-$node", "restart" : true, "pathOpts" : { "node" : 0, "set" : "testSet" } }
ReplSetTest (Re)Starting....
Fri Feb 22 12:16:40.767 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-0 --setParameter enableTestCommands=1
m31000| note: noprealloc may hurt performance in many applications
m31000| Fri Feb 22 12:16:40.856 [initandlisten] MongoDB starting : pid=9497 port=31000 dbpath=/data/db/testSet-0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31000| Fri Feb 22 12:16:40.857 [initandlisten]
m31000| Fri Feb 22 12:16:40.857 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31000| Fri Feb 22 12:16:40.857 [initandlisten] **       uses to detect impending page faults.
m31000| Fri Feb 22 12:16:40.857 [initandlisten] **       This may result in slower performance for certain use cases
m31000| Fri Feb 22 12:16:40.857 [initandlisten]
m31000| Fri Feb 22 12:16:40.857 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31000| Fri Feb 22 12:16:40.857 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31000| Fri Feb 22 12:16:40.857 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31000| Fri Feb 22 12:16:40.857 [initandlisten] allocator: system
m31000| Fri Feb 22 12:16:40.857 [initandlisten] options: { dbpath: "/data/db/testSet-0", noprealloc: true, oplogSize: 40, port: 31000, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31000| Fri Feb 22 12:16:40.857 [initandlisten] journal dir=/data/db/testSet-0/journal
m31000| Fri Feb 22 12:16:40.857 [initandlisten] recover : no journal files present, no recovery needed
m31000| Fri Feb 22 12:16:40.878 [initandlisten] waiting for connections on port 31000
m31000| Fri Feb 22 12:16:40.878 [websvr] admin web console waiting for connections on port 32000
m31000| Fri Feb 22 12:16:40.899 [initandlisten] connection accepted from 165.225.128.186:46387 #1 (1 connection now open)
m31000| Fri Feb 22 12:16:40.899 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:16:40.899 [conn1] end connection 165.225.128.186:46387 (0 connections now open)
m31000| Fri Feb 22 12:16:40.899 [rsStart] replSet STARTUP2
m31001| Fri Feb 22 12:16:40.900 [initandlisten] connection accepted from 165.225.128.186:64292 #18 (3 connections now open)
m31000| Fri Feb 22 12:16:40.900 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 thinks that we are down
m31000| Fri Feb 22 12:16:40.900 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31000| Fri Feb 22 12:16:40.900 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:16:40.968 [initandlisten] connection accepted from 127.0.0.1:36000 #2 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ]
max restarted. Checking statuses.
nreplsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 8 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1
m31001| Fri Feb 22 12:16:41.516 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY
m31000| Fri Feb 22 12:16:41.528 [initandlisten] connection accepted from 165.225.128.186:46170 #3 (2 connections now open)
m31000| Fri Feb 22 12:16:41.529 [conn3] end connection 165.225.128.186:46170 (1 connection now open)
m31000| Fri Feb 22 12:16:41.529 [initandlisten] connection accepted from 165.225.128.186:33601 #4 (2 connections now open)
m31001| Fri Feb 22 12:16:41.529 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:16:41.529 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state STARTUP2
m31000| Fri Feb 22 12:16:41.724 [initandlisten] connection accepted from 165.225.128.186:51259 #5 (3 connections now open)
m31000| Fri Feb 22 12:16:41.725 [conn5] end connection 165.225.128.186:51259 (2 connections now open)
m31000| Fri Feb 22 12:16:41.725 [initandlisten] connection accepted from 165.225.128.186:63200 #6 (3 connections now open)
m31002| Fri Feb 22 12:16:41.725 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 thinks that we are down
m31002| Fri Feb 22 12:16:41.725 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 12:16:41.725 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state STARTUP2
m31000| Fri Feb 22 12:16:41.900 [rsSync] replSet SECONDARY
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 5 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1
m31001| Fri Feb 22 12:16:41.972 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31002
m31002| Fri Feb 22 12:16:41.972 [initandlisten] connection accepted from 165.225.128.186:58745 #17 (4 connections now open)
m31002| Fri Feb 22 12:16:42.900 [initandlisten] connection accepted from 165.225.128.186:47632 #18 (5 connections now open)
m31000| Fri Feb 22 12:16:42.900 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:16:42.900 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 5 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1
m31001| Fri Feb 22 12:16:43.529 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:16:43.529 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 (priority 13.5665), bs-smartos-x86-64-1.10gen.cc:31000 is priority 23.4328 and 0 seconds behind
m31002| Fri Feb 22 12:16:43.529 [conn16] replSet info stepping down as primary secs=1
m31002| Fri Feb 22 12:16:43.529 [conn16] replSet relinquishing primary state
m31002| Fri Feb 22 12:16:43.529 [conn16] replSet SECONDARY
m31002| Fri Feb 22 12:16:43.529 [conn16] replSet closing client sockets after relinquishing primary
m31002| Fri Feb 22 12:16:43.530 [conn8] end connection 127.0.0.1:53526 (4 connections now open)
m31002| Fri Feb 22 12:16:43.530 [conn10] end connection 165.225.128.186:50585 (4 connections now open)
m31000| Fri Feb 22 12:16:43.530 [conn6] end connection 165.225.128.186:63200 (2 connections now open)
m31001| Fri Feb 22 12:16:43.530 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002
m31002| Fri Feb 22 12:16:43.725 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31000| Fri Feb 22 12:16:43.726 [initandlisten] connection accepted from 165.225.128.186:51268 #7 (3 connections now open)
m31002| Fri Feb 22 12:16:43.726 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
Fri Feb 22 12:16:43.973 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31002
Fri Feb 22 12:16:43.973 SocketException: remote: 127.0.0.1:31002 error: 9001 socket exception [1] server [127.0.0.1:31002]
Fri Feb 22 12:16:43.973 DBClientCursor::init call() failed
Error: error doing query: failed
nreplsets_priority1.js checkPrimaryIs reconnecting
Fri Feb 22 12:16:43.974 trying reconnect to 127.0.0.1:31002
Fri Feb 22 12:16:43.974 reconnect 127.0.0.1:31002 ok
m31002| Fri Feb 22 12:16:43.974 [initandlisten] connection accepted from 127.0.0.1:53655 #19 (4 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:16:44.901 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31000| Fri Feb 22 12:16:44.901 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31002 is already primary and more up-to-date'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:16:45.517 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:16:45.517 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 (priority 13.5665), bs-smartos-x86-64-1.10gen.cc:31000 is priority 23.4328 and 0 seconds behind
m31001| Fri Feb 22 12:16:45.517 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 failed: { ok: 0.0, errmsg: "not primary so can't step down" }
m31001| Fri Feb 22 12:16:45.517 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:16:47.024 [conn17] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:58745]
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:16:47Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 6, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "lastHeartbeat" : ISODate("2013-02-22T12:16:47Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 72, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "lastHeartbeat" : ISODate("2013-02-22T12:16:47Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 97, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "self" : true } ], "ok" : 1 }
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:16:49.712 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:16:50.902 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:16:50.902 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:16:50.902 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:16:50.902 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:16:51.518 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:16:51.727 [conn7] end connection 165.225.128.186:51268 (2 connections now open)
m31000| Fri Feb 22 12:16:51.727 [initandlisten] connection accepted from 165.225.128.186:52933 #8 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:16:54.902 [conn18] end connection 165.225.128.186:64292 (2 connections now open)
m31001| Fri Feb 22 12:16:54.902 [initandlisten] connection accepted from 165.225.128.186:37007 #19 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:16:55.531 [conn4] end connection 165.225.128.186:33601 (2 connections now open)
m31000| Fri Feb 22 12:16:55.531 [initandlisten] connection accepted from 165.225.128.186:59924 #9 (3 connections now open)
m31002| Fri Feb 22 12:16:55.712 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:16:56.903 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:16:56.903 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:16:56.903 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:16:56.903 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:16:57.519 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:17:01.713 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:17:02.903 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:17:02.903 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:17:02.904 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:17:02.904 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:17:03Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 22, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "lastHeartbeat" : ISODate("2013-02-22T12:17:01Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:17:02Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 88, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "lastHeartbeat" : ISODate("2013-02-22T12:17:01Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 113, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "self" : true } ], "ok" : 1 }
m31001| Fri Feb 22 12:17:03.519 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:17:05.713 [conn17] end connection 165.225.128.186:38630 (2 connections now open)
m31001| Fri Feb 22 12:17:05.714 [initandlisten] connection accepted from 165.225.128.186:36250 #20 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:17:07.714 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:17:08.904 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:17:08.904 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:17:08.904 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:17:08.904 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:17:09.520 [conn16] end connection 165.225.128.186:45286 (2 connections now open)
m31002| Fri Feb 22 12:17:09.520 [initandlisten] connection accepted from 165.225.128.186:43594 #20 (3 connections now open)
m31001| Fri Feb 22 12:17:09.521 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:17:10.904 [conn18] end connection 165.225.128.186:47632 (2 connections now open)
m31002| Fri Feb 22 12:17:10.904 [initandlisten] connection accepted from 165.225.128.186:47458 #21 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:17:13.715 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:17:14.905 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:17:14.905 [conn21] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31001| Fri Feb 22 12:17:14.905 [conn19] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:17:15.522 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:17:15.902 [rsMgr] replSet PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:17:17.534 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31002| Fri Feb 22 12:17:17.730 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 1 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:17:18Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 37, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "lastHeartbeat" : ISODate("2013-02-22T12:17:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 103, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "lastHeartbeat" : ISODate("2013-02-22T12:17:17Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 128, "optime" : { "t" : 1361535335000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:15:35Z"), "self" : true } ], "ok" : 1 }
Round 2: FIGHT!
random priority : 93.45762035809457
random priority : 85.46781886834651
random priority : 43.458109302446246
replsets_priority1.js max is bs-smartos-x86-64-1.10gen.cc:31000 with priority 93.45762035809457, reconfiguring...
version is 3, trying to update to 4
m31000| Fri Feb 22 12:17:18.029 [conn2] replSet replSetReconfig config object parses ok, 3 members specified
m31000| Fri Feb 22 12:17:18.029 [conn2] replSet replSetReconfig [2]
m31000| Fri Feb 22 12:17:18.029 [conn2] replSet info saving a newer config version to local.system.replset
m31000| Fri Feb 22 12:17:18.042 [conn2] replSet saveConfigLocally done
m31000| Fri Feb 22 12:17:18.042 [conn2] replSet relinquishing primary state
m31000| Fri Feb 22 12:17:18.042 [conn2] replSet SECONDARY
m31000| Fri Feb 22 12:17:18.042 [conn2] replSet closing client sockets after relinquishing primary
m31000| Fri Feb 22 12:17:18.042 [conn2] replSet PRIMARY
Fri Feb 22 12:17:18.042 DBClientCursor::init call() failed
m31000| Fri Feb 22 12:17:18.042 [conn2] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:17:18.042 [conn21] end connection 165.225.128.186:47458 (2 connections now open)
nreplsets_priority1.js Caught exception: Error: error doing query: failed
m31000| Fri Feb 22 12:17:18.042 [conn2] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:36000]
m31000| Fri Feb 22 12:17:18.043 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31000| Fri Feb 22 12:17:18.043 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
Fri Feb 22 12:17:18.043 trying reconnect to 127.0.0.1:31000
m31002| Fri Feb 22 12:17:18.043 [initandlisten] connection accepted from 165.225.128.186:43662 #22 (3 connections now open)
Fri Feb 22 12:17:18.043 reconnect 127.0.0.1:31000 ok
m31000| Fri Feb 22 12:17:18.043 [initandlisten] connection accepted from 127.0.0.1:56976 #10 (3 connections now open)
m31000| Fri Feb 22 12:17:18.043 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:17:18.043 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
version is 3, trying to update to 4
m31000| Fri Feb 22 12:17:18.044 [conn10] replSet replSetReconfig config object parses ok, 3 members specified
replsets_priority1.js wait for 2 slaves
replsets_priority1.js wait for new config version 4
m31001| Fri Feb 22 12:17:18.531 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:17:18.531 [initandlisten] connection accepted from 165.225.128.186:48059 #11 (4 connections now open)
m31001| Fri Feb 22 12:17:18.532 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:17:18.533 [initandlisten] connection accepted from 165.225.128.186:41570 #12 (5 connections now open)
m31002| Fri Feb 22 12:17:18.577 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:17:18.578 [initandlisten] connection accepted from 165.225.128.186:41564 #13 (6 connections now open)
m31002| Fri Feb 22 12:17:18.579 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:17:18.579 [initandlisten] connection accepted from 165.225.128.186:53664 #14 (7 connections now open)
m31001| Fri Feb 22 12:17:19.534 [rsMgr] replset msgReceivedNewConfig version: version: 4
m31001| Fri Feb 22 12:17:19.534 [rsMgr] replSet info saving a newer config version to local.system.replset
m31001| Fri Feb 22 12:17:19.549 [rsMgr] replSet saveConfigLocally done
m31001| Fri Feb 22 12:17:19.549 [rsMgr] replSet replSetReconfig new config saved locally
m31001| Fri Feb 22 12:17:19.549 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31001| Fri Feb 22 12:17:19.549 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:17:19.549 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:17:19.549 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31002| Fri Feb 22 12:17:19.716 [rsMgr] replset msgReceivedNewConfig version: version: 4
m31002| Fri Feb 22 12:17:19.716 [rsMgr] replSet info saving a newer config version to local.system.replset
m31002| Fri Feb 22 12:17:19.730 [rsMgr] replSet saveConfigLocally done
m31002| Fri Feb 22 12:17:19.730 [rsMgr] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:17:19.731 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 12:17:19.731 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31002| Fri Feb 22 12:17:19.731 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 12:17:19.731 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31002| Fri Feb 22 12:17:19.731 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
replsets_priority1.js awaitReplication
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361535438000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361535438000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31001
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31001, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31002
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31002, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361535438000, "i" : 1 }
reconfigured. Checking statuses.
replsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 1 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 rs.stop ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number ReplSetTest stop *** Shutting down mongod in port 31000 *** m31000| Fri Feb 22 12:17:19.862 got signal 15 (Terminated), will terminate after current cmd ends m31000| Fri Feb 22 12:17:19.862 [interruptThread] now exiting m31000| Fri Feb 22 12:17:19.862 dbexit: m31000| Fri Feb 22 12:17:19.862 [interruptThread] shutdown: going to close listening sockets... m31000| Fri Feb 22 12:17:19.862 [interruptThread] closing listening socket: 20 m31000| Fri Feb 22 12:17:19.862 [interruptThread] closing listening socket: 21 m31000| Fri Feb 22 12:17:19.862 [interruptThread] closing listening socket: 22 m31000| Fri Feb 22 12:17:19.862 [interruptThread] removing socket file: /tmp/mongodb-31000.sock m31000| Fri Feb 22 12:17:19.862 [interruptThread] shutdown: going to flush diaglog... m31000| Fri Feb 22 12:17:19.862 [interruptThread] shutdown: going to close sockets... m31000| Fri Feb 22 12:17:19.862 [interruptThread] shutdown: waiting for fs preallocator... m31000| Fri Feb 22 12:17:19.862 [interruptThread] shutdown: lock for final commit... m31000| Fri Feb 22 12:17:19.862 [interruptThread] shutdown: final commit... 
m31000| Fri Feb 22 12:17:19.862 [conn10] end connection 127.0.0.1:56976 (6 connections now open) m31000| Fri Feb 22 12:17:19.862 [conn9] end connection 165.225.128.186:59924 (6 connections now open) m31000| Fri Feb 22 12:17:19.862 [conn8] end connection 165.225.128.186:52933 (6 connections now open) m31001| Fri Feb 22 12:17:19.862 [conn19] end connection 165.225.128.186:37007 (2 connections now open) m31002| Fri Feb 22 12:17:19.862 [conn22] end connection 165.225.128.186:43662 (2 connections now open) m31001| Fri Feb 22 12:17:19.862 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:19.862 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:19.863 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:19.863 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:17:19.878 [interruptThread] shutdown: closing all files... m31000| Fri Feb 22 12:17:19.878 [interruptThread] closeAllFiles() finished m31000| Fri Feb 22 12:17:19.878 [interruptThread] journalCleanup... 
m31000| Fri Feb 22 12:17:19.878 [interruptThread] removeJournalFiles m31000| Fri Feb 22 12:17:19.878 dbexit: really exiting now Fri Feb 22 12:17:20.862 shell: stopped mongo program on port 31000 Fri Feb 22 12:17:20.862 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31000 Fri Feb 22 12:17:20.863 SocketException: remote: 127.0.0.1:31000 error: 9001 socket exception [1] server [127.0.0.1:31000] Fri Feb 22 12:17:20.863 DBClientCursor::init call() failed ReplSetTest Could not call ismaster on node 0: Error: error doing query: failed m31001| Fri Feb 22 12:17:21.549 [rsHealthPoll] DBClientCursor::init call() failed m31001| Fri Feb 22 12:17:21.549 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:21.550 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond): m31001| Fri Feb 22 12:17:21.550 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN m31001| Fri Feb 22 12:17:21.550 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date' m31001| Fri Feb 22 12:17:21.731 [conn20] end connection 165.225.128.186:36250 (1 connection now open) m31002| Fri Feb 22 12:17:21.731 [rsHealthPoll] DBClientCursor::init call() failed m31002| Fri Feb 22 12:17:21.731 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:21.731 [initandlisten] connection accepted from 165.225.128.186:59780 #21 (2 connections now open) m31002| Fri Feb 22 12:17:21.732 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond): m31002| Fri Feb 22 12:17:21.732 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN m31002| Fri Feb 22 12:17:21.732 [rsMgr] not electing self, 
bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' Fri Feb 22 12:17:22.864 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:22.864 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:17:23.550 [conn20] end connection 165.225.128.186:43594 (1 connection now open) m31002| Fri Feb 22 12:17:23.550 [initandlisten] connection accepted from 165.225.128.186:45382 #23 (2 connections now open) m31001| Fri Feb 22 12:17:23.550 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:23.732 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying Fri Feb 22 12:17:24.866 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:24.866 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:25.551 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:25.733 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying Fri Feb 22 12:17:26.867 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:26.868 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:27.552 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:27.733 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31002| 
Fri Feb 22 12:17:27.733 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:27.843 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:17:28.869 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:28.869 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:29.552 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:29.734 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying Fri Feb 22 12:17:30.871 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:30.871 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:31.553 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:31.734 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying Fri Feb 22 12:17:32.872 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:32.872 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:33.554 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:33.733 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31002| Fri Feb 22 12:17:33.735 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:33.897 
[rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:17:34.874 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:34.874 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:35.554 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:35.735 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying Fri Feb 22 12:17:36.876 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:36.876 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:37.555 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:37.739 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:37.739 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:37.740 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:37.740 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:37.740 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:37.740 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:17:38.877 trying reconnect to 127.0.0.1:31000 Fri Feb 
22 12:17:38.877 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:39.555 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:39.556 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:39.556 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:39.556 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:39.556 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:39.557 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:39.734 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31002| Fri Feb 22 12:17:39.741 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:39.741 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:39.741 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:39.742 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 
12:17:39.742 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:39.742 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:40.431 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:17:40.879 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:40.879 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:41.557 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:41.557 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:41.557 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:41.558 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:41.558 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:41.558 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:41.743 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:41.743 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 
12:17:41.743 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:41.744 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:41.745 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:41.745 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:17:42.881 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:42.881 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:43.559 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:43.559 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:43.559 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:43.559 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:43.560 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:43.560 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:43.746 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 
12:17:43.746 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:43.746 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:43.746 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:43.747 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:43.747 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:17:44.882 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:44.883 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31001| Fri Feb 22 12:17:45.560 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:45.561 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:45.561 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:17:45.561 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:45.561 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:45.561 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 
12:17:45.735 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001' m31002| Fri Feb 22 12:17:45.747 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:45.748 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:45.748 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:17:45.748 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:45.748 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:17:45.749 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:17:46.390 [rsMgr] replSet info electSelf 1 m31002| Fri Feb 22 12:17:46.390 [conn23] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31001 (1) m31001| Fri Feb 22 12:17:46.864 [rsMgr] replSet PRIMARY Fri Feb 22 12:17:46.884 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:17:46.884 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 killed max primary. Checking statuses. 
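The election that finally succeeds here (electSelf on 31001, a yea vote from 31002, then PRIMARY) works because a candidate needs a strict majority of the set's votes, its own vote included; in this 3-member set that is 2. A small sketch of the arithmetic (hypothetical helper names):

```javascript
// Strict majority required to win an election in an n-member set.
function majorityNeeded(n) {
  return Math.floor(n / 2) + 1;
}

// Tally "yea" responses; the candidate's own vote is counted in
// yeaVotes, so one extra yea suffices in a 3-member set.
function electionWins(totalMembers, yeaVotes) {
  return yeaVotes >= majorityNeeded(totalMembers);
}
```

This is also why the later "couldn't elect self, only received 1 votes" attempts fail: one vote out of three is short of the required two.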
second is bs-smartos-x86-64-1.10gen.cc:31001 with priority 85.46781886834651 replsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 8 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 2 restart max 0 ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number ReplSetTest stop *** Shutting down mongod in port 31000 *** Fri Feb 22 12:17:46.886 No db started on port: 31000 Fri Feb 22 12:17:46.886 shell: stopped mongo program on port 31000 ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31000, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testSet", "dbpath" : "$set-$node", "restart" : true, "pathOpts" : { "node" : 0, "set" : "testSet" } } ReplSetTest (Re)Starting.... Fri Feb 22 12:17:46.891 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-0 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Fri Feb 22 12:17:46.980 [initandlisten] MongoDB starting : pid=9654 port=31000 dbpath=/data/db/testSet-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 12:17:46.980 [initandlisten] m31000| Fri Feb 22 12:17:46.980 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 12:17:46.980 [initandlisten] ** uses to detect impending page faults. 
m31000| Fri Feb 22 12:17:46.980 [initandlisten] ** This may result in slower performance for certain use cases m31000| Fri Feb 22 12:17:46.980 [initandlisten] m31000| Fri Feb 22 12:17:46.980 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31000| Fri Feb 22 12:17:46.980 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31000| Fri Feb 22 12:17:46.980 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31000| Fri Feb 22 12:17:46.980 [initandlisten] allocator: system m31000| Fri Feb 22 12:17:46.980 [initandlisten] options: { dbpath: "/data/db/testSet-0", noprealloc: true, oplogSize: 40, port: 31000, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Fri Feb 22 12:17:46.981 [initandlisten] journal dir=/data/db/testSet-0/journal m31000| Fri Feb 22 12:17:46.981 [initandlisten] recover : no journal files present, no recovery needed m31000| Fri Feb 22 12:17:47.000 [websvr] admin web console waiting for connections on port 32000 m31000| Fri Feb 22 12:17:47.000 [initandlisten] waiting for connections on port 31000 m31000| Fri Feb 22 12:17:47.023 [initandlisten] connection accepted from 165.225.128.186:60870 #1 (1 connection now open) m31000| Fri Feb 22 12:17:47.023 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:17:47.023 [conn1] end connection 165.225.128.186:60870 (0 connections now open) m31000| Fri Feb 22 12:17:47.024 [rsStart] replSet STARTUP2 m31000| Fri Feb 22 12:17:47.092 [initandlisten] connection accepted from 127.0.0.1:39228 #2 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ] max restarted. Checking statuses. 
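After "restart max 0" the test polls checkPrimaryIs until the restarted node reclaims primary (goal: bs-smartos-x86-64-1.10gen.cc:31000==1), because it holds the highest priority in the config (93.4576 vs. 85.4678). A sketch of picking that expected primary (hypothetical expectedPrimary helper; ignores optime freshness, which the real step-down logic also considers):

```javascript
// Given member records { host, priority, up }, return the host the
// test should expect to end up primary: the highest-priority member
// that is currently up.
function expectedPrimary(members) {
  let best = null;
  for (const m of members) {
    if (!m.up) continue;
    if (best === null || m.priority > best.priority) best = m;
  }
  return best && best.host;
}
```

The step-down a few lines later ("stepping down ... (priority 85.4678), ... is priority 93.4576 and 0 seconds behind") is the flip side of the same rule: a primary yields once a higher-priority member is up and caught up.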
replsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 8 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:17:47.562 [initandlisten] connection accepted from 165.225.128.186:44720 #3 (2 connections now open) m31000| Fri Feb 22 12:17:47.562 [conn3] end connection 165.225.128.186:44720 (1 connection now open) m31000| Fri Feb 22 12:17:47.562 [initandlisten] connection accepted from 165.225.128.186:58561 #4 (3 connections now open) m31001| Fri Feb 22 12:17:47.563 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 thinks that we are down m31001| Fri Feb 22 12:17:47.563 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31001| Fri Feb 22 12:17:47.563 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state STARTUP2 m31002| Fri Feb 22 12:17:47.735 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY m31000| Fri Feb 22 12:17:47.749 [initandlisten] connection accepted from 165.225.128.186:46815 #5 (3 connections now open) m31000| Fri Feb 22 12:17:47.749 [conn5] end connection 165.225.128.186:46815 (2 connections now open) m31000| Fri Feb 22 12:17:47.749 [initandlisten] connection accepted from 165.225.128.186:59075 #6 (4 connections now open) m31002| Fri Feb 22 12:17:47.750 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 thinks that we are down m31002| Fri Feb 22 12:17:47.750 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31002| Fri Feb 22 12:17:47.750 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state STARTUP2 m31002| Fri Feb 22 12:17:47.863 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 12:17:47.864 [initandlisten] connection accepted from 165.225.128.186:54579 #22 (3 connections now open) m31000| Fri Feb 22 12:17:48.024 [rsSync] 
replSet SECONDARY goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 5 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:17:49.024 [initandlisten] connection accepted from 165.225.128.186:61333 #23 (4 connections now open) m31002| Fri Feb 22 12:17:49.024 [initandlisten] connection accepted from 165.225.128.186:51925 #24 (3 connections now open) m31000| Fri Feb 22 12:17:49.024 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31000| Fri Feb 22 12:17:49.024 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31000| Fri Feb 22 12:17:49.025 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 12:17:49.025 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 5 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:17:49.569 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31001| Fri Feb 22 12:17:49.569 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31001 (priority 85.4678), bs-smartos-x86-64-1.10gen.cc:31000 is priority 93.4576 and 0 seconds behind m31001| Fri Feb 22 12:17:49.569 [rsMgr] replSet relinquishing primary state m31001| Fri Feb 22 12:17:49.569 [rsMgr] replSet SECONDARY m31001| Fri Feb 22 12:17:49.569 [rsMgr] replSet closing client sockets after relinquishing primary m31001| Fri Feb 22 12:17:49.569 [conn12] end connection 127.0.0.1:47883 (3 connections now open) m31000| Fri Feb 22 12:17:49.570 [conn4] end connection 165.225.128.186:58561 (2 connections now open) m31002| Fri Feb 22 12:17:49.570 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:17:49.735 
[rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY m31002| Fri Feb 22 12:17:49.736 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31002| Fri Feb 22 12:17:49.750 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31002| Fri Feb 22 12:17:49.750 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' Fri Feb 22 12:17:50.097 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31001 Fri Feb 22 12:17:50.097 SocketException: remote: 127.0.0.1:31001 error: 9001 socket exception [1] server [127.0.0.1:31001] Fri Feb 22 12:17:50.097 DBClientCursor::init call() failed Error: error doing query: failed nreplsets_priority1.js checkPrimaryIs reconnecting Fri Feb 22 12:17:50.097 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:17:50.098 reconnect 127.0.0.1:31001 ok m31001| Fri Feb 22 12:17:50.098 [initandlisten] connection accepted from 127.0.0.1:50577 #24 (4 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:17:51.025 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY m31000| Fri Feb 22 12:17:51.025 [rsMgr] replSet info electSelf 0 m31002| Fri Feb 22 12:17:51.025 [conn24] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31001| Fri Feb 22 12:17:51.025 [conn23] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31000| Fri Feb 22 12:17:51.025 [rsMgr] replSet couldn't elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:17:51.569 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31000| Fri Feb 22 12:17:51.570 [initandlisten] connection accepted from 165.225.128.186:48113 #7 (3 connections now open)
m31001| Fri Feb 22 12:17:51.736 [conn21] end connection 165.225.128.186:59780 (3 connections now open)
m31001| Fri Feb 22 12:17:51.736 [initandlisten] connection accepted from 165.225.128.186:45181 #25 (4 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:17:52.915 [conn22] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:54579]
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:17:53.554 [conn23] end connection 165.225.128.186:45382 (2 connections now open)
m31002| Fri Feb 22 12:17:53.554 [initandlisten] connection accepted from 165.225.128.186:63448 #25 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:17:54Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "lastHeartbeat" : ISODate("2013-02-22T12:17:53Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 265, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "self" : true }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 35, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "lastHeartbeat" : ISODate("2013-02-22T12:17:53Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:17:53Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001" } ], "ok" : 1 }
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:17:55.555 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:17:55.737 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:17:57.026 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:17:57.026 [conn24] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:17:57.026 [conn23] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:17:57.026 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:18:01.556 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:18:01.737 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:18:03.026 [conn23] end connection 165.225.128.186:61333 (2 connections now open)
m31001| Fri Feb 22 12:18:03.027 [initandlisten] connection accepted from 165.225.128.186:55605 #26 (3 connections now open)
m31000| Fri Feb 22 12:18:03.027 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:18:03.027 [conn24] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:18:03.027 [conn26] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:18:03.027 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:18:07.557 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:18:07.738 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:18:07.752 [conn6] end connection 165.225.128.186:59075 (2 connections now open)
m31000| Fri Feb 22 12:18:07.753 [initandlisten] connection accepted from 165.225.128.186:43552 #8 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:18:09.028 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:18:09.028 [conn24] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:18:09.028 [conn26] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:18:09.028 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:18:09Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 22, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "lastHeartbeat" : ISODate("2013-02-22T12:18:07Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:18:09Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 280, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "self" : true }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 50, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "lastHeartbeat" : ISODate("2013-02-22T12:18:07Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:18:07Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001" } ], "ok" : 1 }
m31000| Fri Feb 22 12:18:09.572 [conn7] end connection 165.225.128.186:48113 (2 connections now open)
m31000| Fri Feb 22 12:18:09.573 [initandlisten] connection accepted from 165.225.128.186:57624 #9 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:18:13.557 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:18:13.739 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:18:15.029 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:18:15.029 [conn24] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31001| Fri Feb 22 12:18:15.029 [conn26] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another
m31000| Fri Feb 22 12:18:15.029 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:18:17.028 [conn24] end connection 165.225.128.186:51925 (2 connections now open)
m31002| Fri Feb 22 12:18:17.029 [initandlisten] connection accepted from 165.225.128.186:41248 #26 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:18:19.558 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:18:19.739 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:18:21.029 [rsMgr] replSet info electSelf 0
m31001| Fri Feb 22 12:18:21.030 [conn26] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31002| Fri Feb 22 12:18:21.030 [conn26] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:18:21.739 [conn25] end connection 165.225.128.186:45181 (2 connections now open)
m31001| Fri Feb 22 12:18:21.740 [initandlisten] connection accepted from 165.225.128.186:44476 #27 (3 connections now open)
m31000| Fri Feb 22 12:18:22.026 [rsMgr] replSet PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:18:23.559 [conn25] end connection 165.225.128.186:63448 (2 connections now open)
m31002| Fri Feb 22 12:18:23.559 [initandlisten] connection accepted from 165.225.128.186:46310 #27 (3 connections now open)
m31001| Fri Feb 22 12:18:23.575 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31002| Fri Feb 22 12:18:23.755 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31001| Fri Feb 22 12:18:23.866 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:18:23.866 [initandlisten] connection accepted from 165.225.128.186:43351 #10 (4 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31000==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 1
bs-smartos-x86-64-1.10gen.cc:31001: 2
bs-smartos-x86-64-1.10gen.cc:31002: 2
status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:18:24Z"), "myState" : 2, "syncingTo" : "bs-smartos-x86-64-1.10gen.cc:31000", "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 37, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "lastHeartbeat" : ISODate("2013-02-22T12:18:23Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 295, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "errmsg" : "syncing to: bs-smartos-x86-64-1.10gen.cc:31000", "self" : true }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 65, "optime" : { "t" : 1361535438000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:17:18Z"), "lastHeartbeat" : ISODate("2013-02-22T12:18:23Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:18:23Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31001" } ], "ok" : 1 }
Round 3: FIGHT!
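The repeated `goal:`/`states:` blocks above come from the test harness polling `rs.status()` until a given member reports the wanted state (1 = PRIMARY, 2 = SECONDARY). A minimal node-runnable sketch of that comparison, under stated assumptions: the helper names (`statesOf`, `goalMet`) are hypothetical, not the actual replsets_priority1.js code, and the status document is pared down from the `status:` lines above.

```javascript
// Hypothetical sketch of the goal/states check the harness prints:
// compare an rs.status()-shaped document against a goal like "host == 1".
function statesOf(status) {
  // Map each member's host name to its numeric replica-set state,
  // as in the "states:" blocks in the log.
  const states = {};
  for (const m of status.members) states[m.name] = m.state;
  return states;
}

function goalMet(status, host, wantedState) {
  return statesOf(status)[host] === wantedState;
}

// Shape pared down from the "status:" documents above (dates elided).
const status = {
  set: "testSet",
  myState: 2,
  members: [
    { _id: 0, name: "bs-smartos-x86-64-1.10gen.cc:31000", state: 2 },
    { _id: 1, name: "bs-smartos-x86-64-1.10gen.cc:31001", state: 2 },
    { _id: 2, name: "bs-smartos-x86-64-1.10gen.cc:31002", state: 2 },
  ],
};

// All three members are still SECONDARY, so the goal ":31000 == 1" is not met
// yet and the harness keeps polling.
console.log(goalMet(status, "bs-smartos-x86-64-1.10gen.cc:31000", 1)); // false
```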
random priority : 26.43059827387333
random priority : 68.20806791074574
random priority : 65.14162549283355
replsets_priority1.js max is bs-smartos-x86-64-1.10gen.cc:31001 with priority 68.20806791074574, reconfiguring...
version is 4, trying to update to 5
m31000| Fri Feb 22 12:18:24.140 [conn2] replSet replSetReconfig config object parses ok, 3 members specified
m31000| Fri Feb 22 12:18:24.140 [conn2] replSet replSetReconfig [2]
m31000| Fri Feb 22 12:18:24.140 [conn2] replSet info saving a newer config version to local.system.replset
m31000| Fri Feb 22 12:18:24.155 [conn2] replSet saveConfigLocally done
m31000| Fri Feb 22 12:18:24.155 [conn2] replSet relinquishing primary state
m31000| Fri Feb 22 12:18:24.155 [conn2] replSet SECONDARY
m31000| Fri Feb 22 12:18:24.155 [conn2] replSet closing client sockets after relinquishing primary
Fri Feb 22 12:18:24.155 DBClientCursor::init call() failed
m31000| Fri Feb 22 12:18:24.155 [conn2] replSet PRIMARY
m31000| Fri Feb 22 12:18:24.155 [conn10] end connection 165.225.128.186:43351 (3 connections now open)
m31000| Fri Feb 22 12:18:24.156 [conn2] replSet replSetReconfig new config saved locally
replsets_priority1.js Caught exception: Error: error doing query: failed
m31002| Fri Feb 22 12:18:24.156 [conn26] end connection 165.225.128.186:41248 (2 connections now open)
m31001| Fri Feb 22 12:18:24.156 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:18:24.156 [conn2] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:39228]
m31001| Fri Feb 22 12:18:24.156 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:18:24.156 [rsSyncNotifier] Socket flush send() errno:9 Bad file number 165.225.128.186:31000
m31001| Fri Feb 22 12:18:24.156 [rsSyncNotifier] caught exception (socket exception [SEND_ERROR] for 165.225.128.186:31000) in destructor (~PiggyBackData)
m31000| Fri Feb 22 12:18:24.156 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31000| Fri Feb 22 12:18:24.156 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:18:24.156 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31000 (priority 26.4306), bs-smartos-x86-64-1.10gen.cc:31001 is priority 68.2081 and 0 seconds behind
Fri Feb 22 12:18:24.156 trying reconnect to 127.0.0.1:31000
m31000| Fri Feb 22 12:18:24.156 [rsMgr] replSet relinquishing primary state
m31000| Fri Feb 22 12:18:24.156 [rsMgr] replSet SECONDARY
m31000| Fri Feb 22 12:18:24.156 [rsMgr] replSet closing client sockets after relinquishing primary
m31000| Fri Feb 22 12:18:24.156 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31002: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31002
m31000| Fri Feb 22 12:18:24.156 [initandlisten] connection accepted from 165.225.128.186:57450 #11 (3 connections now open)
Fri Feb 22 12:18:24.156 reconnect 127.0.0.1:31000 ok
m31000| Fri Feb 22 12:18:24.156 [initandlisten] connection accepted from 127.0.0.1:59363 #12 (4 connections now open)
version is 4, trying to update to 5
m31002| Fri Feb 22 12:18:24.157 [initandlisten] connection accepted from 165.225.128.186:57373 #28 (3 connections now open)
m31002| Fri Feb 22 12:18:24.157 [conn28] end connection 165.225.128.186:57373 (2 connections now open)
m31002| Fri Feb 22 12:18:24.157 [initandlisten] connection accepted from 165.225.128.186:52146 #29 (3 connections now open)
m31000| Fri Feb 22 12:18:24.157 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:18:24.157 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31000| Fri Feb 22 12:18:24.157 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
m31002| Fri Feb 22 12:18:24.571 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:18:24.571 [initandlisten] connection accepted from 165.225.128.186:62657 #13 (5 connections now open)
m31002| Fri Feb 22 12:18:24.572 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31000
m31000| Fri Feb 22 12:18:24.573 [initandlisten] connection accepted from 165.225.128.186:38959 #14 (6 connections now open)
m31001| Fri Feb 22 12:18:25.575 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:18:25.575 [rsMgr] replset msgReceivedNewConfig version: version: 5
m31001| Fri Feb 22 12:18:25.575 [rsMgr] replSet info saving a newer config version to local.system.replset
m31001| Fri Feb 22 12:18:25.585 [rsMgr] replSet saveConfigLocally done
m31001| Fri Feb 22 12:18:25.585 [rsMgr] replSet replSetReconfig new config saved locally
m31001| Fri Feb 22 12:18:25.585 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:18:25.585 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:18:25.585 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31001| Fri Feb 22 12:18:25.585 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:18:25.585 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
m31001| Fri Feb 22 12:18:25.586 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
m31001| Fri Feb 22 12:18:25.586 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
m31002| Fri Feb 22 12:18:25.740 [rsMgr] replset msgReceivedNewConfig version: version: 5
m31002| Fri Feb 22 12:18:25.740 [rsMgr] replSet info saving a newer config version to local.system.replset
m31002| Fri Feb 22 12:18:25.753 [rsMgr] replSet saveConfigLocally done
m31002| Fri Feb 22 12:18:25.754 [rsMgr] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:18:25.754 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 12:18:25.754 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 12:18:25.754 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31002| Fri Feb 22 12:18:25.754 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 12:18:25.754 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31000 (priority 26.4306), bs-smartos-x86-64-1.10gen.cc:31001 is priority 68.2081 and 0 seconds behind
m31002| Fri Feb 22 12:18:25.754 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31000 failed: { ok: 0.0, errmsg: "not primary so can't step down" }
m31002| Fri Feb 22 12:18:25.754 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31002| Fri Feb 22 12:18:25.754 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:18:29.754 [conn27] end connection 165.225.128.186:44476 (2 connections now open)
m31001| Fri Feb 22 12:18:29.755 [initandlisten] connection accepted from 165.225.128.186:61280 #28 (3 connections now open)
m31000| Fri Feb 22 12:18:30.157 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31002| Fri Feb 22 12:18:31.586 [conn27] end connection 165.225.128.186:46310 (2 connections now open)
m31002| Fri Feb 22 12:18:31.586 [initandlisten] connection accepted from 165.225.128.186:52841 #30 (3 connections now open)
m31002| Fri Feb 22 12:18:31.755 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:18:32.406 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:18:34.674 [conn13] end connection 165.225.128.186:62657 (5 connections now open)
m31000| Fri Feb 22 12:18:36.158 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31002| Fri Feb 22 12:18:37.755 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:18:38.583 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:18:40.158 [conn26] end connection 165.225.128.186:55605 (2 connections now open)
m31001| Fri Feb 22 12:18:40.158 [initandlisten] connection accepted from 165.225.128.186:42293 #29 (3 connections now open)
m31000| Fri Feb 22 12:18:42.159 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31002| Fri Feb 22 12:18:43.756 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:18:44.434 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:18:45.756 [conn8] end connection 165.225.128.186:43552 (4 connections now open)
m31000| Fri Feb 22 12:18:45.756 [initandlisten] connection accepted from 165.225.128.186:51737 #15 (5 connections now open)
m31000| Fri Feb 22 12:18:47.588 [conn9] end connection 165.225.128.186:57624 (4 connections now open)
m31000| Fri Feb 22 12:18:47.589 [initandlisten] connection accepted from 165.225.128.186:58434 #16 (5 connections now open)
m31000| Fri Feb 22 12:18:48.160 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31002| Fri Feb 22 12:18:49.757 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:18:50.593 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31002| Fri Feb 22 12:18:54.161 [conn29] end connection 165.225.128.186:52146 (2 connections now open)
m31002| Fri Feb 22 12:18:54.161 [initandlisten] connection accepted from 165.225.128.186:61762 #31 (3 connections now open)
m31000| Fri Feb 22 12:18:54.161 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31002| Fri Feb 22 12:18:55.758 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31001| Fri Feb 22 12:18:55.905 [rsMgr] replSet info electSelf 1
m31002| Fri Feb 22 12:18:55.905 [conn30] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31001 (1)
m31000| Fri Feb 22 12:18:55.905 [conn16] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31001 (1)
m31001| Fri Feb 22 12:18:56.157 [rsMgr] replSet PRIMARY
m31000| Fri Feb 22 12:18:56.161 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY
replsets_priority1.js wait for 2 slaves
replsets_priority1.js wait for new config version 5
replsets_priority1.js awaitReplication
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31001, is { "t" : 1361535504000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361535504000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31000
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31000, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31002
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31002, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361535504000, "i" : 1 }
reconfigured. Checking statuses.
replsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31001==1
states:
bs-smartos-x86-64-1.10gen.cc:31000: 2
bs-smartos-x86-64-1.10gen.cc:31001: 1
bs-smartos-x86-64-1.10gen.cc:31002: 2
rs.stop
ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number
ReplSetTest stop *** Shutting down mongod in port 31001 ***
m31001| Fri Feb 22 12:18:56.188 got signal 15 (Terminated), will terminate after current cmd ends
m31001| Fri Feb 22 12:18:56.188 [interruptThread] now exiting
m31001| Fri Feb 22 12:18:56.188 dbexit:
m31001| Fri Feb 22 12:18:56.188 [interruptThread] shutdown: going to close listening sockets...
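The `[rsMgr] stepping down ... (priority 26.4306), ... is priority 68.2081 and 0 seconds behind` lines above show the rule this test exercises: a primary yields when another eligible member has higher priority and is caught up. A hypothetical simplification of that decision (the function name, member shape, and the 10-second lag threshold are assumptions for illustration, not mongod's actual code):

```javascript
// Hypothetical simplification of the decision logged by [rsMgr]:
// a primary should step down when some other member has strictly higher
// priority and is close enough behind in replication.
function shouldStepDown(primary, others, maxLagSecs) {
  return others.some(
    (m) => m.priority > primary.priority && m.lagSecs <= maxLagSecs
  );
}

// Priorities taken from the "random priority" lines in the log;
// lag is 0, matching "0 seconds behind".
const primary = { host: "bs-smartos-x86-64-1.10gen.cc:31000", priority: 26.4306 };
const others = [
  { host: "bs-smartos-x86-64-1.10gen.cc:31001", priority: 68.2081, lagSecs: 0 },
  { host: "bs-smartos-x86-64-1.10gen.cc:31002", priority: 65.1416, lagSecs: 0 },
];

console.log(shouldStepDown(primary, others, 10)); // true
```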
m31001| Fri Feb 22 12:18:56.188 [interruptThread] closing listening socket: 15
m31001| Fri Feb 22 12:18:56.189 [interruptThread] closing listening socket: 16
m31001| Fri Feb 22 12:18:56.189 [interruptThread] closing listening socket: 17
m31001| Fri Feb 22 12:18:56.189 [interruptThread] removing socket file: /tmp/mongodb-31001.sock
m31001| Fri Feb 22 12:18:56.189 [interruptThread] shutdown: going to flush diaglog...
m31001| Fri Feb 22 12:18:56.189 [interruptThread] shutdown: going to close sockets...
m31001| Fri Feb 22 12:18:56.189 [interruptThread] shutdown: waiting for fs preallocator...
m31001| Fri Feb 22 12:18:56.189 [interruptThread] shutdown: lock for final commit...
m31001| Fri Feb 22 12:18:56.189 [interruptThread] shutdown: final commit...
m31001| Fri Feb 22 12:18:56.189 [conn24] end connection 127.0.0.1:50577 (2 connections now open)
m31001| Fri Feb 22 12:18:56.189 [conn28] end connection 165.225.128.186:61280 (2 connections now open)
m31001| Fri Feb 22 12:18:56.189 [conn29] end connection 165.225.128.186:42293 (2 connections now open)
m31000| Fri Feb 22 12:18:56.189 [conn11] end connection 165.225.128.186:57450 (4 connections now open)
m31000| Fri Feb 22 12:18:56.189 [conn16] end connection 165.225.128.186:58434 (4 connections now open)
m31002| Fri Feb 22 12:18:56.189 [conn30] end connection 165.225.128.186:52841 (2 connections now open)
m31001| Fri Feb 22 12:18:56.202 [interruptThread] shutdown: closing all files...
m31001| Fri Feb 22 12:18:56.203 [interruptThread] closeAllFiles() finished
m31001| Fri Feb 22 12:18:56.203 [interruptThread] journalCleanup...
m31001| Fri Feb 22 12:18:56.203 [interruptThread] removeJournalFiles
m31001| Fri Feb 22 12:18:56.203 dbexit: really exiting now
m31000| Fri Feb 22 12:18:57.028 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31001
m31000| Fri Feb 22 12:18:57.028 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
Fri Feb 22 12:18:57.188 shell: stopped mongo program on port 31001
Fri Feb 22 12:18:57.189 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31001
Fri Feb 22 12:18:57.189 SocketException: remote: 127.0.0.1:31001 error: 9001 socket exception [1] server [127.0.0.1:31001]
Fri Feb 22 12:18:57.189 DBClientCursor::init call() failed
ReplSetTest Could not call ismaster on node 1: Error: error doing query: failed
m31002| Fri Feb 22 12:18:57.758 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Fri Feb 22 12:18:57.758 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
m31002| Fri Feb 22 12:18:57.759 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31001 is down (or slow to respond):
m31002| Fri Feb 22 12:18:57.759 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state DOWN
m31002| Fri Feb 22 12:18:57.759 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31000 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31001 is already primary and more up-to-date'
m31000| Fri Feb 22 12:18:58.161 [rsHealthPoll] DBClientCursor::init call() failed
m31000| Fri Feb 22 12:18:58.161 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
m31000| Fri Feb 22 12:18:58.161 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31001 is down (or slow to respond):
m31000| Fri Feb 22 12:18:58.161 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state DOWN
m31000| Fri Feb 22 12:18:58.162 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
Fri Feb 22 12:18:59.190 trying reconnect to 127.0.0.1:31001
Fri Feb 22 12:18:59.191 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001
ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001
m31002| Fri Feb 22 12:18:59.759 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:18:59.759 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:18:59.760 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
m31002| Fri Feb 22 12:18:59.760 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:18:59.760 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:18:59.760 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31000| Fri Feb 22 12:19:00.162 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
Fri Feb 22 12:19:01.192 trying reconnect to 127.0.0.1:31001
Fri Feb 22 12:19:01.192 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001
ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001
m31002| Fri Feb 22 12:19:01.761 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:01.761 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:01.761 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
m31002| Fri Feb 22 12:19:01.761 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:01.762 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:01.762 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31000| Fri Feb 22 12:19:02.163 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
Fri Feb 22 12:19:03.194 trying reconnect to 127.0.0.1:31001
Fri Feb 22 12:19:03.194 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001
ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001
m31002| Fri Feb 22 12:19:03.762 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:03.763 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:03.763 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
m31002| Fri Feb 22 12:19:03.763 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:03.763 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:03.764 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:04.072 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:19:04.163 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
m31000| Fri Feb 22 12:19:04.163 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
Fri Feb 22 12:19:05.196 trying reconnect to 127.0.0.1:31001
Fri Feb 22 12:19:05.196 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001
ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001
m31002| Fri Feb 22 12:19:05.764 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:05.764 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:05.765 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
m31002| Fri Feb 22 12:19:05.765 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:05.765 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:19:05.766 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001
m31000| Fri Feb 22 12:19:06.164 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying
Fri Feb 22 12:19:07.198 trying reconnect to 127.0.0.1:31001
Fri Feb 22 12:19:07.198 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001
ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:07.766 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:07.766 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:07.766 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:07.767 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:07.767 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:07.767 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:08.164 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying Fri Feb 22 12:19:09.203 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:09.203 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:09.768 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:09.768 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:09.768 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:09.769 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: 
couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:09.769 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:09.769 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:10.150 [rsMgr] replSet not trying to elect self as responded yea to someone else recently m31000| Fri Feb 22 12:19:10.164 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31000| Fri Feb 22 12:19:10.165 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:10.165 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:10.165 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:10.166 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:10.166 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:10.166 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 12:19:11.204 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:11.205 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:11.770 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:11.770 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:11.770 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:11.770 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:11.770 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:11.771 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:12.166 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:12.167 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:12.167 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:12.167 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:12.167 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:12.167 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 12:19:13.206 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:13.206 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest 
Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:13.771 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:13.771 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:13.772 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:13.772 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:13.772 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:13.772 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:14.168 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:14.168 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:14.168 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:14.169 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:14.169 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:14.169 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 
12:19:15.208 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:15.208 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31000| Fri Feb 22 12:19:15.760 [conn15] end connection 165.225.128.186:51737 (2 connections now open) m31000| Fri Feb 22 12:19:15.760 [initandlisten] connection accepted from 165.225.128.186:43759 #17 (3 connections now open) m31002| Fri Feb 22 12:19:15.773 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:15.773 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:15.773 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:15.774 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:15.774 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:15.774 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:16.165 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31000| Fri Feb 22 12:19:16.170 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:16.170 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:16.170 
[rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:16.170 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:16.170 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:16.171 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:16.793 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:19:17.209 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:17.210 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:17.774 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:17.775 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:17.775 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:17.775 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:17.775 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:17.776 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:18.171 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:18.171 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:18.172 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:18.172 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:18.172 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:18.172 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 12:19:19.211 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:19.211 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:19.776 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:19.776 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:19.777 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:19.777 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:19.777 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:19.777 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:20.173 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:20.173 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:20.173 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:20.173 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:20.173 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:20.174 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 12:19:21.213 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:21.213 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:21.778 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:21.778 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:21.778 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:21.779 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:21.779 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:21.779 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:22.166 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31000| Fri Feb 22 12:19:22.174 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:22.174 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:22.175 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:22.175 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:22.175 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:22.175 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:22.402 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:19:23.214 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:23.215 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:23.780 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:23.780 
[rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:23.780 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:23.780 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:23.781 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:23.781 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:24.166 [conn31] end connection 165.225.128.186:61762 (1 connection now open) m31002| Fri Feb 22 12:19:24.166 [initandlisten] connection accepted from 165.225.128.186:64377 #32 (2 connections now open) m31000| Fri Feb 22 12:19:24.176 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:24.176 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:24.176 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:24.176 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:24.176 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:24.177 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 12:19:25.216 trying reconnect to 127.0.0.1:31001 Fri Feb 22 
12:19:25.216 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:25.781 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:25.781 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:25.782 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:25.782 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:25.782 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:25.783 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:26.177 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:26.177 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:26.178 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:26.178 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:26.178 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:26.178 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 Fri Feb 22 12:19:27.218 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:27.218 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 m31002| Fri Feb 22 12:19:27.783 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:27.783 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:27.784 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:19:27.784 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:27.784 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:27.784 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:27.835 [rsMgr] replSet info electSelf 2 m31000| Fri Feb 22 12:19:27.835 [conn17] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2) m31000| Fri Feb 22 12:19:28.167 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31000| Fri Feb 22 12:19:28.179 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:28.179 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:28.179 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31000| Fri Feb 22 12:19:28.179 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:28.180 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31000| Fri Feb 22 12:19:28.180 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31001: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31001 m31002| Fri Feb 22 12:19:28.677 [rsMgr] replSet PRIMARY Fri Feb 22 12:19:29.219 trying reconnect to 127.0.0.1:31001 Fri Feb 22 12:19:29.220 reconnect 127.0.0.1:31001 failed couldn't connect to server 127.0.0.1:31001 ReplSetTest Could not call ismaster on node 1: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31001 killed max primary. Checking statuses. second is bs-smartos-x86-64-1.10gen.cc:31002 with priority 65.14162549283355 nreplsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 8 bs-smartos-x86-64-1.10gen.cc:31002: 1 restart max 1 ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number ReplSetTest stop *** Shutting down mongod in port 31001 *** Fri Feb 22 12:19:29.221 No db started on port: 31001 Fri Feb 22 12:19:29.221 shell: stopped mongo program on port 31001 ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31000, 31001, 31002 ] 31001 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31001, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testSet", "dbpath" : "$set-$node", "restart" : true, "pathOpts" : { "node" : 1, "set" : "testSet" } } ReplSetTest (Re)Starting.... 
Fri Feb 22 12:19:29.225 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31001 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-1 --setParameter enableTestCommands=1 m31001| note: noprealloc may hurt performance in many applications m31001| Fri Feb 22 12:19:29.318 [initandlisten] MongoDB starting : pid=9863 port=31001 dbpath=/data/db/testSet-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31001| Fri Feb 22 12:19:29.319 [initandlisten] m31001| Fri Feb 22 12:19:29.319 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31001| Fri Feb 22 12:19:29.319 [initandlisten] ** uses to detect impending page faults. m31001| Fri Feb 22 12:19:29.319 [initandlisten] ** This may result in slower performance for certain use cases m31001| Fri Feb 22 12:19:29.319 [initandlisten] m31001| Fri Feb 22 12:19:29.319 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31001| Fri Feb 22 12:19:29.319 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31001| Fri Feb 22 12:19:29.319 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31001| Fri Feb 22 12:19:29.319 [initandlisten] allocator: system m31001| Fri Feb 22 12:19:29.319 [initandlisten] options: { dbpath: "/data/db/testSet-1", noprealloc: true, oplogSize: 40, port: 31001, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31001| Fri Feb 22 12:19:29.319 [initandlisten] journal dir=/data/db/testSet-1/journal m31001| Fri Feb 22 12:19:29.319 [initandlisten] recover : no journal files present, no recovery needed m31001| Fri Feb 22 12:19:29.342 [initandlisten] waiting for connections on port 31001 m31001| Fri Feb 22 12:19:29.342 [websvr] admin web console waiting for connections on port 32001 m31001| Fri Feb 22 12:19:29.364 [initandlisten] connection accepted from 
165.225.128.186:45145 #1 (1 connection now open) m31001| Fri Feb 22 12:19:29.365 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31001 m31001| Fri Feb 22 12:19:29.365 [conn1] end connection 165.225.128.186:45145 (0 connections now open) m31001| Fri Feb 22 12:19:29.365 [rsStart] replSet STARTUP2 m31000| Fri Feb 22 12:19:29.365 [initandlisten] connection accepted from 165.225.128.186:49067 #18 (4 connections now open) m31001| Fri Feb 22 12:19:29.366 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 thinks that we are down m31001| Fri Feb 22 12:19:29.366 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31001| Fri Feb 22 12:19:29.366 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31001| Fri Feb 22 12:19:29.426 [initandlisten] connection accepted from 127.0.0.1:56592 #2 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ] max restarted. Checking statuses. 
nreplsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 8 bs-smartos-x86-64-1.10gen.cc:31002: 1
m31001| Fri Feb 22 12:19:29.785 [initandlisten] connection accepted from 165.225.128.186:47049 #3 (2 connections now open)
m31001| Fri Feb 22 12:19:29.785 [conn3] end connection 165.225.128.186:47049 (1 connection now open)
m31001| Fri Feb 22 12:19:29.785 [initandlisten] connection accepted from 165.225.128.186:48556 #4 (2 connections now open)
m31002| Fri Feb 22 12:19:29.785 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 thinks that we are down
m31002| Fri Feb 22 12:19:29.785 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 12:19:29.785 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state STARTUP2
m31000| Fri Feb 22 12:19:30.170 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY
m31001| Fri Feb 22 12:19:30.180 [initandlisten] connection accepted from 165.225.128.186:58656 #5 (3 connections now open)
m31001| Fri Feb 22 12:19:30.181 [conn5] end connection 165.225.128.186:58656 (2 connections now open)
m31001| Fri Feb 22 12:19:30.181 [initandlisten] connection accepted from 165.225.128.186:57485 #6 (3 connections now open)
m31000| Fri Feb 22 12:19:30.181 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31000| Fri Feb 22 12:19:30.181 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state STARTUP2
m31001| Fri Feb 22 12:19:30.365 [rsSync] replSet SECONDARY
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 5 bs-smartos-x86-64-1.10gen.cc:31002: 1
m31000| Fri Feb 22 12:19:31.030 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31002
m31002| Fri Feb 22 12:19:31.031 [initandlisten] connection accepted from 165.225.128.186:52043 #33 (3 connections now open)
m31002| Fri Feb 22 12:19:31.365 [initandlisten] connection accepted from 165.225.128.186:54968 #34 (4 connections now open)
m31001| Fri Feb 22 12:19:31.366 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31001| Fri Feb 22 12:19:31.366 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 5 bs-smartos-x86-64-1.10gen.cc:31002: 1
m31002| Fri Feb 22 12:19:31.786 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31002| Fri Feb 22 12:19:31.786 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 (priority 65.1416), bs-smartos-x86-64-1.10gen.cc:31001 is priority 68.2081 and 0 seconds behind
m31002| Fri Feb 22 12:19:31.786 [rsMgr] replSet relinquishing primary state
m31002| Fri Feb 22 12:19:31.786 [rsMgr] replSet SECONDARY
m31002| Fri Feb 22 12:19:31.786 [rsMgr] replSet closing client sockets after relinquishing primary
m31002| Fri Feb 22 12:19:31.786 [conn19] end connection 127.0.0.1:53655 (3 connections now open)
m31000| Fri Feb 22 12:19:31.786 [conn14] end connection 165.225.128.186:38959 (3 connections now open)
m31000| Fri Feb 22 12:19:31.786 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002
m31000| Fri Feb 22 12:19:32.030 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31002
m31002| Fri Feb 22 12:19:32.030 [initandlisten] connection accepted from 165.225.128.186:48675 #35 (4 connections now open)
m31000| Fri Feb 22 12:19:32.170 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31000| Fri Feb 22 12:19:32.170 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31000| Fri Feb 22 12:19:32.181 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:19:32.182 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
Fri Feb 22 12:19:32.431 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31002
Fri Feb 22 12:19:32.431 SocketException: remote: 127.0.0.1:31002 error: 9001 socket exception [1] server [127.0.0.1:31002]
Fri Feb 22 12:19:32.431 DBClientCursor::init call() failed
Error: error doing query: failed
nreplsets_priority1.js checkPrimaryIs reconnecting
Fri Feb 22 12:19:32.432 trying reconnect to 127.0.0.1:31002
Fri Feb 22 12:19:32.432 reconnect 127.0.0.1:31002 ok
m31002| Fri Feb 22 12:19:32.432 [initandlisten] connection accepted from 127.0.0.1:50078 #36 (5 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:19:33.366 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:19:33.560 [rsMgr] replSet info electSelf 1
m31002| Fri Feb 22 12:19:33.560 [conn34] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31000| Fri Feb 22 12:19:33.560 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31001| Fri Feb 22 12:19:33.560 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:36.082 [conn33] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:52043]
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
status: {
	"set" : "testSet",
	"date" : ISODate("2013-02-22T12:19:36Z"),
	"myState" : 2,
	"members" : [
		{
			"_id" : 0,
			"name" : "bs-smartos-x86-64-1.10gen.cc:31000",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 71,
			"optime" : { "t" : 1361535504000, "i" : 1 },
			"optimeDate" : ISODate("2013-02-22T12:18:24Z"),
			"lastHeartbeat" : ISODate("2013-02-22T12:19:35Z"),
			"lastHeartbeatRecv" : ISODate("2013-02-22T12:19:36Z"),
			"pingMs" : 0,
			"lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002"
		},
		{
			"_id" : 1,
			"name" : "bs-smartos-x86-64-1.10gen.cc:31001",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 7,
			"optime" : { "t" : 1361535504000, "i" : 1 },
			"optimeDate" : ISODate("2013-02-22T12:18:24Z"),
			"lastHeartbeat" : ISODate("2013-02-22T12:19:35Z"),
			"lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
			"pingMs" : 0
		},
		{
			"_id" : 2,
			"name" : "bs-smartos-x86-64-1.10gen.cc:31002",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 266,
			"optime" : { "t" : 1361535504000, "i" : 1 },
			"optimeDate" : ISODate("2013-02-22T12:18:24Z"),
			"self" : true
		}
	],
	"ok" : 1
}
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:37.763 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31000| Fri Feb 22 12:19:38.171 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:19:39.646 [rsMgr] replSet info electSelf 1
m31002| Fri Feb 22 12:19:39.646 [conn34] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31000| Fri Feb 22 12:19:39.647 [conn18] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31001| Fri Feb 22 12:19:39.647 [rsMgr] replSet couldn't elect self, only received 1 votes
m31001| Fri Feb 22 12:19:40.182 [conn6] end connection 165.225.128.186:57485 (2 connections now open)
m31001| Fri Feb 22 12:19:40.182 [initandlisten] connection accepted from 165.225.128.186:49486 #7 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:19:43.367 [conn18] end connection 165.225.128.186:49067 (2 connections now open)
m31000| Fri Feb 22 12:19:43.368 [initandlisten] connection accepted from 165.225.128.186:43747 #19 (3 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:43.764 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31000| Fri Feb 22 12:19:44.172 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31000| Fri Feb 22 12:19:45.764 [conn17] end connection 165.225.128.186:43759 (2 connections now open)
m31000| Fri Feb 22 12:19:45.764 [initandlisten] connection accepted from 165.225.128.186:60331 #20 (3 connections now open)
m31001| Fri Feb 22 12:19:46.230 [rsMgr] replSet info electSelf 1
m31002| Fri Feb 22 12:19:46.230 [conn34] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31000| Fri Feb 22 12:19:46.230 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31001| Fri Feb 22 12:19:46.231 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:49.765 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31000| Fri Feb 22 12:19:50.173 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
status: {
	"set" : "testSet",
	"date" : ISODate("2013-02-22T12:19:51Z"),
	"myState" : 2,
	"members" : [
		{
			"_id" : 0,
			"name" : "bs-smartos-x86-64-1.10gen.cc:31000",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 86,
			"optime" : { "t" : 1361535504000, "i" : 1 },
			"optimeDate" : ISODate("2013-02-22T12:18:24Z"),
			"lastHeartbeat" : ISODate("2013-02-22T12:19:49Z"),
			"lastHeartbeatRecv" : ISODate("2013-02-22T12:19:50Z"),
			"pingMs" : 0,
			"lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002"
		},
		{
			"_id" : 1,
			"name" : "bs-smartos-x86-64-1.10gen.cc:31001",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 22,
			"optime" : { "t" : 1361535504000, "i" : 1 },
			"optimeDate" : ISODate("2013-02-22T12:18:24Z"),
			"lastHeartbeat" : ISODate("2013-02-22T12:19:49Z"),
			"lastHeartbeatRecv" : ISODate("2013-02-22T12:19:51Z"),
			"pingMs" : 0
		},
		{
			"_id" : 2,
			"name" : "bs-smartos-x86-64-1.10gen.cc:31002",
			"health" : 1,
			"state" : 2,
			"stateStr" : "SECONDARY",
			"uptime" : 281,
			"optime" : { "t" : 1361535504000, "i" : 1 },
			"optimeDate" : ISODate("2013-02-22T12:18:24Z"),
			"self" : true
		}
	],
	"ok" : 1
}
m31001| Fri Feb 22 12:19:51.933 [rsMgr] replSet info electSelf 1
m31002| Fri Feb 22 12:19:51.933 [conn34] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31000| Fri Feb 22 12:19:51.933 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31001 already voted for another
m31001| Fri Feb 22 12:19:51.933 [rsMgr] replSet couldn't elect self, only received 1 votes
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:54.173 [conn32] end connection 165.225.128.186:64377 (3 connections now open)
m31002| Fri Feb 22 12:19:54.174 [initandlisten] connection accepted from 165.225.128.186:63305 #37 (4 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:55.766 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
m31000| Fri Feb 22 12:19:56.174 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31000 has lower priority than bs-smartos-x86-64-1.10gen.cc:31001'
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:19:58.299 [rsMgr] replSet info electSelf 1
m31002| Fri Feb 22 12:19:58.299 [conn34] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31001 (1)
m31000| Fri Feb 22 12:19:58.299 [conn19] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31001 (1)
m31001| Fri Feb 22 12:19:58.367 [rsMgr] replSet PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31002| Fri Feb 22 12:19:59.369 [conn34] end connection 165.225.128.186:54968 (3 connections now open)
m31002| Fri Feb 22 12:19:59.369 [initandlisten] connection accepted from 165.225.128.186:58972 #38 (4 connections now open)
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
m31001| Fri Feb 22 12:19:59.789 [conn4] end connection 165.225.128.186:48556 (2 connections now open)
m31001| Fri Feb 22 12:19:59.789 [initandlisten] connection accepted from 165.225.128.186:54678 #8 (3 connections now open)
m31002| Fri Feb 22 12:19:59.789 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY
m31000| Fri Feb 22 12:20:00.185 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state PRIMARY
goal: bs-smartos-x86-64-1.10gen.cc:31001==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 1 bs-smartos-x86-64-1.10gen.cc:31002: 2
Round 4: FIGHT!
random priority : 84.40026505850255
random priority : 3.9521608501672745
random priority : 55.263624200597405
replsets_priority1.js max is bs-smartos-x86-64-1.10gen.cc:31000 with priority 84.40026505850255, reconfiguring...
version is 5, trying to update to 6
m31001| Fri Feb 22 12:20:00.465 [conn2] replSet replSetReconfig config object parses ok, 3 members specified
m31001| Fri Feb 22 12:20:00.465 [conn2] replSet replSetReconfig [2]
m31001| Fri Feb 22 12:20:00.465 [conn2] replSet info saving a newer config version to local.system.replset
m31001| Fri Feb 22 12:20:00.480 [conn2] replSet saveConfigLocally done
m31001| Fri Feb 22 12:20:00.480 [conn2] replSet relinquishing primary state
m31001| Fri Feb 22 12:20:00.480 [conn2] replSet SECONDARY
m31001| Fri Feb 22 12:20:00.480 [conn2] replSet closing client sockets after relinquishing primary
m31001| Fri Feb 22 12:20:00.480 [conn2] replSet PRIMARY
Fri Feb 22 12:20:00.481 DBClientCursor::init call() failed
m31001| Fri Feb 22 12:20:00.481 [conn2] replSet replSetReconfig new config saved locally
nreplsets_priority1.js Caught exception: Error: error doing query: failed
m31002| Fri Feb 22 12:20:00.481 [conn38] end connection 165.225.128.186:58972 (3 connections now open)
m31001| Fri Feb 22 12:20:00.481 [conn2] SocketException handling request, closing client connection: 9001 socket exception [2] server [127.0.0.1:56592]
m31001| Fri Feb 22 12:20:00.481 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31001| Fri Feb 22 12:20:00.481 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31001| Fri Feb 22 12:20:00.481 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31001 (priority 3.95216), bs-smartos-x86-64-1.10gen.cc:31000 is priority 84.4003 and 0 seconds behind
m31001| Fri Feb 22 12:20:00.481 [rsMgr] replSet relinquishing primary state
m31001| Fri Feb 22 12:20:00.481 [rsMgr] replSet SECONDARY
m31001| Fri Feb 22 12:20:00.481 [rsMgr] replSet closing client sockets after relinquishing primary
Fri Feb 22 12:20:00.481 trying reconnect to 127.0.0.1:31001
Fri Feb 22 12:20:00.481 reconnect 127.0.0.1:31001 ok
m31002| Fri Feb 22 12:20:00.481 [initandlisten] connection accepted from 165.225.128.186:56486 #39 (4 connections now open)
m31001| Fri Feb 22 12:20:00.481 [initandlisten] connection accepted from 127.0.0.1:62304 #9 (3 connections now open)
version is 5, trying to update to 6
m31001| Fri Feb 22 12:20:00.482 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31001| Fri Feb 22 12:20:00.482 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31001| Fri Feb 22 12:20:00.482 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31001 is already primary and more up-to-date'
m31002| Fri Feb 22 12:20:00.678 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31001
m31001| Fri Feb 22 12:20:00.679 [initandlisten] connection accepted from 165.225.128.186:41645 #10 (4 connections now open)
m31002| Fri Feb 22 12:20:00.679 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:20:00.679 [rsSyncNotifier] Socket flush send() errno:9 Bad file number 165.225.128.186:31000
m31002| Fri Feb 22 12:20:00.679 [rsSyncNotifier] caught exception (socket exception [SEND_ERROR] for 165.225.128.186:31000) in destructor (~PiggyBackData)
m31001| Fri Feb 22 12:20:00.680 [initandlisten] connection accepted from 165.225.128.186:45112 #11 (5 connections now open)
m31000| Fri Feb 22 12:20:00.787 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31001
m31001| Fri Feb 22 12:20:00.788 [initandlisten] connection accepted from 165.225.128.186:58513 #12 (6 connections now open)
m31000| Fri Feb 22 12:20:00.789 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31001
m31002| Fri Feb 22 12:20:00.789 [conn35] end connection 165.225.128.186:48675 (3 connections now open)
m31001| Fri Feb 22 12:20:00.790 [initandlisten] connection accepted from 165.225.128.186:52995 #13 (7 connections now open)
m31002| Fri Feb 22 12:20:01.790 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31002| Fri Feb 22 12:20:01.790 [rsMgr] replset msgReceivedNewConfig version: version: 6
m31002| Fri Feb 22 12:20:01.790 [rsMgr] replSet info saving a newer config version to local.system.replset
m31002| Fri Feb 22 12:20:01.805 [rsMgr] replSet saveConfigLocally done
m31002| Fri Feb 22 12:20:01.805 [rsMgr] replSet replSetReconfig new config saved locally
m31002| Fri Feb 22 12:20:01.805 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31002| Fri Feb 22 12:20:01.805 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up
m31002| Fri Feb 22 12:20:01.805 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31002| Fri Feb 22 12:20:01.805 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY
m31002| Fri Feb 22 12:20:01.805 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:20:01.806 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:20:01.806 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:20:02.175 [rsMgr] replset msgReceivedNewConfig version: version: 6
m31000| Fri Feb 22 12:20:02.175 [rsMgr] replSet info saving a newer config version to local.system.replset
m31000| Fri Feb 22 12:20:02.186 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:20:02.188 [rsMgr] replSet saveConfigLocally done
m31000| Fri Feb 22 12:20:02.188 [rsMgr] replSet replSetReconfig new config saved locally
m31000| Fri Feb 22 12:20:02.188 [rsMgr] replset msgReceivedNewConfig version: version: 6
m31000| Fri Feb 22 12:20:02.189 [rsMgr] replSet info msgReceivedNewConfig but version isn't higher 6 6
m31000| Fri Feb 22 12:20:02.189 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up
m31000| Fri Feb 22 12:20:02.189 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up
m31000| Fri Feb 22 12:20:02.189 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY
m31000| Fri Feb 22 12:20:02.189 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state SECONDARY
m31000| Fri Feb 22 12:20:02.189 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:20:02.189 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:20:02.189 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31002| Fri Feb 22 12:20:04.189 [conn37] end connection 165.225.128.186:63305 (2 connections now open)
m31002| Fri Feb 22 12:20:04.189 [initandlisten] connection accepted from 165.225.128.186:38149 #40 (3 connections now open)
m31001| Fri Feb 22 12:20:06.482 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31001| Fri Feb 22 12:20:07.806 [conn8] end connection 165.225.128.186:54678 (6 connections now open)
m31001| Fri Feb 22 12:20:07.806 [initandlisten] connection accepted from 165.225.128.186:37884 #14 (7 connections now open)
m31002| Fri Feb 22 12:20:07.807 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:20:08.190 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:20:10.781 [conn10] end connection 165.225.128.186:41645 (6 connections now open)
m31001| Fri Feb 22 12:20:10.890 [conn12] end connection 165.225.128.186:58513 (5 connections now open)
m31001| Fri Feb 22 12:20:12.483 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:20:13.807 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:20:14.191 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:20:16.483 [conn19] end connection 165.225.128.186:43747 (2 connections now open)
m31000| Fri Feb 22 12:20:16.484 [initandlisten] connection accepted from 165.225.128.186:58530 #21 (3 connections now open)
m31001| Fri Feb 22 12:20:18.484 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:20:19.808 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31001| Fri Feb 22 12:20:20.191 [conn7] end connection 165.225.128.186:49486 (4 connections now open)
m31001| Fri Feb 22 12:20:20.192 [initandlisten] connection accepted from 165.225.128.186:45446 #15 (5 connections now open)
m31000| Fri Feb 22 12:20:20.192 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31000| Fri Feb 22 12:20:23.808 [conn20] end connection 165.225.128.186:60331 (2 connections now open)
m31000| Fri Feb 22 12:20:23.809 [initandlisten] connection accepted from 165.225.128.186:54196 #22 (3 connections now open)
m31001| Fri Feb 22 12:20:24.485 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:20:25.809 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:20:26.193 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31002| Fri Feb 22 12:20:30.485 [conn39] end connection 165.225.128.186:56486 (2 connections now open)
m31002| Fri Feb 22 12:20:30.486 [initandlisten] connection accepted from 165.225.128.186:37545 #41 (3 connections now open)
m31001| Fri Feb 22 12:20:30.494 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31002| Fri Feb 22 12:20:31.810 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000'
m31000| Fri Feb 22 12:20:32.194 [rsMgr] replSet info electSelf 0
m31002| Fri Feb 22 12:20:32.194 [conn40] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31001| Fri Feb 22 12:20:32.194 [conn15] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0)
m31000| Fri Feb 22 12:20:32.892 [rsMgr] replSet PRIMARY
m31002| Fri Feb 22 12:20:33.810 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
m31002| Fri Feb 22 12:20:34.194 [conn40] end connection 165.225.128.186:38149 (2 connections now open)
m31002| Fri Feb 22 12:20:34.194 [initandlisten] connection accepted from 165.225.128.186:39566 #42 (3 connections now open)
m31001| Fri Feb 22 12:20:34.486 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY
replsets_priority1.js wait for 2 slaves
replsets_priority1.js wait for new config version 6
replsets_priority1.js awaitReplication
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31000, is { "t" : 1361535600000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361535600000, "i" : 1 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31001
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31001, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31002
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31002, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361535600000, "i" : 1 }
reconfigured. Checking statuses.
nreplsets_priority1.js checkPrimaryIs([object Object])
goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 1 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2
rs.stop
ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number
ReplSetTest stop *** Shutting down mongod in port 31000 ***
m31000| Fri Feb 22 12:20:34.514 got signal 15 (Terminated), will terminate after current cmd ends
m31000| Fri Feb 22 12:20:34.515 [interruptThread] now exiting
m31000| Fri Feb 22 12:20:34.515 dbexit:
m31000| Fri Feb 22 12:20:34.515 [interruptThread] shutdown: going to close listening sockets...
m31000| Fri Feb 22 12:20:34.515 [interruptThread] closing listening socket: 19
m31000| Fri Feb 22 12:20:34.515 [interruptThread] closing listening socket: 20
m31000| Fri Feb 22 12:20:34.515 [interruptThread] closing listening socket: 21
m31000| Fri Feb 22 12:20:34.515 [interruptThread] removing socket file: /tmp/mongodb-31000.sock
m31000| Fri Feb 22 12:20:34.515 [interruptThread] shutdown: going to flush diaglog...
m31000| Fri Feb 22 12:20:34.515 [interruptThread] shutdown: going to close sockets...
m31000| Fri Feb 22 12:20:34.515 [interruptThread] shutdown: waiting for fs preallocator...
m31000| Fri Feb 22 12:20:34.515 [interruptThread] shutdown: lock for final commit...
m31000| Fri Feb 22 12:20:34.515 [interruptThread] shutdown: final commit...
m31001| Fri Feb 22 12:20:34.515 [conn15] end connection 165.225.128.186:45446 (4 connections now open)
m31000| Fri Feb 22 12:20:34.515 [conn22] end connection 165.225.128.186:54196 (2 connections now open)
m31001| Fri Feb 22 12:20:34.515 [conn13] end connection 165.225.128.186:52995 (4 connections now open)
m31000| Fri Feb 22 12:20:34.515 [conn12] end connection 127.0.0.1:59363 (2 connections now open)
m31000| Fri Feb 22 12:20:34.515 [conn21] end connection 165.225.128.186:58530 (2 connections now open)
m31002| Fri Feb 22 12:20:34.515 [conn42] end connection 165.225.128.186:39566 (2 connections now open)
m31000| Fri Feb 22 12:20:34.532 [interruptThread] shutdown: closing all files...
m31000| Fri Feb 22 12:20:34.532 [interruptThread] closeAllFiles() finished
m31000| Fri Feb 22 12:20:34.532 [interruptThread] journalCleanup...
m31000| Fri Feb 22 12:20:34.532 [interruptThread] removeJournalFiles
m31000| Fri Feb 22 12:20:34.533 dbexit: really exiting now
m31002| Fri Feb 22 12:20:34.782 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31002| Fri Feb 22 12:20:34.782 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:20:35.369 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:20:35.369 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000
m31001| Fri Feb 22 12:20:35.369 [rsBackgroundSync] replSet not trying to sync from bs-smartos-x86-64-1.10gen.cc:31000, it is vetoed for 10 more seconds
m31001| Fri Feb 22 12:20:35.369 [rsBackgroundSync] replSet not trying to sync from bs-smartos-x86-64-1.10gen.cc:31000, it is vetoed for 10 more seconds
Fri Feb 22 12:20:35.514 shell: stopped mongo program on port 31000
Fri Feb 22 12:20:35.515 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31000
Fri Feb 22 12:20:35.515 SocketException: remote: 127.0.0.1:31000 error: 9001 socket exception [1] server [127.0.0.1:31000]
Fri Feb 22 12:20:35.515 DBClientCursor::init call() failed
ReplSetTest Could not call ismaster on node 0: Error: error doing query: failed
m31002| Fri Feb 22 12:20:35.782 [rsBackgroundSync] replSet not trying to sync from bs-smartos-x86-64-1.10gen.cc:31000, it is vetoed for 9 more seconds
m31002| Fri Feb 22 12:20:35.782 [rsBackgroundSync] replSet not trying to sync from bs-smartos-x86-64-1.10gen.cc:31000, it is vetoed for 9 more seconds
m31002| Fri Feb 22 12:20:35.810 [rsHealthPoll] DBClientCursor::init call() failed
m31002| Fri Feb 22 12:20:35.810 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:20:35.811 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond):
m31002| Fri Feb 22 12:20:35.811 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN
m31002| Fri Feb 22 12:20:35.811 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31000 is already primary and more up-to-date'
m31001| Fri Feb 22 12:20:36.486 [rsHealthPoll] DBClientCursor::init call() failed
m31001| Fri Feb 22 12:20:36.486 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:20:36.487 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond):
m31001| Fri Feb 22 12:20:36.487 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN
m31001| Fri Feb 22 12:20:36.487 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
Fri Feb 22 12:20:37.516 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:20:37.517 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31001| Fri Feb 22 12:20:37.810 [conn14] end connection 165.225.128.186:37884 (2 connections now open)
m31001| Fri Feb 22 12:20:37.810 [initandlisten] connection accepted from 165.225.128.186:39277 #16 (3 connections now open)
m31002| Fri Feb 22 12:20:37.811 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:20:38.487 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:20:39.518 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:20:39.518 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31002| Fri Feb 22 12:20:39.812 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:20:40.488 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:20:41.520 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:20:41.520 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31002| Fri Feb 22 12:20:41.812 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31002| Fri Feb 22 12:20:42.152 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31001| Fri Feb 22 12:20:42.488 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:20:42.489 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002'
Fri Feb 22 12:20:43.522 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:20:43.522 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31002| Fri Feb 22 12:20:43.813 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:20:44.489 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
Fri Feb 22 12:20:45.523 trying reconnect to 127.0.0.1:31000
Fri Feb 22 12:20:45.523 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000
ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000
m31002| Fri Feb 22 12:20:45.813 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying
m31001| Fri Feb 22 12:20:46.490 [rsHealthPoll] couldn't
connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:46.490 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:46.490 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:46.491 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:46.491 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:46.491 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:20:47.525 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:47.525 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:20:47.814 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:20:48.119 [rsMgr] replSet not trying to elect self as responded yea to someone else recently m31001| Fri Feb 22 12:20:48.492 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:48.492 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:48.492 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:48.492 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:48.493 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:48.493 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:48.493 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' Fri Feb 22 12:20:49.527 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:49.527 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:20:49.814 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:50.493 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:50.494 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:50.494 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:50.494 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:50.494 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:50.495 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:20:51.530 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:51.530 
reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:20:51.815 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:52.495 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:52.495 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:52.495 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:52.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:52.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:52.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:20:53.531 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:53.531 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:20:53.815 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:53.815 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:53.816 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 
12:20:53.816 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:53.816 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:53.816 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:54.496 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:54.497 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:54.497 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:54.497 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:54.497 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31001| Fri Feb 22 12:20:54.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:54.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:54.595 [rsMgr] replSet not trying to elect self as responded yea to someone else recently Fri Feb 22 12:20:55.533 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:55.533 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 
127.0.0.1:31000 m31002| Fri Feb 22 12:20:55.817 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:55.817 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:55.817 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:20:55.817 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:55.818 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:55.818 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:56.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:56.498 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:56.499 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:56.499 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:56.499 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:56.499 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:20:57.535 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:57.535 
reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:20:57.818 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:57.818 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:57.819 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:20:57.819 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:57.819 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:57.819 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:58.500 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:58.500 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:58.500 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:20:58.500 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:58.501 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:20:58.501 [rsHealthPoll] couldn't connect to 
bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:20:59.537 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:20:59.537 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:20:59.820 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:59.820 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:59.820 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:20:59.820 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:59.820 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:20:59.821 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:00.241 [rsMgr] replSet not trying to elect self as responded yea to someone else recently m31002| Fri Feb 22 12:21:00.498 [conn41] end connection 165.225.128.186:37545 (1 connection now open) m31002| Fri Feb 22 12:21:00.498 [initandlisten] connection accepted from 165.225.128.186:35916 #43 (2 connections now open) m31001| Fri Feb 22 12:21:00.499 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31001| Fri Feb 22 12:21:00.502 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:00.502 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:00.502 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:21:00.502 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:00.502 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:00.503 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:21:01.539 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:21:01.539 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:21:01.821 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:01.821 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:01.822 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:21:01.822 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:01.822 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:01.822 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:02.503 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:02.503 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:02.504 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:21:02.504 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:02.504 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:02.504 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:21:03.540 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:21:03.541 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:21:03.823 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:03.823 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:03.823 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:21:03.823 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:03.823 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:03.824 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:04.504 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:04.505 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:04.505 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:21:04.505 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:04.505 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:04.505 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 Fri Feb 22 12:21:05.542 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:21:05.542 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 127.0.0.1:31000 m31002| Fri Feb 22 12:21:05.824 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:05.824 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:05.825 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:21:05.825 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server 
bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:05.825 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:05.825 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:06.283 [rsMgr] replSet info electSelf 2 m31001| Fri Feb 22 12:21:06.283 [conn16] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31002 (2) m31001| Fri Feb 22 12:21:06.499 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31002' m31001| Fri Feb 22 12:21:06.506 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:06.506 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:06.506 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31001| Fri Feb 22 12:21:06.506 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:06.506 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:06.507 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31000: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31002| Fri Feb 22 12:21:06.784 [rsMgr] replSet PRIMARY Fri Feb 22 12:21:07.544 trying reconnect to 127.0.0.1:31000 Fri Feb 22 12:21:07.544 reconnect 127.0.0.1:31000 failed couldn't connect to server 127.0.0.1:31000 ReplSetTest Could not call ismaster on node 0: Error: socket exception [CONNECT_ERROR] for 
127.0.0.1:31000 killed max primary. Checking statuses. second is bs-smartos-x86-64-1.10gen.cc:31002 with priority 55.263624200597405 nreplsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31002==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 8 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1 restart max 0 ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number ReplSetTest stop *** Shutting down mongod in port 31000 *** Fri Feb 22 12:21:07.546 No db started on port: 31000 Fri Feb 22 12:21:07.546 shell: stopped mongo program on port 31000 ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31000, 31001, 31002 ] 31000 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31000, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "testSet", "dbpath" : "$set-$node", "restart" : true, "pathOpts" : { "node" : 0, "set" : "testSet" } } ReplSetTest (Re)Starting.... Fri Feb 22 12:21:07.549 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31000 --noprealloc --smallfiles --rest --replSet testSet --dbpath /data/db/testSet-0 --setParameter enableTestCommands=1 m31000| note: noprealloc may hurt performance in many applications m31000| Fri Feb 22 12:21:07.620 [initandlisten] MongoDB starting : pid=11050 port=31000 dbpath=/data/db/testSet-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31000| Fri Feb 22 12:21:07.620 [initandlisten] m31000| Fri Feb 22 12:21:07.620 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31000| Fri Feb 22 12:21:07.620 [initandlisten] ** uses to detect impending page faults. 
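The election traffic above shows two veto rules in action before 31002 finally wins: a candidate is vetoed when an electable member with higher priority exists ("31001 has lower priority than 31002"), and when the sitting primary is at least as up-to-date ("already primary and more up-to-date"). A simplified model of those checks, under assumed member fields; `shouldVeto` is illustrative, not mongod's actual election code:

```javascript
// Simplified model of the two replica-set election vetoes seen in the log:
//  - "X has lower priority than Y"
//  - "X is trying to elect itself but Y is already primary and more up-to-date"
// Member objects and shouldVeto are assumptions for illustration.
function shouldVeto(candidate, members) {
  for (const m of members) {
    if (m.host === candidate.host) continue;
    if (m.state === "PRIMARY" && m.optime >= candidate.optime) {
      return `${candidate.host} is trying to elect itself but ${m.host} ` +
             `is already primary and more up-to-date`;
    }
    if (m.health === 1 && m.priority > candidate.priority) {
      return `${candidate.host} has lower priority than ${m.host}`;
    }
  }
  return null; // no veto: the candidate may proceed to call an election
}

const members = [
  { host: "node:31000", priority: 84.4, optime: 100, health: 0, state: "DOWN" },
  { host: "node:31001", priority: 1.0,  optime: 100, health: 1, state: "SECONDARY" },
  { host: "node:31002", priority: 55.3, optime: 100, health: 1, state: "SECONDARY" },
];

// 31001 is vetoed by the healthy, higher-priority 31002; once 31000 is
// down, nothing vetoes 31002 itself.
console.log(shouldVeto(members[1], members));
console.log(shouldVeto(members[2], members));
```

This is why 31002 (priority ~55.26) becomes PRIMARY only after the old primary's vetoes expire, and why 31001 never can: there is always a healthy member outranking it.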
m31000| Fri Feb 22 12:21:07.620 [initandlisten] ** This may result in slower performance for certain use cases m31000| Fri Feb 22 12:21:07.620 [initandlisten] m31000| Fri Feb 22 12:21:07.620 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31000| Fri Feb 22 12:21:07.620 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31000| Fri Feb 22 12:21:07.620 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31000| Fri Feb 22 12:21:07.620 [initandlisten] allocator: system m31000| Fri Feb 22 12:21:07.620 [initandlisten] options: { dbpath: "/data/db/testSet-0", noprealloc: true, oplogSize: 40, port: 31000, replSet: "testSet", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31000| Fri Feb 22 12:21:07.621 [initandlisten] journal dir=/data/db/testSet-0/journal m31000| Fri Feb 22 12:21:07.621 [initandlisten] recover : no journal files present, no recovery needed m31000| Fri Feb 22 12:21:07.637 [websvr] admin web console waiting for connections on port 32000 m31000| Fri Feb 22 12:21:07.637 [initandlisten] waiting for connections on port 31000 m31000| Fri Feb 22 12:21:07.658 [initandlisten] connection accepted from 165.225.128.186:49142 #1 (1 connection now open) m31000| Fri Feb 22 12:21:07.658 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:21:07.658 [conn1] end connection 165.225.128.186:49142 (0 connections now open) m31000| Fri Feb 22 12:21:07.659 [rsStart] replSet STARTUP2 m31001| Fri Feb 22 12:21:07.659 [initandlisten] connection accepted from 165.225.128.186:50791 #17 (4 connections now open) m31000| Fri Feb 22 12:21:07.659 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 thinks that we are down m31000| Fri Feb 22 12:21:07.659 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is up m31000| Fri Feb 22 12:21:07.659 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now 
in state SECONDARY m31000| Fri Feb 22 12:21:07.750 [initandlisten] connection accepted from 127.0.0.1:52638 #2 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31000, connection to bs-smartos-x86-64-1.10gen.cc:31001, connection to bs-smartos-x86-64-1.10gen.cc:31002 ] max restarted. Checking statuses. nreplsets_priority1.js checkPrimaryIs([object Object]) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 8 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1 m31001| Fri Feb 22 12:21:07.814 [conn16] end connection 165.225.128.186:39277 (3 connections now open) m31001| Fri Feb 22 12:21:07.815 [initandlisten] connection accepted from 165.225.128.186:53530 #18 (4 connections now open) m31000| Fri Feb 22 12:21:07.826 [initandlisten] connection accepted from 165.225.128.186:61164 #3 (2 connections now open) m31000| Fri Feb 22 12:21:07.826 [conn3] end connection 165.225.128.186:61164 (1 connection now open) m31000| Fri Feb 22 12:21:07.826 [initandlisten] connection accepted from 165.225.128.186:57372 #4 (2 connections now open) m31002| Fri Feb 22 12:21:07.826 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 thinks that we are down m31002| Fri Feb 22 12:21:07.826 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is up m31002| Fri Feb 22 12:21:07.826 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state STARTUP2 m31001| Fri Feb 22 12:21:08.499 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY m31000| Fri Feb 22 12:21:08.507 [initandlisten] connection accepted from 165.225.128.186:51274 #5 (3 connections now open) m31000| Fri Feb 22 12:21:08.507 [conn5] end connection 165.225.128.186:51274 (2 connections now open) m31000| Fri Feb 22 12:21:08.507 [initandlisten] connection accepted from 165.225.128.186:34938 #6 (3 connections now open) m31001| Fri Feb 22 12:21:08.508 [rsHealthPoll] replSet member 
bs-smartos-x86-64-1.10gen.cc:31000 is up m31001| Fri Feb 22 12:21:08.508 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state STARTUP2 m31000| Fri Feb 22 12:21:08.659 [rsSync] replSet SECONDARY goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 5 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1 m31001| Fri Feb 22 12:21:09.371 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 12:21:09.371 [initandlisten] connection accepted from 165.225.128.186:64450 #44 (3 connections now open) m31002| Fri Feb 22 12:21:09.659 [initandlisten] connection accepted from 165.225.128.186:58379 #45 (4 connections now open) m31000| Fri Feb 22 12:21:09.659 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is up m31000| Fri Feb 22 12:21:09.659 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state PRIMARY goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 5 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 1 m31002| Fri Feb 22 12:21:09.827 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31002| Fri Feb 22 12:21:09.827 [rsMgr] stepping down bs-smartos-x86-64-1.10gen.cc:31002 (priority 55.2636), bs-smartos-x86-64-1.10gen.cc:31000 is priority 84.4003 and 0 seconds behind m31002| Fri Feb 22 12:21:09.827 [rsMgr] replSet relinquishing primary state m31002| Fri Feb 22 12:21:09.827 [rsMgr] replSet SECONDARY m31002| Fri Feb 22 12:21:09.827 [rsMgr] replSet closing client sockets after relinquishing primary m31000| Fri Feb 22 12:21:09.827 [conn4] end connection 165.225.128.186:57372 (2 connections now open) m31001| Fri Feb 22 12:21:09.827 [conn11] end connection 165.225.128.186:45112 (3 connections now open) m31002| Fri Feb 22 12:21:09.827 [conn36] end connection 127.0.0.1:50078 (3 connections now open) m31001| Fri Feb 22 
12:21:09.827 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002 m31001| Fri Feb 22 12:21:10.372 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31002 m31002| Fri Feb 22 12:21:10.372 [initandlisten] connection accepted from 165.225.128.186:45988 #46 (4 connections now open) m31001| Fri Feb 22 12:21:10.500 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31001| Fri Feb 22 12:21:10.500 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31001| Fri Feb 22 12:21:10.508 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state SECONDARY m31001| Fri Feb 22 12:21:10.508 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' Fri Feb 22 12:21:10.756 Socket recv() errno:131 Connection reset by peer 127.0.0.1:31002 Fri Feb 22 12:21:10.756 SocketException: remote: 127.0.0.1:31002 error: 9001 socket exception [1] server [127.0.0.1:31002] Fri Feb 22 12:21:10.756 DBClientCursor::init call() failed Error: error doing query: failed replsets_priority1.js checkPrimaryIs reconnecting Fri Feb 22 12:21:10.756 trying reconnect to 127.0.0.1:31002 Fri Feb 22 12:21:10.756 reconnect 127.0.0.1:31002 ok m31002| Fri Feb 22 12:21:10.756 [initandlisten] connection accepted from 127.0.0.1:59474 #47 (5 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:11.660 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31002 is now in state SECONDARY m31000| Fri Feb 22 12:21:11.660 [rsMgr] replSet info electSelf 0 m31002| 
Fri Feb 22 12:21:11.660 [conn45] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31001| Fri Feb 22 12:21:11.660 [conn17] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31000| Fri Feb 22 12:21:11.660 [rsMgr] replSet couldn't elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:11.827 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31000| Fri Feb 22 12:21:11.827 [initandlisten] connection accepted from 165.225.128.186:34369 #7 (3 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:14.422 [conn44] SocketException handling request, closing client connection: 9001 socket exception [2] server [165.225.128.186:64450] goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:21:14Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 7, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "lastHeartbeat" : ISODate("2013-02-22T12:21:13Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 73, "optime" : { "t" : 
1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "lastHeartbeat" : ISODate("2013-02-22T12:21:13Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:21:14Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 364, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "self" : true } ], "ok" : 1 } goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:15.816 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31001| Fri Feb 22 12:21:16.501 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31000| Fri Feb 22 12:21:16.508 [conn6] end connection 165.225.128.186:34938 (2 connections now open) m31000| Fri Feb 22 12:21:16.509 [initandlisten] connection accepted from 165.225.128.186:51212 #8 (3 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:17.661 [rsMgr] replSet info electSelf 0 m31002| Fri Feb 22 12:21:17.661 [conn45] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31001| Fri Feb 22 12:21:17.661 [conn17] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31000| Fri Feb 22 12:21:17.661 [rsMgr] replSet couldn't elect self, only received 1 votes goal: 
bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:21:21.661 [conn17] end connection 165.225.128.186:50791 (2 connections now open) m31001| Fri Feb 22 12:21:21.661 [initandlisten] connection accepted from 165.225.128.186:48561 #19 (3 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:21.817 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31001| Fri Feb 22 12:21:22.502 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:23.662 [rsMgr] replSet info electSelf 0 m31002| Fri Feb 22 12:21:23.662 [conn45] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31001| Fri Feb 22 12:21:23.662 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31000| Fri Feb 22 12:21:23.662 [rsMgr] replSet couldn't 
elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:23.829 [conn7] end connection 165.225.128.186:34369 (2 connections now open) m31000| Fri Feb 22 12:21:23.829 [initandlisten] connection accepted from 165.225.128.186:55783 #9 (3 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:27.818 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31001| Fri Feb 22 12:21:28.502 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:29.663 [rsMgr] replSet info electSelf 0 m31002| Fri Feb 22 12:21:29.663 [conn45] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31001| Fri Feb 22 12:21:29.663 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31000| Fri Feb 22 
12:21:29.663 [rsMgr] replSet couldn't elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:21:29Z"), "myState" : 2, "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 22, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "lastHeartbeat" : ISODate("2013-02-22T12:21:27Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:21:29Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 88, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "lastHeartbeat" : ISODate("2013-02-22T12:21:27Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:21:28Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 379, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "self" : true } ], "ok" : 1 } m31002| Fri Feb 22 12:21:30.502 [conn43] end connection 165.225.128.186:35916 (3 connections now open) m31002| Fri Feb 22 12:21:30.502 [initandlisten] connection accepted from 165.225.128.186:47543 #48 (4 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: 
bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:33.818 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31001| Fri Feb 22 12:21:34.504 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:35.663 [rsMgr] replSet info electSelf 0 m31002| Fri Feb 22 12:21:35.664 [conn45] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31001| Fri Feb 22 12:21:35.664 [conn19] replSet voting no for bs-smartos-x86-64-1.10gen.cc:31000 already voted for another m31000| Fri Feb 22 12:21:35.664 [rsMgr] replSet couldn't elect self, only received 1 votes goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:37.663 [conn45] end connection 165.225.128.186:58379 (3 connections now open) m31002| Fri Feb 22 12:21:37.664 [initandlisten] connection accepted from 165.225.128.186:59468 #49 (4 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 
bs-smartos-x86-64-1.10gen.cc:31002: 2 m31001| Fri Feb 22 12:21:37.819 [conn18] end connection 165.225.128.186:53530 (2 connections now open) m31001| Fri Feb 22 12:21:37.819 [initandlisten] connection accepted from 165.225.128.186:59378 #20 (3 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:39.820 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31001 would veto with 'bs-smartos-x86-64-1.10gen.cc:31002 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' m31001| Fri Feb 22 12:21:40.511 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31002 would veto with 'bs-smartos-x86-64-1.10gen.cc:31001 has lower priority than bs-smartos-x86-64-1.10gen.cc:31000' goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:41.665 [rsMgr] replSet info electSelf 0 m31001| Fri Feb 22 12:21:41.665 [conn19] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0) m31002| Fri Feb 22 12:21:41.665 [conn49] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31000 (0) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 m31000| Fri Feb 22 12:21:42.661 [rsMgr] replSet PRIMARY goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 2 bs-smartos-x86-64-1.10gen.cc:31001: 2 
bs-smartos-x86-64-1.10gen.cc:31002: 2 m31002| Fri Feb 22 12:21:43.831 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY m31001| Fri Feb 22 12:21:44.512 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state PRIMARY m31002| Fri Feb 22 12:21:44.786 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:21:44.786 [initandlisten] connection accepted from 165.225.128.186:44750 #10 (4 connections now open) goal: bs-smartos-x86-64-1.10gen.cc:31000==1 states: bs-smartos-x86-64-1.10gen.cc:31000: 1 bs-smartos-x86-64-1.10gen.cc:31001: 2 bs-smartos-x86-64-1.10gen.cc:31002: 2 status: { "set" : "testSet", "date" : ISODate("2013-02-22T12:21:44Z"), "myState" : 2, "syncingTo" : "bs-smartos-x86-64-1.10gen.cc:31000", "members" : [ { "_id" : 0, "name" : "bs-smartos-x86-64-1.10gen.cc:31000", "health" : 1, "state" : 1, "stateStr" : "PRIMARY", "uptime" : 37, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "lastHeartbeat" : ISODate("2013-02-22T12:21:43Z"), "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"), "pingMs" : 0 }, { "_id" : 1, "name" : "bs-smartos-x86-64-1.10gen.cc:31001", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 103, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "lastHeartbeat" : ISODate("2013-02-22T12:21:43Z"), "lastHeartbeatRecv" : ISODate("2013-02-22T12:21:44Z"), "pingMs" : 0, "lastHeartbeatMessage" : "db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31002" }, { "_id" : 2, "name" : "bs-smartos-x86-64-1.10gen.cc:31002", "health" : 1, "state" : 2, "stateStr" : "SECONDARY", "uptime" : 394, "optime" : { "t" : 1361535600000, "i" : 1 }, "optimeDate" : ISODate("2013-02-22T12:20:00Z"), "errmsg" : "syncing to: bs-smartos-x86-64-1.10gen.cc:31000", "self" : true } ], "ok" : 1 } replsets_priority1.js SUCCESS! 
m31000| Fri Feb 22 12:21:44.800 got signal 15 (Terminated), will terminate after current cmd ends m31000| Fri Feb 22 12:21:44.800 [interruptThread] now exiting m31000| Fri Feb 22 12:21:44.800 dbexit: m31000| Fri Feb 22 12:21:44.800 [interruptThread] shutdown: going to close listening sockets... m31000| Fri Feb 22 12:21:44.800 [interruptThread] closing listening socket: 21 m31000| Fri Feb 22 12:21:44.801 [interruptThread] closing listening socket: 22 m31000| Fri Feb 22 12:21:44.801 [interruptThread] closing listening socket: 23 m31000| Fri Feb 22 12:21:44.801 [interruptThread] removing socket file: /tmp/mongodb-31000.sock m31000| Fri Feb 22 12:21:44.801 [interruptThread] shutdown: going to flush diaglog... m31000| Fri Feb 22 12:21:44.801 [interruptThread] shutdown: going to close sockets... m31000| Fri Feb 22 12:21:44.801 [interruptThread] shutdown: waiting for fs preallocator... m31000| Fri Feb 22 12:21:44.801 [interruptThread] shutdown: lock for final commit... m31000| Fri Feb 22 12:21:44.801 [interruptThread] shutdown: final commit... m31000| Fri Feb 22 12:21:44.801 [conn2] end connection 127.0.0.1:52638 (3 connections now open) m31000| Fri Feb 22 12:21:44.801 [conn9] end connection 165.225.128.186:55783 (3 connections now open) m31000| Fri Feb 22 12:21:44.801 [conn8] end connection 165.225.128.186:51212 (3 connections now open) m31002| Fri Feb 22 12:21:44.801 [conn49] end connection 165.225.128.186:59468 (3 connections now open) m31001| Fri Feb 22 12:21:44.801 [conn19] end connection 165.225.128.186:48561 (2 connections now open) m31002| Fri Feb 22 12:21:44.801 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31000 m31000| Fri Feb 22 12:21:44.813 [interruptThread] shutdown: closing all files... m31000| Fri Feb 22 12:21:44.814 [interruptThread] closeAllFiles() finished m31000| Fri Feb 22 12:21:44.814 [interruptThread] journalCleanup... 
m31000| Fri Feb 22 12:21:44.814 [interruptThread] removeJournalFiles m31000| Fri Feb 22 12:21:44.814 dbexit: really exiting now m31001| Fri Feb 22 12:21:44.828 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:44.829 [rsBackgroundSync] repl: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31000 m31001| Fri Feb 22 12:21:45.800 got signal 15 (Terminated), will terminate after current cmd ends m31001| Fri Feb 22 12:21:45.801 [interruptThread] now exiting m31001| Fri Feb 22 12:21:45.801 dbexit: m31001| Fri Feb 22 12:21:45.801 [interruptThread] shutdown: going to close listening sockets... m31001| Fri Feb 22 12:21:45.801 [interruptThread] closing listening socket: 20 m31001| Fri Feb 22 12:21:45.801 [interruptThread] closing listening socket: 21 m31001| Fri Feb 22 12:21:45.801 [interruptThread] closing listening socket: 22 m31001| Fri Feb 22 12:21:45.801 [interruptThread] removing socket file: /tmp/mongodb-31001.sock m31001| Fri Feb 22 12:21:45.801 [interruptThread] shutdown: going to flush diaglog... m31001| Fri Feb 22 12:21:45.801 [interruptThread] shutdown: going to close sockets... m31001| Fri Feb 22 12:21:45.810 [interruptThread] shutdown: waiting for fs preallocator... m31001| Fri Feb 22 12:21:45.810 [interruptThread] shutdown: lock for final commit... m31001| Fri Feb 22 12:21:45.810 [interruptThread] shutdown: final commit... 
m31001| Fri Feb 22 12:21:45.810 [conn9] end connection 127.0.0.1:62304 (1 connection now open) m31001| Fri Feb 22 12:21:45.810 [conn20] end connection 165.225.128.186:59378 (1 connection now open) m31002| Fri Feb 22 12:21:45.810 [conn46] end connection 165.225.128.186:45988 (2 connections now open) m31002| Fri Feb 22 12:21:45.810 [conn48] end connection 165.225.128.186:47543 (2 connections now open) m31002| Fri Feb 22 12:21:45.820 [rsHealthPoll] DBClientCursor::init call() failed m31002| Fri Feb 22 12:21:45.820 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31001 heartbeat failed, retrying m31002| Fri Feb 22 12:21:45.821 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31001 is down (or slow to respond): m31002| Fri Feb 22 12:21:45.821 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31001 is now in state DOWN m31001| Fri Feb 22 12:21:45.826 [interruptThread] shutdown: closing all files... m31001| Fri Feb 22 12:21:45.826 [interruptThread] closeAllFiles() finished m31001| Fri Feb 22 12:21:45.826 [interruptThread] journalCleanup... 
m31001| Fri Feb 22 12:21:45.826 [interruptThread] removeJournalFiles m31001| Fri Feb 22 12:21:45.826 dbexit: really exiting now m31002| Fri Feb 22 12:21:45.832 [rsHealthPoll] DBClientCursor::init call() failed m31002| Fri Feb 22 12:21:45.832 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31000 heartbeat failed, retrying m31002| Fri Feb 22 12:21:45.832 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31000 is down (or slow to respond): m31002| Fri Feb 22 12:21:45.832 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31000 is now in state DOWN m31002| Fri Feb 22 12:21:45.832 [rsMgr] replSet can't Fri Feb 22 12:21:46.819 [conn2] end connection 127.0.0.1:39210 (0 connections now open) 8.3149 minutes Fri Feb 22 12:21:47.825 [initandlisten] connection accepted from 127.0.0.1:40733 #3 (1 connection now open) Fri Feb 22 12:21:47.826 [conn3] end connection 127.0.0.1:40733 (0 connections now open) ******************************************* Test : server7428.js ... Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/server7428.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/server7428.js";TestData.testFile = "server7428.js";TestData.testName = "server7428";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 12:21:47 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 12:21:47.974 [initandlisten] connection accepted from 127.0.0.1:51452 #4 (1 connection now open) null Resetting db path '/data/db/mongod-29000' Fri Feb 22 12:21:47.987 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 
--dbpath /data/db/mongod-29000 --setParameter enableTestCommands=1 m29000| Fri Feb 22 12:21:48.059 [initandlisten] MongoDB starting : pid=11151 port=29000 dbpath=/data/db/mongod-29000 64-bit host=bs-smartos-x86-64-1.10gen.cc m29000| Fri Feb 22 12:21:48.059 [initandlisten] m29000| Fri Feb 22 12:21:48.059 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m29000| Fri Feb 22 12:21:48.059 [initandlisten] ** uses to detect impending page faults. m29000| Fri Feb 22 12:21:48.059 [initandlisten] ** This may result in slower performance for certain use cases m29000| Fri Feb 22 12:21:48.059 [initandlisten] m29000| Fri Feb 22 12:21:48.059 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m29000| Fri Feb 22 12:21:48.059 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m29000| Fri Feb 22 12:21:48.059 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m29000| Fri Feb 22 12:21:48.059 [initandlisten] allocator: system m29000| Fri Feb 22 12:21:48.059 [initandlisten] options: { dbpath: "/data/db/mongod-29000", port: 29000, setParameter: [ "enableTestCommands=1" ] } m29000| Fri Feb 22 12:21:48.059 [initandlisten] journal dir=/data/db/mongod-29000/journal m29000| Fri Feb 22 12:21:48.060 [initandlisten] recover : no journal files present, no recovery needed m29000| Fri Feb 22 12:21:48.072 [FileAllocator] allocating new datafile /data/db/mongod-29000/local.ns, filling with zeroes... m29000| Fri Feb 22 12:21:48.072 [FileAllocator] creating directory /data/db/mongod-29000/_tmp m29000| Fri Feb 22 12:21:48.072 [FileAllocator] done allocating datafile /data/db/mongod-29000/local.ns, size: 16MB, took 0 secs m29000| Fri Feb 22 12:21:48.072 [FileAllocator] allocating new datafile /data/db/mongod-29000/local.0, filling with zeroes... 
m29000| Fri Feb 22 12:21:48.073 [FileAllocator] done allocating datafile /data/db/mongod-29000/local.0, size: 64MB, took 0 secs m29000| Fri Feb 22 12:21:48.075 [initandlisten] waiting for connections on port 29000 m29000| Fri Feb 22 12:21:48.075 [websvr] admin web console waiting for connections on port 30000 m29000| Fri Feb 22 12:21:48.190 [initandlisten] connection accepted from 127.0.0.1:60661 #1 (1 connection now open) Resetting db path '/data/db/mongod-31001' Fri Feb 22 12:21:48.194 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --auth --port 31001 --dbpath /data/db/mongod-31001 --setParameter enableTestCommands=1 m31001| Fri Feb 22 12:21:48.282 [initandlisten] MongoDB starting : pid=11152 port=31001 dbpath=/data/db/mongod-31001 64-bit host=bs-smartos-x86-64-1.10gen.cc m31001| Fri Feb 22 12:21:48.282 [initandlisten] m31001| Fri Feb 22 12:21:48.282 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31001| Fri Feb 22 12:21:48.282 [initandlisten] ** uses to detect impending page faults. 
m31001| Fri Feb 22 12:21:48.282 [initandlisten] ** This may result in slower performance for certain use cases m31001| Fri Feb 22 12:21:48.282 [initandlisten] m31001| Fri Feb 22 12:21:48.282 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31001| Fri Feb 22 12:21:48.282 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31001| Fri Feb 22 12:21:48.282 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31001| Fri Feb 22 12:21:48.282 [initandlisten] allocator: system m31001| Fri Feb 22 12:21:48.282 [initandlisten] options: { auth: true, dbpath: "/data/db/mongod-31001", port: 31001, setParameter: [ "enableTestCommands=1" ] } m31001| Fri Feb 22 12:21:48.283 [initandlisten] journal dir=/data/db/mongod-31001/journal m31001| Fri Feb 22 12:21:48.283 [initandlisten] recover : no journal files present, no recovery needed m31001| Fri Feb 22 12:21:48.297 [FileAllocator] allocating new datafile /data/db/mongod-31001/local.ns, filling with zeroes... m31001| Fri Feb 22 12:21:48.297 [FileAllocator] creating directory /data/db/mongod-31001/_tmp m31001| Fri Feb 22 12:21:48.298 [FileAllocator] done allocating datafile /data/db/mongod-31001/local.ns, size: 16MB, took 0 secs m31001| Fri Feb 22 12:21:48.298 [FileAllocator] allocating new datafile /data/db/mongod-31001/local.0, filling with zeroes... 
m31001| Fri Feb 22 12:21:48.298 [FileAllocator] done allocating datafile /data/db/mongod-31001/local.0, size: 64MB, took 0 secs m31001| Fri Feb 22 12:21:48.301 [initandlisten] waiting for connections on port 31001 m31001| Fri Feb 22 12:21:48.301 [websvr] admin web console waiting for connections on port 32001 m31001| Fri Feb 22 12:21:48.395 [initandlisten] connection accepted from 127.0.0.1:50661 #1 (1 connection now open) m31001| Fri Feb 22 12:21:48.397 [conn1] note: no users configured in admin.system.users, allowing localhost access { "user" : "foo", "readOnly" : false, "pwd" : "3563025c1e89c7ad43fb63fcbcf1c3c6", "_id" : ObjectId("512762dc93dc1765e316a842") } Fri Feb 22 12:21:50.422 [conn4] end connection 127.0.0.1:51452 (0 connections now open) 2614.8491 ms Fri Feb 22 12:21:50.442 [initandlisten] connection accepted from 127.0.0.1:48364 #5 (1 connection now open) Fri Feb 22 12:21:50.443 [conn5] end connection 127.0.0.1:48364 (0 connections now open) ******************************************* Test : sharding_balance1.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance1.js";TestData.testFile = "sharding_balance1.js";TestData.testName = "sharding_balance1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 12:21:50 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 12:21:50.623 [initandlisten] connection accepted from 127.0.0.1:47675 #6 (1 connection now open) null Resetting db path '/data/db/slow_sharding_balance10' Fri Feb 22 12:21:50.634 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/slow_sharding_balance10 --setParameter enableTestCommands=1 m30000| Fri Feb 22 12:21:50.725 [initandlisten] MongoDB starting : pid=11161 port=30000 dbpath=/data/db/slow_sharding_balance10 64-bit host=bs-smartos-x86-64-1.10gen.cc m30000| Fri Feb 22 12:21:50.725 [initandlisten] m30000| Fri Feb 22 12:21:50.725 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30000| Fri Feb 22 12:21:50.725 [initandlisten] ** uses to detect impending page faults. 
m30000| Fri Feb 22 12:21:50.725 [initandlisten] ** This may result in slower performance for certain use cases m30000| Fri Feb 22 12:21:50.725 [initandlisten] m30000| Fri Feb 22 12:21:50.725 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30000| Fri Feb 22 12:21:50.725 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30000| Fri Feb 22 12:21:50.725 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30000| Fri Feb 22 12:21:50.725 [initandlisten] allocator: system m30000| Fri Feb 22 12:21:50.725 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance10", port: 30000, setParameter: [ "enableTestCommands=1" ] } m30000| Fri Feb 22 12:21:50.726 [initandlisten] journal dir=/data/db/slow_sharding_balance10/journal m30000| Fri Feb 22 12:21:50.726 [initandlisten] recover : no journal files present, no recovery needed m30000| Fri Feb 22 12:21:50.740 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/local.ns, filling with zeroes... m30000| Fri Feb 22 12:21:50.740 [FileAllocator] creating directory /data/db/slow_sharding_balance10/_tmp m30000| Fri Feb 22 12:21:50.741 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/local.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:21:50.741 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/local.0, filling with zeroes... 
m30000| Fri Feb 22 12:21:50.741 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:21:50.744 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 12:21:50.744 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 12:21:50.837 [initandlisten] connection accepted from 127.0.0.1:55885 #1 (1 connection now open)
Resetting db path '/data/db/slow_sharding_balance11'
Fri Feb 22 12:21:50.840 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/slow_sharding_balance11 --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:21:50.908 [initandlisten] MongoDB starting : pid=11164 port=30001 dbpath=/data/db/slow_sharding_balance11 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:21:50.908 [initandlisten]
m30001| Fri Feb 22 12:21:50.909 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:21:50.909 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 12:21:50.909 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:21:50.909 [initandlisten]
m30001| Fri Feb 22 12:21:50.909 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:21:50.909 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:21:50.909 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:21:50.909 [initandlisten] allocator: system
m30001| Fri Feb 22 12:21:50.909 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance11", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:21:50.909 [initandlisten] journal dir=/data/db/slow_sharding_balance11/journal
m30001| Fri Feb 22 12:21:50.909 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:21:50.922 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance11/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:21:50.922 [FileAllocator] creating directory /data/db/slow_sharding_balance11/_tmp
m30001| Fri Feb 22 12:21:50.923 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance11/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:21:50.923 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance11/local.0, filling with zeroes...
m30001| Fri Feb 22 12:21:50.923 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance11/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:21:50.925 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:21:50.925 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:21:51.041 [initandlisten] connection accepted from 127.0.0.1:62380 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:21:51.042 [initandlisten] connection accepted from 127.0.0.1:55568 #2 (2 connections now open)
ShardingTest slow_sharding_balance1 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:21:51.048 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:21:51.062 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:21:51.063 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=11166 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:21:51.063 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:21:51.063 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:21:51.063 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 12:21:51.063 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 12:21:51.063 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:21:51.064 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:21:51.064 [mongosMain] connected connection!
m30000| Fri Feb 22 12:21:51.064 [initandlisten] connection accepted from 127.0.0.1:42113 #3 (3 connections now open)
m30999| Fri Feb 22 12:21:51.065 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:21:51.065 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:21:51.065 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:21:51.065 [mongosMain] connected connection!
m30000| Fri Feb 22 12:21:51.065 [initandlisten] connection accepted from 127.0.0.1:43516 #4 (4 connections now open)
m30000| Fri Feb 22 12:21:51.066 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:21:51.079 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:21:51.080 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:21:51.080 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:21:51.080 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 12:21:51.080 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:mongosMain:5758",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:21:51 2013" },
m30999|   "why" : "upgrading config database to new format v4",
m30999|   "ts" : { "$oid" : "512762dfe1d9169da342a454" } }
m30999| { "_id" : "configUpgrade",
m30999|   "state" : 0 }
m30000| Fri Feb 22 12:21:51.081 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/config.ns, filling with zeroes...
m30000| Fri Feb 22 12:21:51.081 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:21:51.081 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/config.0, filling with zeroes...
m30000| Fri Feb 22 12:21:51.081 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:21:51.081 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/config.1, filling with zeroes...
m30000| Fri Feb 22 12:21:51.081 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:21:51.085 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:21:51.087 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:21:51.087 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:21:51.088 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:21:51.089 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:21:51 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838', sleeping for 30000ms
m30000| Fri Feb 22 12:21:51.089 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:21:51.089 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:21:51.090 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762dfe1d9169da342a454
m30999| Fri Feb 22 12:21:51.093 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:21:51.093 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:21:51.093 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:51-512762dfe1d9169da342a455", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535711093), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:21:51.093 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:21:51.093 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:21:51.094 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:21:51.094 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:21:51.095 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:21:51.095 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:51-512762dfe1d9169da342a457", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535711095), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:21:51.095 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:21:51.096 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
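The configUpgrade lock acquired and released above follows the pattern the mongos logs step by step: insert an initial doc with `"state" : 0` into config.locks, then atomically flip state 0 -> 1, recording who holds the lock and why. A minimal sketch of that handshake, using an in-memory dict as a stand-in for the config.locks collection (illustration only, not MongoDB's actual implementation):

```python
# Stand-in for the config.locks collection (illustration only).
locks = {}

def try_acquire(lock_id, who, why, ts):
    """Compare-and-swap acquire: succeeds only if the lock is unheld (state 0)."""
    doc = locks.setdefault(lock_id, {"_id": lock_id, "state": 0})
    if doc["state"] != 0:
        return False  # another process holds the lock
    doc.update(state=1, who=who, why=why, ts=ts)
    return True

def unlock(lock_id):
    """Release: flip the lock doc back to state 0."""
    locks[lock_id]["state"] = 0
```

A second acquirer fails while the lock is held in state 1 and succeeds again after `unlock`, which matches the acquired/unlocked pairs in the log.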
m30000| Fri Feb 22 12:21:51.097 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:21:51.098 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:21:51.098 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:21:51.098 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:21:51.098 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:21:51.098 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:21:51.098 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 12:21:51.098 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 12:21:51.098 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 12:21:51.099 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:21:51.099 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 12:21:51.101 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:21:51.101 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 12:21:51.101 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 12:21:51.102 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:21:51.102 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 12:21:51.102 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:21:51.102 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 12:21:51.103 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:21:51.103 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 12:21:51.105 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:21:51.105 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 12:21:51.105 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 12:21:51.106 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:21:51.106 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:21:51.106 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:21:51
m30999| Fri Feb 22 12:21:51.106 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:21:51.106 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:21:51.107 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:21:51.107 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 12:21:51.107 [Balancer] connected connection!
m30000| Fri Feb 22 12:21:51.107 [initandlisten] connection accepted from 127.0.0.1:65355 #5 (5 connections now open)
m30000| Fri Feb 22 12:21:51.108 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:21:51.108 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:21:51.108 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:21:51.108 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:21:51.108 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:21:51 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512762dfe1d9169da342a459" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0 }
m30999| Fri Feb 22 12:21:51.109 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762dfe1d9169da342a459
m30999| Fri Feb 22 12:21:51.109 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:21:51.109 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:21:51.109 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:21:51.109 [Balancer] no collections to balance
m30999| Fri Feb 22 12:21:51.109 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:21:51.109 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:21:51.110 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
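Each balancing round the mongos logs here (and in the rounds that follow) makes the same decision: pick the shard with the most chunks as "donor", the shard with the fewest as "receiver", and move one chunk only when the imbalance exceeds a threshold. A minimal sketch of that decision, as an illustration of the logged behavior rather than MongoDB's actual implementation:

```python
def balance_round(chunk_counts, threshold):
    """Pick donor (most chunks) and receiver (fewest); move only if the
    imbalance reaches the threshold, mirroring the Balancer log lines."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None  # "no need to move any chunk"
    return (donor, receiver)  # move one chunk donor -> receiver
```

In the round at 12:21:57 below, shard0001 holds all 41 chunks of test.foo against a threshold of 4, so the balancer schedules a moveChunk from shard0001 to shard0000.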
m30999| Fri Feb 22 12:21:51.250 [mongosMain] connection accepted from 127.0.0.1:60357 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 12:21:51.252 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:21:51.252 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:21:51.253 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:21:51.253 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:21:51.255 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 12:21:51.256 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:21:51.256 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:21:51.257 [conn1] connected connection!
m30001| Fri Feb 22 12:21:51.257 [initandlisten] connection accepted from 127.0.0.1:39499 #2 (2 connections now open)
m30999| Fri Feb 22 12:21:51.258 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 12:21:51.259 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:21:51.260 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:21:51.260 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:21:51.260 [conn1] enabling sharding on: test
m30999| Fri Feb 22 12:21:51.261 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:21:51.261 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:21:51.261 [conn1] connected connection!
m30000| Fri Feb 22 12:21:51.261 [initandlisten] connection accepted from 127.0.0.1:46158 #6 (6 connections now open)
m30999| Fri Feb 22 12:21:51.261 [conn1] creating WriteBackListener for: localhost:30000 serverID: 512762dfe1d9169da342a458
m30999| Fri Feb 22 12:21:51.261 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 12:21:51.261 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 12:21:51.262 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:21:51.262 BackgroundJob starting: ConnectBG
m30001| Fri Feb 22 12:21:51.262 [initandlisten] connection accepted from 127.0.0.1:56333 #3 (3 connections now open)
m30999| Fri Feb 22 12:21:51.262 [conn1] connected connection!
m30999| Fri Feb 22 12:21:51.262 [conn1] creating WriteBackListener for: localhost:30001 serverID: 512762dfe1d9169da342a458
m30999| Fri Feb 22 12:21:51.262 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 12:21:51.262 BackgroundJob starting: WriteBackListener-localhost:30001
{ "_id" : "chunksize", "value" : 1 }
m30001| Fri Feb 22 12:21:51.264 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance11/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:21:51.264 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance11/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:21:51.264 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance11/test.0, filling with zeroes...
m30001| Fri Feb 22 12:21:51.264 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance11/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:21:51.265 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance11/test.1, filling with zeroes...
m30001| Fri Feb 22 12:21:51.265 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance11/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:21:51.268 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 12:21:51.270 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:21:51.556 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:21:51.557 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:21:51.557 [conn1] connected connection!
m30001| Fri Feb 22 12:21:51.557 [initandlisten] connection accepted from 127.0.0.1:55770 #4 (4 connections now open)
m30999| Fri Feb 22 12:21:51.558 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:21:51.558 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:21:51.558 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:21:51.561 [conn1] going to create 41 chunk(s) for: test.foo using new epoch 512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:21:51.566 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:21:51.566 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:21:51.566 [conn1] connected connection!
m30000| Fri Feb 22 12:21:51.566 [initandlisten] connection accepted from 127.0.0.1:38651 #7 (7 connections now open)
m30999| Fri Feb 22 12:21:51.568 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 1|40||512762dfe1d9169da342a45a based on: (empty)
m30000| Fri Feb 22 12:21:51.569 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 12:21:51.570 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:21:51.570 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('512762dfe1d9169da342a45a'), serverID: ObjectId('512762dfe1d9169da342a458'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f990 2
m30999| Fri Feb 22 12:21:51.570 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:21:51.571 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('512762dfe1d9169da342a45a'), serverID: ObjectId('512762dfe1d9169da342a458'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117f990 2
m30001| Fri Feb 22 12:21:51.571 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 12:21:51.571 [initandlisten] connection accepted from 127.0.0.1:34953 #8 (8 connections now open)
m30999| Fri Feb 22 12:21:51.573 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "shard0000" : 0, "shard0001" : 41 }
{ "shard0000" : 0, "shard0001" : 41 }
41
{ "shard0000" : 0, "shard0001" : 41 }
{ "shard0000" : 0, "shard0001" : 41 }
m30999| Fri Feb 22 12:21:57.110 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:21:57.111 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:21:57.111 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:21:57 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512762e5e1d9169da342a45b" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512762dfe1d9169da342a459" } }
m30999| Fri Feb 22 12:21:57.112 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762e5e1d9169da342a45b
m30999| Fri Feb 22 12:21:57.112 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:21:57.112 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:21:57.112 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 12:21:57.114 [conn3] build index config.tags { _id: 1 }
m30000| Fri Feb 22 12:21:57.117 [conn3] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 12:21:57.117 [conn3] info: creating collection config.tags on add index
m30000| Fri Feb 22 12:21:57.117 [conn3] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 12:21:57.119 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:21:57.119 [Balancer] shard0001 has more chunks me:41 best: shard0000:0
m30999| Fri Feb 22 12:21:57.119 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:21:57.119 [Balancer] donor : shard0001 chunks on 41
m30999| Fri Feb 22 12:21:57.119 [Balancer] receiver : shard0000 chunks on 0
m30999| Fri Feb 22 12:21:57.119 [Balancer] threshold : 4
m30999| Fri Feb 22 12:21:57.119 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:21:57.120 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 51.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:21:57.120 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:21:57.120 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 51.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:21:57.120 [initandlisten] connection accepted from 127.0.0.1:59562 #9 (9 connections now open)
m30001| Fri Feb 22 12:21:57.121 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262 (sleeping for 30000ms)
m30001| Fri Feb 22 12:21:57.123 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762e55bae0f22c953da95
m30001| Fri Feb 22 12:21:57.123 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:57-512762e55bae0f22c953da96", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535717123), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:21:57.124 [conn4] moveChunk request accepted at version 1|40||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:21:57.124 [conn4] moveChunk number of documents: 51
m30000| Fri Feb 22 12:21:57.124 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 51.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:21:57.125 [initandlisten] connection accepted from 127.0.0.1:49655 #5 (5 connections now open)
m30000| Fri Feb 22 12:21:57.126 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/test.ns, filling with zeroes...
m30000| Fri Feb 22 12:21:57.126 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:21:57.126 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/test.0, filling with zeroes...
m30000| Fri Feb 22 12:21:57.126 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:21:57.126 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance10/test.1, filling with zeroes...
m30000| Fri Feb 22 12:21:57.127 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance10/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:21:57.130 [migrateThread] build index test.foo { _id: 1 }
m30000| Fri Feb 22 12:21:57.131 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:21:57.131 [migrateThread] info: creating collection test.foo on add index
m30001| Fri Feb 22 12:21:57.134 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:21:57.140 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:21:57.140 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:21:57.143 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:21:57.145 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:21:57.145 [conn4] moveChunk setting version to: 2|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:21:57.145 [initandlisten] connection accepted from 127.0.0.1:45768 #10 (10 connections now open)
m30000| Fri Feb 22 12:21:57.145 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:21:57.153 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:21:57.153 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:21:57.153 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:57-512762e5f5293dc6c4c944d0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535717153), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 13 } }
m30000| Fri Feb 22 12:21:57.153 [initandlisten] connection accepted from 127.0.0.1:49442 #11 (11 connections now open)
m30001| Fri Feb 22 12:21:57.155 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:21:57.155 [conn4] moveChunk updating self version to: 2|1||512762dfe1d9169da342a45a through { _id: 51.0 } -> { _id: 103.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:21:57.155 [initandlisten] connection accepted from 127.0.0.1:57637 #12 (12 connections now open)
m30001| Fri Feb 22 12:21:57.156 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:57-512762e55bae0f22c953da97", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535717156), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:21:57.156 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:21:57.156 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:21:57.156 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:21:57.156 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:21:57.156 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:21:57.157 [cleanupOldData-512762e55bae0f22c953da98] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 51.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:21:57.157 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:21:57.157 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:57-512762e55bae0f22c953da99", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535717157), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:21:57.157 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:21:57.158 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||512762dfe1d9169da342a45a based on: 1|40||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:21:57.158 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:21:57.158 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:21:57.177 [cleanupOldData-512762e55bae0f22c953da98] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:21:57.177 [cleanupOldData-512762e55bae0f22c953da98] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:21:57.181 [cleanupOldData-512762e55bae0f22c953da98] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m30999| Fri Feb 22 12:21:58.159 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:21:58.159 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:21:58.160 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:21:58 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512762e6e1d9169da342a45c" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512762e5e1d9169da342a45b" } }
m30999| Fri Feb 22 12:21:58.160 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762e6e1d9169da342a45c
m30999| Fri Feb 22 12:21:58.160 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:21:58.160 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:21:58.160 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:21:58.162 [Balancer] shard0001 has more chunks me:40 best: shard0000:1
m30999| Fri Feb 22 12:21:58.162 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:21:58.162 [Balancer] donor : shard0001 chunks on 40
m30999| Fri Feb 22 12:21:58.162 [Balancer] receiver : shard0000 chunks on 1
m30999| Fri Feb 22 12:21:58.162 [Balancer] threshold : 2
m30999| Fri Feb 22 12:21:58.162 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_51.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 51.0 }, max: { _id: 103.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:21:58.162 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 51.0 }max: { _id: 103.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:21:58.162 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:21:58.162 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 51.0 }, max: { _id: 103.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_51.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:21:58.163 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762e65bae0f22c953da9a
m30001| Fri Feb 22 12:21:58.163 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:58-512762e65bae0f22c953da9b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535718163), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:21:58.164 [conn4] moveChunk request accepted at version 2|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:21:58.164 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:21:58.164 [migrateThread] starting receiving-end of migration of chunk { _id: 51.0 } -> { _id: 103.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:21:58.171 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:21:58.171 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:21:58.173 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:21:58.174 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:21:58.174 [conn4] moveChunk setting version to: 3|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:21:58.174 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:21:58.183 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id:
103.0 } m30000| Fri Feb 22 12:21:58.183 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m30000| Fri Feb 22 12:21:58.183 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:58-512762e6f5293dc6c4c944d1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535718183), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:21:58.184 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:21:58.184 [conn4] moveChunk updating self version to: 3|1||512762dfe1d9169da342a45a through { _id: 103.0 } -> { _id: 155.0 } for collection 'test.foo' m30001| Fri Feb 22 12:21:58.185 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:58-512762e65bae0f22c953da9c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535718185), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:21:58.185 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:21:58.185 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:21:58.185 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:21:58.185 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:21:58.185 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:21:58.185 [cleanupOldData-512762e65bae0f22c953da9d] 
(start) waiting to cleanup test.foo from { _id: 51.0 } -> { _id: 103.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:21:58.186 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked. m30001| Fri Feb 22 12:21:58.186 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:58-512762e65bae0f22c953da9e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535718186), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:21:58.186 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:21:58.187 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 3|1||512762dfe1d9169da342a45a based on: 2|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:21:58.187 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:21:58.187 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. 
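The [Balancer] lines above repeat a simple selection rule each round: the shard holding the most chunks is the donor, the one holding the fewest is the receiver, and a chunk is moved only when the imbalance exceeds the logged threshold (2 here). A minimal sketch of that decision, with hypothetical names (not MongoDB's actual balancer code):

```python
def pick_migration(chunk_counts, threshold=2):
    """Pick (donor, receiver) by chunk-count imbalance, or None if balanced.

    chunk_counts: dict mapping shard name -> number of chunks it holds.
    Mirrors the logged 'donor / receiver / threshold' lines; this is a
    hypothetical helper, not MongoDB's implementation.
    """
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None  # balanced enough: end the round without a move
    return donor, receiver

# The first round logged above: shard0001 holds 40 chunks, shard0000 holds 1.
print(pick_migration({"shard0000": 1, "shard0001": 40}))
# -> ('shard0001', 'shard0000')
```

Each successful move shifts one chunk, which is why the logged counts walk from 40:1 to 39:2 to 38:3 across consecutive rounds.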
m30001| Fri Feb 22 12:21:58.205 [cleanupOldData-512762e65bae0f22c953da9d] waiting to remove documents for test.foo from { _id: 51.0 } -> { _id: 103.0 } m30001| Fri Feb 22 12:21:58.205 [cleanupOldData-512762e65bae0f22c953da9d] moveChunk starting delete for: test.foo from { _id: 51.0 } -> { _id: 103.0 } m30001| Fri Feb 22 12:21:58.208 [cleanupOldData-512762e65bae0f22c953da9d] moveChunk deleted 52 documents for test.foo from { _id: 51.0 } -> { _id: 103.0 } m30999| Fri Feb 22 12:21:59.188 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:21:59.188 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:21:59.189 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:21:59 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762e7e1d9169da342a45d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762e6e1d9169da342a45c" } } m30999| Fri Feb 22 12:21:59.189 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762e7e1d9169da342a45d m30999| Fri Feb 22 12:21:59.190 [Balancer] *** start balancing round m30999| Fri Feb 22 12:21:59.190 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:21:59.190 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:21:59.191 [Balancer] shard0001 has more chunks me:39 best: shard0000:2 m30999| Fri Feb 22 12:21:59.191 [Balancer] collection : test.foo m30999| Fri Feb 22 12:21:59.191 [Balancer] donor : shard0001 chunks on 39 m30999| Fri Feb 22 12:21:59.191 [Balancer] receiver : shard0000 chunks 
on 2 m30999| Fri Feb 22 12:21:59.191 [Balancer] threshold : 2 m30999| Fri Feb 22 12:21:59.191 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_103.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 103.0 }, max: { _id: 155.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:21:59.191 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 103.0 }max: { _id: 155.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:21:59.191 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:21:59.191 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 103.0 }, max: { _id: 155.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_103.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:21:59.192 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762e75bae0f22c953da9f m30001| Fri Feb 22 12:21:59.192 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:59-512762e75bae0f22c953daa0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535719192), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:21:59.193 [conn4] moveChunk request accepted at version 3|1||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:21:59.193 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:21:59.193 [migrateThread] starting receiving-end of migration of chunk { _id: 103.0 } -> { _id: 155.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| 
Fri Feb 22 12:21:59.198 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:21:59.199 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m30000| Fri Feb 22 12:21:59.200 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m30001| Fri Feb 22 12:21:59.203 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:21:59.203 [conn4] moveChunk setting version to: 4|0||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:21:59.203 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:21:59.210 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m30000| Fri Feb 22 12:21:59.210 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m30000| Fri Feb 22 12:21:59.210 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:59-512762e7f5293dc6c4c944d2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535719210), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 4, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:21:59.214 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:21:59.214 [conn4] moveChunk updating self version to: 4|1||512762dfe1d9169da342a45a through { _id: 
155.0 } -> { _id: 207.0 } for collection 'test.foo' m30001| Fri Feb 22 12:21:59.214 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:59-512762e75bae0f22c953daa1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535719214), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:21:59.214 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:21:59.214 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:21:59.214 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:21:59.214 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:21:59.214 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:21:59.214 [cleanupOldData-512762e75bae0f22c953daa2] (start) waiting to cleanup test.foo from { _id: 103.0 } -> { _id: 155.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:21:59.215 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked. 
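Every round begins and ends with the 'balancer' distributed lock: the mongos flips the lock document's state from 0 to 1 (recording who, why, and a fresh ts, as in the JSON printed above) and proceeds only if that swap succeeds. A toy in-memory version of that compare-and-swap, purely illustrative (the real lock document lives in the config server's locks collection):

```python
import uuid

def try_acquire(lock_doc, who, why):
    """Toy compare-and-swap acquire on a lock document shaped like the one
    logged above. Returns the new ts on success, None if the lock is held."""
    if lock_doc["state"] != 0:
        return None  # someone else holds it: skip this balancing round
    ts = uuid.uuid4().hex  # stands in for the ObjectId 'ts' in the log
    lock_doc.update(state=1, who=who, why=why, ts=ts)
    return ts

def release(lock_doc):
    lock_doc["state"] = 0  # "distributed lock ... unlocked"

lock = {"_id": "balancer", "state": 0, "ts": None}
ts = try_acquire(lock, "bs-smartos:30999:Balancer", "doing balance round")
assert ts is not None                              # acquired
assert try_acquire(lock, "other", "x") is None     # second acquire fails
release(lock)
assert lock["state"] == 0                          # ready for the next round
```

Note how each acquisition in the log cites the previous round's ts (state 0) as the document it is replacing: the ts chain is what makes stale-lock detection possible.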
m30001| Fri Feb 22 12:21:59.215 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:21:59-512762e75bae0f22c953daa3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535719215), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:21:59.215 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:21:59.216 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 4|1||512762dfe1d9169da342a45a based on: 3|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:21:59.216 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:21:59.216 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30001| Fri Feb 22 12:21:59.235 [cleanupOldData-512762e75bae0f22c953daa2] waiting to remove documents for test.foo from { _id: 103.0 } -> { _id: 155.0 } m30001| Fri Feb 22 12:21:59.235 [cleanupOldData-512762e75bae0f22c953daa2] moveChunk starting delete for: test.foo from { _id: 103.0 } -> { _id: 155.0 } m30001| Fri Feb 22 12:21:59.239 [cleanupOldData-512762e75bae0f22c953daa2] moveChunk deleted 52 documents for test.foo from { _id: 103.0 } -> { _id: 155.0 } m30999| Fri Feb 22 12:22:00.217 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:00.218 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:00.218 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| 
"when" : { "$date" : "Fri Feb 22 12:22:00 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762e8e1d9169da342a45e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762e7e1d9169da342a45d" } } m30999| Fri Feb 22 12:22:00.219 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762e8e1d9169da342a45e m30999| Fri Feb 22 12:22:00.219 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:00.219 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:00.219 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:00.220 [Balancer] shard0001 has more chunks me:38 best: shard0000:3 m30999| Fri Feb 22 12:22:00.220 [Balancer] collection : test.foo m30999| Fri Feb 22 12:22:00.220 [Balancer] donor : shard0001 chunks on 38 m30999| Fri Feb 22 12:22:00.220 [Balancer] receiver : shard0000 chunks on 3 m30999| Fri Feb 22 12:22:00.220 [Balancer] threshold : 2 m30999| Fri Feb 22 12:22:00.220 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_155.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 155.0 }, max: { _id: 207.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:22:00.220 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 155.0 }max: { _id: 207.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:22:00.220 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:22:00.220 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 155.0 }, max: { _id: 207.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_155.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 
22 12:22:00.221 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762e85bae0f22c953daa4 m30001| Fri Feb 22 12:22:00.221 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:00-512762e85bae0f22c953daa5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535720221), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:00.222 [conn4] moveChunk request accepted at version 4|1||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:00.222 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:22:00.223 [migrateThread] starting receiving-end of migration of chunk { _id: 155.0 } -> { _id: 207.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:22:00.230 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:22:00.230 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30000| Fri Feb 22 12:22:00.232 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30001| Fri Feb 22 12:22:00.233 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:22:00.233 [conn4] moveChunk setting version to: 5|0||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:00.233 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:22:00.242 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30000| Fri Feb 22 12:22:00.242 [migrateThread] migrate commit flushed to 
journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30000| Fri Feb 22 12:22:00.242 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:00-512762e8f5293dc6c4c944d3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535720242), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:22:00.243 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:22:00.243 [conn4] moveChunk updating self version to: 5|1||512762dfe1d9169da342a45a through { _id: 207.0 } -> { _id: 259.0 } for collection 'test.foo' m30001| Fri Feb 22 12:22:00.244 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:00-512762e85bae0f22c953daa6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535720244), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:00.244 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:00.244 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:00.244 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:22:00.244 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:00.244 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:00.244 [cleanupOldData-512762e85bae0f22c953daa7] (start) waiting to cleanup test.foo from { _id: 155.0 } -> { _id: 207.0 }, # 
cursors remaining: 0 m30001| Fri Feb 22 12:22:00.244 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked. m30001| Fri Feb 22 12:22:00.245 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:00-512762e85bae0f22c953daa8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535720244), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:00.245 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:00.246 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 5|1||512762dfe1d9169da342a45a based on: 4|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:00.246 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:00.246 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. 
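The moveChunk.from events above record per-step timings (step1 of 6 ... step6 of 6). Reading the surrounding lines, the donor side roughly does: acquire the collection's distributed lock, start the transfer, wait for the recipient to reach a steady state, enter the critical section to bump the version, log the commit, then fork cleanup of the old chunk's documents. A sketch that times an ordered pipeline the same way; the step names are inferred from the log, and the bodies are stand-ins:

```python
import time

def run_steps(steps):
    """Run named steps in order, returning a {'stepN of M': elapsed_ms} dict
    shaped like the moveChunk.from 'details' documents in the log."""
    details, n = {}, len(steps)
    for i, (name, fn) in enumerate(steps, start=1):
        t0 = time.monotonic()
        fn()  # the real server does the migration work here
        details["step%d of %d" % (i, n)] = int((time.monotonic() - t0) * 1000)
    return details

# Donor-side outline inferred from the log; hypothetical labels.
steps = [
    ("acquire collection distributed lock", lambda: None),
    ("start transfer to TO-shard",          lambda: None),
    ("wait for steady state",               lambda: None),
    ("critical section: set version",       lambda: None),
    ("commit accepted / metadata logged",   lambda: None),
    ("fork cleanup of old chunk data",      lambda: None),
]
print(run_steps(steps))
```

In the log, nearly all time lands in step4 and step5 (the catch-up wait and the commit), which matches the data-transfer-progress and "Waiting for commit to finish" lines around each move.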
m30001| Fri Feb 22 12:22:00.264 [cleanupOldData-512762e85bae0f22c953daa7] waiting to remove documents for test.foo from { _id: 155.0 } -> { _id: 207.0 } m30001| Fri Feb 22 12:22:00.264 [cleanupOldData-512762e85bae0f22c953daa7] moveChunk starting delete for: test.foo from { _id: 155.0 } -> { _id: 207.0 } m30001| Fri Feb 22 12:22:00.269 [cleanupOldData-512762e85bae0f22c953daa7] moveChunk deleted 52 documents for test.foo from { _id: 155.0 } -> { _id: 207.0 } m30999| Fri Feb 22 12:22:01.247 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:01.247 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:01.248 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:01 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762e9e1d9169da342a45f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762e8e1d9169da342a45e" } } m30999| Fri Feb 22 12:22:01.248 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762e9e1d9169da342a45f m30999| Fri Feb 22 12:22:01.248 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:01.248 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:01.248 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:01.250 [Balancer] shard0001 has more chunks me:37 best: shard0000:4 m30999| Fri Feb 22 12:22:01.250 [Balancer] collection : test.foo m30999| Fri Feb 22 12:22:01.250 [Balancer] donor : shard0001 chunks on 37 m30999| Fri Feb 22 12:22:01.250 [Balancer] receiver : shard0000 
chunks on 4 m30999| Fri Feb 22 12:22:01.250 [Balancer] threshold : 2 m30999| Fri Feb 22 12:22:01.250 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_207.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 207.0 }, max: { _id: 259.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:22:01.250 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 207.0 }max: { _id: 259.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:22:01.250 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:22:01.250 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 207.0 }, max: { _id: 259.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_207.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:22:01.251 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762e95bae0f22c953daa9 m30001| Fri Feb 22 12:22:01.251 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:01-512762e95bae0f22c953daaa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535721251), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:01.252 [conn4] moveChunk request accepted at version 5|1||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:01.252 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:22:01.253 [migrateThread] starting receiving-end of migration of chunk { _id: 207.0 } -> { _id: 259.0 } for collection test.foo from localhost:30001 (0 slaves detected) 
m30000| Fri Feb 22 12:22:01.263 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:22:01.263 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m30001| Fri Feb 22 12:22:01.263 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:01.265 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m30001| Fri Feb 22 12:22:01.273 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:22:01.274 [conn4] moveChunk setting version to: 6|0||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:01.274 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:22:01.275 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m30000| Fri Feb 22 12:22:01.275 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m30000| Fri Feb 22 12:22:01.275 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:01-512762e9f5293dc6c4c944d4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535721275), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:22:01.284 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: 
"test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:22:01.284 [conn4] moveChunk updating self version to: 6|1||512762dfe1d9169da342a45a through { _id: 259.0 } -> { _id: 311.0 } for collection 'test.foo' m30001| Fri Feb 22 12:22:01.285 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:01-512762e95bae0f22c953daab", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535721285), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:01.285 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:01.285 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:01.285 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:22:01.285 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:01.285 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:01.285 [cleanupOldData-512762e95bae0f22c953daac] (start) waiting to cleanup test.foo from { _id: 207.0 } -> { _id: 259.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:22:01.285 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked. 
m30001| Fri Feb 22 12:22:01.285 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:01-512762e95bae0f22c953daad", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535721285), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:01.285 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:01.287 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 6|1||512762dfe1d9169da342a45a based on: 5|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:01.287 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:01.287 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:01.305 [cleanupOldData-512762e95bae0f22c953daac] waiting to remove documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:01.305 [cleanupOldData-512762e95bae0f22c953daac] moveChunk starting delete for: test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:01.309 [cleanupOldData-512762e95bae0f22c953daac] moveChunk deleted 52 documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
{ "shard0000" : 5, "shard0001" : 36 }
m30999| Fri Feb 22 12:22:02.288 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:02.288 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:02.288 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:02 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762eae1d9169da342a460" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762e9e1d9169da342a45f" } }
m30999| Fri Feb 22 12:22:02.289 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762eae1d9169da342a460
m30999| Fri Feb 22 12:22:02.289 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:02.289 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:02.289 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:02.290 [Balancer] shard0001 has more chunks me:36 best: shard0000:5
m30999| Fri Feb 22 12:22:02.290 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:02.290 [Balancer] donor : shard0001 chunks on 36
m30999| Fri Feb 22 12:22:02.290 [Balancer] receiver : shard0000 chunks on 5
m30999| Fri Feb 22 12:22:02.290 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:02.290 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_259.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 259.0 }, max: { _id: 311.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:02.290 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: 259.0 }max: { _id: 311.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:02.290 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:02.291 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 259.0 }, max: { _id: 311.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_259.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:02.291 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762ea5bae0f22c953daae
m30001| Fri Feb 22 12:22:02.291 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:02-512762ea5bae0f22c953daaf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535722291), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:02.292 [conn4] moveChunk request accepted at version 6|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:02.292 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:02.292 [migrateThread] starting receiving-end of migration of chunk { _id: 259.0 } -> { _id: 311.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:02.302 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:02.302 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:02.303 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:22:02.303 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:02.313 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:22:02.313 [conn4] moveChunk setting version to: 7|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:02.313 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:02.313 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30000| Fri Feb 22 12:22:02.313 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30000| Fri Feb 22 12:22:02.313 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:02-512762eaf5293dc6c4c944d5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535722313), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:02.323 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:02.323 [conn4] moveChunk updating self version to: 7|1||512762dfe1d9169da342a45a through { _id: 311.0 } -> { _id: 363.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:02.324 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:02-512762ea5bae0f22c953dab0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535722324), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:02.324 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:02.324 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:02.324 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:02.324 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:02.324 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:02.324 [cleanupOldData-512762ea5bae0f22c953dab1] (start) waiting to cleanup test.foo from { _id: 259.0 } -> { _id: 311.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:02.324 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:02.324 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:02-512762ea5bae0f22c953dab2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535722324), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:02.324 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:02.325 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 7|1||512762dfe1d9169da342a45a based on: 6|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:02.326 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:02.326 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:02.344 [cleanupOldData-512762ea5bae0f22c953dab1] waiting to remove documents for test.foo from { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:02.344 [cleanupOldData-512762ea5bae0f22c953dab1] moveChunk starting delete for: test.foo from { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:02.347 [cleanupOldData-512762ea5bae0f22c953dab1] moveChunk deleted 52 documents for test.foo from { _id: 259.0 } -> { _id: 311.0 }
m30999| Fri Feb 22 12:22:03.326 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:03.327 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:03.327 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:03 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762ebe1d9169da342a461" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762eae1d9169da342a460" } }
m30999| Fri Feb 22 12:22:03.328 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762ebe1d9169da342a461
m30999| Fri Feb 22 12:22:03.328 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:03.328 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:03.328 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:03.329 [Balancer] shard0001 has more chunks me:35 best: shard0000:6
m30999| Fri Feb 22 12:22:03.329 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:03.329 [Balancer] donor : shard0001 chunks on 35
m30999| Fri Feb 22 12:22:03.329 [Balancer] receiver : shard0000 chunks on 6
m30999| Fri Feb 22 12:22:03.329 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:03.329 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_311.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 311.0 }, max: { _id: 363.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:03.329 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { _id: 311.0 }max: { _id: 363.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:03.329 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:03.329 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 311.0 }, max: { _id: 363.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_311.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:03.330 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762eb5bae0f22c953dab3
m30001| Fri Feb 22 12:22:03.330 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:03-512762eb5bae0f22c953dab4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535723330), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:03.331 [conn4] moveChunk request accepted at version 7|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:03.331 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:03.331 [migrateThread] starting receiving-end of migration of chunk { _id: 311.0 } -> { _id: 363.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:03.341 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:03.341 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:03.341 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:22:03.343 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:03.351 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:22:03.351 [conn4] moveChunk setting version to: 8|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:03.351 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:03.353 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30000| Fri Feb 22 12:22:03.353 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30000| Fri Feb 22 12:22:03.353 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:03-512762ebf5293dc6c4c944d6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535723353), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:03.362 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:03.362 [conn4] moveChunk updating self version to: 8|1||512762dfe1d9169da342a45a through { _id: 363.0 } -> { _id: 415.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:03.362 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:03-512762eb5bae0f22c953dab5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535723362), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:03.362 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:03.362 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:03.362 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:03.362 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:03.362 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:03.362 [cleanupOldData-512762eb5bae0f22c953dab6] (start) waiting to cleanup test.foo from { _id: 311.0 } -> { _id: 363.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:03.363 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:03.363 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:03-512762eb5bae0f22c953dab7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535723363), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:03.363 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:03.364 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 8|1||512762dfe1d9169da342a45a based on: 7|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:03.364 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:03.364 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:03.382 [cleanupOldData-512762eb5bae0f22c953dab6] waiting to remove documents for test.foo from { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:03.383 [cleanupOldData-512762eb5bae0f22c953dab6] moveChunk starting delete for: test.foo from { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:03.385 [cleanupOldData-512762eb5bae0f22c953dab6] moveChunk deleted 52 documents for test.foo from { _id: 311.0 } -> { _id: 363.0 }
m30999| Fri Feb 22 12:22:04.365 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:04.365 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:04.366 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:04 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762ece1d9169da342a462" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762ebe1d9169da342a461" } }
m30999| Fri Feb 22 12:22:04.366 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762ece1d9169da342a462
m30999| Fri Feb 22 12:22:04.366 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:04.366 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:04.366 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:04.368 [Balancer] shard0001 has more chunks me:34 best: shard0000:7
m30999| Fri Feb 22 12:22:04.368 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:04.368 [Balancer] donor : shard0001 chunks on 34
m30999| Fri Feb 22 12:22:04.368 [Balancer] receiver : shard0000 chunks on 7
m30999| Fri Feb 22 12:22:04.368 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:04.368 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_363.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 363.0 }, max: { _id: 415.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:04.368 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { _id: 363.0 }max: { _id: 415.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:04.368 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:04.368 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 363.0 }, max: { _id: 415.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_363.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:04.369 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762ec5bae0f22c953dab8
m30001| Fri Feb 22 12:22:04.369 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:04-512762ec5bae0f22c953dab9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535724369), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:04.370 [conn4] moveChunk request accepted at version 8|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:04.371 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:04.371 [migrateThread] starting receiving-end of migration of chunk { _id: 363.0 } -> { _id: 415.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:04.380 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:04.380 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 363.0 } -> { _id: 415.0 }
m30001| Fri Feb 22 12:22:04.381 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:22:04.382 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 363.0 } -> { _id: 415.0 }
m30001| Fri Feb 22 12:22:04.391 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:22:04.391 [conn4] moveChunk setting version to: 9|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:04.391 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:04.392 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 363.0 } -> { _id: 415.0 }
m30000| Fri Feb 22 12:22:04.392 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 363.0 } -> { _id: 415.0 }
m30000| Fri Feb 22 12:22:04.392 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:04-512762ecf5293dc6c4c944d7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535724392), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:04.401 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:04.401 [conn4] moveChunk updating self version to: 9|1||512762dfe1d9169da342a45a through { _id: 415.0 } -> { _id: 467.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:04.402 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:04-512762ec5bae0f22c953daba", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535724402), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:04.402 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:04.402 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:04.402 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:04.402 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:04.402 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:04.402 [cleanupOldData-512762ec5bae0f22c953dabb] (start) waiting to cleanup test.foo from { _id: 363.0 } -> { _id: 415.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:04.403 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:04.403 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:04-512762ec5bae0f22c953dabc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535724403), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:04.403 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:04.404 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 9|1||512762dfe1d9169da342a45a based on: 8|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:04.404 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:04.404 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:04.422 [cleanupOldData-512762ec5bae0f22c953dabb] waiting to remove documents for test.foo from { _id: 363.0 } -> { _id: 415.0 }
m30001| Fri Feb 22 12:22:04.422 [cleanupOldData-512762ec5bae0f22c953dabb] moveChunk starting delete for: test.foo from { _id: 363.0 } -> { _id: 415.0 }
m30001| Fri Feb 22 12:22:04.427 [cleanupOldData-512762ec5bae0f22c953dabb] moveChunk deleted 52 documents for test.foo from { _id: 363.0 } -> { _id: 415.0 }
m30999| Fri Feb 22 12:22:05.405 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:05.405 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:05.406 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:05 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762ede1d9169da342a463" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762ece1d9169da342a462" } }
m30999| Fri Feb 22 12:22:05.406 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762ede1d9169da342a463
m30999| Fri Feb 22 12:22:05.406 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:05.406 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:05.406 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:05.408 [Balancer] shard0001 has more chunks me:33 best: shard0000:8
m30999| Fri Feb 22 12:22:05.408 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:05.408 [Balancer] donor : shard0001 chunks on 33
m30999| Fri Feb 22 12:22:05.408 [Balancer] receiver : shard0000 chunks on 8
m30999| Fri Feb 22 12:22:05.408 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:05.408 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_415.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 415.0 }, max: { _id: 467.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:05.408 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { _id: 415.0 }max: { _id: 467.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:05.408 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:05.408 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 415.0 }, max: { _id: 467.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_415.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:05.409 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762ed5bae0f22c953dabd
m30001| Fri Feb 22 12:22:05.409 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:05-512762ed5bae0f22c953dabe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535725409), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:05.410 [conn4] moveChunk request accepted at version 9|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:05.411 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:05.411 [migrateThread] starting receiving-end of migration of chunk { _id: 415.0 } -> { _id: 467.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:05.420 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:05.420 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 415.0 } -> { _id: 467.0 }
m30001| Fri Feb 22 12:22:05.421 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:22:05.422 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 415.0 } -> { _id: 467.0 }
m30001| Fri Feb 22 12:22:05.431 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:22:05.431 [conn4] moveChunk setting version to: 10|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:05.431 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:05.432 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 415.0 } -> { _id: 467.0 }
m30000| Fri Feb 22 12:22:05.432 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 415.0 } -> { _id: 467.0 }
m30000| Fri Feb 22 12:22:05.432 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:05-512762edf5293dc6c4c944d8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535725432), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:05.441 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:05.442 [conn4] moveChunk updating self version to: 10|1||512762dfe1d9169da342a45a through { _id: 467.0 } -> { _id: 519.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:05.442 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:05-512762ed5bae0f22c953dabf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535725442), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:05.442 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:05.442 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:05.442 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:05.442 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:05.442 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:05.442 [cleanupOldData-512762ed5bae0f22c953dac0] (start) waiting to cleanup test.foo from { _id: 415.0 } -> { _id: 467.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:05.443 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:05.443 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:05-512762ed5bae0f22c953dac1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535725443), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:05.443 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:05.444 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 10|1||512762dfe1d9169da342a45a based on: 9|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:05.444 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:05.444 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:05.462 [cleanupOldData-512762ed5bae0f22c953dac0] waiting to remove documents for test.foo from { _id: 415.0 } -> { _id: 467.0 }
m30001| Fri Feb 22 12:22:05.463 [cleanupOldData-512762ed5bae0f22c953dac0] moveChunk starting delete for: test.foo from { _id: 415.0 } -> { _id: 467.0 }
m30001| Fri Feb 22 12:22:05.467 [cleanupOldData-512762ed5bae0f22c953dac0] moveChunk deleted 52 documents for test.foo from { _id: 415.0 } -> { _id: 467.0 }
m30999| Fri Feb 22 12:22:06.445 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:06.445 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:06.445 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:06 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762eee1d9169da342a464" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762ede1d9169da342a463" } }
m30999| Fri Feb 22 12:22:06.446 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762eee1d9169da342a464
m30999| Fri Feb 22 12:22:06.446 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:06.446 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:06.446 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:06.447 [Balancer] shard0001 has more chunks me:32 best: shard0000:9
m30999| Fri Feb 22 12:22:06.447 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:06.447 [Balancer] donor : shard0001 chunks on 32
m30999| Fri Feb 22 12:22:06.447 [Balancer] receiver : shard0000 chunks on 9
m30999| Fri Feb 22 12:22:06.447 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:06.447 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_467.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 467.0 }, max: { _id: 519.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:06.448 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 10|1||000000000000000000000000min: { _id: 467.0 }max: { _id: 519.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:06.448 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:06.448 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 467.0 }, max: { _id: 519.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_467.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:06.449 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762ee5bae0f22c953dac2
m30001| Fri Feb 22 12:22:06.449 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:06-512762ee5bae0f22c953dac3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535726449), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:06.449 [conn4] moveChunk request accepted at version 10|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:06.450 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:06.450 [migrateThread] starting receiving-end of migration of chunk { _id: 467.0 } -> { _id: 519.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:06.457 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:06.457 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 467.0 } -> { _id: 519.0 }
m30000| Fri Feb 22 12:22:06.458 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 467.0 } -> { _id: 519.0 }
m30001| Fri Feb 22 12:22:06.460 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:06.460 [conn4] moveChunk setting version to: 11|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:06.460 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:06.469 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 467.0 } -> { _id: 519.0 }
m30000| Fri Feb 22 12:22:06.469 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 467.0 } -> { _id: 519.0 }
m30000| Fri Feb 22 12:22:06.469 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:06-512762eef5293dc6c4c944d9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535726469), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:06.470 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:06.470 [conn4] moveChunk updating self version to: 11|1||512762dfe1d9169da342a45a through { _id: 519.0 } -> { _id: 571.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:06.471 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:06-512762ee5bae0f22c953dac4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535726471), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:06.471 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:06.471 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:06.471 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:06.471 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:06.471 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:06.471 [cleanupOldData-512762ee5bae0f22c953dac5] (start) waiting to cleanup test.foo from { _id: 467.0 } -> { _id: 519.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:06.472 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:06.472 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:06-512762ee5bae0f22c953dac6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535726472), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:06.472 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:06.473 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 11|1||512762dfe1d9169da342a45a based on: 10|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:06.473 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:06.473 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:06.491 [cleanupOldData-512762ee5bae0f22c953dac5] waiting to remove documents for test.foo from { _id: 467.0 } -> { _id: 519.0 }
m30001| Fri Feb 22 12:22:06.491 [cleanupOldData-512762ee5bae0f22c953dac5] moveChunk starting delete for: test.foo from { _id: 467.0 } -> { _id: 519.0 }
m30001| Fri Feb 22 12:22:06.494 [cleanupOldData-512762ee5bae0f22c953dac5] moveChunk deleted 52 documents for test.foo from { _id: 467.0 } -> { _id: 519.0 }
{ "shard0000" : 10, "shard0001" : 31 }
m30999| Fri Feb 22 12:22:07.474 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:07.474 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:07.474 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:07 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762efe1d9169da342a465" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762eee1d9169da342a464" } }
m30999| Fri Feb 22 12:22:07.475 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762efe1d9169da342a465
m30999| Fri Feb 22 12:22:07.475 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:07.475 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:07.475 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:07.476 [Balancer] shard0001 has more chunks me:31 best: shard0000:10
m30999| Fri Feb 22 12:22:07.476 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:07.476 [Balancer] donor : shard0001 chunks on 31
m30999| Fri Feb 22 12:22:07.476 [Balancer] receiver : shard0000 chunks on 10
m30999| Fri Feb 22 12:22:07.476 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:07.476 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_519.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 519.0 }, max: { _id: 571.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:07.476 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: 519.0 }max: { _id: 571.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:07.476 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:07.476 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 519.0 }, max: { _id: 571.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_519.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:07.477 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762ef5bae0f22c953dac7
m30001| Fri Feb 22 12:22:07.477 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:07-512762ef5bae0f22c953dac8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535727477), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:07.478 [conn4] moveChunk request accepted at version 11|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:07.478 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:07.478 [migrateThread] starting receiving-end of migration of chunk { _id: 519.0 } -> { _id: 571.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:07.486 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:07.486 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30000| Fri Feb 22 12:22:07.487 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30001| Fri Feb 22 12:22:07.488 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:07.488 [conn4] moveChunk setting version to: 12|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:07.489 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:07.497 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30000| Fri Feb 22 12:22:07.497 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30000| Fri Feb 22 12:22:07.497 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:07-512762eff5293dc6c4c944da", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535727497), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:07.499 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:07.499 [conn4] moveChunk updating self version to: 12|1||512762dfe1d9169da342a45a through { _id: 571.0 } -> { _id: 623.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:07.499 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:07-512762ef5bae0f22c953dac9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535727499), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:07.499 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:07.499 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:07.499 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:07.499 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:07.499 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:07.499 [cleanupOldData-512762ef5bae0f22c953daca] (start) waiting to cleanup test.foo from { _id: 519.0 } -> { _id: 571.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:07.500 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:07.500 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:07-512762ef5bae0f22c953dacb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535727500), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:07.500 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:07.501 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 12|1||512762dfe1d9169da342a45a based on: 11|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:07.501 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:07.501 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:07.520 [cleanupOldData-512762ef5bae0f22c953daca] waiting to remove documents for test.foo from { _id: 519.0 } -> { _id: 571.0 }
m30001| Fri Feb 22 12:22:07.520 [cleanupOldData-512762ef5bae0f22c953daca] moveChunk starting delete for: test.foo from { _id: 519.0 } -> { _id: 571.0 }
m30001| Fri Feb 22 12:22:07.522 [cleanupOldData-512762ef5bae0f22c953daca] moveChunk deleted 52 documents for test.foo from { _id: 519.0 } -> { _id: 571.0 }
m30999| Fri Feb 22 12:22:08.502 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:08.502 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:08.503 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:08 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762f0e1d9169da342a466" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762efe1d9169da342a465" } }
m30999| Fri Feb 22 12:22:08.503 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f0e1d9169da342a466
m30999| Fri Feb 22 12:22:08.503 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:08.503 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:08.503 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:08.505 [Balancer] shard0001 has more chunks me:30 best: shard0000:11
m30999| Fri Feb 22 12:22:08.505 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:08.505 [Balancer] donor : shard0001 chunks on 30
m30999| Fri Feb 22 12:22:08.505 [Balancer] receiver : shard0000 chunks on 11
m30999| Fri Feb 22 12:22:08.505 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:08.505 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_571.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 571.0 }, max: { _id: 623.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:08.505 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 12|1||000000000000000000000000min: { _id: 571.0 }max: { _id: 623.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:08.505 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:08.505 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 571.0 }, max: { _id: 623.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_571.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:08.506 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f05bae0f22c953dacc
m30001| Fri Feb 22 12:22:08.506 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:08-512762f05bae0f22c953dacd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535728506), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:08.507 [conn4] moveChunk request accepted at version 12|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:08.508 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:08.508 [migrateThread] starting receiving-end of migration of chunk { _id: 571.0 } -> { _id: 623.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:08.517 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:08.518 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:08.518 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:08.519 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:08.528 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:08.528 [conn4] moveChunk setting version to: 13|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:08.528 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:08.529 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30000| Fri Feb 22 12:22:08.529 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30000| Fri Feb 22 12:22:08.529 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:08-512762f0f5293dc6c4c944db", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535728529), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:08.538 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:08.538 [conn4] moveChunk updating self version to: 13|1||512762dfe1d9169da342a45a through { _id: 623.0 } -> { _id: 675.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:08.539 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:08-512762f05bae0f22c953dace", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535728539), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:08.539 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:08.539 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:08.539 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:08.539 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:08.539 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:08.539 [cleanupOldData-512762f05bae0f22c953dacf] (start) waiting to cleanup test.foo from { _id: 571.0 } -> { _id: 623.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:08.540 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:08.540 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:08-512762f05bae0f22c953dad0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535728540), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:08.540 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:08.541 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 13|1||512762dfe1d9169da342a45a based on: 12|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:08.541 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:08.541 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:08.559 [cleanupOldData-512762f05bae0f22c953dacf] waiting to remove documents for test.foo from { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:08.559 [cleanupOldData-512762f05bae0f22c953dacf] moveChunk starting delete for: test.foo from { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:08.564 [cleanupOldData-512762f05bae0f22c953dacf] moveChunk deleted 52 documents for test.foo from { _id: 571.0 } -> { _id: 623.0 }
m30999| Fri Feb 22 12:22:09.542 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:09.542 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:09.542 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:09 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762f1e1d9169da342a467" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762f0e1d9169da342a466" } }
m30999| Fri Feb 22 12:22:09.543 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f1e1d9169da342a467
m30999| Fri Feb 22 12:22:09.543 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:09.543 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:09.543 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:09.545 [Balancer] shard0001 has more chunks me:29 best: shard0000:12
m30999| Fri Feb 22 12:22:09.545 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:09.545 [Balancer] donor : shard0001 chunks on 29
m30999| Fri Feb 22 12:22:09.545 [Balancer] receiver : shard0000 chunks on 12
m30999| Fri Feb 22 12:22:09.545 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:09.545 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_623.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 623.0 }, max: { _id: 675.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:09.545 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { _id: 623.0 }max: { _id: 675.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:09.545 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:09.545 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 623.0 }, max: { _id: 675.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_623.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:09.546 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f15bae0f22c953dad1
m30001| Fri Feb 22 12:22:09.546 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:09-512762f15bae0f22c953dad2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535729546), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:09.547 [conn4] moveChunk request accepted at version 13|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:09.547 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:09.547 [migrateThread] starting receiving-end of migration of chunk { _id: 623.0 } -> { _id: 675.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:09.557 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:09.557 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:09.558 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:09.558 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:09.568 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:09.568 [conn4] moveChunk setting version to: 14|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:09.568 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:09.569 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30000| Fri Feb 22 12:22:09.569 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30000| Fri Feb 22 12:22:09.569 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:09-512762f1f5293dc6c4c944dc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535729569), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:09.578 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:09.578 [conn4] moveChunk updating self version to: 14|1||512762dfe1d9169da342a45a through { _id: 675.0 } -> { _id: 727.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:09.579 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:09-512762f15bae0f22c953dad3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535729579), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:09.579 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:09.579 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:09.579 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:09.579 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:09.579 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:09.579 [cleanupOldData-512762f15bae0f22c953dad4] (start) waiting to cleanup test.foo from { _id: 623.0 } -> { _id: 675.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:09.579 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:09.580 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:09-512762f15bae0f22c953dad5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535729580), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:09.580 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:09.581 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 14|1||512762dfe1d9169da342a45a based on: 13|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:09.581 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:09.581 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:09.599 [cleanupOldData-512762f15bae0f22c953dad4] waiting to remove documents for test.foo from { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:09.599 [cleanupOldData-512762f15bae0f22c953dad4] moveChunk starting delete for: test.foo from { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:09.604 [cleanupOldData-512762f15bae0f22c953dad4] moveChunk deleted 52 documents for test.foo from { _id: 623.0 } -> { _id: 675.0 }
m30999| Fri Feb 22 12:22:10.582 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:10.582 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:10.582 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:10 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762f2e1d9169da342a468" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762f1e1d9169da342a467" } }
m30999| Fri Feb 22 12:22:10.583 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f2e1d9169da342a468
m30999| Fri Feb 22 12:22:10.583 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:10.583 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:10.583 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:10.584 [Balancer] shard0001 has more chunks me:28 best: shard0000:13
m30999| Fri Feb 22 12:22:10.584 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:10.584 [Balancer] donor : shard0001 chunks on 28
m30999| Fri Feb 22 12:22:10.584 [Balancer] receiver : shard0000 chunks on 13
m30999| Fri Feb 22 12:22:10.584 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:10.584 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_675.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 675.0 }, max: { _id: 727.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:10.584 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 14|1||000000000000000000000000min: { _id: 675.0 }max: { _id: 727.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:10.584 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:10.585 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 675.0 }, max: { _id: 727.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_675.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:10.585 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f25bae0f22c953dad6
m30001| Fri Feb 22 12:22:10.585 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:10-512762f25bae0f22c953dad7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535730585), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:10.586 [conn4] moveChunk request accepted at version 14|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:10.586 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:10.586 [migrateThread] starting receiving-end of migration of chunk { _id: 675.0 } -> { _id: 727.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:10.596 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:10.596 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:10.597 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:10.597 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:10.607 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:10.607 [conn4] moveChunk setting version to: 15|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:10.607 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:10.608 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30000| Fri Feb 22 12:22:10.608 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30000| Fri Feb 22 12:22:10.608 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:10-512762f2f5293dc6c4c944dd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535730608), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:10.617 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:10.617 [conn4] moveChunk updating self version to: 15|1||512762dfe1d9169da342a45a through { _id: 727.0 } -> { _id: 779.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:10.618 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:10-512762f25bae0f22c953dad8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535730618), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:10.618 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:10.618 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:10.618 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:10.618 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:10.618 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:10.618 [cleanupOldData-512762f25bae0f22c953dad9] (start) waiting to cleanup test.foo from { _id: 675.0 } -> { _id: 727.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:10.618 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:10.618 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:10-512762f25bae0f22c953dada", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535730618), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:10.618 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:10.619 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 15|1||512762dfe1d9169da342a45a based on: 14|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:10.619 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:10.620 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
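Each [Balancer] round above logs the same decision: the shard with the most chunks becomes the donor, the shard with the fewest becomes the receiver, and one chunk is moved only if the difference exceeds the logged threshold ("shard0001 has more chunks me:28 best: shard0000:13", "threshold : 2"). A minimal sketch of that selection rule, with illustrative names (not mongod's actual implementation):

```python
# Sketch of the balancer's donor/receiver selection as reflected in the
# "donor : shard0001 chunks on 28" / "receiver : shard0000 chunks on 13" /
# "threshold : 2" lines. Function and variable names are hypothetical.

def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) if the imbalance reaches the threshold, else None."""
    donor = max(chunk_counts, key=chunk_counts.get)     # most-loaded shard
    receiver = min(chunk_counts, key=chunk_counts.get)  # least-loaded shard
    if chunk_counts[donor] - chunk_counts[receiver] >= threshold:
        return donor, receiver
    return None

# Chunk counts from the round logged at 12:22:10
print(pick_migration({"shard0001": 28, "shard0000": 13}))  # ('shard0001', 'shard0000')
```

Because only one chunk moves per round, the counts converge by one per round (28/13, then 27/14, 26/15, ...), which is exactly the progression visible in the log.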
m30001| Fri Feb 22 12:22:10.638 [cleanupOldData-512762f25bae0f22c953dad9] waiting to remove documents for test.foo from { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:10.638 [cleanupOldData-512762f25bae0f22c953dad9] moveChunk starting delete for: test.foo from { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:10.642 [cleanupOldData-512762f25bae0f22c953dad9] moveChunk deleted 52 documents for test.foo from { _id: 675.0 } -> { _id: 727.0 }
{ "shard0000" : 14, "shard0001" : 27 }
m30999| Fri Feb 22 12:22:11.620 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:11.621 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:11.621 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:11 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762f3e1d9169da342a469" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762f2e1d9169da342a468" } }
m30999| Fri Feb 22 12:22:11.622 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f3e1d9169da342a469
m30999| Fri Feb 22 12:22:11.622 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:11.622 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:11.622 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:11.623 [Balancer] shard0001 has more chunks me:27 best: shard0000:14
m30999| Fri Feb 22 12:22:11.623 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:11.623 [Balancer] donor : shard0001 chunks on 27
m30999| Fri Feb 22 12:22:11.623 [Balancer] receiver : shard0000 chunks on 14
m30999| Fri Feb 22 12:22:11.623 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:11.623 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_727.0", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 727.0 }, max: { _id: 779.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:11.623 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 15|1||000000000000000000000000min: { _id: 727.0 }max: { _id: 779.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:11.623 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:11.623 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 727.0 }, max: { _id: 779.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_727.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:11.624 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f35bae0f22c953dadb
m30001| Fri Feb 22 12:22:11.624 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:11-512762f35bae0f22c953dadc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535731624), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:11.625 [conn4] moveChunk request accepted at version 15|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:11.625 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:11.625 [migrateThread] starting receiving-end of migration of chunk { _id: 727.0 } -> { _id: 779.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:11.634 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:11.634 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:11.635 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:11.636 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:11.645 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:11.645 [conn4] moveChunk setting version to: 16|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:11.646 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:11.646 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30000| Fri Feb 22 12:22:11.646 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30000| Fri Feb 22 12:22:11.646 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:11-512762f3f5293dc6c4c944de", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535731646), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:11.656 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:11.656 [conn4] moveChunk updating self version to: 16|1||512762dfe1d9169da342a45a through { _id: 779.0 } -> { _id: 831.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:11.656 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:11-512762f35bae0f22c953dadd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535731656), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:11.656 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:11.657 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:11.657 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:11.657 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:11.657 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:11.657 [cleanupOldData-512762f35bae0f22c953dade] (start) waiting to cleanup test.foo from { _id: 727.0 } -> { _id: 779.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:11.657 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:11.657 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:11-512762f35bae0f22c953dadf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535731657), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:11.657 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:11.658 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 16|1||512762dfe1d9169da342a45a based on: 15|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:11.658 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:11.658 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30001| Fri Feb 22 12:22:11.677 [cleanupOldData-512762f35bae0f22c953dade] waiting to remove documents for test.foo from { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:11.677 [cleanupOldData-512762f35bae0f22c953dade] moveChunk starting delete for: test.foo from { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:11.681 [cleanupOldData-512762f35bae0f22c953dade] moveChunk deleted 52 documents for test.foo from { _id: 727.0 } -> { _id: 779.0 }
m30999| Fri Feb 22 12:22:12.659 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:12.660 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:12.660 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:12 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762f4e1d9169da342a46a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762f3e1d9169da342a469" } }
m30999| Fri Feb 22 12:22:12.661 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f4e1d9169da342a46a
m30999| Fri Feb 22 12:22:12.661 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:12.661 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:12.661 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:12.662 [Balancer] shard0001 has more chunks me:26 best: shard0000:15
m30999| Fri Feb 22 12:22:12.662 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:12.662 [Balancer] donor : shard0001 chunks on 26
m30999| Fri Feb 22 12:22:12.662 [Balancer] receiver : shard0000 chunks on 15
m30999| Fri Feb 22 12:22:12.662 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:12.662 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_779.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 779.0 }, max: { _id: 831.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:12.662 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 16|1||000000000000000000000000min: { _id: 779.0 }max: { _id: 831.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:12.662 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:12.663 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 779.0 }, max: { _id: 831.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_779.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:12.664 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f45bae0f22c953dae0
m30001| Fri Feb 22 12:22:12.664 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:12-512762f45bae0f22c953dae1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535732664), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:12.664 [conn4] moveChunk request accepted at version 16|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:12.665 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:12.665 [migrateThread] starting receiving-end of migration of chunk { _id: 779.0 } -> { _id: 831.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:12.675 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:12.675 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:12.675 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:12.677 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:12.685 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:12.685 [conn4] moveChunk setting version to: 17|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:12.685 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:12.687 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30000| Fri Feb 22 12:22:12.687 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30000| Fri Feb 22 12:22:12.687 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:12-512762f4f5293dc6c4c944df", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535732687), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:22:12.696 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:12.696 [conn4] moveChunk updating self version to: 17|1||512762dfe1d9169da342a45a through { _id: 831.0 } -> { _id: 883.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:12.697 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:12-512762f45bae0f22c953dae2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535732697), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:12.697 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:12.697 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:12.697 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:12.697 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:12.697 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:12.697 [cleanupOldData-512762f45bae0f22c953dae3] (start) waiting to cleanup test.foo from { _id: 779.0 } -> { _id: 831.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:12.697 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:12.697 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:12-512762f45bae0f22c953dae4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535732697), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:12.697 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:12.698 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 17|1||512762dfe1d9169da342a45a based on: 16|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:12.698 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:12.699 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
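Every migration above reports the recipient advancing through the same states in the "moveChunk data transfer progress" lines: documents are cloned, then "catchup" (applying modifications made during the clone), then "steady", and finally "done" once the donor's commit is accepted. A simplified state-machine sketch of that progression; names and the loop structure are illustrative, not the real MigrateFromStatus code:

```python
# Models the recipient state sequence seen in the log: clone -> catchup ->
# steady -> done. The donor polls this state and only enters its commit step
# once the recipient reports "steady". Illustrative simplification only.

def advance(state, cloned, total):
    """Next migration state, given documents cloned so far out of total."""
    if state == "clone":
        return "catchup" if cloned >= total else "clone"
    if state == "catchup":
        return "steady"   # no pending modifications in this test workload
    if state == "steady":
        return "done"     # donor commits; recipient acknowledges
    return "done"

state, cloned = "clone", 0
while state != "done":
    cloned = min(cloned + 26, 52)   # 52 documents per chunk in this log
    state = advance(state, cloned, 52)
print(state)  # done
```

The counts in the log ({ cloned: 52, catchup: 0, steady: 0 }) show all 52 documents arriving in the clone phase with nothing left to catch up, which is why each migration completes in a few tens of milliseconds.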
m30001| Fri Feb 22 12:22:12.717 [cleanupOldData-512762f45bae0f22c953dae3] waiting to remove documents for test.foo from { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:12.717 [cleanupOldData-512762f45bae0f22c953dae3] moveChunk starting delete for: test.foo from { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:12.722 [cleanupOldData-512762f45bae0f22c953dae3] moveChunk deleted 52 documents for test.foo from { _id: 779.0 } -> { _id: 831.0 }
m30999| Fri Feb 22 12:22:13.699 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:13.700 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:13.700 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:13 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762f5e1d9169da342a46b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762f4e1d9169da342a46a" } }
m30999| Fri Feb 22 12:22:13.701 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f5e1d9169da342a46b
m30999| Fri Feb 22 12:22:13.701 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:13.701 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:13.701 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:13.702 [Balancer] shard0001 has more chunks me:25 best: shard0000:16
m30999| Fri Feb 22 12:22:13.702 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:13.702 [Balancer] donor : shard0001 chunks on 25
m30999| Fri Feb 22 12:22:13.702 [Balancer] receiver : shard0000 chunks on 16
m30999| Fri Feb 22 12:22:13.702 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:13.702 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_831.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 831.0 }, max: { _id: 883.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:13.702 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 17|1||000000000000000000000000min: { _id: 831.0 }max: { _id: 883.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:13.703 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:13.703 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 831.0 }, max: { _id: 883.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_831.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:13.704 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f55bae0f22c953dae5
m30001| Fri Feb 22 12:22:13.704 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:13-512762f55bae0f22c953dae6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535733704), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:13.705 [conn4] moveChunk request accepted at version 17|1||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:13.705 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:13.705 [migrateThread] starting receiving-end of migration of chunk { _id: 831.0 } -> { _id: 883.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:13.715 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:13.715 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30001| Fri Feb 22 12:22:13.715 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:13.717 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30001| Fri Feb 22 12:22:13.726 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:13.726 [conn4] moveChunk setting version to: 18|0||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:13.726 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:13.727 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30000| Fri Feb 22 12:22:13.727 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30000| Fri Feb 22 12:22:13.727 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:13-512762f5f5293dc6c4c944e0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535733727), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:13.736 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:13.736 [conn4] moveChunk updating self version to: 18|1||512762dfe1d9169da342a45a through { _id: 883.0 } -> { _id: 935.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:13.737 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:13-512762f55bae0f22c953dae7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535733737), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:13.737 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:13.737 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:13.737 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:13.737 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:13.737 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:13.738 [cleanupOldData-512762f55bae0f22c953dae8] (start) waiting to cleanup test.foo from { _id: 831.0 } -> { _id: 883.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:13.738 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked.
m30001| Fri Feb 22 12:22:13.738 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:13-512762f55bae0f22c953dae9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535733738), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:13.738 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:13.739 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 18|1||512762dfe1d9169da342a45a based on: 17|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:13.739 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:13.740 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30001| Fri Feb 22 12:22:13.758 [cleanupOldData-512762f55bae0f22c953dae8] waiting to remove documents for test.foo from { _id: 831.0 } -> { _id: 883.0 } m30001| Fri Feb 22 12:22:13.758 [cleanupOldData-512762f55bae0f22c953dae8] moveChunk starting delete for: test.foo from { _id: 831.0 } -> { _id: 883.0 } m30001| Fri Feb 22 12:22:13.762 [cleanupOldData-512762f55bae0f22c953dae8] moveChunk deleted 52 documents for test.foo from { _id: 831.0 } -> { _id: 883.0 } m30999| Fri Feb 22 12:22:14.740 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:14.741 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:14.741 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", 
m30999| "when" : { "$date" : "Fri Feb 22 12:22:14 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762f6e1d9169da342a46c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762f5e1d9169da342a46b" } } m30999| Fri Feb 22 12:22:14.741 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f6e1d9169da342a46c m30999| Fri Feb 22 12:22:14.742 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:14.742 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:14.742 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:14.743 [Balancer] shard0001 has more chunks me:24 best: shard0000:17 m30999| Fri Feb 22 12:22:14.743 [Balancer] collection : test.foo m30999| Fri Feb 22 12:22:14.743 [Balancer] donor : shard0001 chunks on 24 m30999| Fri Feb 22 12:22:14.743 [Balancer] receiver : shard0000 chunks on 17 m30999| Fri Feb 22 12:22:14.743 [Balancer] threshold : 2 m30999| Fri Feb 22 12:22:14.743 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_883.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 883.0 }, max: { _id: 935.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:22:14.743 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 18|1||000000000000000000000000min: { _id: 883.0 }max: { _id: 935.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:22:14.743 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:22:14.743 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 883.0 }, max: { _id: 935.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_883.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } 
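The balancer decision logged above (donor shard0001 with 24 chunks, receiver shard0000 with 17, threshold 2) follows a simple imbalance rule: move one chunk from the most-loaded shard toward the least-loaded one whenever the difference exceeds the threshold. A minimal sketch of that choice — not MongoDB's actual implementation, shard names and the threshold are taken from the log output:

```python
# Hedged sketch of the balancer's donor/receiver choice seen in the log.
# Illustrative only; real balancer policy also considers tags and drain state.

def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) if the imbalance exceeds the threshold, else None."""
    donor = max(chunk_counts, key=chunk_counts.get)      # shard with the most chunks
    receiver = min(chunk_counts, key=chunk_counts.get)   # shard with the fewest
    if chunk_counts[donor] - chunk_counts[receiver] > threshold:
        return donor, receiver
    return None

# Counts from the balancing round above (24 vs 17, threshold 2):
print(pick_migration({"shard0000": 17, "shard0001": 24}))
```

With the counts from this round the sketch selects shard0001 as donor and shard0000 as receiver, matching the `going to move … from: shard0001 to: shard0000` line.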
m30001| Fri Feb 22 12:22:14.744 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f65bae0f22c953daea m30001| Fri Feb 22 12:22:14.744 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:14-512762f65bae0f22c953daeb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535734744), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:14.745 [conn4] moveChunk request accepted at version 18|1||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:14.745 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:22:14.745 [migrateThread] starting receiving-end of migration of chunk { _id: 883.0 } -> { _id: 935.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:22:14.755 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:22:14.755 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:14.755 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:14.757 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:14.765 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:22:14.765 [conn4] moveChunk 
setting version to: 19|0||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:14.765 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:22:14.767 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30000| Fri Feb 22 12:22:14.767 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30000| Fri Feb 22 12:22:14.767 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:14-512762f6f5293dc6c4c944e1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535734767), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:22:14.776 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:22:14.776 [conn4] moveChunk updating self version to: 19|1||512762dfe1d9169da342a45a through { _id: 935.0 } -> { _id: 987.0 } for collection 'test.foo' m30001| Fri Feb 22 12:22:14.776 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:14-512762f65bae0f22c953daec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535734776), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:14.776 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:14.776 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:14.776 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 
12:22:14.776 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:14.776 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:14.777 [cleanupOldData-512762f65bae0f22c953daed] (start) waiting to cleanup test.foo from { _id: 883.0 } -> { _id: 935.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:22:14.777 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked. m30001| Fri Feb 22 12:22:14.777 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:14-512762f65bae0f22c953daee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535734777), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:14.777 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:14.778 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 19|1||512762dfe1d9169da342a45a based on: 18|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:14.778 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:14.778 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. 
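The donor-side polling visible in these rounds walks the recipient through a fixed sequence of reported states — `catchup`, then `steady`, then commit and `done` — and the donor only sets the new chunk version once `steady` is reached. A rough state-machine sketch of that progression, with illustrative names:

```python
# Illustrative sketch of the recipient states the donor polls for in the log:
# "catchup" -> "steady" -> commit -> "done". Not MongoDB source code.

MIGRATE_STATES = ["clone", "catchup", "steady", "commit", "done"]

def can_enter_critical_section(state):
    # Mirrors the log: "moveChunk setting version to" only appears after the
    # data-transfer progress report shows state: "steady".
    return MIGRATE_STATES.index(state) >= MIGRATE_STATES.index("steady")

assert not can_enter_critical_section("catchup")
assert can_enter_critical_section("steady")
```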
m30001| Fri Feb 22 12:22:14.797 [cleanupOldData-512762f65bae0f22c953daed] waiting to remove documents for test.foo from { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:14.797 [cleanupOldData-512762f65bae0f22c953daed] moveChunk starting delete for: test.foo from { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:14.800 [cleanupOldData-512762f65bae0f22c953daed] moveChunk deleted 52 documents for test.foo from { _id: 883.0 } -> { _id: 935.0 } m30999| Fri Feb 22 12:22:15.779 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:15.779 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:15.780 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:15 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762f7e1d9169da342a46d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762f6e1d9169da342a46c" } } m30999| Fri Feb 22 12:22:15.781 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f7e1d9169da342a46d m30999| Fri Feb 22 12:22:15.781 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:15.781 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:15.781 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:15.782 [Balancer] shard0001 has more chunks me:23 best: shard0000:18 m30999| Fri Feb 22 12:22:15.782 [Balancer] collection : test.foo m30999| Fri Feb 22 12:22:15.782 [Balancer] donor : shard0001 chunks on 23 m30999| Fri Feb 22 12:22:15.782 [Balancer] receiver : shard0000 
chunks on 18 m30999| Fri Feb 22 12:22:15.782 [Balancer] threshold : 2 m30999| Fri Feb 22 12:22:15.782 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_935.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 935.0 }, max: { _id: 987.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:22:15.782 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 19|1||000000000000000000000000min: { _id: 935.0 }max: { _id: 987.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:22:15.782 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:22:15.782 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 935.0 }, max: { _id: 987.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_935.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:22:15.783 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' acquired, ts : 512762f75bae0f22c953daef m30001| Fri Feb 22 12:22:15.783 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:15-512762f75bae0f22c953daf0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535735783), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:15.784 [conn4] moveChunk request accepted at version 19|1||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:15.784 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:22:15.784 [migrateThread] starting receiving-end of migration of chunk { _id: 935.0 } -> { _id: 987.0 } for collection test.foo from localhost:30001 (0 slaves 
detected) m30000| Fri Feb 22 12:22:15.793 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:22:15.793 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:15.794 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:15.795 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:15.804 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:22:15.804 [conn4] moveChunk setting version to: 20|0||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:15.805 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:22:15.805 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30000| Fri Feb 22 12:22:15.805 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30000| Fri Feb 22 12:22:15.805 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:15-512762f7f5293dc6c4c944e2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535735805), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:22:15.815 [conn4] moveChunk migrate commit accepted by TO-shard: { active: 
false, ns: "test.foo", from: "localhost:30001", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:22:15.815 [conn4] moveChunk updating self version to: 20|1||512762dfe1d9169da342a45a through { _id: 987.0 } -> { _id: 1039.0 } for collection 'test.foo' m30001| Fri Feb 22 12:22:15.816 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:15-512762f75bae0f22c953daf1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535735815), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:15.816 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:15.816 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:15.816 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:22:15.816 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:15.816 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:15.816 [cleanupOldData-512762f75bae0f22c953daf2] (start) waiting to cleanup test.foo from { _id: 935.0 } -> { _id: 987.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:22:15.816 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535717:13262' unlocked. 
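The chunk-version arithmetic in these rounds (18|1 before a move, 19|0 stamped during commit, 19|1 for the donor's remaining range) increments the major component once per migration. A small sketch of that bookkeeping, assuming the `major|minor` reading of the versions printed in the log:

```python
# Sketch of the major|minor chunk-version bump seen after each moveChunk commit.
# Assumes "18|1" in the log is a (major, minor) pair; illustrative only.

def bump_versions(current):
    major, _minor = current
    migrated = (major + 1, 0)    # version set in "moveChunk setting version to"
    donor_self = (major + 1, 1)  # "moveChunk updating self version to" line
    return migrated, donor_self

# Matches the round above: 18|1 -> 19|0 (moved chunk) and 19|1 (donor).
assert bump_versions((18, 1)) == ((19, 0), (19, 1))
```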
m30001| Fri Feb 22 12:22:15.816 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:15-512762f75bae0f22c953daf3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:55770", time: new Date(1361535735816), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:15.816 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:15.817 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 20|1||512762dfe1d9169da342a45a based on: 19|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:15.817 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:15.818 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30001| Fri Feb 22 12:22:15.836 [cleanupOldData-512762f75bae0f22c953daf2] waiting to remove documents for test.foo from { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:15.836 [cleanupOldData-512762f75bae0f22c953daf2] moveChunk starting delete for: test.foo from { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:15.839 [cleanupOldData-512762f75bae0f22c953daf2] moveChunk deleted 52 documents for test.foo from { _id: 935.0 } -> { _id: 987.0 } { "shard0000" : 19, "shard0001" : 22 } m30999| Fri Feb 22 12:22:16.599 [conn1] going to start draining shard: shard0000 m30999| primaryLocalDoc: { _id: "local", primary: "shard0000" } { "shard0001" : 22, "shard0000" : 19 } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" } m30999| Fri Feb 22 12:22:16.818 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:16.819 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 
bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:16.819 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:16 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762f8e1d9169da342a46e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762f7e1d9169da342a46d" } } m30999| Fri Feb 22 12:22:16.820 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f8e1d9169da342a46e m30999| Fri Feb 22 12:22:16.820 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:16.820 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:16.820 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:16.821 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:16.821 [Balancer] going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:16.821 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { _id: MinKey }max: { _id: 51.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:16.821 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:16.822 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 51.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: 
"localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:16.822 [initandlisten] connection accepted from 127.0.0.1:57241 #13 (13 connections now open) m30000| Fri Feb 22 12:22:16.823 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157 (sleeping for 30000ms) m30000| Fri Feb 22 12:22:16.824 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762f8f5293dc6c4c944e3 m30000| Fri Feb 22 12:22:16.825 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:16-512762f8f5293dc6c4c944e4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535736824), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:16.825 [conn7] no current chunk manager found for this shard, will initialize m30000| Fri Feb 22 12:22:16.826 [conn7] moveChunk request accepted at version 20|0||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:16.826 [conn7] moveChunk number of documents: 51 m30001| Fri Feb 22 12:22:16.827 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 51.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:16.835 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:16.835 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 } m30001| Fri Feb 22 12:22:16.836 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 } m30000| Fri Feb 22 12:22:16.837 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, 
state: "steady", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:16.837 [conn7] moveChunk setting version to: 21|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:16.837 [initandlisten] connection accepted from 127.0.0.1:41803 #6 (6 connections now open) m30001| Fri Feb 22 12:22:16.837 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:16.847 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 } m30001| Fri Feb 22 12:22:16.847 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 } m30001| Fri Feb 22 12:22:16.847 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:16-512762f85bae0f22c953daf4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535736847), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:22:16.847 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:16.848 [conn7] moveChunk updating self version to: 21|1||512762dfe1d9169da342a45a through { _id: 51.0 } -> { _id: 103.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:16.848 [initandlisten] connection accepted from 127.0.0.1:51528 #14 (14 connections now open) m30000| Fri Feb 22 12:22:16.848 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:16-512762f8f5293dc6c4c944e5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535736848), what: "moveChunk.commit", ns: "test.foo", 
details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:16.848 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:16.849 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:16.849 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:16.849 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:16.849 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:16.849 [cleanupOldData-512762f8f5293dc6c4c944e6] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 51.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:16.849 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. m30000| Fri Feb 22 12:22:16.849 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:16-512762f8f5293dc6c4c944e7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535736849), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 6: 0, step2 of 6: 4, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:16.849 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:16.851 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 21|1||512762dfe1d9169da342a45a based on: 20|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:16.851 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:16.851 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. 
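Once the shard is marked draining (the `"draining" : true` document and the `shard0000 is unavailable` balancer lines above), the direction of migration reverses: the balancer stops treating shard0000 as a receiver and instead moves its chunks off, one per balancing round. A hedged sketch of that policy under those assumptions:

```python
# Sketch of the drain policy visible in the log: a draining shard is never a
# receiver, and its chunks are moved off one per round. Illustrative only.

def pick_drain_migration(chunk_counts, draining):
    """Return (donor, receiver) while any draining shard still holds chunks."""
    donors = [s for s in draining if chunk_counts.get(s, 0) > 0]
    if not donors:
        return None  # drain complete
    donor = donors[0]
    receiver = min((s for s in chunk_counts if s not in draining),
                   key=chunk_counts.get)  # least-loaded non-draining shard
    return donor, receiver

# Counts at the start of the drain above: shard0000 holds 19, shard0001 holds 22.
assert pick_drain_migration({"shard0000": 19, "shard0001": 22},
                            {"shard0000"}) == ("shard0000", "shard0001")
```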
m30000| Fri Feb 22 12:22:16.869 [cleanupOldData-512762f8f5293dc6c4c944e6] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 51.0 } m30000| Fri Feb 22 12:22:16.869 [cleanupOldData-512762f8f5293dc6c4c944e6] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 51.0 } m30000| Fri Feb 22 12:22:16.873 [cleanupOldData-512762f8f5293dc6c4c944e6] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 51.0 } m30999| Fri Feb 22 12:22:17.852 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:17.852 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:17.852 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:17 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512762f9e1d9169da342a46f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762f8e1d9169da342a46e" } } m30999| Fri Feb 22 12:22:17.853 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762f9e1d9169da342a46f m30999| Fri Feb 22 12:22:17.853 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:17.853 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:17.853 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:17.854 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:17.854 [Balancer] going to move { _id: "test.foo-_id_51.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 51.0 }, max: { _id: 103.0 }, shard: 
"shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:17.854 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 21|1||000000000000000000000000min: { _id: 51.0 }max: { _id: 103.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:17.855 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:17.855 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 51.0 }, max: { _id: 103.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_51.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:17.856 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762f9f5293dc6c4c944e8 m30000| Fri Feb 22 12:22:17.856 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:17-512762f9f5293dc6c4c944e9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535737856), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:17.857 [conn7] moveChunk request accepted at version 21|1||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:17.857 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:22:17.857 [migrateThread] starting receiving-end of migration of chunk { _id: 51.0 } -> { _id: 103.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:17.865 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:17.865 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m30001| Fri Feb 22 12:22:17.866 [migrateThread] migrate commit 
flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m30000| Fri Feb 22 12:22:17.868 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:17.868 [conn7] moveChunk setting version to: 22|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:17.868 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:17.877 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m30001| Fri Feb 22 12:22:17.877 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m30001| Fri Feb 22 12:22:17.877 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:17-512762f95bae0f22c953daf5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535737877), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:22:17.878 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:17.878 [conn7] moveChunk updating self version to: 22|1||512762dfe1d9169da342a45a through { _id: 103.0 } -> { _id: 155.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:17.879 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:17-512762f9f5293dc6c4c944ea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535737879), what: 
"moveChunk.commit", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:17.879 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:17.879 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:17.879 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:17.879 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:17.879 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:17.879 [cleanupOldData-512762f9f5293dc6c4c944eb] (start) waiting to cleanup test.foo from { _id: 51.0 } -> { _id: 103.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:17.879 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. m30000| Fri Feb 22 12:22:17.879 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:17-512762f9f5293dc6c4c944ec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535737879), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:17.879 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:17.881 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 22|1||512762dfe1d9169da342a45a based on: 21|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:17.881 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:17.881 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. 
m30000| Fri Feb 22 12:22:17.899 [cleanupOldData-512762f9f5293dc6c4c944eb] waiting to remove documents for test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:22:17.899 [cleanupOldData-512762f9f5293dc6c4c944eb] moveChunk starting delete for: test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:22:17.904 [cleanupOldData-512762f9f5293dc6c4c944eb] moveChunk deleted 52 documents for test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30999| Fri Feb 22 12:22:18.882 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:18.882 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:18.882 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:18 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762fae1d9169da342a470" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762f9e1d9169da342a46f" } }
m30999| Fri Feb 22 12:22:18.883 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762fae1d9169da342a470
m30999| Fri Feb 22 12:22:18.883 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:18.883 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:18.883 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:18.884 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:18.884 [Balancer] going to move { _id: "test.foo-_id_103.0", lastmod: Timestamp 22000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 103.0 }, max: { _id: 155.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:18.884 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 22|1||000000000000000000000000min: { _id: 103.0 }max: { _id: 155.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:18.885 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:18.885 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 103.0 }, max: { _id: 155.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_103.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:18.886 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762faf5293dc6c4c944ed
m30000| Fri Feb 22 12:22:18.886 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:18-512762faf5293dc6c4c944ee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535738886), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:18.887 [conn7] moveChunk request accepted at version 22|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:18.887 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:18.887 [migrateThread] starting receiving-end of migration of chunk { _id: 103.0 } -> { _id: 155.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:18.893 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:18.893 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:22:18.895 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:22:18.897 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:18.897 [conn7] moveChunk setting version to: 23|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:18.897 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:18.905 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:22:18.905 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:22:18.905 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:18-512762fa5bae0f22c953daf6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535738905), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:18.907 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:18.908 [conn7] moveChunk updating self version to: 23|1||512762dfe1d9169da342a45a through { _id: 155.0 } -> { _id: 207.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:18.908 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:18-512762faf5293dc6c4c944ef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535738908), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:18.908 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:18.908 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:18.908 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:18.908 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:18.908 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:18.908 [cleanupOldData-512762faf5293dc6c4c944f0] (start) waiting to cleanup test.foo from { _id: 103.0 } -> { _id: 155.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:18.909 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:18.909 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:18-512762faf5293dc6c4c944f1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535738909), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:18.909 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:18.910 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 23|1||512762dfe1d9169da342a45a based on: 22|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:18.910 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:18.911 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:18.928 [cleanupOldData-512762faf5293dc6c4c944f0] waiting to remove documents for test.foo from { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:22:18.929 [cleanupOldData-512762faf5293dc6c4c944f0] moveChunk starting delete for: test.foo from { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:22:18.931 [cleanupOldData-512762faf5293dc6c4c944f0] moveChunk deleted 52 documents for test.foo from { _id: 103.0 } -> { _id: 155.0 }
m30999| Fri Feb 22 12:22:19.911 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:19.912 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:19.912 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:19 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762fbe1d9169da342a471" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762fae1d9169da342a470" } }
m30999| Fri Feb 22 12:22:19.913 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762fbe1d9169da342a471
m30999| Fri Feb 22 12:22:19.913 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:19.913 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:19.913 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:19.914 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:19.914 [Balancer] going to move { _id: "test.foo-_id_155.0", lastmod: Timestamp 23000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 155.0 }, max: { _id: 207.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:19.914 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 23|1||000000000000000000000000min: { _id: 155.0 }max: { _id: 207.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:19.914 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:19.914 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 155.0 }, max: { _id: 207.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_155.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:19.915 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762fbf5293dc6c4c944f2
m30000| Fri Feb 22 12:22:19.915 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:19-512762fbf5293dc6c4c944f3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535739915), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:19.916 [conn7] moveChunk request accepted at version 23|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:19.916 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:19.916 [migrateThread] starting receiving-end of migration of chunk { _id: 155.0 } -> { _id: 207.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:19.922 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:19.922 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:22:19.923 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30000| Fri Feb 22 12:22:19.927 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:19.927 [conn7] moveChunk setting version to: 24|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:19.927 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:19.934 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:22:19.934 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:22:19.934 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:19-512762fb5bae0f22c953daf7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535739934), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:19.937 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:19.937 [conn7] moveChunk updating self version to: 24|1||512762dfe1d9169da342a45a through { _id: 207.0 } -> { _id: 259.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:19.938 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:19-512762fbf5293dc6c4c944f4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535739937), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:19.938 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:19.938 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:19.938 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:19.938 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:19.938 [cleanupOldData-512762fbf5293dc6c4c944f5] (start) waiting to cleanup test.foo from { _id: 155.0 } -> { _id: 207.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:19.938 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:19.938 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:19.938 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:19-512762fbf5293dc6c4c944f6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535739938), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:19.938 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:19.939 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 24|1||512762dfe1d9169da342a45a based on: 23|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:19.939 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:19.940 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:19.958 [cleanupOldData-512762fbf5293dc6c4c944f5] waiting to remove documents for test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30000| Fri Feb 22 12:22:19.958 [cleanupOldData-512762fbf5293dc6c4c944f5] moveChunk starting delete for: test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30000| Fri Feb 22 12:22:19.962 [cleanupOldData-512762fbf5293dc6c4c944f5] moveChunk deleted 52 documents for test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30999| Fri Feb 22 12:22:20.940 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:20.941 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:20.941 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:20 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762fce1d9169da342a472" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762fbe1d9169da342a471" } }
m30999| Fri Feb 22 12:22:20.942 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762fce1d9169da342a472
m30999| Fri Feb 22 12:22:20.942 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:20.942 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:20.942 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:20.944 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:20.944 [Balancer] going to move { _id: "test.foo-_id_207.0", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 207.0 }, max: { _id: 259.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:20.944 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 24|1||000000000000000000000000min: { _id: 207.0 }max: { _id: 259.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:20.944 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:20.944 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 207.0 }, max: { _id: 259.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_207.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:20.945 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762fcf5293dc6c4c944f7
m30000| Fri Feb 22 12:22:20.945 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:20-512762fcf5293dc6c4c944f8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535740945), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:20.946 [conn7] moveChunk request accepted at version 24|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:20.946 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:20.947 [migrateThread] starting receiving-end of migration of chunk { _id: 207.0 } -> { _id: 259.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:20.954 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:20.954 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:20.955 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:22:20.957 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:20.957 [conn7] moveChunk setting version to: 25|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:20.957 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:20.966 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:20.966 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:20.966 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:20-512762fc5bae0f22c953daf8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535740966), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:20.967 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:20.967 [conn7] moveChunk updating self version to: 25|1||512762dfe1d9169da342a45a through { _id: 259.0 } -> { _id: 311.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:20.968 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:20-512762fcf5293dc6c4c944f9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535740968), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:20.968 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:20.968 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:20.968 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:20.968 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:20.968 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:20.968 [cleanupOldData-512762fcf5293dc6c4c944fa] (start) waiting to cleanup test.foo from { _id: 207.0 } -> { _id: 259.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:20.968 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:20.968 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:20-512762fcf5293dc6c4c944fb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535740968), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:20.968 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:20.970 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 25|1||512762dfe1d9169da342a45a based on: 24|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:20.970 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:20.970 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:20.988 [cleanupOldData-512762fcf5293dc6c4c944fa] waiting to remove documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:22:20.988 [cleanupOldData-512762fcf5293dc6c4c944fa] moveChunk starting delete for: test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:22:20.992 [cleanupOldData-512762fcf5293dc6c4c944fa] moveChunk deleted 52 documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30999| Fri Feb 22 12:22:21.090 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:22:21 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838', sleeping for 30000ms
{ "shard0001" : 27, "shard0000" : 14 }
{ "_id" : "shard0001", "host" : "localhost:30001" }
{ "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" }
m30999| Fri Feb 22 12:22:21.971 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:21.971 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:21.972 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:21 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762fde1d9169da342a473" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762fce1d9169da342a472" } }
m30999| Fri Feb 22 12:22:21.972 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762fde1d9169da342a473
m30999| Fri Feb 22 12:22:21.972 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:21.972 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:21.972 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:21.974 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:21.974 [Balancer] going to move { _id: "test.foo-_id_259.0", lastmod: Timestamp 25000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 259.0 }, max: { _id: 311.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:21.974 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 25|1||000000000000000000000000min: { _id: 259.0 }max: { _id: 311.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:21.974 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:21.974 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 259.0 }, max: { _id: 311.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_259.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:21.975 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762fdf5293dc6c4c944fc
m30000| Fri Feb 22 12:22:21.975 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:21-512762fdf5293dc6c4c944fd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535741975), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:21.976 [conn7] moveChunk request accepted at version 25|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:21.976 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:21.977 [migrateThread] starting receiving-end of migration of chunk { _id: 259.0 } -> { _id: 311.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:21.984 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:21.984 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:21.985 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30000| Fri Feb 22 12:22:21.987 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:21.987 [conn7] moveChunk setting version to: 26|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:21.987 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:21.995 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:21.996 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 259.0 } -> { _id: 311.0 }
m30001| Fri Feb 22 12:22:21.996 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:21-512762fd5bae0f22c953daf9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535741996), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:21.997 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:21.997 [conn7] moveChunk updating self version to: 26|1||512762dfe1d9169da342a45a through { _id: 311.0 } -> { _id: 363.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:21.998 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:21-512762fdf5293dc6c4c944fe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535741998), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:21.998 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:21.998 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:21.998 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:21.998 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:21.998 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:21.998 [cleanupOldData-512762fdf5293dc6c4c944ff] (start) waiting to cleanup test.foo from { _id: 259.0 } -> { _id: 311.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:21.998 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:21.998 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:21-512762fdf5293dc6c4c94500", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535741998), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:21.999 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:22.000 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 26|1||512762dfe1d9169da342a45a based on: 25|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:22.000 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:22.000 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:22.018 [cleanupOldData-512762fdf5293dc6c4c944ff] waiting to remove documents for test.foo from { _id: 259.0 } -> { _id: 311.0 }
m30000| Fri Feb 22 12:22:22.018 [cleanupOldData-512762fdf5293dc6c4c944ff] moveChunk starting delete for: test.foo from { _id: 259.0 } -> { _id: 311.0 }
m30000| Fri Feb 22 12:22:22.022 [cleanupOldData-512762fdf5293dc6c4c944ff] moveChunk deleted 52 documents for test.foo from { _id: 259.0 } -> { _id: 311.0 }
m30999| Fri Feb 22 12:22:23.001 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:23.001 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:23.001 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:23 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512762ffe1d9169da342a474" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512762fde1d9169da342a473" } }
m30999| Fri Feb 22 12:22:23.002 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 512762ffe1d9169da342a474
m30999| Fri Feb 22 12:22:23.002 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:23.002 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:23.002 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:23.003 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:23.003 [Balancer] going to move { _id: "test.foo-_id_311.0", lastmod: Timestamp 26000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 311.0 }, max: { _id: 363.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:23.003 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 26|1||000000000000000000000000min: { _id: 311.0 }max: { _id: 363.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:23.004 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:23.004 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 311.0 }, max: { _id: 363.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_311.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:23.005 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 512762fff5293dc6c4c94501
m30000| Fri Feb 22 12:22:23.005 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:23-512762fff5293dc6c4c94502", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535743005), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:23.006 [conn7] moveChunk request accepted at version 26|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:23.006 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:23.006 [migrateThread] starting receiving-end of migration of chunk { _id: 311.0 } -> { _id: 363.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:23.012 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:23.012 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:23.013 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30000| Fri Feb 22 12:22:23.017 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:23.017 [conn7] moveChunk setting version to: 27|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:23.017 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:23.024 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:23.024 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 311.0 } -> { _id: 363.0 }
m30001| Fri Feb 22 12:22:23.024 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:23-512762ff5bae0f22c953dafa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new
Date(1361535743024), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:22:23.027 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:23.027 [conn7] moveChunk updating self version to: 27|1||512762dfe1d9169da342a45a through { _id: 363.0 } -> { _id: 415.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:23.028 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:23-512762fff5293dc6c4c94503", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535743028), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:23.028 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:23.028 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:23.028 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:23.028 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:23.028 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:23.028 [cleanupOldData-512762fff5293dc6c4c94504] (start) waiting to cleanup test.foo from { _id: 311.0 } -> { _id: 363.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:23.028 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. 
m30000| Fri Feb 22 12:22:23.028 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:23-512762fff5293dc6c4c94505", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535743028), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:23.028 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:23.029 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 27|1||512762dfe1d9169da342a45a based on: 26|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:23.030 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:23.030 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30000| Fri Feb 22 12:22:23.048 [cleanupOldData-512762fff5293dc6c4c94504] waiting to remove documents for test.foo from { _id: 311.0 } -> { _id: 363.0 } m30000| Fri Feb 22 12:22:23.048 [cleanupOldData-512762fff5293dc6c4c94504] moveChunk starting delete for: test.foo from { _id: 311.0 } -> { _id: 363.0 } m30000| Fri Feb 22 12:22:23.051 [cleanupOldData-512762fff5293dc6c4c94504] moveChunk deleted 52 documents for test.foo from { _id: 311.0 } -> { _id: 363.0 } m30999| Fri Feb 22 12:22:24.031 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:24.031 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:24.031 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", 
m30999| "when" : { "$date" : "Fri Feb 22 12:22:24 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276300e1d9169da342a475" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512762ffe1d9169da342a474" } } m30999| Fri Feb 22 12:22:24.032 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276300e1d9169da342a475 m30999| Fri Feb 22 12:22:24.032 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:24.032 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:24.032 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:24.033 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:24.033 [Balancer] going to move { _id: "test.foo-_id_363.0", lastmod: Timestamp 27000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 363.0 }, max: { _id: 415.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:24.034 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 27|1||000000000000000000000000min: { _id: 363.0 }max: { _id: 415.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:24.034 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:24.034 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 363.0 }, max: { _id: 415.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_363.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:24.035 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276300f5293dc6c4c94506 m30000| Fri Feb 22 12:22:24.035 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:24-51276300f5293dc6c4c94507", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535744035), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:24.036 [conn7] moveChunk request accepted at version 27|1||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:24.036 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:22:24.037 [migrateThread] starting receiving-end of migration of chunk { _id: 363.0 } -> { _id: 415.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:24.045 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:24.045 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m30001| Fri Feb 22 12:22:24.046 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m30000| Fri Feb 22 12:22:24.047 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:24.047 [conn7] moveChunk setting version to: 28|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:24.047 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:24.056 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m30001| Fri Feb 22 12:22:24.056 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m30001| Fri Feb 22 12:22:24.057 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:24-512763005bae0f22c953dafb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361535744057), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:22:24.057 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:24.057 [conn7] moveChunk updating self version to: 28|1||512762dfe1d9169da342a45a through { _id: 415.0 } -> { _id: 467.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:24.058 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:24-51276300f5293dc6c4c94508", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535744058), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:24.061 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:24.061 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:24.061 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:24.061 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:24.061 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:24.061 [cleanupOldData-51276300f5293dc6c4c94509] (start) waiting to cleanup test.foo from { _id: 363.0 } -> { _id: 415.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:24.061 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. 
m30000| Fri Feb 22 12:22:24.061 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:24-51276300f5293dc6c4c9450a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535744061), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 13, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:24.061 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:24.063 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 29 version: 28|1||512762dfe1d9169da342a45a based on: 27|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:24.063 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:24.064 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30000| Fri Feb 22 12:22:24.081 [cleanupOldData-51276300f5293dc6c4c94509] waiting to remove documents for test.foo from { _id: 363.0 } -> { _id: 415.0 } m30000| Fri Feb 22 12:22:24.081 [cleanupOldData-51276300f5293dc6c4c94509] moveChunk starting delete for: test.foo from { _id: 363.0 } -> { _id: 415.0 } m30000| Fri Feb 22 12:22:24.084 [cleanupOldData-51276300f5293dc6c4c94509] moveChunk deleted 52 documents for test.foo from { _id: 363.0 } -> { _id: 415.0 } m30999| Fri Feb 22 12:22:25.064 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:25.065 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:25.065 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", 
m30999| "when" : { "$date" : "Fri Feb 22 12:22:25 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276301e1d9169da342a476" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276300e1d9169da342a475" } } m30999| Fri Feb 22 12:22:25.065 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276301e1d9169da342a476 m30999| Fri Feb 22 12:22:25.066 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:25.066 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:25.066 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:25.067 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:25.067 [Balancer] going to move { _id: "test.foo-_id_415.0", lastmod: Timestamp 28000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 415.0 }, max: { _id: 467.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:25.067 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 28|1||000000000000000000000000min: { _id: 415.0 }max: { _id: 467.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:25.067 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:25.068 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 415.0 }, max: { _id: 467.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_415.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:25.069 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276301f5293dc6c4c9450b m30000| Fri Feb 22 12:22:25.069 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:25-51276301f5293dc6c4c9450c", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535745069), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:25.070 [conn7] moveChunk request accepted at version 28|1||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:25.070 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:22:25.071 [migrateThread] starting receiving-end of migration of chunk { _id: 415.0 } -> { _id: 467.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:25.079 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:25.079 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m30001| Fri Feb 22 12:22:25.080 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m30000| Fri Feb 22 12:22:25.081 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:25.081 [conn7] moveChunk setting version to: 29|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:25.081 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:25.091 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m30001| Fri Feb 22 12:22:25.091 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m30001| Fri Feb 22 12:22:25.091 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:25-512763015bae0f22c953dafc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361535745091), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 12 } } m30000| Fri Feb 22 12:22:25.091 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:25.091 [conn7] moveChunk updating self version to: 29|1||512762dfe1d9169da342a45a through { _id: 467.0 } -> { _id: 519.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:25.092 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:25-51276301f5293dc6c4c9450d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535745092), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:25.092 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:25.092 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:25.092 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:25.092 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:25.092 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:25.093 [cleanupOldData-51276301f5293dc6c4c9450e] (start) waiting to cleanup test.foo from { _id: 415.0 } -> { _id: 467.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:25.093 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. 
m30000| Fri Feb 22 12:22:25.093 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:25-51276301f5293dc6c4c9450f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535745093), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:25.093 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:25.095 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 30 version: 29|1||512762dfe1d9169da342a45a based on: 28|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:25.095 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:25.095 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30000| Fri Feb 22 12:22:25.113 [cleanupOldData-51276301f5293dc6c4c9450e] waiting to remove documents for test.foo from { _id: 415.0 } -> { _id: 467.0 } m30000| Fri Feb 22 12:22:25.113 [cleanupOldData-51276301f5293dc6c4c9450e] moveChunk starting delete for: test.foo from { _id: 415.0 } -> { _id: 467.0 } m30000| Fri Feb 22 12:22:25.117 [cleanupOldData-51276301f5293dc6c4c9450e] moveChunk deleted 52 documents for test.foo from { _id: 415.0 } -> { _id: 467.0 } m30999| Fri Feb 22 12:22:26.096 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:26.096 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:26.096 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", 
m30999| "when" : { "$date" : "Fri Feb 22 12:22:26 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276302e1d9169da342a477" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276301e1d9169da342a476" } } m30999| Fri Feb 22 12:22:26.097 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276302e1d9169da342a477 m30999| Fri Feb 22 12:22:26.097 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:26.097 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:26.097 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:26.098 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:26.099 [Balancer] going to move { _id: "test.foo-_id_467.0", lastmod: Timestamp 29000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 467.0 }, max: { _id: 519.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:26.099 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 29|1||000000000000000000000000min: { _id: 467.0 }max: { _id: 519.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:26.099 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:26.099 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 467.0 }, max: { _id: 519.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_467.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:26.100 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276302f5293dc6c4c94510 m30000| Fri Feb 22 12:22:26.100 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:26-51276302f5293dc6c4c94511", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535746100), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:26.101 [conn7] moveChunk request accepted at version 29|1||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:26.101 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:22:26.101 [migrateThread] starting receiving-end of migration of chunk { _id: 467.0 } -> { _id: 519.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:26.109 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:26.109 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m30001| Fri Feb 22 12:22:26.110 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m30000| Fri Feb 22 12:22:26.112 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:26.112 [conn7] moveChunk setting version to: 30|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:26.112 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:26.121 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m30001| Fri Feb 22 12:22:26.121 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m30001| Fri Feb 22 12:22:26.121 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:26-512763025bae0f22c953dafd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361535746121), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:22:26.122 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:26.122 [conn7] moveChunk updating self version to: 30|1||512762dfe1d9169da342a45a through { _id: 519.0 } -> { _id: 571.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:26.123 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:26-51276302f5293dc6c4c94512", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535746123), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:26.123 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:26.123 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:26.123 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:26.123 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:26.123 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:26.123 [cleanupOldData-51276302f5293dc6c4c94513] (start) waiting to cleanup test.foo from { _id: 467.0 } -> { _id: 519.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:26.123 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. 
m30000| Fri Feb 22 12:22:26.123 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:26-51276302f5293dc6c4c94514", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535746123), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:22:26.123 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:22:26.125 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 30|1||512762dfe1d9169da342a45a based on: 29|1||512762dfe1d9169da342a45a m30999| Fri Feb 22 12:22:26.125 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:26.125 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. m30000| Fri Feb 22 12:22:26.143 [cleanupOldData-51276302f5293dc6c4c94513] waiting to remove documents for test.foo from { _id: 467.0 } -> { _id: 519.0 } m30000| Fri Feb 22 12:22:26.143 [cleanupOldData-51276302f5293dc6c4c94513] moveChunk starting delete for: test.foo from { _id: 467.0 } -> { _id: 519.0 } m30000| Fri Feb 22 12:22:26.147 [cleanupOldData-51276302f5293dc6c4c94513] moveChunk deleted 52 documents for test.foo from { _id: 467.0 } -> { _id: 519.0 } { "shard0001" : 32, "shard0000" : 9 } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" } m30999| Fri Feb 22 12:22:27.125 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:27.126 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 ) m30999| Fri Feb 22 12:22:27.126 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838: m30999| { "state" : 1, 
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:27 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276303e1d9169da342a478" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276302e1d9169da342a477" } } m30999| Fri Feb 22 12:22:27.127 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276303e1d9169da342a478 m30999| Fri Feb 22 12:22:27.127 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:27.127 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:27.127 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:27.128 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:27.128 [Balancer] going to move { _id: "test.foo-_id_519.0", lastmod: Timestamp 30000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 519.0 }, max: { _id: 571.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:27.128 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 30|1||000000000000000000000000min: { _id: 519.0 }max: { _id: 571.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:27.128 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:27.128 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 519.0 }, max: { _id: 571.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_519.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:27.129 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276303f5293dc6c4c94515 
m30000| Fri Feb 22 12:22:27.129 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:27-51276303f5293dc6c4c94516", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535747129), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:27.130 [conn7] moveChunk request accepted at version 30|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:27.130 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:27.130 [migrateThread] starting receiving-end of migration of chunk { _id: 519.0 } -> { _id: 571.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:27.136 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:27.136 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30001| Fri Feb 22 12:22:27.137 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30000| Fri Feb 22 12:22:27.141 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:27.141 [conn7] moveChunk setting version to: 31|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:27.141 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:27.148 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30001| Fri Feb 22 12:22:27.148 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 519.0 } -> { _id: 571.0 }
m30001| Fri Feb 22 12:22:27.148 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:27-512763035bae0f22c953dafe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535747148), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:27.151 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:27.151 [conn7] moveChunk updating self version to: 31|1||512762dfe1d9169da342a45a through { _id: 571.0 } -> { _id: 623.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:27.151 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:27-51276303f5293dc6c4c94517", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535747151), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:27.151 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:27.152 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:27.152 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:27.152 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:27.152 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:27.152 [cleanupOldData-51276303f5293dc6c4c94518] (start) waiting to cleanup test.foo from { _id: 519.0 } -> { _id: 571.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:27.152 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:27.152 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:27-51276303f5293dc6c4c94519", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535747152), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:27.152 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:27.162 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 32 version: 31|1||512762dfe1d9169da342a45a based on: 30|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:27.162 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:27.162 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:27.172 [cleanupOldData-51276303f5293dc6c4c94518] waiting to remove documents for test.foo from { _id: 519.0 } -> { _id: 571.0 }
m30000| Fri Feb 22 12:22:27.172 [cleanupOldData-51276303f5293dc6c4c94518] moveChunk starting delete for: test.foo from { _id: 519.0 } -> { _id: 571.0 }
m30000| Fri Feb 22 12:22:27.176 [cleanupOldData-51276303f5293dc6c4c94518] moveChunk deleted 52 documents for test.foo from { _id: 519.0 } -> { _id: 571.0 }
m30999| Fri Feb 22 12:22:28.163 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:28.163 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:28.164 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:28 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276304e1d9169da342a479" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276303e1d9169da342a478" } }
m30999| Fri Feb 22 12:22:28.164 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276304e1d9169da342a479
m30999| Fri Feb 22 12:22:28.164 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:28.164 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:28.164 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:28.166 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:28.166 [Balancer] going to move { _id: "test.foo-_id_571.0", lastmod: Timestamp 31000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 571.0 }, max: { _id: 623.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:28.166 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 31|1||000000000000000000000000min: { _id: 571.0 }max: { _id: 623.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:28.166 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:28.166 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 571.0 }, max: { _id: 623.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_571.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:28.167 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276304f5293dc6c4c9451a
m30000| Fri Feb 22 12:22:28.167 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:28-51276304f5293dc6c4c9451b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535748167), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:28.168 [conn7] moveChunk request accepted at version 31|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:28.168 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:28.168 [migrateThread] starting receiving-end of migration of chunk { _id: 571.0 } -> { _id: 623.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:28.175 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:28.175 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:28.176 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30000| Fri Feb 22 12:22:28.178 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:28.178 [conn7] moveChunk setting version to: 32|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:28.179 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:28.187 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:28.187 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 571.0 } -> { _id: 623.0 }
m30001| Fri Feb 22 12:22:28.187 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:28-512763045bae0f22c953daff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535748187), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:28.189 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:28.189 [conn7] moveChunk updating self version to: 32|1||512762dfe1d9169da342a45a through { _id: 623.0 } -> { _id: 675.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:28.189 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:28-51276304f5293dc6c4c9451c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535748189), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:28.189 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:28.189 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:28.189 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:28.190 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:28.190 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:28.190 [cleanupOldData-51276304f5293dc6c4c9451d] (start) waiting to cleanup test.foo from { _id: 571.0 } -> { _id: 623.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:28.190 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:28.190 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:28-51276304f5293dc6c4c9451e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535748190), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:28.190 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:28.191 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 32|1||512762dfe1d9169da342a45a based on: 31|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:28.191 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:28.192 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:28.210 [cleanupOldData-51276304f5293dc6c4c9451d] waiting to remove documents for test.foo from { _id: 571.0 } -> { _id: 623.0 }
m30000| Fri Feb 22 12:22:28.210 [cleanupOldData-51276304f5293dc6c4c9451d] moveChunk starting delete for: test.foo from { _id: 571.0 } -> { _id: 623.0 }
m30000| Fri Feb 22 12:22:28.213 [cleanupOldData-51276304f5293dc6c4c9451d] moveChunk deleted 52 documents for test.foo from { _id: 571.0 } -> { _id: 623.0 }
m30999| Fri Feb 22 12:22:29.192 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:29.193 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:29.193 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:29 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276305e1d9169da342a47a" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276304e1d9169da342a479" } }
m30999| Fri Feb 22 12:22:29.194 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276305e1d9169da342a47a
m30999| Fri Feb 22 12:22:29.194 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:29.194 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:29.194 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:29.195 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:29.195 [Balancer] going to move { _id: "test.foo-_id_623.0", lastmod: Timestamp 32000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 623.0 }, max: { _id: 675.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:29.195 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 32|1||000000000000000000000000min: { _id: 623.0 }max: { _id: 675.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:29.195 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:29.196 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 623.0 }, max: { _id: 675.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_623.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:29.196 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276305f5293dc6c4c9451f
m30000| Fri Feb 22 12:22:29.197 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:29-51276305f5293dc6c4c94520", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535749197), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:29.198 [conn7] moveChunk request accepted at version 32|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:29.198 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:29.198 [migrateThread] starting receiving-end of migration of chunk { _id: 623.0 } -> { _id: 675.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:29.206 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:29.206 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:29.208 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30000| Fri Feb 22 12:22:29.208 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:29.208 [conn7] moveChunk setting version to: 33|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:29.208 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:29.218 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:29.218 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 623.0 } -> { _id: 675.0 }
m30001| Fri Feb 22 12:22:29.218 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:29-512763055bae0f22c953db00", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535749218), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:29.219 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:29.219 [conn7] moveChunk updating self version to: 33|1||512762dfe1d9169da342a45a through { _id: 675.0 } -> { _id: 727.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:29.219 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:29-51276305f5293dc6c4c94521", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535749219), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:29.219 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:29.219 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:29.219 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:29.219 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:29.219 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:29.219 [cleanupOldData-51276305f5293dc6c4c94522] (start) waiting to cleanup test.foo from { _id: 623.0 } -> { _id: 675.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:29.220 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:29.220 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:29-51276305f5293dc6c4c94523", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535749220), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:29.220 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:29.221 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 33|1||512762dfe1d9169da342a45a based on: 32|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:29.221 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:29.221 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:29.240 [cleanupOldData-51276305f5293dc6c4c94522] waiting to remove documents for test.foo from { _id: 623.0 } -> { _id: 675.0 }
m30000| Fri Feb 22 12:22:29.240 [cleanupOldData-51276305f5293dc6c4c94522] moveChunk starting delete for: test.foo from { _id: 623.0 } -> { _id: 675.0 }
m30000| Fri Feb 22 12:22:29.244 [cleanupOldData-51276305f5293dc6c4c94522] moveChunk deleted 52 documents for test.foo from { _id: 623.0 } -> { _id: 675.0 }
m30999| Fri Feb 22 12:22:30.231 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:30.231 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:30.232 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:30 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276306e1d9169da342a47b" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276305e1d9169da342a47a" } }
m30999| Fri Feb 22 12:22:30.232 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276306e1d9169da342a47b
m30999| Fri Feb 22 12:22:30.232 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:30.232 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:30.232 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:30.234 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:30.234 [Balancer] going to move { _id: "test.foo-_id_675.0", lastmod: Timestamp 33000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 675.0 }, max: { _id: 727.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:30.234 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 33|1||000000000000000000000000min: { _id: 675.0 }max: { _id: 727.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:30.234 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:30.235 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 675.0 }, max: { _id: 727.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_675.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:30.235 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276306f5293dc6c4c94524
m30000| Fri Feb 22 12:22:30.235 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:30-51276306f5293dc6c4c94525", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535750235), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:30.236 [conn7] moveChunk request accepted at version 33|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:30.237 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:30.237 [migrateThread] starting receiving-end of migration of chunk { _id: 675.0 } -> { _id: 727.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:30.243 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:30.243 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:30.244 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30000| Fri Feb 22 12:22:30.247 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:30.247 [conn7] moveChunk setting version to: 34|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:30.247 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:30.254 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:30.254 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m30001| Fri Feb 22 12:22:30.254 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:30-512763065bae0f22c953db01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535750254), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:30.257 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:30.257 [conn7] moveChunk updating self version to: 34|1||512762dfe1d9169da342a45a through { _id: 727.0 } -> { _id: 779.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:30.258 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:30-51276306f5293dc6c4c94526", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535750258), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:30.258 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:30.258 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:30.258 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:30.258 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:30.258 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:30.258 [cleanupOldData-51276306f5293dc6c4c94527] (start) waiting to cleanup test.foo from { _id: 675.0 } -> { _id: 727.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:30.259 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:30.259 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:30-51276306f5293dc6c4c94528", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535750259), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:30.259 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:30.260 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 34|1||512762dfe1d9169da342a45a based on: 33|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:30.260 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:30.260 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:30.278 [cleanupOldData-51276306f5293dc6c4c94527] waiting to remove documents for test.foo from { _id: 675.0 } -> { _id: 727.0 }
m30000| Fri Feb 22 12:22:30.278 [cleanupOldData-51276306f5293dc6c4c94527] moveChunk starting delete for: test.foo from { _id: 675.0 } -> { _id: 727.0 }
m30000| Fri Feb 22 12:22:30.281 [cleanupOldData-51276306f5293dc6c4c94527] moveChunk deleted 52 documents for test.foo from { _id: 675.0 } -> { _id: 727.0 }
m30999| Fri Feb 22 12:22:31.261 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:31.261 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:31.261 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:31 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276307e1d9169da342a47c" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276306e1d9169da342a47b" } }
m30999| Fri Feb 22 12:22:31.262 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276307e1d9169da342a47c
m30999| Fri Feb 22 12:22:31.262 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:31.262 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:31.262 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:31.263 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:31.263 [Balancer] going to move { _id: "test.foo-_id_727.0", lastmod: Timestamp 34000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 727.0 }, max: { _id: 779.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:31.263 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 34|1||000000000000000000000000min: { _id: 727.0 }max: { _id: 779.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:31.263 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:31.264 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 727.0 }, max: { _id: 779.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_727.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:31.264 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276307f5293dc6c4c94529
m30000| Fri Feb 22 12:22:31.264 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:31-51276307f5293dc6c4c9452a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535751264), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:31.265 [conn7] moveChunk request accepted at version 34|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:31.265 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:31.266 [migrateThread] starting receiving-end of migration of chunk { _id: 727.0 } -> { _id: 779.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:31.272 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:31.272 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:31.273 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30000| Fri Feb 22 12:22:31.276 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:31.276 [conn7] moveChunk setting version to: 35|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:31.276 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:31.283 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:31.283 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m30001| Fri Feb 22 12:22:31.283 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:31-512763075bae0f22c953db02", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535751283), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 10 } }
m30000| Fri Feb 22 12:22:31.286 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:31.286 [conn7] moveChunk updating self version to: 35|1||512762dfe1d9169da342a45a through { _id: 779.0 } -> { _id: 831.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:31.287 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:31-51276307f5293dc6c4c9452b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535751287), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:31.287 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:31.287 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:31.287 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:31.287 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:31.287 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:31.287 [cleanupOldData-51276307f5293dc6c4c9452c] (start) waiting to cleanup test.foo from { _id: 727.0 } -> { _id: 779.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:31.287 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:31.287 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:31-51276307f5293dc6c4c9452d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535751287), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:31.287 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:31.288 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 36 version: 35|1||512762dfe1d9169da342a45a based on: 34|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:31.288 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:31.288 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:31.307 [cleanupOldData-51276307f5293dc6c4c9452c] waiting to remove documents for test.foo from { _id: 727.0 } -> { _id: 779.0 }
m30000| Fri Feb 22 12:22:31.307 [cleanupOldData-51276307f5293dc6c4c9452c] moveChunk starting delete for: test.foo from { _id: 727.0 } -> { _id: 779.0 }
m30000| Fri Feb 22 12:22:31.310 [cleanupOldData-51276307f5293dc6c4c9452c] moveChunk deleted 52 documents for test.foo from { _id: 727.0 } -> { _id: 779.0 }
{ "shard0001" : 37, "shard0000" : 4 }
{ "_id" : "shard0001", "host" : "localhost:30001" }
{ "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" }
m30999| Fri Feb 22 12:22:32.289 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:32.289 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:32.289 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:32 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276308e1d9169da342a47d" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276307e1d9169da342a47c" } } m30999| Fri Feb 22 12:22:32.290 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276308e1d9169da342a47d m30999| Fri Feb 22 12:22:32.290 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:32.290 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:32.290 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:32.291 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:32.291 [Balancer] going to move { _id: "test.foo-_id_779.0", lastmod: Timestamp 35000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 779.0 }, max: { _id: 831.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:32.292 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 35|1||000000000000000000000000min: { _id: 779.0 }max: { _id: 831.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:32.292 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:32.292 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 779.0 }, max: { _id: 831.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_779.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:32.293 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276308f5293dc6c4c9452e 
m30000| Fri Feb 22 12:22:32.293 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:32-51276308f5293dc6c4c9452f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535752293), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:32.294 [conn7] moveChunk request accepted at version 35|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:32.294 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:32.294 [migrateThread] starting receiving-end of migration of chunk { _id: 779.0 } -> { _id: 831.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:32.300 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:32.300 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:32.301 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30000| Fri Feb 22 12:22:32.304 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:32.304 [conn7] moveChunk setting version to: 36|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:32.304 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:32.312 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:32.312 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 779.0 } -> { _id: 831.0 }
m30001| Fri Feb 22 12:22:32.312 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:32-512763085bae0f22c953db03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535752312), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:32.315 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:32.315 [conn7] moveChunk updating self version to: 36|1||512762dfe1d9169da342a45a through { _id: 831.0 } -> { _id: 883.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:32.315 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:32-51276308f5293dc6c4c94530", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535752315), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:32.315 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:32.315 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:32.315 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:32.315 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:32.315 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:32.315 [cleanupOldData-51276308f5293dc6c4c94531] (start) waiting to cleanup test.foo from { _id: 779.0 } -> { _id: 831.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:32.316 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:32.316 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:32-51276308f5293dc6c4c94532", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535752316), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:32.316 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:32.317 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 36|1||512762dfe1d9169da342a45a based on: 35|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:32.317 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:32.317 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:32.335 [cleanupOldData-51276308f5293dc6c4c94531] waiting to remove documents for test.foo from { _id: 779.0 } -> { _id: 831.0 }
m30000| Fri Feb 22 12:22:32.335 [cleanupOldData-51276308f5293dc6c4c94531] moveChunk starting delete for: test.foo from { _id: 779.0 } -> { _id: 831.0 }
m30000| Fri Feb 22 12:22:32.339 [cleanupOldData-51276308f5293dc6c4c94531] moveChunk deleted 52 documents for test.foo from { _id: 779.0 } -> { _id: 831.0 }
m30999| Fri Feb 22 12:22:33.318 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:33.319 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:33.319 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:33 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276309e1d9169da342a47e" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276308e1d9169da342a47d" } }
m30999| Fri Feb 22 12:22:33.320 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 51276309e1d9169da342a47e
m30999| Fri Feb 22 12:22:33.320 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:33.320 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:33.320 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:33.321 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:22:33.321 [Balancer] going to move { _id: "test.foo-_id_831.0", lastmod: Timestamp 36000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 831.0 }, max: { _id: 883.0 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:22:33.321 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 36|1||000000000000000000000000min: { _id: 831.0 }max: { _id: 883.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:22:33.321 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:22:33.321 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 831.0 }, max: { _id: 883.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_831.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:33.322 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 51276309f5293dc6c4c94533
m30000| Fri Feb 22 12:22:33.322 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:33-51276309f5293dc6c4c94534", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535753322), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:33.323 [conn7] moveChunk request accepted at version 36|1||512762dfe1d9169da342a45a
m30000| Fri Feb 22 12:22:33.323 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:22:33.324 [migrateThread] starting receiving-end of migration of chunk { _id: 831.0 } -> { _id: 883.0 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:22:33.331 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:22:33.331 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30001| Fri Feb 22 12:22:33.333 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30000| Fri Feb 22 12:22:33.334 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:33.334 [conn7] moveChunk setting version to: 37|0||512762dfe1d9169da342a45a
m30001| Fri Feb 22 12:22:33.334 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:22:33.343 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30001| Fri Feb 22 12:22:33.343 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 831.0 } -> { _id: 883.0 }
m30001| Fri Feb 22 12:22:33.343 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:33-512763095bae0f22c953db04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535753343), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:22:33.344 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:22:33.344 [conn7] moveChunk updating self version to: 37|1||512762dfe1d9169da342a45a through { _id: 883.0 } -> { _id: 935.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:33.345 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:33-51276309f5293dc6c4c94535", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535753345), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:22:33.345 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:33.345 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:33.345 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:22:33.345 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:22:33.345 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:22:33.345 [cleanupOldData-51276309f5293dc6c4c94536] (start) waiting to cleanup test.foo from { _id: 831.0 } -> { _id: 883.0 }, # cursors remaining: 0
m30000| Fri Feb 22 12:22:33.345 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked.
m30000| Fri Feb 22 12:22:33.345 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:33-51276309f5293dc6c4c94537", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535753345), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:33.345 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:33.347 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 37|1||512762dfe1d9169da342a45a based on: 36|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:33.347 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:33.347 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:33.365 [cleanupOldData-51276309f5293dc6c4c94536] waiting to remove documents for test.foo from { _id: 831.0 } -> { _id: 883.0 }
m30000| Fri Feb 22 12:22:33.365 [cleanupOldData-51276309f5293dc6c4c94536] moveChunk starting delete for: test.foo from { _id: 831.0 } -> { _id: 883.0 }
m30000| Fri Feb 22 12:22:33.369 [cleanupOldData-51276309f5293dc6c4c94536] moveChunk deleted 52 documents for test.foo from { _id: 831.0 } -> { _id: 883.0 }
m30999| Fri Feb 22 12:22:34.348 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:34.348 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:34.348 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:34 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127630ae1d9169da342a47f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276309e1d9169da342a47e" } } m30999| Fri Feb 22 12:22:34.349 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 5127630ae1d9169da342a47f m30999| Fri Feb 22 12:22:34.349 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:34.349 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:34.349 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:34.350 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:34.350 [Balancer] going to move { _id: "test.foo-_id_883.0", lastmod: Timestamp 37000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 883.0 }, max: { _id: 935.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:34.350 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 37|1||000000000000000000000000min: { _id: 883.0 }max: { _id: 935.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:34.351 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:34.351 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 883.0 }, max: { _id: 935.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_883.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:34.352 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 5127630af5293dc6c4c94538 m30000| Fri Feb 22 12:22:34.352 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:34-5127630af5293dc6c4c94539", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535754352), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:34.353 [conn7] moveChunk request accepted at version 37|1||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:34.353 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:22:34.353 [migrateThread] starting receiving-end of migration of chunk { _id: 883.0 } -> { _id: 935.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:34.361 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:34.361 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:34.362 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30000| Fri Feb 22 12:22:34.363 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:34.363 [conn7] moveChunk setting version to: 38|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:34.363 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:34.372 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:34.372 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30001| Fri Feb 22 12:22:34.372 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:34-5127630a5bae0f22c953db05", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361535754372), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:22:34.373 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:34.374 [conn7] moveChunk updating self version to: 38|1||512762dfe1d9169da342a45a through { _id: 935.0 } -> { _id: 987.0 } for collection 'test.foo' m30000| Fri Feb 22 12:22:34.374 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:34-5127630af5293dc6c4c9453a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535754374), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:34.374 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:34.374 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:34.374 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:34.374 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:34.374 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:34.374 [cleanupOldData-5127630af5293dc6c4c9453b] (start) waiting to cleanup test.foo from { _id: 883.0 } -> { _id: 935.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:34.375 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. 
m30000| Fri Feb 22 12:22:34.375 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:34-5127630af5293dc6c4c9453c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535754375), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:34.375 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:34.376 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 38|1||512762dfe1d9169da342a45a based on: 37|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:34.376 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:34.376 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:34.394 [cleanupOldData-5127630af5293dc6c4c9453b] waiting to remove documents for test.foo from { _id: 883.0 } -> { _id: 935.0 }
m30000| Fri Feb 22 12:22:34.394 [cleanupOldData-5127630af5293dc6c4c9453b] moveChunk starting delete for: test.foo from { _id: 883.0 } -> { _id: 935.0 }
m30000| Fri Feb 22 12:22:34.398 [cleanupOldData-5127630af5293dc6c4c9453b] moveChunk deleted 52 documents for test.foo from { _id: 883.0 } -> { _id: 935.0 }
m30999| Fri Feb 22 12:22:35.377 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:35.377 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:35.378 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:35 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127630be1d9169da342a480" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127630ae1d9169da342a47f" } } m30999| Fri Feb 22 12:22:35.378 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 5127630be1d9169da342a480 m30999| Fri Feb 22 12:22:35.378 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:35.378 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:35.378 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:35.380 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:35.380 [Balancer] going to move { _id: "test.foo-_id_935.0", lastmod: Timestamp 38000|1, lastmodEpoch: ObjectId('512762dfe1d9169da342a45a'), ns: "test.foo", min: { _id: 935.0 }, max: { _id: 987.0 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:22:35.380 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 38|1||000000000000000000000000min: { _id: 935.0 }max: { _id: 987.0 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:22:35.380 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:22:35.380 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 935.0 }, max: { _id: 987.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_935.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:22:35.381 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' acquired, ts : 5127630bf5293dc6c4c9453d m30000| Fri Feb 22 12:22:35.381 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:35-5127630bf5293dc6c4c9453e", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535755381), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:35.382 [conn7] moveChunk request accepted at version 38|1||512762dfe1d9169da342a45a m30000| Fri Feb 22 12:22:35.382 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:22:35.382 [migrateThread] starting receiving-end of migration of chunk { _id: 935.0 } -> { _id: 987.0 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:22:35.390 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:22:35.390 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:35.392 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30000| Fri Feb 22 12:22:35.393 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:35.393 [conn7] moveChunk setting version to: 39|0||512762dfe1d9169da342a45a m30001| Fri Feb 22 12:22:35.393 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:22:35.402 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:35.402 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m30001| Fri Feb 22 12:22:35.402 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:35-5127630b5bae0f22c953db06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361535755402), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 12 } } m30000| Fri Feb 22 12:22:35.403 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:22:35.403 [conn7] moveChunk moved last chunk out for collection 'test.foo' m30000| Fri Feb 22 12:22:35.403 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:35-5127630bf5293dc6c4c9453f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535755403), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:22:35.404 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:35.404 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:35.404 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:22:35.404 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:22:35.404 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:22:35.404 [cleanupOldData-5127630bf5293dc6c4c94540] (start) waiting to cleanup test.foo from { _id: 935.0 } -> { _id: 987.0 }, # cursors remaining: 0 m30000| Fri Feb 22 12:22:35.404 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535736:22157' unlocked. 
m30000| Fri Feb 22 12:22:35.404 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:35-5127630bf5293dc6c4c94541", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:38651", time: new Date(1361535755404), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:35.404 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:35.405 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 39|0||512762dfe1d9169da342a45a based on: 38|1||512762dfe1d9169da342a45a
m30999| Fri Feb 22 12:22:35.406 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:35.406 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked.
m30000| Fri Feb 22 12:22:35.424 [cleanupOldData-5127630bf5293dc6c4c94540] waiting to remove documents for test.foo from { _id: 935.0 } -> { _id: 987.0 }
m30000| Fri Feb 22 12:22:35.424 [cleanupOldData-5127630bf5293dc6c4c94540] moveChunk starting delete for: test.foo from { _id: 935.0 } -> { _id: 987.0 }
m30000| Fri Feb 22 12:22:35.428 [cleanupOldData-5127630bf5293dc6c4c94540] moveChunk deleted 52 documents for test.foo from { _id: 935.0 } -> { _id: 987.0 }
m30999| Fri Feb 22 12:22:36.407 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:36.407 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838 )
m30999| Fri Feb 22 12:22:36.407 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:36 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127630ce1d9169da342a481" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127630be1d9169da342a480" } } m30999| Fri Feb 22 12:22:36.408 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' acquired, ts : 5127630ce1d9169da342a481 m30999| Fri Feb 22 12:22:36.408 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:36.408 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:36.408 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:36.410 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:22:36.410 [Balancer] collection : test.foo m30999| Fri Feb 22 12:22:36.410 [Balancer] donor : shard0001 chunks on 41 m30999| Fri Feb 22 12:22:36.410 [Balancer] receiver : shard0001 chunks on 41 m30999| Fri Feb 22 12:22:36.410 [Balancer] threshold : 2 m30999| Fri Feb 22 12:22:36.410 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:22:36.410 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:22:36.410 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535711:16838' unlocked. 
{ "shard0001" : 41, "shard0000" : 0 } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" } m30999| Fri Feb 22 12:22:36.619 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 12:22:36.637 [conn3] end connection 127.0.0.1:56333 (5 connections now open) m30001| Fri Feb 22 12:22:36.637 [conn4] end connection 127.0.0.1:55770 (5 connections now open) m30000| Fri Feb 22 12:22:36.637 [conn3] end connection 127.0.0.1:42113 (13 connections now open) m30000| Fri Feb 22 12:22:36.637 [conn5] end connection 127.0.0.1:65355 (13 connections now open) m30000| Fri Feb 22 12:22:36.637 [conn6] end connection 127.0.0.1:46158 (13 connections now open) m30000| Fri Feb 22 12:22:36.637 [conn7] end connection 127.0.0.1:38651 (13 connections now open) Fri Feb 22 12:22:37.620 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 12:22:37.620 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 12:22:37.620 [interruptThread] now exiting m30000| Fri Feb 22 12:22:37.620 dbexit: m30000| Fri Feb 22 12:22:37.620 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 12:22:37.620 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 12:22:37.620 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 12:22:37.620 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 12:22:37.620 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 12:22:37.620 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 12:22:37.620 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 12:22:37.620 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 12:22:37.620 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 12:22:37.620 [interruptThread] shutdown: final commit... 
m30000| Fri Feb 22 12:22:37.620 [conn1] end connection 127.0.0.1:55885 (9 connections now open)
m30000| Fri Feb 22 12:22:37.620 [conn2] end connection 127.0.0.1:55568 (9 connections now open)
m30000| Fri Feb 22 12:22:37.620 [conn8] end connection 127.0.0.1:34953 (9 connections now open)
m30001| Fri Feb 22 12:22:37.620 [conn5] end connection 127.0.0.1:49655 (3 connections now open)
m30000| Fri Feb 22 12:22:37.620 [conn10] end connection 127.0.0.1:45768 (9 connections now open)
m30000| Fri Feb 22 12:22:37.621 [conn9] end connection 127.0.0.1:59562 (9 connections now open)
m30000| Fri Feb 22 12:22:37.621 [conn11] end connection 127.0.0.1:49442 (9 connections now open)
m30000| Fri Feb 22 12:22:37.621 [conn12] end connection 127.0.0.1:57637 (9 connections now open)
m30000| Fri Feb 22 12:22:37.621 [conn13] end connection 127.0.0.1:57241 (9 connections now open)
m30001| Fri Feb 22 12:22:37.621 [conn6] end connection 127.0.0.1:41803 (2 connections now open)
m30000| Fri Feb 22 12:22:37.621 [conn14] end connection 127.0.0.1:51528 (8 connections now open)
m30000| Fri Feb 22 12:22:37.654 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 12:22:37.659 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 12:22:37.659 [interruptThread] journalCleanup...
m30000| Fri Feb 22 12:22:37.659 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 12:22:37.660 dbexit: really exiting now
Fri Feb 22 12:22:38.620 shell: stopped mongo program on port 30000
m30001| Fri Feb 22 12:22:38.620 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 12:22:38.620 [interruptThread] now exiting
m30001| Fri Feb 22 12:22:38.620 dbexit:
m30001| Fri Feb 22 12:22:38.620 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 12:22:38.620 [interruptThread] closing listening socket: 15
m30001| Fri Feb 22 12:22:38.620 [interruptThread] closing listening socket: 16
m30001| Fri Feb 22 12:22:38.620 [interruptThread] closing listening socket: 17
m30001| Fri Feb 22 12:22:38.620 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 12:22:38.621 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 12:22:38.621 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 12:22:38.621 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 12:22:38.621 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 12:22:38.621 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 12:22:38.621 [conn1] end connection 127.0.0.1:62380 (1 connection now open)
m30001| Fri Feb 22 12:22:38.643 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 12:22:38.648 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 12:22:38.648 [interruptThread] journalCleanup...
m30001| Fri Feb 22 12:22:38.648 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:22:38.648 dbexit: really exiting now
Fri Feb 22 12:22:39.620 shell: stopped mongo program on port 30001
*** ShardingTest slow_sharding_balance1 completed successfully in 49.009 seconds ***
Fri Feb 22 12:22:39.648 [conn6] end connection 127.0.0.1:47675 (0 connections now open)
49.2206 seconds
Fri Feb 22 12:22:39.665 [initandlisten] connection accepted from 127.0.0.1:56621 #7 (1 connection now open)
Fri Feb 22 12:22:39.666 [conn7] end connection 127.0.0.1:56621 (0 connections now open)
*******************************************
Test : sharding_balance2.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance2.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance2.js";TestData.testFile = "sharding_balance2.js";TestData.testName = "sharding_balance2";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:22:39 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:22:39.814 [initandlisten] connection accepted from 127.0.0.1:60221 #8 (1 connection now open)
null
Resetting db path '/data/db/slow_sharding_balance20'
Fri Feb 22 12:22:39.826 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/slow_sharding_balance20 --setParameter enableTestCommands=1
m30000| Fri Feb 22 12:22:39.904 [initandlisten] MongoDB starting : pid=11318 port=30000 dbpath=/data/db/slow_sharding_balance20 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 12:22:39.905 [initandlisten]
m30000| Fri Feb 22 12:22:39.905 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 12:22:39.905 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 12:22:39.905 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 12:22:39.905 [initandlisten]
m30000| Fri Feb 22 12:22:39.905 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 12:22:39.905 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 12:22:39.905 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 12:22:39.905 [initandlisten] allocator: system
m30000| Fri Feb 22 12:22:39.905 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance20", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 12:22:39.905 [initandlisten] journal dir=/data/db/slow_sharding_balance20/journal
m30000| Fri Feb 22 12:22:39.905 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 12:22:39.918 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/local.ns, filling with zeroes...
m30000| Fri Feb 22 12:22:39.918 [FileAllocator] creating directory /data/db/slow_sharding_balance20/_tmp
m30000| Fri Feb 22 12:22:39.919 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:22:39.919 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/local.0, filling with zeroes...
m30000| Fri Feb 22 12:22:39.919 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:22:39.922 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 12:22:39.922 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 12:22:40.029 [initandlisten] connection accepted from 127.0.0.1:63918 #1 (1 connection now open)
Resetting db path '/data/db/slow_sharding_balance21'
Fri Feb 22 12:22:40.032 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/slow_sharding_balance21 --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:22:40.107 [initandlisten] MongoDB starting : pid=11319 port=30001 dbpath=/data/db/slow_sharding_balance21 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:22:40.107 [initandlisten]
m30001| Fri Feb 22 12:22:40.107 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:22:40.107 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 12:22:40.107 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:22:40.107 [initandlisten]
m30001| Fri Feb 22 12:22:40.107 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:22:40.107 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:22:40.107 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:22:40.107 [initandlisten] allocator: system
m30001| Fri Feb 22 12:22:40.107 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance21", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:22:40.107 [initandlisten] journal dir=/data/db/slow_sharding_balance21/journal
m30001| Fri Feb 22 12:22:40.108 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:22:40.123 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance21/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:22:40.123 [FileAllocator] creating directory /data/db/slow_sharding_balance21/_tmp
m30001| Fri Feb 22 12:22:40.123 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance21/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:22:40.123 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance21/local.0, filling with zeroes...
m30001| Fri Feb 22 12:22:40.123 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance21/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:22:40.127 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:22:40.127 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:22:40.234 [initandlisten] connection accepted from 127.0.0.1:34678 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:22:40.235 [initandlisten] connection accepted from 127.0.0.1:64392 #2 (2 connections now open)
ShardingTest slow_sharding_balance2 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:22:40.243 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:22:40.260 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:22:40.261 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=11320 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:22:40.261 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:22:40.261 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:22:40.261 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 12:22:40.261 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 12:22:40.261 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:40.262 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:40.262 [mongosMain] connected connection!
m30000| Fri Feb 22 12:22:40.262 [initandlisten] connection accepted from 127.0.0.1:54441 #3 (3 connections now open)
m30999| Fri Feb 22 12:22:40.263 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:22:40.263 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:40.263 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:22:40.263 [initandlisten] connection accepted from 127.0.0.1:63086 #4 (4 connections now open)
m30999| Fri Feb 22 12:22:40.263 [mongosMain] connected connection!
m30000| Fri Feb 22 12:22:40.264 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:22:40.277 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:22:40.277 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 )
m30999| Fri Feb 22 12:22:40.277 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:22:40.278 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 12:22:40.278 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:40 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "5127631060d3b8e20a182780" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m30000| Fri Feb 22 12:22:40.278 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/config.ns, filling with zeroes...
m30000| Fri Feb 22 12:22:40.278 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:22:40.278 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/config.0, filling with zeroes...
m30000| Fri Feb 22 12:22:40.278 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:22:40.279 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/config.1, filling with zeroes...
m30000| Fri Feb 22 12:22:40.279 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:22:40.282 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:22:40.284 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:22:40.284 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:22:40.285 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:40.286 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:22:40 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838', sleeping for 30000ms
m30000| Fri Feb 22 12:22:40.286 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:22:40.286 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:22:40.287 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631060d3b8e20a182780
m30999| Fri Feb 22 12:22:40.289 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:22:40.289 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:22:40.289 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:40-5127631060d3b8e20a182781", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535760289), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:22:40.290 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:22:40.290 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:40.290 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:22:40.291 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:22:40.291 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:40.292 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:40-5127631060d3b8e20a182783", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535760292), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:22:40.292 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:22:40.293 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
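The configUpgrade and balancer sequences in this log bracket their work with "distributed lock ... acquired" and "... unlocked." messages. When auditing a long run for a lock that was taken but never released, a small pairing script helps; this is a rough sketch (the `lock_balance` helper is hypothetical, not part of smoke.py), assuming only the message formats visible above:

```python
import re

# Message shapes taken from the mongos verbose output above.
ACQUIRED = re.compile(r"distributed lock '([^']+)' acquired, ts : (\w+)")
UNLOCKED = re.compile(r"distributed lock '([^']+)' unlocked")

def lock_balance(lines):
    """Return the set of distributed-lock names acquired but not yet released."""
    held = set()
    for line in lines:
        m = ACQUIRED.search(line)
        if m:
            held.add(m.group(1))
            continue
        m = UNLOCKED.search(line)
        if m:
            held.discard(m.group(1))
    return held

events = [
    "m30999| distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631060d3b8e20a182780",
    "m30999| distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.",
]
print(lock_balance(events))  # set() -> every acquisition was released
```

An empty result means each acquire in the sample was matched by a release, which is what a healthy run like this one shows.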
m30000| Fri Feb 22 12:22:40.294 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:22:40.295 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:22:40.295 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:22:40.295 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:22:40.295 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:22:40.295 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:22:40.295 BackgroundJob starting: PeriodicTask::Runner
m30000| Fri Feb 22 12:22:40.295 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:40.295 [mongosMain] waiting for connections on port 30999
m30999| Fri Feb 22 12:22:40.295 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 12:22:40.296 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 12:22:40.297 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:22:40.297 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 12:22:40.297 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 12:22:40.298 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:40.298 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 12:22:40.299 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:40.299 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 12:22:40.299 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:40.300 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 12:22:40.301 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:40.301 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 12:22:40.301 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 12:22:40.302 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:40.302 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:22:40.302 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:22:40
m30999| Fri Feb 22 12:22:40.302 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:22:40.302 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:40.303 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:22:40.303 [conn3] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 12:22:40.303 [initandlisten] connection accepted from 127.0.0.1:42223 #5 (5 connections now open)
m30999| Fri Feb 22 12:22:40.303 [Balancer] connected connection!
m30000| Fri Feb 22 12:22:40.304 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:40.304 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:40.304 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 )
m30999| Fri Feb 22 12:22:40.304 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:22:40.305 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:40 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127631060d3b8e20a182785" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 12:22:40.305 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631060d3b8e20a182785
m30999| Fri Feb 22 12:22:40.305 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:40.305 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:40.305 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:40.306 [Balancer] no collections to balance
m30999| Fri Feb 22 12:22:40.306 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:22:40.306 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:40.306 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
m30999| Fri Feb 22 12:22:40.444 [mongosMain] connection accepted from 127.0.0.1:45738 #1 (1 connection now open)
m30999| Fri Feb 22 12:22:40.447 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:22:40.447 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:22:40.448 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:40.448 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:22:40.450 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30999| Fri Feb 22 12:22:40.451 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:22:40.451 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:40.451 [conn1] connected connection!
m30001| Fri Feb 22 12:22:40.451 [initandlisten] connection accepted from 127.0.0.1:44936 #2 (2 connections now open)
m30999| Fri Feb 22 12:22:40.453 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001", maxSize: 1 }
m30999| Fri Feb 22 12:22:40.454 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:22:40.454 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:22:40.455 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:22:40.455 [conn1] enabling sharding on: test
m30999| Fri Feb 22 12:22:40.455 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:40.455 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:22:40.456 [initandlisten] connection accepted from 127.0.0.1:63921 #6 (6 connections now open)
m30999| Fri Feb 22 12:22:40.456 [conn1] connected connection!
m30999| Fri Feb 22 12:22:40.456 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5127631060d3b8e20a182784
m30999| Fri Feb 22 12:22:40.456 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 12:22:40.456 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 12:22:40.456 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:22:40.456 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:40.456 [conn1] connected connection!
m30001| Fri Feb 22 12:22:40.456 [initandlisten] connection accepted from 127.0.0.1:53898 #3 (3 connections now open)
m30999| Fri Feb 22 12:22:40.456 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5127631060d3b8e20a182784
m30999| Fri Feb 22 12:22:40.456 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 12:22:40.457 BackgroundJob starting: WriteBackListener-localhost:30001
{ "_id" : "chunksize", "value" : 1 }
m30001| Fri Feb 22 12:22:40.459 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance21/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:22:40.459 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance21/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:22:40.459 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance21/test.0, filling with zeroes...
m30001| Fri Feb 22 12:22:40.459 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance21/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:22:40.460 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance21/test.1, filling with zeroes...
m30001| Fri Feb 22 12:22:40.460 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance21/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:22:40.463 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 12:22:40.464 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:41.063 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:22:41.064 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:41.064 [conn1] connected connection!
m30001| Fri Feb 22 12:22:41.064 [initandlisten] connection accepted from 127.0.0.1:61410 #4 (4 connections now open)
m30999| Fri Feb 22 12:22:41.065 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:22:41.065 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:22:41.065 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:22:41.071 [conn1] going to create 81 chunk(s) for: test.foo using new epoch 5127631160d3b8e20a182786
m30999| Fri Feb 22 12:22:41.081 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:41.081 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:41.082 [conn1] connected connection!
m30000| Fri Feb 22 12:22:41.082 [initandlisten] connection accepted from 127.0.0.1:49934 #7 (7 connections now open)
m30999| Fri Feb 22 12:22:41.085 [conn1] ChunkManager: time to load chunks for test.foo: 3ms sequenceNumber: 2 version: 1|80||5127631160d3b8e20a182786 based on: (empty)
m30000| Fri Feb 22 12:22:41.086 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 12:22:41.090 [conn3] build index done. scanned 0 total records. 0.004 secs
m30999| Fri Feb 22 12:22:41.090 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|80, versionEpoch: ObjectId('5127631160d3b8e20a182786'), serverID: ObjectId('5127631060d3b8e20a182784'), shard: "shard0001", shardHost: "localhost:30001" } 0x1180470 2
m30999| Fri Feb 22 12:22:41.091 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:22:41.091 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|80, versionEpoch: ObjectId('5127631160d3b8e20a182786'), serverID: ObjectId('5127631060d3b8e20a182784'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x1180470 2
m30001| Fri Feb 22 12:22:41.091 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 12:22:41.091 [initandlisten] connection accepted from 127.0.0.1:44299 #8 (8 connections now open)
m30999| Fri Feb 22 12:22:41.094 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "shard0000" : 0, "shard0001" : 81 }
{ "shard0000" : 0, "shard0001" : 81 }
81
{ "shard0000" : 0, "shard0001" : 81 }
{ "shard0000" : 0, "shard0001" : 81 }
{ "shard0000" : 0, "shard0001" : 81 }
m30999| Fri Feb 22 12:22:46.306 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:46.307 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 )
m30999| Fri Feb 22 12:22:46.307 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:46 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127631660d3b8e20a182787" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127631060d3b8e20a182785" } }
m30999| Fri Feb 22 12:22:46.308 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631660d3b8e20a182787
m30999| Fri Feb 22 12:22:46.308 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:46.308 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:46.308 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 12:22:46.309 [conn3] build index config.tags { _id: 1 }
m30000| Fri Feb 22 12:22:46.312 [conn3] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 12:22:46.312 [conn3] info: creating collection config.tags on add index
m30000| Fri Feb 22 12:22:46.312 [conn3] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 12:22:46.313 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:46.313 [Balancer] shard0001 is unavailable
m30999| Fri Feb 22 12:22:46.313 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:46.313 [Balancer] donor : shard0001 chunks on 81
m30999| Fri Feb 22 12:22:46.313 [Balancer] receiver : shard0000 chunks on 0
m30999| Fri Feb 22 12:22:46.313 [Balancer] threshold : 8
m30999| Fri Feb 22 12:22:46.313 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127631160d3b8e20a182786'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:46.313 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 51.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:46.314 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:46.314 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 51.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:22:46.314 [initandlisten] connection accepted from 127.0.0.1:33100 #9 (9 connections now open)
m30001| Fri Feb 22 12:22:46.315 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743 (sleeping for 30000ms)
m30001| Fri Feb 22 12:22:46.316 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' acquired, ts : 51276316c1b0722a0249fbaf
m30001| Fri Feb 22 12:22:46.316 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:46-51276316c1b0722a0249fbb0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535766316), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:46.317 [conn4] moveChunk request accepted at version 1|80||5127631160d3b8e20a182786
m30001| Fri Feb 22 12:22:46.317 [conn4] moveChunk number of documents: 51
m30000| Fri Feb 22 12:22:46.318 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 51.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:22:46.318 [initandlisten] connection accepted from 127.0.0.1:47996 #5 (5 connections now open)
m30000| Fri Feb 22 12:22:46.319 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/test.ns, filling with zeroes...
m30000| Fri Feb 22 12:22:46.319 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:22:46.319 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/test.0, filling with zeroes...
m30000| Fri Feb 22 12:22:46.319 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:22:46.320 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance20/test.1, filling with zeroes...
m30000| Fri Feb 22 12:22:46.320 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance20/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:22:46.323 [migrateThread] build index test.foo { _id: 1 }
m30000| Fri Feb 22 12:22:46.324 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:22:46.324 [migrateThread] info: creating collection test.foo on add index
m30001| Fri Feb 22 12:22:46.328 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:46.333 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:46.333 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:22:46.336 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:22:46.338 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:46.338 [conn4] moveChunk setting version to: 2|0||5127631160d3b8e20a182786
m30000| Fri Feb 22 12:22:46.338 [initandlisten] connection accepted from 127.0.0.1:37950 #10 (10 connections now open)
m30000| Fri Feb 22 12:22:46.339 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:46.346 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:22:46.346 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:22:46.346 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:46-51276316ef473c91ddb9cc4f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535766346), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 13 } }
m30000| Fri Feb 22 12:22:46.347 [initandlisten] connection accepted from 127.0.0.1:37224 #11 (11 connections now open)
m30001| Fri Feb 22 12:22:46.349 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:46.349 [conn4] moveChunk updating self version to: 2|1||5127631160d3b8e20a182786 through { _id: 51.0 } -> { _id: 103.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:22:46.349 [initandlisten] connection accepted from 127.0.0.1:45157 #12 (12 connections now open)
m30001| Fri Feb 22 12:22:46.350 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:46-51276316c1b0722a0249fbb1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535766350), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:46.350 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:46.350 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:46.350 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:46.350 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:46.350 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:46.350 [cleanupOldData-51276316c1b0722a0249fbb2] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 51.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:46.350 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' unlocked.
m30001| Fri Feb 22 12:22:46.350 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:46-51276316c1b0722a0249fbb3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535766350), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:46.350 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:46.351 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||5127631160d3b8e20a182786 based on: 1|80||5127631160d3b8e20a182786
m30999| Fri Feb 22 12:22:46.351 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:46.352 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
m30001| Fri Feb 22 12:22:46.370 [cleanupOldData-51276316c1b0722a0249fbb2] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:22:46.370 [cleanupOldData-51276316c1b0722a0249fbb2] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:22:46.373 [cleanupOldData-51276316c1b0722a0249fbb2] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m30999| Fri Feb 22 12:22:47.352 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:47.353 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 )
m30999| Fri Feb 22 12:22:47.353 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:47 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127631760d3b8e20a182788" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127631660d3b8e20a182787" } }
m30999| Fri Feb 22 12:22:47.353 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631760d3b8e20a182788
m30999| Fri Feb 22 12:22:47.353 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:47.353 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:47.353 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:47.354 [Balancer] shard0001 is unavailable
m30999| Fri Feb 22 12:22:47.354 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:47.354 [Balancer] donor : shard0001 chunks on 80
m30999| Fri Feb 22 12:22:47.354 [Balancer] receiver : shard0000 chunks on 1
m30999| Fri Feb 22 12:22:47.354 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:47.355 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_51.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5127631160d3b8e20a182786'), ns: "test.foo", min: { _id: 51.0 }, max: { _id: 103.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:47.355 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 51.0 }max: { _id: 103.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:47.355 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:47.355 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 51.0 }, max: { _id: 103.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_51.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:47.356 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' acquired, ts : 51276317c1b0722a0249fbb4
m30001| Fri Feb 22 12:22:47.356 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:47-51276317c1b0722a0249fbb5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535767356), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:47.357 [conn4] moveChunk request accepted at version 2|1||5127631160d3b8e20a182786
m30001| Fri Feb 22 12:22:47.357 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:47.357 [migrateThread] starting receiving-end of migration of chunk { _id: 51.0 } -> { _id: 103.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:47.363 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:47.363 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:22:47.364 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:22:47.367 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:47.367 [conn4] moveChunk setting version to: 3|0||5127631160d3b8e20a182786
m30000| Fri Feb 22 12:22:47.367 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:47.374 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:22:47.374 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:22:47.374 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:47-51276317ef473c91ddb9cc50", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535767374), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:47.378 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:47.378 [conn4] moveChunk updating self version to: 3|1||5127631160d3b8e20a182786 through { _id: 103.0 } -> { _id: 155.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:47.378 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:47-51276317c1b0722a0249fbb6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535767378), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:47.378 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:47.378 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:47.378 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:47.378 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:47.378 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:47.378 [cleanupOldData-51276317c1b0722a0249fbb7] (start) waiting to cleanup test.foo from { _id: 51.0 } -> { _id: 103.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:47.379 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' unlocked.
m30001| Fri Feb 22 12:22:47.379 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:47-51276317c1b0722a0249fbb8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535767379), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:47.379 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:47.380 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 3|1||5127631160d3b8e20a182786 based on: 2|1||5127631160d3b8e20a182786
m30999| Fri Feb 22 12:22:47.380 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:47.380 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
m30001| Fri Feb 22 12:22:47.398 [cleanupOldData-51276317c1b0722a0249fbb7] waiting to remove documents for test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:22:47.398 [cleanupOldData-51276317c1b0722a0249fbb7] moveChunk starting delete for: test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:22:47.401 [cleanupOldData-51276317c1b0722a0249fbb7] moveChunk deleted 52 documents for test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30999| Fri Feb 22 12:22:48.381 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:48.381 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 )
m30999| Fri Feb 22 12:22:48.382 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:22:48 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127631860d3b8e20a182789" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127631760d3b8e20a182788" } }
m30999| Fri Feb 22 12:22:48.382 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631860d3b8e20a182789
m30999| Fri Feb 22 12:22:48.382 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:48.382 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:48.382 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:48.384 [Balancer] shard0001 is unavailable
m30999| Fri Feb 22 12:22:48.384 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:48.384 [Balancer] donor : shard0001 chunks on 79
m30999| Fri Feb 22 12:22:48.384 [Balancer] receiver : shard0000 chunks on 2
m30999| Fri Feb 22 12:22:48.384 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:48.384 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_103.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5127631160d3b8e20a182786'), ns: "test.foo", min: { _id: 103.0 }, max: { _id: 155.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:48.384 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 103.0 }max: { _id: 155.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:48.384 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:48.385 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 103.0 }, max: { _id: 155.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_103.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:48.386 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' acquired, ts : 51276318c1b0722a0249fbb9
m30001| Fri Feb 22 12:22:48.386 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:48-51276318c1b0722a0249fbba", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535768386), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:48.387 [conn4] moveChunk request accepted at version 3|1||5127631160d3b8e20a182786
m30001| Fri Feb 22 12:22:48.387 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:48.387 [migrateThread] starting receiving-end of migration of chunk { _id: 103.0 } -> { _id: 155.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:48.397 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:48.397 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:22:48.397 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:48.398 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:22:48.407 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:48.408 [conn4] moveChunk setting version to: 4|0||5127631160d3b8e20a182786
m30000| Fri Feb 22 12:22:48.408 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:48.408 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:22:48.408 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:22:48.409 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:48-51276318ef473c91ddb9cc51", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535768408), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:48.418 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:48.418 [conn4] moveChunk updating self version to: 4|1||5127631160d3b8e20a182786 through { _id: 155.0 } -> { _id: 207.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:48.419 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:48-51276318c1b0722a0249fbbb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535768419), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:48.419 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:48.419 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:48.419 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:48.419 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:48.419 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:48.419 [cleanupOldData-51276318c1b0722a0249fbbc] (start) waiting to cleanup test.foo from { _id: 103.0 } -> { _id: 155.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:48.419 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' unlocked.
m30001| Fri Feb 22 12:22:48.419 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:48-51276318c1b0722a0249fbbd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535768419), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:48.419 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:48.420 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 4|1||5127631160d3b8e20a182786 based on: 3|1||5127631160d3b8e20a182786
m30999| Fri Feb 22 12:22:48.421 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:48.421 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
m30001| Fri Feb 22 12:22:48.439 [cleanupOldData-51276318c1b0722a0249fbbc] waiting to remove documents for test.foo from { _id: 103.0 } -> { _id: 155.0 } m30001| Fri Feb 22 12:22:48.439 [cleanupOldData-51276318c1b0722a0249fbbc] moveChunk starting delete for: test.foo from { _id: 103.0 } -> { _id: 155.0 } m30001| Fri Feb 22 12:22:48.443 [cleanupOldData-51276318c1b0722a0249fbbc] moveChunk deleted 52 documents for test.foo from { _id: 103.0 } -> { _id: 155.0 } m30999| Fri Feb 22 12:22:49.422 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:22:49.422 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 ) m30999| Fri Feb 22 12:22:49.422 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:22:49 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127631960d3b8e20a18278a" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127631860d3b8e20a182789" } } m30999| Fri Feb 22 12:22:49.423 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631960d3b8e20a18278a m30999| Fri Feb 22 12:22:49.423 [Balancer] *** start balancing round m30999| Fri Feb 22 12:22:49.423 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:22:49.423 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:22:49.425 [Balancer] shard0001 is unavailable m30999| Fri Feb 22 12:22:49.425 [Balancer] collection : test.foo m30999| Fri Feb 22 12:22:49.425 [Balancer] donor : shard0001 chunks on 78 m30999| Fri Feb 22 12:22:49.425 [Balancer] receiver : shard0000 chunks on 3 m30999| Fri Feb 
22 12:22:49.425 [Balancer] threshold : 2 m30999| Fri Feb 22 12:22:49.425 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_155.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5127631160d3b8e20a182786'), ns: "test.foo", min: { _id: 155.0 }, max: { _id: 207.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:22:49.425 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 155.0 }max: { _id: 207.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:22:49.425 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:22:49.426 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 155.0 }, max: { _id: 207.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_155.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:22:49.427 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' acquired, ts : 51276319c1b0722a0249fbbe m30001| Fri Feb 22 12:22:49.427 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:49-51276319c1b0722a0249fbbf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535769427), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:49.428 [conn4] moveChunk request accepted at version 4|1||5127631160d3b8e20a182786 m30001| Fri Feb 22 12:22:49.428 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:22:49.428 [migrateThread] starting receiving-end of migration of chunk { _id: 155.0 } -> { _id: 207.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 
12:22:49.437 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:22:49.438 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30001| Fri Feb 22 12:22:49.438 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:22:49.439 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30001| Fri Feb 22 12:22:49.448 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:22:49.449 [conn4] moveChunk setting version to: 5|0||5127631160d3b8e20a182786 m30000| Fri Feb 22 12:22:49.449 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:22:49.449 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30000| Fri Feb 22 12:22:49.449 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m30000| Fri Feb 22 12:22:49.449 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:49-51276319ef473c91ddb9cc52", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535769449), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:22:49.459 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: 
"localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:22:49.459 [conn4] moveChunk updating self version to: 5|1||5127631160d3b8e20a182786 through { _id: 207.0 } -> { _id: 259.0 } for collection 'test.foo' m30001| Fri Feb 22 12:22:49.460 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:49-51276319c1b0722a0249fbc0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535769460), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:22:49.460 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:49.460 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:49.460 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:22:49.460 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:22:49.460 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:22:49.460 [cleanupOldData-51276319c1b0722a0249fbc1] (start) waiting to cleanup test.foo from { _id: 155.0 } -> { _id: 207.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:22:49.460 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' unlocked. 
m30001| Fri Feb 22 12:22:49.461 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:49-51276319c1b0722a0249fbc2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535769461), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:49.461 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:49.462 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 6 version: 5|1||5127631160d3b8e20a182786 based on: 4|1||5127631160d3b8e20a182786
m30999| Fri Feb 22 12:22:49.463 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:49.463 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
m30001| Fri Feb 22 12:22:49.480 [cleanupOldData-51276319c1b0722a0249fbc1] waiting to remove documents for test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:22:49.480 [cleanupOldData-51276319c1b0722a0249fbc1] moveChunk starting delete for: test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:22:49.485 [cleanupOldData-51276319c1b0722a0249fbc1] moveChunk deleted 52 documents for test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30999| Fri Feb 22 12:22:50.464 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:50.464 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838 )
m30999| Fri Feb 22 12:22:50.464 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:50 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127631a60d3b8e20a18278b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127631960d3b8e20a18278a" } }
m30999| Fri Feb 22 12:22:50.465 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' acquired, ts : 5127631a60d3b8e20a18278b
m30999| Fri Feb 22 12:22:50.465 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:50.465 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:50.465 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:50.466 [Balancer] shard0001 is unavailable
m30999| Fri Feb 22 12:22:50.466 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:22:50.466 [Balancer] donor : shard0001 chunks on 77
m30999| Fri Feb 22 12:22:50.466 [Balancer] receiver : shard0000 chunks on 4
m30999| Fri Feb 22 12:22:50.466 [Balancer] threshold : 2
m30999| Fri Feb 22 12:22:50.466 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_207.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5127631160d3b8e20a182786'), ns: "test.foo", min: { _id: 207.0 }, max: { _id: 259.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:22:50.466 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 207.0 }max: { _id: 259.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:22:50.466 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:22:50.466 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 207.0 }, max: { _id: 259.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_207.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:22:50.467 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' acquired, ts : 5127631ac1b0722a0249fbc3
m30001| Fri Feb 22 12:22:50.467 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:50-5127631ac1b0722a0249fbc4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535770467), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:50.468 [conn4] moveChunk request accepted at version 5|1||5127631160d3b8e20a182786
m30001| Fri Feb 22 12:22:50.468 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:22:50.468 [migrateThread] starting receiving-end of migration of chunk { _id: 207.0 } -> { _id: 259.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:22:50.478 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:22:50.478 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:50.479 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:22:50.480 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:50.489 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:22:50.489 [conn4] moveChunk setting version to: 6|0||5127631160d3b8e20a182786
m30000| Fri Feb 22 12:22:50.489 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:22:50.490 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:22:50.490 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:22:50.490 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:50-5127631aef473c91ddb9cc53", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535770490), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:22:50.499 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:22:50.499 [conn4] moveChunk updating self version to: 6|1||5127631160d3b8e20a182786 through { _id: 259.0 } -> { _id: 311.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:22:50.500 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:50-5127631ac1b0722a0249fbc5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535770500), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:22:50.500 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:50.500 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:50.500 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:22:50.500 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:22:50.500 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:22:50.500 [cleanupOldData-5127631ac1b0722a0249fbc6] (start) waiting to cleanup test.foo from { _id: 207.0 } -> { _id: 259.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:22:50.500 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535766:18743' unlocked.
m30001| Fri Feb 22 12:22:50.500 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:50-5127631ac1b0722a0249fbc7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61410", time: new Date(1361535770500), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:22:50.501 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:22:50.502 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 6|1||5127631160d3b8e20a182786 based on: 5|1||5127631160d3b8e20a182786
m30999| Fri Feb 22 12:22:50.502 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:50.502 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535760:16838' unlocked.
m30001| Fri Feb 22 12:22:50.520 [cleanupOldData-5127631ac1b0722a0249fbc6] waiting to remove documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:50.520 [cleanupOldData-5127631ac1b0722a0249fbc6] moveChunk starting delete for: test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:22:50.523 [cleanupOldData-5127631ac1b0722a0249fbc6] moveChunk deleted 52 documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
{ "shard0000" : 5, "shard0001" : 76 }
m30999| Fri Feb 22 12:22:51.123 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Fri Feb 22 12:22:51.135 [conn4] end connection 127.0.0.1:61410 (4 connections now open)
m30001| Fri Feb 22 12:22:51.135 [conn3] end connection 127.0.0.1:53898 (4 connections now open)
m30000| Fri Feb 22 12:22:51.135 [conn3] end connection 127.0.0.1:54441 (11 connections now open)
m30000| Fri Feb 22 12:22:51.135 [conn6] end connection 127.0.0.1:63921 (11 connections now open)
m30000| Fri Feb 22 12:22:51.135 [conn7] end connection 127.0.0.1:49934 (11 connections now open)
m30000| Fri Feb 22 12:22:51.135 [conn5] end connection 127.0.0.1:42223 (11 connections now open)
Fri Feb 22 12:22:52.123 shell: stopped mongo program on port 30999
m30000| Fri Feb 22 12:22:52.124 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Fri Feb 22 12:22:52.124 [interruptThread] now exiting
m30000| Fri Feb 22 12:22:52.124 dbexit:
m30000| Fri Feb 22 12:22:52.124 [interruptThread] shutdown: going to close listening sockets...
m30000| Fri Feb 22 12:22:52.124 [interruptThread] closing listening socket: 12
m30000| Fri Feb 22 12:22:52.124 [interruptThread] closing listening socket: 13
m30000| Fri Feb 22 12:22:52.124 [interruptThread] closing listening socket: 14
m30000| Fri Feb 22 12:22:52.124 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Fri Feb 22 12:22:52.124 [interruptThread] shutdown: going to flush diaglog...
m30000| Fri Feb 22 12:22:52.124 [interruptThread] shutdown: going to close sockets...
m30000| Fri Feb 22 12:22:52.124 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Fri Feb 22 12:22:52.124 [interruptThread] shutdown: lock for final commit...
m30000| Fri Feb 22 12:22:52.124 [interruptThread] shutdown: final commit...
m30000| Fri Feb 22 12:22:52.124 [conn1] end connection 127.0.0.1:63918 (7 connections now open)
m30000| Fri Feb 22 12:22:52.124 [conn2] end connection 127.0.0.1:64392 (7 connections now open)
m30000| Fri Feb 22 12:22:52.124 [conn10] end connection 127.0.0.1:37950 (7 connections now open)
m30000| Fri Feb 22 12:22:52.124 [conn8] end connection 127.0.0.1:44299 (7 connections now open)
m30001| Fri Feb 22 12:22:52.124 [conn5] end connection 127.0.0.1:47996 (2 connections now open)
m30000| Fri Feb 22 12:22:52.124 [conn9] end connection 127.0.0.1:33100 (7 connections now open)
m30000| Fri Feb 22 12:22:52.124 [conn11] end connection 127.0.0.1:37224 (6 connections now open)
m30000| Fri Feb 22 12:22:52.124 [conn12] end connection 127.0.0.1:45157 (6 connections now open)
m30000| Fri Feb 22 12:22:52.162 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 12:22:52.163 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 12:22:52.163 [interruptThread] journalCleanup...
m30000| Fri Feb 22 12:22:52.163 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 12:22:52.163 dbexit: really exiting now
Fri Feb 22 12:22:53.124 shell: stopped mongo program on port 30000
m30001| Fri Feb 22 12:22:53.124 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 12:22:53.124 [interruptThread] now exiting
m30001| Fri Feb 22 12:22:53.124 dbexit:
m30001| Fri Feb 22 12:22:53.124 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 12:22:53.124 [interruptThread] closing listening socket: 15
m30001| Fri Feb 22 12:22:53.124 [interruptThread] closing listening socket: 16
m30001| Fri Feb 22 12:22:53.124 [interruptThread] closing listening socket: 17
m30001| Fri Feb 22 12:22:53.124 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 12:22:53.124 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 12:22:53.124 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 12:22:53.124 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 12:22:53.124 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 12:22:53.124 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 12:22:53.124 [conn1] end connection 127.0.0.1:34678 (1 connection now open)
m30001| Fri Feb 22 12:22:53.147 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 12:22:53.152 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 12:22:53.152 [interruptThread] journalCleanup...
m30001| Fri Feb 22 12:22:53.152 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:22:53.152 dbexit: really exiting now
Fri Feb 22 12:22:54.124 shell: stopped mongo program on port 30001
*** ShardingTest slow_sharding_balance2 completed successfully in 14.324 seconds ***
Fri Feb 22 12:22:54.155 [conn8] end connection 127.0.0.1:60221 (0 connections now open)
14.5092 seconds
Fri Feb 22 12:22:54.176 [initandlisten] connection accepted from 127.0.0.1:40926 #9 (1 connection now open)
Fri Feb 22 12:22:54.177 [conn9] end connection 127.0.0.1:40926 (0 connections now open)
*******************************************
Test : sharding_balance3.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance3.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance3.js";TestData.testFile = "sharding_balance3.js";TestData.testName = "sharding_balance3";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:22:54 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:22:54.392 [initandlisten] connection accepted from 127.0.0.1:56744 #10 (1 connection now open)
null
Resetting db path '/data/db/slow_sharding_balance30'
Fri Feb 22 12:22:54.403 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/slow_sharding_balance30 --setParameter enableTestCommands=1
m30000| Fri Feb 22 12:22:54.493 [initandlisten] MongoDB starting : pid=11349 port=30000 dbpath=/data/db/slow_sharding_balance30 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 12:22:54.493 [initandlisten]
m30000| Fri Feb 22 12:22:54.493 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 12:22:54.493 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 12:22:54.493 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 12:22:54.493 [initandlisten]
m30000| Fri Feb 22 12:22:54.493 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 12:22:54.493 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 12:22:54.493 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 12:22:54.493 [initandlisten] allocator: system
m30000| Fri Feb 22 12:22:54.493 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance30", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 12:22:54.494 [initandlisten] journal dir=/data/db/slow_sharding_balance30/journal
m30000| Fri Feb 22 12:22:54.494 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 12:22:54.511 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/local.ns, filling with zeroes...
m30000| Fri Feb 22 12:22:54.511 [FileAllocator] creating directory /data/db/slow_sharding_balance30/_tmp
m30000| Fri Feb 22 12:22:54.511 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:22:54.512 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/local.0, filling with zeroes...
m30000| Fri Feb 22 12:22:54.512 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:22:54.515 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 12:22:54.515 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 12:22:54.606 [initandlisten] connection accepted from 127.0.0.1:38532 #1 (1 connection now open)
Resetting db path '/data/db/slow_sharding_balance31'
Fri Feb 22 12:22:54.610 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/slow_sharding_balance31 --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:22:54.704 [initandlisten] MongoDB starting : pid=11350 port=30001 dbpath=/data/db/slow_sharding_balance31 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:22:54.705 [initandlisten]
m30001| Fri Feb 22 12:22:54.705 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:22:54.705 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 12:22:54.705 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:22:54.705 [initandlisten]
m30001| Fri Feb 22 12:22:54.705 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:22:54.705 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:22:54.705 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:22:54.705 [initandlisten] allocator: system
m30001| Fri Feb 22 12:22:54.705 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance31", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:22:54.705 [initandlisten] journal dir=/data/db/slow_sharding_balance31/journal
m30001| Fri Feb 22 12:22:54.705 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:22:54.720 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance31/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:22:54.721 [FileAllocator] creating directory /data/db/slow_sharding_balance31/_tmp
m30001| Fri Feb 22 12:22:54.721 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance31/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:22:54.721 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance31/local.0, filling with zeroes...
m30001| Fri Feb 22 12:22:54.721 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance31/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:22:54.724 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:22:54.724 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:22:54.812 [initandlisten] connection accepted from 127.0.0.1:34056 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:22:54.812 [initandlisten] connection accepted from 127.0.0.1:44285 #2 (2 connections now open)
ShardingTest slow_sharding_balance3 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:22:54.820 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -vvv --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:22:54.837 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:22:54.837 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=11351 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:22:54.837 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:22:54.837 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:22:54.837 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], vvv: true }
m30999| Fri Feb 22 12:22:54.837 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 12:22:54.837 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:54.838 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:54.839 [mongosMain] connected connection!
m30000| Fri Feb 22 12:22:54.839 [initandlisten] connection accepted from 127.0.0.1:44163 #3 (3 connections now open)
m30999| Fri Feb 22 12:22:54.839 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:22:54.840 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:54.840 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:22:54.840 [initandlisten] connection accepted from 127.0.0.1:63315 #4 (4 connections now open)
m30999| Fri Feb 22 12:22:54.840 [mongosMain] connected connection!
m30000| Fri Feb 22 12:22:54.841 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:22:54.852 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:22:54.852 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 12:22:54.852 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 12:22:54.853 [mongosMain] skew from remote server localhost:30000 found: -1
m30999| Fri Feb 22 12:22:54.853 [mongosMain] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
m30999| Fri Feb 22 12:22:54.853 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:22:54.853 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:22:54.853 [LockPinger] distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' about to ping.
m30999| Fri Feb 22 12:22:54.853 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 12:22:54.853 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:54 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "5127631ef72bccb1fac7733f" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m30000| Fri Feb 22 12:22:54.853 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/config.ns, filling with zeroes...
m30000| Fri Feb 22 12:22:54.854 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:22:54.854 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/config.0, filling with zeroes...
m30000| Fri Feb 22 12:22:54.854 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:22:54.854 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/config.1, filling with zeroes...
m30000| Fri Feb 22 12:22:54.854 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:22:54.857 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:22:54.858 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:22:54.859 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:22:54.860 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:54.861 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:22:54 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838', sleeping for 30000ms
m30000| Fri Feb 22 12:22:54.861 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:22:54.861 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:22:54.862 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 5127631ef72bccb1fac7733f
m30999| Fri Feb 22 12:22:54.864 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:22:54.864 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:22:54.864 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:54-5127631ef72bccb1fac77340", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535774864), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:22:54.864 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:22:54.865 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:54.865 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:22:54.866 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:22:54.866 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:54.867 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:22:54-5127631ef72bccb1fac77342", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535774867), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:22:54.867 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:22:54.867 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
m30000| Fri Feb 22 12:22:54.869 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:22:54.869 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:22:54.869 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:22:54.870 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:22:54.870 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:22:54.870 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:22:54.870 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 12:22:54.870 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 12:22:54.870 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:54.870 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 12:22:54.870 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 12:22:54.872 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:22:54.872 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 12:22:54.872 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 12:22:54.872 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:54.873 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 12:22:54.873 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:54.873 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 12:22:54.874 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:54.874 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 12:22:54.875 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:22:54.875 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 12:22:54.875 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 12:22:54.876 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:54.876 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:22:54.876 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:22:54
m30999| Fri Feb 22 12:22:54.876 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:22:54.876 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:54.877 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:22:54.877 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 12:22:54.877 [Balancer] connected connection!
m30000| Fri Feb 22 12:22:54.877 [initandlisten] connection accepted from 127.0.0.1:43448 #5 (5 connections now open)
m30000| Fri Feb 22 12:22:54.878 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:22:54.878 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:22:54.878 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:22:54.879 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:22:54.879 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:22:54 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127631ef72bccb1fac77344" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 12:22:54.879 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 5127631ef72bccb1fac77344
m30999| Fri Feb 22 12:22:54.879 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:22:54.879 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:22:54.880 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:22:54.880 [Balancer] no collections to balance
m30999| Fri Feb 22 12:22:54.880 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:22:54.880 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:22:54.880 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
m30999| Fri Feb 22 12:22:55.021 [mongosMain] connection accepted from 127.0.0.1:53304 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 12:22:55.024 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:22:55.025 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:22:55.026 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:55.027 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:22:55.027 [conn1] Request::process begin ns: admin.$cmd msg id: 1 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.027 [conn1] single query: admin.$cmd { addshard: "localhost:30000" } ntoreturn: -1 options : 0
m30999| Fri Feb 22 12:22:55.028 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
m30999| Fri Feb 22 12:22:55.029 [conn1] Request::process end ns: admin.$cmd msg id: 1 op: 2004 attempt: 0 2ms
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 12:22:55.030 [conn1] Request::process begin ns: admin.$cmd msg id: 2 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.030 [conn1] single query: admin.$cmd { addshard: "localhost:30001" } ntoreturn: -1 options : 0
m30999| Fri Feb 22 12:22:55.030 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:22:55.031 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:55.031 [conn1] connected connection!
m30001| Fri Feb 22 12:22:55.031 [initandlisten] connection accepted from 127.0.0.1:46750 #2 (2 connections now open)
m30999| Fri Feb 22 12:22:55.032 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
m30999| Fri Feb 22 12:22:55.033 [conn1] Request::process end ns: admin.$cmd msg id: 2 op: 2004 attempt: 0 2ms
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 12:22:55.034 [conn1] Request::process begin ns: admin.$cmd msg id: 3 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.034 [conn1] single query: admin.$cmd { enablesharding: "test" } ntoreturn: -1 options : 0
m30999| Fri Feb 22 12:22:55.034 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:22:55.035 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:22:55.035 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:22:55.035 [conn1] enabling sharding on: test
m30999| Fri Feb 22 12:22:55.035 [conn1] Request::process end ns: admin.$cmd msg id: 3 op: 2004 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.036 [conn1] Request::process begin ns: config.settings msg id: 4 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.036 [conn1] shard query: config.settings {}
m30999| Fri Feb 22 12:22:55.036 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.settings", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.036 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.036 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.036 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:55.036 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:55.036 [conn1] connected connection!
m30999| Fri Feb 22 12:22:55.036 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5127631ef72bccb1fac77343
m30000| Fri Feb 22 12:22:55.036 [initandlisten] connection accepted from 127.0.0.1:50465 #6 (6 connections now open)
m30999| Fri Feb 22 12:22:55.036 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 12:22:55.036 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 12:22:55.036 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5127631ef72bccb1fac77343'), authoritative: true }
m30999| Fri Feb 22 12:22:55.037 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
m30999| Fri Feb 22 12:22:55.037 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:22:55.037 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:55.037 [conn1] connected connection!
m30999| Fri Feb 22 12:22:55.037 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5127631ef72bccb1fac77343
m30001| Fri Feb 22 12:22:55.037 [initandlisten] connection accepted from 127.0.0.1:36104 #3 (3 connections now open)
m30999| Fri Feb 22 12:22:55.037 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 12:22:55.038 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Fri Feb 22 12:22:55.038 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5127631ef72bccb1fac77343'), authoritative: true }
m30999| Fri Feb 22 12:22:55.038 [conn1] initial sharding result : { initialized: true, ok: 1.0 }
m30999| Fri Feb 22 12:22:55.038 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.038 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.038 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.038 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "chunksize", value: 1 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.038 [conn1] Request::process end ns: config.settings msg id: 4 op: 2004 attempt: 0 2ms
{ "_id" : "chunksize", "value" : 1 }
m30999| Fri Feb 22 12:22:55.039 [conn1] Request::process begin ns: test.foo msg id: 5 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.039 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 5 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 6 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 6 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 7 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30001| Fri Feb 22 12:22:55.040 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance31/test.ns, filling with zeroes...
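The shell-side setup traced in the log is: add each shard (which assigns sequential `shardNNNN` ids), enable sharding on `test` (placing the previously unknown database on a "best" primary shard first), then read the deliberately tiny 1 MB chunk size from `config.settings` so the test splits and balances quickly. Purely as an illustration, the same bookkeeping against an in-memory stand-in for the config database (all names here are illustrative, and the primary-shard pick is simplified to "last shard added" rather than mongos's mapped-memory heuristic):

```python
# Stand-in for the three config collections touched in the log above.
config = {
    "shards": {},
    "databases": {},
    "settings": {"chunksize": {"_id": "chunksize", "value": 1}},  # 1 MB, as in the test
}

def add_shard(host):
    """Assign the next shardNNNN id, as in 'going to add shard: ...'."""
    shard_id = "shard%04d" % len(config["shards"])
    config["shards"][shard_id] = {"_id": shard_id, "host": host}
    return {"shardAdded": shard_id, "ok": 1}

def enable_sharding(db):
    """Place an unknown database on a primary shard, then mark it partitioned."""
    if db not in config["databases"]:
        primary = sorted(config["shards"])[-1]  # simplified stand-in for the real heuristic
        config["databases"][db] = {"_id": db, "primary": primary, "partitioned": False}
    config["databases"][db]["partitioned"] = True
    return {"ok": 1}

add_shard("localhost:30000")   # -> shard0000
add_shard("localhost:30001")   # -> shard0001
enable_sharding("test")        # test lands on shard0001, matching the log
```

The small chunk size is the whole point of the `slow_sharding_balance` tests: with 1 MB chunks, the bulk inserts into `test.foo` that follow force many splits, giving the balancer something to move.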
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 7 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 8 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 8 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 9 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 9 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 10 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 10 op: 2002 attempt: 0 0ms
m30001| Fri Feb 22 12:22:55.040 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance31/test.ns, size: 16MB, took 0 secs
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 11 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 11 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 12 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30001| Fri Feb 22 12:22:55.040 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance31/test.0, filling with zeroes...
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 12 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 13 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 13 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 14 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 14 op: 2002 attempt: 0 0ms
m30001| Fri Feb 22 12:22:55.040 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance31/test.0, size: 64MB, took 0 secs
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 15 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process end ns: test.foo msg id: 15 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.040 [conn1] Request::process begin ns: test.foo msg id: 16 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.040 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.041 [conn1] Request::process end ns: test.foo msg id: 16 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.041 [conn1] Request::process begin ns: test.foo msg id: 17 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.041 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.041 [conn1] Request::process end ns: test.foo msg id: 17 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.041 [conn1] Request::process begin ns: test.foo msg id: 18 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.041 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.041 [conn1] Request::process end ns: test.foo msg id: 18 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.041 [conn1] Request::process begin ns: test.foo msg id: 19 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.041 [conn1] write: test.foo
m30001| Fri Feb 22 12:22:55.041 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance31/test.1, filling with zeroes...
m30001| Fri Feb 22 12:22:55.041 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance31/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:22:55.044 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 12:22:55.046 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 19 op: 2002 attempt: 0 7ms
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 20 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 20 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 21 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 21 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 22 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 22 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 23 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 23 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 24 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 24 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.048 [conn1]
Request::process begin ns: test.foo msg id: 25 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 25 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 26 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 26 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 27 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 27 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 28 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process end ns: test.foo msg id: 28 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.048 [conn1] Request::process begin ns: test.foo msg id: 29 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.048 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 29 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 30 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 30 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 31 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 31 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 32 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 32 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 33 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 33 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 34 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 34 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 35 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 35 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 36 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 36 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 37 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 37 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 38 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 38 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 39 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process 
end ns: test.foo msg id: 39 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 40 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 40 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 41 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 41 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 42 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process end ns: test.foo msg id: 42 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.049 [conn1] Request::process begin ns: test.foo msg id: 43 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.049 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 43 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 44 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 44 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 45 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 45 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 46 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 46 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] 
Request::process begin ns: test.foo msg id: 47 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 47 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 48 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 48 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 49 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 49 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 50 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process end ns: test.foo msg id: 50 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.050 [conn1] Request::process begin ns: test.foo msg id: 51 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.050 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.051 [conn1] Request::process end ns: test.foo msg id: 51 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.051 [conn1] Request::process begin ns: test.foo msg id: 52 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.051 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.051 [conn1] Request::process end ns: test.foo msg id: 52 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.051 [conn1] Request::process begin ns: test.foo msg id: 53 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.051 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.051 [conn1] Request::process end ns: test.foo msg id: 53 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 54 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 54 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 55 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 55 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 56 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 56 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 57 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 57 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 58 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 58 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 59 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 59 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 60 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 60 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 61 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process 
end ns: test.foo msg id: 61 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 62 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 62 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 63 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 63 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 64 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 64 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 65 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 65 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 66 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 66 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 67 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process end ns: test.foo msg id: 67 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.052 [conn1] Request::process begin ns: test.foo msg id: 68 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.052 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 68 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] 
Request::process begin ns: test.foo msg id: 69 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 69 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 70 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 70 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 71 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 71 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 72 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 72 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 73 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 73 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 74 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 74 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 75 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 75 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 76 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 76 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 77 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 77 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 78 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 78 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 79 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 79 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 80 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process end ns: test.foo msg id: 80 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.053 [conn1] Request::process begin ns: test.foo msg id: 81 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.053 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process end ns: test.foo msg id: 81 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process begin ns: test.foo msg id: 82 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.055 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process end ns: test.foo msg id: 82 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process begin ns: test.foo msg id: 83 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.055 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process 
end ns: test.foo msg id: 83 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process begin ns: test.foo msg id: 84 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.055 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.055 [conn1] Request::process end ns: test.foo msg id: 84 op: 2002 attempt: 0 0ms
[... identical begin / write / end trace repeats for msg ids 85 through 235: each an op 2002 write to test.foo on conn1, timestamps 12:22:55.055-12:22:55.072, each completing in 0-1ms ...]
m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 236 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 236 op:
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 237 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 237 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 238 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 238 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 239 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 239 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 240 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 240 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 241 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 241 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 242 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 242 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 243 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 243 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process 
begin ns: test.foo msg id: 244 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 244 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process begin ns: test.foo msg id: 245 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.072 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.072 [conn1] Request::process end ns: test.foo msg id: 245 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 246 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 246 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 247 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 247 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 248 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 248 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 249 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 249 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 250 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 250 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 251 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 251 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 252 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 252 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 253 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 253 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 254 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.073 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process end ns: test.foo msg id: 254 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.073 [conn1] Request::process begin ns: test.foo msg id: 255 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 255 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 256 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 256 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 257 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 257 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 258 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] 
Request::process end ns: test.foo msg id: 258 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 259 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 259 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 260 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 260 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 261 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 261 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 262 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 262 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 263 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process end ns: test.foo msg id: 263 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.074 [conn1] Request::process begin ns: test.foo msg id: 264 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.074 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 264 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 265 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 265 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 266 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 266 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 267 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 267 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 268 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 268 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 269 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 269 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 270 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 270 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 271 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 271 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 272 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 272 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process begin ns: test.foo msg id: 273 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.075 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.075 [conn1] Request::process end ns: test.foo msg id: 273 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 274 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 274 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 275 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 275 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 276 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 276 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 277 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 277 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 278 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 278 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 279 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 279 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 280 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 280 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 281 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 281 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 282 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process end ns: test.foo msg id: 282 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.076 [conn1] Request::process begin ns: test.foo msg id: 283 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.076 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 283 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 284 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 284 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 285 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 285 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 286 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 286 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 287 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 287 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 288 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 288 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 289 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 289 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 290 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 290 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 291 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process end ns: test.foo msg id: 291 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.077 [conn1] Request::process begin ns: test.foo msg id: 292 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.077 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 292 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 293 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 293 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 294 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 294 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process 
begin ns: test.foo msg id: 295 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 295 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 296 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 296 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 297 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 297 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 298 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 298 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 299 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 299 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 300 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 300 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 301 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 301 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 302 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.078 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process end ns: test.foo msg id: 302 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.078 [conn1] Request::process begin ns: test.foo msg id: 303 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 303 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 304 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 304 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 305 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 305 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 306 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 306 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 307 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 307 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 308 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 308 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 309 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] 
Request::process end ns: test.foo msg id: 309 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 310 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 310 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 311 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 311 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 312 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 312 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 313 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 313 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 314 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process end ns: test.foo msg id: 314 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.079 [conn1] Request::process begin ns: test.foo msg id: 315 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.079 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 315 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 316 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 316 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 317 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 317 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 318 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 318 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 319 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 319 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 320 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 320 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 321 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process end ns: test.foo msg id: 321 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.081 [conn1] Request::process begin ns: test.foo msg id: 322 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.081 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 322 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 323 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 323 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 324 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 324 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 325 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 325 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 326 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 326 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 327 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 327 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 328 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 328 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 329 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 329 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 330 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 330 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.082 [conn1] Request::process begin ns: test.foo msg id: 331 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.082 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.082 [conn1] Request::process end ns: test.foo msg id: 331 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.083 [conn1] Request::process begin ns: test.foo msg id: 332 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.083 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.083 [conn1] Request::process end ns: test.foo msg id: 332 op: 2002 attempt: 0 0ms
[... identical begin/write/end trace triplets repeated for msg ids 333-483: all op: 2002 writes to test.foo on conn1, timestamps 12:22:55.083 through 12:22:55.098, each completing in 0ms except msg ids 369, 398, 428, and 457 at 1ms ...]
m30999| Fri Feb 22 12:22:55.098 [conn1] Request::process begin ns: test.foo msg id: 484 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.098 [conn1] write: test.foo m30999|
Fri Feb 22 12:22:55.098 [conn1] Request::process end ns: test.foo msg id: 484 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.098 [conn1] Request::process begin ns: test.foo msg id: 485 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.098 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.098 [conn1] Request::process end ns: test.foo msg id: 485 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.098 [conn1] Request::process begin ns: test.foo msg id: 486 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.098 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 486 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 487 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 487 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 488 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 488 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 489 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 489 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 490 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 490 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 491 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 491 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 492 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 492 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 493 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 493 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 494 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 494 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 495 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 495 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 496 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 496 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 497 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 497 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process begin ns: test.foo msg id: 498 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 498 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process 
begin ns: test.foo msg id: 499 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.100 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.100 [conn1] Request::process end ns: test.foo msg id: 499 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 500 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 500 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 501 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 501 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 502 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 502 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 503 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 503 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 504 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 504 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 505 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 505 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 506 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 506 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 507 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 507 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 508 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 508 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 509 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 509 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 510 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 510 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 511 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 511 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 512 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process end ns: test.foo msg id: 512 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.101 [conn1] Request::process begin ns: test.foo msg id: 513 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.101 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.101 [conn1] 
Request::process end ns: test.foo msg id: 513 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process begin ns: test.foo msg id: 514 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.102 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process end ns: test.foo msg id: 514 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process begin ns: test.foo msg id: 515 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.102 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process end ns: test.foo msg id: 515 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process begin ns: test.foo msg id: 516 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.102 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process end ns: test.foo msg id: 516 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.102 [conn1] Request::process begin ns: test.foo msg id: 517 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.102 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 517 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 518 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 518 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 519 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 519 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 520 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 520 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 521 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 521 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 522 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 522 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 523 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 523 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process begin ns: test.foo msg id: 524 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.103 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.103 [conn1] Request::process end ns: test.foo msg id: 524 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 525 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 525 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 526 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 526 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 527 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 527 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 528 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 528 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 529 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 529 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 530 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 530 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 531 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 531 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 532 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 532 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 533 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 533 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 534 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 534 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 535 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 535 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 536 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 536 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 537 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 537 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 538 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 538 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 539 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process end ns: test.foo msg id: 539 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.104 [conn1] Request::process begin ns: test.foo msg id: 540 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.104 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process end ns: test.foo msg id: 540 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process begin ns: test.foo msg id: 541 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.105 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process end ns: test.foo msg id: 541 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process begin ns: test.foo msg id: 542 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.105 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process end ns: test.foo msg id: 542 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process begin ns: test.foo msg id: 543 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.105 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process end ns: test.foo msg id: 543 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process begin ns: test.foo msg id: 544 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.105 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process end ns: test.foo msg id: 544 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.105 [conn1] Request::process begin ns: test.foo msg id: 545 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.105 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process end ns: test.foo msg id: 545 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process begin ns: test.foo msg id: 546 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.106 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process end ns: test.foo msg id: 546 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process begin ns: test.foo msg id: 547 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.106 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process end ns: test.foo msg id: 547 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process begin ns: test.foo msg id: 548 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.106 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process end ns: test.foo msg id: 548 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process begin ns: test.foo msg id: 549 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.106 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process end ns: test.foo msg id: 549 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process 
begin ns: test.foo msg id: 550 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.106 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process end ns: test.foo msg id: 550 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.106 [conn1] Request::process begin ns: test.foo msg id: 551 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.106 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 551 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 552 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 552 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 553 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 553 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 554 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 554 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 555 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 555 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 556 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 556 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 557 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 557 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 558 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 558 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 559 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 559 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 560 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 560 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 561 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 561 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 562 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 562 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 563 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 563 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 564 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] 
Request::process end ns: test.foo msg id: 564 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 565 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 565 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process begin ns: test.foo msg id: 566 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.107 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.107 [conn1] Request::process end ns: test.foo msg id: 566 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 567 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 567 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 568 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 568 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 569 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 569 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 570 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 570 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 571 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 571 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 572 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 572 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 573 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process end ns: test.foo msg id: 573 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.108 [conn1] Request::process begin ns: test.foo msg id: 574 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.108 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process end ns: test.foo msg id: 574 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process begin ns: test.foo msg id: 575 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.109 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process end ns: test.foo msg id: 575 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process begin ns: test.foo msg id: 576 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.109 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process end ns: test.foo msg id: 576 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process begin ns: test.foo msg id: 577 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.109 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process end ns: test.foo msg id: 577 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.109 [conn1] Request::process begin ns: test.foo msg id: 578 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 578 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 579 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 579 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 580 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 580 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 581 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 581 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 582 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 582 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 583 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 583 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 584 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 584 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 585 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 585 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 586 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 586 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 587 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 587 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 588 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 588 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 589 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 589 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 590 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 590 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 591 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 591 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 592 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.110 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process end ns: test.foo msg id: 592 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.110 [conn1] Request::process begin ns: test.foo msg id: 593 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 593 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 594 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 594 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 595 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 595 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 596 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 596 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 597 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 597 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 598 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 598 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 599 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 599 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 600 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process end ns: test.foo msg id: 600 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.111 [conn1] Request::process begin ns: test.foo msg id: 601 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.111 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.112 [conn1] Request::process end ns: test.foo msg id: 601 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.112 [conn1] Request::process begin ns: test.foo msg id: 602 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.112 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.112 [conn1] Request::process end ns: test.foo msg id: 602 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.112 [conn1] Request::process begin ns: test.foo msg id: 603 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.112 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 603 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 604 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 604 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 605 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 605 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 606 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 606 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 607 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 607 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 608 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 608 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 609 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 609 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 610 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 610 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 611 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 611 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 612 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 612 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 613 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 613 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 614 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 614 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 615 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 615 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 616 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 616 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 617 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 617 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process begin ns: test.foo msg id: 618 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.113 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.113 [conn1] Request::process end ns: test.foo msg id: 618 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 619 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 619 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 620 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 620 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 621 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 621 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 622 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 622 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 623 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 623 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 624 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 624 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 625 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 625 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 626 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 626 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 627 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 627 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 628 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 628 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 629 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process end ns: test.foo msg id: 629 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.114 [conn1] Request::process begin ns: test.foo msg id: 630 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.114 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 630 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 631 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 631 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 632 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 632 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 633 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 633 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 634 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 634 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 635 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 635 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 636 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 636 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 637 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 637 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 638 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 638 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 639 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 639 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 640 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 640 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process begin ns: test.foo msg id: 641 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.116 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.116 [conn1] Request::process end ns: test.foo msg id: 641 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 642 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 642 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 643 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 643 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 644 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 644 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 645 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 645 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 646 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 646 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 647 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 647 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 648 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 648 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 649 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 649 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 650 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 650 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 651 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 651 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 652 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 652 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 653 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 653 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 654 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 654 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 655 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 655 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 656 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.117 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process end ns: test.foo msg id: 656 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.117 [conn1] Request::process begin ns: test.foo msg id: 657 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.118 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.118 [conn1] Request::process end ns: test.foo msg id: 657 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.118 [conn1] Request::process begin ns: test.foo msg id: 658 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.118 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 658 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 659 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 659 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 660 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 660 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 661 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 661 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 662 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 662 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 663 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 663 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 664 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 664 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 665 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 665 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 666 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 666 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process begin ns: test.foo msg id: 667 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.119 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.119 [conn1] Request::process end ns: test.foo msg id: 667 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 668 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 668 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 669 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 669 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 670 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 670 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 671 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 671 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 672 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 672 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 673 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 673 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 674 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 674 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 675 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 675 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 676 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 676 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 677 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 677 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 678 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 678 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 679 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 679 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 680 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 680 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 681 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 681 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 682 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process end ns: test.foo msg id: 682 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.120 [conn1] Request::process begin ns: test.foo msg id: 683 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.120 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process end ns: test.foo msg id: 683 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process begin ns: test.foo msg id: 684 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.121 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process end ns: test.foo msg id: 684 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process begin ns: test.foo msg id: 685 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.121 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process end ns: test.foo msg id: 685 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process begin ns: test.foo msg id: 686 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.121 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process end ns: test.foo msg id: 686 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.121 [conn1] Request::process begin ns: test.foo msg id: 687 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.121 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.122 [conn1] Request::process end ns: test.foo msg id: 687 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.122 [conn1] Request::process begin ns: test.foo msg id: 688 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.122 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.122 [conn1] Request::process end ns: test.foo msg id: 688 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 689 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 689 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 690 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 690 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 691 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 691 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 692 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 692 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 693 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 693 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 694 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 694 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 695 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 695 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 696 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 696 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 697 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 697 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 698 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 698 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 699 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 699 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 700 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 700 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 701 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 701 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 702 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 702 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process begin ns: test.foo msg id: 703 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.123 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.123 [conn1] Request::process end ns: test.foo msg id: 703 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 704 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 704 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 705 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 705 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 706 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 706 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 707 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 707 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 708 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 708 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 709 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 709 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 710 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 710 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 711 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process end ns: test.foo msg id: 711 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.124 [conn1] Request::process begin ns: test.foo msg id: 712 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.124 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 712 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 713 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 713 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 714 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 714 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 715 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 715 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 716 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 716 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 717 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 717 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 718 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 718 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 719 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 719 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 720 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 720 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 721 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 721 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 722 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 722 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 723 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 723 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 724 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 724 op: 2002 attempt: 0 0ms
m30999| Fri
Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 725 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process end ns: test.foo msg id: 725 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.126 [conn1] Request::process begin ns: test.foo msg id: 726 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.126 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 726 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 727 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 727 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 728 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 728 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 729 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 729 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 730 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 730 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 731 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 731 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 732 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 732 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 733 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 733 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 734 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 734 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 735 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process end ns: test.foo msg id: 735 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.127 [conn1] Request::process begin ns: test.foo msg id: 736 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.127 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 736 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 737 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 737 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 738 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 738 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 739 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 739 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 740 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 740 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 741 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 741 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 742 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 742 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 743 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 743 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 744 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 744 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 745 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 745 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 746 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 746 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 747 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 747 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 748 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 748 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 749 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process end ns: test.foo msg id: 749 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.129 [conn1] Request::process begin ns: test.foo msg id: 750 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.129 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.130 [conn1] Request::process end ns: test.foo msg id: 750 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.130 [conn1] Request::process begin ns: test.foo msg id: 751 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.130 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.130 [conn1] Request::process end ns: test.foo msg id: 751 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.130 [conn1] Request::process begin ns: test.foo msg id: 752 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.130 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.130 [conn1] Request::process end ns: test.foo msg id: 752 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.130 [conn1] Request::process begin ns: test.foo msg id: 753 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.130 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 753 op: 2002 attempt: 0 47ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process 
begin ns: test.foo msg id: 754 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 754 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 755 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 755 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 756 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 756 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 757 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 757 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 758 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 758 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 759 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 759 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 760 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 760 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 761 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 761 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 762 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 762 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 763 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 763 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 764 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 764 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process begin ns: test.foo msg id: 765 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.178 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.178 [conn1] Request::process end ns: test.foo msg id: 765 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 766 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 766 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 767 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 767 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 768 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] 
Request::process end ns: test.foo msg id: 768 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 769 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 769 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 770 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 770 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 771 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 771 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 772 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 772 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 773 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 773 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 774 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 774 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 775 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.181 [conn1] Request::process end ns: test.foo msg id: 775 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.181 [conn1] Request::process begin ns: test.foo msg id: 776 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.181 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 776 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 777 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 777 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 778 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 778 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 779 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 779 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 780 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 780 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 781 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 781 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 782 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 782 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 783 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 783 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 784 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 784 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 785 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 785 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 786 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 786 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 787 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 787 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 788 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 788 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 789 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 789 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 790 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.182 [conn1] Request::process end ns: test.foo msg id: 790 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.182 [conn1] Request::process begin ns: test.foo msg id: 791 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.182 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process end ns: test.foo msg id: 791 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process begin ns: test.foo msg id: 792 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.183 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process end ns: test.foo msg id: 792 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process begin ns: test.foo msg id: 793 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.183 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process end ns: test.foo msg id: 793 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process begin ns: test.foo msg id: 794 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.183 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process end ns: test.foo msg id: 794 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process begin ns: test.foo msg id: 795 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.183 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process end ns: test.foo msg id: 795 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.183 [conn1] Request::process begin ns: test.foo msg id: 796 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.183 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process end ns: test.foo msg id: 796 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process begin ns: test.foo msg id: 797 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.184 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process end ns: test.foo msg id: 797 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process begin ns: test.foo msg id: 798 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.184 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process end ns: test.foo msg id: 798 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process begin ns: test.foo msg id: 799 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.184 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process end ns: test.foo msg id: 799 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process begin ns: test.foo msg id: 800 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.184 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process end ns: test.foo msg id: 800 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process begin ns: test.foo msg id: 801 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.184 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process end ns: test.foo msg id: 801 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.184 [conn1] Request::process begin ns: test.foo msg id: 802 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.184 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 802 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 803 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 803 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 804 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 804 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process 
begin ns: test.foo msg id: 805 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 805 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 806 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 806 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 807 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 807 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 808 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 808 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 809 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 809 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 810 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 810 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 811 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 811 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 812 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 812 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 813 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 813 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 814 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 814 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 815 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 815 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 816 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process end ns: test.foo msg id: 816 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.185 [conn1] Request::process begin ns: test.foo msg id: 817 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.185 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 817 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 818 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 818 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 819 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] 
Request::process end ns: test.foo msg id: 819 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 820 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 820 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 821 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 821 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 822 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 822 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 823 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 823 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 824 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process end ns: test.foo msg id: 824 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.186 [conn1] Request::process begin ns: test.foo msg id: 825 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.186 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.187 [conn1] Request::process end ns: test.foo msg id: 825 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.187 [conn1] Request::process begin ns: test.foo msg id: 826 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.187 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.187 [conn1] Request::process end ns: test.foo msg id: 826 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.187 [conn1] Request::process begin ns: test.foo msg id: 827 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.187 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 827 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 828 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 828 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 829 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 829 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 830 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 830 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 831 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 831 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 832 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 832 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 833 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 833 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 834 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 834 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 835 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 835 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 836 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 836 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 837 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 837 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 838 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 838 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 839 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 839 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 840 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 840 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 841 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 841 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process begin ns: test.foo msg id: 842 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.188 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.188 [conn1] Request::process end ns: test.foo msg id: 842 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 843 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 843 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 844 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 844 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 845 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 845 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 846 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 846 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 847 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 847 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 848 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 848 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 849 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 849 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 850 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 850 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 851 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 851 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 852 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process end ns: test.foo msg id: 852 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.189 [conn1] Request::process begin ns: test.foo msg id: 853 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.189 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.190 [conn1] Request::process end ns: test.foo msg id: 853 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 854 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 854 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 855 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 855 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process 
begin ns: test.foo msg id: 856 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 856 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 857 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 857 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 858 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 858 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 859 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 859 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 860 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 860 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 861 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 861 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 862 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 862 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 863 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 863 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 864 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 864 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 865 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 865 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 866 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 866 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 867 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.191 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process end ns: test.foo msg id: 867 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.191 [conn1] Request::process begin ns: test.foo msg id: 868 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 868 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 869 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 869 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 870 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] 
Request::process end ns: test.foo msg id: 870 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 871 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 871 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 872 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 872 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 873 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 873 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 874 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 874 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 875 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 875 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 876 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 876 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 877 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 877 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 878 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 878 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 879 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 879 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 880 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 880 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 881 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 881 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 882 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process end ns: test.foo msg id: 882 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.192 [conn1] Request::process begin ns: test.foo msg id: 883 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.192 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 883 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 884 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 884 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 885 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 885 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 886 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 886 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 887 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 887 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 888 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 888 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 889 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 889 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 890 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 890 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 891 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 891 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 892 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 892 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 893 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 893 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process begin ns: test.foo msg id: 894 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.194 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.194 [conn1] Request::process end ns: test.foo msg id: 894 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 895 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 895 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 896 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 896 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 897 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 897 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 898 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 898 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 899 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 899 op: 
2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 900 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 900 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 901 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 901 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 902 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 902 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 903 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 903 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 904 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 904 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 905 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 905 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 906 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 906 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process 
begin ns: test.foo msg id: 907 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 907 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 908 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 908 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 909 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.195 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process end ns: test.foo msg id: 909 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.195 [conn1] Request::process begin ns: test.foo msg id: 910 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.196 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.196 [conn1] Request::process end ns: test.foo msg id: 910 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.196 [conn1] Request::process begin ns: test.foo msg id: 911 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.196 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 911 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 912 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 912 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 913 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 913 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 914 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 914 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 915 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 915 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 916 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 916 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 917 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 917 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 918 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 918 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 919 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 919 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 920 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process end ns: test.foo msg id: 920 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.197 [conn1] Request::process begin ns: test.foo msg id: 921 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.197 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.197 [conn1] 
Request::process end ns: test.foo msg id: 921 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 922 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 922 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 923 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 923 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 924 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 924 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 925 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 925 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 926 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 926 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 927 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 927 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 928 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 928 op: 2002 attempt: 0 0ms m30999| Fri 
Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 929 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 929 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 930 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 930 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 931 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 931 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 932 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 932 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 933 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 933 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 934 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process end ns: test.foo msg id: 934 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.198 [conn1] Request::process begin ns: test.foo msg id: 935 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.198 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 935 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 936 op: 
2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 936 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 937 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 937 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 938 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 938 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 939 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 939 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 940 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 940 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 941 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 941 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 942 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 942 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 943 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 943 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 944 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 944 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 945 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 945 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 946 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 946 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 947 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 947 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process begin ns: test.foo msg id: 948 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.200 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.200 [conn1] Request::process end ns: test.foo msg id: 948 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.201 [conn1] Request::process begin ns: test.foo msg id: 949 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.201 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.201 [conn1] Request::process end ns: test.foo msg id: 949 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.201 [conn1] Request::process begin ns: test.foo msg id: 950 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.201 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.201 [conn1] Request::process end ns: test.foo msg id: 950 op: 
2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.201 [conn1] Request::process begin ns: test.foo msg id: 951 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.201 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.201 [conn1] Request::process end ns: test.foo msg id: 951 op: 2002 attempt: 0 0ms
[... identical begin/write/end entries for msg ids 952-1094 condensed: all op: 2002 (OP_INSERT) writes to test.foo on conn1, each completing in 0-1ms, timestamps 12:22:55.201 through 12:22:55.217 ...]
m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1095 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1095
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1096 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1096 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1097 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1097 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1098 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1098 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1099 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1099 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1100 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1100 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1101 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1101 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process begin ns: test.foo msg id: 1102 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.217 [conn1] Request::process end ns: test.foo msg id: 1102 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.217 [conn1] 
Request::process begin ns: test.foo msg id: 1103 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.217 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1103 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1104 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1104 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1105 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1105 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1106 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1106 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1107 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1107 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1108 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1108 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1109 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1109 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1110 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1110 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1111 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1111 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1112 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process end ns: test.foo msg id: 1112 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.219 [conn1] Request::process begin ns: test.foo msg id: 1113 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.219 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1113 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1114 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1114 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1115 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1115 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1116 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1116 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1117 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1117 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1118 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1118 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1119 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1119 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1120 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1120 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1121 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1121 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1122 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1122 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1123 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1123 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1124 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo 
msg id: 1124 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1125 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1125 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1126 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1126 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1127 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process end ns: test.foo msg id: 1127 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.220 [conn1] Request::process begin ns: test.foo msg id: 1128 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.220 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.221 [conn1] Request::process end ns: test.foo msg id: 1128 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.221 [conn1] Request::process begin ns: test.foo msg id: 1129 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.221 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1129 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1130 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1130 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1131 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1131 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1132 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1132 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1133 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1133 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1134 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1134 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1135 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process end ns: test.foo msg id: 1135 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.222 [conn1] Request::process begin ns: test.foo msg id: 1136 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.222 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1136 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1137 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1137 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1138 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1138 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 
1139 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1139 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1140 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1140 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1141 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1141 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1142 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1142 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1143 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1143 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1144 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1144 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1145 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1145 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1146 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1146 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1147 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1147 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1148 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1148 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1149 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1149 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1150 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process end ns: test.foo msg id: 1150 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.223 [conn1] Request::process begin ns: test.foo msg id: 1151 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.223 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process end ns: test.foo msg id: 1151 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process begin ns: test.foo msg id: 1152 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.224 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process end ns: test.foo msg id: 1152 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process begin ns: test.foo msg id: 1153 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.224 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.224 [conn1] 
Request::process end ns: test.foo msg id: 1153 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process begin ns: test.foo msg id: 1154 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.224 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process end ns: test.foo msg id: 1154 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.224 [conn1] Request::process begin ns: test.foo msg id: 1155 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.224 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process end ns: test.foo msg id: 1155 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process begin ns: test.foo msg id: 1156 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.225 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process end ns: test.foo msg id: 1156 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process begin ns: test.foo msg id: 1157 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.225 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process end ns: test.foo msg id: 1157 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process begin ns: test.foo msg id: 1158 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.225 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process end ns: test.foo msg id: 1158 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process begin ns: test.foo msg id: 1159 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.225 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process end ns: test.foo msg id: 1159 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.225 [conn1] Request::process begin ns: test.foo msg id: 1160 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.225 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1160 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1161 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1161 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1162 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1162 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1163 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1163 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1164 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1164 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1165 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1165 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1166 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1166 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1167 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1167 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin 
ns: test.foo msg id: 1168 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1168 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1169 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1169 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1170 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1170 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1171 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1171 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1172 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1172 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1173 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1173 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1174 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process end ns: test.foo msg id: 1174 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.226 [conn1] Request::process begin ns: test.foo msg id: 1175 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.226 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1175 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1176 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1176 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1177 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1177 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1178 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1178 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1179 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1179 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1180 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1180 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1181 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process end ns: test.foo msg id: 1181 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.227 [conn1] Request::process begin ns: test.foo msg id: 1182 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.227 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1182 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1183 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1183 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1184 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1184 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1185 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1185 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1186 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1186 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1187 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1187 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1188 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1188 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1189 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1189 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1190 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1190 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1191 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1191 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1192 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1192 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1193 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1193 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1194 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1194 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1195 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1195 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process begin ns: test.foo msg id: 1196 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.229 [conn1] Request::process end ns: test.foo msg id: 1196 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.229 [conn1] 
Request::process begin ns: test.foo msg id: 1197 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.229 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1197 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1198 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1198 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1199 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1199 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1200 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1200 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1201 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1201 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1202 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1202 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1203 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1203 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1204 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1204 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1205 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1205 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1206 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1206 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1207 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1207 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1208 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1208 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1209 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1209 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1210 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process end ns: test.foo msg id: 1210 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.230 [conn1] Request::process begin ns: test.foo msg id: 1211 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.230 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1211 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1212 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1212 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1213 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1213 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1214 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1214 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1215 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1215 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1216 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1216 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1217 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1217 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1218 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1218 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1219 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1219 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1220 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1220 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1221 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1221 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1222 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process end ns: test.foo msg id: 1222 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.232 [conn1] Request::process begin ns: test.foo msg id: 1223 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.232 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1223 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1224 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1224 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1225 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1225 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1226 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1226 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1227 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1227 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1228 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1228 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1229 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1229 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1230 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1230 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1231 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1231 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1232 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1232 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1233 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1233 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1234 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1234 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1235 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1235 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1236 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1236 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1237 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process end ns: test.foo msg id: 1237 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.233 [conn1] Request::process begin ns: test.foo msg id: 1238 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.233 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.234 [conn1] Request::process end ns: test.foo msg id: 1238 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.234 [conn1] Request::process begin ns: test.foo msg id: 1239 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.234 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1239 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1240 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1240 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1241 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1241 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1242 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1242 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1243 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1243 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1244 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1244 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1245 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1245 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1246 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1246 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1247 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1247 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1248 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1248 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process begin ns: test.foo msg id: 1249 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.235 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.235 [conn1] Request::process end ns: test.foo msg id: 1249 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1250 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1250 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1251 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1251 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1252 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1252 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1253 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1253 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1254 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1254 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1255 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1255 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1256 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1256 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1257 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1257 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1258 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1258 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1259 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1259 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1260 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1260 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1261 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1261 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1262 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1262 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1263 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.236 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process end ns: test.foo msg id: 1263 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.236 [conn1] Request::process begin ns: test.foo msg id: 1264 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1264 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1265 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1265 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1266 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1266 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1267 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1267 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1268 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1268 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1269 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1269 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1270 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1270 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1271 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1271 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1272 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1272 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1273 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1273 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1274 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1274 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1275 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1275 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1276 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1276 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1277 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1277 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1278 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1278 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1279 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process end ns: test.foo msg id: 1279 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.237 [conn1] Request::process begin ns: test.foo msg id: 1280 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.237 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.238 [conn1] Request::process end ns: test.foo msg id: 1280 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.238 [conn1] Request::process begin ns: test.foo msg id: 1281 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.238 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.238 [conn1] Request::process end ns: test.foo msg id: 1281 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.238 [conn1] Request::process begin ns: test.foo msg id: 1282 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.238 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.238 [conn1] Request::process end ns: test.foo msg id: 1282 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.238 [conn1] Request::process begin ns: test.foo msg id: 1283 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.238 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1283 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1284 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1284 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1285 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1285 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1286 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1286 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1287 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1287 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1288 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1288 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1289 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1289 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1290 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1290 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1291 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1291 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1292 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1292 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1293 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1293 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1294 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1294 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1295 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1295 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1296 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1296 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1297 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1297 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process begin ns: test.foo msg id: 1298 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.239 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.239 [conn1] Request::process end ns: test.foo msg id: 1298 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1299 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1299 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1300 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1300 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1301 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1301 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1302 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1302 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1303 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1303 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1304 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1304 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1305 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1305 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1306 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1306 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1307 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1307 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1308 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1308 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1309 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1309 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1310 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1310 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1311 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1311 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1312 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1312 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1313 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1313 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process begin ns: test.foo msg id: 1314 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.240 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.240 [conn1] Request::process end ns: test.foo msg id: 1314 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1315 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1315 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1316 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1316 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1317 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1317 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1318 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1318 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1319 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1319 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1320 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1320 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1321 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1321 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1322 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1322 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1323 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1323 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1324 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1324 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1325 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1325 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1326 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1326 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1327 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1327 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1328 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1328 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1329 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process end ns: test.foo msg id: 1329 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.241 [conn1] Request::process begin ns: test.foo msg id: 1330 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.241 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.242 [conn1] Request::process end ns: test.foo msg id: 1330 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.242 [conn1] Request::process begin ns: test.foo msg id: 1331 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.242 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.242 [conn1] Request::process end ns: test.foo msg id: 1331 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.242 [conn1] Request::process begin ns: test.foo msg id: 1332 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.242 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1332 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1333 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1333 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1334 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1334 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1335 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1335 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1336 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1336 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1337 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1337 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1338 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1338 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1339 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1339 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1340 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1340 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1341 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1341 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1342 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1342 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1343 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1343 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1344 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1344 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1345 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1345 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1346 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1346 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1347 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process end ns: test.foo msg id: 1347 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.243 [conn1] Request::process begin ns: test.foo msg id: 1348 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.243 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1348 op: 2002 attempt: 0
0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1349 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1349 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1350 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1350 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1351 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1351 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1352 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1352 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1353 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1353 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1354 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1354 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1355 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1355 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin 
ns: test.foo msg id: 1356 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1356 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1357 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1357 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1358 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1358 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1359 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1359 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1360 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1360 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1361 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1361 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1362 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1362 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1363 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1363 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process begin ns: test.foo msg id: 1364 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.244 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.244 [conn1] Request::process end ns: test.foo msg id: 1364 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1365 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1365 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1366 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1366 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1367 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1367 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1368 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1368 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1369 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1369 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1370 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1370 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1371 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1371 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1372 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1372 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1373 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1373 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1374 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process end ns: test.foo msg id: 1374 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.245 [conn1] Request::process begin ns: test.foo msg id: 1375 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.245 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1375 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1376 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1376 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1377 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1377 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1378 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1378 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1379 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1379 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1380 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1380 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1381 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process end ns: test.foo msg id: 1381 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.246 [conn1] Request::process begin ns: test.foo msg id: 1382 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.246 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1382 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1383 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1383 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1384 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1384 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] 
Request::process begin ns: test.foo msg id: 1385 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1385 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1386 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1386 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1387 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1387 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1388 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1388 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1389 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1389 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1390 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1390 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1391 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1391 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1392 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1392 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1393 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1393 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1394 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1394 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1395 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1395 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1396 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1396 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1397 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1397 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process begin ns: test.foo msg id: 1398 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.247 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.247 [conn1] Request::process end ns: test.foo msg id: 1398 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1399 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1399 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1400 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1400 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1401 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1401 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1402 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1402 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1403 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1403 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1404 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1404 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1405 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1405 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1406 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo 
msg id: 1406 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1407 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1407 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1408 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1408 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1409 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1409 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1410 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1410 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1411 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1411 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1412 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1412 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1413 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1413 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1414 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.248 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process end ns: test.foo msg id: 1414 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.248 [conn1] Request::process begin ns: test.foo msg id: 1415 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1415 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1416 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1416 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1417 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1417 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1418 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1418 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1419 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1419 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1420 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1420 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 
1421 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1421 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1422 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1422 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1423 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1423 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1424 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1424 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1425 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process end ns: test.foo msg id: 1425 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.249 [conn1] Request::process begin ns: test.foo msg id: 1426 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.249 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1426 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1427 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.250 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1427 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1428 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.250 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1428 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1429 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.250 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1429 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1430 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.250 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1430 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1431 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.250 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1431 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1432 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.250 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process end ns: test.foo msg id: 1432 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.250 [conn1] Request::process begin ns: test.foo msg id: 1433 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1433 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1434 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1434 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1435 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] 
Request::process end ns: test.foo msg id: 1435 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1436 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1436 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1437 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1437 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1438 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1438 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1439 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1439 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1440 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1440 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1441 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1441 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1442 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1442 op: 2002 attempt: 0 
0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1443 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1443 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1444 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1444 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1445 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1445 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1446 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1446 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1447 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1447 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1448 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.251 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process end ns: test.foo msg id: 1448 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.251 [conn1] Request::process begin ns: test.foo msg id: 1449 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1449 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1450 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1450 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1451 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1451 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1452 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1452 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1453 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1453 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1454 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1454 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1455 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1455 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1456 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1456 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1457 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1457 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1458 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1458 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1459 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1459 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1460 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1460 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1461 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1461 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1462 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1462 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1463 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1463 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process begin ns: test.foo msg id: 1464 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.252 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.252 [conn1] Request::process end ns: test.foo msg id: 1464 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1465 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1465 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1466 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1466 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1467 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1467 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1468 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1468 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1469 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1469 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1470 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1470 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1471 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1471 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1472 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1472 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1473 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1473 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1474 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1474 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1475 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1475 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1476 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1476 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1477 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1477 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1478 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1478 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1479 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process end ns: test.foo msg id: 1479 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.253 [conn1] Request::process begin ns: test.foo msg id: 1480 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.253 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.254 [conn1] Request::process end ns: test.foo msg id: 1480 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.254 [conn1] Request::process begin ns: test.foo msg id: 1481 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.254 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.254 [conn1] Request::process end ns: test.foo msg id: 1481 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.254 [conn1] Request::process begin ns: test.foo msg id: 1482 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.254 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.254 [conn1] Request::process end ns: test.foo msg id: 1482 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1483 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1483 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1484 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1484 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1485 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1485 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1486 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1486 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1487 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1487 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1488 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1488 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1489 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1489 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1490 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1490 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1491 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1491 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1492 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1492 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1493 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1493 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1494 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1494 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1495 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1495 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1496 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1496 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1497 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1497 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1498 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process end ns: test.foo msg id: 1498 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.255 [conn1] Request::process begin ns: test.foo msg id: 1499 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.255 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1499 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1500 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1500 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1501 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1501 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1502 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1502 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1503 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1503 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1504 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1504 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1505 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1505 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1506 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1506 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1507 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1507 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1508 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1508 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1509 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1509 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1510 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1510 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1511 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1511 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1512 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1512 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1513 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1513 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1514 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process end ns: test.foo msg id: 1514 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.256 [conn1] Request::process begin ns: test.foo msg id: 1515 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.256 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1515 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1516 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1516 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1517 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1517 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1518 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1518 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1519 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1519 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1520 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1520 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1521 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1521 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1522 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1522 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1523 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1523 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1524 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1524 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1525 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1525 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1526 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1526 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1527 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1527 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1528 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1528 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1529 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1529 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process begin ns: test.foo msg id: 1530 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.257 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.257 [conn1] Request::process end ns: test.foo msg id: 1530 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.258 [conn1] Request::process begin ns: test.foo msg id: 1531 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.258 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.258 [conn1] Request::process end ns: test.foo msg id: 1531 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.258 [conn1] Request::process begin ns: test.foo msg id: 1532 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.258 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1532 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1533 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1533 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1534 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1534 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1535 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1535 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1536 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1536 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1537 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1537 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1538 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1538 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1539 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1539 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1540 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1540 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1541 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1541 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1542 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1542 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1543 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1543 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1544 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1544 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1545 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1545 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1546 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.259 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process end ns: test.foo msg id: 1546 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.259 [conn1] Request::process begin ns: test.foo msg id: 1547 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1547 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1548 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1548 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1549 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1549 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1550 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1550 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1551 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1551 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1552 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1552 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1553 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1553 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1554 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1554 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1555 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1555 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1556 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1556 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1557 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1557 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1558 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1558 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1559 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1559 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1560 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1560 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1561 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1561 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1562 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process end ns: test.foo msg id: 1562 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.260 [conn1] Request::process begin ns: test.foo msg id: 1563 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.260 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.261 [conn1] Request::process end ns: test.foo msg id: 1563 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.261 [conn1] Request::process begin ns: test.foo msg id: 1564 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.261 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.261 [conn1] Request::process end ns: test.foo msg id: 1564 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.261 [conn1] Request::process begin ns: test.foo msg id: 1565 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.261 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.261 [conn1] Request::process end ns: test.foo msg id: 1565 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1566 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1566 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1567 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1567 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1568 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1568 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1569 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1569 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1570 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1570 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1571 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1571 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1572 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1572 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1573 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1573 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1574 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1574 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1575 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1575 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1576 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1576 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1577 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1577 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1578 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1578 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1579 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.262 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process end ns: test.foo msg id: 1579 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.262 [conn1] Request::process begin ns: test.foo msg id: 1580 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1580 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1581 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1581 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1582 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1582 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1583 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1583 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1584 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1584 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1585 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1585 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1586 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1586 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1587 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1587 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1588 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1588 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1589 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1589 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1590 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1590 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1591 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1591 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1592 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1592 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1593 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1593 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1594 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo
msg id: 1594 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1595 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process end ns: test.foo msg id: 1595 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.263 [conn1] Request::process begin ns: test.foo msg id: 1596 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.263 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1596 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1597 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1597 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1598 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1598 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1599 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1599 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1600 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1600 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1601 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1601 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1602 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1602 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1603 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1603 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1604 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1604 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1605 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1605 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1606 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1606 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1607 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1607 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1608 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1608 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 
1609 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1609 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1610 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1610 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1611 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process end ns: test.foo msg id: 1611 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.264 [conn1] Request::process begin ns: test.foo msg id: 1612 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.264 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process end ns: test.foo msg id: 1612 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process begin ns: test.foo msg id: 1613 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.265 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process end ns: test.foo msg id: 1613 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process begin ns: test.foo msg id: 1614 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.265 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process end ns: test.foo msg id: 1614 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process begin ns: test.foo msg id: 1615 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.265 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process end ns: test.foo msg id: 1615 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.265 [conn1] Request::process begin ns: test.foo msg id: 1616 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.265 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1616 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1617 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1617 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1618 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1618 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1619 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1619 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1620 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1620 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1621 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1621 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1622 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1622 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1623 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] 
Request::process end ns: test.foo msg id: 1623 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1624 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1624 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1625 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1625 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1626 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1626 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1627 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1627 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1628 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process end ns: test.foo msg id: 1628 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.266 [conn1] Request::process begin ns: test.foo msg id: 1629 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.266 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1629 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1630 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1630 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1631 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1631 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1632 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1632 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1633 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1633 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1634 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1634 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1635 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1635 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1636 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1636 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1637 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1637 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin 
ns: test.foo msg id: 1638 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1638 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1639 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1639 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1640 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1640 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1641 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1641 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1642 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1642 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1643 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1643 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1644 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1644 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process begin ns: test.foo msg id: 1645 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.267 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.267 [conn1] Request::process end ns: test.foo msg id: 1645 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1646 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1646 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1647 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1647 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1648 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1648 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1649 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1649 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1650 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1650 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1651 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1651 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1652 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1652 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1653 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1653 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1654 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1654 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1655 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1655 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1656 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1656 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1657 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1657 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1658 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1658 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1659 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1659 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1660 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process end ns: test.foo msg id: 1660 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.268 [conn1] Request::process begin ns: test.foo msg id: 1661 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.268 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.269 [conn1] Request::process end ns: test.foo msg id: 1661 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.269 [conn1] Request::process begin ns: test.foo msg id: 1662 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.269 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.269 [conn1] Request::process end ns: test.foo msg id: 1662 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.269 [conn1] Request::process begin ns: test.foo msg id: 1663 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1663 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1664 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1664 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1665 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1665 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1666 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1666 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] 
Request::process begin ns: test.foo msg id: 1667 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1667 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1668 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1668 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1669 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1669 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1670 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1670 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1671 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1671 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1672 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1672 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1673 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1673 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1674 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1674 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1675 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1675 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1676 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1676 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1677 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1677 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1678 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.270 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process end ns: test.foo msg id: 1678 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.270 [conn1] Request::process begin ns: test.foo msg id: 1679 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1679 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1680 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1680 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1681 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1681 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1682 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1682 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1683 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1683 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1684 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1684 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1685 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1685 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1686 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1686 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1687 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1687 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1688 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo 
msg id: 1688 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process begin ns: test.foo msg id: 1689 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.271 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.271 [conn1] Request::process end ns: test.foo msg id: 1689 op: 2002 attempt: 0 0ms
[... identical Request::process begin / write / end trace entries repeat for msg ids 1690-1780, each op: 2002 on ns test.foo, completing in 0-1ms ...]
m30999| Fri Feb 22 12:22:55.279 [conn1] Request::process begin ns: test.foo msg id: 1781 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.279 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.323 [conn1] Request::process end ns: test.foo msg id: 1781 op: 2002 attempt: 0 44ms
[... identical trace entries repeat for msg ids 1782-1839, each completing in 0-1ms ...]
m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process begin ns: test.foo msg id: 1840 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.328 [conn1] write: test.foo
m30999| Fri Feb 22
12:22:55.328 [conn1] Request::process end ns: test.foo msg id: 1840 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process begin ns: test.foo msg id: 1841 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.328 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process end ns: test.foo msg id: 1841 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process begin ns: test.foo msg id: 1842 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.328 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process end ns: test.foo msg id: 1842 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process begin ns: test.foo msg id: 1843 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.328 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process end ns: test.foo msg id: 1843 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process begin ns: test.foo msg id: 1844 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.328 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.328 [conn1] Request::process end ns: test.foo msg id: 1844 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.329 [conn1] Request::process begin ns: test.foo msg id: 1845 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.329 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.329 [conn1] Request::process end ns: test.foo msg id: 1845 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.329 [conn1] Request::process begin ns: test.foo msg id: 1846 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.329 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1846 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1847 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1847 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1848 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1848 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1849 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1849 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1850 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1850 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1851 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1851 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1852 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1852 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1853 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1853 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1854 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1854 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] 
Request::process begin ns: test.foo msg id: 1855 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1855 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1856 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1856 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1857 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1857 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1858 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1858 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1859 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1859 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1860 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1860 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process begin ns: test.foo msg id: 1861 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.330 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.330 [conn1] Request::process end ns: test.foo msg id: 1861 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1862 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1862 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1863 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1863 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1864 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1864 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1865 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1865 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1866 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1866 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1867 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1867 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1868 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1868 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1869 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1869 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1870 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1870 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1871 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1871 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1872 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1872 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1873 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1873 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1874 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1874 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1875 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1875 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1876 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo 
msg id: 1876 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process begin ns: test.foo msg id: 1877 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.331 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.331 [conn1] Request::process end ns: test.foo msg id: 1877 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process begin ns: test.foo msg id: 1878 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.332 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process end ns: test.foo msg id: 1878 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process begin ns: test.foo msg id: 1879 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.332 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process end ns: test.foo msg id: 1879 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process begin ns: test.foo msg id: 1880 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.332 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process end ns: test.foo msg id: 1880 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process begin ns: test.foo msg id: 1881 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.332 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process end ns: test.foo msg id: 1881 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.332 [conn1] Request::process begin ns: test.foo msg id: 1882 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.332 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1882 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1883 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1883 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1884 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1884 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1885 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1885 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1886 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1886 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1887 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1887 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1888 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1888 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1889 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1889 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1890 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1890 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 
1891 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1891 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1892 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1892 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1893 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1893 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1894 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.333 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process end ns: test.foo msg id: 1894 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.333 [conn1] Request::process begin ns: test.foo msg id: 1895 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1895 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1896 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1896 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1897 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1897 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1898 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1898 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1899 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1899 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1900 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1900 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1901 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1901 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1902 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1902 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1903 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1903 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1904 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1904 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1905 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] 
Request::process end ns: test.foo msg id: 1905 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1906 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1906 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1907 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process end ns: test.foo msg id: 1907 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.334 [conn1] Request::process begin ns: test.foo msg id: 1908 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.334 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process end ns: test.foo msg id: 1908 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process begin ns: test.foo msg id: 1909 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.335 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process end ns: test.foo msg id: 1909 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process begin ns: test.foo msg id: 1910 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.335 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process end ns: test.foo msg id: 1910 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process begin ns: test.foo msg id: 1911 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.335 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process end ns: test.foo msg id: 1911 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.335 [conn1] Request::process begin ns: test.foo msg id: 1912 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1912 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1913 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1913 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1914 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1914 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1915 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1915 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1916 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1916 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1917 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1917 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1918 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1918 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1919 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1919 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin 
ns: test.foo msg id: 1920 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1920 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1921 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1921 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1922 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1922 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1923 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1923 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1924 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1924 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1925 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1925 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1926 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1926 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1927 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process end ns: test.foo msg id: 1927 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.336 [conn1] Request::process begin ns: test.foo msg id: 1928 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.336 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process end ns: test.foo msg id: 1928 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process begin ns: test.foo msg id: 1929 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.337 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process end ns: test.foo msg id: 1929 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process begin ns: test.foo msg id: 1930 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.337 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process end ns: test.foo msg id: 1930 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process begin ns: test.foo msg id: 1931 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.337 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process end ns: test.foo msg id: 1931 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process begin ns: test.foo msg id: 1932 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.337 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process end ns: test.foo msg id: 1932 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process begin ns: test.foo msg id: 1933 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.337 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process end ns: test.foo msg id: 1933 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.337 [conn1] Request::process begin ns: test.foo msg id: 1934 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.337 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.338 [conn1] Request::process end ns: test.foo msg id: 1934 op: 2002 attempt: 0 1ms
[... identical Request::process begin / write: test.foo / Request::process end log triplets repeat for msg ids 1935 through 2085 (ns: test.foo, op: 2002, attempt: 0, 0ms each), timestamps Fri Feb 22 12:22:55.338 through 12:22:55.353 ...]
m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2086 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1]
write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2086 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2087 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2087 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2088 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2088 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2089 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2089 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2090 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2090 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2091 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2091 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2092 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process end ns: test.foo msg id: 2092 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2093 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.353 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.353 [conn1] 
Request::process end ns: test.foo msg id: 2093 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.353 [conn1] Request::process begin ns: test.foo msg id: 2094 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2094 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2095 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2095 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2096 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2096 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2097 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2097 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2098 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2098 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2099 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2099 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2100 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2100 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2101 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2101 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2102 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2102 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2103 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2103 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2104 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2104 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2105 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2105 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2106 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2106 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin ns: test.foo msg id: 2107 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process end ns: test.foo msg id: 2107 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.354 [conn1] Request::process begin 
ns: test.foo msg id: 2108 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.354 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.355 [conn1] Request::process end ns: test.foo msg id: 2108 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.355 [conn1] Request::process begin ns: test.foo msg id: 2109 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.355 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2109 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2110 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2110 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2111 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2111 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2112 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2112 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2113 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2113 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2114 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2114 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2115 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2115 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2116 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2116 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2117 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2117 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2118 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2118 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2119 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2119 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2120 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2120 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2121 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2121 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2122 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2122 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2123 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2123 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2124 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2124 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process begin ns: test.foo msg id: 2125 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.356 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.356 [conn1] Request::process end ns: test.foo msg id: 2125 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2126 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2126 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2127 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2127 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2128 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2128 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2129 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2129 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2130 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2130 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2131 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2131 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2132 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2132 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2133 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process end ns: test.foo msg id: 2133 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.357 [conn1] Request::process begin ns: test.foo msg id: 2134 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.357 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2134 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2135 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2135 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2136 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2136 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] 
Request::process begin ns: test.foo msg id: 2137 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2137 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2138 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2138 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2139 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2139 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2140 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2140 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2141 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2141 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process begin ns: test.foo msg id: 2142 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.358 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.358 [conn1] Request::process end ns: test.foo msg id: 2142 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2143 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2143 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2144 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2144 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2145 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2145 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2146 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2146 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2147 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2147 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2148 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2148 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2149 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2149 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2150 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2150 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2151 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2151 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2152 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2152 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2153 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2153 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2154 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2154 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2155 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2155 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2156 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2156 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2157 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo msg id: 2157 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process begin ns: test.foo msg id: 2158 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.359 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.359 [conn1] Request::process end ns: test.foo 
msg id: 2158 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.360 [conn1] Request::process begin ns: test.foo msg id: 2159 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.360 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.360 [conn1] Request::process end ns: test.foo msg id: 2159 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.360 [conn1] Request::process begin ns: test.foo msg id: 2160 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.360 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2160 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2161 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2161 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2162 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2162 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2163 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2163 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2164 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2164 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2165 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2165 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2166 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2166 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2167 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2167 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2168 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2168 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2169 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2169 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2170 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2170 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2171 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2171 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2172 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2172 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 
2173 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2173 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2174 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process end ns: test.foo msg id: 2174 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.361 [conn1] Request::process begin ns: test.foo msg id: 2175 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.361 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2175 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2176 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2176 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2177 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2177 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2178 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2178 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2179 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2179 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2180 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2180 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2181 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2181 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2182 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2182 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2183 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2183 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2184 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process end ns: test.foo msg id: 2184 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.362 [conn1] Request::process begin ns: test.foo msg id: 2185 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.362 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.363 [conn1] Request::process end ns: test.foo msg id: 2185 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.363 [conn1] Request::process begin ns: test.foo msg id: 2186 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.363 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.363 [conn1] Request::process end ns: test.foo msg id: 2186 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.363 [conn1] Request::process begin ns: test.foo msg id: 2187 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.363 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.363 [conn1] 
Request::process end ns: test.foo msg id: 2187 op: 2002 attempt: 0 0ms
 m30999| Fri Feb 22 12:22:55.363 [conn1] Request::process begin ns: test.foo msg id: 2188 op: 2002 attempt: 0
 m30999| Fri Feb 22 12:22:55.363 [conn1] write: test.foo
 m30999| Fri Feb 22 12:22:55.363 [conn1] Request::process end ns: test.foo msg id: 2188 op: 2002 attempt: 0 0ms
 [ ... identical begin/write/end log triplets for msg ids 2189-2331 elided: each an op: 2002 write to test.foo on conn1, attempt: 0, completing in 0-1ms between 12:22:55.363 and 12:22:55.377 ... ]
 m30999| Fri Feb 22 12:22:55.376 [conn1] Request::process begin ns: test.foo msg id: 2332 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2332 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2333 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2333 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2334 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2334 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2335 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2335 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2336 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2336 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2337 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2337 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2338 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process end ns: test.foo msg id: 2338 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.377 [conn1] Request::process begin ns: test.foo msg id: 2339 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.377 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.378 [conn1] Request::process end ns: test.foo msg id: 2339 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process begin ns: test.foo msg id: 2340 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.378 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process end ns: test.foo msg id: 2340 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process begin ns: test.foo msg id: 2341 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.378 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process end ns: test.foo msg id: 2341 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process begin ns: test.foo msg id: 2342 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.378 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process end ns: test.foo msg id: 2342 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.378 [conn1] Request::process begin ns: test.foo msg id: 2343 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.378 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2343 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2344 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2344 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2345 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2345 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2346 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo 
msg id: 2346 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2347 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2347 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2348 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2348 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2349 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2349 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2350 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2350 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2351 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2351 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2352 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2352 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2353 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2353 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2354 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2354 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2355 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2355 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2356 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2356 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2357 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2357 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2358 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process end ns: test.foo msg id: 2358 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.379 [conn1] Request::process begin ns: test.foo msg id: 2359 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.379 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process end ns: test.foo msg id: 2359 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process begin ns: test.foo msg id: 2360 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.380 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process end ns: test.foo msg id: 2360 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process begin ns: test.foo msg id: 
2361 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.380 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process end ns: test.foo msg id: 2361 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process begin ns: test.foo msg id: 2362 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.380 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process end ns: test.foo msg id: 2362 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process begin ns: test.foo msg id: 2363 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.380 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process end ns: test.foo msg id: 2363 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.380 [conn1] Request::process begin ns: test.foo msg id: 2364 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.380 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2364 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2365 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2365 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2366 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2366 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2367 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2367 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2368 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2368 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2369 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2369 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2370 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2370 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2371 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2371 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2372 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2372 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2373 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2373 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2374 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process end ns: test.foo msg id: 2374 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2375 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.381 [conn1] 
Request::process end ns: test.foo msg id: 2375 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.381 [conn1] Request::process begin ns: test.foo msg id: 2376 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.381 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2376 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2377 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2377 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2378 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2378 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2379 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2379 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2380 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2380 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2381 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2381 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2382 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2382 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2383 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2383 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2384 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2384 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2385 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2385 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2386 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2386 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2387 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2387 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2388 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process end ns: test.foo msg id: 2388 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.382 [conn1] Request::process begin ns: test.foo msg id: 2389 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.382 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process end ns: test.foo msg id: 2389 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process begin 
ns: test.foo msg id: 2390 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.383 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process end ns: test.foo msg id: 2390 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process begin ns: test.foo msg id: 2391 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.383 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process end ns: test.foo msg id: 2391 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process begin ns: test.foo msg id: 2392 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.383 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process end ns: test.foo msg id: 2392 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.383 [conn1] Request::process begin ns: test.foo msg id: 2393 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.383 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2393 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2394 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2394 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2395 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2395 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2396 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2396 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2397 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2397 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2398 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2398 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2399 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2399 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2400 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2400 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2401 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2401 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2402 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2402 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2403 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2403 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2404 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2404 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2405 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2405 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2406 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2406 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2407 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2407 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2408 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2408 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process begin ns: test.foo msg id: 2409 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.384 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.384 [conn1] Request::process end ns: test.foo msg id: 2409 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process begin ns: test.foo msg id: 2410 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.385 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process end ns: test.foo msg id: 2410 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process begin ns: test.foo msg id: 2411 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.385 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process end ns: test.foo msg id: 2411 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process begin ns: test.foo msg id: 2412 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.385 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process end ns: test.foo msg id: 2412 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.385 [conn1] Request::process begin ns: test.foo msg id: 2413 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.385 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2413 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2414 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2414 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2415 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2415 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2416 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2416 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2417 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2417 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2418 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2418 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] 
Request::process begin ns: test.foo msg id: 2419 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2419 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2420 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2420 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2421 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2421 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2422 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2422 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2423 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2423 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2424 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2424 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2425 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2425 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2426 op: 2002 attempt: 
0
m30999| Fri Feb 22 12:22:55.386 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process end ns: test.foo msg id: 2426 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.386 [conn1] Request::process begin ns: test.foo msg id: 2427 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.387 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.387 [conn1] Request::process end ns: test.foo msg id: 2427 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.387 [conn1] Request::process begin ns: test.foo msg id: 2428 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.387 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.387 [conn1] Request::process end ns: test.foo msg id: 2428 op: 2002 attempt: 0 0ms
[... identical Request::process begin / write / Request::process end cycle repeats for msg ids 2429-2569 (timestamps 12:22:55.387-12:22:55.401, each request 0-1ms) ...]
m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2570 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2570 op: 2002 attempt: 0
0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2571 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2571 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2572 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2572 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2573 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2573 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2574 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2574 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2575 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2575 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2576 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2576 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2577 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2577 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin 
ns: test.foo msg id: 2578 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2578 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2579 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2579 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2580 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process end ns: test.foo msg id: 2580 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.401 [conn1] Request::process begin ns: test.foo msg id: 2581 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.401 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2581 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2582 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2582 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2583 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2583 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2584 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2584 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2585 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2585 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2586 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2586 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2587 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2587 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2588 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2588 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2589 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2589 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2590 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2590 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2591 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2591 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2592 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2592 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2593 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2593 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2594 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2594 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2595 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2595 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2596 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2596 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2597 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.402 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process end ns: test.foo msg id: 2597 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.402 [conn1] Request::process begin ns: test.foo msg id: 2598 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.403 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process end ns: test.foo msg id: 2598 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process begin ns: test.foo msg id: 2599 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.403 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process end ns: test.foo msg id: 2599 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process begin ns: test.foo msg id: 2600 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.403 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process end ns: test.foo msg id: 2600 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process begin ns: test.foo msg id: 2601 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.403 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process end ns: test.foo msg id: 2601 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.403 [conn1] Request::process begin ns: test.foo msg id: 2602 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.403 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2602 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2603 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2603 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2604 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2604 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2605 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2605 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2606 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2606 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] 
Request::process begin ns: test.foo msg id: 2607 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2607 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2608 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2608 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2609 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2609 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2610 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2610 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2611 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2611 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2612 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2612 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2613 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2613 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2614 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2614 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process begin ns: test.foo msg id: 2615 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.404 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.404 [conn1] Request::process end ns: test.foo msg id: 2615 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2616 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2616 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2617 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2617 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2618 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2618 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2619 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2619 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2620 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2620 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2621 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2621 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2622 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2622 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2623 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2623 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2624 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2624 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2625 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2625 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2626 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2626 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2627 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2627 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2628 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo 
msg id: 2628 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2629 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2629 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2630 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2630 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2631 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.405 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process end ns: test.foo msg id: 2631 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.405 [conn1] Request::process begin ns: test.foo msg id: 2632 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2632 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2633 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2633 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2634 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2634 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2635 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2635 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2636 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2636 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2637 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2637 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2638 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2638 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2639 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2639 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2640 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2640 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2641 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2641 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2642 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2642 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 
2643 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2643 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2644 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2644 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2645 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2645 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2646 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2646 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2647 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process end ns: test.foo msg id: 2647 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.406 [conn1] Request::process begin ns: test.foo msg id: 2648 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.406 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.407 [conn1] Request::process end ns: test.foo msg id: 2648 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.407 [conn1] Request::process begin ns: test.foo msg id: 2649 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.407 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.407 [conn1] Request::process end ns: test.foo msg id: 2649 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2650 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2650 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2651 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2651 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2652 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2652 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2653 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2653 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2654 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2654 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2655 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2655 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2656 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2656 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2657 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] 
Request::process end ns: test.foo msg id: 2657 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2658 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2658 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2659 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2659 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2660 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2660 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2661 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2661 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2662 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2662 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2663 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2663 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2664 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2664 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process begin ns: test.foo msg id: 2665 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.408 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.408 [conn1] Request::process end ns: test.foo msg id: 2665 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2666 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2666 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2667 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2667 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2668 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2668 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2669 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2669 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2670 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2670 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2671 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2671 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin 
ns: test.foo msg id: 2672 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2672 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process begin ns: test.foo msg id: 2673 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.409 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.409 [conn1] Request::process end ns: test.foo msg id: 2673 op: 2002 attempt: 0 0ms
[... identical Request::process begin / write / end cycles for msg ids 2674-2822 elided (ns: test.foo, op: 2002, attempt: 0, conn1, timestamps 12:22:55.409-12:22:55.424); every cycle completed in 0ms except msg ids 2690 and 2791, which took 1ms ...]
m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2823 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2823 op: 2002 attempt: 0 0ms m30999| Fri Feb 22
12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2824 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2824 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2825 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2825 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2826 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2826 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2827 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2827 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2828 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2828 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2829 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process end ns: test.foo msg id: 2829 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.424 [conn1] Request::process begin ns: test.foo msg id: 2830 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.424 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process end ns: test.foo msg id: 2830 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process begin ns: test.foo msg id: 
2831 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.425 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process end ns: test.foo msg id: 2831 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process begin ns: test.foo msg id: 2832 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.425 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process end ns: test.foo msg id: 2832 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process begin ns: test.foo msg id: 2833 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.425 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process end ns: test.foo msg id: 2833 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process begin ns: test.foo msg id: 2834 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.425 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process end ns: test.foo msg id: 2834 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.425 [conn1] Request::process begin ns: test.foo msg id: 2835 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.425 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2835 op: 2002 attempt: 0 23ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2836 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2836 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2837 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2837 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2838 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2838 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2839 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2839 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2840 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2840 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2841 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2841 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2842 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2842 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2843 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2843 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2844 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2844 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2845 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] 
Request::process end ns: test.foo msg id: 2845 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2846 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2846 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process begin ns: test.foo msg id: 2847 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.449 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.449 [conn1] Request::process end ns: test.foo msg id: 2847 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2848 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2848 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2849 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2849 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2850 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2850 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2851 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2851 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2852 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2852 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2853 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2853 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2854 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2854 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2855 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2855 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2856 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2856 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2857 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2857 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2858 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2858 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process begin ns: test.foo msg id: 2859 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.451 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.451 [conn1] Request::process end ns: test.foo msg id: 2859 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin 
ns: test.foo msg id: 2860 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2860 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2861 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2861 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2862 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2862 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2863 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2863 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2864 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2864 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2865 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2865 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2866 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2866 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2867 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2867 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2868 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2868 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2869 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2869 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2870 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2870 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2871 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2871 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2872 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2872 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2873 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2873 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2874 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2874 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process begin ns: test.foo msg id: 2875 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.452 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.452 [conn1] Request::process end ns: test.foo msg id: 2875 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2876 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2876 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2877 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2877 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2878 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2878 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2879 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2879 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2880 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2880 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2881 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2881 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2882 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2882 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2883 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2883 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2884 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2884 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2885 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2885 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2886 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2886 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2887 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2887 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2888 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2888 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] 
Request::process begin ns: test.foo msg id: 2889 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2889 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2890 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2890 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2891 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process end ns: test.foo msg id: 2891 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.453 [conn1] Request::process begin ns: test.foo msg id: 2892 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.453 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process end ns: test.foo msg id: 2892 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process begin ns: test.foo msg id: 2893 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.454 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process end ns: test.foo msg id: 2893 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process begin ns: test.foo msg id: 2894 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.454 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process end ns: test.foo msg id: 2894 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process begin ns: test.foo msg id: 2895 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.454 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process end ns: test.foo msg id: 2895 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process begin ns: test.foo msg id: 2896 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.454 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process end ns: test.foo msg id: 2896 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.454 [conn1] Request::process begin ns: test.foo msg id: 2897 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.454 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2897 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2898 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2898 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2899 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2899 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2900 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2900 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2901 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2901 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2902 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2902 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2903 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2903 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2904 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2904 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2905 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2905 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2906 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2906 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2907 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2907 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2908 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process end ns: test.foo msg id: 2908 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.455 [conn1] Request::process begin ns: test.foo msg id: 2909 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.455 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2909 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2910 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo 
msg id: 2910 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2911 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2911 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2912 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2912 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2913 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2913 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2914 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2914 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2915 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2915 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2916 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2916 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2917 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2917 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2918 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2918 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2919 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2919 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2920 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2920 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2921 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2921 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2922 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2922 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2923 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process end ns: test.foo msg id: 2923 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.456 [conn1] Request::process begin ns: test.foo msg id: 2924 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.456 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.457 [conn1] Request::process end ns: test.foo msg id: 2924 op: 2002 attempt: 0 1ms
m30999| Fri Feb 22 12:22:55.457 [conn1] Request::process begin ns: test.foo msg id: 2925 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.457 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.457 [conn1] Request::process end ns: test.foo msg id: 2925 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2926 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2926 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2927 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2927 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2928 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2928 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2929 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2929 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2930 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2930 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2931 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2931 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2932 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2932 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2933 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2933 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2934 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2934 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2935 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2935 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2936 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2936 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2937 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2937 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2938 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2938 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2939 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2939 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2940 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2940 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process begin ns: test.foo msg id: 2941 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.458 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.458 [conn1] Request::process end ns: test.foo msg id: 2941 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2942 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2942 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2943 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2943 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2944 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2944 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2945 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2945 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2946 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2946 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2947 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2947 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2948 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process end ns: test.foo msg id: 2948 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.459 [conn1] Request::process begin ns: test.foo msg id: 2949 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.459 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2949 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2950 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2950 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2951 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2951 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2952 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2952 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2953 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2953 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2954 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2954 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2955 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2955 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2956 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2956 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2957 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2957 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2958 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process end ns: test.foo msg id: 2958 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.460 [conn1] Request::process begin ns: test.foo msg id: 2959 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.460 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2959 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2960 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2960 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2961 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2961 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2962 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2962 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2963 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2963 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2964 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2964 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2965 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2965 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2966 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2966 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2967 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2967 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2968 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2968 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2969 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2969 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2970 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2970 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2971 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2971 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2972 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2972 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2973 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process end ns: test.foo msg id: 2973 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.461 [conn1] Request::process begin ns: test.foo msg id: 2974 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.461 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.462 [conn1] Request::process end ns: test.foo msg id: 2974 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.462 [conn1] Request::process begin ns: test.foo msg id: 2975 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.462 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.462 [conn1] Request::process end ns: test.foo msg id: 2975 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.462 [conn1] Request::process begin ns: test.foo msg id: 2976 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.462 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.462 [conn1] Request::process end ns: test.foo msg id: 2976 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2977 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2977 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2978 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2978 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2979 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2979 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2980 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2980 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2981 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2981 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2982 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2982 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2983 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2983 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2984 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2984 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2985 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2985 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2986 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2986 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2987 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2987 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2988 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2988 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2989 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2989 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2990 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2990 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2991 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2991 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process begin ns: test.foo msg id: 2992 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.463 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.463 [conn1] Request::process end ns: test.foo msg id: 2992 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process begin ns: test.foo msg id: 2993 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.464 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process end ns: test.foo msg id: 2993 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process begin ns: test.foo msg id: 2994 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.464 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process end ns: test.foo msg id: 2994 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process begin ns: test.foo msg id: 2995 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.464 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process end ns: test.foo msg id: 2995 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process begin ns: test.foo msg id: 2996 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.464 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process end ns: test.foo msg id: 2996 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process begin ns: test.foo msg id: 2997 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.464 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process end ns: test.foo msg id: 2997 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.464 [conn1] Request::process begin ns: test.foo msg id: 2998 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.464 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 2998 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 2999 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 2999 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3000 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3000 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3001 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3001 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3002 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3002 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3003 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3003 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3004 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3004 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3005 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3005 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3006 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3006 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3007 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3007 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3008 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3008 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process begin ns: test.foo msg id: 3009 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.465 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.465 [conn1] Request::process end ns: test.foo msg id: 3009 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3010 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3010 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3011 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3011 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3012 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3012 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3013 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3013 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3014 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3014 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3015 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3015 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3016 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3016 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3017 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3017 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3018 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3018 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3019 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3019 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3020 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3020 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3021 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3021 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3022 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process end ns: test.foo msg id: 3022 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.466 [conn1] Request::process begin ns: test.foo msg id: 3023 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.466 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process end ns: test.foo msg id: 3023 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process begin ns: test.foo msg id: 3024 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.467 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process end ns: test.foo msg id: 3024 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process begin ns: test.foo msg id: 3025 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.467 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process end ns: test.foo msg id: 3025 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process begin ns: test.foo msg id: 3026 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.467 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process end ns: test.foo msg id: 3026 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.467 [conn1] Request::process begin ns: test.foo msg id: 3027 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.467 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3027 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3028 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3028 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3029 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3029 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3030 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3030 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3031 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3031 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3032 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3032 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3033 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3033 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3034 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3034 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3035 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3035 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3036 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3036 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3037 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3037 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3038 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3038 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3039 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3039 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3040 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3040 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3041 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3041 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3042 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3042 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process begin ns: test.foo msg id: 3043 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.468 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.468 [conn1] Request::process end ns: test.foo msg id: 3043 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process begin ns: test.foo msg id: 3044 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.469 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process end ns: test.foo msg id: 3044 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process begin ns: test.foo msg id: 3045 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.469 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process end ns: test.foo msg id: 3045 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process begin ns: test.foo msg id: 3046 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.469 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process end ns: test.foo msg id: 3046 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process begin ns: test.foo msg id: 3047 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.469 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process end ns: test.foo msg id: 3047 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.469 [conn1] Request::process begin ns: test.foo msg id: 3048 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.469 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3048 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3049 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3049 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3050 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3050 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3051 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3051 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3052 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3052 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3053 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3053 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3054 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3054 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3055 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3055 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3056 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3056 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3057 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3057 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3058 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3058 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3059 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3059 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3060 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process end ns: test.foo msg id: 3060 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.470 [conn1] Request::process begin ns: test.foo msg id: 3061 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.470 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3061 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3062 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3062 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3063 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3063 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3064 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3064 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3065 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3065 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3066 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3066 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3067 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3067 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3068 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3068 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3069 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3069
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3070 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3070 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3071 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3071 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3072 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process end ns: test.foo msg id: 3072 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.471 [conn1] Request::process begin ns: test.foo msg id: 3073 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.471 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process end ns: test.foo msg id: 3073 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process begin ns: test.foo msg id: 3074 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.472 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process end ns: test.foo msg id: 3074 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process begin ns: test.foo msg id: 3075 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.472 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process end ns: test.foo msg id: 3075 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process begin ns: test.foo msg id: 3076 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.472 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process end ns: test.foo msg id: 3076 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.472 [conn1] 
Request::process begin ns: test.foo msg id: 3077 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.472 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process end ns: test.foo msg id: 3077 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process begin ns: test.foo msg id: 3078 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.472 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process end ns: test.foo msg id: 3078 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.472 [conn1] Request::process begin ns: test.foo msg id: 3079 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3079 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3080 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3080 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3081 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3081 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3082 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3082 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3083 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3083 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3084 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3084 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3085 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3085 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3086 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3086 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3087 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3087 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3088 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3088 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3089 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3089 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3090 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3090 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3091 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3091 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3092 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3092 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3093 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3093 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3094 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process end ns: test.foo msg id: 3094 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.473 [conn1] Request::process begin ns: test.foo msg id: 3095 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.473 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.474 [conn1] Request::process end ns: test.foo msg id: 3095 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.474 [conn1] Request::process begin ns: test.foo msg id: 3096 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.474 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.474 [conn1] Request::process end ns: test.foo msg id: 3096 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.474 [conn1] Request::process begin ns: test.foo msg id: 3097 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.474 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.474 [conn1] Request::process end ns: test.foo msg id: 3097 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.474 [conn1] Request::process begin ns: test.foo msg id: 3098 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.474 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo 
msg id: 3098 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3099 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3099 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3100 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3100 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3101 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3101 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3102 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3102 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3103 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3103 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3104 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3104 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3105 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3105 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3106 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3106 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3107 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3107 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3108 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3108 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3109 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3109 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3110 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3110 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3111 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process end ns: test.foo msg id: 3111 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.475 [conn1] Request::process begin ns: test.foo msg id: 3112 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.475 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3112 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 
3113 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3113 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3114 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3114 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3115 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3115 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3116 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3116 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3117 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3117 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3118 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3118 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3119 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3119 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3120 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3120 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3121 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3121 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3122 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process end ns: test.foo msg id: 3122 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.476 [conn1] Request::process begin ns: test.foo msg id: 3123 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.476 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process end ns: test.foo msg id: 3123 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process begin ns: test.foo msg id: 3124 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.477 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process end ns: test.foo msg id: 3124 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process begin ns: test.foo msg id: 3125 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.477 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process end ns: test.foo msg id: 3125 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process begin ns: test.foo msg id: 3126 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.477 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process end ns: test.foo msg id: 3126 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process begin ns: test.foo msg id: 3127 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.477 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.477 [conn1] 
Request::process end ns: test.foo msg id: 3127 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process begin ns: test.foo msg id: 3128 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.477 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process end ns: test.foo msg id: 3128 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.477 [conn1] Request::process begin ns: test.foo msg id: 3129 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.477 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3129 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3130 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3130 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3131 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3131 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3132 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3132 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3133 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3133 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3134 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3134 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3135 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3135 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3136 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3136 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3137 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3137 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3138 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3138 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3139 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3139 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3140 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3140 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3141 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3141 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin 
ns: test.foo msg id: 3142 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3142 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3143 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3143 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3144 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3144 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3145 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.478 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process end ns: test.foo msg id: 3145 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.478 [conn1] Request::process begin ns: test.foo msg id: 3146 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.479 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.479 [conn1] Request::process end ns: test.foo msg id: 3146 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.479 [conn1] Request::process begin ns: test.foo msg id: 3147 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.479 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.479 [conn1] Request::process end ns: test.foo msg id: 3147 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.479 [conn1] Request::process begin ns: test.foo msg id: 3148 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.479 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3148 op: 2002 attempt: 0 1ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3149 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3149 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3150 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3150 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3151 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3151 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3152 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3152 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3153 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3153 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3154 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3154 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3155 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3155 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3156 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3156 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3157 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3157 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3158 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3158 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3159 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3159 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3160 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3160 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3161 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3161 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process begin ns: test.foo msg id: 3162 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.480 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.480 [conn1] Request::process end ns: test.foo msg id: 3162 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3163 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3163 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3164 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3164 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3165 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3165 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3166 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3166 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3167 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3167 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3168 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3168 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3169 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3169 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3170 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3170 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.481 [conn1] 
Request::process begin ns: test.foo msg id: 3171 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3171 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process begin ns: test.foo msg id: 3172 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.481 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.481 [conn1] Request::process end ns: test.foo msg id: 3172 op: 2002 attempt: 0 0ms
[... identical begin/write/end log triplets repeat for msg ids 3173-3313: all ns: test.foo, op: 2002, attempt: 0, 0-1ms each, Fri Feb 22 12:22:55.481 through 12:22:55.496 ...]
m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3314 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3314 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3315 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.496 [conn1]
Request::process end ns: test.foo msg id: 3315 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3316 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3316 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3317 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3317 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3318 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3318 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3319 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3319 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3320 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3320 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3321 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3321 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3322 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3322 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3323 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3323 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3324 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3324 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3325 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process end ns: test.foo msg id: 3325 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.496 [conn1] Request::process begin ns: test.foo msg id: 3326 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.496 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process end ns: test.foo msg id: 3326 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process begin ns: test.foo msg id: 3327 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.497 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process end ns: test.foo msg id: 3327 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process begin ns: test.foo msg id: 3328 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.497 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process end ns: test.foo msg id: 3328 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process begin ns: test.foo msg id: 3329 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.497 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.497 [conn1] Request::process end ns: test.foo msg id: 3329 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin 
ns: test.foo msg id: 3330 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3330 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3331 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3331 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3332 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3332 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3333 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3333 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3334 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3334 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3335 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3335 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3336 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3336 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3337 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3337 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3338 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3338 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3339 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3339 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3340 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3340 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3341 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3341 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3342 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3342 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3343 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3343 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3344 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3344 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process begin ns: test.foo msg id: 3345 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.498 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.498 [conn1] Request::process end ns: test.foo msg id: 3345 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process begin ns: test.foo msg id: 3346 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.499 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process end ns: test.foo msg id: 3346 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process begin ns: test.foo msg id: 3347 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.499 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process end ns: test.foo msg id: 3347 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process begin ns: test.foo msg id: 3348 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.499 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process end ns: test.foo msg id: 3348 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process begin ns: test.foo msg id: 3349 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.499 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process end ns: test.foo msg id: 3349 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process begin ns: test.foo msg id: 3350 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.499 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process end ns: test.foo msg id: 3350 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.499 [conn1] Request::process begin ns: test.foo msg id: 3351 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.499 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3351 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3352 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3352 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3353 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3353 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3354 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3354 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3355 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3355 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3356 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3356 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3357 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3357 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3358 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3358 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] 
Request::process begin ns: test.foo msg id: 3359 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3359 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3360 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3360 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3361 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3361 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3362 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process end ns: test.foo msg id: 3362 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.500 [conn1] Request::process begin ns: test.foo msg id: 3363 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.500 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3363 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3364 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3364 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3365 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3365 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3366 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3366 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3367 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3367 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3368 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3368 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3369 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3369 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3370 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3370 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3371 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3371 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3372 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3372 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3373 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3373 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3374 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3374 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3375 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3375 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3376 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3376 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3377 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3377 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3378 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3378 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3379 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.501 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process end ns: test.foo msg id: 3379 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.501 [conn1] Request::process begin ns: test.foo msg id: 3380 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.502 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process end ns: test.foo 
msg id: 3380 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process begin ns: test.foo msg id: 3381 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.502 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process end ns: test.foo msg id: 3381 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process begin ns: test.foo msg id: 3382 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.502 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process end ns: test.foo msg id: 3382 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process begin ns: test.foo msg id: 3383 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.502 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process end ns: test.foo msg id: 3383 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.502 [conn1] Request::process begin ns: test.foo msg id: 3384 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.502 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3384 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3385 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3385 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3386 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3386 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3387 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3387 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3388 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3388 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3389 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3389 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3390 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3390 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3391 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3391 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3392 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3392 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3393 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3393 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3394 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3394 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 
3395 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3395 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3396 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3396 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process begin ns: test.foo msg id: 3397 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.503 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.503 [conn1] Request::process end ns: test.foo msg id: 3397 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3398 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3398 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3399 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3399 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3400 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3400 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3401 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3401 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3402 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3402 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3403 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3403 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3404 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3404 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3405 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3405 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3406 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3406 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3407 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3407 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3408 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3408 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3409 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] 
Request::process end ns: test.foo msg id: 3409 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3410 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3410 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3411 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3411 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3412 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3412 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3413 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process end ns: test.foo msg id: 3413 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.504 [conn1] Request::process begin ns: test.foo msg id: 3414 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.504 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process end ns: test.foo msg id: 3414 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process begin ns: test.foo msg id: 3415 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.505 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process end ns: test.foo msg id: 3415 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process begin ns: test.foo msg id: 3416 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.505 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process end ns: test.foo msg id: 3416 op: 2002 attempt: 0 
0ms
m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process begin ns: test.foo msg id: 3417 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.505 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.505 [conn1] Request::process end ns: test.foo msg id: 3417 op: 2002 attempt: 0 0ms
[... identical begin/write/end records repeat for msg ids 3418-3553 (ns: test.foo, op: 2002, attempt: 0, 0ms each), timestamps 12:22:55.505 through 12:22:55.516 ...]
m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3554 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3554 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3555 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3555 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3556 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3556 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3557 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3557 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3558 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3558 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3559 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3559 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3560 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3560 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3561 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3561 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3562 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3562 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3563 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3563 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3564 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3564 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3565 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.516 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process end ns: test.foo msg id: 3565 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.516 [conn1] Request::process begin ns: test.foo msg id: 3566 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3566 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3567 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3567 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3568 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo 
msg id: 3568 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3569 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3569 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3570 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3570 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3571 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3571 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3572 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3572 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3573 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3573 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3574 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3574 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3575 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3575 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3576 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3576 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3577 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3577 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3578 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3578 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3579 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3579 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3580 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3580 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3581 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process end ns: test.foo msg id: 3581 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.517 [conn1] Request::process begin ns: test.foo msg id: 3582 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.517 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process end ns: test.foo msg id: 3582 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process begin ns: test.foo msg id: 
3583 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.518 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process end ns: test.foo msg id: 3583 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process begin ns: test.foo msg id: 3584 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.518 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process end ns: test.foo msg id: 3584 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process begin ns: test.foo msg id: 3585 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.518 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process end ns: test.foo msg id: 3585 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process begin ns: test.foo msg id: 3586 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.518 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process end ns: test.foo msg id: 3586 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.518 [conn1] Request::process begin ns: test.foo msg id: 3587 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.518 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3587 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3588 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3588 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3589 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3589 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3590 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3590 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3591 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3591 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3592 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3592 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3593 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3593 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3594 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3594 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3595 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3595 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3596 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3596 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3597 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] 
Request::process end ns: test.foo msg id: 3597 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3598 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3598 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3599 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process end ns: test.foo msg id: 3599 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.519 [conn1] Request::process begin ns: test.foo msg id: 3600 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.519 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3600 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3601 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3601 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3602 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3602 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3603 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3603 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3604 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3604 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3605 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3605 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3606 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3606 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3607 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3607 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3608 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3608 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3609 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3609 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3610 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3610 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3611 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3611 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin 
ns: test.foo msg id: 3612 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3612 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3613 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3613 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3614 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3614 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3615 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3615 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process begin ns: test.foo msg id: 3616 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.520 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.520 [conn1] Request::process end ns: test.foo msg id: 3616 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3617 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3617 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3618 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3618 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3619 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3619 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3620 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3620 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3621 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3621 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3622 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3622 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3623 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3623 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3624 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3624 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3625 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3625 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3626 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3626 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3627 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3627 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3628 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3628 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3629 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3629 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3630 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3630 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3631 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3631 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3632 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process end ns: test.foo msg id: 3632 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.521 [conn1] Request::process begin ns: test.foo msg id: 3633 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.521 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process end ns: test.foo msg id: 3633 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process begin ns: test.foo msg id: 3634 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.522 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process end ns: test.foo msg id: 3634 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process begin ns: test.foo msg id: 3635 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.522 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process end ns: test.foo msg id: 3635 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process begin ns: test.foo msg id: 3636 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.522 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process end ns: test.foo msg id: 3636 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.522 [conn1] Request::process begin ns: test.foo msg id: 3637 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.522 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3637 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3638 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3638 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3639 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3639 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3640 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3640 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] 
Request::process begin ns: test.foo msg id: 3641 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3641 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3642 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3642 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3643 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3643 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3644 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3644 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3645 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3645 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3646 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3646 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3647 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3647 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3648 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3648 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3649 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3649 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3650 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process end ns: test.foo msg id: 3650 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.523 [conn1] Request::process begin ns: test.foo msg id: 3651 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.523 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process end ns: test.foo msg id: 3651 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process begin ns: test.foo msg id: 3652 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.524 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process end ns: test.foo msg id: 3652 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process begin ns: test.foo msg id: 3653 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.524 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process end ns: test.foo msg id: 3653 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process begin ns: test.foo msg id: 3654 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.524 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process end ns: test.foo msg id: 3654 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process begin ns: test.foo msg id: 3655 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.524 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.524 [conn1] Request::process end ns: test.foo msg id: 3655 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process begin ns: test.foo msg id: 3656 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.524 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.524 [conn1] Request::process end ns: test.foo msg id: 3656 op: 2002 attempt: 0 0ms
[... identical begin / write / end cycle repeated for msg ids 3657 through 3798 (timestamps 12:22:55.524 to 12:22:55.535, each completing in 0ms) ...]
m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process begin ns: test.foo msg id: 3799 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.535 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process end ns: test.foo msg id: 3799 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process begin
ns: test.foo msg id: 3800 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.535 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process end ns: test.foo msg id: 3800 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process begin ns: test.foo msg id: 3801 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.535 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process end ns: test.foo msg id: 3801 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process begin ns: test.foo msg id: 3802 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.535 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process end ns: test.foo msg id: 3802 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.535 [conn1] Request::process begin ns: test.foo msg id: 3803 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.535 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3803 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3804 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3804 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3805 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3805 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3806 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3806 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3807 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3807 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3808 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3808 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3809 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3809 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3810 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3810 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3811 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3811 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3812 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3812 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3813 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3813 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3814 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3814 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3815 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3815 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3816 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3816 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3817 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3817 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3818 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3818 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process begin ns: test.foo msg id: 3819 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.536 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.536 [conn1] Request::process end ns: test.foo msg id: 3819 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3820 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3820 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3821 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3821 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3822 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3822 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3823 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3823 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3824 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3824 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3825 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3825 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3826 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3826 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3827 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3827 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3828 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3828 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] 
Request::process begin ns: test.foo msg id: 3829 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3829 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3830 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3830 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3831 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3831 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3832 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3832 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3833 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3833 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3834 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process end ns: test.foo msg id: 3834 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.537 [conn1] Request::process begin ns: test.foo msg id: 3835 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.537 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3835 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3836 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3836 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3837 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3837 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3838 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3838 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3839 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3839 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3840 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3840 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3841 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3841 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3842 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process end ns: test.foo msg id: 3842 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.538 [conn1] Request::process begin ns: test.foo msg id: 3843 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.538 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3843 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3844 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3844 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3845 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3845 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3846 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3846 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3847 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3847 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3848 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3848 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3849 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3849 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3850 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo 
msg id: 3850 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3851 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3851 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3852 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process end ns: test.foo msg id: 3852 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.539 [conn1] Request::process begin ns: test.foo msg id: 3853 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.539 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3853 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3854 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3854 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3855 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3855 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3856 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3856 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3857 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3857 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3858 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3858 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3859 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3859 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3860 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3860 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3861 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3861 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3862 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3862 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3863 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3863 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3864 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3864 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 
3865 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3865 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3866 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3866 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3867 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3867 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3868 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.540 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process end ns: test.foo msg id: 3868 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.540 [conn1] Request::process begin ns: test.foo msg id: 3869 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3869 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3870 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3870 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3871 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3871 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3872 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3872 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3873 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3873 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3874 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3874 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3875 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3875 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3876 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3876 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3877 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3877 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3878 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3878 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3879 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] 
Request::process end ns: test.foo msg id: 3879 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3880 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3880 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3881 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3881 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3882 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3882 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3883 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3883 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3884 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process end ns: test.foo msg id: 3884 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.541 [conn1] Request::process begin ns: test.foo msg id: 3885 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.541 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3885 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3886 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3886 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3887 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3887 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3888 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3888 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3889 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3889 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3890 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3890 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3891 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3891 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3892 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3892 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin ns: test.foo msg id: 3893 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process end ns: test.foo msg id: 3893 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.542 [conn1] Request::process begin 
ns: test.foo msg id: 3894 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.542 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3894 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3895 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3895 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3896 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3896 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3897 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3897 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3898 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3898 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3899 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3899 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3900 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3900 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3901 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3901 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3902 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process end ns: test.foo msg id: 3902 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.543 [conn1] Request::process begin ns: test.foo msg id: 3903 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.543 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3903 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process begin ns: test.foo msg id: 3904 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.544 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3904 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process begin ns: test.foo msg id: 3905 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.544 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3905 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process begin ns: test.foo msg id: 3906 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.544 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3906 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process begin ns: test.foo msg id: 3907 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.544 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3907 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process begin ns: test.foo msg id: 3908 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.544 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3908 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process begin ns: test.foo msg id: 3909 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.544 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.544 [conn1] Request::process end ns: test.foo msg id: 3909 op: 2002 attempt: 0 0ms
[identical begin/write/end trace repeated on conn1 for msg ids 3910-4052 (op: 2002 writes to test.foo, 12:22:55.544-12:22:55.580); every request completed in 0ms except msg id 3991, which took 26ms (begin 12:22:55.550, end 12:22:55.576)]
m30999| Fri Feb 22 12:22:55.580 [conn1] Request::process begin ns: test.foo msg id:
4053 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.580 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.581 [conn1] Request::process end ns: test.foo msg id: 4053 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.581 [conn1] Request::process begin ns: test.foo msg id: 4054 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.581 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.581 [conn1] Request::process end ns: test.foo msg id: 4054 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.581 [conn1] Request::process begin ns: test.foo msg id: 4055 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.581 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.581 [conn1] Request::process end ns: test.foo msg id: 4055 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.581 [conn1] Request::process begin ns: test.foo msg id: 4056 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.581 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4056 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4057 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4057 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4058 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4058 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4059 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4059 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4060 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4060 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4061 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4061 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4062 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4062 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4063 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4063 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4064 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4064 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4065 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4065 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4066 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4066 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4067 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] 
Request::process end ns: test.foo msg id: 4067 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4068 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4068 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4069 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4069 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process begin ns: test.foo msg id: 4070 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.582 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.582 [conn1] Request::process end ns: test.foo msg id: 4070 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4071 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4071 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4072 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4072 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4073 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4073 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4074 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4074 op: 2002 attempt: 0 
0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4075 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4075 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4076 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4076 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4077 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4077 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4078 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4078 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4079 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4079 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4080 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4080 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4081 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4081 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin 
ns: test.foo msg id: 4082 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4082 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4083 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4083 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4084 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4084 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4085 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4085 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4086 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process end ns: test.foo msg id: 4086 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.583 [conn1] Request::process begin ns: test.foo msg id: 4087 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.583 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4087 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4088 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4088 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4089 op: 2002 attempt: 0 m30999| Fri Feb 22 
12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4089 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4090 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4090 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4091 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4091 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4092 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4092 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4093 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4093 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4094 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4094 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4095 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4095 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4096 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 
12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4096 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4097 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4097 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4098 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4098 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4099 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4099 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4100 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4100 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4101 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4101 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4102 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4102 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process begin ns: test.foo msg id: 4103 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.584 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.584 [conn1] Request::process end ns: test.foo msg id: 4103 
op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.585 [conn1] Request::process begin ns: test.foo msg id: 4104 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.585 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.585 [conn1] Request::process end ns: test.foo msg id: 4104 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.585 [conn1] Request::process begin ns: test.foo msg id: 4105 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.585 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4105 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4106 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4106 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4107 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4107 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4108 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4108 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4109 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4109 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4110 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4110 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] 
Request::process begin ns: test.foo msg id: 4111 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4111 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4112 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4112 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4113 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4113 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4114 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4114 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4115 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4115 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4116 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4116 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4117 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4117 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4118 op: 2002 attempt: 
0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4118 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4119 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process end ns: test.foo msg id: 4119 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.586 [conn1] Request::process begin ns: test.foo msg id: 4120 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.586 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4120 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4121 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4121 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4122 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4122 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4123 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4123 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4124 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4124 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4125 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| 
Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4125 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4126 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4126 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4127 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4127 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4128 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4128 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4129 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4129 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4130 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4130 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4131 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4131 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4132 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo 
msg id: 4132 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4133 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4133 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4134 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4134 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4135 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process end ns: test.foo msg id: 4135 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.587 [conn1] Request::process begin ns: test.foo msg id: 4136 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.587 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4136 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4137 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4137 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4138 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4138 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4139 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4139 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 
12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4140 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4140 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4141 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4141 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4142 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4142 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4143 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4143 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4144 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4144 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4145 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4145 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4146 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4146 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 
4147 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4147 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4148 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4148 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4149 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4149 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4150 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4150 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4151 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4151 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process begin ns: test.foo msg id: 4152 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.588 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.588 [conn1] Request::process end ns: test.foo msg id: 4152 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4153 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4153 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4154 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] 
write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4154 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4155 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4155 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4156 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4156 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4157 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4157 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4158 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4158 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4159 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4159 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4160 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4160 op: 2002 attempt: 0 0ms m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4161 op: 2002 attempt: 0 m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo m30999| Fri Feb 22 12:22:55.589 [conn1] 
Request::process end ns: test.foo msg id: 4161 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4162 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4162 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4163 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process end ns: test.foo msg id: 4163 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.589 [conn1] Request::process begin ns: test.foo msg id: 4164 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.589 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process end ns: test.foo msg id: 4164 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process begin ns: test.foo msg id: 4165 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.590 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process end ns: test.foo msg id: 4165 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process begin ns: test.foo msg id: 4166 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.590 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process end ns: test.foo msg id: 4166 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process begin ns: test.foo msg id: 4167 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.590 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process end ns: test.foo msg id: 4167 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process begin ns: test.foo msg id: 4168 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.590 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process end ns: test.foo msg id: 4168 op: 2002 attempt: 0
0ms
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process begin ns: test.foo msg id: 4169 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.590 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process end ns: test.foo msg id: 4169 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.590 [conn1] Request::process begin ns: test.foo msg id: 4170 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4170 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4171 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4171 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4172 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4172 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4173 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4173 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4174 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4174 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4175 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4175 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin
ns: test.foo msg id: 4176 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4176 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4177 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4177 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4178 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4178 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4179 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4179 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4180 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4180 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4181 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4181 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4182 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4182 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4183 op: 2002 attempt: 0
m30999| Fri Feb 22
12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4183 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4184 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process end ns: test.foo msg id: 4184 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.591 [conn1] Request::process begin ns: test.foo msg id: 4185 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.591 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process end ns: test.foo msg id: 4185 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process begin ns: test.foo msg id: 4186 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.592 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process end ns: test.foo msg id: 4186 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process begin ns: test.foo msg id: 4187 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.592 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process end ns: test.foo msg id: 4187 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process begin ns: test.foo msg id: 4188 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.592 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process end ns: test.foo msg id: 4188 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process begin ns: test.foo msg id: 4189 op: 2002 attempt: 0
m30999| Fri Feb 22 12:22:55.592 [conn1] write: test.foo
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process end ns: test.foo msg id: 4189 op: 2002 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.592 [conn1] Request::process begin ns: test.$cmd msg id: 4190 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.592 [conn1] single query: test.$cmd { getlasterror: 1.0 }
ntoreturn: -1 options : 0
m30999| Fri Feb 22 12:22:55.593 [conn1] Request::process end ns: test.$cmd msg id: 4190 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.593 [conn1] Request::process begin ns: admin.$cmd msg id: 4191 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.593 [conn1] single query: admin.$cmd { shardcollection: "test.foo", key: { _id: 1.0 } } ntoreturn: -1 options : 0
m30999| Fri Feb 22 12:22:55.593 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:22:55.593 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:55.593 [conn1] connected connection!
m30001| Fri Feb 22 12:22:55.593 [initandlisten] connection accepted from 127.0.0.1:59237 #4 (4 connections now open)
m30999| Fri Feb 22 12:22:55.594 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:22:55.594 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:22:55.595 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:22:55.598 [conn1] going to create 81 chunk(s) for: test.foo using new epoch 5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:22:55.606 [conn1] major version query from 0|0||5127631ff72bccb1fac77345 and over 0 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 0|0 } } ] }
m30999| Fri Feb 22 12:22:55.606 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:22:55.607 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:22:55.607 [conn1] connected connection!
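An editorial aside on the trace above: after the shardcollection command, mongos creates 81 chunks, and the test later prints a per-shard chunk tally of the form `{ "shard0000" : 0, "shard0001" : 81 }`. That tally can be sketched as a simple aggregation over config.chunks-style documents. The chunk documents below are hypothetical stand-ins shaped like the ones printed later in this log; this is an illustrative sketch, not the test's own code.

```python
# Tally chunks per shard from config.chunks-style documents (hypothetical data).
from collections import Counter

chunks = [
    {"_id": "test.foo-_id_MinKey", "ns": "test.foo", "shard": "shard0001"},
    {"_id": "test.foo-_id_51.0", "ns": "test.foo", "shard": "shard0001"},
]
shards = ["shard0000", "shard0001"]

counts = Counter(c["shard"] for c in chunks if c["ns"] == "test.foo")
# Include shards holding zero chunks, as the test's printed summary does.
summary = {s: counts.get(s, 0) for s in shards}
print(summary)
```

The key detail mirrored from the log's output is that shards with no chunks still appear in the summary with a count of 0.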
m30000| Fri Feb 22 12:22:55.607 [initandlisten] connection accepted from 127.0.0.1:38940 #7 (7 connections now open)
m30999| Fri Feb 22 12:22:55.608 [conn1] found 81 new chunks for collection test.foo (tracking 81), new version is 0x1183b90
m30999| Fri Feb 22 12:22:55.609 [conn1] loaded 81 chunks into new chunk manager for test.foo with version 1|80||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:22:55.609 [conn1] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 2 version: 1|80||5127631ff72bccb1fac77345 based on: (empty)
m30000| Fri Feb 22 12:22:55.610 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 12:22:55.613 [conn3] build index done. scanned 0 total records. 0.003 secs
m30999| Fri Feb 22 12:22:55.614 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|80||5127631ff72bccb1fac77345 manager: 0x1183a20
m30999| Fri Feb 22 12:22:55.614 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|80, versionEpoch: ObjectId('5127631ff72bccb1fac77345'), serverID: ObjectId('5127631ef72bccb1fac77343'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181340 2
m30999| Fri Feb 22 12:22:55.614 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:22:55.614 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|80||5127631ff72bccb1fac77345 manager: 0x1183a20
m30999| Fri Feb 22 12:22:55.614 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|80, versionEpoch: ObjectId('5127631ff72bccb1fac77345'), serverID: ObjectId('5127631ef72bccb1fac77343'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x1181340 2
m30001| Fri Feb 22 12:22:55.614 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 12:22:55.615 [initandlisten] connection accepted from 127.0.0.1:65233 #8 (8 connections now open)
m30999| Fri Feb 22 12:22:55.616 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:22:55.616 [conn1] Request::process end ns: admin.$cmd msg id: 4191 op: 2004 attempt: 0 23ms
m30999| Fri Feb 22 12:22:55.617 [conn1] Request::process begin ns: config.$cmd msg id: 4192 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.617 [conn1] single query: config.$cmd { count: "chunks", query: {}, fields: {} } ntoreturn: -1 options : 0
m30999| Fri Feb 22 12:22:55.617 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: {} }, fields: {} } and CInfo { v_ns: "config.chunks", filter: {} }
m30999| Fri Feb 22 12:22:55.617 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.617 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false,
init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.617 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.617 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.617 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 81.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.618 [conn1] Request::process end ns: config.$cmd msg id: 4192 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.618 [conn1] Request::process begin ns: config.shards msg id: 4193 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.618 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: {
conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.618 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.618 [conn1] Request::process end ns: config.shards msg id: 4193 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.619 [conn1] Request::process begin ns: config.chunks msg id: 4194 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.619 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish:
false, errored: false }
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.619 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.619 [conn1] Request::process end ns: config.chunks msg id: 4194 op: 2004 attempt: 0 0ms
{ "shard0000" : 0, "shard0001" : 81 }
m30999| Fri Feb 22 12:22:55.622 [conn1] Request::process begin ns: config.shards msg id: 4195 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.622 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true,
finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] Request::process end ns: config.shards msg id: 4195 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.622 [conn1] Request::process begin ns: config.chunks msg id: 4196 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.622 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.622
[conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.622 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.623 [conn1] Request::process end ns: config.chunks msg id: 4196 op: 2004 attempt: 0 0ms
{ "shard0000" : 0, "shard0001" : 81 }
m30999| Fri Feb 22 12:22:55.627 [conn1] Request::process begin ns: config.shards msg id: 4197 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.627 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22
12:22:55.627 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.627 [conn1] Request::process end ns: config.shards msg id: 4197 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:22:55.627 [conn1] Request::process begin ns: config.chunks msg id: 4198 op: 2004 attempt: 0
m30999| Fri Feb 22 12:22:55.627 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.627 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:22:55.628 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000",
vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:22:55.628 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:22:55.628 [conn1] Request::process end ns: config.chunks msg id: 4198 op: 2004 attempt: 0 0ms
{ "shard0000" : 0, "shard0001" : 81 }
m30999| Fri Feb 22 12:23:00.630 [conn1] Request::process begin ns: config.shards msg id: 4199 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:00.630 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:00.630 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:00.630 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:00.630 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:00.630 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:00.630 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:00.630 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn:
"localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:00.631 [conn1] Request::process end ns: config.shards msg id: 4199 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:23:00.631 [conn1] Request::process begin ns: config.chunks msg id: 4200 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:00.631 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false,
errored: false }
m30999| Fri Feb 22 12:23:00.631 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:00.632 [conn1] Request::process end ns: config.chunks msg id: 4200 op: 2004 attempt: 0 0ms
{ "shard0000" : 0, "shard0001" : 81 }
m30999| Fri Feb 22 12:23:00.881 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:00.881 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:23:00.881 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:23:00 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276324f72bccb1fac77346" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127631ef72bccb1fac77344" } }
m30999| Fri Feb 22 12:23:00.882 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 51276324f72bccb1fac77346
m30999| Fri Feb 22 12:23:00.882 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:23:00.882 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:23:00.882 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 12:23:00.885 [conn3] build index config.tags { _id: 1 }
m30000| Fri Feb 22 12:23:00.888 [conn3] build index
done. scanned 0 total records. 0.003 secs
m30000| Fri Feb 22 12:23:00.888 [conn3] info: creating collection config.tags on add index
m30000| Fri Feb 22 12:23:00.888 [conn3] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 12:23:00.890 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:23:00.890 [Balancer] shard0001 has more chunks me:81 best: shard0000:0
m30999| Fri Feb 22 12:23:00.890 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:23:00.890 [Balancer] donor : shard0001 chunks on 81
m30999| Fri Feb 22 12:23:00.890 [Balancer] receiver : shard0000 chunks on 0
m30999| Fri Feb 22 12:23:00.890 [Balancer] threshold : 8
m30999| Fri Feb 22 12:23:00.890 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:23:00.890 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 51.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:23:00.890 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:23:00.891 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 51.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:23:00.891 [initandlisten] connection accepted from 127.0.0.1:42565 #9 (9 connections now open)
m30001| Fri Feb 22 12:23:00.892 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619 (sleeping for 30000ms)
m30001|
Fri Feb 22 12:23:00.893 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' acquired, ts : 512763249fece98eaa810888 m30001| Fri Feb 22 12:23:00.893 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:00-512763249fece98eaa810889", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535780893), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:23:00.894 [conn4] moveChunk request accepted at version 1|80||5127631ff72bccb1fac77345 m30001| Fri Feb 22 12:23:00.894 [conn4] moveChunk number of documents: 51 m30000| Fri Feb 22 12:23:00.895 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 51.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:23:00.895 [initandlisten] connection accepted from 127.0.0.1:38318 #5 (5 connections now open) m30000| Fri Feb 22 12:23:00.896 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/test.ns, filling with zeroes... m30000| Fri Feb 22 12:23:00.896 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/test.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:23:00.896 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/test.0, filling with zeroes... m30000| Fri Feb 22 12:23:00.897 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/test.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:23:00.897 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance30/test.1, filling with zeroes... 
m30000| Fri Feb 22 12:23:00.897 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance30/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:23:00.900 [migrateThread] build index test.foo { _id: 1 }
m30000| Fri Feb 22 12:23:00.901 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:23:00.902 [migrateThread] info: creating collection test.foo on add index
m30001| Fri Feb 22 12:23:00.905 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:23:00.910 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:23:00.910 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:23:00.913 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:23:00.915 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:23:00.915 [conn4] moveChunk setting version to: 2|0||5127631ff72bccb1fac77345
m30000| Fri Feb 22 12:23:00.916 [initandlisten] connection accepted from 127.0.0.1:56971 #10 (10 connections now open)
m30000| Fri Feb 22 12:23:00.916 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:23:00.923 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:23:00.923 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m30000| Fri Feb 22 12:23:00.923 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:00-512763240380af7a3bb2d55a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535780923), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 5: 6, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 13 } }
m30000| Fri Feb 22 12:23:00.924 [initandlisten] connection accepted from 127.0.0.1:34808 #11 (11 connections now open)
m30001| Fri Feb 22 12:23:00.926 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:23:00.926 [conn4] moveChunk updating self version to: 2|1||5127631ff72bccb1fac77345 through { _id: 51.0 } -> { _id: 103.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:23:00.926 [initandlisten] connection accepted from 127.0.0.1:33714 #12 (12 connections now open)
m30001| Fri Feb 22 12:23:00.927 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:00-512763249fece98eaa81088a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535780927), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:00.927 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:00.927 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:00.927 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:23:00.927 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:00.927 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:00.927 [cleanupOldData-512763249fece98eaa81088b] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 51.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:23:00.927 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' unlocked.
m30001| Fri Feb 22 12:23:00.927 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:00-512763249fece98eaa81088c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535780927), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:23:00.928 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:23:00.928 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 1|80||5127631ff72bccb1fac77345 and 81 chunks
m30999| Fri Feb 22 12:23:00.928 [Balancer] major version query from 1|80||5127631ff72bccb1fac77345 and over 1 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 1000|80 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 1000|80 } } ] }
m30999| Fri Feb 22 12:23:00.929 [Balancer] found 3 new chunks for collection test.foo (tracking 3), new version is 0x117ed80
m30999| Fri Feb 22 12:23:00.929 [Balancer] loaded 3 chunks into new chunk manager for test.foo with version 2|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:00.929 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||5127631ff72bccb1fac77345 based on: 1|80||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:00.929 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:23:00.929 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
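The round above logs one complete moveChunk cycle end to end, from the Balancer's "going to move" decision through the donor's moveChunk.from event. As an aside (not part of the test itself), here is a small Python sketch of how one might extract the namespace, donor, and receiver from Balancer "going to move" lines; the regex is an assumption tailored to the exact format visible in this log:

```python
import re

# Hypothetical helper: pull (namespace, donor, receiver) out of mongos
# Balancer "going to move ... from: X to: Y tag" lines like the ones above.
MOVE_RE = re.compile(
    r"\[Balancer\] ns: (\S+) going to move .* from: (\S+) to: (\S+) tag"
)

def parse_moves(log_text):
    """Return a list of (ns, donor_shard, receiver_shard) triples."""
    return MOVE_RE.findall(log_text)

sample = ('m30999| Fri Feb 22 12:23:00.890 [Balancer] ns: test.foo going to move '
          '{ _id: "test.foo-_id_MinKey", shard: "shard0001" } '
          'from: shard0001 to: shard0000 tag []')
print(parse_moves(sample))  # [('test.foo', 'shard0001', 'shard0000')]
```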
m30001| Fri Feb 22 12:23:00.947 [cleanupOldData-512763249fece98eaa81088b] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:23:00.947 [cleanupOldData-512763249fece98eaa81088b] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 51.0 }
m30001| Fri Feb 22 12:23:00.951 [cleanupOldData-512763249fece98eaa81088b] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m30999| Fri Feb 22 12:23:01.930 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:01.930 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:23:01.931 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:23:01 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276325f72bccb1fac77347" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276324f72bccb1fac77346" } }
m30999| Fri Feb 22 12:23:01.931 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 51276325f72bccb1fac77347
m30999| Fri Feb 22 12:23:01.931 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:23:01.931 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:23:01.931 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:23:01.933 [Balancer] shard0001 has more chunks me:80 best: shard0000:1
m30999| Fri Feb 22 12:23:01.933 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:23:01.933 [Balancer] donor : shard0001 chunks on 80
m30999| Fri Feb 22 12:23:01.933 [Balancer] receiver : shard0000 chunks on 1
m30999| Fri Feb 22 12:23:01.933 [Balancer] threshold : 2
m30999| Fri Feb 22 12:23:01.933 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_51.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: 51.0 }, max: { _id: 103.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:23:01.933 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 51.0 }max: { _id: 103.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:23:01.933 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:23:01.934 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 51.0 }, max: { _id: 103.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_51.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:23:01.935 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' acquired, ts : 512763259fece98eaa81088d
m30001| Fri Feb 22 12:23:01.935 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:01-512763259fece98eaa81088e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535781935), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:01.936 [conn4] moveChunk request accepted at version 2|1||5127631ff72bccb1fac77345
m30001| Fri Feb 22 12:23:01.936 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:23:01.936 [migrateThread] starting receiving-end of migration of chunk { _id: 51.0 } -> { _id: 103.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:23:01.943 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:23:01.943 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:23:01.945 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:23:01.946 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:23:01.946 [conn4] moveChunk setting version to: 3|0||5127631ff72bccb1fac77345
m30000| Fri Feb 22 12:23:01.947 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:23:01.955 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:23:01.955 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 }
m30000| Fri Feb 22 12:23:01.955 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:01-512763250380af7a3bb2d55b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535781955), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:23:01.957 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:23:01.957 [conn4] moveChunk updating self version to: 3|1||5127631ff72bccb1fac77345 through { _id: 103.0 } -> { _id: 155.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:23:01.957 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:01-512763259fece98eaa81088f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535781957), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:01.957 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:01.957 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:01.958 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:23:01.958 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:01.958 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:01.958 [cleanupOldData-512763259fece98eaa810890] (start) waiting to cleanup test.foo from { _id: 51.0 } -> { _id: 103.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:23:01.958 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' unlocked.
m30001| Fri Feb 22 12:23:01.958 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:01-512763259fece98eaa810891", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535781958), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:23:01.958 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:23:01.959 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 2|1||5127631ff72bccb1fac77345 and 81 chunks
m30999| Fri Feb 22 12:23:01.959 [Balancer] major version query from 2|1||5127631ff72bccb1fac77345 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 2000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 2000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 2000|1 } } ] }
m30999| Fri Feb 22 12:23:01.960 [Balancer] found 2 new chunks for collection test.foo (tracking 2), new version is 0x1183b90
m30999| Fri Feb 22 12:23:01.960 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 3|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:01.960 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 4 version: 3|1||5127631ff72bccb1fac77345 based on: 2|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:01.960 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:23:01.960 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
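Across these rounds the Balancer log repeats one selection pattern: the shard holding the most chunks is the donor, the shard holding the fewest is the receiver, and one chunk moves per round as long as the imbalance is at least the logged threshold. A minimal model of that pick, assuming only what these log lines show (the real balancer also weighs tag ranges, draining shards, and other inputs):

```python
# Simplified sketch of the donor/receiver choice visible in the Balancer
# lines above. Illustration only, not the actual balancer implementation.
def pick_migration(chunk_counts, threshold):
    """chunk_counts: {shard_name: chunk_count}. Return (donor, receiver)
    for one migration, or None if the shards are balanced enough."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None  # within threshold: no move this round
    return donor, receiver

# Second round in the log: donor shard0001 has 80 chunks, receiver has 1.
print(pick_migration({"shard0000": 1, "shard0001": 80}, threshold=2))
# ('shard0001', 'shard0000')
```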
m30001| Fri Feb 22 12:23:01.978 [cleanupOldData-512763259fece98eaa810890] waiting to remove documents for test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:23:01.978 [cleanupOldData-512763259fece98eaa810890] moveChunk starting delete for: test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30001| Fri Feb 22 12:23:01.982 [cleanupOldData-512763259fece98eaa810890] moveChunk deleted 52 documents for test.foo from { _id: 51.0 } -> { _id: 103.0 }
m30999| Fri Feb 22 12:23:02.961 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:02.961 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:23:02.961 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:23:02 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276326f72bccb1fac77348" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276325f72bccb1fac77347" } }
m30999| Fri Feb 22 12:23:02.962 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 51276326f72bccb1fac77348
m30999| Fri Feb 22 12:23:02.962 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:23:02.962 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:23:02.962 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:23:02.964 [Balancer] shard0001 has more chunks me:79 best: shard0000:2
m30999| Fri Feb 22 12:23:02.964 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:23:02.964 [Balancer] donor : shard0001 chunks on 79
m30999| Fri Feb 22 12:23:02.964 [Balancer] receiver : shard0000 chunks on 2
m30999| Fri Feb 22 12:23:02.964 [Balancer] threshold : 2
m30999| Fri Feb 22 12:23:02.964 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_103.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: 103.0 }, max: { _id: 155.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:23:02.964 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 103.0 }max: { _id: 155.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:23:02.964 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:23:02.964 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 103.0 }, max: { _id: 155.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_103.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:23:02.965 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' acquired, ts : 512763269fece98eaa810892
m30001| Fri Feb 22 12:23:02.965 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:02-512763269fece98eaa810893", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535782965), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:02.966 [conn4] moveChunk request accepted at version 3|1||5127631ff72bccb1fac77345
m30001| Fri Feb 22 12:23:02.966 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:23:02.966 [migrateThread] starting receiving-end of migration of chunk { _id: 103.0 } -> { _id: 155.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:23:02.974 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:23:02.974 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:23:02.975 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:23:02.977 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:23:02.977 [conn4] moveChunk setting version to: 4|0||5127631ff72bccb1fac77345
m30000| Fri Feb 22 12:23:02.977 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:23:02.986 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:23:02.986 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 }
m30000| Fri Feb 22 12:23:02.986 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:02-512763260380af7a3bb2d55c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535782986), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:23:02.987 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:23:02.987 [conn4] moveChunk updating self version to: 4|1||5127631ff72bccb1fac77345 through { _id: 155.0 } -> { _id: 207.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:23:02.988 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:02-512763269fece98eaa810894", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535782988), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:02.988 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:02.988 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:02.988 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:23:02.988 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:02.988 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:02.988 [cleanupOldData-512763269fece98eaa810895] (start) waiting to cleanup test.foo from { _id: 103.0 } -> { _id: 155.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:23:02.988 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' unlocked.
m30001| Fri Feb 22 12:23:02.988 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:02-512763269fece98eaa810896", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535782988), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:23:02.988 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:23:02.989 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 3|1||5127631ff72bccb1fac77345 and 81 chunks
m30999| Fri Feb 22 12:23:02.989 [Balancer] major version query from 3|1||5127631ff72bccb1fac77345 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 3000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 3000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 3000|1 } } ] }
m30999| Fri Feb 22 12:23:02.989 [Balancer] found 2 new chunks for collection test.foo (tracking 2), new version is 0x117ed80
m30999| Fri Feb 22 12:23:02.989 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 4|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:02.989 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 4|1||5127631ff72bccb1fac77345 based on: 3|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:02.990 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:23:02.990 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
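The chunk versions threaded through these lines ("4|1||5127631ff72bccb1fac77345", "3|1||...", and so on) print as major|minor||epoch. As an illustration, a tiny parser for that textual form:

```python
# Parse a chunk version string of the shape "major|minor||epoch", the
# format these log lines print (illustration for reading the log only).
def parse_chunk_version(s):
    version, _, epoch = s.partition("||")
    major, _, minor = version.partition("|")
    return int(major), int(minor), epoch

print(parse_chunk_version("4|1||5127631ff72bccb1fac77345"))
# (4, 1, '5127631ff72bccb1fac77345')
```

The major component bumps on every migration (the donor's "moveChunk setting version to" lines), while the minor component distinguishes the chunks left behind on the donor.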
m30001| Fri Feb 22 12:23:03.008 [cleanupOldData-512763269fece98eaa810895] waiting to remove documents for test.foo from { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:23:03.008 [cleanupOldData-512763269fece98eaa810895] moveChunk starting delete for: test.foo from { _id: 103.0 } -> { _id: 155.0 }
m30001| Fri Feb 22 12:23:03.011 [cleanupOldData-512763269fece98eaa810895] moveChunk deleted 52 documents for test.foo from { _id: 103.0 } -> { _id: 155.0 }
m30999| Fri Feb 22 12:23:03.990 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:03.991 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:23:03.991 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:23:03 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276327f72bccb1fac77349" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276326f72bccb1fac77348" } }
m30999| Fri Feb 22 12:23:03.991 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 51276327f72bccb1fac77349
m30999| Fri Feb 22 12:23:03.991 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:23:03.991 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:23:03.991 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:23:03.993 [Balancer] shard0001 has more chunks me:78 best: shard0000:3
m30999| Fri Feb 22 12:23:03.993 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:23:03.993 [Balancer] donor : shard0001 chunks on 78
m30999| Fri Feb 22 12:23:03.993 [Balancer] receiver : shard0000 chunks on 3
m30999| Fri Feb 22 12:23:03.993 [Balancer] threshold : 2
m30999| Fri Feb 22 12:23:03.993 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_155.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: 155.0 }, max: { _id: 207.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:23:03.993 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 155.0 }max: { _id: 207.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:23:03.993 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:23:03.993 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 155.0 }, max: { _id: 207.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_155.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:23:03.994 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' acquired, ts : 512763279fece98eaa810897
m30001| Fri Feb 22 12:23:03.994 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:03-512763279fece98eaa810898", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535783994), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:03.994 [conn4] moveChunk request accepted at version 4|1||5127631ff72bccb1fac77345
m30001| Fri Feb 22 12:23:03.995 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:23:03.995 [migrateThread] starting receiving-end of migration of chunk { _id: 155.0 } -> { _id: 207.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:23:04.002 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:23:04.002 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30000| Fri Feb 22 12:23:04.003 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:23:04.005 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:23:04.005 [conn4] moveChunk setting version to: 5|0||5127631ff72bccb1fac77345
m30000| Fri Feb 22 12:23:04.005 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:23:04.013 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30000| Fri Feb 22 12:23:04.013 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 }
m30000| Fri Feb 22 12:23:04.013 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:04-512763280380af7a3bb2d55d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535784013), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:23:04.015 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:23:04.015 [conn4] moveChunk updating self version to: 5|1||5127631ff72bccb1fac77345 through { _id: 207.0 } -> { _id: 259.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:23:04.016 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:04-512763289fece98eaa810899", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535784016), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:04.016 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:04.016 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:04.016 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:23:04.016 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:04.016 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:04.016 [cleanupOldData-512763289fece98eaa81089a] (start) waiting to cleanup test.foo from { _id: 155.0 } -> { _id: 207.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:23:04.016 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' unlocked.
m30001| Fri Feb 22 12:23:04.016 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:04-512763289fece98eaa81089b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535784016), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:23:04.017 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:23:04.017 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 4|1||5127631ff72bccb1fac77345 and 81 chunks
m30999| Fri Feb 22 12:23:04.017 [Balancer] major version query from 4|1||5127631ff72bccb1fac77345 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 4000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 4000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 4000|1 } } ] }
m30999| Fri Feb 22 12:23:04.018 [Balancer] found 2 new chunks for collection test.foo (tracking 2), new version is 0x1183b90
m30999| Fri Feb 22 12:23:04.018 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 5|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:04.018 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 5|1||5127631ff72bccb1fac77345 based on: 4|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:04.018 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:23:04.018 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
m30001| Fri Feb 22 12:23:04.036 [cleanupOldData-512763289fece98eaa81089a] waiting to remove documents for test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:23:04.036 [cleanupOldData-512763289fece98eaa81089a] moveChunk starting delete for: test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30001| Fri Feb 22 12:23:04.039 [cleanupOldData-512763289fece98eaa81089a] moveChunk deleted 52 documents for test.foo from { _id: 155.0 } -> { _id: 207.0 }
m30999| Fri Feb 22 12:23:05.019 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:05.019 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838 )
m30999| Fri Feb 22 12:23:05.019 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:23:05 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276329f72bccb1fac7734a" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276327f72bccb1fac77349" } }
m30999| Fri Feb 22 12:23:05.020 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' acquired, ts : 51276329f72bccb1fac7734a
m30999| Fri Feb 22 12:23:05.020 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:23:05.020 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:23:05.020 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:23:05.021 [Balancer] shard0001 has more chunks me:77 best: shard0000:4
m30999| Fri Feb 22 12:23:05.022 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:23:05.022 [Balancer] donor : shard0001 chunks on 77
m30999| Fri Feb 22 12:23:05.022 [Balancer] receiver : shard0000 chunks on 4
m30999| Fri Feb 22 12:23:05.022 [Balancer] threshold : 2
m30999| Fri Feb 22 12:23:05.022 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_207.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: 207.0 }, max: { _id: 259.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:23:05.022 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 207.0 }max: { _id: 259.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:23:05.022 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:23:05.022 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 207.0 }, max: { _id: 259.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_207.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:23:05.023 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' acquired, ts : 512763299fece98eaa81089c
m30001| Fri Feb 22 12:23:05.023 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:05-512763299fece98eaa81089d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535785023), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:05.024 [conn4] moveChunk request accepted at version 5|1||5127631ff72bccb1fac77345
m30001| Fri Feb 22 12:23:05.024 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:23:05.025 [migrateThread] starting receiving-end of migration of chunk { _id: 207.0 } -> { _id: 259.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:23:05.032 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:23:05.032 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:23:05.034 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:23:05.035 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:23:05.035 [conn4] moveChunk setting version to: 6|0||5127631ff72bccb1fac77345
m30000| Fri Feb 22 12:23:05.035 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:23:05.044 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:23:05.044 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 }
m30000| Fri Feb 22 12:23:05.044 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:05-512763290380af7a3bb2d55e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535785044), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:23:05.045 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:23:05.046 [conn4] moveChunk updating self version to: 6|1||5127631ff72bccb1fac77345 through { _id: 259.0 } -> { _id: 311.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:23:05.046 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:05-512763299fece98eaa81089e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535785046), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:23:05.046 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:05.046 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:05.046 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:23:05.046 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:23:05.046 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:23:05.046 [cleanupOldData-512763299fece98eaa81089f] (start) waiting to cleanup test.foo from { _id: 207.0 } -> { _id: 259.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:23:05.047 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535780:1619' unlocked.
m30001| Fri Feb 22 12:23:05.047 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:23:05-512763299fece98eaa8108a0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:59237", time: new Date(1361535785047), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:23:05.047 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:23:05.048 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 5|1||5127631ff72bccb1fac77345 and 81 chunks
m30999| Fri Feb 22 12:23:05.048 [Balancer] major version query from 5|1||5127631ff72bccb1fac77345 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 5000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 5000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 5000|1 } } ] }
m30999| Fri Feb 22 12:23:05.048 [Balancer] found 2 new chunks for collection test.foo (tracking 2), new version is 0x1186f10
m30999| Fri Feb 22 12:23:05.048 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 6|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:05.048 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 6|1||5127631ff72bccb1fac77345 based on: 5|1||5127631ff72bccb1fac77345
m30999| Fri Feb 22 12:23:05.048 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:23:05.048 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' unlocked.
m30001| Fri Feb 22 12:23:05.066 [cleanupOldData-512763299fece98eaa81089f] waiting to remove documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:23:05.066 [cleanupOldData-512763299fece98eaa81089f] moveChunk starting delete for: test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30001| Fri Feb 22 12:23:05.069 [cleanupOldData-512763299fece98eaa81089f] moveChunk deleted 52 documents for test.foo from { _id: 207.0 } -> { _id: 259.0 }
m30999| Fri Feb 22 12:23:05.634 [conn1] Request::process begin ns: config.shards msg id: 4201 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.634 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.634 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.634 [conn1] Request::process end ns: config.shards msg id: 4201 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:23:05.634 [conn1] Request::process begin ns: config.chunks msg id: 4202 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.635 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.635 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.635 [conn1] Request::process end ns: config.chunks msg id: 4202 op: 2004 attempt: 0 0ms
{ "shard0000" : 5, "shard0001" : 76 }
* A
disabling the balancer
m30999| Fri Feb 22 12:23:05.638 [conn1] Request::process begin ns: config.settings msg id: 4203 op: 2001 attempt: 0
m30999| Fri Feb 22 12:23:05.638 [conn1] write: config.settings
m30999| Fri Feb 22 12:23:05.638 [conn1] Request::process end ns: config.settings msg id: 4203 op: 2001 attempt: 0 0ms
m30999| Fri Feb 22 12:23:05.638 [conn1] Request::process begin ns: config.settings msg id: 4204 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.638 [conn1] shard query: config.settings {}
m30999| Fri Feb 22 12:23:05.638 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.settings", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.638 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.638 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.638 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.638 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.638 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "chunksize", value: 1 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] Request::process end ns: config.settings msg id: 4204 op: 2004 attempt: 0 0ms
{ "_id" : "chunksize", "value" : 1 }
{ "_id" : "balancer", "stopped" : true }
* B
m30999| Fri Feb 22 12:23:05.639 [conn1] Request::process begin ns: config.shards msg id: 4205 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.639 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] Request::process end ns: config.shards msg id: 4205 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:23:05.639 [conn1] Request::process begin ns: config.chunks msg id: 4206 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.639 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.639 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.640 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.640 [conn1] Request::process end ns: config.chunks msg id: 4206 op: 2004 attempt: 0 0ms
{ "shard0000" : 5, "shard0001" : 76 }
71
m30999| Fri Feb 22 12:23:05.641 [conn1] Request::process begin ns: config.shards msg id: 4207 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.641 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.641 [conn1] Request::process end ns: config.shards msg id: 4207 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:23:05.641 [conn1] Request::process begin ns: config.chunks msg id: 4208 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.641 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.641 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.642 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.642 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.642 [conn1] Request::process end ns: config.chunks msg id: 4208 op: 2004 attempt: 0 0ms
{ "shard0000" : 5, "shard0001" : 76 }
m30999| Fri Feb 22 12:23:05.644 [conn1] Request::process begin ns: config.shards msg id: 4209 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.644 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] Request::process end ns: config.shards msg id: 4209 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:23:05.644 [conn1] Request::process begin ns: config.chunks msg id: 4210 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:05.644 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:05.644 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:05.645 [conn1] Request::process end ns: config.chunks msg id: 4210 op: 2004 attempt: 0 0ms
{ "shard0000" : 5, "shard0001" : 76 }
m30999| Fri Feb 22 12:23:06.049 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:06.049 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 12:23:10.646 [conn1] Request::process begin ns: config.shards msg id: 4211 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:10.646 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:10.646 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:10.646 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:10.646 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:10.646 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:10.646 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:10.646 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:10.647 [conn1] Request::process end ns: config.shards msg id: 4211 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:23:10.647 [conn1] Request::process begin ns: config.chunks msg id: 4212 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:10.647 [conn1] shard query: config.chunks { ns: "test.foo" }
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:10.647 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:10.648 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:23:10.648 [conn1] Request::process end ns: config.chunks msg id: 4212 op: 2004 attempt: 0 0ms
{ "shard0000" : 5, "shard0001" : 76 }
m30999| Fri Feb 22 12:23:12.050 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:23:12.050 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 12:23:15.650 [conn1] Request::process begin ns: config.shards msg id: 4213 op: 2004 attempt: 0
m30999| Fri Feb 22 12:23:15.650 [conn1] shard query: config.shards {}
m30999| Fri Feb 22 12:23:15.650 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:23:15.650 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:23:15.650 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:23:15.650 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:15.650 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:23:15.650 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000",
cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:15.651 [conn1] Request::process end ns: config.shards msg id: 4213 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:15.651 [conn1] Request::process begin ns: config.chunks msg id: 4214 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:15.651 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:15.651 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:15.652 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { 
_id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:15.652 [conn1] Request::process end ns: config.chunks msg id: 4214 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:18.051 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:18.051 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:20.654 [conn1] Request::process begin ns: config.shards msg id: 4215 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:20.654 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:20.654 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:20.654 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:20.654 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:20.654 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:20.654 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:20.654 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: 
"(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:20.655 [conn1] Request::process end ns: config.shards msg id: 4215 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:20.655 [conn1] Request::process begin ns: config.chunks msg id: 4216 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:20.655 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:20.655 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:20.656 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: 
ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:20.656 [conn1] Request::process end ns: config.chunks msg id: 4216 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:24.052 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:24.052 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:24.861 [LockPinger] distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' about to ping. m30999| Fri Feb 22 12:23:24.862 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:23:24 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838', sleeping for 30000ms m30999| Fri Feb 22 12:23:25.658 [conn1] Request::process begin ns: config.shards msg id: 4217 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:25.658 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:25.658 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:25.658 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:25.658 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:25.658 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:25.658 [conn1] [pcursor] finishing over 
1 shards m30999| Fri Feb 22 12:23:25.658 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:25.659 [conn1] Request::process end ns: config.shards msg id: 4217 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:25.659 [conn1] Request::process begin ns: config.chunks msg id: 4218 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:25.659 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { 
conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:25.659 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:25.659 [conn1] Request::process end ns: config.chunks msg id: 4218 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:30.052 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:30.053 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:30.662 [conn1] Request::process begin ns: config.shards msg id: 4219 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:30.662 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:30.662 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:30.662 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:30.662 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:30.662 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:30.662 
[conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:30.662 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:30.662 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:30.663 [conn1] Request::process end ns: config.shards msg id: 4219 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:30.663 [conn1] Request::process begin ns: config.chunks msg id: 4220 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:30.663 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] finishing on shard config:localhost:30000, 
current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:30.663 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:30.663 [conn1] Request::process end ns: config.chunks msg id: 4220 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:35.666 [conn1] Request::process begin ns: config.shards msg id: 4221 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:35.666 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] finishing on shard 
config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:35.666 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:35.666 [conn1] Request::process end ns: config.shards msg id: 4221 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:35.667 [conn1] Request::process begin ns: config.chunks msg id: 4222 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:35.667 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", 
count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:35.667 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:35.667 [conn1] Request::process end ns: config.chunks msg id: 4222 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:36.053 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:36.053 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:40.670 [conn1] Request::process begin ns: config.shards msg id: 4223 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:40.670 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:40.670 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:40.670 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:40.670 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:40.670 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:40.670 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:40.670 
[conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:40.670 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:40.670 [conn1] Request::process end ns: config.shards msg id: 4223 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:40.671 [conn1] Request::process begin ns: config.chunks msg id: 4224 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:40.671 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: 
"config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:40.671 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:40.672 [conn1] Request::process end ns: config.chunks msg id: 4224 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:42.054 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:42.054 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:45.674 [conn1] Request::process begin ns: config.shards msg id: 4225 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:45.674 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] finishing 
over 1 shards m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:45.674 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:45.675 [conn1] Request::process end ns: config.shards msg id: 4225 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:45.675 [conn1] Request::process begin ns: config.chunks msg id: 4226 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:45.675 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { 
state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:45.675 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:45.675 [conn1] Request::process end ns: config.chunks msg id: 4226 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:48.055 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:48.055 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:50.678 [conn1] Request::process begin ns: config.shards msg id: 4227 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:50.678 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:50.678 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:50.678 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:50.678 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:50.678 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 
12:23:50.678 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:50.678 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:50.679 [conn1] Request::process end ns: config.shards msg id: 4227 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:50.679 [conn1] Request::process begin ns: config.chunks msg id: 4228 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:50.679 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] finishing on shard 
config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:50.679 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:50.679 [conn1] Request::process end ns: config.chunks msg id: 4228 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:23:54.056 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:23:54.056 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:23:54.862 [LockPinger] distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838' about to ping. 
m30999| Fri Feb 22 12:23:54.863 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:23:54 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535774:16838', sleeping for 30000ms m30999| Fri Feb 22 12:23:54.870 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 0ms m30999| Fri Feb 22 12:23:54.870 [PeriodicTask::Runner] task: DBConnectionPool-cleaner took: 0ms m30999| Fri Feb 22 12:23:55.682 [conn1] Request::process begin ns: config.shards msg id: 4229 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:55.682 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", 
cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:55.682 [conn1] Request::process end ns: config.shards msg id: 4229 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:23:55.682 [conn1] Request::process begin ns: config.chunks msg id: 4230 op: 2004 attempt: 0 m30999| Fri Feb 22 12:23:55.682 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:23:55.682 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:23:55.683 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:23:55.683 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:23:55.683 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:55.683 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:23:55.683 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:23:55.683 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { 
_id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:23:55.683 [conn1] Request::process end ns: config.chunks msg id: 4230 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:24:00.057 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:00.057 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:24:00.686 [conn1] Request::process begin ns: config.shards msg id: 4231 op: 2004 attempt: 0 m30999| Fri Feb 22 12:24:00.686 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: 
"(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:24:00.686 [conn1] Request::process end ns: config.shards msg id: 4231 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:24:00.686 [conn1] Request::process begin ns: config.chunks msg id: 4232 op: 2004 attempt: 0 m30999| Fri Feb 22 12:24:00.686 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:24:00.686 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:00.687 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: 
ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:24:00.687 [conn1] Request::process end ns: config.chunks msg id: 4232 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:24:05.689 [conn1] Request::process begin ns: config.shards msg id: 4233 op: 2004 attempt: 0 m30999| Fri Feb 22 12:24:05.690 [conn1] shard query: config.shards {} m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: 
false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:24:05.690 [conn1] Request::process end ns: config.shards msg id: 4233 op: 2004 attempt: 0 0ms m30999| Fri Feb 22 12:24:05.690 [conn1] Request::process begin ns: config.chunks msg id: 4234 op: 2004 attempt: 0 m30999| Fri Feb 22 12:24:05.690 [conn1] shard query: config.chunks { ns: "test.foo" } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:24:05.690 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:24:05.691 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127631ff72bccb1fac77345'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "shard0000" }, count: 0, done: 
false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:24:05.691 [conn1] Request::process end ns: config.chunks msg id: 4234 op: 2004 attempt: 0 0ms { "shard0000" : 5, "shard0001" : 76 } m30999| Fri Feb 22 12:24:05.693 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 12:24:05.707 [conn3] end connection 127.0.0.1:36104 (4 connections now open) m30001| Fri Feb 22 12:24:05.707 [conn4] end connection 127.0.0.1:59237 (4 connections now open) m30000| Fri Feb 22 12:24:05.710 [conn5] end connection 127.0.0.1:43448 (11 connections now open) m30000| Fri Feb 22 12:24:05.710 [conn3] end connection 127.0.0.1:44163 (11 connections now open) m30000| Fri Feb 22 12:24:05.710 [conn7] end connection 127.0.0.1:38940 (11 connections now open) m30000| Fri Feb 22 12:24:05.710 [conn6] end connection 127.0.0.1:50465 (11 connections now open) Fri Feb 22 12:24:06.693 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 12:24:06.694 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 12:24:06.694 [interruptThread] now exiting m30000| Fri Feb 22 12:24:06.694 dbexit: m30000| Fri Feb 22 12:24:06.694 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 12:24:06.694 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 12:24:06.694 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 12:24:06.694 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 12:24:06.694 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 12:24:06.694 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 12:24:06.694 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 12:24:06.694 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 12:24:06.694 [interruptThread] shutdown: lock for final commit... 
m30000| Fri Feb 22 12:24:06.694 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 12:24:06.694 [conn1] end connection 127.0.0.1:38532 (7 connections now open) m30000| Fri Feb 22 12:24:06.694 [conn2] end connection 127.0.0.1:44285 (7 connections now open) m30000| Fri Feb 22 12:24:06.694 [conn8] end connection 127.0.0.1:65233 (7 connections now open) m30001| Fri Feb 22 12:24:06.694 [conn5] end connection 127.0.0.1:38318 (2 connections now open) m30000| Fri Feb 22 12:24:06.694 [conn10] end connection 127.0.0.1:56971 (7 connections now open) m30000| Fri Feb 22 12:24:06.694 [conn12] end connection 127.0.0.1:33714 (7 connections now open) m30000| Fri Feb 22 12:24:06.694 [conn9] end connection 127.0.0.1:42565 (7 connections now open) m30000| Fri Feb 22 12:24:06.694 [conn11] end connection 127.0.0.1:34808 (7 connections now open) m30000| Fri Feb 22 12:24:06.722 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 12:24:06.725 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 12:24:06.725 [interruptThread] journalCleanup... m30000| Fri Feb 22 12:24:06.725 [interruptThread] removeJournalFiles m30000| Fri Feb 22 12:24:06.725 dbexit: really exiting now Fri Feb 22 12:24:07.694 shell: stopped mongo program on port 30000 m30001| Fri Feb 22 12:24:07.694 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 12:24:07.694 [interruptThread] now exiting m30001| Fri Feb 22 12:24:07.694 dbexit: m30001| Fri Feb 22 12:24:07.694 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 12:24:07.694 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 12:24:07.694 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 12:24:07.694 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 12:24:07.694 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 12:24:07.694 [interruptThread] shutdown: going to flush diaglog... 
m30001| Fri Feb 22 12:24:07.694 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 12:24:07.694 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 12:24:07.694 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 12:24:07.694 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 12:24:07.694 [conn1] end connection 127.0.0.1:34056 (1 connection now open) m30001| Fri Feb 22 12:24:07.706 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 12:24:07.709 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 12:24:07.709 [interruptThread] journalCleanup... m30001| Fri Feb 22 12:24:07.709 [interruptThread] removeJournalFiles m30001| Fri Feb 22 12:24:07.709 dbexit: really exiting now Fri Feb 22 12:24:08.694 shell: stopped mongo program on port 30001 *** ShardingTest slow_sharding_balance3 completed successfully in 74.317 seconds *** Fri Feb 22 12:24:08.726 [conn10] end connection 127.0.0.1:56744 (0 connections now open) 1.2428 minutes Fri Feb 22 12:24:08.746 [initandlisten] connection accepted from 127.0.0.1:56996 #11 (1 connection now open) Fri Feb 22 12:24:08.747 [conn11] end connection 127.0.0.1:56996 (0 connections now open) ******************************************* Test : sharding_balance4.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance4.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance4.js";TestData.testFile = "sharding_balance4.js";TestData.testName = "sharding_balance4";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 12:24:08 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 12:24:08.922 [initandlisten] connection accepted from 127.0.0.1:50639 #12 (1 connection now open) null Resetting db path '/data/db/slow_sharding_balance40' Fri Feb 22 12:24:08.936 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/slow_sharding_balance40 --setParameter enableTestCommands=1 m30000| Fri Feb 22 12:24:09.028 [initandlisten] MongoDB starting : pid=11534 port=30000 dbpath=/data/db/slow_sharding_balance40 64-bit host=bs-smartos-x86-64-1.10gen.cc m30000| Fri Feb 22 12:24:09.028 [initandlisten] m30000| Fri Feb 22 12:24:09.028 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30000| Fri Feb 22 12:24:09.028 [initandlisten] ** uses to detect impending page faults. 
m30000| Fri Feb 22 12:24:09.028 [initandlisten] ** This may result in slower performance for certain use cases m30000| Fri Feb 22 12:24:09.028 [initandlisten] m30000| Fri Feb 22 12:24:09.028 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30000| Fri Feb 22 12:24:09.028 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30000| Fri Feb 22 12:24:09.028 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30000| Fri Feb 22 12:24:09.028 [initandlisten] allocator: system m30000| Fri Feb 22 12:24:09.028 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance40", port: 30000, setParameter: [ "enableTestCommands=1" ] } m30000| Fri Feb 22 12:24:09.028 [initandlisten] journal dir=/data/db/slow_sharding_balance40/journal m30000| Fri Feb 22 12:24:09.029 [initandlisten] recover : no journal files present, no recovery needed m30000| Fri Feb 22 12:24:09.044 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/local.ns, filling with zeroes... m30000| Fri Feb 22 12:24:09.045 [FileAllocator] creating directory /data/db/slow_sharding_balance40/_tmp m30000| Fri Feb 22 12:24:09.045 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/local.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:24:09.045 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/local.0, filling with zeroes... 
m30000| Fri Feb 22 12:24:09.045 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/local.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:24:09.048 [initandlisten] waiting for connections on port 30000 m30000| Fri Feb 22 12:24:09.049 [websvr] admin web console waiting for connections on port 31000 m30000| Fri Feb 22 12:24:09.139 [initandlisten] connection accepted from 127.0.0.1:55681 #1 (1 connection now open) Resetting db path '/data/db/slow_sharding_balance41' Fri Feb 22 12:24:09.143 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/slow_sharding_balance41 --setParameter enableTestCommands=1 m30001| Fri Feb 22 12:24:09.230 [initandlisten] MongoDB starting : pid=11537 port=30001 dbpath=/data/db/slow_sharding_balance41 64-bit host=bs-smartos-x86-64-1.10gen.cc m30001| Fri Feb 22 12:24:09.230 [initandlisten] m30001| Fri Feb 22 12:24:09.230 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30001| Fri Feb 22 12:24:09.230 [initandlisten] ** uses to detect impending page faults. 
m30001| Fri Feb 22 12:24:09.230 [initandlisten] ** This may result in slower performance for certain use cases m30001| Fri Feb 22 12:24:09.230 [initandlisten] m30001| Fri Feb 22 12:24:09.231 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30001| Fri Feb 22 12:24:09.231 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30001| Fri Feb 22 12:24:09.231 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30001| Fri Feb 22 12:24:09.231 [initandlisten] allocator: system m30001| Fri Feb 22 12:24:09.231 [initandlisten] options: { dbpath: "/data/db/slow_sharding_balance41", port: 30001, setParameter: [ "enableTestCommands=1" ] } m30001| Fri Feb 22 12:24:09.231 [initandlisten] journal dir=/data/db/slow_sharding_balance41/journal m30001| Fri Feb 22 12:24:09.231 [initandlisten] recover : no journal files present, no recovery needed m30001| Fri Feb 22 12:24:09.245 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance41/local.ns, filling with zeroes... m30001| Fri Feb 22 12:24:09.245 [FileAllocator] creating directory /data/db/slow_sharding_balance41/_tmp m30001| Fri Feb 22 12:24:09.246 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance41/local.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 12:24:09.246 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance41/local.0, filling with zeroes... 
m30001| Fri Feb 22 12:24:09.246 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance41/local.0, size: 64MB, took 0 secs m30001| Fri Feb 22 12:24:09.249 [initandlisten] waiting for connections on port 30001 m30001| Fri Feb 22 12:24:09.249 [websvr] admin web console waiting for connections on port 31001 m30001| Fri Feb 22 12:24:09.344 [initandlisten] connection accepted from 127.0.0.1:42535 #1 (1 connection now open) "localhost:30000" m30000| Fri Feb 22 12:24:09.345 [initandlisten] connection accepted from 127.0.0.1:46467 #2 (2 connections now open) ShardingTest slow_sharding_balance4 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] } Fri Feb 22 12:24:09.353 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 1 --setParameter enableTestCommands=1 m30999| Fri Feb 22 12:24:09.371 warning: running with 1 config server should be done only for testing purposes and is not recommended for production m30999| Fri Feb 22 12:24:09.371 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=11541 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage) m30999| Fri Feb 22 12:24:09.371 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30999| Fri Feb 22 12:24:09.371 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30999| Fri Feb 22 12:24:09.371 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true } m30999| Fri Feb 22 12:24:09.372 [mongosMain] config string : localhost:30000 m30999| Fri Feb 22 12:24:09.372 [mongosMain] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:09.373 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:09.373 [mongosMain] connected connection! 
m30000| Fri Feb 22 12:24:09.373 [initandlisten] connection accepted from 127.0.0.1:49322 #3 (3 connections now open) m30999| Fri Feb 22 12:24:09.374 BackgroundJob starting: CheckConfigServers m30999| Fri Feb 22 12:24:09.374 [mongosMain] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:09.374 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 12:24:09.374 [initandlisten] connection accepted from 127.0.0.1:45633 #4 (4 connections now open) m30999| Fri Feb 22 12:24:09.374 [mongosMain] connected connection! m30000| Fri Feb 22 12:24:09.375 [conn4] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 12:24:09.386 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 12:24:09.387 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:09.387 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 (sleeping for 30000ms) m30999| Fri Feb 22 12:24:09.387 [mongosMain] inserting initial doc in config.locks for lock configUpgrade m30999| Fri Feb 22 12:24:09.387 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:mongosMain:5758", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:09 2013" }, m30999| "why" : "upgrading config database to new format v4", m30999| "ts" : { "$oid" : "51276369f1561f29e16720e0" } } m30999| { "_id" : "configUpgrade", m30999| "state" : 0 } m30000| Fri Feb 22 12:24:09.387 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/config.ns, filling with 
zeroes... m30000| Fri Feb 22 12:24:09.387 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/config.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:24:09.387 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/config.0, filling with zeroes... m30000| Fri Feb 22 12:24:09.388 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/config.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:24:09.388 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/config.1, filling with zeroes... m30000| Fri Feb 22 12:24:09.388 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/config.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:24:09.391 [conn3] build index config.lockpings { _id: 1 } m30000| Fri Feb 22 12:24:09.393 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:24:09.393 [conn4] build index config.locks { _id: 1 } m30000| Fri Feb 22 12:24:09.394 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:24:09.395 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:24:09 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838', sleeping for 30000ms m30000| Fri Feb 22 12:24:09.395 [conn3] build index config.lockpings { ping: new Date(1) } m30000| Fri Feb 22 12:24:09.395 [conn3] build index done. scanned 1 total records. 
0 secs m30999| Fri Feb 22 12:24:09.395 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276369f1561f29e16720e0 m30999| Fri Feb 22 12:24:09.398 [mongosMain] starting upgrade of config server from v0 to v4 m30999| Fri Feb 22 12:24:09.398 [mongosMain] starting next upgrade step from v0 to v4 m30999| Fri Feb 22 12:24:09.398 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-51276369f1561f29e16720e1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535849398), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } } m30000| Fri Feb 22 12:24:09.398 [conn4] build index config.changelog { _id: 1 } m30000| Fri Feb 22 12:24:09.399 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:24:09.399 [mongosMain] writing initial config version at v4 m30000| Fri Feb 22 12:24:09.399 [conn4] build index config.version { _id: 1 } m30000| Fri Feb 22 12:24:09.400 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:24:09.401 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-51276369f1561f29e16720e3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535849401), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } } m30999| Fri Feb 22 12:24:09.401 [mongosMain] upgrade of config server to v4 successful m30999| Fri Feb 22 12:24:09.401 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. 
m30000| Fri Feb 22 12:24:09.402 [conn3] build index config.settings { _id: 1 } m30999| Fri Feb 22 12:24:09.403 BackgroundJob starting: Balancer m30999| Fri Feb 22 12:24:09.403 [websvr] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 12:24:09.403 [Balancer] about to contact config servers and shards m30999| Fri Feb 22 12:24:09.403 BackgroundJob starting: cursorTimeout m30999| Fri Feb 22 12:24:09.403 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 12:24:09.403 BackgroundJob starting: PeriodicTask::Runner m30000| Fri Feb 22 12:24:09.403 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:24:09.403 [websvr] admin web console waiting for connections on port 31999 m30999| Fri Feb 22 12:24:09.404 [mongosMain] waiting for connections on port 30999 m30000| Fri Feb 22 12:24:09.404 [conn3] build index config.chunks { _id: 1 } m30000| Fri Feb 22 12:24:09.405 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:24:09.405 [conn3] info: creating collection config.chunks on add index m30000| Fri Feb 22 12:24:09.405 [conn3] build index config.chunks { ns: 1, min: 1 } m30000| Fri Feb 22 12:24:09.406 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:24:09.406 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } m30000| Fri Feb 22 12:24:09.407 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:24:09.407 [conn3] build index config.chunks { ns: 1, lastmod: 1 } m30000| Fri Feb 22 12:24:09.408 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:24:09.408 [conn3] build index config.shards { _id: 1 } m30000| Fri Feb 22 12:24:09.409 [conn3] build index done. scanned 0 total records. 
0 secs m30000| Fri Feb 22 12:24:09.409 [conn3] info: creating collection config.shards on add index m30000| Fri Feb 22 12:24:09.409 [conn3] build index config.shards { host: 1 } m30000| Fri Feb 22 12:24:09.410 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:24:09.410 [Balancer] config servers and shards contacted successfully m30999| Fri Feb 22 12:24:09.410 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:24:09 m30999| Fri Feb 22 12:24:09.410 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 12:24:09.410 [Balancer] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:09.411 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 12:24:09.411 [conn3] build index config.mongos { _id: 1 } m30999| Fri Feb 22 12:24:09.411 [Balancer] connected connection! m30000| Fri Feb 22 12:24:09.411 [initandlisten] connection accepted from 127.0.0.1:40241 #5 (5 connections now open) m30000| Fri Feb 22 12:24:09.412 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:24:09.412 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:09.412 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:09.412 [Balancer] inserting initial doc in config.locks for lock balancer m30999| Fri Feb 22 12:24:09.413 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:09 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276369f1561f29e16720e5" } } m30999| { "_id" : "balancer", m30999| "state" : 0 } m30999| Fri Feb 22 12:24:09.413 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276369f1561f29e16720e5 m30999| Fri Feb 22 12:24:09.413 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:09.413 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:09.413 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:09.413 [Balancer] no collections to balance m30999| Fri Feb 22 12:24:09.413 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:24:09.413 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:09.414 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. 
m30999| Fri Feb 22 12:24:09.554 [mongosMain] connection accepted from 127.0.0.1:36550 #1 (1 connection now open) ShardingTest undefined going to add shard : localhost:30000 m30999| Fri Feb 22 12:24:09.557 [conn1] couldn't find database [admin] in config db m30000| Fri Feb 22 12:24:09.558 [conn3] build index config.databases { _id: 1 } m30000| Fri Feb 22 12:24:09.558 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:24:09.559 [conn1] put [admin] on: config:localhost:30000 m30999| Fri Feb 22 12:24:09.560 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" } { "shardAdded" : "shard0000", "ok" : 1 } ShardingTest undefined going to add shard : localhost:30001 m30999| Fri Feb 22 12:24:09.562 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 12:24:09.562 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:09.562 [conn1] connected connection! m30001| Fri Feb 22 12:24:09.562 [initandlisten] connection accepted from 127.0.0.1:36213 #2 (2 connections now open) m30999| Fri Feb 22 12:24:09.564 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" } { "shardAdded" : "shard0001", "ok" : 1 } m30999| Fri Feb 22 12:24:09.565 [conn1] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:09.565 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:09.565 [conn1] connected connection! 
m30000| Fri Feb 22 12:24:09.565 [initandlisten] connection accepted from 127.0.0.1:57742 #6 (6 connections now open) m30999| Fri Feb 22 12:24:09.565 [conn1] creating WriteBackListener for: localhost:30000 serverID: 51276369f1561f29e16720e4 m30999| Fri Feb 22 12:24:09.565 [conn1] initializing shard connection to localhost:30000 m30999| Fri Feb 22 12:24:09.565 BackgroundJob starting: WriteBackListener-localhost:30000 m30999| Fri Feb 22 12:24:09.566 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 12:24:09.566 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:09.566 [conn1] connected connection! m30001| Fri Feb 22 12:24:09.566 [initandlisten] connection accepted from 127.0.0.1:37621 #3 (3 connections now open) m30999| Fri Feb 22 12:24:09.566 [conn1] creating WriteBackListener for: localhost:30001 serverID: 51276369f1561f29e16720e4 m30999| Fri Feb 22 12:24:09.566 [conn1] initializing shard connection to localhost:30001 m30999| Fri Feb 22 12:24:09.566 BackgroundJob starting: WriteBackListener-localhost:30001 Waiting for active hosts... Waiting for the balancer lock... Waiting again for active hosts after balancer is off... m30999| Fri Feb 22 12:24:09.569 [conn1] couldn't find database [test] in config db m30999| Fri Feb 22 12:24:09.569 [conn1] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:09.569 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 12:24:09.569 [initandlisten] connection accepted from 127.0.0.1:58249 #7 (7 connections now open) m30999| Fri Feb 22 12:24:09.569 [conn1] connected connection! m30999| Fri Feb 22 12:24:09.570 [conn1] creating new connection to:localhost:30001 m30999| Fri Feb 22 12:24:09.570 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:09.570 [conn1] connected connection! 
m30001| Fri Feb 22 12:24:09.570 [initandlisten] connection accepted from 127.0.0.1:46656 #4 (4 connections now open) m30999| Fri Feb 22 12:24:09.570 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:24:09.571 [conn1] put [test] on: shard0001:localhost:30001 m30999| Fri Feb 22 12:24:09.571 [conn1] enabling sharding on: test m30001| Fri Feb 22 12:24:09.573 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance41/test.ns, filling with zeroes... m30001| Fri Feb 22 12:24:09.573 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance41/test.ns, size: 16MB, took 0 secs m30001| Fri Feb 22 12:24:09.573 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance41/test.0, filling with zeroes... m30001| Fri Feb 22 12:24:09.573 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance41/test.0, size: 64MB, took 0 secs m30001| Fri Feb 22 12:24:09.574 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance41/test.1, filling with zeroes... m30001| Fri Feb 22 12:24:09.574 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance41/test.1, size: 128MB, took 0 secs m30001| Fri Feb 22 12:24:09.578 [conn4] build index test.foo { _id: 1 } m30001| Fri Feb 22 12:24:09.579 [conn4] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:24:09.579 [conn4] info: creating collection test.foo on add index m30999| Fri Feb 22 12:24:09.579 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } } m30999| Fri Feb 22 12:24:09.579 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 } m30999| Fri Feb 22 12:24:09.580 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:09.581 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||51276369f1561f29e16720e6 based on: (empty) m30000| Fri Feb 22 12:24:09.581 [conn3] build index config.collections { _id: 1 } m30000| Fri Feb 22 12:24:09.583 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:24:09.583 [conn1] resetting shard version of test.foo on localhost:30000, version is zero m30999| Fri Feb 22 12:24:09.583 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 0|0, versionEpoch: ObjectId('000000000000000000000000'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 2 m30999| Fri Feb 22 12:24:09.583 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 12:24:09.583 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 2 m30999| Fri Feb 22 12:24:09.583 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" } m30999| Fri Feb 22 12:24:09.584 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 2 m30001| Fri Feb 22 12:24:09.584 [conn3] no current chunk manager found for this shard, will initialize m30000| Fri Feb 22 12:24:09.584 [initandlisten] connection accepted from 127.0.0.1:56793 #8 (8 connections now open) m30999| Fri Feb 22 12:24:09.585 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } { "_id" : "chunksize", "value" : 1 } { "_id" : "balancer", "stopped" : true } m30999| Fri Feb 22 12:24:09.588 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 157301 splitThreshold: 921 m30001| Fri Feb 22 12:24:09.588 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.588 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } m30999| Fri Feb 22 12:24:09.588 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.589 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 10104 splitThreshold: 921 m30001| Fri Feb 22 12:24:09.589 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.589 [conn4] warning: chunk is larger than 1024 bytes 
because of key { _id: 0.0 } m30999| Fri Feb 22 12:24:09.589 [conn1] chunk not full enough to trigger auto-split { _id: 1.0 } m30999| Fri Feb 22 12:24:09.589 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 10104 splitThreshold: 921 m30001| Fri Feb 22 12:24:09.589 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.589 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.589 [conn4] warning: chunk is larger than 1024 bytes because of key { _id: 0.0 } m30001| Fri Feb 22 12:24:09.590 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "localhost:30000" } m30000| Fri Feb 22 12:24:09.591 [initandlisten] connection accepted from 127.0.0.1:62140 #9 (9 connections now open) m30001| Fri Feb 22 12:24:09.591 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131 (sleeping for 30000ms) m30001| Fri Feb 22 12:24:09.593 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4c5 m30001| Fri Feb 22 12:24:09.593 [conn4] splitChunk accepted at version 1|0||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:09.594 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4c6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849594), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: 
ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:09.595 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:09.595 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||51276369f1561f29e16720e6 based on: 1|0||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:09.596 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921) m30999| Fri Feb 22 12:24:09.596 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|2, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 3 m30999| Fri Feb 22 12:24:09.596 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:09.596 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 157301 splitThreshold: 471859 m30999| Fri Feb 22 12:24:09.596 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.598 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 101040 splitThreshold: 471859 m30999| Fri Feb 22 12:24:09.599 [conn1] chunk not full enough to trigger auto-split 
no split entry m30999| Fri Feb 22 12:24:09.600 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 101040 splitThreshold: 471859 m30999| Fri Feb 22 12:24:09.601 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.602 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 101040 splitThreshold: 471859 m30999| Fri Feb 22 12:24:09.602 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.604 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 101040 splitThreshold: 471859 m30999| Fri Feb 22 12:24:09.605 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.606 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 101040 splitThreshold: 471859 m30001| Fri Feb 22 12:24:09.606 [conn4] request split points lookup for chunk test.foo { : 0.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.607 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 0.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.607 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 53.0 } ], shardId: "test.foo-_id_0.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:09.608 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4c7 m30001| Fri Feb 22 12:24:09.609 [conn4] splitChunk accepted at 
version 1|2||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:09.609 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4c8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849609), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 53.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 53.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:09.610 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:09.611 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||51276369f1561f29e16720e6 based on: 1|2||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:09.611 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } on: { _id: 53.0 } (splitThreshold 471859) (migrate suggested, but no migrations allowed) m30999| Fri Feb 22 12:24:09.611 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|4, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 4 m30999| Fri Feb 22 12:24:09.611 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:09.612 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 
}max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718 m30999| Fri Feb 22 12:24:09.612 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.615 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30999| Fri Feb 22 12:24:09.616 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.619 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30999| Fri Feb 22 12:24:09.619 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.623 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.623 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.623 [conn1] chunk not full enough to trigger auto-split { _id: 104.0 } m30999| Fri Feb 22 12:24:09.626 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.626 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.627 [conn1] chunk not full enough to trigger auto-split { _id: 104.0 } m30999| Fri Feb 22 12:24:09.630 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.630 [conn4] request 
split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.630 [conn1] chunk not full enough to trigger auto-split { _id: 104.0 } m30999| Fri Feb 22 12:24:09.634 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.634 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.634 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 53.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.634 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 53.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 172.0 } ], shardId: "test.foo-_id_53.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:09.635 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4c9 m30001| Fri Feb 22 12:24:09.636 [conn4] splitChunk accepted at version 1|4||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:09.637 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4ca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849637), what: "split", ns: "test.foo", details: { before: { min: { _id: 53.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 53.0 }, max: { _id: 172.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 172.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:09.637 [conn4] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:09.638 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 1|6||51276369f1561f29e16720e6 based on: 1|4||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:09.638 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|4||000000000000000000000000min: { _id: 53.0 }max: { _id: MaxKey } on: { _id: 172.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30999| Fri Feb 22 12:24:09.638 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|6, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 5 m30999| Fri Feb 22 12:24:09.639 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:09.639 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.640 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.640 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.643 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.643 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.643 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.646 [conn1] about to initiate 
autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.647 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.647 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.650 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.650 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.650 [conn1] chunk not full enough to trigger auto-split { _id: 223.0 } m30999| Fri Feb 22 12:24:09.654 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.654 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.654 [conn1] chunk not full enough to trigger auto-split { _id: 223.0 } m30999| Fri Feb 22 12:24:09.669 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.669 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.669 [conn1] chunk not full enough to trigger auto-split { _id: 223.0 } m30999| Fri Feb 22 12:24:09.673 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 
12:24:09.673 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.673 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 172.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.673 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 172.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 291.0 } ], shardId: "test.foo-_id_172.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:09.674 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4cb m30001| Fri Feb 22 12:24:09.675 [conn4] splitChunk accepted at version 1|6||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:09.676 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4cc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849676), what: "split", ns: "test.foo", details: { before: { min: { _id: 172.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|6, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 172.0 }, max: { _id: 291.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 291.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:09.676 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:09.677 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 1|8||51276369f1561f29e16720e6 based on: 1|6||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:09.677 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 172.0 }max: { _id: MaxKey } on: { _id: 291.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed) m30999| Fri Feb 22 12:24:09.677 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|8, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 6 m30999| Fri Feb 22 12:24:09.678 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:09.678 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.679 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.679 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.682 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.682 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.682 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.686 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 
1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.686 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.686 [conn1] chunk not full enough to trigger auto-split no split entry m30999| Fri Feb 22 12:24:09.689 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.689 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.689 [conn1] chunk not full enough to trigger auto-split { _id: 342.0 } m30999| Fri Feb 22 12:24:09.693 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.693 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.693 [conn1] chunk not full enough to trigger auto-split { _id: 342.0 } m30999| Fri Feb 22 12:24:09.697 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.697 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : MaxKey } m30999| Fri Feb 22 12:24:09.697 [conn1] chunk not full enough to trigger auto-split { _id: 342.0 } m30999| Fri Feb 22 12:24:09.700 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|8||000000000000000000000000min: { _id: 291.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718 m30001| Fri Feb 22 12:24:09.700 [conn4] request split points lookup for chunk test.foo { : 
291.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.700 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 291.0 } -->> { : MaxKey } m30001| Fri Feb 22 12:24:09.701 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 291.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 410.0 } ], shardId: "test.foo-_id_291.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:09.702 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4cd m30001| Fri Feb 22 12:24:09.703 [conn4] splitChunk accepted at version 1|8||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:09.703 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4ce", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849703), what: "split", ns: "test.foo", details: { before: { min: { _id: 291.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|8, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 291.0 }, max: { _id: 410.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 410.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:09.704 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:09.705 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 1|10||51276369f1561f29e16720e6 based on: 1|8||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.705 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|8||000000000000000000000000 min: { _id: 291.0 } max: { _id: MaxKey } on: { _id: 410.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.705 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|10, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 7
m30999| Fri Feb 22 12:24:09.705 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.706 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.706 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.706 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.710 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.710 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.710 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.713 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.713 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.713 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.717 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.717 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.717 [conn1] chunk not full enough to trigger auto-split { _id: 461.0 }
m30999| Fri Feb 22 12:24:09.720 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.720 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.720 [conn1] chunk not full enough to trigger auto-split { _id: 461.0 }
m30999| Fri Feb 22 12:24:09.724 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.724 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.724 [conn1] chunk not full enough to trigger auto-split { _id: 461.0 }
m30999| Fri Feb 22 12:24:09.727 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.728 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.728 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 410.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.728 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 410.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 529.0 } ], shardId: "test.foo-_id_410.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.729 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4cf
m30001| Fri Feb 22 12:24:09.730 [conn4] splitChunk accepted at version 1|10||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.731 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4d0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849731), what: "split", ns: "test.foo", details: { before: { min: { _id: 410.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|10, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 410.0 }, max: { _id: 529.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 529.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.731 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.732 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 1|12||51276369f1561f29e16720e6 based on: 1|10||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.732 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|10||000000000000000000000000 min: { _id: 410.0 } max: { _id: MaxKey } on: { _id: 529.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.732 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|12, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 8
m30999| Fri Feb 22 12:24:09.732 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.733 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.733 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.733 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.737 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.738 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.738 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.741 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.741 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.741 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.745 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.745 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.745 [conn1] chunk not full enough to trigger auto-split { _id: 580.0 }
m30999| Fri Feb 22 12:24:09.749 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.749 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.749 [conn1] chunk not full enough to trigger auto-split { _id: 580.0 }
m30999| Fri Feb 22 12:24:09.752 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.752 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.753 [conn1] chunk not full enough to trigger auto-split { _id: 580.0 }
m30999| Fri Feb 22 12:24:09.756 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.756 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.756 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 529.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.757 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 529.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 648.0 } ], shardId: "test.foo-_id_529.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.758 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4d1
m30001| Fri Feb 22 12:24:09.758 [conn4] splitChunk accepted at version 1|12||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.759 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4d2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849759), what: "split", ns: "test.foo", details: { before: { min: { _id: 529.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|12, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 529.0 }, max: { _id: 648.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 648.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.759 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.760 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 1|14||51276369f1561f29e16720e6 based on: 1|12||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.761 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|12||000000000000000000000000 min: { _id: 529.0 } max: { _id: MaxKey } on: { _id: 648.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.761 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|14, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 9
m30999| Fri Feb 22 12:24:09.761 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.762 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.762 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.762 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.765 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.765 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.765 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.791 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.791 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.792 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.795 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.795 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.795 [conn1] chunk not full enough to trigger auto-split { _id: 699.0 }
m30999| Fri Feb 22 12:24:09.798 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.798 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.799 [conn1] chunk not full enough to trigger auto-split { _id: 699.0 }
m30999| Fri Feb 22 12:24:09.802 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.802 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.802 [conn1] chunk not full enough to trigger auto-split { _id: 699.0 }
m30999| Fri Feb 22 12:24:09.805 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.805 [conn4] request split points lookup for chunk test.foo { : 648.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.805 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 648.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.806 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 648.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 767.0 } ], shardId: "test.foo-_id_648.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.807 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4d3
m30001| Fri Feb 22 12:24:09.808 [conn4] splitChunk accepted at version 1|14||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.808 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4d4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849808), what: "split", ns: "test.foo", details: { before: { min: { _id: 648.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|14, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 648.0 }, max: { _id: 767.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 767.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.809 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.809 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 1|16||51276369f1561f29e16720e6 based on: 1|14||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.810 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|14||000000000000000000000000 min: { _id: 648.0 } max: { _id: MaxKey } on: { _id: 767.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.810 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|16, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 10
m30999| Fri Feb 22 12:24:09.810 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.811 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.811 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.811 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.814 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.814 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.814 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.818 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.818 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.818 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.821 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.821 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.821 [conn1] chunk not full enough to trigger auto-split { _id: 818.0 }
m30999| Fri Feb 22 12:24:09.824 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.824 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.825 [conn1] chunk not full enough to trigger auto-split { _id: 818.0 }
m30999| Fri Feb 22 12:24:09.828 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.828 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.828 [conn1] chunk not full enough to trigger auto-split { _id: 818.0 }
m30999| Fri Feb 22 12:24:09.832 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.832 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.832 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 767.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.833 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 767.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 886.0 } ], shardId: "test.foo-_id_767.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.834 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4d5
m30001| Fri Feb 22 12:24:09.835 [conn4] splitChunk accepted at version 1|16||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.835 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4d6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849835), what: "split", ns: "test.foo", details: { before: { min: { _id: 767.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|16, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 767.0 }, max: { _id: 886.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 886.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.836 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.836 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 1|18||51276369f1561f29e16720e6 based on: 1|16||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.837 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|16||000000000000000000000000 min: { _id: 767.0 } max: { _id: MaxKey } on: { _id: 886.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.837 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|18, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 11
m30999| Fri Feb 22 12:24:09.837 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.838 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.838 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.838 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.841 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.841 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.841 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.845 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.845 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.845 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.848 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.848 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.848 [conn1] chunk not full enough to trigger auto-split { _id: 937.0 }
m30999| Fri Feb 22 12:24:09.851 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.852 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.852 [conn1] chunk not full enough to trigger auto-split { _id: 937.0 }
m30999| Fri Feb 22 12:24:09.855 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.855 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.855 [conn1] chunk not full enough to trigger auto-split { _id: 937.0 }
m30999| Fri Feb 22 12:24:09.858 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.858 [conn4] request split points lookup for chunk test.foo { : 886.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.859 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 886.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.859 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 886.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1005.0 } ], shardId: "test.foo-_id_886.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.860 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4d7
m30001| Fri Feb 22 12:24:09.861 [conn4] splitChunk accepted at version 1|18||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.861 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4d8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849861), what: "split", ns: "test.foo", details: { before: { min: { _id: 886.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|18, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 886.0 }, max: { _id: 1005.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1005.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.862 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.863 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 1|20||51276369f1561f29e16720e6 based on: 1|18||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.863 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|18||000000000000000000000000 min: { _id: 886.0 } max: { _id: MaxKey } on: { _id: 1005.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.863 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|20, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 12
m30999| Fri Feb 22 12:24:09.863 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.864 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.864 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.864 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.867 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.867 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.868 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.871 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.871 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.871 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.874 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.874 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.875 [conn1] chunk not full enough to trigger auto-split { _id: 1056.0 }
m30999| Fri Feb 22 12:24:09.878 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.878 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.878 [conn1] chunk not full enough to trigger auto-split { _id: 1056.0 }
m30999| Fri Feb 22 12:24:09.881 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.881 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.882 [conn1] chunk not full enough to trigger auto-split { _id: 1056.0 }
m30999| Fri Feb 22 12:24:09.885 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|20||000000000000000000000000 min: { _id: 1005.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.885 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.885 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1005.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.885 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1005.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1124.0 } ], shardId: "test.foo-_id_1005.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.886 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4d9
m30001| Fri Feb 22 12:24:09.887 [conn4] splitChunk accepted at version 1|20||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.888 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4da", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849888), what: "split", ns: "test.foo", details: { before: { min: { _id: 1005.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|20, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1005.0 }, max: { _id: 1124.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1124.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.888 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.889 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 1|22||51276369f1561f29e16720e6 based on: 1|20||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.889 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|20||000000000000000000000000min: { _id: 1005.0 }max: { _id: MaxKey } on: { _id: 1124.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.889 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|22, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 13
m30999| Fri Feb 22 12:24:09.890 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.890 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.891 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.891 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.905 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.905 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.906 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.909 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.909 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.909 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.912 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.912 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.912 [conn1] chunk not full enough to trigger auto-split { _id: 1175.0 }
m30999| Fri Feb 22 12:24:09.916 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.916 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.916 [conn1] chunk not full enough to trigger auto-split { _id: 1175.0 }
m30999| Fri Feb 22 12:24:09.919 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.919 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.919 [conn1] chunk not full enough to trigger auto-split { _id: 1175.0 }
m30999| Fri Feb 22 12:24:09.923 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.923 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.923 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1124.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.923 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1124.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1243.0 } ], shardId: "test.foo-_id_1124.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.924 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4db
m30001| Fri Feb 22 12:24:09.925 [conn4] splitChunk accepted at version 1|22||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.926 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4dc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849926), what: "split", ns: "test.foo", details: { before: { min: { _id: 1124.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|22, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1124.0 }, max: { _id: 1243.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1243.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.926 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.927 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 1|24||51276369f1561f29e16720e6 based on: 1|22||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.927 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|22||000000000000000000000000min: { _id: 1124.0 }max: { _id: MaxKey } on: { _id: 1243.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.927 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|24, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 14
m30999| Fri Feb 22 12:24:09.927 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.928 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.928 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.929 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.932 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.932 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.932 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.935 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.935 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.935 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.939 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.939 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.939 [conn1] chunk not full enough to trigger auto-split { _id: 1294.0 }
m30999| Fri Feb 22 12:24:09.942 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.942 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.942 [conn1] chunk not full enough to trigger auto-split { _id: 1294.0 }
m30999| Fri Feb 22 12:24:09.945 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.946 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.946 [conn1] chunk not full enough to trigger auto-split { _id: 1294.0 }
m30999| Fri Feb 22 12:24:09.949 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.949 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.949 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1243.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.950 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1243.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1362.0 } ], shardId: "test.foo-_id_1243.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.951 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4dd
m30001| Fri Feb 22 12:24:09.951 [conn4] splitChunk accepted at version 1|24||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.952 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4de", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849952), what: "split", ns: "test.foo", details: { before: { min: { _id: 1243.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|24, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1243.0 }, max: { _id: 1362.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1362.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.952 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.953 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 1|26||51276369f1561f29e16720e6 based on: 1|24||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.954 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|24||000000000000000000000000min: { _id: 1243.0 }max: { _id: MaxKey } on: { _id: 1362.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.954 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|26, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 15
m30999| Fri Feb 22 12:24:09.954 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.955 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.955 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.955 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.958 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.958 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.958 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.961 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.962 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.962 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.965 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.965 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.965 [conn1] chunk not full enough to trigger auto-split { _id: 1413.0 }
m30999| Fri Feb 22 12:24:09.968 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.968 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.969 [conn1] chunk not full enough to trigger auto-split { _id: 1413.0 }
m30999| Fri Feb 22 12:24:09.972 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.972 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.972 [conn1] chunk not full enough to trigger auto-split { _id: 1413.0 }
m30999| Fri Feb 22 12:24:09.975 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.975 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.976 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1362.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:09.976 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1362.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1481.0 } ], shardId: "test.foo-_id_1362.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:09.977 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636957d9fef1ecdff4df
m30001| Fri Feb 22 12:24:09.978 [conn4] splitChunk accepted at version 1|26||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:09.978 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:09-5127636957d9fef1ecdff4e0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535849978), what: "split", ns: "test.foo", details: { before: { min: { _id: 1362.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|26, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1362.0 }, max: { _id: 1481.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1481.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:09.979 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:09.980 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 1|28||51276369f1561f29e16720e6 based on: 1|26||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:09.980 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|26||000000000000000000000000min: { _id: 1362.0 }max: { _id: MaxKey } on: { _id: 1481.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:09.980 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|28, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 16
m30999| Fri Feb 22 12:24:09.980 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:09.981 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 197717 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.981 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.981 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.984 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.985 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.985 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.988 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.988 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.988 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:09.991 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.991 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.992 [conn1] chunk not full enough to trigger auto-split { _id: 1532.0 }
m30999| Fri Feb 22 12:24:09.995 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.995 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.995 [conn1] chunk not full enough to trigger auto-split { _id: 1532.0 }
m30999| Fri Feb 22 12:24:09.998 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:09.998 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:09.998 [conn1] chunk not full enough to trigger auto-split { _id: 1532.0 }
m30999| Fri Feb 22 12:24:10.002 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.002 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.002 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1481.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.002 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1481.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1600.0 } ], shardId: "test.foo-_id_1481.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.003 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4e1
m30001| Fri Feb 22 12:24:10.004 [conn4] splitChunk accepted at version 1|28||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.005 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4e2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850005), what: "split", ns: "test.foo", details: { before: { min: { _id: 1481.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|28, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1481.0 }, max: { _id: 1600.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.005 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.006 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 1|30||51276369f1561f29e16720e6 based on: 1|28||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.006 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|28||000000000000000000000000min: { _id: 1481.0 }max: { _id: MaxKey } on: { _id: 1600.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.007 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|30, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 17
m30999| Fri Feb 22 12:24:10.007 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.024 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.024 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.024 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.030 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.030 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.031 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.036 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.036 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.036 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.041 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.041 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.041 [conn1] chunk not full enough to trigger auto-split { _id: 1651.0 }
m30999| Fri Feb 22 12:24:10.046 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.046 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.046 [conn1] chunk not full enough to trigger auto-split { _id: 1651.0 }
m30999| Fri Feb 22 12:24:10.051 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.051 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.052 [conn1] chunk not full enough to trigger auto-split { _id: 1651.0 }
m30999| Fri Feb 22 12:24:10.057 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.057 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.057 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1600.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.057 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1600.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1718.0 } ], shardId: "test.foo-_id_1600.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.058 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4e3
m30001| Fri Feb 22 12:24:10.059 [conn4] splitChunk accepted at version 1|30||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.060 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4e4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850060), what: "split", ns: "test.foo", details: { before: { min: { _id: 1600.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|30, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1600.0 }, max: { _id: 1718.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1718.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.060 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.061 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 1|32||51276369f1561f29e16720e6 based on: 1|30||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.061 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|30||000000000000000000000000min: { _id: 1600.0 }max: { _id: MaxKey } on: { _id: 1718.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.062 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|32, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 18
m30999| Fri Feb 22 12:24:10.062 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.063 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.063 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.063 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.067 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.067 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.067 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.071 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.071 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.072 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.076 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.076 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.076 [conn1] chunk not full enough to trigger auto-split { _id: 1769.0 }
m30999| Fri Feb 22 12:24:10.080 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.080 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.081 [conn1] chunk not full enough to trigger auto-split { _id: 1769.0 }
m30999| Fri Feb 22 12:24:10.085 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.085 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.085 [conn1] chunk not full enough to trigger auto-split { _id: 1769.0 }
m30999| Fri Feb 22 12:24:10.089 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|32||000000000000000000000000min: { _id: 1718.0 }max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.089 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.089 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1718.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.090 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1718.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1836.0 } ], shardId: "test.foo-_id_1718.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.091 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4e5
m30001| Fri Feb 22 12:24:10.091 [conn4] splitChunk accepted at version 1|32||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.092 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4e6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850092), what: "split", ns: "test.foo", details: { before: { min: { _id: 1718.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|32, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1718.0 }, max: { _id: 1836.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1836.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.093 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.093 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 1|34||51276369f1561f29e16720e6 based on: 1|32||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.094 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|32||000000000000000000000000 min: { _id: 1718.0 } max: { _id: MaxKey } on: { _id: 1836.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.094 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|34, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 19
m30999| Fri Feb 22 12:24:10.094 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.095 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.095 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.095 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.099 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.099 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.100 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.104 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.104 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.104 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.108 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.108 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.108 [conn1] chunk not full enough to trigger auto-split { _id: 1887.0 }
m30999| Fri Feb 22 12:24:10.113 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.113 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.113 [conn1] chunk not full enough to trigger auto-split { _id: 1887.0 }
m30999| Fri Feb 22 12:24:10.117 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.117 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.118 [conn1] chunk not full enough to trigger auto-split { _id: 1887.0 }
m30999| Fri Feb 22 12:24:10.122 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.122 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.123 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1836.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.123 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1836.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 1954.0 } ], shardId: "test.foo-_id_1836.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.124 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4e7
m30001| Fri Feb 22 12:24:10.125 [conn4] splitChunk accepted at version 1|34||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.125 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4e8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850125), what: "split", ns: "test.foo", details: { before: { min: { _id: 1836.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|34, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1836.0 }, max: { _id: 1954.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1954.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.126 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.127 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 1|36||51276369f1561f29e16720e6 based on: 1|34||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.127 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|34||000000000000000000000000 min: { _id: 1836.0 } max: { _id: MaxKey } on: { _id: 1954.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.127 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|36, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 20
m30999| Fri Feb 22 12:24:10.127 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.132 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.132 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.133 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.137 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.137 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.137 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.141 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.141 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.141 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.145 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.146 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.146 [conn1] chunk not full enough to trigger auto-split { _id: 2005.0 }
m30999| Fri Feb 22 12:24:10.150 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.150 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.150 [conn1] chunk not full enough to trigger auto-split { _id: 2005.0 }
m30999| Fri Feb 22 12:24:10.154 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.154 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.155 [conn1] chunk not full enough to trigger auto-split { _id: 2005.0 }
m30999| Fri Feb 22 12:24:10.159 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.159 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.159 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1954.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.160 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1954.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2072.0 } ], shardId: "test.foo-_id_1954.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.160 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4e9
m30001| Fri Feb 22 12:24:10.161 [conn4] splitChunk accepted at version 1|36||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.162 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4ea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850162), what: "split", ns: "test.foo", details: { before: { min: { _id: 1954.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|36, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1954.0 }, max: { _id: 2072.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2072.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.162 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.163 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 1|38||51276369f1561f29e16720e6 based on: 1|36||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.164 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|36||000000000000000000000000 min: { _id: 1954.0 } max: { _id: MaxKey } on: { _id: 2072.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.164 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|38, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 21
m30999| Fri Feb 22 12:24:10.164 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.165 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.165 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.165 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.171 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.171 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.171 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.175 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.175 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.176 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.180 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.180 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.180 [conn1] chunk not full enough to trigger auto-split { _id: 2123.0 }
m30999| Fri Feb 22 12:24:10.184 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.184 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.184 [conn1] chunk not full enough to trigger auto-split { _id: 2123.0 }
m30999| Fri Feb 22 12:24:10.188 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.189 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.189 [conn1] chunk not full enough to trigger auto-split { _id: 2123.0 }
m30999| Fri Feb 22 12:24:10.193 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.193 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.193 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2072.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.194 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2072.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2190.0 } ], shardId: "test.foo-_id_2072.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.195 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4eb
m30001| Fri Feb 22 12:24:10.195 [conn4] splitChunk accepted at version 1|38||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.196 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4ec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850196), what: "split", ns: "test.foo", details: { before: { min: { _id: 2072.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|38, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2072.0 }, max: { _id: 2190.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2190.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.196 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.197 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 1|40||51276369f1561f29e16720e6 based on: 1|38||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.198 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|38||000000000000000000000000 min: { _id: 2072.0 } max: { _id: MaxKey } on: { _id: 2190.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.198 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 22
m30999| Fri Feb 22 12:24:10.198 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.199 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.199 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.199 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.203 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.203 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.204 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.208 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.208 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.208 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.212 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.212 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.212 [conn1] chunk not full enough to trigger auto-split { _id: 2241.0 }
m30999| Fri Feb 22 12:24:10.216 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.216 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.217 [conn1] chunk not full enough to trigger auto-split { _id: 2241.0 }
m30999| Fri Feb 22 12:24:10.221 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.221 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.221 [conn1] chunk not full enough to trigger auto-split { _id: 2241.0 }
m30999| Fri Feb 22 12:24:10.225 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.225 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.226 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2190.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.226 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2190.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2308.0 } ], shardId: "test.foo-_id_2190.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.227 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4ed
m30001| Fri Feb 22 12:24:10.228 [conn4] splitChunk accepted at version 1|40||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.228 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4ee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850228), what: "split", ns: "test.foo", details: { before: { min: { _id: 2190.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|40, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2190.0 }, max: { _id: 2308.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2308.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.229 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.230 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 1|42||51276369f1561f29e16720e6 based on: 1|40||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.230 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|40||000000000000000000000000 min: { _id: 2190.0 } max: { _id: MaxKey } on: { _id: 2308.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.230 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|42, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 23
m30999| Fri Feb 22 12:24:10.230 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.231 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.231 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.231 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.243 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.243 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.244 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.248 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.248 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.248 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.252 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.252 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.252 [conn1] chunk not full enough to trigger auto-split { _id: 2359.0 }
m30999| Fri Feb 22 12:24:10.257 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.257 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.257 [conn1] chunk not full enough to trigger auto-split { _id: 2359.0 }
m30999| Fri Feb 22 12:24:10.261 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.261 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.261 [conn1] chunk not full enough to trigger auto-split { _id: 2359.0 }
m30999| Fri Feb 22 12:24:10.266 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.266 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.266 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2308.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.266 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2308.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2426.0 } ], shardId: "test.foo-_id_2308.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.267 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4ef
m30001| Fri Feb 22 12:24:10.267 [conn4] splitChunk accepted at version 1|42||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.268 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4f0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850268), what: "split", ns: "test.foo", details: { before: { min: { _id: 2308.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|42, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2308.0 }, max: { _id: 2426.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2426.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.268 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:10.269 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 1|44||51276369f1561f29e16720e6 based on: 1|42||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:10.269 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|42||000000000000000000000000 min: { _id: 2308.0 } max: { _id: MaxKey } on: { _id: 2426.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
m30999| Fri Feb 22 12:24:10.269 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|44, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 24
m30999| Fri Feb 22 12:24:10.269 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:10.270 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.270 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.271 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.275 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.275 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.275 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.279 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.279 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.279 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:24:10.283 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.283 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.284 [conn1] chunk not full enough to trigger auto-split { _id: 2477.0 }
m30999| Fri Feb 22 12:24:10.288 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.288 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.288 [conn1] chunk not full enough to trigger auto-split { _id: 2477.0 }
m30999| Fri Feb 22 12:24:10.292 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.292 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:10.292 [conn1] chunk not full enough to trigger auto-split { _id: 2477.0 }
m30999| Fri Feb 22 12:24:10.296 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
m30001| Fri Feb 22 12:24:10.296 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.296 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2426.0 } -->> { : MaxKey }
m30001| Fri Feb 22 12:24:10.297 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2426.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2544.0 } ], shardId: "test.foo-_id_2426.0", configdb: "localhost:30000" }
m30001| Fri Feb 22 12:24:10.297 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4f1
m30001| Fri Feb 22 12:24:10.298 [conn4] splitChunk accepted at version 1|44||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:10.298 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4f2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850298), what: "split", ns: "test.foo", details: { before: { min: { _id: 2426.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|44, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2426.0 }, max: { _id: 2544.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2544.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
m30001| Fri Feb 22 12:24:10.299 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
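The split event just logged shows how chunk versions advance on a split: the parent chunk at version 1|44 yields a left child at 1|45 and a right child at 1|46 (same epoch), after which mongos reloads and reports "version: 1|46 ... based on: 1|44". A small sketch of that numbering rule, using a hypothetical helper name:

```python
def split_chunk_versions(parent_major, parent_minor, n_children=2):
    """Assign (major, minor) versions to the children of a chunk split:
    each child keeps the parent's major version and takes the next unused
    minor version, so the collection's top version advances by n_children.
    Hypothetical helper; it mirrors the 1|44 -> 1|45, 1|46 pattern in the
    split metadata events above, not MongoDB's actual code."""
    return [(parent_major, parent_minor + i + 1) for i in range(n_children)]

left, right = split_chunk_versions(1, 44)
# left == (1, 45), right == (1, 46), matching the lastmod values logged
# for the left and right halves of the chunk starting at { _id: 2426.0 }.
```

This is also why the major version stays at 1 throughout this log: splits only bump the minor component, while the "(migrate suggested, but no migrations allowed)" notes indicate no migrations occur, which is what would bump the major version.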
 m30999| Fri Feb 22 12:24:10.300 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 1|46||51276369f1561f29e16720e6 based on: 1|44||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:10.300 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|44||000000000000000000000000 min: { _id: 2426.0 } max: { _id: MaxKey } on: { _id: 2544.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30999| Fri Feb 22 12:24:10.300 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|46, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 25
 m30999| Fri Feb 22 12:24:10.300 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:10.301 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.301 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.301 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.305 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.305 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.305 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.309 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.310 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.310 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.314 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.314 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.314 [conn1] chunk not full enough to trigger auto-split { _id: 2595.0 }
 m30999| Fri Feb 22 12:24:10.320 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.320 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.320 [conn1] chunk not full enough to trigger auto-split { _id: 2595.0 }
 m30999| Fri Feb 22 12:24:10.326 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.327 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.327 [conn1] chunk not full enough to trigger auto-split { _id: 2595.0 }
 m30999| Fri Feb 22 12:24:10.333 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.333 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30001| Fri Feb 22 12:24:10.333 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2544.0 } -->> { : MaxKey }
 m30001| Fri Feb 22 12:24:10.333 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2544.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2662.0 } ], shardId: "test.foo-_id_2544.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:10.334 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4f3
 m30001| Fri Feb 22 12:24:10.335 [conn4] splitChunk accepted at version 1|46||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:10.335 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4f4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850335), what: "split", ns: "test.foo", details: { before: { min: { _id: 2544.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|46, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2544.0 }, max: { _id: 2662.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2662.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:10.335 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30999| Fri Feb 22 12:24:10.336 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 1|48||51276369f1561f29e16720e6 based on: 1|46||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:10.336 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|46||000000000000000000000000 min: { _id: 2544.0 } max: { _id: MaxKey } on: { _id: 2662.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30999| Fri Feb 22 12:24:10.337 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|48, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 26
 m30999| Fri Feb 22 12:24:10.337 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:10.338 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.338 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.338 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.343 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.343 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.343 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.370 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.370 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.370 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.374 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.374 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.375 [conn1] chunk not full enough to trigger auto-split { _id: 2713.0 }
 m30999| Fri Feb 22 12:24:10.379 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.379 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.379 [conn1] chunk not full enough to trigger auto-split { _id: 2713.0 }
 m30999| Fri Feb 22 12:24:10.384 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.384 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.384 [conn1] chunk not full enough to trigger auto-split { _id: 2713.0 }
 m30999| Fri Feb 22 12:24:10.388 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.388 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30001| Fri Feb 22 12:24:10.388 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2662.0 } -->> { : MaxKey }
 m30001| Fri Feb 22 12:24:10.389 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2662.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2780.0 } ], shardId: "test.foo-_id_2662.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:10.389 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4f5
 m30001| Fri Feb 22 12:24:10.390 [conn4] splitChunk accepted at version 1|48||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:10.390 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4f6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850390), what: "split", ns: "test.foo", details: { before: { min: { _id: 2662.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|48, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2662.0 }, max: { _id: 2780.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2780.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:10.391 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
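The "max number of requested split points reached (2) before the end of chunk" lines above reflect the shard's split-point lookup: it scans the chunk's keys in order and emits a candidate key each time roughly half the maximum chunk size has been accumulated, stopping once the requested number of points is reached. The function below is an illustrative sketch of that idea only; the name, parameters, and the half-chunk heuristic are assumptions, not the MongoDB implementation.

```python
# Hedged sketch of split-point selection ("request split points lookup").
# All names and the max_chunk_bytes // 2 heuristic are illustrative assumptions.

def find_split_points(keys, doc_size, max_chunk_bytes, max_points):
    """Return up to max_points split keys for an ordered sequence of keys,
    assuming every document is doc_size bytes."""
    points, acc = [], 0
    for key in keys:
        acc += doc_size
        if acc >= max_chunk_bytes // 2:
            points.append(key)  # emit a split point every half-chunk of data
            acc = 0
            if len(points) >= max_points:
                break  # "max number of requested split points reached"
    return points
```

When the cap is hit before the scan reaches the chunk's max key, the shard proceeds with a splitChunk using only the first point, which matches the single-element splitKeys arrays in the requests above.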
 m30999| Fri Feb 22 12:24:10.391 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 1|50||51276369f1561f29e16720e6 based on: 1|48||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:10.392 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|48||000000000000000000000000 min: { _id: 2662.0 } max: { _id: MaxKey } on: { _id: 2780.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30999| Fri Feb 22 12:24:10.392 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|50, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 27
 m30999| Fri Feb 22 12:24:10.392 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:10.393 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.393 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.393 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.398 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.398 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.398 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.402 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.402 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.402 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.406 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.406 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.407 [conn1] chunk not full enough to trigger auto-split { _id: 2831.0 }
 m30999| Fri Feb 22 12:24:10.411 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.411 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.411 [conn1] chunk not full enough to trigger auto-split { _id: 2831.0 }
 m30999| Fri Feb 22 12:24:10.415 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.415 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.415 [conn1] chunk not full enough to trigger auto-split { _id: 2831.0 }
 m30999| Fri Feb 22 12:24:10.419 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.419 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30001| Fri Feb 22 12:24:10.420 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2780.0 } -->> { : MaxKey }
 m30001| Fri Feb 22 12:24:10.420 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2780.0 }, max: { _id: MaxKey }, from: "shard0001", splitKeys: [ { _id: 2898.0 } ], shardId: "test.foo-_id_2780.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:10.421 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636a57d9fef1ecdff4f7
 m30001| Fri Feb 22 12:24:10.421 [conn4] splitChunk accepted at version 1|50||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:10.422 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:10-5127636a57d9fef1ecdff4f8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535850422), what: "split", ns: "test.foo", details: { before: { min: { _id: 2780.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|50, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2780.0 }, max: { _id: 2898.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2898.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|52, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:10.422 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
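Each split metadata event above shows the chunk-version bookkeeping: a version is major|minor||epoch, and the two halves of a split receive the next two minor versions above the collection's current version, under the same epoch (here, the chunk at 1000|50 splits into 1000|51 and 1000|52, all with epoch 51276369f1561f29e16720e6). A small model of that arithmetic, with illustrative names and structure that are not taken from the MongoDB source:

```python
# Illustrative model of chunk-version allocation on split, matching the
# before/left/right triples in the metadata events. Names are assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class ChunkVersion:
    major: int
    minor: int
    epoch: str

    def __str__(self):
        return f"{self.major}|{self.minor}||{self.epoch}"


def split_chunk(collection_version: ChunkVersion):
    """Return (left, right) versions for the halves of a split chunk.
    New versions come from the collection's highest version, not from the
    (possibly stale) version stamped on the parent chunk."""
    left = ChunkVersion(collection_version.major,
                        collection_version.minor + 1,
                        collection_version.epoch)
    right = ChunkVersion(collection_version.major,
                         collection_version.minor + 2,
                         collection_version.epoch)
    return left, right
```

Splitting when the collection is at 1|50||51276369f1561f29e16720e6 yields 1|51 and 1|52 with the same epoch, exactly the left/right pair logged above; mongos then reloads and reports "version: 1|52||... based on: 1|50||...".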
 m30999| Fri Feb 22 12:24:10.423 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 1|52||51276369f1561f29e16720e6 based on: 1|50||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:10.423 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|50||000000000000000000000000 min: { _id: 2780.0 } max: { _id: MaxKey } on: { _id: 2898.0 } (splitThreshold 943718) (migrate suggested, but no migrations allowed)
 m30999| Fri Feb 22 12:24:10.423 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|52, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 28
 m30999| Fri Feb 22 12:24:10.423 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:10.425 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: 2898.0 } max: { _id: MaxKey } dataWritten: 193736 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.425 [conn4] request split points lookup for chunk test.foo { : 2898.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.425 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.429 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: 2898.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.429 [conn4] request split points lookup for chunk test.foo { : 2898.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.429 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.433 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: 2898.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.433 [conn4] request split points lookup for chunk test.foo { : 2898.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.433 [conn1] chunk not full enough to trigger auto-split no split entry
 m30999| Fri Feb 22 12:24:10.438 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: 2898.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.438 [conn4] request split points lookup for chunk test.foo { : 2898.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.438 [conn1] chunk not full enough to trigger auto-split { _id: 2949.0 }
 m30999| Fri Feb 22 12:24:10.442 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: 2898.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.442 [conn4] request split points lookup for chunk test.foo { : 2898.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.442 [conn1] chunk not full enough to trigger auto-split { _id: 2949.0 }
 m30999| Fri Feb 22 12:24:10.446 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|52||000000000000000000000000 min: { _id: 2898.0 } max: { _id: MaxKey } dataWritten: 191976 splitThreshold: 943718
 m30001| Fri Feb 22 12:24:10.447 [conn4] request split points lookup for chunk test.foo { : 2898.0 } -->> { : MaxKey }
 m30999| Fri Feb 22 12:24:10.447 [conn1] chunk not full enough to trigger auto-split { _id: 2949.0 }
 m30999| Fri Feb 22 12:24:11.748 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|47||000000000000000000000000 min: { _id: 2544.0 } max: { _id: 2662.0 } dataWritten: 209740 splitThreshold: 1048576
 m30001| Fri Feb 22 12:24:11.748 [conn4] request split points lookup for chunk test.foo { : 2544.0 } -->> { : 2662.0 }
 m30001| Fri Feb 22 12:24:11.748 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2544.0 } -->> { : 2662.0 }
 m30001| Fri Feb 22 12:24:11.748 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2544.0 }, max: { _id: 2662.0 }, from: "shard0001", splitKeys: [ { _id: 2595.0 } ], shardId: "test.foo-_id_2544.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:11.749 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff4f9
 m30001| Fri Feb 22 12:24:11.750 [conn4] splitChunk accepted at version 1|52||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:11.750 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff4fa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851750), what: "split", ns: "test.foo", details: { before: { min: { _id: 2544.0 }, max: { _id: 2662.0 }, lastmod: Timestamp 1000|47, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2544.0 }, max: { _id: 2595.0 }, lastmod: Timestamp 1000|53, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2595.0 }, max: { _id: 2662.0 }, lastmod: Timestamp 1000|54, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:11.750 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30999| Fri Feb 22 12:24:11.751 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 1|54||51276369f1561f29e16720e6 based on: 1|52||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:11.752 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|47||000000000000000000000000 min: { _id: 2544.0 } max: { _id: 2662.0 } on: { _id: 2595.0 } (splitThreshold 1048576)
 m30999| Fri Feb 22 12:24:11.752 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|54, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 29
 m30999| Fri Feb 22 12:24:11.752 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:11.753 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|29||000000000000000000000000 min: { _id: 1481.0 } max: { _id: 1600.0 } dataWritten: 209740 splitThreshold: 1048576
 m30001| Fri Feb 22 12:24:11.753 [conn4] request split points lookup for chunk test.foo { : 1481.0 } -->> { : 1600.0 }
 m30001| Fri Feb 22 12:24:11.753 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1481.0 } -->> { : 1600.0 }
 m30001| Fri Feb 22 12:24:11.753 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1481.0 }, max: { _id: 1600.0 }, from: "shard0001", splitKeys: [ { _id: 1532.0 } ], shardId: "test.foo-_id_1481.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:11.753 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff4fb
 m30001| Fri Feb 22 12:24:11.754 [conn4] splitChunk accepted at version 1|54||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:11.754 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff4fc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851754), what: "split", ns: "test.foo", details: { before: { min: { _id: 1481.0 }, max: { _id: 1600.0 }, lastmod: Timestamp 1000|29, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1481.0 }, max: { _id: 1532.0 }, lastmod: Timestamp 1000|55, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1532.0 }, max: { _id: 1600.0 }, lastmod: Timestamp 1000|56, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:11.755 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30999| Fri Feb 22 12:24:11.755 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 1|56||51276369f1561f29e16720e6 based on: 1|54||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:11.756 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|29||000000000000000000000000 min: { _id: 1481.0 } max: { _id: 1600.0 } on: { _id: 1532.0 } (splitThreshold 1048576)
 m30999| Fri Feb 22 12:24:11.756 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|56, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 30
 m30999| Fri Feb 22 12:24:11.756 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:11.773 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { _id: 2426.0 } max: { _id: 2544.0 } dataWritten: 209740 splitThreshold: 1048576
 m30001| Fri Feb 22 12:24:11.773 [conn4] request split points lookup for chunk test.foo { : 2426.0 } -->> { : 2544.0 }
 m30001| Fri Feb 22 12:24:11.773 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2426.0 } -->> { : 2544.0 }
 m30001| Fri Feb 22 12:24:11.773 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2426.0 }, max: { _id: 2544.0 }, from: "shard0001", splitKeys: [ { _id: 2477.0 } ], shardId: "test.foo-_id_2426.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:11.774 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff4fd
 m30001| Fri Feb 22 12:24:11.774 [conn4] splitChunk accepted at version 1|56||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:11.775 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff4fe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851775), what: "split", ns: "test.foo", details: { before: { min: { _id: 2426.0 }, max: { _id: 2544.0 }, lastmod: Timestamp 1000|45, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2426.0 }, max: { _id: 2477.0 }, lastmod: Timestamp 1000|57, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2477.0 }, max: { _id: 2544.0 }, lastmod: Timestamp 1000|58, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:11.775 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
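After each reload, mongos sends the shard a setShardVersion command carrying the new collection version, epoch, and its serverID, and the shard acknowledges with a previously known version ("setShardVersion success: { oldVersion: ... }"). The stub below is an illustrative stand-in for that exchange only; it simplifies the real semantics (in this log, oldVersion consistently reflects the connection's initial Timestamp 1000|0 rather than the last value set), and every name in it is an assumption.

```python
# Hedged stand-in for the setShardVersion request/acknowledge exchange above.
# Simplified: it returns the version this stub last recorded per namespace,
# which is NOT exactly what the real shard reports. Illustration only.

class ShardStub:
    def __init__(self):
        self.versions = {}  # ns -> (version, epoch) last announced by mongos

    def set_shard_version(self, ns, version, epoch):
        """Record the version mongos will use; reply with the prior one."""
        old_version, old_epoch = self.versions.get(ns, ("0|0", epoch))
        self.versions[ns] = (version, epoch)
        return {"oldVersion": old_version, "oldVersionEpoch": old_epoch,
                "ok": 1.0}
```

The point the log illustrates is the ordering: split, config reload ("ChunkManager: time to load chunks"), then setShardVersion, so the shard never serves a request under a version mongos has already replaced.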
 m30999| Fri Feb 22 12:24:11.776 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 1|58||51276369f1561f29e16720e6 based on: 1|56||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:11.776 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|45||000000000000000000000000 min: { _id: 2426.0 } max: { _id: 2544.0 } on: { _id: 2477.0 } (splitThreshold 1048576)
 m30999| Fri Feb 22 12:24:11.776 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|58, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 31
 m30999| Fri Feb 22 12:24:11.776 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:11.780 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|35||000000000000000000000000 min: { _id: 1836.0 } max: { _id: 1954.0 } dataWritten: 209740 splitThreshold: 1048576
 m30001| Fri Feb 22 12:24:11.780 [conn4] request split points lookup for chunk test.foo { : 1836.0 } -->> { : 1954.0 }
 m30001| Fri Feb 22 12:24:11.780 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1836.0 } -->> { : 1954.0 }
 m30001| Fri Feb 22 12:24:11.780 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1836.0 }, max: { _id: 1954.0 }, from: "shard0001", splitKeys: [ { _id: 1887.0 } ], shardId: "test.foo-_id_1836.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:11.781 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff4ff
 m30001| Fri Feb 22 12:24:11.781 [conn4] splitChunk accepted at version 1|58||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:11.782 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff500", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851782), what: "split", ns: "test.foo", details: { before: { min: { _id: 1836.0 }, max: { _id: 1954.0 }, lastmod: Timestamp 1000|35, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1836.0 }, max: { _id: 1887.0 }, lastmod: Timestamp 1000|59, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1887.0 }, max: { _id: 1954.0 }, lastmod: Timestamp 1000|60, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:11.782 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30999| Fri Feb 22 12:24:11.783 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 1|60||51276369f1561f29e16720e6 based on: 1|58||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:11.783 [conn1] autosplitted test.foo shard: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|35||000000000000000000000000 min: { _id: 1836.0 } max: { _id: 1954.0 } on: { _id: 1887.0 } (splitThreshold 1048576)
 m30999| Fri Feb 22 12:24:11.783 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|60, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 32
 m30999| Fri Feb 22 12:24:11.783 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:11.800 [conn1] about to initiate autosplit: ns:test.foo shard: shard0001:localhost:30001 lastmod: 1|43||000000000000000000000000 min: { _id: 2308.0 } max: { _id: 2426.0 } dataWritten: 209740 splitThreshold: 1048576
 m30001| Fri Feb 22 12:24:11.800 [conn4] request split points lookup for chunk test.foo { : 2308.0 } -->> { : 2426.0 }
 m30001| Fri Feb 22 12:24:11.800 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2308.0 } -->> { : 2426.0 }
 m30001| Fri Feb 22 12:24:11.800 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2308.0 }, max: { _id: 2426.0 }, from: "shard0001", splitKeys: [ { _id: 2359.0 } ], shardId: "test.foo-_id_2308.0", configdb: "localhost:30000" }
 m30001| Fri Feb 22 12:24:11.801 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff501
 m30001| Fri Feb 22 12:24:11.801 [conn4] splitChunk accepted at version 1|60||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:11.802 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff502", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851802), what: "split", ns: "test.foo", details: { before: { min: { _id: 2308.0 }, max: { _id: 2426.0 }, lastmod: Timestamp 1000|43, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2308.0 }, max: { _id: 2359.0 }, lastmod: Timestamp 1000|61, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2359.0 }, max: { _id: 2426.0 }, lastmod: Timestamp 1000|62, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } }
 m30001| Fri Feb 22 12:24:11.802 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30999| Fri Feb 22 12:24:11.803 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 1|62||51276369f1561f29e16720e6 based on: 1|60||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.803 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|43||000000000000000000000000min: { _id: 2308.0 }max: { _id: 2426.0 } on: { _id: 2359.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.803 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|62, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 33 m30999| Fri Feb 22 12:24:11.803 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.811 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { _id: 2190.0 }max: { _id: 2308.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.811 [conn4] request split points lookup for chunk test.foo { : 2190.0 } -->> { : 2308.0 } m30001| Fri Feb 22 12:24:11.811 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2190.0 } -->> { : 2308.0 } m30001| Fri Feb 22 12:24:11.811 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2190.0 }, max: { _id: 2308.0 }, from: "shard0001", splitKeys: [ { _id: 2241.0 } ], shardId: "test.foo-_id_2190.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.812 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff503 m30001| Fri Feb 22 12:24:11.812 [conn4] splitChunk accepted at version 1|62||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:11.813 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff504", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851813), what: "split", ns: "test.foo", details: { before: { min: { _id: 2190.0 }, max: { _id: 2308.0 }, lastmod: Timestamp 1000|41, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2190.0 }, max: { _id: 2241.0 }, lastmod: Timestamp 1000|63, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2241.0 }, max: { _id: 2308.0 }, lastmod: Timestamp 1000|64, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.813 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.814 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 1|64||51276369f1561f29e16720e6 based on: 1|62||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.814 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|41||000000000000000000000000min: { _id: 2190.0 }max: { _id: 2308.0 } on: { _id: 2241.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.814 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|64, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 34 m30999| Fri Feb 22 12:24:11.814 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.832 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { _id: 2072.0 }max: { _id: 2190.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri 
Feb 22 12:24:11.832 [conn4] request split points lookup for chunk test.foo { : 2072.0 } -->> { : 2190.0 } m30001| Fri Feb 22 12:24:11.832 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2072.0 } -->> { : 2190.0 } m30001| Fri Feb 22 12:24:11.832 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2072.0 }, max: { _id: 2190.0 }, from: "shard0001", splitKeys: [ { _id: 2123.0 } ], shardId: "test.foo-_id_2072.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.833 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff505 m30001| Fri Feb 22 12:24:11.833 [conn4] splitChunk accepted at version 1|64||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.834 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff506", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851834), what: "split", ns: "test.foo", details: { before: { min: { _id: 2072.0 }, max: { _id: 2190.0 }, lastmod: Timestamp 1000|39, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2072.0 }, max: { _id: 2123.0 }, lastmod: Timestamp 1000|65, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2123.0 }, max: { _id: 2190.0 }, lastmod: Timestamp 1000|66, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.834 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:11.835 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 1|66||51276369f1561f29e16720e6 based on: 1|64||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.835 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|39||000000000000000000000000min: { _id: 2072.0 }max: { _id: 2190.0 } on: { _id: 2123.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.835 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|66, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 35 m30999| Fri Feb 22 12:24:11.835 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.840 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|49||000000000000000000000000min: { _id: 2662.0 }max: { _id: 2780.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.840 [conn4] request split points lookup for chunk test.foo { : 2662.0 } -->> { : 2780.0 } m30001| Fri Feb 22 12:24:11.840 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2662.0 } -->> { : 2780.0 } m30001| Fri Feb 22 12:24:11.840 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2662.0 }, max: { _id: 2780.0 }, from: "shard0001", splitKeys: [ { _id: 2713.0 } ], shardId: "test.foo-_id_2662.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.841 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff507 m30001| Fri Feb 22 12:24:11.841 [conn4] splitChunk accepted at version 1|66||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:11.842 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff508", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851841), what: "split", ns: "test.foo", details: { before: { min: { _id: 2662.0 }, max: { _id: 2780.0 }, lastmod: Timestamp 1000|49, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2662.0 }, max: { _id: 2713.0 }, lastmod: Timestamp 1000|67, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2713.0 }, max: { _id: 2780.0 }, lastmod: Timestamp 1000|68, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.842 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.843 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 36 version: 1|68||51276369f1561f29e16720e6 based on: 1|66||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.843 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|49||000000000000000000000000min: { _id: 2662.0 }max: { _id: 2780.0 } on: { _id: 2713.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.843 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|68, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 36 m30999| Fri Feb 22 12:24:11.843 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.847 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { _id: 1954.0 }max: { _id: 2072.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri 
Feb 22 12:24:11.847 [conn4] request split points lookup for chunk test.foo { : 1954.0 } -->> { : 2072.0 } m30001| Fri Feb 22 12:24:11.847 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1954.0 } -->> { : 2072.0 } m30001| Fri Feb 22 12:24:11.847 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1954.0 }, max: { _id: 2072.0 }, from: "shard0001", splitKeys: [ { _id: 2005.0 } ], shardId: "test.foo-_id_1954.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.848 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff509 m30001| Fri Feb 22 12:24:11.848 [conn4] splitChunk accepted at version 1|68||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.849 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff50a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851849), what: "split", ns: "test.foo", details: { before: { min: { _id: 1954.0 }, max: { _id: 2072.0 }, lastmod: Timestamp 1000|37, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1954.0 }, max: { _id: 2005.0 }, lastmod: Timestamp 1000|69, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2005.0 }, max: { _id: 2072.0 }, lastmod: Timestamp 1000|70, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.849 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:11.850 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 1|70||51276369f1561f29e16720e6 based on: 1|68||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.850 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|37||000000000000000000000000min: { _id: 1954.0 }max: { _id: 2072.0 } on: { _id: 2005.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.850 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|70, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 37 m30999| Fri Feb 22 12:24:11.850 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.858 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { _id: 1600.0 }max: { _id: 1718.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.858 [conn4] request split points lookup for chunk test.foo { : 1600.0 } -->> { : 1718.0 } m30001| Fri Feb 22 12:24:11.858 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1600.0 } -->> { : 1718.0 } m30001| Fri Feb 22 12:24:11.858 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1600.0 }, max: { _id: 1718.0 }, from: "shard0001", splitKeys: [ { _id: 1651.0 } ], shardId: "test.foo-_id_1600.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.859 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff50b m30001| Fri Feb 22 12:24:11.859 [conn4] splitChunk accepted at version 1|70||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:11.860 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff50c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851860), what: "split", ns: "test.foo", details: { before: { min: { _id: 1600.0 }, max: { _id: 1718.0 }, lastmod: Timestamp 1000|31, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1600.0 }, max: { _id: 1651.0 }, lastmod: Timestamp 1000|71, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1651.0 }, max: { _id: 1718.0 }, lastmod: Timestamp 1000|72, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.860 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.861 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 1|72||51276369f1561f29e16720e6 based on: 1|70||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.861 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|31||000000000000000000000000min: { _id: 1600.0 }max: { _id: 1718.0 } on: { _id: 1651.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.861 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|72, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 38 m30999| Fri Feb 22 12:24:11.862 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.867 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { _id: 1718.0 }max: { _id: 1836.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri 
Feb 22 12:24:11.867 [conn4] request split points lookup for chunk test.foo { : 1718.0 } -->> { : 1836.0 } m30001| Fri Feb 22 12:24:11.867 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1718.0 } -->> { : 1836.0 } m30001| Fri Feb 22 12:24:11.867 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1718.0 }, max: { _id: 1836.0 }, from: "shard0001", splitKeys: [ { _id: 1769.0 } ], shardId: "test.foo-_id_1718.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.868 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff50d m30001| Fri Feb 22 12:24:11.868 [conn4] splitChunk accepted at version 1|72||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.869 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff50e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851869), what: "split", ns: "test.foo", details: { before: { min: { _id: 1718.0 }, max: { _id: 1836.0 }, lastmod: Timestamp 1000|33, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1718.0 }, max: { _id: 1769.0 }, lastmod: Timestamp 1000|73, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1769.0 }, max: { _id: 1836.0 }, lastmod: Timestamp 1000|74, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.869 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:11.870 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 1|74||51276369f1561f29e16720e6 based on: 1|72||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.870 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|33||000000000000000000000000min: { _id: 1718.0 }max: { _id: 1836.0 } on: { _id: 1769.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.870 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|74, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 39 m30999| Fri Feb 22 12:24:11.870 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.895 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|51||000000000000000000000000min: { _id: 2780.0 }max: { _id: 2898.0 } dataWritten: 209740 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.895 [conn4] request split points lookup for chunk test.foo { : 2780.0 } -->> { : 2898.0 } m30001| Fri Feb 22 12:24:11.895 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 2780.0 } -->> { : 2898.0 } m30001| Fri Feb 22 12:24:11.895 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 2780.0 }, max: { _id: 2898.0 }, from: "shard0001", splitKeys: [ { _id: 2831.0 } ], shardId: "test.foo-_id_2780.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.896 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff50f m30001| Fri Feb 22 12:24:11.897 [conn4] splitChunk accepted at version 1|74||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:11.897 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff510", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851897), what: "split", ns: "test.foo", details: { before: { min: { _id: 2780.0 }, max: { _id: 2898.0 }, lastmod: Timestamp 1000|51, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 2780.0 }, max: { _id: 2831.0 }, lastmod: Timestamp 1000|75, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 2831.0 }, max: { _id: 2898.0 }, lastmod: Timestamp 1000|76, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.897 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.898 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 1|76||51276369f1561f29e16720e6 based on: 1|74||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.899 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|51||000000000000000000000000min: { _id: 2780.0 }max: { _id: 2898.0 } on: { _id: 2831.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.899 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|76, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 40 m30999| Fri Feb 22 12:24:11.899 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.922 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { _id: 1362.0 }max: { _id: 1481.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri 
Feb 22 12:24:11.923 [conn4] request split points lookup for chunk test.foo { : 1362.0 } -->> { : 1481.0 } m30001| Fri Feb 22 12:24:11.923 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1362.0 } -->> { : 1481.0 } m30001| Fri Feb 22 12:24:11.923 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1362.0 }, max: { _id: 1481.0 }, from: "shard0001", splitKeys: [ { _id: 1413.0 } ], shardId: "test.foo-_id_1362.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.924 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff511 m30001| Fri Feb 22 12:24:11.924 [conn4] splitChunk accepted at version 1|76||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.925 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff512", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851925), what: "split", ns: "test.foo", details: { before: { min: { _id: 1362.0 }, max: { _id: 1481.0 }, lastmod: Timestamp 1000|27, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1362.0 }, max: { _id: 1413.0 }, lastmod: Timestamp 1000|77, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1413.0 }, max: { _id: 1481.0 }, lastmod: Timestamp 1000|78, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.925 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:11.926 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 41 version: 1|78||51276369f1561f29e16720e6 based on: 1|76||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.926 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|27||000000000000000000000000min: { _id: 1362.0 }max: { _id: 1481.0 } on: { _id: 1413.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.927 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|78, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 41 m30999| Fri Feb 22 12:24:11.927 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.932 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { _id: 1005.0 }max: { _id: 1124.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.932 [conn4] request split points lookup for chunk test.foo { : 1005.0 } -->> { : 1124.0 } m30001| Fri Feb 22 12:24:11.932 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1005.0 } -->> { : 1124.0 } m30001| Fri Feb 22 12:24:11.932 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1005.0 }, max: { _id: 1124.0 }, from: "shard0001", splitKeys: [ { _id: 1056.0 } ], shardId: "test.foo-_id_1005.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.933 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff513 m30001| Fri Feb 22 12:24:11.933 [conn4] splitChunk accepted at version 1|78||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:11.934 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff514", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851934), what: "split", ns: "test.foo", details: { before: { min: { _id: 1005.0 }, max: { _id: 1124.0 }, lastmod: Timestamp 1000|21, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1005.0 }, max: { _id: 1056.0 }, lastmod: Timestamp 1000|79, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1056.0 }, max: { _id: 1124.0 }, lastmod: Timestamp 1000|80, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.934 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.935 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 42 version: 1|80||51276369f1561f29e16720e6 based on: 1|78||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.935 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|21||000000000000000000000000min: { _id: 1005.0 }max: { _id: 1124.0 } on: { _id: 1056.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.936 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|80, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 42 m30999| Fri Feb 22 12:24:11.936 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.963 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { _id: 410.0 }max: { _id: 529.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 
22 12:24:11.963 [conn4] request split points lookup for chunk test.foo { : 410.0 } -->> { : 529.0 } m30001| Fri Feb 22 12:24:11.964 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 410.0 } -->> { : 529.0 } m30001| Fri Feb 22 12:24:11.964 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 410.0 }, max: { _id: 529.0 }, from: "shard0001", splitKeys: [ { _id: 461.0 } ], shardId: "test.foo-_id_410.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.965 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff515 m30001| Fri Feb 22 12:24:11.965 [conn4] splitChunk accepted at version 1|80||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.966 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff516", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851966), what: "split", ns: "test.foo", details: { before: { min: { _id: 410.0 }, max: { _id: 529.0 }, lastmod: Timestamp 1000|11, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 410.0 }, max: { _id: 461.0 }, lastmod: Timestamp 1000|81, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 461.0 }, max: { _id: 529.0 }, lastmod: Timestamp 1000|82, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.966 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:11.967 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 43 version: 1|82||51276369f1561f29e16720e6 based on: 1|80||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.967 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|11||000000000000000000000000min: { _id: 410.0 }max: { _id: 529.0 } on: { _id: 461.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.967 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|82, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 43 m30999| Fri Feb 22 12:24:11.967 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.977 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|7||000000000000000000000000min: { _id: 172.0 }max: { _id: 291.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.977 [conn4] request split points lookup for chunk test.foo { : 172.0 } -->> { : 291.0 } m30001| Fri Feb 22 12:24:11.977 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 172.0 } -->> { : 291.0 } m30001| Fri Feb 22 12:24:11.978 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 172.0 }, max: { _id: 291.0 }, from: "shard0001", splitKeys: [ { _id: 223.0 } ], shardId: "test.foo-_id_172.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.978 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff517 m30001| Fri Feb 22 12:24:11.979 [conn4] splitChunk accepted at version 1|82||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.980 
[conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff518", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851980), what: "split", ns: "test.foo", details: { before: { min: { _id: 172.0 }, max: { _id: 291.0 }, lastmod: Timestamp 1000|7, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 172.0 }, max: { _id: 223.0 }, lastmod: Timestamp 1000|83, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 223.0 }, max: { _id: 291.0 }, lastmod: Timestamp 1000|84, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.980 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.981 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 44 version: 1|84||51276369f1561f29e16720e6 based on: 1|82||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.981 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|7||000000000000000000000000min: { _id: 172.0 }max: { _id: 291.0 } on: { _id: 223.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.981 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|84, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 44 m30999| Fri Feb 22 12:24:11.982 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.984 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { _id: 648.0 }max: { _id: 767.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.984 [conn4] 
request split points lookup for chunk test.foo { : 648.0 } -->> { : 767.0 } m30001| Fri Feb 22 12:24:11.985 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 648.0 } -->> { : 767.0 } m30001| Fri Feb 22 12:24:11.985 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 648.0 }, max: { _id: 767.0 }, from: "shard0001", splitKeys: [ { _id: 699.0 } ], shardId: "test.foo-_id_648.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.985 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff519 m30001| Fri Feb 22 12:24:11.986 [conn4] splitChunk accepted at version 1|84||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.986 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff51a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851986), what: "split", ns: "test.foo", details: { before: { min: { _id: 648.0 }, max: { _id: 767.0 }, lastmod: Timestamp 1000|15, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 648.0 }, max: { _id: 699.0 }, lastmod: Timestamp 1000|85, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 699.0 }, max: { _id: 767.0 }, lastmod: Timestamp 1000|86, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.987 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30999| Fri Feb 22 12:24:11.988 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 45 version: 1|86||51276369f1561f29e16720e6 based on: 1|84||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.988 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|15||000000000000000000000000min: { _id: 648.0 }max: { _id: 767.0 } on: { _id: 699.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.988 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|86, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 45 m30999| Fri Feb 22 12:24:11.988 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.989 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|5||000000000000000000000000min: { _id: 53.0 }max: { _id: 172.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.989 [conn4] request split points lookup for chunk test.foo { : 53.0 } -->> { : 172.0 } m30001| Fri Feb 22 12:24:11.989 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 53.0 } -->> { : 172.0 } m30001| Fri Feb 22 12:24:11.990 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 53.0 }, max: { _id: 172.0 }, from: "shard0001", splitKeys: [ { _id: 104.0 } ], shardId: "test.foo-_id_53.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.990 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff51b m30001| Fri Feb 22 12:24:11.991 [conn4] splitChunk accepted at version 1|86||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.991 
[conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff51c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851991), what: "split", ns: "test.foo", details: { before: { min: { _id: 53.0 }, max: { _id: 172.0 }, lastmod: Timestamp 1000|5, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 53.0 }, max: { _id: 104.0 }, lastmod: Timestamp 1000|87, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 104.0 }, max: { _id: 172.0 }, lastmod: Timestamp 1000|88, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.991 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:11.993 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 46 version: 1|88||51276369f1561f29e16720e6 based on: 1|86||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.993 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|5||000000000000000000000000min: { _id: 53.0 }max: { _id: 172.0 } on: { _id: 104.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.993 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|88, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 46 m30999| Fri Feb 22 12:24:11.993 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:11.994 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { _id: 886.0 }max: { _id: 1005.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:11.994 [conn4] 
request split points lookup for chunk test.foo { : 886.0 } -->> { : 1005.0 } m30001| Fri Feb 22 12:24:11.994 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 886.0 } -->> { : 1005.0 } m30001| Fri Feb 22 12:24:11.994 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 886.0 }, max: { _id: 1005.0 }, from: "shard0001", splitKeys: [ { _id: 937.0 } ], shardId: "test.foo-_id_886.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:11.995 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636b57d9fef1ecdff51d m30001| Fri Feb 22 12:24:11.996 [conn4] splitChunk accepted at version 1|88||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:11.996 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:11-5127636b57d9fef1ecdff51e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535851996), what: "split", ns: "test.foo", details: { before: { min: { _id: 886.0 }, max: { _id: 1005.0 }, lastmod: Timestamp 1000|19, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 886.0 }, max: { _id: 937.0 }, lastmod: Timestamp 1000|89, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 937.0 }, max: { _id: 1005.0 }, lastmod: Timestamp 1000|90, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:11.996 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
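The repeating cycle above ("about to initiate autosplit ... dataWritten: 209755 splitThreshold: 1048576", then "request split points lookup") follows a simple pattern: mongos keeps a per-chunk estimate of bytes written and, once that estimate crosses the split threshold, asks the owning shard for split points. A minimal sketch of that trigger, with illustrative names (this is not the mongos source):

```javascript
// Sketch of the autosplit trigger visible in the log: mongos tracks an
// estimate of bytes written per chunk and requests split points from the
// shard once the estimate crosses the split threshold (1048576 bytes in
// this run). Names are illustrative, not taken from the MongoDB source.
const SPLIT_THRESHOLD = 1048576;

function makeChunkTracker() {
  const written = new Map(); // chunk id -> estimated bytes written
  return {
    // Returns true when an autosplit should be attempted for this chunk.
    noteWrite(chunkId, bytes) {
      const total = (written.get(chunkId) || 0) + bytes;
      if (total >= SPLIT_THRESHOLD) {
        written.set(chunkId, 0); // reset the estimate after requesting a split
        return true;
      }
      written.set(chunkId, total);
      return false;
    },
  };
}
```

With the dataWritten figure from the log (209755 bytes per batch), the fifth batch against a chunk pushes the estimate over the 1 MB threshold, matching the roughly one-split-per-few-inserts rhythm above.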
m30999| Fri Feb 22 12:24:11.998 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 47 version: 1|90||51276369f1561f29e16720e6 based on: 1|88||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:11.998 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|19||000000000000000000000000min: { _id: 886.0 }max: { _id: 1005.0 } on: { _id: 937.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:11.998 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|90, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 47 m30999| Fri Feb 22 12:24:11.998 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:12.000 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { _id: 529.0 }max: { _id: 648.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:12.000 [conn4] request split points lookup for chunk test.foo { : 529.0 } -->> { : 648.0 } m30001| Fri Feb 22 12:24:12.000 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 529.0 } -->> { : 648.0 } m30001| Fri Feb 22 12:24:12.001 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 529.0 }, max: { _id: 648.0 }, from: "shard0001", splitKeys: [ { _id: 580.0 } ], shardId: "test.foo-_id_529.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:12.001 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636c57d9fef1ecdff51f m30001| Fri Feb 22 12:24:12.002 [conn4] splitChunk accepted at version 1|90||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:12.002 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:12-5127636c57d9fef1ecdff520", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535852002), what: "split", ns: "test.foo", details: { before: { min: { _id: 529.0 }, max: { _id: 648.0 }, lastmod: Timestamp 1000|13, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 529.0 }, max: { _id: 580.0 }, lastmod: Timestamp 1000|91, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 580.0 }, max: { _id: 648.0 }, lastmod: Timestamp 1000|92, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:12.003 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:12.004 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 48 version: 1|92||51276369f1561f29e16720e6 based on: 1|90||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:12.004 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|13||000000000000000000000000min: { _id: 529.0 }max: { _id: 648.0 } on: { _id: 580.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:12.004 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|92, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 48 m30999| Fri Feb 22 12:24:12.005 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:12.008 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { _id: 767.0 }max: { _id: 886.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 
12:24:12.008 [conn4] request split points lookup for chunk test.foo { : 767.0 } -->> { : 886.0 } m30001| Fri Feb 22 12:24:12.009 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 767.0 } -->> { : 886.0 } m30001| Fri Feb 22 12:24:12.009 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 767.0 }, max: { _id: 886.0 }, from: "shard0001", splitKeys: [ { _id: 818.0 } ], shardId: "test.foo-_id_767.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:12.009 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636c57d9fef1ecdff521 m30001| Fri Feb 22 12:24:12.010 [conn4] splitChunk accepted at version 1|92||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:12.010 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:12-5127636c57d9fef1ecdff522", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535852010), what: "split", ns: "test.foo", details: { before: { min: { _id: 767.0 }, max: { _id: 886.0 }, lastmod: Timestamp 1000|17, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 767.0 }, max: { _id: 818.0 }, lastmod: Timestamp 1000|93, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 818.0 }, max: { _id: 886.0 }, lastmod: Timestamp 1000|94, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:12.011 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
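The lastmod values in these events ("Timestamp 1000|93", "accepted at version 1|92||...") are chunk versions of the form major|minor plus an epoch. A split leaves the major component alone and assigns the next two minor values to the left and right halves, which is why the collection version in the ChunkManager reload lines climbs 1|84, 1|86, 1|88, ... two at a time. A sketch of that bookkeeping (illustrative, epoch omitted, not the server's implementation):

```javascript
// Chunk versions in the log are "major|minor" pairs (the epoch is omitted
// here). A split keeps the major version and hands the next two minor
// values to the left and right halves; the collection version becomes the
// right half's version. Illustrative sketch, not MongoDB source code.
function splitVersion(collVersion) {
  const [major, minor] = collVersion;
  const left = [major, minor + 1];
  const right = [major, minor + 2];
  return { left, right, newCollVersion: right };
}
```

Applied to the split accepted at 1|84 above, this yields left 1|85 and right 1|86, exactly the lastmod values logged for the two halves.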
m30999| Fri Feb 22 12:24:12.012 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 49 version: 1|94||51276369f1561f29e16720e6 based on: 1|92||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:12.012 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|17||000000000000000000000000min: { _id: 767.0 }max: { _id: 886.0 } on: { _id: 818.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:12.012 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|94, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 49 m30999| Fri Feb 22 12:24:12.012 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:12.017 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { _id: 1124.0 }max: { _id: 1243.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:12.017 [conn4] request split points lookup for chunk test.foo { : 1124.0 } -->> { : 1243.0 } m30001| Fri Feb 22 12:24:12.017 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1124.0 } -->> { : 1243.0 } m30001| Fri Feb 22 12:24:12.018 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1124.0 }, max: { _id: 1243.0 }, from: "shard0001", splitKeys: [ { _id: 1175.0 } ], shardId: "test.foo-_id_1124.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:12.018 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636c57d9fef1ecdff523 m30001| Fri Feb 22 12:24:12.019 [conn4] splitChunk accepted at version 1|94||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:12.019 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:12-5127636c57d9fef1ecdff524", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535852019), what: "split", ns: "test.foo", details: { before: { min: { _id: 1124.0 }, max: { _id: 1243.0 }, lastmod: Timestamp 1000|23, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1124.0 }, max: { _id: 1175.0 }, lastmod: Timestamp 1000|95, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1175.0 }, max: { _id: 1243.0 }, lastmod: Timestamp 1000|96, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:12.020 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:12.021 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 50 version: 1|96||51276369f1561f29e16720e6 based on: 1|94||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:12.021 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|23||000000000000000000000000min: { _id: 1124.0 }max: { _id: 1243.0 } on: { _id: 1175.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:12.021 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|96, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 50 m30999| Fri Feb 22 12:24:12.021 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:12.039 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|9||000000000000000000000000min: { _id: 291.0 }max: { _id: 410.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 
22 12:24:12.039 [conn4] request split points lookup for chunk test.foo { : 291.0 } -->> { : 410.0 } m30001| Fri Feb 22 12:24:12.039 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 291.0 } -->> { : 410.0 } m30001| Fri Feb 22 12:24:12.040 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 291.0 }, max: { _id: 410.0 }, from: "shard0001", splitKeys: [ { _id: 342.0 } ], shardId: "test.foo-_id_291.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:12.040 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636c57d9fef1ecdff525 m30001| Fri Feb 22 12:24:12.041 [conn4] splitChunk accepted at version 1|96||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:12.041 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:12-5127636c57d9fef1ecdff526", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535852041), what: "split", ns: "test.foo", details: { before: { min: { _id: 291.0 }, max: { _id: 410.0 }, lastmod: Timestamp 1000|9, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 291.0 }, max: { _id: 342.0 }, lastmod: Timestamp 1000|97, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 342.0 }, max: { _id: 410.0 }, lastmod: Timestamp 1000|98, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:12.042 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
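Every metadata change above is bracketed by "distributed lock 'test.foo/...' acquired, ts : ..." and "... unlocked.": the shard takes a per-collection distributed lock in the config servers, performs the split, logs the changelog event, and releases the lock. A hypothetical in-process stand-in for that acquire/work/release bracket:

```javascript
// Each splitChunk in the log is bracketed by "distributed lock ... acquired"
// and "... unlocked." -- a per-collection lock held across the metadata
// change. Hypothetical in-process stand-in for that pattern; the real lock
// lives in the config servers, not in local memory.
function withDistLock(locks, name, fn) {
  if (locks.has(name)) throw new Error("lock busy: " + name);
  locks.add(name); // "distributed lock '<name>' acquired"
  try {
    return fn(); // the metadata change happens while the lock is held
  } finally {
    locks.delete(name); // "distributed lock '<name>' unlocked."
  }
}
```

The try/finally mirrors why every "acquired" line in the log has a matching "unlocked." line even when a split is rejected.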
m30999| Fri Feb 22 12:24:12.043 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 51 version: 1|98||51276369f1561f29e16720e6 based on: 1|96||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:12.043 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|9||000000000000000000000000min: { _id: 291.0 }max: { _id: 410.0 } on: { _id: 342.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:12.043 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|98, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 51 m30999| Fri Feb 22 12:24:12.043 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:12.080 [conn1] about to initiate autosplit: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { _id: 1243.0 }max: { _id: 1362.0 } dataWritten: 209755 splitThreshold: 1048576 m30001| Fri Feb 22 12:24:12.080 [conn4] request split points lookup for chunk test.foo { : 1243.0 } -->> { : 1362.0 } m30001| Fri Feb 22 12:24:12.080 [conn4] max number of requested split points reached (2) before the end of chunk test.foo { : 1243.0 } -->> { : 1362.0 } m30001| Fri Feb 22 12:24:12.080 [conn4] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 1243.0 }, max: { _id: 1362.0 }, from: "shard0001", splitKeys: [ { _id: 1294.0 } ], shardId: "test.foo-_id_1243.0", configdb: "localhost:30000" } m30001| Fri Feb 22 12:24:12.081 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636c57d9fef1ecdff527 m30001| Fri Feb 22 12:24:12.082 [conn4] splitChunk accepted at version 1|98||51276369f1561f29e16720e6 m30001| Fri Feb 22 
12:24:12.082 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:12-5127636c57d9fef1ecdff528", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535852082), what: "split", ns: "test.foo", details: { before: { min: { _id: 1243.0 }, max: { _id: 1362.0 }, lastmod: Timestamp 1000|25, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 1243.0 }, max: { _id: 1294.0 }, lastmod: Timestamp 1000|99, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') }, right: { min: { _id: 1294.0 }, max: { _id: 1362.0 }, lastmod: Timestamp 1000|100, lastmodEpoch: ObjectId('51276369f1561f29e16720e6') } } } m30001| Fri Feb 22 12:24:12.082 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30999| Fri Feb 22 12:24:12.083 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 52 version: 1|100||51276369f1561f29e16720e6 based on: 1|98||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:12.084 [conn1] autosplitted test.foo shard: ns:test.fooshard: shard0001:localhost:30001lastmod: 1|25||000000000000000000000000min: { _id: 1243.0 }max: { _id: 1362.0 } on: { _id: 1294.0 } (splitThreshold 1048576) m30999| Fri Feb 22 12:24:12.084 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|100, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 52 m30999| Fri Feb 22 12:24:12.084 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ShardingTest test.foo-_id_MinKey 1000|1 { "_id" : { "$minKey" : 1 } } -> { "_id" : 0 } shard0001 test.foo test.foo-_id_0.0 1000|3 { "_id" : 0 } -> { "_id" : 53 } shard0001 test.foo test.foo-_id_53.0 1000|87 { "_id" : 53 } -> { "_id" : 104 } 
shard0001 test.foo
test.foo-_id_104.0 1000|88 { "_id" : 104 } -> { "_id" : 172 } shard0001 test.foo
test.foo-_id_172.0 1000|83 { "_id" : 172 } -> { "_id" : 223 } shard0001 test.foo
test.foo-_id_223.0 1000|84 { "_id" : 223 } -> { "_id" : 291 } shard0001 test.foo
test.foo-_id_291.0 1000|97 { "_id" : 291 } -> { "_id" : 342 } shard0001 test.foo
test.foo-_id_342.0 1000|98 { "_id" : 342 } -> { "_id" : 410 } shard0001 test.foo
test.foo-_id_410.0 1000|81 { "_id" : 410 } -> { "_id" : 461 } shard0001 test.foo
test.foo-_id_461.0 1000|82 { "_id" : 461 } -> { "_id" : 529 } shard0001 test.foo
test.foo-_id_529.0 1000|91 { "_id" : 529 } -> { "_id" : 580 } shard0001 test.foo
test.foo-_id_580.0 1000|92 { "_id" : 580 } -> { "_id" : 648 } shard0001 test.foo
test.foo-_id_648.0 1000|85 { "_id" : 648 } -> { "_id" : 699 } shard0001 test.foo
test.foo-_id_699.0 1000|86 { "_id" : 699 } -> { "_id" : 767 } shard0001 test.foo
test.foo-_id_767.0 1000|93 { "_id" : 767 } -> { "_id" : 818 } shard0001 test.foo
test.foo-_id_818.0 1000|94 { "_id" : 818 } -> { "_id" : 886 } shard0001 test.foo
test.foo-_id_886.0 1000|89 { "_id" : 886 } -> { "_id" : 937 } shard0001 test.foo
test.foo-_id_937.0 1000|90 { "_id" : 937 } -> { "_id" : 1005 } shard0001 test.foo
test.foo-_id_1005.0 1000|79 { "_id" : 1005 } -> { "_id" : 1056 } shard0001 test.foo
test.foo-_id_1056.0 1000|80 { "_id" : 1056 } -> { "_id" : 1124 } shard0001 test.foo
test.foo-_id_1124.0 1000|95 { "_id" : 1124 } -> { "_id" : 1175 } shard0001 test.foo
test.foo-_id_1175.0 1000|96 { "_id" : 1175 } -> { "_id" : 1243 } shard0001 test.foo
test.foo-_id_1243.0 1000|99 { "_id" : 1243 } -> { "_id" : 1294 } shard0001 test.foo
test.foo-_id_1294.0 1000|100 { "_id" : 1294 } -> { "_id" : 1362 } shard0001 test.foo
test.foo-_id_1362.0 1000|77 { "_id" : 1362 } -> { "_id" : 1413 } shard0001 test.foo
test.foo-_id_1413.0 1000|78 { "_id" : 1413 } -> { "_id" : 1481 } shard0001 test.foo
test.foo-_id_1481.0 1000|55 { "_id" : 1481 } -> { "_id" : 1532 } shard0001 test.foo
test.foo-_id_1532.0 1000|56 { "_id" : 1532 } -> { "_id" : 1600 } shard0001 test.foo
test.foo-_id_1600.0 1000|71 { "_id" : 1600 } -> { "_id" : 1651 } shard0001 test.foo
test.foo-_id_1651.0 1000|72 { "_id" : 1651 } -> { "_id" : 1718 } shard0001 test.foo
test.foo-_id_1718.0 1000|73 { "_id" : 1718 } -> { "_id" : 1769 } shard0001 test.foo
test.foo-_id_1769.0 1000|74 { "_id" : 1769 } -> { "_id" : 1836 } shard0001 test.foo
test.foo-_id_1836.0 1000|59 { "_id" : 1836 } -> { "_id" : 1887 } shard0001 test.foo
test.foo-_id_1887.0 1000|60 { "_id" : 1887 } -> { "_id" : 1954 } shard0001 test.foo
test.foo-_id_1954.0 1000|69 { "_id" : 1954 } -> { "_id" : 2005 } shard0001 test.foo
test.foo-_id_2005.0 1000|70 { "_id" : 2005 } -> { "_id" : 2072 } shard0001 test.foo
test.foo-_id_2072.0 1000|65 { "_id" : 2072 } -> { "_id" : 2123 } shard0001 test.foo
test.foo-_id_2123.0 1000|66 { "_id" : 2123 } -> { "_id" : 2190 } shard0001 test.foo
test.foo-_id_2190.0 1000|63 { "_id" : 2190 } -> { "_id" : 2241 } shard0001 test.foo
test.foo-_id_2241.0 1000|64 { "_id" : 2241 } -> { "_id" : 2308 } shard0001 test.foo
test.foo-_id_2308.0 1000|61 { "_id" : 2308 } -> { "_id" : 2359 } shard0001 test.foo
test.foo-_id_2359.0 1000|62 { "_id" : 2359 } -> { "_id" : 2426 } shard0001 test.foo
test.foo-_id_2426.0 1000|57 { "_id" : 2426 } -> { "_id" : 2477 } shard0001 test.foo
test.foo-_id_2477.0 1000|58 { "_id" : 2477 } -> { "_id" : 2544 } shard0001 test.foo
test.foo-_id_2544.0 1000|53 { "_id" : 2544 } -> { "_id" : 2595 } shard0001 test.foo
test.foo-_id_2595.0 1000|54 { "_id" : 2595 } -> { "_id" : 2662 } shard0001 test.foo
test.foo-_id_2662.0 1000|67 { "_id" : 2662 } -> { "_id" : 2713 } shard0001 test.foo
test.foo-_id_2713.0 1000|68 { "_id" : 2713 } -> { "_id" : 2780 } shard0001 test.foo
test.foo-_id_2780.0 1000|75 { "_id" : 2780 } -> { "_id" : 2831 } shard0001 test.foo
test.foo-_id_2831.0 1000|76 { "_id" : 2831 } -> { "_id" : 2898 } shard0001 test.foo
test.foo-_id_2898.0 1000|52 { "_id" : 2898 } -> { "_id" : { 
"$maxKey" : 1 } } shard0001 test.foo ---- Running diff1... ---- ---- Running diff1... ---- 51 m30999| Fri Feb 22 12:24:15.415 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:15.415 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:15.415 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:15 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127636ff1561f29e16720e7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276369f1561f29e16720e5" } } m30999| Fri Feb 22 12:24:15.416 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 5127636ff1561f29e16720e7 m30999| Fri Feb 22 12:24:15.416 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:15.416 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:15.416 [Balancer] secondaryThrottle: 1 m30000| Fri Feb 22 12:24:15.418 [conn3] build index config.tags { _id: 1 } m30000| Fri Feb 22 12:24:15.419 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:24:15.419 [conn3] info: creating collection config.tags on add index m30000| Fri Feb 22 12:24:15.419 [conn3] build index config.tags { ns: 1, min: 1 } m30000| Fri Feb 22 12:24:15.419 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:24:15.420 [Balancer] shard0001 has more chunks me:51 best: shard0000:0 m30999| Fri Feb 22 12:24:15.420 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:15.420 [Balancer] donor : shard0001 chunks on 51 m30999| Fri Feb 22 12:24:15.420 [Balancer] receiver : shard0000 chunks on 0 m30999| Fri Feb 22 12:24:15.420 [Balancer] threshold : 4 m30999| Fri Feb 22 12:24:15.420 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:15.420 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|1||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:15.420 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:15.420 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:15.421 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127636f57d9fef1ecdff529 m30001| Fri Feb 22 12:24:15.421 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:15-5127636f57d9fef1ecdff52a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535855421), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:15.421 [conn4] moveChunk request accepted at 
version 1|100||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:15.421 [conn4] moveChunk number of documents: 0
 m30000| Fri Feb 22 12:24:15.421 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:24:15.422 [initandlisten] connection accepted from 127.0.0.1:59527 #5 (5 connections now open)
 m30000| Fri Feb 22 12:24:15.423 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/test.ns, filling with zeroes...
 m30000| Fri Feb 22 12:24:15.423 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/test.ns, size: 16MB, took 0 secs
 m30000| Fri Feb 22 12:24:15.423 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/test.0, filling with zeroes...
 m30000| Fri Feb 22 12:24:15.423 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/test.0, size: 64MB, took 0 secs
 m30000| Fri Feb 22 12:24:15.423 [FileAllocator] allocating new datafile /data/db/slow_sharding_balance40/test.1, filling with zeroes...
 m30000| Fri Feb 22 12:24:15.423 [FileAllocator] done allocating datafile /data/db/slow_sharding_balance40/test.1, size: 128MB, took 0 secs
 m30000| Fri Feb 22 12:24:15.426 [migrateThread] build index test.foo { _id: 1 }
 m30000| Fri Feb 22 12:24:15.427 [migrateThread] build index done. scanned 0 total records. 0 secs
 m30000| Fri Feb 22 12:24:15.427 [migrateThread] info: creating collection test.foo on add index
 m30000| Fri Feb 22 12:24:15.427 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:24:15.427 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0 }
 m30000| Fri Feb 22 12:24:15.428 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0 }
 m30001| Fri Feb 22 12:24:15.432 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:24:15.432 [conn4] moveChunk setting version to: 2|0||51276369f1561f29e16720e6
 m30000| Fri Feb 22 12:24:15.432 [initandlisten] connection accepted from 127.0.0.1:42013 #10 (10 connections now open)
 m30000| Fri Feb 22 12:24:15.432 [conn10] Waiting for commit to finish
 m30000| Fri Feb 22 12:24:15.438 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0 }
 m30000| Fri Feb 22 12:24:15.438 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0 }
 m30000| Fri Feb 22 12:24:15.438 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:15-5127636fb989ea9ef0f7b929", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535855438), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 0, step4 of 5: 0, step5 of 5: 11 } }
 m30000| Fri Feb 22 12:24:15.439 [initandlisten] connection accepted from 127.0.0.1:46205 #11 (11 connections now open)
 m30001| Fri Feb 22 12:24:15.442 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:24:15.442 [conn4] moveChunk updating self version to: 2|1||51276369f1561f29e16720e6 through { _id: 0.0 } -> { _id: 53.0 } for collection 'test.foo'
 m30000| Fri Feb 22 12:24:15.442 [initandlisten] connection accepted from 127.0.0.1:49592 #12 (12 connections now open)
 m30001| Fri Feb 22 12:24:15.443 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:15-5127636f57d9fef1ecdff52b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535855443), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:24:15.443 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:24:15.443 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:24:15.443 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:24:15.443 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:24:15.443 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:24:15.443 [cleanupOldData-5127636f57d9fef1ecdff52c] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 0.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:24:15.444 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30001| Fri Feb 22 12:24:15.444 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:15-5127636f57d9fef1ecdff52d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535855444), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:24:15.444 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:24:15.445 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 53 version: 2|1||51276369f1561f29e16720e6 based on: 1|100||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:15.445 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:24:15.445 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
---- Running diff1... ----
 m30999| Fri Feb 22 12:24:15.456 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 53
 m30999| Fri Feb 22 12:24:15.456 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30001| Fri Feb 22 12:24:15.463 [cleanupOldData-5127636f57d9fef1ecdff52c] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 0.0 }
 m30001| Fri Feb 22 12:24:15.463 [cleanupOldData-5127636f57d9fef1ecdff52c] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 0.0 }
 m30001| Fri Feb 22 12:24:15.464 [cleanupOldData-5127636f57d9fef1ecdff52c] moveChunk deleted 0 documents for test.foo from { _id: MinKey } -> { _id: 0.0 }
---- Running diff1... ----
---- Running diff1... ----
 m30999| Fri Feb 22 12:24:16.446 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:24:16.446 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
 m30999| Fri Feb 22 12:24:16.447 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:24:16 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "51276370f1561f29e16720e8" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "5127636ff1561f29e16720e7" } }
 m30999| Fri Feb 22 12:24:16.448 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276370f1561f29e16720e8
 m30999| Fri Feb 22 12:24:16.448 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:24:16.448 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:24:16.448 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:24:16.449 [Balancer] shard0001 has more chunks me:50 best: shard0000:1
 m30999| Fri Feb 22 12:24:16.449 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:24:16.449 [Balancer] donor : shard0001 chunks on 50
 m30999| Fri Feb 22 12:24:16.450 [Balancer] receiver : shard0000 chunks on 1
 m30999| Fri Feb 22 12:24:16.450 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:24:16.450 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 0.0 }, max: { _id: 53.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:24:16.450 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 0.0 }max: { _id: 53.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:24:16.450 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:24:16.450 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0 }, max: { _id: 53.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:24:16.451 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637057d9fef1ecdff52e
 m30001| Fri Feb 22 12:24:16.451 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:16-5127637057d9fef1ecdff52f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535856451), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:24:16.452 [conn4] moveChunk request accepted at version 2|1||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:16.452 [conn4] moveChunk number of documents: 53
 m30000| Fri Feb 22 12:24:16.452 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0 } -> { _id: 53.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:24:16.462 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 53.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 50, clonedBytes: 503000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:24:16.463 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:24:16.463 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 53.0 }
 m30000| Fri Feb 22 12:24:16.465 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 53.0 }
 m30001| Fri Feb 22 12:24:16.472 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 53.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 53, clonedBytes: 533180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:24:16.472 [conn4] moveChunk setting version to: 3|0||51276369f1561f29e16720e6
 m30000| Fri Feb 22 12:24:16.472 [conn10] Waiting for commit to finish
 m30000| Fri Feb 22 12:24:16.476 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0 } -> { _id: 53.0 }
 m30000| Fri Feb 22 12:24:16.476 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0 } -> { _id: 53.0 }
 m30000| Fri Feb 22 12:24:16.476 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:16-51276370b989ea9ef0f7b92a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535856476), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 12 } }
 m30001| Fri Feb 22 12:24:16.482 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0 }, max: { _id: 53.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 53, clonedBytes: 533180, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:24:16.482 [conn4] moveChunk updating self version to: 3|1||51276369f1561f29e16720e6 through { _id: 53.0 } -> { _id: 104.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:24:16.483 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:16-5127637057d9fef1ecdff530", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535856483), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:24:16.483 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:24:16.483 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:24:16.483 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:24:16.483 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:24:16.483 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:24:16.484 [cleanupOldData-5127637057d9fef1ecdff531] (start) waiting to cleanup test.foo from { _id: 0.0 } -> { _id: 53.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:24:16.484 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30001| Fri Feb 22 12:24:16.484 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:16-5127637057d9fef1ecdff532", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535856484), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0 }, max: { _id: 53.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:24:16.484 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:24:16.486 [Balancer] ChunkManager: time to load chunks for test.foo: 2ms sequenceNumber: 54 version: 3|1||51276369f1561f29e16720e6 based on: 2|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:16.486 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:24:16.487 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
 m30999| Fri Feb 22 12:24:16.488 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 54
 m30999| Fri Feb 22 12:24:16.489 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30001| Fri Feb 22 12:24:16.504 [cleanupOldData-5127637057d9fef1ecdff531] waiting to remove documents for test.foo from { _id: 0.0 } -> { _id: 53.0 }
 m30001| Fri Feb 22 12:24:16.504 [cleanupOldData-5127637057d9fef1ecdff531] moveChunk starting delete for: test.foo from { _id: 0.0 } -> { _id: 53.0 }
 m30001| Fri Feb 22 12:24:16.572 [cleanupOldData-5127637057d9fef1ecdff531] moveChunk deleted 53 documents for test.foo from { _id: 0.0 } -> { _id: 53.0 }
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
 m30999| Fri Feb 22 12:24:16.756 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 54
 m30999| Fri Feb 22 12:24:16.756 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
 m30999| Fri Feb 22 12:24:16.756 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 54
 m30000| Fri Feb 22 12:24:16.756 [conn6] no current chunk manager found for this shard, will initialize
 m30999| Fri Feb 22 12:24:16.758 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:17.487 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:24:17.488 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
 m30999| Fri Feb 22 12:24:17.488 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:24:17 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "51276371f1561f29e16720e9" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "51276370f1561f29e16720e8" } }
 m30999| Fri Feb 22 12:24:17.489 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276371f1561f29e16720e9
 m30999| Fri Feb 22 12:24:17.489 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:24:17.489 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:24:17.489 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:24:17.490 [Balancer] shard0001 has more chunks me:49 best: shard0000:2
 m30999| Fri Feb 22 12:24:17.490 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:24:17.490 [Balancer] donor : shard0001 chunks on 49
 m30999| Fri Feb 22 12:24:17.490 [Balancer] receiver : shard0000 chunks on 2
 m30999| Fri Feb 22 12:24:17.490 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:24:17.490 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_53.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 53.0 }, max: { _id: 104.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:24:17.490 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 53.0 }max: { _id: 104.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:24:17.490 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:24:17.491 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 53.0 }, max: { _id: 104.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_53.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:24:17.492 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637157d9fef1ecdff533
 m30001| Fri Feb 22 12:24:17.492 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:17-5127637157d9fef1ecdff534", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535857492), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 53.0 }, max: { _id: 104.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:24:17.493 [conn4] moveChunk request accepted at version 3|1||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:17.493 [conn4] moveChunk number of documents: 51
 m30000| Fri Feb 22 12:24:17.493 [migrateThread] starting receiving-end of migration of chunk { _id: 53.0 } -> { _id: 104.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30000| Fri Feb 22 12:24:17.500 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:24:17.500 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 53.0 } -> { _id: 104.0 }
 m30000| Fri Feb 22 12:24:17.501 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 53.0 } -> { _id: 104.0 }
 m30001| Fri Feb 22 12:24:17.503 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 53.0 }, max: { _id: 104.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:24:17.503 [conn4] moveChunk setting version to: 4|0||51276369f1561f29e16720e6
 m30000| Fri Feb 22 12:24:17.503 [conn10] Waiting for commit to finish
 m30999| Fri Feb 22 12:24:17.505 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 55 version: 3|1||51276369f1561f29e16720e6 based on: 3|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:17.505 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 3|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:17.505 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 55
 m30001| Fri Feb 22 12:24:17.505 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:24:17.512 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 53.0 } -> { _id: 104.0 }
 m30000| Fri Feb 22 12:24:17.512 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 53.0 } -> { _id: 104.0 }
 m30000| Fri Feb 22 12:24:17.512 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:17-51276371b989ea9ef0f7b92b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535857512), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 53.0 }, max: { _id: 104.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
 m30001| Fri Feb 22 12:24:17.514 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 53.0 }, max: { _id: 104.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:24:17.514 [conn4] moveChunk updating self version to: 4|1||51276369f1561f29e16720e6 through { _id: 104.0 } -> { _id: 172.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:24:17.514 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:17-5127637157d9fef1ecdff535", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535857514), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 53.0 }, max: { _id: 104.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:24:17.514 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:24:17.514 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 3000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 4000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
 m30001| Fri Feb 22 12:24:17.514 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:24:17.514 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:24:17.514 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:24:17.514 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:24:17.514 [cleanupOldData-5127637157d9fef1ecdff536] (start) waiting to cleanup test.foo from { _id: 53.0 } -> { _id: 104.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:24:17.515 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
 m30001| Fri Feb 22 12:24:17.515 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:17-5127637157d9fef1ecdff537", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535857515), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 53.0 }, max: { _id: 104.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:24:17.515 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:24:17.516 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 56 version: 4|1||51276369f1561f29e16720e6 based on: 3|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:17.517 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 57 version: 4|1||51276369f1561f29e16720e6 based on: 3|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:17.517 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:24:17.517 [conn1] creating new connection to:localhost:30000
 m30999| Fri Feb 22 12:24:17.517 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
 m30999| Fri Feb 22 12:24:17.517 BackgroundJob starting: ConnectBG
 m30999| Fri Feb 22 12:24:17.517 [conn1] connected connection!
 m30000| Fri Feb 22 12:24:17.517 [initandlisten] connection accepted from 127.0.0.1:33240 #13 (13 connections now open)
 m30999| Fri Feb 22 12:24:17.518 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 58 version: 4|1||51276369f1561f29e16720e6 based on: 4|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:17.518 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 4|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:17.518 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 58
 m30999| Fri Feb 22 12:24:17.518 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30001| Fri Feb 22 12:24:17.535 [cleanupOldData-5127637157d9fef1ecdff536] waiting to remove documents for test.foo from { _id: 53.0 } -> { _id: 104.0 }
 m30001| Fri Feb 22 12:24:17.535 [cleanupOldData-5127637157d9fef1ecdff536] moveChunk starting delete for: test.foo from { _id: 53.0 } -> { _id: 104.0 }
 m30001| Fri Feb 22 12:24:17.792 [cleanupOldData-5127637157d9fef1ecdff536] moveChunk deleted 51 documents for test.foo from { _id: 53.0 } -> { _id: 104.0 }
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
 m30999| Fri Feb 22 12:24:17.914 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 58
 m30999| Fri Feb 22 12:24:17.915 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
 m30999| Fri Feb 22 12:24:18.518 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:24:18.518 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
 m30999| Fri Feb 22 12:24:18.518 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:24:18 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "51276372f1561f29e16720ea" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "51276371f1561f29e16720e9" } }
 m30999| Fri Feb 22 12:24:18.519 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276372f1561f29e16720ea
 m30999| Fri Feb 22 12:24:18.519 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:24:18.519 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:24:18.519 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:24:18.520 [Balancer] shard0001 has more chunks me:48 best: shard0000:3
 m30999| Fri Feb 22 12:24:18.520 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:24:18.520 [Balancer] donor : shard0001 chunks on 48
 m30999| Fri Feb 22 12:24:18.520 [Balancer] receiver : shard0000 chunks on 3
 m30999| Fri Feb 22 12:24:18.520 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:24:18.520 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_104.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 104.0 }, max: { _id: 172.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:24:18.521 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 104.0 }max: { _id: 172.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:24:18.521 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:24:18.521 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 104.0 }, max: { _id: 172.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_104.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:24:18.521 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637257d9fef1ecdff538
 m30001| Fri Feb 22 12:24:18.522 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:18-5127637257d9fef1ecdff539", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535858522), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 104.0 }, max: { _id: 172.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:24:18.522 [conn4] moveChunk request accepted at version 4|1||51276369f1561f29e16720e6
 m30001| Fri Feb 22 12:24:18.522 [conn4] moveChunk number of documents: 68
 m30000| Fri Feb 22 12:24:18.523 [migrateThread] starting receiving-end of migration of chunk { _id: 104.0 } -> { _id: 172.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30000| Fri Feb 22 12:24:18.532 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:24:18.532 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 104.0 } -> { _id: 172.0 }
 m30001| Fri Feb 22 12:24:18.533 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 104.0 }, max: { _id: 172.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:24:18.534 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 104.0 } -> { _id: 172.0 }
 m30001| Fri Feb 22 12:24:18.543 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 104.0 }, max: { _id: 172.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:24:18.543 [conn4] moveChunk setting version to: 5|0||51276369f1561f29e16720e6
 m30000| Fri Feb 22 12:24:18.543 [conn10] Waiting for commit to finish
 m30000| Fri Feb 22 12:24:18.544 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 104.0 } -> { _id: 172.0 }
 m30000| Fri Feb 22 12:24:18.544 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 104.0 } -> { _id: 172.0 }
 m30000| Fri Feb 22 12:24:18.544 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:18-51276372b989ea9ef0f7b92c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535858544), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 104.0 }, max: { _id: 172.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } }
 m30999| Fri Feb 22 12:24:18.544 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 59 version: 4|1||51276369f1561f29e16720e6 based on: 4|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:18.545 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 4|1||51276369f1561f29e16720e6
 m30999| Fri Feb 22 12:24:18.545 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 59
 m30001| Fri Feb 22 12:24:18.545 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:24:18.553 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 104.0 }, max: { _id: 172.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:24:18.553 [conn4] moveChunk updating self version to: 5|1||51276369f1561f29e16720e6 through { _id: 172.0 } -> { _id: 223.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:24:18.554 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:18-5127637257d9fef1ecdff53a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535858554), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 104.0 }, max: { _id: 172.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:24:18.554 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:24:18.554 [conn4] MigrateFromStatus::done Global lock acquired
 m30999| Fri Feb 22 12:24:18.554 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:24:18.554 [conn4] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 4000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 5000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:18.554 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:18.554 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:18.554 [cleanupOldData-5127637257d9fef1ecdff53b] (start) waiting to cleanup test.foo from { _id: 104.0 } -> { _id: 172.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:18.555 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30001| Fri Feb 22 12:24:18.555 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:18-5127637257d9fef1ecdff53c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535858555), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 104.0 }, max: { _id: 172.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:18.555 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:18.555 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 60 version: 5|1||51276369f1561f29e16720e6 based on: 4|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:18.556 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 61 version: 5|1||51276369f1561f29e16720e6 based on: 4|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:18.556 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:18.557 [Balancer] 
distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. m30999| Fri Feb 22 12:24:18.557 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 62 version: 5|1||51276369f1561f29e16720e6 based on: 5|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:18.557 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 5|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:18.558 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 62 m30999| Fri Feb 22 12:24:18.558 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:18.574 [cleanupOldData-5127637257d9fef1ecdff53b] waiting to remove documents for test.foo from { _id: 104.0 } -> { _id: 172.0 } m30001| Fri Feb 22 12:24:18.574 [cleanupOldData-5127637257d9fef1ecdff53b] moveChunk starting delete for: test.foo from { _id: 104.0 } -> { _id: 172.0 } m30001| Fri Feb 22 12:24:18.617 [cleanupOldData-5127637257d9fef1ecdff53b] moveChunk deleted 68 documents for test.foo from { _id: 104.0 } -> { _id: 172.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:19.041 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 62 m30999| Fri Feb 22 12:24:19.042 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- m30999| Fri Feb 22 12:24:19.557 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:19.558 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:19.558 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:19 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276373f1561f29e16720eb" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276372f1561f29e16720ea" } } m30999| Fri Feb 22 12:24:19.559 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276373f1561f29e16720eb m30999| Fri Feb 22 12:24:19.559 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:19.559 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:19.559 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:19.560 [Balancer] shard0001 has more chunks me:47 best: shard0000:4 m30999| Fri Feb 22 12:24:19.560 [Balancer] collection : test.foo m30999| Fri 
Feb 22 12:24:19.560 [Balancer] donor : shard0001 chunks on 47 m30999| Fri Feb 22 12:24:19.560 [Balancer] receiver : shard0000 chunks on 4 m30999| Fri Feb 22 12:24:19.560 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:19.560 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_172.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 172.0 }, max: { _id: 223.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:19.560 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 172.0 }max: { _id: 223.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:19.560 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:19.560 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 172.0 }, max: { _id: 223.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_172.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:19.561 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637357d9fef1ecdff53d m30001| Fri Feb 22 12:24:19.561 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:19-5127637357d9fef1ecdff53e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535859561), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 172.0 }, max: { _id: 223.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:19.562 [conn4] moveChunk request accepted at version 5|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:19.562 [conn4] moveChunk number of documents: 51 m30000| Fri Feb 22 12:24:19.562 [migrateThread] starting 
receiving-end of migration of chunk { _id: 172.0 } -> { _id: 223.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:19.569 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:19.569 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 172.0 } -> { _id: 223.0 } m30000| Fri Feb 22 12:24:19.570 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 172.0 } -> { _id: 223.0 } m30001| Fri Feb 22 12:24:19.572 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 172.0 }, max: { _id: 223.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:19.572 [conn4] moveChunk setting version to: 6|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:19.572 [conn10] Waiting for commit to finish m30999| Fri Feb 22 12:24:19.574 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 63 version: 5|1||51276369f1561f29e16720e6 based on: 5|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:19.574 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 5|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:19.574 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 63 m30001| Fri Feb 22 12:24:19.574 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:24:19.581 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 172.0 } -> { _id: 223.0 } m30000| Fri Feb 22 12:24:19.581 [migrateThread] migrate commit flushed to journal 
for 'test.foo' { _id: 172.0 } -> { _id: 223.0 } m30000| Fri Feb 22 12:24:19.581 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:19-51276373b989ea9ef0f7b92d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535859581), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 172.0 }, max: { _id: 223.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:24:19.583 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 172.0 }, max: { _id: 223.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:19.583 [conn4] moveChunk updating self version to: 6|1||51276369f1561f29e16720e6 through { _id: 223.0 } -> { _id: 291.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:19.583 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:19-5127637357d9fef1ecdff53f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535859583), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 172.0 }, max: { _id: 223.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:19.583 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:19.583 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:19.583 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:24:19.583 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 5000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 6000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:19.583 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:19.583 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:19.583 [cleanupOldData-5127637357d9fef1ecdff540] (start) waiting to cleanup test.foo from { _id: 172.0 } -> { _id: 223.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:19.584 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30001| Fri Feb 22 12:24:19.584 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:19-5127637357d9fef1ecdff541", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535859584), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 172.0 }, max: { _id: 223.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:19.584 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:19.585 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 64 version: 6|1||51276369f1561f29e16720e6 based on: 5|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:19.585 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 65 version: 6|1||51276369f1561f29e16720e6 based on: 5|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:19.586 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:19.586 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. m30999| Fri Feb 22 12:24:19.586 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 66 version: 6|1||51276369f1561f29e16720e6 based on: 6|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:19.586 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 6|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:19.587 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 66 m30999| Fri Feb 22 12:24:19.587 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:19.603 [cleanupOldData-5127637357d9fef1ecdff540] waiting to remove documents for test.foo from { _id: 172.0 } -> { _id: 223.0 } m30001| Fri Feb 22 12:24:19.603 [cleanupOldData-5127637357d9fef1ecdff540] moveChunk starting delete for: test.foo from { _id: 172.0 } -> { _id: 223.0 } m30001| Fri Feb 22 12:24:19.778 [cleanupOldData-5127637357d9fef1ecdff540] moveChunk deleted 51 documents for test.foo from { _id: 172.0 } -> { _id: 223.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:20.287 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 66 m30999| Fri Feb 22 12:24:20.288 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:20.586 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:20.587 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:20.587 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:20 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276374f1561f29e16720ec" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276373f1561f29e16720eb" } } m30999| Fri Feb 22 12:24:20.588 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276374f1561f29e16720ec m30999| Fri Feb 22 12:24:20.588 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:20.588 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:20.588 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:20.589 [Balancer] shard0001 has more chunks me:46 best: shard0000:5 m30999| Fri Feb 22 12:24:20.589 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:20.589 [Balancer] donor : shard0001 chunks on 46 m30999| Fri Feb 22 12:24:20.589 [Balancer] receiver : shard0000 chunks on 5 m30999| Fri Feb 22 12:24:20.589 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:20.589 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_223.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 223.0 }, max: { _id: 291.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:20.589 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: 223.0 }max: { _id: 291.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:20.589 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:20.589 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 223.0 }, max: { _id: 291.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_223.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ---- Running diff1... ---- m30001| Fri Feb 22 12:24:20.590 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637457d9fef1ecdff542 m30001| Fri Feb 22 12:24:20.590 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:20-5127637457d9fef1ecdff543", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535860590), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 223.0 }, max: { _id: 291.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:20.591 [conn4] moveChunk request accepted at version 6|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:20.591 [conn4] moveChunk number of documents: 68 m30000| Fri Feb 22 12:24:20.591 [migrateThread] starting receiving-end of migration of chunk { _id: 223.0 } -> { _id: 291.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:20.600 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:20.600 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 223.0 } -> { _id: 291.0 } m30001| Fri Feb 22 12:24:20.602 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 223.0 }, max: { _id: 291.0 
}, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:20.602 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 223.0 } -> { _id: 291.0 } ---- Running diff1... ---- m30001| Fri Feb 22 12:24:20.612 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 223.0 }, max: { _id: 291.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:20.612 [conn4] moveChunk setting version to: 7|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:20.612 [conn10] Waiting for commit to finish ---- Running diff1... ---- m30999| Fri Feb 22 12:24:20.616 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 7000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 6000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('512763740000000000000000') }, ok: 1.0 } m30999| Fri Feb 22 12:24:20.616 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 512763740000000000000000 needVersion : 7|0||51276369f1561f29e16720e6 mine : 6|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:20.616 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 699.0 } update: { $inc: { x: 1.0 } } m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:20.617 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 12:24:20.617 [initandlisten] connection accepted from 127.0.0.1:34172 #14 (14 connections now open) m30999| Fri Feb 22 12:24:20.617 
[WriteBackListener-localhost:30001] connected connection! m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30000 m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x11c2ba0 66 m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] creating new connection to:localhost:30001 m30999| Fri Feb 22 12:24:20.617 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] connected connection! m30001| Fri Feb 22 12:24:20.617 [initandlisten] connection accepted from 127.0.0.1:45438 #6 (6 connections now open) m30999| Fri Feb 22 12:24:20.617 [WriteBackListener-localhost:30001] initializing shard connection to localhost:30001 m30999| Fri Feb 22 12:24:20.618 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 66 m30001| Fri Feb 22 12:24:20.618 [conn6] waiting till out of critical section m30000| Fri Feb 22 12:24:20.622 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:20.622 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 223.0 } -> { _id: 291.0 } m30000| Fri Feb 22 12:24:20.622 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 
223.0 } -> { _id: 291.0 } m30000| Fri Feb 22 12:24:20.622 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:20-51276374b989ea9ef0f7b92e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535860622), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 223.0 }, max: { _id: 291.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 21 } } m30001| Fri Feb 22 12:24:20.632 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 223.0 }, max: { _id: 291.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:20.632 [conn4] moveChunk updating self version to: 7|1||51276369f1561f29e16720e6 through { _id: 291.0 } -> { _id: 342.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:20.633 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:20-5127637457d9fef1ecdff544", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535860633), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 223.0 }, max: { _id: 291.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:20.633 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:24:20.633 [WriteBackListener-localhost:30001] setShardVersion failed! 
m30001| Fri Feb 22 12:24:20.633 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", version: Timestamp 6000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 7000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:20.633 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:20.633 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:20.633 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:20.633 [cleanupOldData-5127637457d9fef1ecdff545] (start) waiting to cleanup test.foo from { _id: 223.0 } -> { _id: 291.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:20.633 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
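The "setShardVersion failed!" responses in this log (ok: 0.0, reloadConfig: true, "shard global version for collection is higher") are followed by a ChunkManager reload and a successful retry. A hedged sketch of that stale-version retry pattern, using plain integers for versions and hypothetical function names:

```python
# Hedged sketch of the stale-shard-version handling visible in the log:
# a mongos whose cached version lags the shard's global version gets a
# reloadConfig response, refreshes its chunk manager, and retries.
# Function names and the integer versions are illustrative only.

def set_shard_version(cached_version, global_version):
    if cached_version < global_version:
        return {"ok": 0.0, "reloadConfig": True,
                "errmsg": "shard global version for collection is higher"}
    return {"ok": 1.0}

def send_with_retry(cached_version, global_version, reload):
    resp = set_shard_version(cached_version, global_version)
    if resp["ok"] != 1.0 and resp.get("reloadConfig"):
        cached_version = reload()          # refresh from the config server
        resp = set_shard_version(cached_version, global_version)
    return resp

# Mirrors the round above: mongos cached 6|1 while the shard moved to 7|0.
print(send_with_retry(6, 7, reload=lambda: 7))  # {'ok': 1.0}
```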
m30001| Fri Feb 22 12:24:20.633 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:20-5127637457d9fef1ecdff546", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535860633), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 223.0 }, max: { _id: 291.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 20, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:20.633 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:20.634 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 67 version: 7|1||51276369f1561f29e16720e6 based on: 6|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:20.634 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 67 m30999| Fri Feb 22 12:24:20.634 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 12:24:20.634 [WriteBackListener-localhost:30001] update will be retried b/c sharding config info is stale, retries: 0 ns: test.foo data: { _id: 699.0 } m30999| Fri Feb 22 12:24:20.634 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 68 version: 7|1||51276369f1561f29e16720e6 based on: 6|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:20.635 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:20.635 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:20.650 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 67 m30999| Fri Feb 22 12:24:20.650 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:20.653 [cleanupOldData-5127637457d9fef1ecdff545] waiting to remove documents for test.foo from { _id: 223.0 } -> { _id: 291.0 } m30001| Fri Feb 22 12:24:20.653 [cleanupOldData-5127637457d9fef1ecdff545] moveChunk starting delete for: test.foo from { _id: 223.0 } -> { _id: 291.0 } m30001| Fri Feb 22 12:24:20.658 [cleanupOldData-5127637457d9fef1ecdff545] moveChunk deleted 68 documents for test.foo from { _id: 223.0 } -> { _id: 291.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- m30999| Fri Feb 22 12:24:20.833 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 67 m30999| Fri Feb 22 12:24:20.834 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- m30999| Fri Feb 22 12:24:21.636 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:21.636 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:21.636 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:21 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276375f1561f29e16720ed" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276374f1561f29e16720ec" } } m30999| Fri Feb 22 12:24:21.637 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276375f1561f29e16720ed m30999| Fri Feb 22 12:24:21.637 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:21.637 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:21.637 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:21.639 [Balancer] shard0001 has more chunks me:45 best: shard0000:6 m30999| Fri Feb 22 12:24:21.639 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:21.639 [Balancer] donor : shard0001 chunks on 45 m30999| Fri Feb 22 12:24:21.639 [Balancer] receiver : shard0000 chunks on 6 m30999| Fri Feb 22 12:24:21.639 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:21.639 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_291.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { 
_id: 291.0 }, max: { _id: 342.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:21.639 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { _id: 291.0 }max: { _id: 342.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:21.639 [conn2] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:21.639 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 291.0 }, max: { _id: 342.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_291.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:21.640 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637557d9fef1ecdff547
m30001| Fri Feb 22 12:24:21.640 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:21-5127637557d9fef1ecdff548", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535861640), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 291.0 }, max: { _id: 342.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:21.641 [conn2] moveChunk request accepted at version 7|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:21.641 [conn2] moveChunk number of documents: 51
m30000| Fri Feb 22 12:24:21.642 [migrateThread] starting receiving-end of migration of chunk { _id: 291.0 } -> { _id: 342.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:21.651 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:21.651 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 291.0 } -> { _id: 342.0 }
m30001|
Fri Feb 22 12:24:21.652 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 291.0 }, max: { _id: 342.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:21.653 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 291.0 } -> { _id: 342.0 }
m30001| Fri Feb 22 12:24:21.662 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 291.0 }, max: { _id: 342.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:21.663 [conn2] moveChunk setting version to: 8|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:21.663 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:21.663 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 291.0 } -> { _id: 342.0 }
m30000| Fri Feb 22 12:24:21.663 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 291.0 } -> { _id: 342.0 }
m30000| Fri Feb 22 12:24:21.664 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:21-51276375b989ea9ef0f7b92f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535861664), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 291.0 }, max: { _id: 342.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:24:21.664 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 69 version: 7|1||51276369f1561f29e16720e6 based on: 7|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:21.664 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 7|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:21.664
[conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 69
m30001| Fri Feb 22 12:24:21.665 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:24:21.673 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 291.0 }, max: { _id: 342.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:21.673 [conn2] moveChunk updating self version to: 8|1||51276369f1561f29e16720e6 through { _id: 342.0 } -> { _id: 410.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:21.674 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:21-5127637557d9fef1ecdff549", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535861674), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 291.0 }, max: { _id: 342.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:21.674 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:24:21.674 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:24:21.674 [conn2] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 7000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 8000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:24:21.674 [conn2] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:21.674 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:21.674 [conn2] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:21.674 [cleanupOldData-5127637557d9fef1ecdff54a] (start) waiting to cleanup test.foo from { _id: 291.0 } -> { _id: 342.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:21.674 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:21.674 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:21-5127637557d9fef1ecdff54b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535861674), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 291.0 }, max: { _id: 342.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:21.674 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:21.675 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 70 version: 8|1||51276369f1561f29e16720e6 based on: 7|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:21.676 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 71 version: 8|1||51276369f1561f29e16720e6 based on: 7|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:21.676 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:21.677 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30999| Fri Feb 22 12:24:21.677 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 72 version: 8|1||51276369f1561f29e16720e6 based on: 8|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:21.677 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 8|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:21.677 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 8000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 72
m30999| Fri Feb 22 12:24:21.677 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:21.694 [cleanupOldData-5127637557d9fef1ecdff54a] waiting to remove documents for test.foo from { _id: 291.0 } -> { _id: 342.0 }
m30001| Fri Feb 22 12:24:21.694 [cleanupOldData-5127637557d9fef1ecdff54a] moveChunk starting delete for: test.foo from { _id: 291.0 } -> { _id: 342.0 }
m30001| Fri Feb 22 12:24:21.860 [cleanupOldData-5127637557d9fef1ecdff54a] moveChunk deleted 51 documents for test.foo from { _id: 291.0 } -> { _id: 342.0 }
----
Running diff1...
----
m30999| Fri Feb 22 12:24:22.081 [conn1] setShardVersion  shard0000 localhost:30000  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 8000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 72
m30999| Fri Feb 22 12:24:22.082 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
----
Running diff1...
----
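Aside: the balancer rounds in this log all follow the same decision visible above (donor/receiver/threshold). A minimal sketch of that policy, assuming a simplified model in which only chunk counts matter; the function and parameter names here are hypothetical, not MongoDB source:

```python
# Hedged sketch of the balancing decision logged by [Balancer]: compare the
# most- and least-loaded shards and move one chunk from donor to receiver
# when the spread exceeds the threshold (2 in the rounds above).

def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) when a move is warranted, else None."""
    donor = max(chunk_counts, key=chunk_counts.get)       # most chunks
    receiver = min(chunk_counts, key=chunk_counts.get)    # fewest chunks
    if chunk_counts[donor] - chunk_counts[receiver] > threshold:
        return donor, receiver
    return None

# The 12:24:21 round sees shard0001 with 45 chunks and shard0000 with 6,
# so it moves one chunk shard0001 -> shard0000.
```

Each round moves at most one chunk, which is why the counts converge by one per round (45:6, then 44:7, 43:8, 42:9) in the log.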
----
m30999| Fri Feb 22 12:24:22.677 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:22.677 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:22.678 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:22 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276376f1561f29e16720ee" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276375f1561f29e16720ed" } }
m30999| Fri Feb 22 12:24:22.678 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276376f1561f29e16720ee
m30999| Fri Feb 22 12:24:22.678 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:22.678 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:22.678 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:22.680 [Balancer] shard0001 has more chunks me:44 best: shard0000:7
m30999| Fri Feb 22 12:24:22.680 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:22.680 [Balancer] donor      : shard0001 chunks on 44
m30999| Fri Feb 22 12:24:22.680 [Balancer] receiver   : shard0000 chunks on 7
m30999| Fri Feb 22 12:24:22.680 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:22.680 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_342.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 342.0 }, max: { _id: 410.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:22.680 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard:
shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { _id: 342.0 }max: { _id: 410.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:22.680 [conn2] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:22.680 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 342.0 }, max: { _id: 410.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_342.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:22.681 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637657d9fef1ecdff54c
m30001| Fri Feb 22 12:24:22.681 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:22-5127637657d9fef1ecdff54d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535862681), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 342.0 }, max: { _id: 410.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:22.681 [conn2] moveChunk request accepted at version 8|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:22.681 [conn2] moveChunk number of documents: 68
m30000| Fri Feb 22 12:24:22.682 [migrateThread] starting receiving-end of migration of chunk { _id: 342.0 } -> { _id: 410.0 } for collection test.foo from localhost:30001 (0 slaves detected)
----
Running diff1...
----
m30000| Fri Feb 22 12:24:22.690 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:22.690 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 342.0 } -> { _id: 410.0 }
m30001| Fri Feb 22 12:24:22.692 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 342.0 }, max: { _id: 410.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:22.692 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 342.0 } -> { _id: 410.0 }
----
Running diff1...
----
m30001| Fri Feb 22 12:24:22.702 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 342.0 }, max: { _id: 410.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:22.702 [conn2] moveChunk setting version to: 9|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:22.702 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:22.702 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 342.0 } -> { _id: 410.0 }
m30000| Fri Feb 22 12:24:22.703 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 342.0 } -> { _id: 410.0 }
m30000| Fri Feb 22 12:24:22.703 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:22-51276376b989ea9ef0f7b930", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535862703), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 342.0 }, max: { _id: 410.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } }
----
Running diff1...
----
m30001| Fri Feb 22 12:24:22.712 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 342.0 }, max: { _id: 410.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:22.712 [conn2] moveChunk updating self version to: 9|1||51276369f1561f29e16720e6 through { _id: 410.0 } -> { _id: 461.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:22.713 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:22-5127637657d9fef1ecdff54e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535862713), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 342.0 }, max: { _id: 410.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:22.713 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:22.713 [conn2] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:22.713 [conn2] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:22.713 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:22.713 [conn2] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:22.713 [cleanupOldData-5127637657d9fef1ecdff54f] (start) waiting to cleanup test.foo from { _id: 342.0 } -> { _id: 410.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:22.713 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:22.713 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:22-5127637657d9fef1ecdff550", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535862713), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 342.0 }, max: { _id: 410.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:22.714 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:22.715 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 73 version: 9|1||51276369f1561f29e16720e6 based on: 8|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:22.715 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:22.715 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
----
Running diff1...
----
m30999| Fri Feb 22 12:24:22.725 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 9000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 73
m30999| Fri Feb 22 12:24:22.726 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:22.733 [cleanupOldData-5127637657d9fef1ecdff54f] waiting to remove documents for test.foo from { _id: 342.0 } -> { _id: 410.0 }
m30001| Fri Feb 22 12:24:22.733 [cleanupOldData-5127637657d9fef1ecdff54f] moveChunk starting delete for: test.foo from { _id: 342.0 } -> { _id: 410.0 }
m30001| Fri Feb 22 12:24:22.740 [cleanupOldData-5127637657d9fef1ecdff54f] moveChunk deleted 68 documents for test.foo from { _id: 342.0 } -> { _id: 410.0 }
----
Running diff1...
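Aside: each migration above walks the same receiver states reported by `moveChunk data transfer progress` (clone, then "catchup", then "steady", then commit/"done"), after which the donor bumps the collection's major version (e.g. 7|1 to 8|0). A minimal simulation of that lifecycle, under the simplifying assumption that no writes arrive mid-migration; all names are illustrative, not MongoDB source:

```python
# Hedged sketch of the chunk-migration lifecycle seen in the log:
# clone documents, catch up on writes, reach "steady", then commit,
# which advances the chunk version to (major+1)|0.

def migrate_chunk(doc_count, version):
    """Return the receiver states traversed and the post-commit version."""
    cloned = 0
    while cloned < doc_count:      # receiving end clones each document
        cloned += 1
    states = ["catchup", "steady", "done"]  # states reported back to the donor
    major, minor = version
    return states, (major + 1, 0)  # commit bumps the major version

# e.g. the 51-document chunk above takes test.foo from version 7|1 to 8|0.
```

The "Waiting for replication to catch up before entering critical section" lines correspond to the flush-to-secondaries step before "steady"; with no secondaries here (0 slaves detected) it completes immediately.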
----
m30999| Fri Feb 22 12:24:22.774 [conn1] setShardVersion  shard0000 localhost:30000  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 9000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 73
m30999| Fri Feb 22 12:24:22.775 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
----
Running diff1...
----
----
m30999| Fri Feb 22 12:24:23.716 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:23.716 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:23.717 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:23 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276377f1561f29e16720ef" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276376f1561f29e16720ee" } }
m30999| Fri Feb 22 12:24:23.717 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276377f1561f29e16720ef
m30999| Fri Feb 22 12:24:23.717 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:23.717 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:23.717 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:23.719 [Balancer] shard0001 has more chunks me:43 best: shard0000:8
m30999| Fri Feb 22 12:24:23.719 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:23.719 [Balancer] donor      : shard0001 chunks on 43
m30999| Fri Feb 22 12:24:23.719 [Balancer] receiver   : shard0000 chunks on 8
m30999| Fri Feb 22 12:24:23.719 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:23.719 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_410.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 410.0 }, max: { _id: 461.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:23.719 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard:
shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { _id: 410.0 }max: { _id: 461.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:23.719 [conn2] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:23.719 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 410.0 }, max: { _id: 461.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_410.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:23.720 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637757d9fef1ecdff551
m30001| Fri Feb 22 12:24:23.720 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:23-5127637757d9fef1ecdff552", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535863720), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 410.0 }, max: { _id: 461.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:23.722 [conn2] moveChunk request accepted at version 9|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:23.722 [conn2] moveChunk number of documents: 51
m30000| Fri Feb 22 12:24:23.722 [migrateThread] starting receiving-end of migration of chunk { _id: 410.0 } -> { _id: 461.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:23.730 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:23.730 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 410.0 } -> { _id: 461.0 }
m30000| Fri Feb 22 12:24:23.731 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 410.0 } -> { _id: 461.0 }
m30001| Fri Feb 22 12:24:23.732 [conn2] moveChunk data transfer
progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 410.0 }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:23.732 [conn2] moveChunk setting version to: 10|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:23.733 [conn10] Waiting for commit to finish
m30999| Fri Feb 22 12:24:23.734 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 74 version: 9|1||51276369f1561f29e16720e6 based on: 9|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:23.734 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 9|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:23.734 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 9000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 74
m30001| Fri Feb 22 12:24:23.734 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:24:23.742 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 410.0 } -> { _id: 461.0 }
m30000| Fri Feb 22 12:24:23.742 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 410.0 } -> { _id: 461.0 }
m30000| Fri Feb 22 12:24:23.742 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:23-51276377b989ea9ef0f7b931", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535863742), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 410.0 }, max: { _id: 461.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:23.743 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns:
"test.foo", from: "localhost:30001", min: { _id: 410.0 }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:23.743 [conn2] moveChunk updating self version to: 10|1||51276369f1561f29e16720e6 through { _id: 461.0 } -> { _id: 529.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:23.744 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:23-5127637757d9fef1ecdff553", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535863743), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 410.0 }, max: { _id: 461.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:23.744 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:23.744 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:23.744 [conn2] forking for cleanup of chunk data m30999| Fri Feb 22 12:24:23.744 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 9000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 10000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:24:23.744 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:23.744 [conn2] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:23.744 [cleanupOldData-5127637757d9fef1ecdff554] (start) waiting to cleanup test.foo from { _id: 410.0 } -> { _id: 461.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:23.744 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:23.744 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:23-5127637757d9fef1ecdff555", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535863744), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 410.0 }, max: { _id: 461.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:23.744 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:23.745 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 75 version: 10|1||51276369f1561f29e16720e6 based on: 9|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:23.746 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 76 version: 10|1||51276369f1561f29e16720e6 based on: 9|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:23.746 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:23.746 [Balancer] distributed lock
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30999| Fri Feb 22 12:24:23.747 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 77 version: 10|1||51276369f1561f29e16720e6 based on: 10|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:23.747 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 10|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:23.747 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 10000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 77
m30999| Fri Feb 22 12:24:23.747 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:23.764 [cleanupOldData-5127637757d9fef1ecdff554] waiting to remove documents for test.foo from { _id: 410.0 } -> { _id: 461.0 }
m30001| Fri Feb 22 12:24:23.764 [cleanupOldData-5127637757d9fef1ecdff554] moveChunk starting delete for: test.foo from { _id: 410.0 } -> { _id: 461.0 }
m30001| Fri Feb 22 12:24:23.807 [cleanupOldData-5127637757d9fef1ecdff554] moveChunk deleted 51 documents for test.foo from { _id: 410.0 } -> { _id: 461.0 }
----
Running diff1...
----
m30999| Fri Feb 22 12:24:24.337 [conn1] setShardVersion  shard0000 localhost:30000  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 10000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 77
m30999| Fri Feb 22 12:24:24.338 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
----
Running diff1...
----
m30999| Fri Feb 22 12:24:24.747 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:24.747 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:24.747 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:24 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276378f1561f29e16720f0" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276377f1561f29e16720ef" } }
m30999| Fri Feb 22 12:24:24.748 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276378f1561f29e16720f0
m30999| Fri Feb 22 12:24:24.748 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:24.748 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:24.748 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:24.750 [Balancer] shard0001 has more chunks me:42 best: shard0000:9
m30999| Fri Feb 22 12:24:24.750 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:24.750 [Balancer] donor      : shard0001 chunks on 42
m30999| Fri Feb 22 12:24:24.750 [Balancer] receiver   : shard0000 chunks on 9
m30999| Fri Feb 22 12:24:24.750 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:24.750 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_461.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 461.0 }, max: { _id: 529.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:24.750 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 10|1||000000000000000000000000 min: { _id: 461.0 } max: { _id: 529.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:24.750 [conn2] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:24.750 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 461.0 }, max: { _id: 529.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_461.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:24.751 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637857d9fef1ecdff556
m30001| Fri Feb 22 12:24:24.751 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:24-5127637857d9fef1ecdff557", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535864751), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 461.0 }, max: { _id: 529.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:24.752 [conn2] moveChunk request accepted at version 10|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:24.752 [conn2] moveChunk number of documents: 68
m30000| Fri Feb 22 12:24:24.752 [migrateThread] starting receiving-end of migration of chunk { _id: 461.0 } -> { _id: 529.0 } for collection test.foo from localhost:30001 (0 slaves detected)
---- Running diff1...
----
m30001| Fri Feb 22 12:24:24.763 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 529.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 55, clonedBytes: 553300, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:24:24.765 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:24.765 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 461.0 } -> { _id: 529.0 }
m30000| Fri Feb 22 12:24:24.767 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 461.0 } -> { _id: 529.0 }
m30001| Fri Feb 22 12:24:24.773 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 529.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:24:24.773 [conn2] moveChunk setting version to: 11|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:24.773 [conn10] Waiting for commit to finish
---- Running diff1...
----
m30999| Fri Feb 22 12:24:24.775 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 11000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 10000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('512763780000000000000001') }, ok: 1.0 }
m30999| Fri Feb 22 12:24:24.775 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 512763780000000000000001 needVersion : 11|0||51276369f1561f29e16720e6 mine : 10|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:24.775 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 713.0 } update: { $inc: { x: 1.0 } }
m30999| Fri Feb 22 12:24:24.775 [WriteBackListener-localhost:30001] new version change detected to 11|0||51276369f1561f29e16720e6, 1 writebacks processed at 7|0||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:24.775 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 10000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 77
m30001| Fri Feb 22 12:24:24.775 [conn6] waiting till out of critical section
m30000| Fri Feb 22 12:24:24.777 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 461.0 } -> { _id: 529.0 }
m30000| Fri Feb 22 12:24:24.777 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 461.0 } -> { _id: 529.0 }
m30000| Fri Feb 22 12:24:24.777 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:24-51276378b989ea9ef0f7b932", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535864777), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 461.0 }, max: { _id: 529.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:24:24.784 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 529.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:24.784 [conn2] moveChunk updating self version to: 11|1||51276369f1561f29e16720e6 through { _id: 529.0 } -> { _id: 580.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:24.784 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:24-5127637857d9fef1ecdff558", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535864784), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 461.0 }, max: { _id: 529.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:24.784 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:24.795 [conn2] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:24.795 [conn2] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:24.795 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:24.795 [conn2] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:24:24.796 [WriteBackListener-localhost:30001] setShardVersion failed!
m30999| { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 10000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 11000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:24:24.796 [cleanupOldData-5127637857d9fef1ecdff559] (start) waiting to cleanup test.foo from { _id: 461.0 } -> { _id: 529.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:24.796 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:24.796 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:24-5127637857d9fef1ecdff55a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535864796), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 461.0 }, max: { _id: 529.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 22, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:24.796 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:24.797 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 78 version: 11|1||51276369f1561f29e16720e6 based on: 10|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:24.797 [WriteBackListener-localhost:30001] update will be retried b/c sharding config info is stale, retries: 0 ns: test.foo data: { _id: 713.0 }
m30999| Fri Feb 22 12:24:24.797 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 78
m30999| Fri Feb 22 12:24:24.797 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:24.798 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 79 version: 11|1||51276369f1561f29e16720e6 based on: 10|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:24.798 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:24.798 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30001| Fri Feb 22 12:24:24.816 [cleanupOldData-5127637857d9fef1ecdff559] waiting to remove documents for test.foo from { _id: 461.0 } -> { _id: 529.0 }
m30001| Fri Feb 22 12:24:24.816 [cleanupOldData-5127637857d9fef1ecdff559] moveChunk starting delete for: test.foo from { _id: 461.0 } -> { _id: 529.0 }
---- Running diff1... ----
m30999| Fri Feb 22 12:24:24.819 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 78
m30999| Fri Feb 22 12:24:24.819 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:24.831 [cleanupOldData-5127637857d9fef1ecdff559] moveChunk deleted 68 documents for test.foo from { _id: 461.0 } -> { _id: 529.0 }
---- Running diff1...
----
m30999| Fri Feb 22 12:24:24.832 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 11000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 78
m30999| Fri Feb 22 12:24:24.833 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1...
----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
m30999| Fri Feb 22 12:24:25.799 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:25.799 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:25.800 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:25 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276379f1561f29e16720f1" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276378f1561f29e16720f0" } }
m30999| Fri Feb 22 12:24:25.800 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276379f1561f29e16720f1
m30999| Fri Feb 22 12:24:25.800 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:25.800 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:25.800 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:25.802 [Balancer] shard0001 has more chunks me:41 best: shard0000:10
m30999| Fri Feb 22 12:24:25.802 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:25.802 [Balancer] donor      : shard0001 chunks on 41
m30999| Fri Feb 22 12:24:25.802 [Balancer] receiver   : shard0000 chunks on 10
m30999| Fri Feb 22 12:24:25.802 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:25.802 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_529.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 529.0 }, max: { _id: 580.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:25.802 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 11|1||000000000000000000000000 min: { _id: 529.0 } max: { _id: 580.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:25.802 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:25.802 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 529.0 }, max: { _id: 580.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_529.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
---- Running diff1...
----
m30001| Fri Feb 22 12:24:25.803 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637957d9fef1ecdff55b
m30001| Fri Feb 22 12:24:25.803 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:25-5127637957d9fef1ecdff55c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535865803), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 529.0 }, max: { _id: 580.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:25.805 [conn4] moveChunk request accepted at version 11|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:25.805 [conn4] moveChunk number of documents: 51
m30000| Fri Feb 22 12:24:25.805 [migrateThread] starting receiving-end of migration of chunk { _id: 529.0 } -> { _id: 580.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:25.815 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:25.815 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 529.0 } -> { _id: 580.0 }
m30001| Fri Feb 22 12:24:25.816 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 529.0 }, max: { _id: 580.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
---- Running diff1...
----
m30000| Fri Feb 22 12:24:25.816 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 529.0 } -> { _id: 580.0 }
m30001| Fri Feb 22 12:24:25.826 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 529.0 }, max: { _id: 580.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:24:25.826 [conn4] moveChunk setting version to: 12|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:25.826 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:25.827 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 529.0 } -> { _id: 580.0 }
m30000| Fri Feb 22 12:24:25.827 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 529.0 } -> { _id: 580.0 }
m30000| Fri Feb 22 12:24:25.827 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:25-51276379b989ea9ef0f7b933", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535865827), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 529.0 }, max: { _id: 580.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } }
---- Running diff1...
----
m30999| Fri Feb 22 12:24:25.830 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 12000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 11000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('512763790000000000000002') }, ok: 1.0 }
m30999| Fri Feb 22 12:24:25.830 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 512763790000000000000002 needVersion : 12|0||51276369f1561f29e16720e6 mine : 11|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:25.830 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 2915.0 } update: { $inc: { x: 1.0 } }
m30999| Fri Feb 22 12:24:25.830 [WriteBackListener-localhost:30001] new version change detected to 12|0||51276369f1561f29e16720e6, 1 writebacks processed at 11|0||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:25.836 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 529.0 }, max: { _id: 580.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:25.836 [conn4] moveChunk updating self version to: 12|1||51276369f1561f29e16720e6 through { _id: 580.0 } -> { _id: 648.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:25.837 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:25-5127637957d9fef1ecdff55d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535865837), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 529.0 }, max: { _id: 580.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:25.837 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:25.848 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:25.848 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:25.848 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:25.848 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:24:25.848 [WriteBackListener-localhost:30001] new version change detected, 1 writebacks processed previously
m30999| Fri Feb 22 12:24:25.848 [WriteBackListener-localhost:30001] writeback failed because of stale config, retrying attempts: 1
m30999| Fri Feb 22 12:24:25.848 [WriteBackListener-localhost:30001] writeback error : { singleShard: "localhost:30001", err: "cannot queue a writeback operation to the writeback queue", code: 9517, n: 0, connectionId: 6, ok: 1.0 }
m30001| Fri Feb 22 12:24:25.848 [cleanupOldData-5127637957d9fef1ecdff55e] (start) waiting to cleanup test.foo from { _id: 529.0 } -> { _id: 580.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:25.849 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:25.849 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:25-5127637957d9fef1ecdff55f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535865849), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 529.0 }, max: { _id: 580.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 21, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:25.849 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:25.849 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 80 version: 12|1||51276369f1561f29e16720e6 based on: 11|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:25.849 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 80
m30999| Fri Feb 22 12:24:25.850 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:25.850 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 81 version: 12|1||51276369f1561f29e16720e6 based on: 11|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:25.851 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:25.851 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30001| Fri Feb 22 12:24:25.868 [cleanupOldData-5127637957d9fef1ecdff55e] waiting to remove documents for test.foo from { _id: 529.0 } -> { _id: 580.0 }
m30001| Fri Feb 22 12:24:25.868 [cleanupOldData-5127637957d9fef1ecdff55e] moveChunk starting delete for: test.foo from { _id: 529.0 } -> { _id: 580.0 }
m30001| Fri Feb 22 12:24:25.872 [cleanupOldData-5127637957d9fef1ecdff55e] moveChunk deleted 51 documents for test.foo from { _id: 529.0 } -> { _id: 580.0 }
---- Running diff1... ----
m30999| Fri Feb 22 12:24:25.874 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 80
m30999| Fri Feb 22 12:24:25.874 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
m30999| Fri Feb 22 12:24:25.965 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 12000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 80
m30999| Fri Feb 22 12:24:25.966 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1...
----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
---- Running diff1... ----
m30999| Fri Feb 22 12:24:26.852 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:26.852 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:26.852 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:26 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127637af1561f29e16720f2" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276379f1561f29e16720f1" } }
m30999| Fri Feb 22 12:24:26.853 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 5127637af1561f29e16720f2
m30999| Fri Feb 22 12:24:26.853 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:26.853 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:26.853 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:26.854 [Balancer] shard0001 has more chunks me:40 best: shard0000:11
m30999| Fri Feb 22 12:24:26.854 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:26.854 [Balancer] donor      : shard0001 chunks on 40
m30999| Fri Feb 22 12:24:26.854 [Balancer] receiver   : shard0000 chunks on 11
m30999| Fri Feb 22 12:24:26.854 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:26.854 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_580.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 580.0 }, max: { _id: 648.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:26.855 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 12|1||000000000000000000000000 min: { _id: 580.0 } max: { _id: 648.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:26.855 [conn2] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:26.855 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 580.0 }, max: { _id: 648.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_580.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:26.856 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637a57d9fef1ecdff560
m30001| Fri Feb 22 12:24:26.856 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:26-5127637a57d9fef1ecdff561", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535866856), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 580.0 }, max: { _id: 648.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:26.857 [conn2] moveChunk request accepted at version 12|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:26.857 [conn2] moveChunk number of documents: 68
m30000| Fri Feb 22 12:24:26.857 [migrateThread] starting receiving-end of migration of chunk { _id: 580.0 } -> { _id: 648.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:24:26.867 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 580.0 }, max: { _id: 648.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 58, clonedBytes: 583480, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:24:26.869 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:26.869 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 580.0 } -> { _id: 648.0 }
m30000| Fri Feb 22 12:24:26.871 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 580.0 } -> { _id: 648.0 }
m30001| Fri Feb 22 12:24:26.878 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 580.0 }, max: { _id: 648.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:24:26.878 [conn2] moveChunk setting version to: 13|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:26.878 [conn10] Waiting for commit to finish
m30999| Fri Feb 22 12:24:26.879 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 82 version: 12|1||51276369f1561f29e16720e6 based on: 12|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:26.879 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 12|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:26.879 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 82
m30001| Fri Feb 22 12:24:26.880 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:24:26.881 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 580.0 } -> { _id: 648.0 }
m30000| Fri Feb 22 12:24:26.881 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 580.0 } -> { _id: 648.0 }
m30000| Fri Feb 22 12:24:26.881 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:26-5127637ab989ea9ef0f7b934", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535866881), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 580.0 }, max: { _id: 648.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 10, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:24:26.888 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 580.0 }, max: { _id: 648.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:26.888 [conn2] moveChunk updating self version to: 13|1||51276369f1561f29e16720e6 through { _id: 648.0 } -> { _id: 699.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:26.889 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:26-5127637a57d9fef1ecdff562", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535866889), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 580.0 }, max: { _id: 648.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:26.889 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:26.889 [conn2] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:24:26.889 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:24:26.889 [conn2] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 12000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 13000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:26.889 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:26.889 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:26.889 [cleanupOldData-5127637a57d9fef1ecdff563] (start) waiting to cleanup test.foo from { _id: 580.0 } -> { _id: 648.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:26.889 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30001| Fri Feb 22 12:24:26.889 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:26-5127637a57d9fef1ecdff564", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535866889), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 580.0 }, max: { _id: 648.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:26.889 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:26.889 [Balancer] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:24:26.890 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:24:26.890 [Balancer] connected connection! 
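The `setShardVersion failed!` / `reloadConfig: true` exchange above is the expected race during a migration: mongos sent version 12|1 while the donor shard had already advanced its global version to 13|0, so mongos must reload its chunk manager and retry. A minimal sketch of that comparison (the `checkShardVersion` helper and its return shape are illustrative assumptions, not MongoDB source):

```javascript
// Sketch of the stale-version check behind "setShardVersion failed!"
// above: the shard rejects a version older than its own global
// version, and mongos reloads chunk metadata and retries.
function checkShardVersion(sentVersion, globalVersion) {
  // Versions are (major, minor) pairs, e.g. 12|1 and 13|0 in the log.
  const newer = (a, b) => a[0] > b[0] || (a[0] === b[0] && a[1] > b[1]);
  if (newer(globalVersion, sentVersion)) {
    return {
      ok: 0,
      reloadConfig: true,
      errmsg: "shard global version for collection is higher than trying to set to",
    };
  }
  return { ok: 1 };
}

console.log(checkShardVersion([12, 1], [13, 0]).ok); // prints 0: mongos must reload and retry
```

After the reload, mongos retries with the new version (13|1 later in the log) and gets `setShardVersion success`.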
m30000| Fri Feb 22 12:24:26.890 [initandlisten] connection accepted from 127.0.0.1:34080 #15 (15 connections now open) m30999| Fri Feb 22 12:24:26.891 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 83 version: 13|1||51276369f1561f29e16720e6 based on: 12|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:26.891 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:26.891 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. m30999| Fri Feb 22 12:24:26.892 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 84 version: 13|1||51276369f1561f29e16720e6 based on: 12|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:26.893 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 85 version: 13|1||51276369f1561f29e16720e6 based on: 13|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:26.893 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 13|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:26.894 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 13000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 85 m30999| Fri Feb 22 12:24:26.894 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:26.909 [cleanupOldData-5127637a57d9fef1ecdff563] waiting to remove documents for test.foo from { _id: 580.0 } -> { _id: 648.0 } m30001| Fri Feb 22 12:24:26.909 [cleanupOldData-5127637a57d9fef1ecdff563] moveChunk starting delete for: test.foo from { _id: 580.0 } -> { _id: 648.0 } m30001| Fri Feb 22 12:24:27.088 [cleanupOldData-5127637a57d9fef1ecdff563] moveChunk deleted 68 documents for test.foo from { _id: 
580.0 } -> { _id: 648.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- m30999| Fri Feb 22 12:24:27.523 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 13000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 85 m30999| Fri Feb 22 12:24:27.524 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:27.892 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:27.892 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:27.892 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:27 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127637bf1561f29e16720f3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127637af1561f29e16720f2" } } m30999| Fri Feb 22 12:24:27.893 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 5127637bf1561f29e16720f3 m30999| Fri Feb 22 12:24:27.893 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:27.893 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:27.893 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:27.895 [Balancer] shard0001 has more chunks me:39 best: shard0000:12 m30999| Fri Feb 22 12:24:27.895 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:27.895 [Balancer] donor : shard0001 chunks on 39 m30999| Fri Feb 22 12:24:27.895 [Balancer] receiver : shard0000 chunks on 12 m30999| Fri Feb 22 12:24:27.895 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:27.895 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_648.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 648.0 }, max: { _id: 699.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:27.895 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { _id: 648.0 }max: { _id: 699.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:27.895 [conn2] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:27.895 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 648.0 }, max: { _id: 699.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_648.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:27.896 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637b57d9fef1ecdff565 m30001| Fri Feb 22 12:24:27.896 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:27-5127637b57d9fef1ecdff566", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535867896), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 648.0 }, max: { _id: 699.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:27.897 [conn2] moveChunk request accepted at version 13|1||51276369f1561f29e16720e6 ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:27.897 [conn2] moveChunk number of documents: 51 m30000| Fri Feb 22 12:24:27.897 [migrateThread] starting receiving-end of migration of chunk { _id: 648.0 } -> { _id: 699.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:27.907 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:27.907 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 648.0 } -> { _id: 699.0 } m30001| Fri Feb 22 12:24:27.907 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 648.0 }, max: { _id: 699.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:27.909 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 648.0 } -> { _id: 699.0 } ---- Running diff1... ---- m30001| Fri Feb 22 12:24:27.917 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 648.0 }, max: { _id: 699.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:27.917 [conn2] moveChunk setting version to: 14|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:27.918 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:27.919 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 648.0 } -> { _id: 699.0 } m30000| Fri Feb 22 12:24:27.919 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 648.0 } -> { _id: 699.0 } m30000| Fri Feb 22 12:24:27.919 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:27-5127637bb989ea9ef0f7b935", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361535867919), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 648.0 }, max: { _id: 699.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 12 } } ---- Running diff1... ---- m30999| Fri Feb 22 12:24:27.925 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 14000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 13000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('5127637b0000000000000003') }, ok: 1.0 } m30999| Fri Feb 22 12:24:27.925 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 5127637b0000000000000003 needVersion : 14|0||51276369f1561f29e16720e6 mine : 13|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:27.925 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 2649.0 } update: { $inc: { x: 1.0 } } m30999| Fri Feb 22 12:24:27.925 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 13000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 85 m30001| Fri Feb 22 12:24:27.925 [conn6] waiting till out of critical section m30001| Fri Feb 22 12:24:27.928 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 648.0 }, max: { _id: 699.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:27.928 [conn2] moveChunk updating self version to: 14|1||51276369f1561f29e16720e6 through { _id: 699.0 } -> { _id: 767.0 } for collection 'test.foo' 
m30001| Fri Feb 22 12:24:27.928 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:27-5127637b57d9fef1ecdff567", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535867928), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 648.0 }, max: { _id: 699.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:27.928 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:27.928 [conn2] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:24:27.928 [WriteBackListener-localhost:30001] setShardVersion failed! m30001| Fri Feb 22 12:24:27.928 [conn2] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 13000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 14000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:27.929 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:27.929 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:27.929 [cleanupOldData-5127637b57d9fef1ecdff568] (start) waiting to cleanup test.foo from { _id: 648.0 } -> { _id: 699.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:27.929 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
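The `moveChunk data transfer progress` documents above walk the recipient through its states in order: `clone`, then `catchup`, then `steady`, and finally `done` once the donor's commit is accepted. A toy state machine capturing just that ordering (the transition conditions here are simplified assumptions, not the server's actual logic):

```javascript
// Recipient-side migration states as seen in the
// "moveChunk data transfer progress" log lines above.
// Transition conditions are deliberately simplified.
function nextState(state, { clonedAll = false, caughtUp = false, commitOk = false } = {}) {
  switch (state) {
    case "clone":   return clonedAll ? "catchup" : "clone";   // bulk copy of documents
    case "catchup": return caughtUp ? "steady" : "catchup";   // apply writes made during clone
    case "steady":  return commitOk ? "done" : "steady";      // waiting for donor's commit
    default:        return "done";
  }
}

console.log(nextState("steady", { commitOk: true })); // prints "done"
```

The donor polls this state and only sets the new chunk version (`moveChunk setting version to: ...`) once the recipient reports `steady`.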
m30001| Fri Feb 22 12:24:27.929 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:27-5127637b57d9fef1ecdff569", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535867929), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 648.0 }, max: { _id: 699.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:27.929 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:27.929 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 86 version: 14|1||51276369f1561f29e16720e6 based on: 13|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:27.930 [WriteBackListener-localhost:30001] update will be retried b/c sharding config info is stale, retries: 0 ns: test.foo data: { _id: 2649.0 } m30999| Fri Feb 22 12:24:27.930 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 86 m30999| Fri Feb 22 12:24:27.930 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:27.930 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 87 version: 14|1||51276369f1561f29e16720e6 based on: 13|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:27.930 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:27.931 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. 
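Each balancing round above logs a donor, a receiver, and a `threshold : 2` before picking a chunk to move. As a rough illustration, the decision reduces to comparing per-shard chunk counts and moving from the most-loaded to the least-loaded shard when the gap exceeds the threshold (the `pickMigration` helper and the plain-object shard map are assumptions for this sketch, not MongoDB source):

```javascript
// Sketch of the balancer's imbalance check seen in the log:
// move a chunk from the shard with the most chunks to the one
// with the fewest whenever the difference reaches a threshold.
function pickMigration(chunkCounts, threshold) {
  const shards = Object.keys(chunkCounts);
  let donor = shards[0];
  let receiver = shards[0];
  for (const s of shards) {
    if (chunkCounts[s] > chunkCounts[donor]) donor = s;
    if (chunkCounts[s] < chunkCounts[receiver]) receiver = s;
  }
  // "threshold : 2" in the balancer log lines above
  if (chunkCounts[donor] - chunkCounts[receiver] < threshold) return null;
  return { from: donor, to: receiver };
}

// Counts taken from the round at 12:24:27 in the log.
console.log(JSON.stringify(pickMigration({ shard0000: 12, shard0001: 39 }, 2)));
// prints {"from":"shard0001","to":"shard0000"}
```

This is why the rounds keep choosing `shard0001 -> shard0000` until the counts converge to within the threshold.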
m30001| Fri Feb 22 12:24:27.949 [cleanupOldData-5127637b57d9fef1ecdff568] waiting to remove documents for test.foo from { _id: 648.0 } -> { _id: 699.0 } m30001| Fri Feb 22 12:24:27.949 [cleanupOldData-5127637b57d9fef1ecdff568] moveChunk starting delete for: test.foo from { _id: 648.0 } -> { _id: 699.0 } ---- Running diff1... ---- m30999| Fri Feb 22 12:24:27.949 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 86 m30999| Fri Feb 22 12:24:27.950 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- m30001| Fri Feb 22 12:24:27.964 [cleanupOldData-5127637b57d9fef1ecdff568] moveChunk deleted 51 documents for test.foo from { _id: 648.0 } -> { _id: 699.0 } ---- Running diff1... ---- m30999| Fri Feb 22 12:24:27.978 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 14000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 86 m30999| Fri Feb 22 12:24:27.979 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:28.932 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:28.932 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:28.932 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:28 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127637cf1561f29e16720f4" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127637bf1561f29e16720f3" } } m30999| Fri Feb 22 12:24:28.933 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 5127637cf1561f29e16720f4 m30999| Fri Feb 22 12:24:28.933 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:28.933 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:28.933 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:28.935 [Balancer] shard0001 has more chunks me:38 best: shard0000:13 m30999| Fri Feb 22 12:24:28.935 
[Balancer] collection : test.foo m30999| Fri Feb 22 12:24:28.935 [Balancer] donor : shard0001 chunks on 38 m30999| Fri Feb 22 12:24:28.935 [Balancer] receiver : shard0000 chunks on 13 m30999| Fri Feb 22 12:24:28.935 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:28.935 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_699.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 699.0 }, max: { _id: 767.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:28.935 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 14|1||000000000000000000000000min: { _id: 699.0 }max: { _id: 767.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:28.935 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:28.935 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 699.0 }, max: { _id: 767.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_699.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:28.936 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637c57d9fef1ecdff56a m30001| Fri Feb 22 12:24:28.936 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:28-5127637c57d9fef1ecdff56b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535868936), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 699.0 }, max: { _id: 767.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:28.937 [conn4] moveChunk request accepted at version 14|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:28.937 [conn4] moveChunk number of documents: 68 m30000| Fri Feb 
22 12:24:28.938 [migrateThread] starting receiving-end of migration of chunk { _id: 699.0 } -> { _id: 767.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:24:28.948 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 699.0 }, max: { _id: 767.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 55, clonedBytes: 553300, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:28.950 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:28.950 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 699.0 } -> { _id: 767.0 } m30000| Fri Feb 22 12:24:28.952 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 699.0 } -> { _id: 767.0 } m30001| Fri Feb 22 12:24:28.958 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 699.0 }, max: { _id: 767.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:28.958 [conn4] moveChunk setting version to: 15|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:28.958 [conn10] Waiting for commit to finish m30999| Fri Feb 22 12:24:28.960 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 88 version: 14|1||51276369f1561f29e16720e6 based on: 14|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:28.960 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 14|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:28.960 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: 
"shard0001", shardHost: "localhost:30001" } 0x117fc30 88 m30001| Fri Feb 22 12:24:28.960 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:24:28.962 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 699.0 } -> { _id: 767.0 } m30000| Fri Feb 22 12:24:28.962 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 699.0 } -> { _id: 767.0 } m30000| Fri Feb 22 12:24:28.962 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:28-5127637cb989ea9ef0f7b936", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535868962), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 699.0 }, max: { _id: 767.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:24:28.968 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 699.0 }, max: { _id: 767.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:28.968 [conn4] moveChunk updating self version to: 15|1||51276369f1561f29e16720e6 through { _id: 767.0 } -> { _id: 818.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:28.969 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:28-5127637c57d9fef1ecdff56c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535868969), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 699.0 }, max: { _id: 767.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:28.969 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:28.969 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:24:28.969 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 14000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 15000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:28.969 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:28.969 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:28.969 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:28.969 [cleanupOldData-5127637c57d9fef1ecdff56d] (start) waiting to cleanup test.foo from { _id: 699.0 } -> { _id: 767.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:28.970 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30001| Fri Feb 22 12:24:28.970 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:28-5127637c57d9fef1ecdff56e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535868970), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 699.0 }, max: { _id: 767.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:28.970 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:28.970 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 89 version: 15|1||51276369f1561f29e16720e6 based on: 14|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:28.971 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 90 version: 15|1||51276369f1561f29e16720e6 based on: 14|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:28.972 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:28.972 
[Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. m30999| Fri Feb 22 12:24:28.973 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 91 version: 15|1||51276369f1561f29e16720e6 based on: 15|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:28.973 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 15|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:28.973 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 15000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 91 m30999| Fri Feb 22 12:24:28.973 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:28.989 [cleanupOldData-5127637c57d9fef1ecdff56d] waiting to remove documents for test.foo from { _id: 699.0 } -> { _id: 767.0 } m30001| Fri Feb 22 12:24:28.990 [cleanupOldData-5127637c57d9fef1ecdff56d] moveChunk starting delete for: test.foo from { _id: 699.0 } -> { _id: 767.0 } m30001| Fri Feb 22 12:24:29.041 [cleanupOldData-5127637c57d9fef1ecdff56d] moveChunk deleted 68 documents for test.foo from { _id: 699.0 } -> { _id: 767.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:29.629 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 15000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 91 m30999| Fri Feb 22 12:24:29.630 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:29.973 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:29.973 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:29.973 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:29 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127637df1561f29e16720f5" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127637cf1561f29e16720f4" } } m30999| Fri Feb 22 12:24:29.974 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 5127637df1561f29e16720f5 m30999| Fri Feb 22 12:24:29.974 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:29.974 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:29.974 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:29.975 [Balancer] shard0001 has more chunks me:37 best: shard0000:14 m30999| Fri Feb 22 12:24:29.975 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:29.975 [Balancer] donor : shard0001 chunks on 37 m30999| Fri Feb 22 12:24:29.975 [Balancer] receiver : shard0000 chunks on 14 m30999| Fri Feb 22 12:24:29.975 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:29.975 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_767.0", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 767.0 }, max: { _id: 818.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:29.975 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 15|1||000000000000000000000000min: { _id: 767.0 }max: { _id: 818.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:29.976 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:29.976 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 767.0 }, max: { _id: 818.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_767.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:29.977 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637d57d9fef1ecdff56f m30001| Fri Feb 22 12:24:29.977 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:29-5127637d57d9fef1ecdff570", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535869977), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 767.0 }, max: { _id: 818.0 }, from: "shard0001", to: "shard0000" } } ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:29.978 [conn4] moveChunk request accepted at version 15|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:29.978 [conn4] moveChunk number of documents: 51 m30000| Fri Feb 22 12:24:29.978 [migrateThread] starting receiving-end of migration of chunk { _id: 767.0 } -> { _id: 818.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:29.988 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:29.988 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 767.0 } -> { _id: 818.0 } m30001| Fri Feb 22 12:24:29.989 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 767.0 }, max: { _id: 818.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:29.989 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 767.0 } -> { _id: 818.0 } ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:29.999 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 767.0 }, max: { _id: 818.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:29.999 [conn4] moveChunk setting version to: 16|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:29.999 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:30.000 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 767.0 } -> { _id: 818.0 } m30000| Fri Feb 22 12:24:30.000 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 767.0 } -> { _id: 818.0 } m30000| Fri Feb 22 12:24:30.000 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:30-5127637eb989ea9ef0f7b937", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535870000), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 767.0 }, max: { _id: 818.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } } ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:30.009 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 767.0 }, max: { _id: 818.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:30.009 [conn4] moveChunk updating self version to: 16|1||51276369f1561f29e16720e6 through { _id: 818.0 } -> { _id: 886.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:30.010 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:30-5127637e57d9fef1ecdff571", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535870010), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 767.0 }, max: { _id: 818.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:30.010 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:30.011 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:30.011 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:30.011 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:30.011 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:30.011 [cleanupOldData-5127637e57d9fef1ecdff572] (start) waiting to cleanup test.foo from { _id: 767.0 } -> { _id: 818.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:30.011 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30001| Fri Feb 22 12:24:30.011 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:30-5127637e57d9fef1ecdff573", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535870011), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 767.0 }, max: { _id: 818.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:30.011 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:30.013 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 92 version: 16|1||51276369f1561f29e16720e6 based on: 15|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:30.013 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:30.013 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30999| Fri Feb 22 12:24:30.013 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 16000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 92
m30999| Fri Feb 22 12:24:30.014 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:30.031 [cleanupOldData-5127637e57d9fef1ecdff572] waiting to remove documents for test.foo from { _id: 767.0 } -> { _id: 818.0 }
m30001| Fri Feb 22 12:24:30.031 [cleanupOldData-5127637e57d9fef1ecdff572] moveChunk starting delete for: test.foo from { _id: 767.0 } -> { _id: 818.0 }
m30001| Fri Feb 22 12:24:30.035 [cleanupOldData-5127637e57d9fef1ecdff572] moveChunk deleted 51 documents for test.foo from { _id: 767.0 } -> { _id: 818.0 }
m30999| Fri Feb 22 12:24:30.308 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 16000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 92
m30999| Fri Feb 22 12:24:30.309 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:31.014 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:31.014 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:31.015 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:31 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127637ff1561f29e16720f6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127637df1561f29e16720f5" } }
m30999| Fri Feb 22 12:24:31.016 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 5127637ff1561f29e16720f6
m30999| Fri Feb 22 12:24:31.016 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:31.016 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:31.016 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:31.017 [Balancer] shard0001 has more chunks me:36 best: shard0000:15
m30999| Fri Feb 22 12:24:31.017 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:31.017 [Balancer] donor : shard0001 chunks on 36
m30999| Fri Feb 22 12:24:31.017 [Balancer] receiver : shard0000 chunks on 15
m30999| Fri Feb 22 12:24:31.017 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:31.017 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_818.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 818.0 }, max: { _id: 886.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:31.017 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 16|1||000000000000000000000000min: { _id: 818.0 }max: { _id: 886.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:31.018 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:31.018 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 818.0 }, max: { _id: 886.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_818.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:31.018 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127637f57d9fef1ecdff574
m30001| Fri Feb 22 12:24:31.019 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:31-5127637f57d9fef1ecdff575", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535871018), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 818.0 }, max: { _id: 886.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:31.019 [conn4] moveChunk request accepted at version 16|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:31.020 [conn4] moveChunk number of documents: 68
m30000| Fri Feb 22 12:24:31.020 [migrateThread] starting receiving-end of migration of chunk { _id: 818.0 } -> { _id: 886.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:31.030 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:31.030 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 818.0 } -> { _id: 886.0 }
m30001| Fri Feb 22 12:24:31.030 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 818.0 }, max: { _id: 886.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:24:31.031 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 818.0 } -> { _id: 886.0 }
m30001| Fri Feb 22 12:24:31.040 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 818.0 }, max: { _id: 886.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:24:31.040 [conn4] moveChunk setting version to: 17|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:31.040 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:31.041 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 818.0 } -> { _id: 886.0 }
m30000| Fri Feb 22 12:24:31.041 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 818.0 } -> { _id: 886.0 }
m30000| Fri Feb 22 12:24:31.042 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:31-5127637fb989ea9ef0f7b938", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535871041), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 818.0 }, max: { _id: 886.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30999| Fri Feb 22 12:24:31.042 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 93 version: 16|1||51276369f1561f29e16720e6 based on: 16|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:31.042 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 16|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:31.042 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 16000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 93
m30001| Fri Feb 22 12:24:31.042 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:24:31.050 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 818.0 }, max: { _id: 886.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:31.050 [conn4] moveChunk updating self version to: 17|1||51276369f1561f29e16720e6 through { _id: 886.0 } -> { _id: 937.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:31.051 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:31-5127637f57d9fef1ecdff576", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535871051), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 818.0 }, max: { _id: 886.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:31.051 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:31.051 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:31.051 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:24:31.051 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 16000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 17000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:24:31.051 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:31.051 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:31.051 [cleanupOldData-5127637f57d9fef1ecdff577] (start) waiting to cleanup test.foo from { _id: 818.0 } -> { _id: 886.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:31.052 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:31.052 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:31-5127637f57d9fef1ecdff578", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535871052), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 818.0 }, max: { _id: 886.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:31.052 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:31.053 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 94 version: 17|1||51276369f1561f29e16720e6 based on: 16|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:31.054 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 95 version: 17|1||51276369f1561f29e16720e6 based on: 16|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:31.054 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:31.054 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30999| Fri Feb 22 12:24:31.054 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 96 version: 17|1||51276369f1561f29e16720e6 based on: 17|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:31.055 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 17|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:31.055 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 17000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 96
m30999| Fri Feb 22 12:24:31.055 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:31.071 [cleanupOldData-5127637f57d9fef1ecdff577] waiting to remove documents for test.foo from { _id: 818.0 } -> { _id: 886.0 }
m30001| Fri Feb 22 12:24:31.072 [cleanupOldData-5127637f57d9fef1ecdff577] moveChunk starting delete for: test.foo from { _id: 818.0 } -> { _id: 886.0 }
m30001| Fri Feb 22 12:24:31.097 [cleanupOldData-5127637f57d9fef1ecdff577] moveChunk deleted 68 documents for test.foo from { _id: 818.0 } -> { _id: 886.0 }
---- Running diff1... ----
m30999| Fri Feb 22 12:24:31.143 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 17000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 96
m30999| Fri Feb 22 12:24:31.144 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
---- Running diff1... ----
m30999| Fri Feb 22 12:24:32.055 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:32.055 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:32.055 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:32 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276380f1561f29e16720f7" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127637ff1561f29e16720f6" } }
m30999| Fri Feb 22 12:24:32.056 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276380f1561f29e16720f7
m30999| Fri Feb 22 12:24:32.056 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:32.056 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:32.056 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:32.057 [Balancer] shard0001 has more chunks me:35 best: shard0000:16
m30999| Fri Feb 22 12:24:32.057 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:32.057 [Balancer] donor : shard0001 chunks on 35
m30999| Fri Feb 22 12:24:32.057 [Balancer] receiver : shard0000 chunks on 16
m30999| Fri Feb 22 12:24:32.057 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:32.057 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_886.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 886.0 }, max: { _id: 937.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:32.057 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 17|1||000000000000000000000000min: { _id: 886.0 }max: { _id: 937.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:32.058 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:32.058 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 886.0 }, max: { _id: 937.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_886.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:32.058 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638057d9fef1ecdff579
m30001| Fri Feb 22 12:24:32.058 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:32-5127638057d9fef1ecdff57a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535872058), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 886.0 }, max: { _id: 937.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:32.059 [conn4] moveChunk request accepted at version 17|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:32.059 [conn4] moveChunk number of documents: 51
m30000| Fri Feb 22 12:24:32.060 [migrateThread] starting receiving-end of migration of chunk { _id: 886.0 } -> { _id: 937.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:24:32.070 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 886.0 }, max: { _id: 937.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 37, clonedBytes: 372220, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:24:32.072 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:32.072 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 886.0 } -> { _id: 937.0 }
m30000| Fri Feb 22 12:24:32.074 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 886.0 } -> { _id: 937.0 }
m30001| Fri Feb 22 12:24:32.080 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 886.0 }, max: { _id: 937.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:24:32.080 [conn4] moveChunk setting version to: 18|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:32.080 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:32.084 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 886.0 } -> { _id: 937.0 }
m30000| Fri Feb 22 12:24:32.084 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 886.0 } -> { _id: 937.0 }
m30000| Fri Feb 22 12:24:32.084 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:32-51276380b989ea9ef0f7b939", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535872084), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 886.0 }, max: { _id: 937.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 12, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:24:32.090 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 886.0 }, max: { _id: 937.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:32.090 [conn4] moveChunk updating self version to: 18|1||51276369f1561f29e16720e6 through { _id: 937.0 } -> { _id: 1005.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:32.091 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:32-5127638057d9fef1ecdff57b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535872091), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 886.0 }, max: { _id: 937.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:32.091 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:32.091 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:32.091 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:32.091 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:32.091 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:32.091 [cleanupOldData-5127638057d9fef1ecdff57c] (start) waiting to cleanup test.foo from { _id: 886.0 } -> { _id: 937.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:32.091 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30001| Fri Feb 22 12:24:32.091 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:32-5127638057d9fef1ecdff57d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535872091), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 886.0 }, max: { _id: 937.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:32.091 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:32.093 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 97 version: 18|1||51276369f1561f29e16720e6 based on: 17|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:32.093 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:32.093 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 18000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 97 m30999| Fri Feb 22 12:24:32.093 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. 
m30999| Fri Feb 22 12:24:32.094 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:32.111 [cleanupOldData-5127638057d9fef1ecdff57c] waiting to remove documents for test.foo from { _id: 886.0 } -> { _id: 937.0 }
m30001| Fri Feb 22 12:24:32.111 [cleanupOldData-5127638057d9fef1ecdff57c] moveChunk starting delete for: test.foo from { _id: 886.0 } -> { _id: 937.0 }
m30001| Fri Feb 22 12:24:32.114 [cleanupOldData-5127638057d9fef1ecdff57c] moveChunk deleted 51 documents for test.foo from { _id: 886.0 } -> { _id: 937.0 }
m30999| Fri Feb 22 12:24:32.351 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 18000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 97
m30999| Fri Feb 22 12:24:32.351 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30999| Fri Feb 22 12:24:33.094 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:33.094 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:33.095 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:33 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276381f1561f29e16720f8" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276380f1561f29e16720f7" } }
m30999| Fri Feb 22 12:24:33.095 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276381f1561f29e16720f8
m30999| Fri Feb 22 12:24:33.095 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:33.095 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:33.095 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:33.097 [Balancer] shard0001 has more chunks me:34 best: shard0000:17
m30999| Fri Feb 22 12:24:33.097 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:33.097 [Balancer] donor : shard0001 chunks on 34
m30999| Fri Feb 22 12:24:33.097 [Balancer] receiver : shard0000 chunks on 17
m30999| Fri Feb 22 12:24:33.097 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:33.097 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_937.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 937.0 }, max: { _id: 1005.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:33.097 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 18|1||000000000000000000000000min: { _id: 937.0 }max: { _id: 1005.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:33.097 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:33.098 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 937.0 }, max: { _id: 1005.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_937.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:33.098 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638157d9fef1ecdff57e
m30001| Fri Feb 22 12:24:33.098 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:33-5127638157d9fef1ecdff57f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535873098), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 937.0 }, max: { _id: 1005.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:33.099 [conn4] moveChunk request accepted at version 18|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:33.099 [conn4] moveChunk number of documents: 68
m30000| Fri Feb 22 12:24:33.099 [migrateThread] starting receiving-end of migration of chunk { _id: 937.0 } -> { _id: 1005.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:33.109 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:33.109 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 937.0 } -> { _id: 1005.0 }
m30001| Fri Feb 22 12:24:33.109 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 937.0 }, max: { _id: 1005.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:33.111 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 937.0 } -> { _id: 1005.0 }
m30001| Fri Feb 22 12:24:33.120 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 937.0 }, max: { _id: 1005.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:33.120 [conn4] moveChunk setting version to: 19|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:33.120 [conn10] Waiting for commit to finish
m30999| Fri Feb 22 12:24:33.121 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 98 version: 18|1||51276369f1561f29e16720e6 based on: 18|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:33.121 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 18|1||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:33.121 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 937.0 } -> { _id: 1005.0 }
m30000| Fri Feb 22 12:24:33.121 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 937.0 } -> { _id: 1005.0 }
m30999| Fri Feb 22 12:24:33.121 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 18000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 98
m30000| Fri Feb 22 12:24:33.121 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:33-51276381b989ea9ef0f7b93a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535873121), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 937.0 }, max: { _id: 1005.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:24:33.122 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:24:33.130 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 937.0 }, max: { _id: 1005.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:33.130 [conn4] moveChunk updating self version to: 19|1||51276369f1561f29e16720e6 through { _id: 1005.0 } -> { _id: 1056.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:33.131 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:33-5127638157d9fef1ecdff580", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535873131), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 937.0 }, max: { _id: 1005.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:33.131 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:33.131 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:33.131 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:33.131 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:24:33.131 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 18000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 19000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:24:33.131 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:33.131 [cleanupOldData-5127638157d9fef1ecdff581] (start) waiting to cleanup test.foo from { _id: 937.0 } -> { _id: 1005.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:33.131 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:33.131 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:33-5127638157d9fef1ecdff582", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535873131), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 937.0 }, max: { _id: 1005.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:33.132 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:33.133 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 99 version: 19|1||51276369f1561f29e16720e6 based on: 18|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:33.134 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 100 version: 19|1||51276369f1561f29e16720e6 based on: 18|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:33.134 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:33.134 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked.
m30999| Fri Feb 22 12:24:33.135 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 101 version: 19|1||51276369f1561f29e16720e6 based on: 19|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:33.135 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 19|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:33.135 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 19000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 101
m30999| Fri Feb 22 12:24:33.135 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
m30001| Fri Feb 22 12:24:33.151 [cleanupOldData-5127638157d9fef1ecdff581] waiting to remove documents for test.foo from { _id: 937.0 } -> { _id: 1005.0 }
m30001| Fri Feb 22 12:24:33.151 [cleanupOldData-5127638157d9fef1ecdff581] moveChunk starting delete for: test.foo from { _id: 937.0 } -> { _id: 1005.0 }
m30001| Fri Feb 22 12:24:33.185 [cleanupOldData-5127638157d9fef1ecdff581] moveChunk deleted 68 documents for test.foo from { _id: 937.0 } -> { _id: 1005.0 }
----
Running diff1...
----
m30999| Fri Feb 22 12:24:33.244 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 19000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 101
m30999| Fri Feb 22 12:24:33.245 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 }
----
Running diff1...
----
m30999| Fri Feb 22 12:24:34.135 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:34.135 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 )
m30999| Fri Feb 22 12:24:34.135 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:34 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276382f1561f29e16720f9" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276381f1561f29e16720f8" } }
m30999| Fri Feb 22 12:24:34.136 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276382f1561f29e16720f9
m30999| Fri Feb 22 12:24:34.136 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:34.136 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:34.136 [Balancer] secondaryThrottle: 1
----
Running diff1...
----
m30999| Fri Feb 22 12:24:34.138 [Balancer] shard0001 has more chunks me:33 best: shard0000:18
m30999| Fri Feb 22 12:24:34.138 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:34.138 [Balancer] donor : shard0001 chunks on 33
m30999| Fri Feb 22 12:24:34.138 [Balancer] receiver : shard0000 chunks on 18
m30999| Fri Feb 22 12:24:34.138 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:34.138 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1005.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 1005.0 }, max: { _id: 1056.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:34.138 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 19|1||000000000000000000000000min: { _id: 1005.0 }max: { _id: 1056.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:34.138 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:34.138 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1005.0 }, max: { _id: 1056.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1005.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:34.139 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638257d9fef1ecdff583
m30001| Fri Feb 22 12:24:34.139 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:34-5127638257d9fef1ecdff584", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535874139), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1005.0 }, max: { _id: 1056.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:34.140 [conn4] moveChunk request accepted at version 19|1||51276369f1561f29e16720e6
m30001| Fri Feb 22 12:24:34.140 [conn4] moveChunk number of documents: 51
m30000| Fri Feb 22 12:24:34.141 [migrateThread] starting receiving-end of migration of chunk { _id: 1005.0 } -> { _id: 1056.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:34.150 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:34.150 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1005.0 } -> { _id: 1056.0 }
----
Running diff1...
----
m30001| Fri Feb 22 12:24:34.151 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1005.0 }, max: { _id: 1056.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:34.152 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1005.0 } -> { _id: 1056.0 }
m30001| Fri Feb 22 12:24:34.161 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1005.0 }, max: { _id: 1056.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:34.161 [conn4] moveChunk setting version to: 20|0||51276369f1561f29e16720e6
m30000| Fri Feb 22 12:24:34.161 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:34.162 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1005.0 } -> { _id: 1056.0 }
m30000| Fri Feb 22 12:24:34.162 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1005.0 } -> { _id: 1056.0 }
m30000| Fri Feb 22 12:24:34.162 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:34-51276382b989ea9ef0f7b93b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535874162), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1005.0 }, max: { _id: 1056.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } }
----
Running diff1...
----
m30999| Fri Feb 22 12:24:34.165 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 20000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 19000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('512763820000000000000004') }, ok: 1.0 }
m30999| Fri Feb 22 12:24:34.165 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 512763820000000000000004 needVersion : 20|0||51276369f1561f29e16720e6 mine : 19|1||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:34.165 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 1505.0 } update: { $inc: { x: 1.0 } }
m30999| Fri Feb 22 12:24:34.165 [WriteBackListener-localhost:30001] new version change detected to 20|0||51276369f1561f29e16720e6, 2 writebacks processed at 14|0||51276369f1561f29e16720e6
m30999| Fri Feb 22 12:24:34.165 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 19000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 101
m30001| Fri Feb 22 12:24:34.166 [conn6] waiting till out of critical section
m30001| Fri Feb 22 12:24:34.171 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1005.0 }, max: { _id: 1056.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:34.172 [conn4] moveChunk updating self version to: 20|1||51276369f1561f29e16720e6 through { _id: 1056.0 } -> { _id: 1124.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:34.172 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:34-5127638257d9fef1ecdff585", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535874172), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1005.0 }, max: { _id: 1056.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:34.172 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:24:34.172 [WriteBackListener-localhost:30001] setShardVersion failed!
m30999| { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 19000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 20000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:24:34.172 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:34.172 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:34.172 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:34.172 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:34.172 [cleanupOldData-5127638257d9fef1ecdff586] (start) waiting to cleanup test.foo from { _id: 1005.0 } -> { _id: 1056.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:34.173 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked.
m30001| Fri Feb 22 12:24:34.173 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:34-5127638257d9fef1ecdff587", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535874173), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1005.0 }, max: { _id: 1056.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:34.173 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:34.174 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 102 version: 20|1||51276369f1561f29e16720e6 based on: 19|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:34.174 [WriteBackListener-localhost:30001] update will be retried b/c sharding config info is stale, retries: 0 ns: test.foo data: { _id: 1505.0 } m30999| Fri Feb 22 12:24:34.174 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 20000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 102 m30999| Fri Feb 22 12:24:34.174 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:34.175 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 103 version: 20|1||51276369f1561f29e16720e6 based on: 19|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:34.175 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:34.175 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:34.189 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 20000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 102 m30999| Fri Feb 22 12:24:34.189 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:34.193 [cleanupOldData-5127638257d9fef1ecdff586] waiting to remove documents for test.foo from { _id: 1005.0 } -> { _id: 1056.0 } m30001| Fri Feb 22 12:24:34.193 [cleanupOldData-5127638257d9fef1ecdff586] moveChunk starting delete for: test.foo from { _id: 1005.0 } -> { _id: 1056.0 } m30001| Fri Feb 22 12:24:34.197 [cleanupOldData-5127638257d9fef1ecdff586] moveChunk deleted 51 documents for test.foo from { _id: 1005.0 } -> { _id: 1056.0 } ---- Running diff1... ---- ---- Running diff1... ---- m30999| Fri Feb 22 12:24:34.216 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 20000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 102 m30999| Fri Feb 22 12:24:34.217 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... ---- ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:35.176 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:35.176 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:35.177 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:35 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276383f1561f29e16720fa" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276382f1561f29e16720f9" } } m30999| Fri Feb 22 12:24:35.177 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276383f1561f29e16720fa m30999| Fri Feb 22 12:24:35.177 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:35.177 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:35.177 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:35.179 [Balancer] shard0001 has more chunks me:32 best: shard0000:19 m30999| Fri Feb 22 12:24:35.179 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:35.179 [Balancer] donor : shard0001 chunks on 32 m30999| Fri Feb 22 12:24:35.179 [Balancer] receiver : shard0000 chunks on 19 m30999| Fri Feb 22 12:24:35.179 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:35.179 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1056.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 1056.0 }, max: { _id: 1124.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:35.179 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 20|1||000000000000000000000000min: { _id: 1056.0 }max: { _id: 1124.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:35.179 [conn2] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:35.179 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1056.0 }, max: { _id: 1124.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1056.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:35.180 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638357d9fef1ecdff588 m30001| Fri Feb 22 12:24:35.180 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:35-5127638357d9fef1ecdff589", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535875180), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1056.0 }, max: { _id: 1124.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:35.180 [conn2] moveChunk request accepted at version 20|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:35.181 [conn2] moveChunk number of documents: 68 m30000| Fri Feb 22 12:24:35.181 [migrateThread] starting receiving-end of migration of chunk { _id: 1056.0 } -> { _id: 1124.0 } for collection test.foo from localhost:30001 (0 slaves detected) ---- Running diff1... 
---- m30000| Fri Feb 22 12:24:35.191 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:35.191 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1056.0 } -> { _id: 1124.0 } m30001| Fri Feb 22 12:24:35.191 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1056.0 }, max: { _id: 1124.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:35.192 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1056.0 } -> { _id: 1124.0 } m30001| Fri Feb 22 12:24:35.201 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1056.0 }, max: { _id: 1124.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:35.201 [conn2] moveChunk setting version to: 21|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:35.201 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:35.202 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1056.0 } -> { _id: 1124.0 } m30000| Fri Feb 22 12:24:35.202 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1056.0 } -> { _id: 1124.0 } m30000| Fri Feb 22 12:24:35.203 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:35-51276383b989ea9ef0f7b93c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535875203), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1056.0 }, max: { _id: 1124.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } } ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:35.211 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1056.0 }, max: { _id: 1124.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:35.211 [conn2] moveChunk updating self version to: 21|1||51276369f1561f29e16720e6 through { _id: 1124.0 } -> { _id: 1175.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:35.212 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:35-5127638357d9fef1ecdff58a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535875212), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1056.0 }, max: { _id: 1124.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:35.212 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:35.212 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:35.212 [conn2] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:35.212 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:35.212 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:35.212 [cleanupOldData-5127638357d9fef1ecdff58b] (start) waiting to cleanup test.foo from { _id: 1056.0 } -> { _id: 1124.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:35.212 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
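The recipient shard's progress above is reported through the `state` field of the moveChunk status documents (clone, then catchup, then steady, then done). A minimal sketch of that forward-only progression, modeling just the states visible in this log (real MongoDB has additional internal states; all names here are illustrative):

```javascript
// Sketch only: the recipient-side migration states visible in this log's
// "state" field, in the order a successful migration passes through them.
const MIGRATE_STATES = ["clone", "catchup", "steady", "done"];

// A migration only moves forward through the states, one step at a time.
function canAdvance(current, next) {
  const i = MIGRATE_STATES.indexOf(current);
  const j = MIGRATE_STATES.indexOf(next);
  return i !== -1 && j !== -1 && j === i + 1;
}
```

This matches the transitions in the log: the donor polls until the recipient reports "steady", then sets the new version and waits for the commit, after which the recipient reports "done".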
m30001| Fri Feb 22 12:24:35.212 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:35-5127638357d9fef1ecdff58c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535875212), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1056.0 }, max: { _id: 1124.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:35.213 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:35.214 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 104 version: 21|1||51276369f1561f29e16720e6 based on: 20|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:35.214 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:35.214 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. ---- Running diff1... ---- m30999| Fri Feb 22 12:24:35.218 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 21000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 104 m30999| Fri Feb 22 12:24:35.218 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- m30001| Fri Feb 22 12:24:35.232 [cleanupOldData-5127638357d9fef1ecdff58b] waiting to remove documents for test.foo from { _id: 1056.0 } -> { _id: 1124.0 } m30001| Fri Feb 22 12:24:35.232 [cleanupOldData-5127638357d9fef1ecdff58b] moveChunk starting delete for: test.foo from { _id: 1056.0 } -> { _id: 1124.0 } m30001| Fri Feb 22 12:24:35.236 [cleanupOldData-5127638357d9fef1ecdff58b] moveChunk deleted 68 documents for test.foo from { _id: 1056.0 } -> { _id: 1124.0 } ---- Running diff1... ---- ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:35.260 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 21000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 104 m30999| Fri Feb 22 12:24:35.261 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... ---- ---- Running diff1... ---- m30999| Fri Feb 22 12:24:36.215 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:36.215 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:36.215 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:36 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276384f1561f29e16720fb" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276383f1561f29e16720fa" } } m30999| Fri Feb 22 12:24:36.216 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276384f1561f29e16720fb m30999| Fri Feb 22 12:24:36.216 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:36.216 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:36.216 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:36.217 [Balancer] shard0001 has more chunks me:31 best: shard0000:20 m30999| Fri Feb 22 12:24:36.217 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:36.217 
[Balancer] donor : shard0001 chunks on 31 m30999| Fri Feb 22 12:24:36.217 [Balancer] receiver : shard0000 chunks on 20 m30999| Fri Feb 22 12:24:36.217 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:36.217 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1124.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 1124.0 }, max: { _id: 1175.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:36.217 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 21|1||000000000000000000000000min: { _id: 1124.0 }max: { _id: 1175.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:36.217 [conn2] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:36.218 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1124.0 }, max: { _id: 1175.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1124.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:36.218 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638457d9fef1ecdff58d m30001| Fri Feb 22 12:24:36.218 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:36-5127638457d9fef1ecdff58e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535876218), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1124.0 }, max: { _id: 1175.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:36.220 [conn2] moveChunk request accepted at version 21|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:36.220 [conn2] moveChunk number of documents: 51 m30000| Fri Feb 22 12:24:36.220 [migrateThread] starting receiving-end 
of migration of chunk { _id: 1124.0 } -> { _id: 1175.0 } for collection test.foo from localhost:30001 (0 slaves detected) ---- Running diff1... ---- m30000| Fri Feb 22 12:24:36.229 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:36.229 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1124.0 } -> { _id: 1175.0 } m30000| Fri Feb 22 12:24:36.230 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1124.0 } -> { _id: 1175.0 } m30001| Fri Feb 22 12:24:36.230 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1124.0 }, max: { _id: 1175.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:36.230 [conn2] moveChunk setting version to: 22|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:36.230 [conn10] Waiting for commit to finish ---- Running diff1... ---- m30000| Fri Feb 22 12:24:36.240 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1124.0 } -> { _id: 1175.0 } m30000| Fri Feb 22 12:24:36.240 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:36.241 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1124.0 } -> { _id: 1175.0 } m30000| Fri Feb 22 12:24:36.241 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:36-51276384b989ea9ef0f7b93d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535876241), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1124.0 }, max: { _id: 1175.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } } ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:36.251 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1124.0 }, max: { _id: 1175.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:36.251 [conn2] moveChunk updating self version to: 22|1||51276369f1561f29e16720e6 through { _id: 1175.0 } -> { _id: 1243.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:36.252 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:36-5127638457d9fef1ecdff58f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535876252), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1124.0 }, max: { _id: 1175.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:36.252 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:36.252 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:36.252 [conn2] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:36.252 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:36.252 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:36.252 [cleanupOldData-5127638457d9fef1ecdff590] (start) waiting to cleanup test.foo from { _id: 1124.0 } -> { _id: 1175.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:36.252 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30001| Fri Feb 22 12:24:36.252 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:36-5127638457d9fef1ecdff591", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535876252), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1124.0 }, max: { _id: 1175.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 21, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:36.252 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:36.253 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 105 version: 22|1||51276369f1561f29e16720e6 based on: 21|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:36.253 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:36.253 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. ---- Running diff1... ---- m30999| Fri Feb 22 12:24:36.264 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 22000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 105 m30999| Fri Feb 22 12:24:36.265 [conn1] setShardVersion success: { oldVersion: Timestamp 21000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:36.272 [cleanupOldData-5127638457d9fef1ecdff590] waiting to remove documents for test.foo from { _id: 1124.0 } -> { _id: 1175.0 } m30001| Fri Feb 22 12:24:36.272 [cleanupOldData-5127638457d9fef1ecdff590] moveChunk starting delete for: test.foo from { _id: 1124.0 } -> { _id: 1175.0 } m30001| Fri Feb 22 12:24:36.274 [cleanupOldData-5127638457d9fef1ecdff590] moveChunk deleted 51 documents for test.foo from { _id: 1124.0 } -> { _id: 1175.0 } ---- Running diff1... 
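Because these migrations run with `waitForDelete: false`, the donor reports success as soon as the commit lands and deletes the migrated range on a background `cleanupOldData` task, but only once no open cursors still reference the range ("# cursors remaining: 0" above). A hedged sketch of that gating; the function and argument names are hypothetical:

```javascript
// Hypothetical sketch: delete a migrated range only once no open cursors
// still reference it, mirroring the "# cursors remaining" check in the log.
function cleanupOldData(range, openCursorCount, deleteRange) {
  if (openCursorCount > 0) {
    return { deleted: 0, waiting: true };   // retried later by the cleanup task
  }
  return { deleted: deleteRange(range.min, range.max), waiting: false };
}
```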
---- m30999| Fri Feb 22 12:24:36.277 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 22000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 105 m30999| Fri Feb 22 12:24:36.277 [conn1] setShardVersion success: { oldVersion: Timestamp 21000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:37.254 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:37.254 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:37.254 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:37 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276385f1561f29e16720fc" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276384f1561f29e16720fb" } } m30999| Fri Feb 22 12:24:37.255 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276385f1561f29e16720fc m30999| Fri Feb 22 12:24:37.255 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:37.255 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:37.255 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:37.256 [Balancer] shard0001 has more chunks me:30 best: shard0000:21 m30999| Fri Feb 22 12:24:37.256 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:37.256 [Balancer] donor : shard0001 chunks on 30 m30999| Fri Feb 22 12:24:37.256 
[Balancer] receiver : shard0000 chunks on 21 m30999| Fri Feb 22 12:24:37.256 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:37.256 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1175.0", lastmod: Timestamp 22000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 1175.0 }, max: { _id: 1243.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:37.256 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 22|1||000000000000000000000000min: { _id: 1175.0 }max: { _id: 1243.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:37.256 [conn2] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:37.257 [conn2] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1175.0 }, max: { _id: 1243.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1175.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ---- Running diff1... 
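The Balancer lines above print the inputs to its decision: the donor's chunk count, the receiver's chunk count, and a threshold (2 here). A hedged sketch of that comparison, not the actual Balancer implementation:

```javascript
// Sketch only (not the real Balancer code): pick the shard with the most
// chunks as donor and the one with the fewest as receiver, and only migrate
// when the imbalance is at least the threshold (2 in this log).
function pickMigration(chunkCounts, threshold) {
  let donor = null, receiver = null;
  for (const [shard, count] of Object.entries(chunkCounts)) {
    if (donor === null || count > chunkCounts[donor]) donor = shard;
    if (receiver === null || count < chunkCounts[receiver]) receiver = shard;
  }
  if (chunkCounts[donor] - chunkCounts[receiver] < threshold) return null;
  return { donor, receiver };
}
```

With the counts from this round (shard0001: 30, shard0000: 21) the imbalance of 9 exceeds the threshold, so a chunk moves from shard0001 to shard0000, exactly as logged.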
---- m30001| Fri Feb 22 12:24:37.257 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638557d9fef1ecdff592 m30001| Fri Feb 22 12:24:37.257 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:37-5127638557d9fef1ecdff593", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535877257), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1175.0 }, max: { _id: 1243.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:37.258 [conn2] moveChunk request accepted at version 22|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:37.258 [conn2] moveChunk number of documents: 68 m30000| Fri Feb 22 12:24:37.258 [migrateThread] starting receiving-end of migration of chunk { _id: 1175.0 } -> { _id: 1243.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:37.268 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:37.268 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1175.0 } -> { _id: 1243.0 } m30001| Fri Feb 22 12:24:37.269 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1175.0 }, max: { _id: 1243.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:37.269 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1175.0 } -> { _id: 1243.0 } ---- Running diff1... 
---- m30001| Fri Feb 22 12:24:37.279 [conn2] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1175.0 }, max: { _id: 1243.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:37.279 [conn2] moveChunk setting version to: 23|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:37.279 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:37.280 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1175.0 } -> { _id: 1243.0 } m30000| Fri Feb 22 12:24:37.280 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1175.0 } -> { _id: 1243.0 } m30000| Fri Feb 22 12:24:37.280 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:37-51276385b989ea9ef0f7b93e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535877280), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1175.0 }, max: { _id: 1243.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } } ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:37.285 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 23000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 22000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('512763850000000000000005') }, ok: 1.0 } m30999| Fri Feb 22 12:24:37.285 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 512763850000000000000005 needVersion : 23|0||51276369f1561f29e16720e6 mine : 22|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:37.285 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 2025.0 } update: { $inc: { x: 1.0 } } m30999| Fri Feb 22 12:24:37.285 [WriteBackListener-localhost:30001] new version change detected to 23|0||51276369f1561f29e16720e6, 1 writebacks processed at 20|0||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:37.285 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 22000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 105 m30001| Fri Feb 22 12:24:37.285 [conn6] waiting till out of critical section m30001| Fri Feb 22 12:24:37.289 [conn2] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1175.0 }, max: { _id: 1243.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:37.289 [conn2] moveChunk updating self version to: 23|1||51276369f1561f29e16720e6 through { _id: 1243.0 } -> { _id: 1294.0 } for collection 'test.foo' m30001| Fri Feb 22 
12:24:37.290 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:37-5127638557d9fef1ecdff594", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535877290), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1175.0 }, max: { _id: 1243.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:37.290 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:24:37.290 [WriteBackListener-localhost:30001] setShardVersion failed! m30001| Fri Feb 22 12:24:37.290 [conn2] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", version: Timestamp 22000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), globalVersion: Timestamp 23000|0, globalVersionEpoch: ObjectId('51276369f1561f29e16720e6'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:24:37.290 [conn2] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:37.290 [conn2] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:37.290 [conn2] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:37.290 [cleanupOldData-5127638557d9fef1ecdff595] (start) waiting to cleanup test.foo from { _id: 1175.0 } -> { _id: 1243.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:37.290 [conn2] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
m30001| Fri Feb 22 12:24:37.290 [conn2] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:37-5127638557d9fef1ecdff596", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36213", time: new Date(1361535877290), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1175.0 }, max: { _id: 1243.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:37.290 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:37.291 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 106 version: 23|1||51276369f1561f29e16720e6 based on: 22|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:37.291 [WriteBackListener-localhost:30001] update will be retried b/c sharding config info is stale, retries: 0 ns: test.foo data: { _id: 2025.0 } m30999| Fri Feb 22 12:24:37.291 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 23000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 106 m30999| Fri Feb 22 12:24:37.291 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:37.292 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 107 version: 23|1||51276369f1561f29e16720e6 based on: 22|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:37.292 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:37.292 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. ---- Running diff1... 
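In the WriteBackListener lines above, mongos's cached version (22|1) is behind the shard's global version (23|0), so the written-back update is retried only after the chunk metadata is reloaded. A sketch of that retry-on-stale-version loop; all names are hypothetical, not MongoDB internals:

```javascript
// Hypothetical sketch of the retry pattern in the log: if the shard rejects a
// written-back op because the router's cached version is stale, reload the
// config metadata and retry, up to a small bound.
function retryWriteback(applyWrite, reloadConfig, maxRetries) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = applyWrite();                 // e.g. re-send the update
    if (res.ok) return res;
    if (!res.staleConfig) throw new Error("writeback failed: " + res.errmsg);
    reloadConfig();                           // refresh cached chunk versions
  }
  throw new Error("writeback still stale after " + maxRetries + " retries");
}
```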
---- m30999| Fri Feb 22 12:24:37.309 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 23000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f260 106 m30001| Fri Feb 22 12:24:37.310 [cleanupOldData-5127638557d9fef1ecdff595] waiting to remove documents for test.foo from { _id: 1175.0 } -> { _id: 1243.0 } m30001| Fri Feb 22 12:24:37.310 [cleanupOldData-5127638557d9fef1ecdff595] moveChunk starting delete for: test.foo from { _id: 1175.0 } -> { _id: 1243.0 } m30999| Fri Feb 22 12:24:37.310 [conn1] setShardVersion success: { oldVersion: Timestamp 22000|0, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30001| Fri Feb 22 12:24:37.316 [cleanupOldData-5127638557d9fef1ecdff595] moveChunk deleted 68 documents for test.foo from { _id: 1175.0 } -> { _id: 1243.0 } ---- Running diff1... ---- m30999| Fri Feb 22 12:24:37.323 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 23000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x117fc30 106 m30999| Fri Feb 22 12:24:37.323 [conn1] setShardVersion success: { oldVersion: Timestamp 22000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } ---- Running diff1... 
---- m30999| Fri Feb 22 12:24:38.293 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:38.293 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:38.293 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:38 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276386f1561f29e16720fd" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276385f1561f29e16720fc" } } m30999| Fri Feb 22 12:24:38.294 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276386f1561f29e16720fd m30999| Fri Feb 22 12:24:38.294 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:38.294 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:38.294 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:38.296 [Balancer] shard0001 has more chunks me:29 best: shard0000:22 m30999| Fri Feb 22 12:24:38.296 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:38.296 [Balancer] donor : shard0001 chunks on 29 m30999| Fri Feb 22 12:24:38.296 [Balancer] receiver : shard0000 chunks on 22 m30999| Fri Feb 22 12:24:38.296 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:38.296 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1243.0", lastmod: Timestamp 23000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 1243.0 }, max: { _id: 1294.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:38.296 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 23|1||000000000000000000000000min: { _id: 1243.0 }max: { _id: 1294.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:38.296 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:38.296 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1243.0 }, max: { _id: 1294.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1243.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:38.297 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638657d9fef1ecdff597 m30001| Fri Feb 22 12:24:38.297 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:38-5127638657d9fef1ecdff598", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535878297), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1243.0 }, max: { _id: 1294.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:38.297 [conn4] moveChunk request accepted at version 23|1||51276369f1561f29e16720e6 ---- Running diff1... 
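The balance round logged above is a simple imbalance check: the Balancer counts chunks per shard, treats the shard with the most chunks as the donor and the one with the fewest as the receiver, and schedules a single moveChunk only when the gap reaches the threshold. A minimal sketch of that decision, with illustrative names (not MongoDB's actual internals):

```javascript
// Pick a migration the way the log's balance round does: donor = most chunks,
// receiver = fewest chunks, move one chunk only if the imbalance >= threshold.
// Function and parameter names are illustrative, not MongoDB source code.
function pickMigration(chunkCounts, threshold) {
  const shards = Object.keys(chunkCounts);
  const donor = shards.reduce((a, b) => (chunkCounts[a] >= chunkCounts[b] ? a : b));
  const receiver = shards.reduce((a, b) => (chunkCounts[a] <= chunkCounts[b] ? a : b));
  if (chunkCounts[donor] - chunkCounts[receiver] < threshold) return null; // balanced enough
  return { from: donor, to: receiver };
}

// Mirrors the round above: shard0001 has 29 chunks, shard0000 has 22,
// threshold is 2, so one chunk moves shard0001 -> shard0000.
console.log(pickMigration({ shard0000: 22, shard0001: 29 }, 2));
// → { from: 'shard0001', to: 'shard0000' }
```

With `--chunkSize 1` (as this test starts mongos), chunks split quickly, so the Balancer keeps finding rounds like this until the counts are within the threshold.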
---- m30001| Fri Feb 22 12:24:38.298 [conn4] moveChunk number of documents: 51 m30000| Fri Feb 22 12:24:38.298 [migrateThread] starting receiving-end of migration of chunk { _id: 1243.0 } -> { _id: 1294.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:38.305 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:38.305 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1243.0 } -> { _id: 1294.0 } m30000| Fri Feb 22 12:24:38.306 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1243.0 } -> { _id: 1294.0 } m30001| Fri Feb 22 12:24:38.308 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1243.0 }, max: { _id: 1294.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:38.308 [conn4] moveChunk setting version to: 24|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:38.308 [conn10] Waiting for commit to finish ---- Running diff1... 
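The donor polls the recipient's migration status ("state: clone", then "state: steady") and only sets the new chunk version and commits once the recipient reports steady. A rough sketch of that state progression, with an illustrative ordering (not the exact engine states):

```javascript
// Recipient-side migration states the donor polls for in the log above.
// The linear ordering here is an illustrative simplification.
const ORDER = ["ready", "clone", "catchup", "steady", "commit", "done"];

function nextState(state) {
  const i = ORDER.indexOf(state);
  if (i < 0 || i === ORDER.length - 1) return state; // unknown or terminal
  return ORDER[i + 1];
}

// The donor may enter its critical section (version bump + commit)
// only once the recipient has caught up and reports "steady".
function canEnterCommit(state) {
  return state === "steady";
}
```

This matches the trace above: the progress document shows `cloned: 41` mid-clone, then `state: "steady"` with all 68 documents cloned, and only then does the donor log "moveChunk setting version to: 25|0...".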
---- m30999| Fri Feb 22 12:24:38.312 [WriteBackListener-localhost:30001] writebacklisten result: { data: { writeBack: true, ns: "test.foo", connectionId: 3, instanceIdent: "bs-smartos-x86-64-1.10gen.cc:30001", version: Timestamp 24000|0, versionEpoch: ObjectId('51276369f1561f29e16720e6'), yourVersion: Timestamp 23000|1, yourVersionEpoch: ObjectId('51276369f1561f29e16720e6'), msg: BinData, id: ObjectId('512763860000000000000006') }, ok: 1.0 } m30999| Fri Feb 22 12:24:38.312 [WriteBackListener-localhost:30001] connectionId: bs-smartos-x86-64-1.10gen.cc:30001:3 writebackId: 512763860000000000000006 needVersion : 24|0||51276369f1561f29e16720e6 mine : 23|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:38.312 [WriteBackListener-localhost:30001] op: update len: 78 ns: test.foo flags: 1 query: { _id: 1693.0 } update: { $inc: { x: 1.0 } } m30999| Fri Feb 22 12:24:38.312 [WriteBackListener-localhost:30001] new version change detected to 24|0||51276369f1561f29e16720e6, 1 writebacks processed at 23|0||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] new version change detected, 1 writebacks processed previously m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] writeback failed because of stale config, retrying attempts: 1 m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] writeback error : { singleShard: "localhost:30001", err: "cannot queue a writeback operation to the writeback queue", code: 9517, n: 0, connectionId: 6, ok: 1.0 } m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] new version change detected, 1 writebacks processed previously m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] writeback failed because of stale config, retrying attempts: 2 m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] writeback error : { singleShard: "localhost:30001", err: "cannot queue a writeback operation to the writeback queue", code: 9517, n: 0, 
connectionId: 6, ok: 1.0 } m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] new version change detected, 1 writebacks processed previously m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] writeback failed because of stale config, retrying attempts: 3 m30999| Fri Feb 22 12:24:38.313 [WriteBackListener-localhost:30001] writeback error : { singleShard: "localhost:30001", err: "cannot queue a writeback operation to the writeback queue", code: 9517, n: 0, connectionId: 6, ok: 1.0 } m30999| Fri Feb 22 12:24:38.314 [WriteBackListener-localhost:30001] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "shard0001" } m30999| Fri Feb 22 12:24:38.315 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 108 version: 23|1||51276369f1561f29e16720e6 based on: (empty) m30999| Fri Feb 22 12:24:38.315 [WriteBackListener-localhost:30001] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 109 version: 23|1||51276369f1561f29e16720e6 based on: 23|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:38.315 [WriteBackListener-localhost:30001] warning: chunk manager reload forced for collection 'test.foo', config version is 23|1||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:38.317 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1243.0 } -> { _id: 1294.0 } m30000| Fri Feb 22 12:24:38.317 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1243.0 } -> { _id: 1294.0 } m30000| Fri Feb 22 12:24:38.317 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:38-51276386b989ea9ef0f7b93f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535878317), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1243.0 }, max: { _id: 1294.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 
12:24:38.318 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1243.0 }, max: { _id: 1294.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:38.318 [conn4] moveChunk updating self version to: 24|1||51276369f1561f29e16720e6 through { _id: 1294.0 } -> { _id: 1362.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:38.319 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:38-5127638657d9fef1ecdff599", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535878319), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1243.0 }, max: { _id: 1294.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:38.319 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:38.319 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:38.319 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:38.319 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:38.319 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:38.319 [cleanupOldData-5127638657d9fef1ecdff59a] (start) waiting to cleanup test.foo from { _id: 1243.0 } -> { _id: 1294.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:38.319 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. 
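The writeback retries above are driven by a version comparison: the listener sent an op stamped 23|1 but the shard now requires 24|0, so the write is stale and mongos must refresh its ChunkManager before retrying. A sketch of the staleness idea, comparing an (epoch, major|minor) pair; the epoch-then-major rule here is an illustration of the concept, not MongoDB's exact comparison logic:

```javascript
// A chunk version is roughly (major|minor, epoch): the epoch changes when a
// collection is dropped/resharded, and the major component bumps when a chunk
// migrates between shards. An op stamped with an older version is stale.
function isStale(mine, needed) {
  if (mine.epoch !== needed.epoch) return true; // different collection incarnation
  return mine.major < needed.major;             // a chunk moved since we routed
}

// Mirrors the log: yourVersion 23|1 vs needVersion 24|0 (same epoch) => stale,
// so the writeback is retried after a config refresh.
console.log(isStale({ major: 23, minor: 1, epoch: "e6" },
                    { major: 24, minor: 0, epoch: "e6" }));
// → true
```

That is why the log shows "new version change detected ... retrying attempts: 1/2/3" followed by a forced ChunkManager reload: each retry fails until the router's cached version catches up with 24|0.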
m30001| Fri Feb 22 12:24:38.319 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:38-5127638657d9fef1ecdff59b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535878319), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1243.0 }, max: { _id: 1294.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:38.319 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:38.320 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 110 version: 24|1||51276369f1561f29e16720e6 based on: 23|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:38.320 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:38.321 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. m30001| Fri Feb 22 12:24:38.339 [cleanupOldData-5127638657d9fef1ecdff59a] waiting to remove documents for test.foo from { _id: 1243.0 } -> { _id: 1294.0 } m30001| Fri Feb 22 12:24:38.339 [cleanupOldData-5127638657d9fef1ecdff59a] moveChunk starting delete for: test.foo from { _id: 1243.0 } -> { _id: 1294.0 } m30001| Fri Feb 22 12:24:38.342 [cleanupOldData-5127638657d9fef1ecdff59a] moveChunk deleted 51 documents for test.foo from { _id: 1243.0 } -> { _id: 1294.0 } m30999| Fri Feb 22 12:24:39.321 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:39.322 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838 ) m30999| Fri Feb 22 12:24:39.322 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:39 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276387f1561f29e16720fe" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276386f1561f29e16720fd" } } m30999| Fri Feb 22 12:24:39.323 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' acquired, ts : 51276387f1561f29e16720fe m30999| Fri Feb 22 12:24:39.323 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:39.323 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:39.323 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:39.324 [Balancer] shard0001 has more chunks me:28 best: shard0000:23 m30999| Fri Feb 22 12:24:39.324 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:39.324 [Balancer] donor : shard0001 chunks on 28 m30999| Fri Feb 22 12:24:39.324 [Balancer] receiver : shard0000 chunks on 23 m30999| Fri Feb 22 12:24:39.324 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:39.324 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1294.0", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('51276369f1561f29e16720e6'), ns: "test.foo", min: { _id: 1294.0 }, max: { _id: 1362.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:39.324 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 24|1||000000000000000000000000min: { _id: 1294.0 }max: { _id: 1362.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:39.325 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:39.325 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1294.0 }, max: { _id: 1362.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1294.0", configdb: 
"localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:39.326 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' acquired, ts : 5127638757d9fef1ecdff59c m30001| Fri Feb 22 12:24:39.326 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:39-5127638757d9fef1ecdff59d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535879326), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1294.0 }, max: { _id: 1362.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:39.327 [conn4] moveChunk request accepted at version 24|1||51276369f1561f29e16720e6 m30001| Fri Feb 22 12:24:39.327 [conn4] moveChunk number of documents: 68 m30000| Fri Feb 22 12:24:39.328 [migrateThread] starting receiving-end of migration of chunk { _id: 1294.0 } -> { _id: 1362.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:24:39.338 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1294.0 }, max: { _id: 1362.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 41, clonedBytes: 412460, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:39.343 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:39.343 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1294.0 } -> { _id: 1362.0 } m30000| Fri Feb 22 12:24:39.345 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1294.0 } -> { _id: 1362.0 } m30001| Fri Feb 22 12:24:39.348 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1294.0 }, max: { _id: 1362.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 
1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:39.348 [conn4] moveChunk setting version to: 25|0||51276369f1561f29e16720e6 m30000| Fri Feb 22 12:24:39.348 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:39.356 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1294.0 } -> { _id: 1362.0 } m30000| Fri Feb 22 12:24:39.356 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1294.0 } -> { _id: 1362.0 } m30000| Fri Feb 22 12:24:39.356 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:39-51276387b989ea9ef0f7b940", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535879356), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1294.0 }, max: { _id: 1362.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 14, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:24:39.358 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1294.0 }, max: { _id: 1362.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 68, clonedBytes: 684080, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:39.359 [conn4] moveChunk updating self version to: 25|1||51276369f1561f29e16720e6 through { _id: 1362.0 } -> { _id: 1413.0 } for collection 'test.foo' m30001| Fri Feb 22 12:24:39.360 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:39-5127638757d9fef1ecdff59e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535879359), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1294.0 }, max: { _id: 1362.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:39.360 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:39.360 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri 
Feb 22 12:24:39.360 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:39.360 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:39.360 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:39.360 [cleanupOldData-5127638757d9fef1ecdff59f] (start) waiting to cleanup test.foo from { _id: 1294.0 } -> { _id: 1362.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:39.360 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535849:17131' unlocked. m30001| Fri Feb 22 12:24:39.360 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:39-5127638757d9fef1ecdff5a0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:46656", time: new Date(1361535879360), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1294.0 }, max: { _id: 1362.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:39.360 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:39.362 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 111 version: 25|1||51276369f1561f29e16720e6 based on: 24|1||51276369f1561f29e16720e6 m30999| Fri Feb 22 12:24:39.362 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:39.362 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838' unlocked. 
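Each balance round brackets its work with the `balancer` distributed lock shown in the log: the would-be holder flips the lock document's state from 0 to 1, stamping who/why/ts, and acquisition fails if another process already holds it. A minimal in-memory sketch of that handshake; the `Map` here is an illustrative stand-in for the `config.locks` collection, and the real acquisition is an atomic findAndModify-style update on the config server:

```javascript
// Acquire: only succeeds when the lock document is in state 0 (free).
function tryAcquire(locks, id, who, why, ts) {
  const doc = locks.get(id) || { _id: id, state: 0 };
  if (doc.state !== 0) return false; // held by another process
  locks.set(id, { _id: id, state: 1, who, why, ts });
  return true;
}

// Release: only the holder (matching ts) may return the lock to state 0.
function release(locks, id, ts) {
  const doc = locks.get(id);
  if (doc && doc.state === 1 && doc.ts === ts) {
    locks.set(id, { _id: id, state: 0, ts });
  }
}
```

This corresponds to the paired "about to acquire distributed lock ... { state: 1, who, why, ts }" and "distributed lock ... unlocked" entries that frame every round above, with the LockPinger thread keeping the holder's ping fresh so a crashed holder's lock can eventually be taken over.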
m30001| Fri Feb 22 12:24:39.380 [cleanupOldData-5127638757d9fef1ecdff59f] waiting to remove documents for test.foo from { _id: 1294.0 } -> { _id: 1362.0 } m30001| Fri Feb 22 12:24:39.380 [cleanupOldData-5127638757d9fef1ecdff59f] moveChunk starting delete for: test.foo from { _id: 1294.0 } -> { _id: 1362.0 } m30001| Fri Feb 22 12:24:39.386 [cleanupOldData-5127638757d9fef1ecdff59f] moveChunk deleted 68 documents for test.foo from { _id: 1294.0 } -> { _id: 1362.0 } m30999| Fri Feb 22 12:24:39.396 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:24:39 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535849:16838', sleeping for 30000ms m30999| Fri Feb 22 12:24:40.316 [WriteBackListener-localhost:30001] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 25000|1, versionEpoch: ObjectId('51276369f1561f29e16720e6'), serverID: ObjectId('51276369f1561f29e16720e4'), shard: "shard0001", shardHost: "localhost:30001" } 0x1181770 111 m30999| Fri Feb 22 12:24:40.316 [WriteBackListener-localhost:30001] setShardVersion success: { oldVersion: Timestamp 23000|1, oldVersionEpoch: ObjectId('51276369f1561f29e16720e6'), ok: 1.0 } m30999| Fri Feb 22 12:24:40.325 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 12:24:40.326 [conn2] end connection 127.0.0.1:36213 (5 connections now open) m30001| Fri Feb 22 12:24:40.326 [conn3] end connection 127.0.0.1:37621 (5 connections now open) m30001| Fri Feb 22 12:24:40.326 [conn6] end connection 127.0.0.1:45438 (5 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn6] end connection 127.0.0.1:57742 (14 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn7] end connection 127.0.0.1:58249 (14 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn5] end connection 127.0.0.1:40241 (14 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn14] end connection 
127.0.0.1:34172 (14 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn13] end connection 127.0.0.1:33240 (14 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn3] end connection 127.0.0.1:49322 (14 connections now open) m30000| Fri Feb 22 12:24:40.344 [conn15] end connection 127.0.0.1:34080 (14 connections now open) Fri Feb 22 12:24:41.325 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 12:24:41.326 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 12:24:41.326 [interruptThread] now exiting m30000| Fri Feb 22 12:24:41.326 dbexit: m30000| Fri Feb 22 12:24:41.326 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 12:24:41.326 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 12:24:41.326 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 12:24:41.326 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 12:24:41.326 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 12:24:41.326 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 12:24:41.326 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 12:24:41.326 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 12:24:41.326 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 12:24:41.326 [interruptThread] shutdown: final commit... 
m30000| Fri Feb 22 12:24:41.326 [conn1] end connection 127.0.0.1:55681 (7 connections now open) m30000| Fri Feb 22 12:24:41.326 [conn2] end connection 127.0.0.1:46467 (7 connections now open) m30000| Fri Feb 22 12:24:41.326 [conn8] end connection 127.0.0.1:56793 (7 connections now open) m30000| Fri Feb 22 12:24:41.326 [conn10] end connection 127.0.0.1:42013 (7 connections now open) m30000| Fri Feb 22 12:24:41.326 [conn9] end connection 127.0.0.1:62140 (7 connections now open) m30001| Fri Feb 22 12:24:41.326 [conn5] end connection 127.0.0.1:59527 (2 connections now open) m30000| Fri Feb 22 12:24:41.326 [conn11] end connection 127.0.0.1:46205 (7 connections now open) m30000| Fri Feb 22 12:24:41.326 [conn12] end connection 127.0.0.1:49592 (6 connections now open) m30000| Fri Feb 22 12:24:41.370 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 12:24:41.376 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 12:24:41.377 [interruptThread] journalCleanup... m30000| Fri Feb 22 12:24:41.377 [interruptThread] removeJournalFiles m30000| Fri Feb 22 12:24:41.377 dbexit: really exiting now Fri Feb 22 12:24:42.326 shell: stopped mongo program on port 30000 Fri Feb 22 12:24:42.366 [conn12] end connection 127.0.0.1:50639 (0 connections now open) m30001| Fri Feb 22 12:24:42.326 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 12:24:42.326 [interruptThread] now exiting m30001| Fri Feb 22 12:24:42.326 dbexit: m30001| Fri Feb 22 12:24:42.326 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 12:24:42.326 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 12:24:42.326 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 12:24:42.326 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 12:24:42.326 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 12:24:42.326 [interruptThread] shutdown: going to flush diaglog... 
m30001| Fri Feb 22 12:24:42.326 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 12:24:42.331 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 12:24:42.331 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 12:24:42.331 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 12:24:42.331 [conn1] end connection 127.0.0.1:42535 (1 connection now open) m30001| Fri Feb 22 12:24:42.355 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 12:24:42.365 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 12:24:42.365 [interruptThread] journalCleanup... m30001| Fri Feb 22 12:24:42.365 [interruptThread] removeJournalFiles m30001| Fri Feb 22 12:24:42.366 dbexit: really exiting now Fri Feb 22 12:24:43.326 shell: stopped mongo program on port 30001 *** ShardingTest slow_sharding_balance4 completed successfully in 34.414 seconds *** 34.6415 seconds Fri Feb 22 12:24:43.389 [initandlisten] connection accepted from 127.0.0.1:41019 #13 (1 connection now open) Fri Feb 22 12:24:43.390 [conn13] end connection 127.0.0.1:41019 (0 connections now open) ******************************************* Test : sharding_balance_randomorder1.js ... 
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance_randomorder1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_balance_randomorder1.js";TestData.testFile = "sharding_balance_randomorder1.js";TestData.testName = "sharding_balance_randomorder1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null; Date : Fri Feb 22 12:24:43 2013 buildlogger: could not find or import buildbot.tac for authentication MongoDB shell version: 2.4.0-rc1-pre- connecting to: 127.0.0.1:27999/test Fri Feb 22 12:24:43.567 [initandlisten] connection accepted from 127.0.0.1:34237 #14 (1 connection now open) null Resetting db path '/data/db/sharding_balance_randomorder10' Fri Feb 22 12:24:43.581 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/sharding_balance_randomorder10 --setParameter enableTestCommands=1 m30000| Fri Feb 22 12:24:43.651 [initandlisten] MongoDB starting : pid=11576 port=30000 dbpath=/data/db/sharding_balance_randomorder10 64-bit host=bs-smartos-x86-64-1.10gen.cc m30000| Fri Feb 22 12:24:43.652 [initandlisten] m30000| Fri Feb 22 12:24:43.652 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30000| Fri Feb 22 12:24:43.652 [initandlisten] ** uses to detect impending page faults. 
m30000| Fri Feb 22 12:24:43.652 [initandlisten] ** This may result in slower performance for certain use cases m30000| Fri Feb 22 12:24:43.652 [initandlisten] m30000| Fri Feb 22 12:24:43.652 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m30000| Fri Feb 22 12:24:43.652 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30000| Fri Feb 22 12:24:43.652 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30000| Fri Feb 22 12:24:43.652 [initandlisten] allocator: system m30000| Fri Feb 22 12:24:43.652 [initandlisten] options: { dbpath: "/data/db/sharding_balance_randomorder10", port: 30000, setParameter: [ "enableTestCommands=1" ] } m30000| Fri Feb 22 12:24:43.652 [initandlisten] journal dir=/data/db/sharding_balance_randomorder10/journal m30000| Fri Feb 22 12:24:43.652 [initandlisten] recover : no journal files present, no recovery needed m30000| Fri Feb 22 12:24:43.667 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/local.ns, filling with zeroes... m30000| Fri Feb 22 12:24:43.667 [FileAllocator] creating directory /data/db/sharding_balance_randomorder10/_tmp m30000| Fri Feb 22 12:24:43.668 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/local.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:24:43.668 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/local.0, filling with zeroes... 
m30000| Fri Feb 22 12:24:43.668 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/local.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:24:43.670 [initandlisten] waiting for connections on port 30000 m30000| Fri Feb 22 12:24:43.670 [websvr] admin web console waiting for connections on port 31000 m30000| Fri Feb 22 12:24:43.784 [initandlisten] connection accepted from 127.0.0.1:62231 #1 (1 connection now open) Resetting db path '/data/db/sharding_balance_randomorder11' Fri Feb 22 12:24:43.786 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/sharding_balance_randomorder11 --setParameter enableTestCommands=1 m30001| Fri Feb 22 12:24:43.854 [initandlisten] MongoDB starting : pid=11577 port=30001 dbpath=/data/db/sharding_balance_randomorder11 64-bit host=bs-smartos-x86-64-1.10gen.cc m30001| Fri Feb 22 12:24:43.854 [initandlisten] m30001| Fri Feb 22 12:24:43.854 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m30001| Fri Feb 22 12:24:43.854 [initandlisten] ** uses to detect impending page faults. 
m30001| Fri Feb 22 12:24:43.854 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:24:43.854 [initandlisten]
m30001| Fri Feb 22 12:24:43.854 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:24:43.854 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:24:43.854 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:24:43.854 [initandlisten] allocator: system
m30001| Fri Feb 22 12:24:43.854 [initandlisten] options: { dbpath: "/data/db/sharding_balance_randomorder11", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:24:43.855 [initandlisten] journal dir=/data/db/sharding_balance_randomorder11/journal
m30001| Fri Feb 22 12:24:43.855 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:24:43.867 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder11/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:24:43.867 [FileAllocator] creating directory /data/db/sharding_balance_randomorder11/_tmp
m30001| Fri Feb 22 12:24:43.867 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder11/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:24:43.867 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder11/local.0, filling with zeroes...
m30001| Fri Feb 22 12:24:43.867 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder11/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:24:43.870 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:24:43.870 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:24:43.988 [initandlisten] connection accepted from 127.0.0.1:51304 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:24:43.989 [initandlisten] connection accepted from 127.0.0.1:44355 #2 (2 connections now open)
ShardingTest sharding_balance_randomorder1 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:24:43.995 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -vv --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:24:44.011 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:24:44.012 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=11579 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:24:44.012 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:24:44.012 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:24:44.012 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], vv: true }
m30999| Fri Feb 22 12:24:44.012 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 12:24:44.012 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:24:44.013 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:24:44.013 [mongosMain] connected connection!
m30000| Fri Feb 22 12:24:44.013 [initandlisten] connection accepted from 127.0.0.1:55799 #3 (3 connections now open)
m30999| Fri Feb 22 12:24:44.014 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:24:44.014 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:24:44.014 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:24:44.014 [initandlisten] connection accepted from 127.0.0.1:48162 #4 (4 connections now open)
m30999| Fri Feb 22 12:24:44.014 [mongosMain] connected connection!
m30000| Fri Feb 22 12:24:44.015 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:24:44.022 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:24:44.023 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 12:24:44.023 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 12:24:44.023 [mongosMain] skew from remote server localhost:30000 found: 0
m30999| Fri Feb 22 12:24:44.023 [mongosMain] total clock skew of 0ms for servers localhost:30000 is in 30000ms bounds.
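An editor's note on the skew check logged just above: before taking a distributed lock, mongos samples the clock skew against each config server and requires the total spread to stay inside the lock's ping interval (30000 ms here). A minimal JavaScript sketch of that bounds check, assuming "total clock skew" means the spread between the fastest and slowest sampled clocks (the function name is hypothetical, for illustration only):

```javascript
// Hypothetical sketch of the "total clock skew ... is in 30000ms bounds" check.
// skewsMs holds the sampled skew (in ms) against each config server.
function skewWithinBounds(skewsMs, boundsMs) {
  const spread = Math.max(...skewsMs) - Math.min(...skewsMs);
  return spread <= boundsMs;
}

// The run above sampled 0ms skew three times against localhost:30000,
// so the spread is 0ms and the check passes.
const ok = skewWithinBounds([0, 0, 0], 30000);
```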
m30999| Fri Feb 22 12:24:44.023 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:44.023 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:24:44.023 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 12:24:44.023 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:44 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "5127638ca620d63c9f282dcb" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m30000| Fri Feb 22 12:24:44.024 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/config.ns, filling with zeroes...
m30000| Fri Feb 22 12:24:44.024 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:24:44.024 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/config.0, filling with zeroes...
m30000| Fri Feb 22 12:24:44.024 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:24:44.024 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/config.1, filling with zeroes...
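The two documents logged above are the essence of the distributed-lock handshake: the update to apply (`state: 1` plus `who`/`why`/`ts`) and the precondition it is applied under (`state: 0`, i.e. currently unlocked), a compare-and-set against `config.locks`. A minimal JavaScript sketch of that pattern, with a plain in-memory object standing in for the config server document (the `tryAcquireLock` helper is hypothetical, for illustration only):

```javascript
// Illustrative compare-and-set lock acquisition, mirroring the documents
// logged by mongosMain. The real implementation updates config.locks on the
// config server; here a plain object stands in for that document.
function tryAcquireLock(lockDoc, who, why, ts) {
  // Precondition logged as { "_id": ..., "state": 0 }: only an unlocked
  // document may be taken.
  if (lockDoc.state !== 0) {
    return false; // another process holds the lock
  }
  // Update logged as { "state": 1, "who": ..., "why": ..., "ts": ... }.
  lockDoc.state = 1;
  lockDoc.who = who;
  lockDoc.why = why;
  lockDoc.ts = ts;
  return true;
}

const lock = { _id: "configUpgrade", state: 0 };
const first = tryAcquireLock(lock, "mongosMain", "upgrading config database to new format v4", "5127638c");
const second = tryAcquireLock(lock, "Balancer", "doing balance round", "51276392");
```

The second attempt fails because `state` is already 1, which is exactly why the balancer later takes its own lock (`balancer`) only after `configUpgrade` has been unlocked.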
m30000| Fri Feb 22 12:24:44.024 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:24:44.028 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:24:44.029 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:24:44.029 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:24:44.030 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:24:44.031 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:24:44 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838', sleeping for 30000ms
m30000| Fri Feb 22 12:24:44.031 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:24:44.031 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:24:44.031 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127638ca620d63c9f282dcb
m30999| Fri Feb 22 12:24:44.033 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:24:44.033 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:24:44.034 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:44-5127638ca620d63c9f282dcc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535884034), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:24:44.034 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:24:44.034 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:24:44.034 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:24:44.035 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:24:44.035 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:24:44.035 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:44-5127638ca620d63c9f282dce", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535884035), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:24:44.036 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:24:44.036 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:24:44.037 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:24:44.038 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:24:44.038 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:24:44.038 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:24:44.038 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:24:44.038 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:24:44.038 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 12:24:44.038 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 12:24:44.038 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:24:44.038 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 12:24:44.039 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 12:24:44.040 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:24:44.040 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 12:24:44.040 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 12:24:44.041 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:24:44.041 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 12:24:44.041 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:24:44.041 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 12:24:44.042 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:24:44.042 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 12:24:44.043 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:24:44.043 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 12:24:44.043 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 12:24:44.044 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:24:44.045 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:24:44.045 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:24:44
m30999| Fri Feb 22 12:24:44.045 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:24:44.045 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:24:44.045 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:24:44.045 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 12:24:44.045 [Balancer] connected connection!
m30000| Fri Feb 22 12:24:44.045 [initandlisten] connection accepted from 127.0.0.1:37831 #5 (5 connections now open)
m30000| Fri Feb 22 12:24:44.046 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:24:44.046 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:44.046 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:44.047 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:24:44.047 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:44 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127638ca620d63c9f282dd0" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 12:24:44.047 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127638ca620d63c9f282dd0
m30999| Fri Feb 22 12:24:44.047 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:44.047 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:44.047 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:44.047 [Balancer] no collections to balance
m30999| Fri Feb 22 12:24:44.047 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:24:44.047 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:44.048 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30999| Fri Feb 22 12:24:44.196 [mongosMain] connection accepted from 127.0.0.1:50413 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 12:24:44.198 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:24:44.199 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:24:44.199 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:24:44.200 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:24:44.201 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 12:24:44.202 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:24:44.202 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:24:44.203 [conn1] connected connection!
m30001| Fri Feb 22 12:24:44.203 [initandlisten] connection accepted from 127.0.0.1:38678 #2 (2 connections now open)
m30999| Fri Feb 22 12:24:44.204 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 12:24:44.204 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:24:44.205 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:24:44.205 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:24:44.205 [conn1] enabling sharding on: test
m30999| Fri Feb 22 12:24:44.206 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.settings", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.206 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.206 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.206 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:24:44.206 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:24:44.206 [conn1] connected connection!
m30000| Fri Feb 22 12:24:44.206 [initandlisten] connection accepted from 127.0.0.1:46871 #6 (6 connections now open)
m30999| Fri Feb 22 12:24:44.206 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5127638ca620d63c9f282dcf
m30999| Fri Feb 22 12:24:44.207 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 12:24:44.207 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 12:24:44.207 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5127638ca620d63c9f282dcf'), authoritative: true }
m30999| Fri Feb 22 12:24:44.207 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:24:44.207 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:24:44.207 [conn1] connected connection!
m30999| Fri Feb 22 12:24:44.207 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5127638ca620d63c9f282dcf
m30001| Fri Feb 22 12:24:44.207 [initandlisten] connection accepted from 127.0.0.1:48791 #3 (3 connections now open)
m30999| Fri Feb 22 12:24:44.207 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 12:24:44.207 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Fri Feb 22 12:24:44.207 [conn1] initial sharding settings : { setShardVersion: "", init: true, configdb: "localhost:30000", serverID: ObjectId('5127638ca620d63c9f282dcf'), authoritative: true }
m30999| Fri Feb 22 12:24:44.208 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.208 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.208 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.208 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "chunksize", value: 1 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "_id" : "chunksize", "value" : 1 }
m30001| Fri Feb 22 12:24:44.209 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder11/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:24:44.209 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder11/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:24:44.209 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder11/test.0, filling with zeroes...
m30001| Fri Feb 22 12:24:44.210 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder11/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:24:44.210 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder11/test.1, filling with zeroes...
m30001| Fri Feb 22 12:24:44.210 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder11/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:24:44.213 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 12:24:44.214 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:24:44.511 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:24:44.511 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:24:44.511 [conn1] connected connection!
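Shortly after this point the log shows mongos creating 41 chunks for `test.foo` under the 1 MB chunk size set by `--chunkSize 1` ("request split points lookup", "going to create 41 chunk(s)"). A toy JavaScript sketch of split-point selection under that idea: walk documents in shard-key order and emit a split key whenever the accumulated size reaches the chunk limit. This is a simplified stand-in, not mongod's actual `splitVector` implementation, and `pickSplitPoints` is a hypothetical name:

```javascript
// Toy split-point selection: one boundary each time accumulated document
// size reaches maxChunkSizeBytes (simplified; the real splitVector also
// considers object counts and averages).
function pickSplitPoints(docs, maxChunkSizeBytes) {
  const splits = [];
  let acc = 0;
  for (const doc of docs) {
    acc += doc.size;
    if (acc >= maxChunkSizeBytes) {
      splits.push(doc.key); // chunk boundary at this shard-key value
      acc = 0;
    }
  }
  return splits;
}

// 100 uniform 50 KB documents against a 1 MB chunk size: a boundary lands
// every 21 documents, at keys 20, 41, 62, 83 (four split points, five chunks).
const docs = Array.from({ length: 100 }, (_, i) => ({ key: i, size: 50 * 1024 }));
const points = pickSplitPoints(docs, 1024 * 1024);
```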
m30001| Fri Feb 22 12:24:44.511 [initandlisten] connection accepted from 127.0.0.1:36585 #4 (4 connections now open)
m30999| Fri Feb 22 12:24:44.512 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:24:44.512 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:24:44.513 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:24:44.514 [conn1] going to create 41 chunk(s) for: test.foo using new epoch 5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:44.519 [conn1] major version query from 0|0||5127638ca620d63c9f282dd1 and over 0 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 0|0 } } ] }
m30999| Fri Feb 22 12:24:44.519 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:24:44.519 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:24:44.519 [conn1] connected connection!
m30000| Fri Feb 22 12:24:44.519 [initandlisten] connection accepted from 127.0.0.1:36530 #7 (7 connections now open)
m30999| Fri Feb 22 12:24:44.521 [conn1] loaded 41 chunks into new chunk manager for test.foo with version 1|40||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:44.521 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 1|40||5127638ca620d63c9f282dd1 based on: (empty)
m30000| Fri Feb 22 12:24:44.521 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 12:24:44.523 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:24:44.523 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|40||5127638ca620d63c9f282dd1 manager: 0x1183d20
m30999| Fri Feb 22 12:24:44.523 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('5127638ca620d63c9f282dd1'), serverID: ObjectId('5127638ca620d63c9f282dcf'), shard: "shard0001", shardHost: "localhost:30001" } 0x11812b0 2
m30999| Fri Feb 22 12:24:44.523 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:24:44.523 [conn1] have to set shard version for conn: localhost:30001 ns:test.foo my last seq: 0 current: 2 version: 1|40||5127638ca620d63c9f282dd1 manager: 0x1183d20
m30999| Fri Feb 22 12:24:44.523 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|40, versionEpoch: ObjectId('5127638ca620d63c9f282dd1'), serverID: ObjectId('5127638ca620d63c9f282dcf'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x11812b0 2
m30001| Fri Feb 22 12:24:44.523 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 12:24:44.524 [initandlisten] connection accepted from 127.0.0.1:50229 #8 (8 connections now open)
m30999| Fri Feb 22 12:24:44.525 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: {} }, fields: {} } and CInfo { v_ns: "config.chunks", filter: {} }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 41.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.526 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.527 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 0, "shard0001" : 41 }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.529 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 0, "shard0001" : 41 }
41
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.530 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:44.531 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 0, "shard0001" : 41 }
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:49.534 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:49.535 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 0, "shard0001" : 41 }
m30999| Fri Feb 22 12:24:50.048 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:50.049 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:50.049 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:50 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276392a620d63c9f282dd2" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127638ca620d63c9f282dd0" } }
m30999| Fri Feb 22 12:24:50.050 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276392a620d63c9f282dd2
m30999| Fri Feb 22 12:24:50.050 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:50.050 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:50.050 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 12:24:50.052 [conn3] build index config.tags { _id: 1 }
m30000| Fri Feb 22 12:24:50.055 [conn3] build index done. scanned 0 total records. 0.002 secs
m30000| Fri Feb 22 12:24:50.055 [conn3] info: creating collection config.tags on add index
m30000| Fri Feb 22 12:24:50.055 [conn3] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 12:24:50.057 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:24:50.058 [Balancer] shard0001 has more chunks me:41 best: shard0000:0
m30999| Fri Feb 22 12:24:50.058 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:50.058 [Balancer] donor : shard0001 chunks on 41
m30999| Fri Feb 22 12:24:50.058 [Balancer] receiver : shard0000 chunks on 0
m30999| Fri Feb 22 12:24:50.058 [Balancer] threshold : 4
m30999| Fri Feb 22 12:24:50.058 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:50.058 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0241540502756834 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:50.058 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:50.058 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:24:50.059 [initandlisten] connection accepted from 127.0.0.1:53631 #9 (9 connections now open)
m30001| Fri Feb 22 12:24:50.060 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841 (sleeping for 
30000ms) m30001| Fri Feb 22 12:24:50.061 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763923d95f035bd477cc3 m30001| Fri Feb 22 12:24:50.061 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:50-512763923d95f035bd477cc4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535890061), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:50.062 [conn4] moveChunk request accepted at version 1|40||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:24:50.062 [conn4] moveChunk number of documents: 51 m30000| Fri Feb 22 12:24:50.063 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0241540502756834 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:24:50.063 [initandlisten] connection accepted from 127.0.0.1:35004 #5 (5 connections now open) m30000| Fri Feb 22 12:24:50.064 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/test.ns, filling with zeroes... m30000| Fri Feb 22 12:24:50.064 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/test.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:24:50.064 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/test.0, filling with zeroes... m30000| Fri Feb 22 12:24:50.065 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/test.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:24:50.065 [FileAllocator] allocating new datafile /data/db/sharding_balance_randomorder10/test.1, filling with zeroes... 
m30000| Fri Feb 22 12:24:50.065 [FileAllocator] done allocating datafile /data/db/sharding_balance_randomorder10/test.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:24:50.068 [migrateThread] build index test.foo { _id: 1 } m30000| Fri Feb 22 12:24:50.070 [migrateThread] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:24:50.070 [migrateThread] info: creating collection test.foo on add index m30001| Fri Feb 22 12:24:50.073 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:24:50.079 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:50.079 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30000| Fri Feb 22 12:24:50.082 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30001| Fri Feb 22 12:24:50.083 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:50.083 [conn4] moveChunk setting version to: 2|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:24:50.083 [initandlisten] connection accepted from 127.0.0.1:38768 #10 (10 connections now open) m30000| Fri Feb 22 12:24:50.084 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:50.093 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30000| Fri Feb 22 12:24:50.093 [migrateThread] migrate 
commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30000| Fri Feb 22 12:24:50.093 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:50-51276392865ed09de9e1dacd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535890093), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, step1 of 5: 7, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 13 } } m30000| Fri Feb 22 12:24:50.093 [initandlisten] connection accepted from 127.0.0.1:49696 #11 (11 connections now open) m30001| Fri Feb 22 12:24:50.094 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:50.094 [conn4] moveChunk updating self version to: 2|1||5127638ca620d63c9f282dd1 through { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } for collection 'test.foo' m30000| Fri Feb 22 12:24:50.094 [initandlisten] connection accepted from 127.0.0.1:64844 #12 (12 connections now open) m30001| Fri Feb 22 12:24:50.095 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:50-512763923d95f035bd477cc5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535890095), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:50.095 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:50.095 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:50.095 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 
12:24:50.095 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:50.095 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:50.095 [cleanupOldData-512763923d95f035bd477cc6] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:50.095 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. m30001| Fri Feb 22 12:24:50.096 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:50-512763923d95f035bd477cc7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535890096), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:50.096 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:50.096 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 1|40||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:24:50.096 [Balancer] major version query from 1|40||5127638ca620d63c9f282dd1 and over 1 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 1000|40 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 1000|40 } } ] } m30999| Fri Feb 22 12:24:50.097 [Balancer] loaded 3 chunks into new chunk manager for test.foo with version 2|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:24:50.097 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||5127638ca620d63c9f282dd1 based on: 1|40||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:24:50.097 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:50.097 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30001| Fri Feb 22 12:24:50.115 [cleanupOldData-512763923d95f035bd477cc6] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 } m30001| Fri Feb 22 12:24:50.115 [cleanupOldData-512763923d95f035bd477cc6] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 } m30001| Fri Feb 22 12:24:50.121 [cleanupOldData-512763923d95f035bd477cc6] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 } m30999| Fri Feb 22 12:24:51.098 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:51.098 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:24:51.098 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:51 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276393a620d63c9f282dd3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276392a620d63c9f282dd2" } } m30999| Fri Feb 22 12:24:51.099 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276393a620d63c9f282dd3 m30999| Fri Feb 22 12:24:51.099 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:51.099 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:51.099 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:51.100 [Balancer] shard0001 has more chunks me:40 best: shard0000:1 m30999| Fri Feb 22 12:24:51.100 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:51.100 [Balancer] donor : shard0001 chunks on 40 m30999| Fri Feb 22 
12:24:51.100 [Balancer] receiver : shard0000 chunks on 1 m30999| Fri Feb 22 12:24:51.100 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:51.100 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.0241540502756834", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:51.101 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 0.0241540502756834 }max: { _id: 0.04868675791658461 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:51.101 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:51.101 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0241540502756834", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:51.102 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763933d95f035bd477cc8 m30001| Fri Feb 22 12:24:51.102 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:51-512763933d95f035bd477cc9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535891102), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:51.103 [conn4] moveChunk request accepted at version 2|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:24:51.103 [conn4] moveChunk number of documents: 52 m30000| Fri 
Feb 22 12:24:51.103 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:51.111 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:51.111 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30000| Fri Feb 22 12:24:51.113 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30001| Fri Feb 22 12:24:51.113 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:51.113 [conn4] moveChunk setting version to: 3|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:24:51.114 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:51.123 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30000| Fri Feb 22 12:24:51.123 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30000| Fri Feb 22 12:24:51.123 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:51-51276393865ed09de9e1dace", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535891123), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:24:51.124 [conn4] moveChunk 
migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:51.124 [conn4] moveChunk updating self version to: 3|1||5127638ca620d63c9f282dd1 through { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } for collection 'test.foo' m30001| Fri Feb 22 12:24:51.124 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:51-512763933d95f035bd477cca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535891124), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:51.124 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:51.124 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:51.124 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:51.124 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:51.124 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:51.125 [cleanupOldData-512763933d95f035bd477ccb] (start) waiting to cleanup test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:51.125 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. 
m30001| Fri Feb 22 12:24:51.125 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:51-512763933d95f035bd477ccc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535891125), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:51.125 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:51.125 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 2|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:24:51.125 [Balancer] major version query from 2|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 2000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 2000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 2000|1 } } ] } m30999| Fri Feb 22 12:24:51.126 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 3|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:24:51.126 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 3|1||5127638ca620d63c9f282dd1 based on: 2|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:24:51.126 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:51.127 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30001| Fri Feb 22 12:24:51.145 [cleanupOldData-512763933d95f035bd477ccb] waiting to remove documents for test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30001| Fri Feb 22 12:24:51.145 [cleanupOldData-512763933d95f035bd477ccb] moveChunk starting delete for: test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30001| Fri Feb 22 12:24:51.149 [cleanupOldData-512763933d95f035bd477ccb] moveChunk deleted 52 documents for test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30999| Fri Feb 22 12:24:52.127 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:24:52.127 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:24:52.128 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:24:52 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276394a620d63c9f282dd4" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276393a620d63c9f282dd3" } } m30999| Fri Feb 22 12:24:52.128 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276394a620d63c9f282dd4 m30999| Fri Feb 22 12:24:52.128 [Balancer] *** start balancing round m30999| Fri Feb 22 12:24:52.128 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:24:52.128 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:24:52.129 [Balancer] shard0001 has more chunks me:39 best: shard0000:2 m30999| Fri Feb 22 12:24:52.129 [Balancer] collection : test.foo m30999| Fri Feb 22 12:24:52.129 [Balancer] donor : shard0001 
chunks on 39 m30999| Fri Feb 22 12:24:52.129 [Balancer] receiver : shard0000 chunks on 2 m30999| Fri Feb 22 12:24:52.129 [Balancer] threshold : 2 m30999| Fri Feb 22 12:24:52.129 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.04868675791658461", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:24:52.129 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 0.04868675791658461 }max: { _id: 0.07311806757934391 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:24:52.130 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:24:52.130 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.04868675791658461", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:24:52.131 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763943d95f035bd477ccd m30001| Fri Feb 22 12:24:52.131 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:52-512763943d95f035bd477cce", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535892131), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:52.132 [conn4] moveChunk request accepted at version 3|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:24:52.132 [conn4] moveChunk 
number of documents: 52 m30000| Fri Feb 22 12:24:52.132 [migrateThread] starting receiving-end of migration of chunk { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:24:52.139 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:24:52.139 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30000| Fri Feb 22 12:24:52.140 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30001| Fri Feb 22 12:24:52.142 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:24:52.142 [conn4] moveChunk setting version to: 4|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:24:52.142 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:24:52.150 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30000| Fri Feb 22 12:24:52.150 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30000| Fri Feb 22 12:24:52.150 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:52-51276394865ed09de9e1dacf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535892150), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri 
Feb 22 12:24:52.152 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:24:52.152 [conn4] moveChunk updating self version to: 4|1||5127638ca620d63c9f282dd1 through { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } for collection 'test.foo' m30001| Fri Feb 22 12:24:52.153 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:52-512763943d95f035bd477ccf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535892153), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:24:52.161 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:52.161 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:52.162 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:24:52.162 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:24:52.162 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:24:52.162 [cleanupOldData-512763943d95f035bd477cd0] (start) waiting to cleanup test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 }, # cursors remaining: 0 m30001| Fri Feb 22 12:24:52.162 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. 
m30001| Fri Feb 22 12:24:52.162 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:52-512763943d95f035bd477cd1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535892162), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 19, step6 of 6: 0 } } m30999| Fri Feb 22 12:24:52.162 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:24:52.163 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 3|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:24:52.163 [Balancer] major version query from 3|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 3000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 3000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 3000|1 } } ] } m30999| Fri Feb 22 12:24:52.163 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 4|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:24:52.163 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 5 version: 4|1||5127638ca620d63c9f282dd1 based on: 3|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:24:52.163 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:24:52.164 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30001| Fri Feb 22 12:24:52.182 [cleanupOldData-512763943d95f035bd477cd0] waiting to remove documents for test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 }
m30001| Fri Feb 22 12:24:52.182 [cleanupOldData-512763943d95f035bd477cd0] moveChunk starting delete for: test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 }
m30001| Fri Feb 22 12:24:52.186 [cleanupOldData-512763943d95f035bd477cd0] moveChunk deleted 52 documents for test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 }
m30999| Fri Feb 22 12:24:53.164 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:53.165 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:53.165 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:53 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276395a620d63c9f282dd5" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276394a620d63c9f282dd4" } }
m30999| Fri Feb 22 12:24:53.166 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276395a620d63c9f282dd5
m30999| Fri Feb 22 12:24:53.166 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:53.166 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:53.166 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:53.167 [Balancer] shard0001 has more chunks me:38 best: shard0000:3
m30999| Fri Feb 22 12:24:53.167 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:53.167 [Balancer] donor : shard0001 chunks on 38
m30999| Fri Feb 22 12:24:53.167 [Balancer] receiver : shard0000 chunks on 3
m30999| Fri Feb 22 12:24:53.167 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:53.167 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.07311806757934391", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:53.168 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 0.07311806757934391 }max: { _id: 0.09237447124905884 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:53.168 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:53.168 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.07311806757934391", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:53.169 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763953d95f035bd477cd2
m30001| Fri Feb 22 12:24:53.169 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:53-512763953d95f035bd477cd3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535893169), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:53.170 [conn4] moveChunk request accepted at version 4|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:53.170 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:53.170 [migrateThread] starting receiving-end of migration of chunk { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:24:53.181 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 35, clonedBytes: 351715, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:53.183 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:53.183 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30000| Fri Feb 22 12:24:53.184 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30001| Fri Feb 22 12:24:53.191 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:53.191 [conn4] moveChunk setting version to: 5|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:53.191 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:53.195 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30000| Fri Feb 22 12:24:53.195 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30000| Fri Feb 22 12:24:53.195 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:53-51276395865ed09de9e1dad0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535893195), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, step1 of 5: 2, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:53.201 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:53.201 [conn4] moveChunk updating self version to: 5|1||5127638ca620d63c9f282dd1 through { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:53.202 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:53-512763953d95f035bd477cd4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535893202), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:53.202 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:53.202 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:53.202 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:53.202 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:53.202 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:53.202 [cleanupOldData-512763953d95f035bd477cd5] (start) waiting to cleanup test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:53.202 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:53.202 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:53-512763953d95f035bd477cd6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535893202), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:53.202 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:53.203 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 4|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:53.203 [Balancer] major version query from 4|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 4000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 4000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 4000|1 } } ] }
m30999| Fri Feb 22 12:24:53.204 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 5|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:53.204 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 6 version: 5|1||5127638ca620d63c9f282dd1 based on: 4|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:53.204 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:53.204 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:53.222 [cleanupOldData-512763953d95f035bd477cd5] waiting to remove documents for test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30001| Fri Feb 22 12:24:53.222 [cleanupOldData-512763953d95f035bd477cd5] moveChunk starting delete for: test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30001| Fri Feb 22 12:24:53.227 [cleanupOldData-512763953d95f035bd477cd5] moveChunk deleted 52 documents for test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }
m30999| Fri Feb 22 12:24:54.205 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:54.205 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:54.205 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:54 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276396a620d63c9f282dd6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276395a620d63c9f282dd5" } }
m30999| Fri Feb 22 12:24:54.206 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276396a620d63c9f282dd6
m30999| Fri Feb 22 12:24:54.206 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:54.206 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:54.206 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:54.207 [Balancer] shard0001 has more chunks me:37 best: shard0000:4
m30999| Fri Feb 22 12:24:54.207 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:54.207 [Balancer] donor : shard0001 chunks on 37
m30999| Fri Feb 22 12:24:54.207 [Balancer] receiver : shard0000 chunks on 4
m30999| Fri Feb 22 12:24:54.207 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:54.207 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.09237447124905884", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:54.208 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 0.09237447124905884 }max: { _id: 0.119084446458146 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:54.208 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:54.208 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.09237447124905884", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:54.209 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763963d95f035bd477cd7
m30001| Fri Feb 22 12:24:54.209 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:54-512763963d95f035bd477cd8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535894209), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:54.210 [conn4] moveChunk request accepted at version 5|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:54.210 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:54.222 [migrateThread] starting receiving-end of migration of chunk { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:24:54.232 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 48, clonedBytes: 482352, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:54.233 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:54.233 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30000| Fri Feb 22 12:24:54.234 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30001| Fri Feb 22 12:24:54.242 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:54.242 [conn4] moveChunk setting version to: 6|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:54.242 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:54.244 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30000| Fri Feb 22 12:24:54.244 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30000| Fri Feb 22 12:24:54.245 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:54-51276396865ed09de9e1dad1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535894244), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:54.252 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:54.253 [conn4] moveChunk updating self version to: 6|1||5127638ca620d63c9f282dd1 through { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:54.253 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:54-512763963d95f035bd477cd9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535894253), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:54.253 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:54.253 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:54.253 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:54.253 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:54.253 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:54.253 [cleanupOldData-512763963d95f035bd477cda] (start) waiting to cleanup test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:54.254 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:54.254 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:54-512763963d95f035bd477cdb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535894254), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 11, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:54.254 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:54.254 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 5|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:54.254 [Balancer] major version query from 5|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 5000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 5000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 5000|1 } } ] }
m30999| Fri Feb 22 12:24:54.255 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 6|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:54.255 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 6|1||5127638ca620d63c9f282dd1 based on: 5|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:54.255 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:54.255 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:54.274 [cleanupOldData-512763963d95f035bd477cda] waiting to remove documents for test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30001| Fri Feb 22 12:24:54.274 [cleanupOldData-512763963d95f035bd477cda] moveChunk starting delete for: test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30001| Fri Feb 22 12:24:54.279 [cleanupOldData-512763963d95f035bd477cda] moveChunk deleted 52 documents for test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:54.537 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:54.538 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:54.538 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:54.538 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:54.538 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:54.538 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:54.538 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 5, "shard0001" : 36 }
m30999| Fri Feb 22 12:24:55.256 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:55.257 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:55.257 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:24:55 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276397a620d63c9f282dd7" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276396a620d63c9f282dd6" } }
m30999| Fri Feb 22 12:24:55.258 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276397a620d63c9f282dd7
m30999| Fri Feb 22 12:24:55.258 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:55.258 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:55.258 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:55.259 [Balancer] shard0001 has more chunks me:36 best: shard0000:5
m30999| Fri Feb 22 12:24:55.259 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:55.259 [Balancer] donor : shard0001 chunks on 36
m30999| Fri Feb 22 12:24:55.259 [Balancer] receiver : shard0000 chunks on 5
m30999| Fri Feb 22 12:24:55.259 [Balancer] threshold : 2
m30999| Fri Feb 22 12:24:55.259 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.119084446458146", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:55.259 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: 0.119084446458146 }max: { _id: 0.1485460062976927 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:55.259 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:55.259 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.119084446458146", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:55.260 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763973d95f035bd477cdc
m30001| Fri Feb 22 12:24:55.260 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:55-512763973d95f035bd477cdd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535895260), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:55.261 [conn4] moveChunk request accepted at version 6|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:55.261 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:55.262 [migrateThread] starting receiving-end of migration of chunk { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:55.271 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:55.271 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30001| Fri Feb 22 12:24:55.272 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:55.272 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30001| Fri Feb 22 12:24:55.282 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:55.282 [conn4] moveChunk setting version to: 7|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:55.282 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:55.282 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30000| Fri Feb 22 12:24:55.282 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30000| Fri Feb 22 12:24:55.282 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:55-51276397865ed09de9e1dad2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535895282), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:55.292 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:55.292 [conn4] moveChunk updating self version to: 7|1||5127638ca620d63c9f282dd1 through { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:55.293 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:55-512763973d95f035bd477cde", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535895293), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:55.293 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:55.293 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:55.293 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:55.293 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:55.293 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:55.293 [cleanupOldData-512763973d95f035bd477cdf] (start) waiting to cleanup test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:55.293 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:55.293 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:55-512763973d95f035bd477ce0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535895293), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:55.294 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:55.294 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 6|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:55.294 [Balancer] major version query from 6|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 6000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 6000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 6000|1 } } ] }
m30999| Fri Feb 22 12:24:55.295 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 7|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:55.295 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 8 version: 7|1||5127638ca620d63c9f282dd1 based on: 6|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:55.295 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:55.295 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:55.313 [cleanupOldData-512763973d95f035bd477cdf] waiting to remove documents for test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30001| Fri Feb 22 12:24:55.313 [cleanupOldData-512763973d95f035bd477cdf] moveChunk starting delete for: test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30001| Fri Feb 22 12:24:55.319 [cleanupOldData-512763973d95f035bd477cdf] moveChunk deleted 52 documents for test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }
m30999| Fri Feb 22 12:24:56.296 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:56.296 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:56.296 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:56 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276398a620d63c9f282dd8" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276397a620d63c9f282dd7" } }
m30999| Fri Feb 22 12:24:56.297 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276398a620d63c9f282dd8
m30999| Fri Feb 22 12:24:56.297 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:56.297 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:56.297 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:56.298 [Balancer] shard0001 has more chunks me:35 best: shard0000:6
m30999| Fri Feb 22 12:24:56.298 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:56.298 [Balancer] donor      : shard0001 chunks on 35
m30999| Fri Feb 22 12:24:56.298 [Balancer] receiver   : shard0000 chunks on 6
m30999| Fri Feb 22 12:24:56.298 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:56.298 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.1485460062976927", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:56.298 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 7|1||000000000000000000000000 min: { _id: 0.1485460062976927 } max: { _id: 0.174441245617345 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:56.299 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:56.299 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.1485460062976927", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:56.300 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763983d95f035bd477ce1
m30001| Fri Feb 22 12:24:56.300 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:56-512763983d95f035bd477ce2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535896300), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:56.301 [conn4] moveChunk request accepted at version 7|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:56.301 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:56.301 [migrateThread] starting receiving-end of migration of chunk { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:56.310 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:56.310 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30000| Fri Feb 22 12:24:56.311 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30001| Fri Feb 22 12:24:56.311 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:56.311 [conn4] moveChunk setting version to: 8|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:56.311 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:56.321 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30000| Fri Feb 22 12:24:56.321 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30000| Fri Feb 22 12:24:56.321 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:56-51276398865ed09de9e1dad3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535896321), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:56.322 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:56.322 [conn4] moveChunk updating self version to: 8|1||5127638ca620d63c9f282dd1 through { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:56.322 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:56-512763983d95f035bd477ce3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535896322), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:56.322 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:56.322 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:56.322 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:56.322 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:56.322 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:56.322 [cleanupOldData-512763983d95f035bd477ce4] (start) waiting to cleanup test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:56.323 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:56.323 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:56-512763983d95f035bd477ce5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535896323), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:56.323 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:56.323 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 7|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:56.323 [Balancer] major version query from 7|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 7000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 7000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 7000|1 } } ] }
m30999| Fri Feb 22 12:24:56.324 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 8|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:56.324 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 8|1||5127638ca620d63c9f282dd1 based on: 7|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:56.324 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:56.324 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:56.342 [cleanupOldData-512763983d95f035bd477ce4] waiting to remove documents for test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30001| Fri Feb 22 12:24:56.342 [cleanupOldData-512763983d95f035bd477ce4] moveChunk starting delete for: test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30001| Fri Feb 22 12:24:56.357 [cleanupOldData-512763983d95f035bd477ce4] moveChunk deleted 52 documents for test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }
m30999| Fri Feb 22 12:24:57.325 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:57.325 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:57.325 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:57 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276399a620d63c9f282dd9" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276398a620d63c9f282dd8" } }
m30999| Fri Feb 22 12:24:57.326 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 51276399a620d63c9f282dd9
m30999| Fri Feb 22 12:24:57.326 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:57.326 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:57.326 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:57.328 [Balancer] shard0001 has more chunks me:34 best: shard0000:7
m30999| Fri Feb 22 12:24:57.328 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:57.328 [Balancer] donor      : shard0001 chunks on 34
m30999| Fri Feb 22 12:24:57.328 [Balancer] receiver   : shard0000 chunks on 7
m30999| Fri Feb 22 12:24:57.328 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:57.328 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.174441245617345", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:57.328 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 8|1||000000000000000000000000 min: { _id: 0.174441245617345 } max: { _id: 0.1965870081912726 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:57.328 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:57.328 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.174441245617345", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:57.329 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763993d95f035bd477ce6
m30001| Fri Feb 22 12:24:57.329 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:57-512763993d95f035bd477ce7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535897329), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:57.330 [conn4] moveChunk request accepted at version 8|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:57.330 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:57.330 [migrateThread] starting receiving-end of migration of chunk { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } for collection test.foo from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:24:57.341 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:24:57.341 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:57.341 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30000| Fri Feb 22 12:24:57.342 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30001| Fri Feb 22 12:24:57.351 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:57.351 [conn4] moveChunk setting version to: 9|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:57.351 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:57.353 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30000| Fri Feb 22 12:24:57.353 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30000| Fri Feb 22 12:24:57.353 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:57-51276399865ed09de9e1dad4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535897353), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:57.361 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:57.361 [conn4] moveChunk updating self version to: 9|1||5127638ca620d63c9f282dd1 through { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:57.362 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:57-512763993d95f035bd477ce8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535897362), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:57.362 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:57.362 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:57.362 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:57.362 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:57.362 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:57.362 [cleanupOldData-512763993d95f035bd477ce9] (start) waiting to cleanup test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:57.362 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:57.362 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:57-512763993d95f035bd477cea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535897362), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:57.363 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:57.363 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 8|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:57.363 [Balancer] major version query from 8|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 8000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 8000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 8000|1 } } ] }
m30999| Fri Feb 22 12:24:57.364 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 9|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:57.364 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 10 version: 9|1||5127638ca620d63c9f282dd1 based on: 8|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:57.364 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:57.364 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:57.382 [cleanupOldData-512763993d95f035bd477ce9] waiting to remove documents for test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30001| Fri Feb 22 12:24:57.382 [cleanupOldData-512763993d95f035bd477ce9] moveChunk starting delete for: test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30001| Fri Feb 22 12:24:57.388 [cleanupOldData-512763993d95f035bd477ce9] moveChunk deleted 52 documents for test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }
m30999| Fri Feb 22 12:24:58.365 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:58.365 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:58.366 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:58 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127639aa620d63c9f282dda" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276399a620d63c9f282dd9" } }
m30999| Fri Feb 22 12:24:58.366 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127639aa620d63c9f282dda
m30999| Fri Feb 22 12:24:58.366 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:58.366 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:58.366 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:58.368 [Balancer] shard0001 has more chunks me:33 best: shard0000:8
m30999| Fri Feb 22 12:24:58.368 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:58.368 [Balancer] donor      : shard0001 chunks on 33
m30999| Fri Feb 22 12:24:58.368 [Balancer] receiver   : shard0000 chunks on 8
m30999| Fri Feb 22 12:24:58.368 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:58.368 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.1965870081912726", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:58.368 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 9|1||000000000000000000000000 min: { _id: 0.1965870081912726 } max: { _id: 0.2179276177193969 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:58.368 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:58.368 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.1965870081912726", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:58.369 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 5127639a3d95f035bd477ceb
m30001| Fri Feb 22 12:24:58.369 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:58-5127639a3d95f035bd477cec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535898369), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:58.371 [conn4] moveChunk request accepted at version 9|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:58.371 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:58.371 [migrateThread] starting receiving-end of migration of chunk { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:24:58.381 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:58.381 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:58.381 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30000| Fri Feb 22 12:24:58.383 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30001| Fri Feb 22 12:24:58.391 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:58.392 [conn4] moveChunk setting version to: 10|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:58.392 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:58.393 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30000| Fri Feb 22 12:24:58.393 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30000| Fri Feb 22 12:24:58.393 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:58-5127639a865ed09de9e1dad5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535898393), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:24:58.402 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:58.402 [conn4] moveChunk updating self version to: 10|1||5127638ca620d63c9f282dd1 through { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:58.403 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:58-5127639a3d95f035bd477ced", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535898403), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:58.403 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:58.403 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:58.403 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:58.403 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:58.403 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:58.403 [cleanupOldData-5127639a3d95f035bd477cee] (start) waiting to cleanup test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:58.403 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:58.403 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:58-5127639a3d95f035bd477cef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535898403), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:58.403 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:58.404 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 9|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:58.404 [Balancer] major version query from 9|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 9000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 9000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 9000|1 } } ] }
m30999| Fri Feb 22 12:24:58.405 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 10|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:58.405 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 10|1||5127638ca620d63c9f282dd1 based on: 9|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:58.405 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:58.405 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:58.423 [cleanupOldData-5127639a3d95f035bd477cee] waiting to remove documents for test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30001| Fri Feb 22 12:24:58.423 [cleanupOldData-5127639a3d95f035bd477cee] moveChunk starting delete for: test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30001| Fri Feb 22 12:24:58.428 [cleanupOldData-5127639a3d95f035bd477cee] moveChunk deleted 52 documents for test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30999| Fri Feb 22 12:24:59.405 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:24:59.406 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:24:59.406 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:24:59 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127639ba620d63c9f282ddb" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127639aa620d63c9f282dda" } }
m30999| Fri Feb 22 12:24:59.407 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127639ba620d63c9f282ddb
m30999| Fri Feb 22 12:24:59.407 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:24:59.407 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:24:59.407 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:24:59.408 [Balancer] shard0001 has more chunks me:32 best: shard0000:9
m30999| Fri Feb 22 12:24:59.408 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:24:59.408 [Balancer] donor      : shard0001 chunks on 32
m30999| Fri Feb 22 12:24:59.408 [Balancer] receiver   : shard0000 chunks on 9
m30999| Fri Feb 22 12:24:59.408 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:24:59.408 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.2179276177193969", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:24:59.408 [Balancer] moving chunk ns: test.foo moving ( ns:test.foo shard: shard0001:localhost:30001 lastmod: 10|1||000000000000000000000000 min: { _id: 0.2179276177193969 } max: { _id: 0.2448905608616769 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:24:59.408 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:24:59.408 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.2179276177193969", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:24:59.409 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 5127639b3d95f035bd477cf0
m30001| Fri Feb 22 12:24:59.409 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:59-5127639b3d95f035bd477cf1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535899409), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:59.410 [conn4] moveChunk request accepted at version 10|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:24:59.410 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:24:59.410 [migrateThread] starting receiving-end of migration of chunk { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:24:59.421 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:24:59.421 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:24:59.421 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30000| Fri Feb 22 12:24:59.422 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30001| Fri Feb 22 12:24:59.431 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:24:59.431 [conn4] moveChunk setting version to: 11|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:24:59.431 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:24:59.433 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30000| Fri Feb 22 12:24:59.433 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30000| Fri Feb 22 12:24:59.433 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:59-5127639b865ed09de9e1dad6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535899433), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:24:59.441 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:24:59.441 [conn4] moveChunk updating self version to: 11|1||5127638ca620d63c9f282dd1 through { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 } for collection 'test.foo'
m30001| Fri Feb 22 12:24:59.442 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:59-5127639b3d95f035bd477cf2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535899442), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:24:59.442 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:59.442 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:59.442 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:24:59.442 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:24:59.442 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:24:59.442 [cleanupOldData-5127639b3d95f035bd477cf3] (start) waiting to cleanup test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }, # cursors remaining: 0
m30001| Fri Feb 22 12:24:59.442 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:24:59.442 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:24:59-5127639b3d95f035bd477cf4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535899442), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:24:59.442 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:24:59.443 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 10|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:24:59.443 [Balancer] major version query from 10|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 10000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 10000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 10000|1 } } ] }
m30999| Fri Feb 22 12:24:59.443 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 11|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:59.444 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 11|1||5127638ca620d63c9f282dd1 based on: 10|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:24:59.444 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:24:59.444 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:24:59.462 [cleanupOldData-5127639b3d95f035bd477cf3] waiting to remove documents for test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30001| Fri Feb 22 12:24:59.462 [cleanupOldData-5127639b3d95f035bd477cf3] moveChunk starting delete for: test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30001| Fri Feb 22 12:24:59.466 [cleanupOldData-5127639b3d95f035bd477cf3] moveChunk deleted 52 documents for test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:24:59.540 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:24:59.541 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0000" : 10, "shard0001" : 31 }
m30999| Fri Feb 22 12:25:00.444 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:00.445 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:00.445 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:25:00 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127639ca620d63c9f282ddc" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127639ba620d63c9f282ddb" } }
m30999| Fri Feb 22 12:25:00.446 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127639ca620d63c9f282ddc
m30999| Fri Feb 22 12:25:00.446 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:00.446 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:00.446 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:00.447 [Balancer] shard0001 has more chunks me:31 best: shard0000:10
m30999| Fri Feb 22 12:25:00.447 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:25:00.447 [Balancer] donor      : shard0001 chunks on 31
m30999| Fri Feb 22 12:25:00.447 [Balancer] receiver   : shard0000 chunks on 10
m30999| Fri Feb 22 12:25:00.447 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:25:00.447 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.2448905608616769", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:25:00.447 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: 0.2448905608616769 }max: { _id: 0.2748649539425969 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:25:00.447 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:25:00.447 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.2448905608616769", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:25:00.448 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 5127639c3d95f035bd477cf5
m30001| Fri Feb 22 12:25:00.448 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:00-5127639c3d95f035bd477cf6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535900448), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:25:00.449 [conn4] moveChunk request accepted at version 11|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:00.449 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:25:00.450 [migrateThread] starting receiving-end of migration of chunk { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:25:00.460 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:00.460 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:25:00.460 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30000| Fri Feb 22 12:25:00.461 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30001| Fri Feb 22 12:25:00.470 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:00.470 [conn4] moveChunk setting version to: 12|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:00.470 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:25:00.472 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30000| Fri Feb 22 12:25:00.472 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30000| Fri Feb 22 12:25:00.472 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:00-5127639c865ed09de9e1dad7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535900472), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:25:00.480 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:25:00.480 [conn4] moveChunk updating self version to: 12|1||5127638ca620d63c9f282dd1 through { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } for collection 'test.foo'
m30001| Fri Feb 22 12:25:00.481 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:00-5127639c3d95f035bd477cf7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535900481), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:25:00.481 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:00.481 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:00.481 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:25:00.481 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:00.481 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:00.481 [cleanupOldData-5127639c3d95f035bd477cf8] (start) waiting to cleanup test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }, # cursors remaining: 0
m30001| Fri Feb 22 12:25:00.482 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:25:00.482 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:00-5127639c3d95f035bd477cf9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535900482), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:00.482 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:00.482 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 11|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:00.482 [Balancer] major version query from 11|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 11000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 11000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 11000|1 } } ] }
m30999| Fri Feb 22 12:25:00.483 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 12|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:00.483 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 12|1||5127638ca620d63c9f282dd1 based on: 11|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:00.483 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:00.483 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:25:00.501 [cleanupOldData-5127639c3d95f035bd477cf8] waiting to remove documents for test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30001| Fri Feb 22 12:25:00.501 [cleanupOldData-5127639c3d95f035bd477cf8] moveChunk starting delete for: test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30001| Fri Feb 22 12:25:00.505 [cleanupOldData-5127639c3d95f035bd477cf8] moveChunk deleted 52 documents for test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30999| Fri Feb 22 12:25:01.484 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:01.484 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:01.484 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:25:01 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127639da620d63c9f282ddd" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127639ca620d63c9f282ddc" } }
m30999| Fri Feb 22 12:25:01.485 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127639da620d63c9f282ddd
m30999| Fri Feb 22 12:25:01.485 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:01.485 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:01.485 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:01.486 [Balancer] shard0001 has more chunks me:30 best: shard0000:11
m30999| Fri Feb 22 12:25:01.486 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:25:01.486 [Balancer] donor      : shard0001 chunks on 30
m30999| Fri Feb 22 12:25:01.486 [Balancer] receiver   : shard0000 chunks on 11
m30999| Fri Feb 22 12:25:01.486 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:25:01.486 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.2748649539425969", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:25:01.486 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 12|1||000000000000000000000000min: { _id: 0.2748649539425969 }max: { _id: 0.3002202115021646 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:25:01.487 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:25:01.487 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.2748649539425969", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:25:01.488 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 5127639d3d95f035bd477cfa
m30001| Fri Feb 22 12:25:01.488 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:01-5127639d3d95f035bd477cfb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535901488), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:25:01.488 [conn4] moveChunk request accepted at version 12|1||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:01.489 [conn4] moveChunk number of documents: 52
m30000| Fri Feb 22 12:25:01.489 [migrateThread] starting receiving-end of migration of chunk { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:25:01.499 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:01.499 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:25:01.499 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30000| Fri Feb 22 12:25:01.501 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30001| Fri Feb 22 12:25:01.509 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:01.509 [conn4] moveChunk setting version to: 13|0||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:01.510 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:25:01.511 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30000| Fri Feb 22 12:25:01.511 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30000| Fri Feb 22 12:25:01.511 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:01-5127639d865ed09de9e1dad8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535901511), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:25:01.520 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:25:01.520 [conn4] moveChunk updating self version to: 13|1||5127638ca620d63c9f282dd1 through { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } for collection 'test.foo'
m30001| Fri Feb 22 12:25:01.520 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:01-5127639d3d95f035bd477cfc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535901520), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:25:01.520 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:01.520 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:01.520 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:25:01.520 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:01.520 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:01.520 [cleanupOldData-5127639d3d95f035bd477cfd] (start) waiting to cleanup test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }, # cursors remaining: 0
m30001| Fri Feb 22 12:25:01.521 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked.
m30001| Fri Feb 22 12:25:01.521 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:01-5127639d3d95f035bd477cfe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535901521), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:01.521 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:01.521 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 12|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:01.521 [Balancer] major version query from 12|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 12000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 12000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 12000|1 } } ] }
m30999| Fri Feb 22 12:25:01.522 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 13|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:01.522 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 13|1||5127638ca620d63c9f282dd1 based on: 12|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:01.522 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:01.522 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30001| Fri Feb 22 12:25:01.541 [cleanupOldData-5127639d3d95f035bd477cfd] waiting to remove documents for test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } m30001| Fri Feb 22 12:25:01.541 [cleanupOldData-5127639d3d95f035bd477cfd] moveChunk starting delete for: test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } m30001| Fri Feb 22 12:25:01.544 [cleanupOldData-5127639d3d95f035bd477cfd] moveChunk deleted 52 documents for test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } m30999| Fri Feb 22 12:25:02.523 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:02.523 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:02.524 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:02 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127639ea620d63c9f282dde" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127639da620d63c9f282ddd" } } m30999| Fri Feb 22 12:25:02.525 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127639ea620d63c9f282dde m30999| Fri Feb 22 12:25:02.525 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:02.525 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:02.525 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:02.526 [Balancer] shard0001 has more chunks me:29 best: shard0000:12 m30999| Fri Feb 22 12:25:02.526 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:02.526 [Balancer] donor : shard0001 
chunks on 29 m30999| Fri Feb 22 12:25:02.526 [Balancer] receiver : shard0000 chunks on 12 m30999| Fri Feb 22 12:25:02.526 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:02.526 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.3002202115021646", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:02.526 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { _id: 0.3002202115021646 }max: { _id: 0.3187460692133754 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:02.526 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:02.526 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.3002202115021646", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:02.527 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 5127639e3d95f035bd477cff m30001| Fri Feb 22 12:25:02.527 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:02-5127639e3d95f035bd477d00", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535902527), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:02.528 [conn4] moveChunk request accepted at version 13|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:02.528 [conn4] moveChunk 
number of documents: 52 m30000| Fri Feb 22 12:25:02.528 [migrateThread] starting receiving-end of migration of chunk { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:02.539 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:02.539 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:02.539 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30000| Fri Feb 22 12:25:02.540 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30001| Fri Feb 22 12:25:02.549 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:02.549 [conn4] moveChunk setting version to: 14|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:02.549 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:02.550 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30000| Fri Feb 22 12:25:02.550 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30000| Fri Feb 22 12:25:02.550 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:02-5127639e865ed09de9e1dad9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535902550), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:25:02.559 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:02.559 [conn4] moveChunk updating self version to: 14|1||5127638ca620d63c9f282dd1 through { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } for collection 'test.foo' m30001| Fri Feb 22 12:25:02.560 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:02-5127639e3d95f035bd477d01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535902560), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:02.560 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:02.560 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:02.560 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:02.560 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:02.560 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:02.560 [cleanupOldData-5127639e3d95f035bd477d02] (start) waiting to cleanup test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }, # cursors 
remaining: 0 m30001| Fri Feb 22 12:25:02.561 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. m30001| Fri Feb 22 12:25:02.561 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:02-5127639e3d95f035bd477d03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535902561), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:02.561 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:02.561 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 13|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:02.561 [Balancer] major version query from 13|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 13000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 13000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 13000|1 } } ] } m30999| Fri Feb 22 12:25:02.562 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 14|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:02.562 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 14|1||5127638ca620d63c9f282dd1 based on: 13|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:02.562 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:02.562 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
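The balancing round above logs a donor (shard0001, 29 chunks), a receiver (shard0000, 12 chunks), and a threshold of 2 before moving a single chunk. A minimal sketch of that chunk-count heuristic, with illustrative names (this is not MongoDB's actual balancer code):

```python
def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) if shards are imbalanced, else None.

    chunk_counts: dict mapping shard name -> number of chunks.
    """
    donor = max(chunk_counts, key=chunk_counts.get)      # most chunks
    receiver = min(chunk_counts, key=chunk_counts.get)   # fewest chunks
    # Only migrate when the imbalance reaches the threshold.
    if chunk_counts[donor] - chunk_counts[receiver] >= threshold:
        return donor, receiver
    return None

# Mirrors the round logged above: shard0001 has 29 chunks, shard0000 has 12.
print(pick_migration({"shard0001": 29, "shard0000": 12}))
# -> ('shard0001', 'shard0000')
```

Because only one chunk moves per round, the counts converge slowly (29/12, then 28/13, then 27/14, ...), which is exactly the progression visible in the subsequent rounds of this log.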
m30001| Fri Feb 22 12:25:02.580 [cleanupOldData-5127639e3d95f035bd477d02] waiting to remove documents for test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30001| Fri Feb 22 12:25:02.580 [cleanupOldData-5127639e3d95f035bd477d02] moveChunk starting delete for: test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30001| Fri Feb 22 12:25:02.596 [cleanupOldData-5127639e3d95f035bd477d02] moveChunk deleted 52 documents for test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } m30999| Fri Feb 22 12:25:03.563 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:03.563 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:03.563 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:03 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127639fa620d63c9f282ddf" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127639ea620d63c9f282dde" } } m30999| Fri Feb 22 12:25:03.564 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 5127639fa620d63c9f282ddf m30999| Fri Feb 22 12:25:03.564 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:03.564 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:03.564 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:03.565 [Balancer] shard0001 has more chunks me:28 best: shard0000:13 m30999| Fri Feb 22 12:25:03.565 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:03.565 [Balancer] donor : shard0001 
chunks on 28 m30999| Fri Feb 22 12:25:03.565 [Balancer] receiver : shard0000 chunks on 13 m30999| Fri Feb 22 12:25:03.565 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:03.565 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.3187460692133754", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:03.565 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 14|1||000000000000000000000000min: { _id: 0.3187460692133754 }max: { _id: 0.3477728136349469 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:03.566 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:03.566 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.3187460692133754", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:03.567 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 5127639f3d95f035bd477d04 m30001| Fri Feb 22 12:25:03.567 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:03-5127639f3d95f035bd477d05", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535903567), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:03.568 [conn4] moveChunk request accepted at version 14|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:03.568 [conn4] moveChunk 
number of documents: 52 m30000| Fri Feb 22 12:25:03.568 [migrateThread] starting receiving-end of migration of chunk { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:25:03.576 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:03.576 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30000| Fri Feb 22 12:25:03.577 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30001| Fri Feb 22 12:25:03.578 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:03.578 [conn4] moveChunk setting version to: 15|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:03.578 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:03.588 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30000| Fri Feb 22 12:25:03.588 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30000| Fri Feb 22 12:25:03.588 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:03-5127639f865ed09de9e1dada", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535903588), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 
12:25:03.588 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:03.589 [conn4] moveChunk updating self version to: 15|1||5127638ca620d63c9f282dd1 through { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } for collection 'test.foo' m30001| Fri Feb 22 12:25:03.589 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:03-5127639f3d95f035bd477d06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535903589), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:03.589 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:03.589 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:03.589 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:03.589 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:03.589 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:03.589 [cleanupOldData-5127639f3d95f035bd477d07] (start) waiting to cleanup test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }, # cursors remaining: 0 m30001| Fri Feb 22 12:25:03.590 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. 
m30001| Fri Feb 22 12:25:03.590 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:03-5127639f3d95f035bd477d08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535903590), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:03.590 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:03.590 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 14|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:03.590 [Balancer] major version query from 14|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 14000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 14000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 14000|1 } } ] } m30999| Fri Feb 22 12:25:03.591 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 15|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:03.591 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 15|1||5127638ca620d63c9f282dd1 based on: 14|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:03.591 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:03.591 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
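In each migration above, the donor repeatedly logs "moveChunk data transfer progress" with the recipient's state advancing through "clone", "catchup", and "steady", and only sets the new chunk version once the recipient reports steady. A hedged sketch of that donor-side polling loop; `poll_state` is a stand-in for the real status round trip, not MongoDB's API:

```python
def wait_until_steady(poll_state, max_polls=100):
    """Poll the recipient until it reports 'steady'; return the poll count."""
    for polls in range(1, max_polls + 1):
        state = poll_state()
        if state == "steady":
            return polls          # safe to enter the commit phase
        if state == "fail":
            raise RuntimeError("migration failed on recipient")
    raise TimeoutError("recipient never reached steady state")

# Simulated recipient walking through the states seen in the log.
states = iter(["clone", "catchup", "steady"])
print(wait_until_steady(lambda: next(states)))  # prints 3
```

Only after this loop returns does the donor take the global write lock and commit, matching the "moveChunk setting version" and "MigrateFromStatus::done" lines above.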
m30001| Fri Feb 22 12:25:03.609 [cleanupOldData-5127639f3d95f035bd477d07] waiting to remove documents for test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30001| Fri Feb 22 12:25:03.609 [cleanupOldData-5127639f3d95f035bd477d07] moveChunk starting delete for: test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30001| Fri Feb 22 12:25:03.613 [cleanupOldData-5127639f3d95f035bd477d07] moveChunk deleted 52 documents for test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } m30999| Fri Feb 22 12:25:04.545 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:04.545 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:04.545 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: 
false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:04.546 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "shard0000" : 14, "shard0001" : 27 } m30999| Fri Feb 22 12:25:04.592 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:04.592 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock 
timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:04.592 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:04 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a0a620d63c9f282de0" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127639fa620d63c9f282ddf" } } m30999| Fri Feb 22 12:25:04.593 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a0a620d63c9f282de0 m30999| Fri Feb 22 12:25:04.593 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:04.593 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:04.593 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:04.595 [Balancer] shard0001 has more chunks me:27 best: shard0000:14 m30999| Fri Feb 22 12:25:04.595 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:04.595 [Balancer] donor : shard0001 chunks on 27 m30999| Fri Feb 22 12:25:04.595 [Balancer] receiver : shard0000 chunks on 14 m30999| Fri Feb 22 12:25:04.595 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:04.595 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.3477728136349469", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:04.595 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 15|1||000000000000000000000000min: { _id: 0.3477728136349469 }max: { _id: 0.376576479524374 }) shard0001:localhost:30001 
-> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:04.595 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:04.595 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.3477728136349469", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:04.596 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763a03d95f035bd477d09 m30001| Fri Feb 22 12:25:04.596 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:04-512763a03d95f035bd477d0a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535904596), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:04.597 [conn4] moveChunk request accepted at version 15|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:04.598 [conn4] moveChunk number of documents: 52 m30000| Fri Feb 22 12:25:04.598 [migrateThread] starting receiving-end of migration of chunk { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:04.608 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:04.608 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:04.608 
[migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30000| Fri Feb 22 12:25:04.610 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30001| Fri Feb 22 12:25:04.618 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:04.618 [conn4] moveChunk setting version to: 16|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:04.618 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:04.620 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30000| Fri Feb 22 12:25:04.620 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30000| Fri Feb 22 12:25:04.620 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:04-512763a0865ed09de9e1dadb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535904620), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:25:04.628 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:04.628 [conn4] moveChunk updating self version to: 
16|1||5127638ca620d63c9f282dd1 through { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } for collection 'test.foo' m30001| Fri Feb 22 12:25:04.629 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:04-512763a03d95f035bd477d0b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535904629), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:04.629 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:04.629 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:04.629 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:04.629 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:04.629 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:04.629 [cleanupOldData-512763a03d95f035bd477d0c] (start) waiting to cleanup test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }, # cursors remaining: 0 m30001| Fri Feb 22 12:25:04.630 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. 
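Each round begins with the mongos printing the lock document it wants to install (state 1, a fresh ts) alongside the document it expects to find (state 0, the previous ts). A simplified compare-and-swap sketch of that acquisition; the real lock lives in config.locks and uses an atomic findAndModify, and the names here are illustrative:

```python
import uuid

def try_acquire(lock_doc, who, why):
    """Flip state 0 -> 1 if the lock is free; return the new ts, else None."""
    if lock_doc["state"] != 0:
        return None                     # another process holds the lock
    new_ts = uuid.uuid4().hex
    lock_doc.update(state=1, who=who, why=why, ts=new_ts)
    return new_ts

lock = {"_id": "balancer", "state": 0, "ts": "5127639ea620d63c9f282dde"}
ts = try_acquire(lock, "mongos:30999:Balancer", "doing balance round")
assert ts is not None and lock["state"] == 1
```

The "acquired, ts : ..." and later "unlocked." lines in the log correspond to this flip succeeding and then being reversed at the end of the round.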
m30001| Fri Feb 22 12:25:04.630 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:04-512763a03d95f035bd477d0d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535904630), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:04.630 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:04.630 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 15|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:04.630 [Balancer] major version query from 15|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 15000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 15000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 15000|1 } } ] } m30999| Fri Feb 22 12:25:04.631 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 16|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:04.631 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 17 version: 16|1||5127638ca620d63c9f282dd1 based on: 15|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:04.631 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:04.631 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30001| Fri Feb 22 12:25:04.649 [cleanupOldData-512763a03d95f035bd477d0c] waiting to remove documents for test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30001| Fri Feb 22 12:25:04.649 [cleanupOldData-512763a03d95f035bd477d0c] moveChunk starting delete for: test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30001| Fri Feb 22 12:25:04.654 [cleanupOldData-512763a03d95f035bd477d0c] moveChunk deleted 52 documents for test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } m30999| Fri Feb 22 12:25:05.632 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:05.633 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:05.633 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:05 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a1a620d63c9f282de1" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a0a620d63c9f282de0" } } m30999| Fri Feb 22 12:25:05.634 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a1a620d63c9f282de1 m30999| Fri Feb 22 12:25:05.634 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:05.634 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:05.634 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:05.635 [Balancer] shard0001 has more chunks me:26 best: shard0000:15 m30999| Fri Feb 22 12:25:05.635 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:05.635 [Balancer] donor : shard0001 
chunks on 26 m30999| Fri Feb 22 12:25:05.636 [Balancer] receiver : shard0000 chunks on 15 m30999| Fri Feb 22 12:25:05.636 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:05.636 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.376576479524374", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:05.636 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 16|1||000000000000000000000000min: { _id: 0.376576479524374 }max: { _id: 0.4025206139776856 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:05.636 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:05.636 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.376576479524374", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:05.637 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763a13d95f035bd477d0e m30001| Fri Feb 22 12:25:05.637 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:05-512763a13d95f035bd477d0f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535905637), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:05.638 [conn4] moveChunk request accepted at version 16|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:05.639 [conn4] moveChunk number of 
documents: 52 m30000| Fri Feb 22 12:25:05.639 [migrateThread] starting receiving-end of migration of chunk { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:05.649 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 42, clonedBytes: 422058, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:05.651 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:05.651 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30000| Fri Feb 22 12:25:05.652 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30001| Fri Feb 22 12:25:05.659 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:05.659 [conn4] moveChunk setting version to: 17|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:05.660 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:05.663 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30000| Fri Feb 22 12:25:05.663 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30000| Fri Feb 22 12:25:05.663 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:05-512763a1865ed09de9e1dadc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535905663), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 11, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:25:05.670 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:05.670 [conn4] moveChunk updating self version to: 17|1||5127638ca620d63c9f282dd1 through { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } for collection 'test.foo' m30001| Fri Feb 22 12:25:05.670 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:05-512763a13d95f035bd477d10", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535905670), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:05.670 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:05.670 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:05.671 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:05.671 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:05.671 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:05.671 [cleanupOldData-512763a13d95f035bd477d11] (start) waiting to cleanup test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }, # cursors 
remaining: 0 m30001| Fri Feb 22 12:25:05.671 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. m30001| Fri Feb 22 12:25:05.671 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:05-512763a13d95f035bd477d12", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535905671), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:05.671 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:05.672 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 16|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:05.672 [Balancer] major version query from 16|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 16000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 16000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 16000|1 } } ] } m30999| Fri Feb 22 12:25:05.672 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 17|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:05.672 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 18 version: 17|1||5127638ca620d63c9f282dd1 based on: 16|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:05.673 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:05.673 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
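The `moveChunk data transfer progress` lines in the round above carry the migration state machine (`clone` → `steady` → `done`) plus clone counters. As an illustrative aid only (not part of the test harness), a small parser for those fields might look like this:

```javascript
// Illustrative helper: extract the migration state and clone counters from a
// "moveChunk data transfer progress" log line as printed in this test run.
// Field names mirror the log format above; the function itself is hypothetical.
function parseMigrationProgress(line) {
  const state = /state: "(\w+)"/.exec(line);
  const cloned = /cloned: (\d+)/.exec(line);
  const clonedBytes = /clonedBytes: (\d+)/.exec(line);
  if (!state || !cloned || !clonedBytes) return null;
  return {
    state: state[1],
    cloned: parseInt(cloned[1], 10),
    clonedBytes: parseInt(clonedBytes[1], 10),
  };
}

// Example: the "steady" progress report logged at 12:25:05.659 above.
const line = 'moveChunk data transfer progress: { active: true, ns: "test.foo", ' +
  'state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }';
const progress = parseMigrationProgress(line);
```

The donor polls this status and only proceeds to the commit (version bump) once the receiver reports `state: "steady"`, which is exactly the sequence visible in each round of this log.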
m30001| Fri Feb 22 12:25:05.691 [cleanupOldData-512763a13d95f035bd477d11] waiting to remove documents for test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30001| Fri Feb 22 12:25:05.691 [cleanupOldData-512763a13d95f035bd477d11] moveChunk starting delete for: test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30001| Fri Feb 22 12:25:05.696 [cleanupOldData-512763a13d95f035bd477d11] moveChunk deleted 52 documents for test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } m30999| Fri Feb 22 12:25:06.673 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:06.674 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:06.674 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:06 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a2a620d63c9f282de2" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a1a620d63c9f282de1" } } m30999| Fri Feb 22 12:25:06.675 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a2a620d63c9f282de2 m30999| Fri Feb 22 12:25:06.675 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:06.675 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:06.675 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:06.677 [Balancer] shard0001 has more chunks me:25 best: shard0000:16 m30999| Fri Feb 22 12:25:06.677 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:06.677 [Balancer] donor : shard0001 
chunks on 25 m30999| Fri Feb 22 12:25:06.677 [Balancer] receiver : shard0000 chunks on 16 m30999| Fri Feb 22 12:25:06.677 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:06.677 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.4025206139776856", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:06.677 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 17|1||000000000000000000000000min: { _id: 0.4025206139776856 }max: { _id: 0.4263731909450144 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:06.677 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:06.678 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.4025206139776856", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:06.679 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763a23d95f035bd477d13 m30001| Fri Feb 22 12:25:06.679 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:06-512763a23d95f035bd477d14", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535906679), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:06.680 [conn4] moveChunk request accepted at version 17|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:06.680 [conn4] moveChunk 
number of documents: 52 m30000| Fri Feb 22 12:25:06.688 [migrateThread] starting receiving-end of migration of chunk { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:06.698 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 50, clonedBytes: 502450, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:06.698 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:06.698 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30000| Fri Feb 22 12:25:06.700 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30001| Fri Feb 22 12:25:06.708 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:06.708 [conn4] moveChunk setting version to: 18|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:06.708 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:06.710 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30000| Fri Feb 22 12:25:06.710 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30000| Fri Feb 22 12:25:06.710 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:06-512763a2865ed09de9e1dadd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535906710), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 9, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:25:06.718 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:06.719 [conn4] moveChunk updating self version to: 18|1||5127638ca620d63c9f282dd1 through { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } for collection 'test.foo' m30001| Fri Feb 22 12:25:06.719 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:06-512763a23d95f035bd477d15", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535906719), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:06.719 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:06.719 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:06.719 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:06.719 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:06.719 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:06.720 [cleanupOldData-512763a23d95f035bd477d16] (start) waiting to cleanup test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }, # cursors 
remaining: 0 m30001| Fri Feb 22 12:25:06.720 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. m30001| Fri Feb 22 12:25:06.720 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:06-512763a23d95f035bd477d17", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535906720), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 8, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:06.720 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:06.721 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 17|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:06.721 [Balancer] major version query from 17|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 17000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 17000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 17000|1 } } ] } m30999| Fri Feb 22 12:25:06.722 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 18|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:06.722 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 19 version: 18|1||5127638ca620d63c9f282dd1 based on: 17|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:06.722 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:06.722 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30001| Fri Feb 22 12:25:06.740 [cleanupOldData-512763a23d95f035bd477d16] waiting to remove documents for test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30001| Fri Feb 22 12:25:06.740 [cleanupOldData-512763a23d95f035bd477d16] moveChunk starting delete for: test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30001| Fri Feb 22 12:25:06.744 [cleanupOldData-512763a23d95f035bd477d16] moveChunk deleted 52 documents for test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } m30999| Fri Feb 22 12:25:07.723 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:07.723 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:07.723 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:07 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a3a620d63c9f282de3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a2a620d63c9f282de2" } } m30999| Fri Feb 22 12:25:07.724 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a3a620d63c9f282de3 m30999| Fri Feb 22 12:25:07.724 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:07.724 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:07.724 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:07.725 [Balancer] shard0001 has more chunks me:24 best: shard0000:17 m30999| Fri Feb 22 12:25:07.725 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:07.725 [Balancer] donor : shard0001 
chunks on 24 m30999| Fri Feb 22 12:25:07.725 [Balancer] receiver : shard0000 chunks on 17 m30999| Fri Feb 22 12:25:07.725 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:07.725 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.4263731909450144", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:07.725 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 18|1||000000000000000000000000min: { _id: 0.4263731909450144 }max: { _id: 0.4538818108849227 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:07.725 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:07.726 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.4263731909450144", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:07.726 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763a33d95f035bd477d18 m30001| Fri Feb 22 12:25:07.727 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:07-512763a33d95f035bd477d19", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535907726), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:07.727 [conn4] moveChunk request accepted at version 18|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:07.728 [conn4] moveChunk 
number of documents: 52 m30000| Fri Feb 22 12:25:07.728 [migrateThread] starting receiving-end of migration of chunk { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:25:07.736 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:07.736 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30000| Fri Feb 22 12:25:07.738 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30001| Fri Feb 22 12:25:07.738 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:07.738 [conn4] moveChunk setting version to: 19|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:07.738 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:07.748 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30000| Fri Feb 22 12:25:07.748 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30000| Fri Feb 22 12:25:07.748 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:07-512763a3865ed09de9e1dade", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535907748), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 
12:25:07.748 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:07.748 [conn4] moveChunk updating self version to: 19|1||5127638ca620d63c9f282dd1 through { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } for collection 'test.foo' m30001| Fri Feb 22 12:25:07.749 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:07-512763a33d95f035bd477d1a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535907749), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:07.749 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:07.749 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:07.749 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:07.750 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:07.750 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:07.750 [cleanupOldData-512763a33d95f035bd477d1b] (start) waiting to cleanup test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 }, # cursors remaining: 0 m30001| Fri Feb 22 12:25:07.750 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. 
m30001| Fri Feb 22 12:25:07.750 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:07-512763a33d95f035bd477d1c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535907750), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:07.750 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:07.751 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 18|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:07.751 [Balancer] major version query from 18|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 18000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 18000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 18000|1 } } ] } m30999| Fri Feb 22 12:25:07.751 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 19|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:07.751 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 20 version: 19|1||5127638ca620d63c9f282dd1 based on: 18|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:07.751 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:07.752 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
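Each round the balancer logs a donor (`shard0001 has more chunks me:24`), a receiver, and a `threshold : 2` before deciding to move one chunk. A simplified model of that decision (not MongoDB's actual implementation, just a sketch consistent with the logged values) is:

```javascript
// Simplified sketch of the balancer decision seen in these log rounds:
// the shard holding the most chunks donates one chunk to the shard holding
// the fewest, but only when the spread is at least the threshold.
function pickMigration(chunkCounts, threshold) {
  const shards = Object.keys(chunkCounts);
  let donor = shards[0];
  let receiver = shards[0];
  for (const s of shards) {
    if (chunkCounts[s] > chunkCounts[donor]) donor = s;
    if (chunkCounts[s] < chunkCounts[receiver]) receiver = s;
  }
  // Below threshold the cluster counts as balanced and no move is scheduled.
  if (chunkCounts[donor] - chunkCounts[receiver] < threshold) return null;
  return { from: donor, to: receiver };
}

// The 12:25:07 round above: shard0001 holds 24 chunks, shard0000 holds 17,
// threshold 2, so one chunk moves shard0001 -> shard0000.
const move = pickMigration({ shard0000: 17, shard0001: 24 }, 2);
```

This is why the log shows exactly one `moveChunk` per round: each balancing round shrinks the gap by one chunk until the counts sit within the threshold.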
m30001| Fri Feb 22 12:25:07.770 [cleanupOldData-512763a33d95f035bd477d1b] waiting to remove documents for test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30001| Fri Feb 22 12:25:07.770 [cleanupOldData-512763a33d95f035bd477d1b] moveChunk starting delete for: test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30001| Fri Feb 22 12:25:07.776 [cleanupOldData-512763a33d95f035bd477d1b] moveChunk deleted 52 documents for test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30999| Fri Feb 22 12:25:08.752 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:08.752 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:08.753 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:08 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a4a620d63c9f282de4" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a3a620d63c9f282de3" } } m30999| Fri Feb 22 12:25:08.753 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a4a620d63c9f282de4 m30999| Fri Feb 22 12:25:08.753 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:08.753 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:08.753 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:08.755 [Balancer] shard0001 has more chunks me:23 best: shard0000:18 m30999| Fri Feb 22 12:25:08.755 [Balancer] collection : test.foo m30999| Fri Feb 22 12:25:08.755 [Balancer] donor : shard0001 
chunks on 23 m30999| Fri Feb 22 12:25:08.755 [Balancer] receiver : shard0000 chunks on 18 m30999| Fri Feb 22 12:25:08.755 [Balancer] threshold : 2 m30999| Fri Feb 22 12:25:08.755 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_0.4538818108849227", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:25:08.755 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 19|1||000000000000000000000000min: { _id: 0.4538818108849227 }max: { _id: 0.4776490097865462 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:25:08.755 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:08.755 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.4538818108849227", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:08.756 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' acquired, ts : 512763a43d95f035bd477d1d m30001| Fri Feb 22 12:25:08.756 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:08-512763a43d95f035bd477d1e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535908756), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:08.757 [conn4] moveChunk request accepted at version 19|1||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:08.757 [conn4] moveChunk 
number of documents: 52 m30000| Fri Feb 22 12:25:08.757 [migrateThread] starting receiving-end of migration of chunk { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } for collection test.foo from localhost:30001 (0 slaves detected) m30000| Fri Feb 22 12:25:08.765 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:25:08.765 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30000| Fri Feb 22 12:25:08.767 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30001| Fri Feb 22 12:25:08.767 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:08.767 [conn4] moveChunk setting version to: 20|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:08.767 [conn10] Waiting for commit to finish m30000| Fri Feb 22 12:25:08.777 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30000| Fri Feb 22 12:25:08.777 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30000| Fri Feb 22 12:25:08.777 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:08-512763a4865ed09de9e1dadf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535908777), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 
12:25:08.778 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:08.778 [conn4] moveChunk updating self version to: 20|1||5127638ca620d63c9f282dd1 through { _id: 0.4776490097865462 } -> { _id: 0.5017605081666261 } for collection 'test.foo' m30001| Fri Feb 22 12:25:08.778 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:08-512763a43d95f035bd477d1f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535908778), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:25:08.778 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:08.778 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:08.778 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:08.778 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:08.778 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:08.778 [cleanupOldData-512763a43d95f035bd477d20] (start) waiting to cleanup test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 }, # cursors remaining: 0 m30001| Fri Feb 22 12:25:08.779 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535890:31841' unlocked. 
m30001| Fri Feb 22 12:25:08.779 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:08-512763a43d95f035bd477d21", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36585", time: new Date(1361535908779), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:08.779 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:08.779 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 19|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:08.779 [Balancer] major version query from 19|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 19000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 19000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 19000|1 } } ] } m30999| Fri Feb 22 12:25:08.780 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 20|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:08.780 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 21 version: 20|1||5127638ca620d63c9f282dd1 based on: 19|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:08.780 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:08.780 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
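The chunk versions in these lines have the shape `major|minor||epoch` (e.g. `20|1||5127638ca620d63c9f282dd1`): each committed migration bumps the major component (`19|1` → `20|0`), and the donor then updates its own highest chunk to `20|1`. A hypothetical helper for reading and comparing these strings, matching only what the log itself shows:

```javascript
// Illustrative parser for the "major|minor||epoch" chunk version strings in
// this log. The helper names are made up; the format matches the log lines.
function parseChunkVersion(s) {
  const m = /^(\d+)\|(\d+)\|\|([0-9a-f]{24})$/.exec(s);
  if (!m) throw new Error('unrecognized chunk version: ' + s);
  return { major: +m[1], minor: +m[2], epoch: m[3] };
}

function isNewer(a, b) {
  // Versions are only meaningfully comparable within the same epoch
  // (the epoch changes when a collection is dropped and resharded).
  if (a.epoch !== b.epoch) throw new Error('epoch mismatch');
  return a.major !== b.major ? a.major > b.major : a.minor > b.minor;
}

// The commit above takes test.foo from 19|1 to 20|1 in epoch 5127638c... .
const before = parseChunkVersion('19|1||5127638ca620d63c9f282dd1');
const after = parseChunkVersion('20|1||5127638ca620d63c9f282dd1');
```

The major bump is what forces every mongos to refresh: the `loading chunk manager ... using old chunk manager w/ version 19|1` lines that follow each commit are that refresh in action.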
m30001| Fri Feb 22 12:25:08.798 [cleanupOldData-512763a43d95f035bd477d20] waiting to remove documents for test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30001| Fri Feb 22 12:25:08.798 [cleanupOldData-512763a43d95f035bd477d20] moveChunk starting delete for: test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30001| Fri Feb 22 12:25:08.802 [cleanupOldData-512763a43d95f035bd477d20] moveChunk deleted 52 documents for test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: 
false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.549 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.550 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "shard0000" : 19, "shard0001" : 22 } m30999| Fri Feb 22 12:25:09.551 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: 
{} } m30999| Fri Feb 22 12:25:09.551 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.551 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.551 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.551 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.551 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0000", host: "localhost:30000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current 
connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.552 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:09.555 [conn1] going to start draining shard: shard0000 m30999| primaryLocalDoc: { _id: "local", primary: "shard0000" } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: 
false, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.556 [conn1] [pcursor] finished on 
shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "shard0001" : 22, "shard0000" : 19 } m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.558 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "_id" : 
"shard0001", "host" : "localhost:30001" } { "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" } m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0001" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0001" } } m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:09.559 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 22.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:09.781 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:09.781 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| 
Fri Feb 22 12:25:09.781 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:09 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a5a620d63c9f282de5" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a4a620d63c9f282de4" } } m30999| Fri Feb 22 12:25:09.782 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a5a620d63c9f282de5 m30999| Fri Feb 22 12:25:09.782 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:09.782 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:09.782 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:09.783 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:09.783 [Balancer] going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 2000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:09.784 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 2|0||000000000000000000000000min: { _id: MinKey }max: { _id: 0.0241540502756834 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:09.784 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:09.784 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", 
secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:09.784 [initandlisten] connection accepted from 127.0.0.1:41865 #13 (13 connections now open) m30000| Fri Feb 22 12:25:09.785 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287 (sleeping for 30000ms) m30000| Fri Feb 22 12:25:09.787 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763a5865ed09de9e1dae0 m30000| Fri Feb 22 12:25:09.787 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:09-512763a5865ed09de9e1dae1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535909787), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:09.787 [conn7] no current chunk manager found for this shard, will initialize m30000| Fri Feb 22 12:25:09.788 [conn7] moveChunk request accepted at version 20|0||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:09.788 [conn7] moveChunk number of documents: 51 m30001| Fri Feb 22 12:25:09.789 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 0.0241540502756834 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:09.797 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:25:09.797 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30001| Fri Feb 22 12:25:09.798 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30000| Fri Feb 22 12:25:09.799 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 
0.0241540502756834 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:09.799 [conn7] moveChunk setting version to: 21|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:09.799 [initandlisten] connection accepted from 127.0.0.1:32980 #6 (6 connections now open) m30001| Fri Feb 22 12:25:09.799 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:09.809 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30001| Fri Feb 22 12:25:09.809 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 0.0241540502756834 } m30001| Fri Feb 22 12:25:09.809 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:09-512763a53d95f035bd477d22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535909809), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:09.809 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 512499, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:09.810 [conn7] moveChunk updating self version to: 21|1||5127638ca620d63c9f282dd1 through { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } for collection 'test.foo' m30000| Fri Feb 22 12:25:09.810 [initandlisten] connection accepted from 127.0.0.1:58369 #14 (14 connections now open) m30000| Fri Feb 22 12:25:09.811 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:09-512763a5865ed09de9e1dae2", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535909810), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:09.811 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:09.811 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:09.811 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:09.811 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:09.811 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:09.811 [cleanupOldData-512763a5865ed09de9e1dae3] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:09.811 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:09.811 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:09-512763a5865ed09de9e1dae4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535909811), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, step1 of 6: 0, step2 of 6: 4, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:09.811 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:09.812 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 20|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:09.812 [Balancer] major version query from 20|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 20000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 20000|0 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 20000|1 } } ] } m30999| Fri Feb 22 12:25:09.812 [Balancer] loaded 3 chunks into new chunk manager for test.foo with version 21|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:09.812 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 21|1||5127638ca620d63c9f282dd1 based on: 20|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:09.812 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:09.813 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:09.831 [cleanupOldData-512763a5865ed09de9e1dae3] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 } m30000| Fri Feb 22 12:25:09.831 [cleanupOldData-512763a5865ed09de9e1dae3] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 } m30000| Fri Feb 22 12:25:09.834 [cleanupOldData-512763a5865ed09de9e1dae3] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 0.0241540502756834 } m30999| Fri Feb 22 12:25:10.813 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:10.814 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:10.814 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:10 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a6a620d63c9f282de6" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a5a620d63c9f282de5" } } m30999| Fri Feb 22 12:25:10.815 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a6a620d63c9f282de6 m30999| Fri Feb 22 12:25:10.815 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:10.815 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:10.815 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:10.816 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:10.816 [Balancer] going to move { _id: "test.foo-_id_0.0241540502756834", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: 
"test.foo", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:10.816 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 21|1||000000000000000000000000min: { _id: 0.0241540502756834 }max: { _id: 0.04868675791658461 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:10.816 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:10.816 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.0241540502756834", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:10.817 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763a6865ed09de9e1dae5 m30000| Fri Feb 22 12:25:10.817 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:10-512763a6865ed09de9e1dae6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535910817), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:10.818 [conn7] moveChunk request accepted at version 21|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:10.818 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:10.818 [migrateThread] starting receiving-end of migration of chunk { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:10.825 [migrateThread] Waiting for replication to catch up before entering critical 
section m30001| Fri Feb 22 12:25:10.825 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30001| Fri Feb 22 12:25:10.826 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30000| Fri Feb 22 12:25:10.829 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:10.829 [conn7] moveChunk setting version to: 22|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:10.829 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:10.836 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30001| Fri Feb 22 12:25:10.836 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30001| Fri Feb 22 12:25:10.836 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:10-512763a63d95f035bd477d23", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535910836), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:10.839 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:10.839 
[conn7] moveChunk updating self version to: 22|1||5127638ca620d63c9f282dd1 through { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } for collection 'test.foo' m30000| Fri Feb 22 12:25:10.840 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:10-512763a6865ed09de9e1dae7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535910840), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:10.840 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:10.840 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:10.840 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:10.840 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:10.840 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:10.840 [cleanupOldData-512763a6865ed09de9e1dae8] (start) waiting to cleanup test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:10.840 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:10.840 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:10-512763a6865ed09de9e1dae9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535910840), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.0241540502756834 }, max: { _id: 0.04868675791658461 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:10.840 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:10.841 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 21|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:10.841 [Balancer] major version query from 21|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 21000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 21000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 21000|0 } } ] } m30999| Fri Feb 22 12:25:10.841 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 22|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:10.841 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 23 version: 22|1||5127638ca620d63c9f282dd1 based on: 21|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:10.842 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:10.842 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:10.860 [cleanupOldData-512763a6865ed09de9e1dae8] waiting to remove documents for test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30000| Fri Feb 22 12:25:10.860 [cleanupOldData-512763a6865ed09de9e1dae8] moveChunk starting delete for: test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30000| Fri Feb 22 12:25:10.864 [cleanupOldData-512763a6865ed09de9e1dae8] moveChunk deleted 52 documents for test.foo from { _id: 0.0241540502756834 } -> { _id: 0.04868675791658461 } m30999| Fri Feb 22 12:25:11.842 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:11.843 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:11.843 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:11 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a7a620d63c9f282de7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a6a620d63c9f282de6" } } m30999| Fri Feb 22 12:25:11.844 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a7a620d63c9f282de7 m30999| Fri Feb 22 12:25:11.844 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:11.844 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:11.844 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:11.845 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:11.845 [Balancer] going to move { _id: "test.foo-_id_0.04868675791658461", lastmod: Timestamp 22000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:11.845 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 22|1||000000000000000000000000min: { _id: 0.04868675791658461 }max: { _id: 0.07311806757934391 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:11.845 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:11.845 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.04868675791658461", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:11.846 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763a7865ed09de9e1daea m30000| Fri Feb 22 12:25:11.846 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:11-512763a7865ed09de9e1daeb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535911846), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:11.847 [conn7] moveChunk request accepted at version 22|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:11.847 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:11.847 [migrateThread] starting receiving-end of migration of chunk { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:11.856 [migrateThread] Waiting for 
replication to catch up before entering critical section m30001| Fri Feb 22 12:25:11.856 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30001| Fri Feb 22 12:25:11.857 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30000| Fri Feb 22 12:25:11.857 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:11.858 [conn7] moveChunk setting version to: 23|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:11.858 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:11.867 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30001| Fri Feb 22 12:25:11.867 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30001| Fri Feb 22 12:25:11.868 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:11-512763a73d95f035bd477d24", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535911868), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:11.868 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, 
steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:11.868 [conn7] moveChunk updating self version to: 23|1||5127638ca620d63c9f282dd1 through { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } for collection 'test.foo' m30000| Fri Feb 22 12:25:11.869 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:11-512763a7865ed09de9e1daec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535911868), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:11.869 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:11.869 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:11.869 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:11.869 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:11.869 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:11.869 [cleanupOldData-512763a7865ed09de9e1daed] (start) waiting to cleanup test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:11.872 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:11.872 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:11-512763a7865ed09de9e1daee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535911872), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.04868675791658461 }, max: { _id: 0.07311806757934391 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:11.872 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:11.873 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 22|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:11.873 [Balancer] major version query from 22|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 22000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 22000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 22000|0 } } ] } m30999| Fri Feb 22 12:25:11.873 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 23|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:11.873 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 23|1||5127638ca620d63c9f282dd1 based on: 22|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:11.873 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:11.874 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
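Each balancing round above ends with mongos reloading its ChunkManager via the "major version query" it logs. As a rough illustration only (a hypothetical helper, not mongos internals), that `$or` query can be rebuilt from the collection version and the last version seen on each shard; `(major, minor)` tuples stand in for the `Timestamp 22000|1` values in the log:

```python
def major_version_query(ns, coll_version, shard_versions):
    """Sketch of the Balancer's refresh query; versions are (major, minor)
    pairs standing in for the "Timestamp 22000|1" notation in the log."""
    # any chunk at or above the collection version...
    clauses = [{"lastmod": {"$gte": coll_version}}]
    # ...or strictly newer than what was last seen on each shard
    for shard, last_seen in shard_versions.items():
        clauses.append({"shard": shard, "lastmod": {"$gt": last_seen}})
    return {"ns": ns, "$or": clauses}

# the round logged above: reloading from 22|1 over two shards
query = major_version_query("test.foo", (22000, 1),
                            {"shard0000": (22000, 1), "shard0001": (22000, 0)})
```

With these inputs the helper reproduces the three-clause `$or` shown in the `major version query from 22|1||...` line, which is why only 2 chunks need loading into the new chunk manager.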
m30000| Fri Feb 22 12:25:11.889 [cleanupOldData-512763a7865ed09de9e1daed] waiting to remove documents for test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30000| Fri Feb 22 12:25:11.889 [cleanupOldData-512763a7865ed09de9e1daed] moveChunk starting delete for: test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30000| Fri Feb 22 12:25:11.892 [cleanupOldData-512763a7865ed09de9e1daed] moveChunk deleted 52 documents for test.foo from { _id: 0.04868675791658461 } -> { _id: 0.07311806757934391 } m30999| Fri Feb 22 12:25:12.874 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:12.875 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:12.875 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:12 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a8a620d63c9f282de8" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a7a620d63c9f282de7" } } m30999| Fri Feb 22 12:25:12.876 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a8a620d63c9f282de8 m30999| Fri Feb 22 12:25:12.876 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:12.876 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:12.876 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:12.877 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:12.877 [Balancer] going to move { _id: "test.foo-_id_0.07311806757934391", lastmod: Timestamp 23000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:12.877 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 23|1||000000000000000000000000min: { _id: 0.07311806757934391 }max: { _id: 0.09237447124905884 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:12.877 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:12.877 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.07311806757934391", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:12.878 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763a8865ed09de9e1daef m30000| Fri Feb 22 12:25:12.878 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:12-512763a8865ed09de9e1daf0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535912878), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:12.879 [conn7] moveChunk request accepted at version 23|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:12.879 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:12.879 [migrateThread] starting receiving-end of migration of chunk { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:12.886 [migrateThread] Waiting for 
replication to catch up before entering critical section m30001| Fri Feb 22 12:25:12.886 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30001| Fri Feb 22 12:25:12.887 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30000| Fri Feb 22 12:25:12.889 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:12.890 [conn7] moveChunk setting version to: 24|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:12.890 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:12.897 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30001| Fri Feb 22 12:25:12.897 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30001| Fri Feb 22 12:25:12.897 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:12-512763a83d95f035bd477d25", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535912897), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:12.900 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, 
steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:12.900 [conn7] moveChunk updating self version to: 24|1||5127638ca620d63c9f282dd1 through { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } for collection 'test.foo' m30000| Fri Feb 22 12:25:12.901 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:12-512763a8865ed09de9e1daf1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535912901), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:12.901 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:12.901 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:12.901 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:12.901 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:12.901 [cleanupOldData-512763a8865ed09de9e1daf2] (start) waiting to cleanup test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:12.901 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:12.901 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:12.901 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:12-512763a8865ed09de9e1daf3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535912901), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.07311806757934391 }, max: { _id: 0.09237447124905884 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:12.901 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:12.902 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 23|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:12.902 [Balancer] major version query from 23|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 23000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 23000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 23000|0 } } ] } m30999| Fri Feb 22 12:25:12.902 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 24|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:12.902 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 25 version: 24|1||5127638ca620d63c9f282dd1 based on: 23|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:12.902 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:12.903 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
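The version strings in these lines (e.g. `23|1||5127638ca620d63c9f282dd1`) follow a `major|minor||epoch` layout, and each committed migration bumps the major component while the epoch stays fixed. A small illustrative parser (an assumption about the printed format only, not MongoDB's internal representation):

```python
def parse_chunk_version(v):
    # "24|1||5127638ca620d63c9f282dd1" -> (24, 1, "5127638ca620d63c9f282dd1")
    majmin, epoch = v.split("||")
    major, minor = (int(part) for part in majmin.split("|"))
    return major, minor, epoch

before = parse_chunk_version("23|1||5127638ca620d63c9f282dd1")
after = parse_chunk_version("24|1||5127638ca620d63c9f282dd1")

# a committed moveChunk bumps the major version but keeps the epoch
assert after[0] == before[0] + 1 and after[2] == before[2]
```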
m30000| Fri Feb 22 12:25:12.921 [cleanupOldData-512763a8865ed09de9e1daf2] waiting to remove documents for test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30000| Fri Feb 22 12:25:12.921 [cleanupOldData-512763a8865ed09de9e1daf2] moveChunk starting delete for: test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30000| Fri Feb 22 12:25:12.924 [cleanupOldData-512763a8865ed09de9e1daf2] moveChunk deleted 52 documents for test.foo from { _id: 0.07311806757934391 } -> { _id: 0.09237447124905884 } m30999| Fri Feb 22 12:25:13.903 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:13.904 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:13.904 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:13 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763a9a620d63c9f282de9" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a8a620d63c9f282de8" } } m30999| Fri Feb 22 12:25:13.905 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763a9a620d63c9f282de9 m30999| Fri Feb 22 12:25:13.905 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:13.905 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:13.905 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:13.906 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:13.906 [Balancer] going to move { _id: "test.foo-_id_0.09237447124905884", lastmod: Timestamp 24000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:13.906 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 24|1||000000000000000000000000min: { _id: 0.09237447124905884 }max: { _id: 0.119084446458146 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:13.906 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:13.907 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.09237447124905884", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:13.908 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763a9865ed09de9e1daf4 m30000| Fri Feb 22 12:25:13.908 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:13-512763a9865ed09de9e1daf5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535913908), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:13.909 [conn7] moveChunk request accepted at version 24|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:13.909 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:13.909 [migrateThread] starting receiving-end of migration of chunk { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:13.917 [migrateThread] Waiting for replication 
to catch up before entering critical section m30001| Fri Feb 22 12:25:13.917 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30001| Fri Feb 22 12:25:13.919 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30000| Fri Feb 22 12:25:13.919 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:13.919 [conn7] moveChunk setting version to: 25|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:13.919 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:13.929 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30001| Fri Feb 22 12:25:13.929 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30001| Fri Feb 22 12:25:13.929 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:13-512763a93d95f035bd477d26", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535913929), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:13.929 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } 
m30000| Fri Feb 22 12:25:13.930 [conn7] moveChunk updating self version to: 25|1||5127638ca620d63c9f282dd1 through { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } for collection 'test.foo' m30000| Fri Feb 22 12:25:13.930 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:13-512763a9865ed09de9e1daf6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535913930), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:13.930 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:13.930 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:13.930 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:13.930 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:13.930 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:13.930 [cleanupOldData-512763a9865ed09de9e1daf7] (start) waiting to cleanup test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:13.931 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:13.931 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:13-512763a9865ed09de9e1daf8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535913931), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.09237447124905884 }, max: { _id: 0.119084446458146 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:13.931 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:13.931 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 24|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:13.931 [Balancer] major version query from 24|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 24000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 24000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 24000|0 } } ] } m30999| Fri Feb 22 12:25:13.932 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 25|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:13.932 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 26 version: 25|1||5127638ca620d63c9f282dd1 based on: 24|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:13.932 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:13.932 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
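Each `moveChunk.from` / `moveChunk.to` metadata event above carries per-step timings in milliseconds (`step1 of 6: 0, step2 of 6: 1, ...`). A throwaway parser for those details strings, where the regex is an assumption about the exact printed format:

```python
import re

# the moveChunk.from details logged just above
DETAILS = ("step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, "
           "step4 of 6: 10, step5 of 6: 11, step6 of 6: 0")

def step_timings(details):
    # -> {step number: milliseconds} for every "stepN of M: T" in the blob
    return {int(n): int(ms)
            for n, ms in re.findall(r"step(\d+) of \d+: (\d+)", details)}

timings = step_timings(DETAILS)
total_ms = sum(timings.values())  # 22 ms for the migration logged above
```

Summing the steps gives a quick per-migration latency figure when skimming many rounds like these.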
m30000| Fri Feb 22 12:25:13.950 [cleanupOldData-512763a9865ed09de9e1daf7] waiting to remove documents for test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30000| Fri Feb 22 12:25:13.951 [cleanupOldData-512763a9865ed09de9e1daf7] moveChunk starting delete for: test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30000| Fri Feb 22 12:25:13.954 [cleanupOldData-512763a9865ed09de9e1daf7] moveChunk deleted 52 documents for test.foo from { _id: 0.09237447124905884 } -> { _id: 0.119084446458146 } m30999| Fri Feb 22 12:25:14.032 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:25:14 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838', sleeping for 30000ms m30999| Fri Feb 22 12:25:14.560 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:14.560 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:14.560 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.560 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.560 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:14.560 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 
12:25:14.560 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.561 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 21000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false 
} { "shard0001" : 27, "shard0000" : 14 } m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} } m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.563 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } { "_id" : "shard0001", "host" : "localhost:30001" } { "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" } m30999| Fri Feb 22 12:25:14.564 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0001" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0001" } } m30999| Fri Feb 22 12:25:14.564 
[conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000] m30999| Fri Feb 22 12:25:14.564 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.564 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.564 [conn1] [pcursor] finishing over 1 shards m30999| Fri Feb 22 12:25:14.564 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false } m30999| Fri Feb 22 12:25:14.564 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 27.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } m30999| Fri Feb 22 12:25:14.933 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:14.934 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:14.934 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:14 2013" }, m30999| "why" : "doing balance round", 
m30999| "ts" : { "$oid" : "512763aaa620d63c9f282dea" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763a9a620d63c9f282de9" } } m30999| Fri Feb 22 12:25:14.935 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763aaa620d63c9f282dea m30999| Fri Feb 22 12:25:14.935 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:14.935 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:14.935 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:14.936 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:14.936 [Balancer] going to move { _id: "test.foo-_id_0.119084446458146", lastmod: Timestamp 25000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:14.936 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 25|1||000000000000000000000000min: { _id: 0.119084446458146 }max: { _id: 0.1485460062976927 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:14.936 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:14.937 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.119084446458146", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:14.937 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763aa865ed09de9e1daf9 m30000| Fri Feb 22 12:25:14.938 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:14-512763aa865ed09de9e1dafa", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535914938), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:14.939 [conn7] moveChunk request accepted at version 25|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:14.939 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:14.939 [migrateThread] starting receiving-end of migration of chunk { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:14.948 [migrateThread] Waiting for replication to catch up before entering critical section m30001| Fri Feb 22 12:25:14.948 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30001| Fri Feb 22 12:25:14.949 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30000| Fri Feb 22 12:25:14.949 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:14.949 [conn7] moveChunk setting version to: 26|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:14.950 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:14.959 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30001| Fri Feb 22 12:25:14.959 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30001| Fri Feb 22 12:25:14.959 [migrateThread] about to log metadata 
event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:14-512763aa3d95f035bd477d27", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535914959), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:14.960 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:14.960 [conn7] moveChunk updating self version to: 26|1||5127638ca620d63c9f282dd1 through { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } for collection 'test.foo' m30000| Fri Feb 22 12:25:14.961 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:14-512763aa865ed09de9e1dafb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535914961), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:14.961 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:14.961 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:14.961 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:14.961 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:14.961 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:14.961 [cleanupOldData-512763aa865ed09de9e1dafc] (start) waiting to cleanup test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 }, # 
cursors remaining: 0 m30000| Fri Feb 22 12:25:14.961 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. m30000| Fri Feb 22 12:25:14.962 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:14-512763aa865ed09de9e1dafd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535914961), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.119084446458146 }, max: { _id: 0.1485460062976927 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:14.962 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:14.962 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 25|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:14.962 [Balancer] major version query from 25|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 25000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 25000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 25000|0 } } ] } m30999| Fri Feb 22 12:25:14.963 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 26|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:14.963 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 27 version: 26|1||5127638ca620d63c9f282dd1 based on: 25|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:14.963 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:14.963 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:14.981 [cleanupOldData-512763aa865ed09de9e1dafc] waiting to remove documents for test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30000| Fri Feb 22 12:25:14.981 [cleanupOldData-512763aa865ed09de9e1dafc] moveChunk starting delete for: test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30000| Fri Feb 22 12:25:14.985 [cleanupOldData-512763aa865ed09de9e1dafc] moveChunk deleted 52 documents for test.foo from { _id: 0.119084446458146 } -> { _id: 0.1485460062976927 } m30999| Fri Feb 22 12:25:15.964 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:15.964 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:15.964 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:15 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763aba620d63c9f282deb" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763aaa620d63c9f282dea" } } m30999| Fri Feb 22 12:25:15.965 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763aba620d63c9f282deb m30999| Fri Feb 22 12:25:15.965 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:15.965 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:15.965 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:15.967 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:15.967 [Balancer] going to move { _id: "test.foo-_id_0.1485460062976927", lastmod: Timestamp 26000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:15.967 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 26|1||000000000000000000000000min: { _id: 0.1485460062976927 }max: { _id: 0.174441245617345 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:15.967 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:15.967 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.1485460062976927", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:15.968 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763ab865ed09de9e1dafe m30000| Fri Feb 22 12:25:15.968 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:15-512763ab865ed09de9e1daff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535915968), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:15.969 [conn7] moveChunk request accepted at version 26|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:15.969 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:15.969 [migrateThread] starting receiving-end of migration of chunk { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:15.976 [migrateThread] Waiting for replication to 
catch up before entering critical section m30001| Fri Feb 22 12:25:15.976 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30001| Fri Feb 22 12:25:15.977 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30000| Fri Feb 22 12:25:15.979 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:15.979 [conn7] moveChunk setting version to: 27|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:15.979 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:15.987 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30001| Fri Feb 22 12:25:15.987 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30001| Fri Feb 22 12:25:15.987 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:15-512763ab3d95f035bd477d28", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535915987), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:15.990 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri 
Feb 22 12:25:15.990 [conn7] moveChunk updating self version to: 27|1||5127638ca620d63c9f282dd1 through { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } for collection 'test.foo' m30000| Fri Feb 22 12:25:15.990 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:15-512763ab865ed09de9e1db00", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535915990), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:15.990 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:15.990 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:15.990 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:15.991 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:15.991 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:15.991 [cleanupOldData-512763ab865ed09de9e1db01] (start) waiting to cleanup test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:15.991 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:15.991 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:15-512763ab865ed09de9e1db02", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535915991), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.1485460062976927 }, max: { _id: 0.174441245617345 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:15.991 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:15.991 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 26|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:15.991 [Balancer] major version query from 26|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 26000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 26000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 26000|0 } } ] } m30999| Fri Feb 22 12:25:15.992 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 27|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:15.992 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 28 version: 27|1||5127638ca620d63c9f282dd1 based on: 26|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:15.992 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:15.992 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:16.011 [cleanupOldData-512763ab865ed09de9e1db01] waiting to remove documents for test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30000| Fri Feb 22 12:25:16.011 [cleanupOldData-512763ab865ed09de9e1db01] moveChunk starting delete for: test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30000| Fri Feb 22 12:25:16.014 [cleanupOldData-512763ab865ed09de9e1db01] moveChunk deleted 52 documents for test.foo from { _id: 0.1485460062976927 } -> { _id: 0.174441245617345 } m30999| Fri Feb 22 12:25:16.993 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:16.993 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:16.994 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:16 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763aca620d63c9f282dec" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763aba620d63c9f282deb" } } m30999| Fri Feb 22 12:25:16.995 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763aca620d63c9f282dec m30999| Fri Feb 22 12:25:16.995 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:16.995 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:16.995 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:16.996 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:16.996 [Balancer] going to move { _id: "test.foo-_id_0.174441245617345", lastmod: Timestamp 27000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:16.996 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 27|1||000000000000000000000000min: { _id: 0.174441245617345 }max: { _id: 0.1965870081912726 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:16.996 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:16.996 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.174441245617345", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:16.997 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763ac865ed09de9e1db03 m30000| Fri Feb 22 12:25:16.997 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:16-512763ac865ed09de9e1db04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535916997), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:16.998 [conn7] moveChunk request accepted at version 27|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:16.998 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:16.999 [migrateThread] starting receiving-end of migration of chunk { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:17.006 [migrateThread] Waiting for replication to 
catch up before entering critical section m30001| Fri Feb 22 12:25:17.006 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30001| Fri Feb 22 12:25:17.007 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30000| Fri Feb 22 12:25:17.009 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:17.009 [conn7] moveChunk setting version to: 28|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:17.009 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:17.018 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30001| Fri Feb 22 12:25:17.018 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30001| Fri Feb 22 12:25:17.018 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:17-512763ad3d95f035bd477d29", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535917018), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:17.019 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri 
Feb 22 12:25:17.019 [conn7] moveChunk updating self version to: 28|1||5127638ca620d63c9f282dd1 through { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } for collection 'test.foo' m30000| Fri Feb 22 12:25:17.020 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:17-512763ad865ed09de9e1db05", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535917020), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:17.020 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:17.020 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:17.020 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:17.020 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:17.020 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:17.020 [cleanupOldData-512763ad865ed09de9e1db06] (start) waiting to cleanup test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:17.020 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:17.020 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:17-512763ad865ed09de9e1db07", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535917020), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.174441245617345 }, max: { _id: 0.1965870081912726 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:17.021 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:17.021 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 27|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:17.021 [Balancer] major version query from 27|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 27000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 27000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 27000|0 } } ] } m30999| Fri Feb 22 12:25:17.022 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 28|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:17.022 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 29 version: 28|1||5127638ca620d63c9f282dd1 based on: 27|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:17.022 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:17.022 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:17.040 [cleanupOldData-512763ad865ed09de9e1db06] waiting to remove documents for test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30000| Fri Feb 22 12:25:17.040 [cleanupOldData-512763ad865ed09de9e1db06] moveChunk starting delete for: test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30000| Fri Feb 22 12:25:17.044 [cleanupOldData-512763ad865ed09de9e1db06] moveChunk deleted 52 documents for test.foo from { _id: 0.174441245617345 } -> { _id: 0.1965870081912726 } m30999| Fri Feb 22 12:25:18.023 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:18.023 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:18.023 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:18 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763aea620d63c9f282ded" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763aca620d63c9f282dec" } } m30999| Fri Feb 22 12:25:18.024 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763aea620d63c9f282ded m30999| Fri Feb 22 12:25:18.024 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:18.024 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:18.024 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:18.026 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:18.026 [Balancer] going to move { _id: "test.foo-_id_0.1965870081912726", lastmod: Timestamp 28000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:18.026 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 28|1||000000000000000000000000min: { _id: 0.1965870081912726 }max: { _id: 0.2179276177193969 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:18.026 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:18.026 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.1965870081912726", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:18.028 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763ae865ed09de9e1db08 m30000| Fri Feb 22 12:25:18.028 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:18-512763ae865ed09de9e1db09", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535918028), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:18.029 [conn7] moveChunk request accepted at version 28|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:18.029 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:18.029 [migrateThread] starting receiving-end of migration of chunk { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:18.038 [migrateThread] Waiting for replication 
to catch up before entering critical section m30001| Fri Feb 22 12:25:18.038 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } m30000| Fri Feb 22 12:25:18.040 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:18.040 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } m30000| Fri Feb 22 12:25:18.050 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:18.050 [conn7] moveChunk setting version to: 29|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:18.050 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:18.051 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } m30001| Fri Feb 22 12:25:18.051 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 } m30001| Fri Feb 22 12:25:18.051 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:18-512763ae3d95f035bd477d2a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535918051), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 8, step4 of 5: 0, step5 of 5: 12 } } 
m30000| Fri Feb 22 12:25:18.060 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:18.060 [conn7] moveChunk updating self version to: 29|1||5127638ca620d63c9f282dd1 through { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 } for collection 'test.foo' m30000| Fri Feb 22 12:25:18.061 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:18-512763ae865ed09de9e1db0a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535918061), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:18.061 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:18.061 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:18.061 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:18.061 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:18.061 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:18.061 [cleanupOldData-512763ae865ed09de9e1db0b] (start) waiting to cleanup test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:18.062 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:18.062 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:18-512763ae865ed09de9e1db0c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535918062), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.1965870081912726 }, max: { _id: 0.2179276177193969 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:18.062 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:18.062 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 28|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:18.063 [Balancer] major version query from 28|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 28000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 28000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 28000|0 } } ] } m30999| Fri Feb 22 12:25:18.063 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 29|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:18.063 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 30 version: 29|1||5127638ca620d63c9f282dd1 based on: 28|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:18.063 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:18.064 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:18.082 [cleanupOldData-512763ae865ed09de9e1db0b] waiting to remove documents for test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30000| Fri Feb 22 12:25:18.082 [cleanupOldData-512763ae865ed09de9e1db0b] moveChunk starting delete for: test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30000| Fri Feb 22 12:25:18.085 [cleanupOldData-512763ae865ed09de9e1db0b] moveChunk deleted 52 documents for test.foo from { _id: 0.1965870081912726 } -> { _id: 0.2179276177193969 }
m30999| Fri Feb 22 12:25:19.064 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:19.065 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:19.065 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:19 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763afa620d63c9f282dee" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763aea620d63c9f282ded" } }
m30999| Fri Feb 22 12:25:19.066 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763afa620d63c9f282dee
m30999| Fri Feb 22 12:25:19.066 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:19.066 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:19.066 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:19.067 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:19.067 [Balancer] going to move { _id: "test.foo-_id_0.2179276177193969", lastmod: Timestamp 29000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:19.067 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 29|1||000000000000000000000000min: { _id: 0.2179276177193969 }max: { _id: 0.2448905608616769 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:19.067 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:19.067 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.2179276177193969", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:19.068 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763af865ed09de9e1db0d
m30000| Fri Feb 22 12:25:19.068 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:19-512763af865ed09de9e1db0e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535919068), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:19.069 [conn7] moveChunk request accepted at version 29|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:19.069 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:19.069 [migrateThread] starting receiving-end of migration of chunk { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:19.077 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:19.077 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30001| Fri Feb 22 12:25:19.079 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30000| Fri Feb 22 12:25:19.079 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:19.080 [conn7] moveChunk setting version to: 30|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:19.080 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:19.089 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30001| Fri Feb 22 12:25:19.089 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30001| Fri Feb 22 12:25:19.089 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:19-512763af3d95f035bd477d2b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535919089), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:25:19.090 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } 
m30000| Fri Feb 22 12:25:19.090 [conn7] moveChunk updating self version to: 30|1||5127638ca620d63c9f282dd1 through { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:19.092 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:19-512763af865ed09de9e1db0f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535919092), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:19.092 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:19.092 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:19.092 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:19.092 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:19.092 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:19.092 [cleanupOldData-512763af865ed09de9e1db10] (start) waiting to cleanup test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:19.093 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:19.093 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:19-512763af865ed09de9e1db11", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535919093), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.2179276177193969 }, max: { _id: 0.2448905608616769 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 12, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:19.093 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:19.093 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 29|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:19.093 [Balancer] major version query from 29|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 29000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 29000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 29000|0 } } ] }
m30999| Fri Feb 22 12:25:19.094 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 30|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:19.094 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 31 version: 30|1||5127638ca620d63c9f282dd1 based on: 29|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:19.094 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:19.094 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:19.112 [cleanupOldData-512763af865ed09de9e1db10] waiting to remove documents for test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30000| Fri Feb 22 12:25:19.113 [cleanupOldData-512763af865ed09de9e1db10] moveChunk starting delete for: test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30000| Fri Feb 22 12:25:19.115 [cleanupOldData-512763af865ed09de9e1db10] moveChunk deleted 52 documents for test.foo from { _id: 0.2179276177193969 } -> { _id: 0.2448905608616769 }
m30999| Fri Feb 22 12:25:19.564 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:19.564 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:19.564 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:19.565 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.566 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 21000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0001" : 32, "shard0000" : 9 }
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.567 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "_id" : "shard0001", "host" : "localhost:30001" }
{ "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" }
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0001" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0001" } }
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:19.568 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 32.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:25:20.095 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:20.095 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:20.096 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:20 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b0a620d63c9f282def" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763afa620d63c9f282dee" } }
m30999| Fri Feb 22 12:25:20.096 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b0a620d63c9f282def
m30999| Fri Feb 22 12:25:20.096 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:20.096 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:20.096 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:20.098 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:20.098 [Balancer] going to move { _id: "test.foo-_id_0.2448905608616769", lastmod: Timestamp 30000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:20.098 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 30|1||000000000000000000000000min: { _id: 0.2448905608616769 }max: { _id: 0.2748649539425969 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:20.098 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:20.098 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.2448905608616769", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:20.099 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b0865ed09de9e1db12
m30000| Fri Feb 22 12:25:20.099 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:20-512763b0865ed09de9e1db13", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535920099), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:20.100 [conn7] moveChunk request accepted at version 30|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:20.100 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:20.100 [migrateThread] starting receiving-end of migration of chunk { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:20.107 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:20.107 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30001| Fri Feb 22 12:25:20.108 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30000| Fri Feb 22 12:25:20.111 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:20.111 [conn7] moveChunk setting version to: 31|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:20.111 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:20.119 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30001| Fri Feb 22 12:25:20.119 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30001| Fri Feb 22 12:25:20.119 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:20-512763b03d95f035bd477d2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535920119), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 6, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:25:20.121 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:25:20.121 [conn7] moveChunk updating self version to: 31|1||5127638ca620d63c9f282dd1 through { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:20.122 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:20-512763b0865ed09de9e1db14", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535920121), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:20.122 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:20.122 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:20.122 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:20.122 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:20.122 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:20.122 [cleanupOldData-512763b0865ed09de9e1db15] (start) waiting to cleanup test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:20.122 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:20.122 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:20-512763b0865ed09de9e1db16", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535920122), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.2448905608616769 }, max: { _id: 0.2748649539425969 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:20.122 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:20.123 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 30|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:20.123 [Balancer] major version query from 30|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 30000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 30000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 30000|0 } } ] }
m30999| Fri Feb 22 12:25:20.123 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 31|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:20.123 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 31|1||5127638ca620d63c9f282dd1 based on: 30|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:20.123 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:20.124 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:20.142 [cleanupOldData-512763b0865ed09de9e1db15] waiting to remove documents for test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30000| Fri Feb 22 12:25:20.142 [cleanupOldData-512763b0865ed09de9e1db15] moveChunk starting delete for: test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30000| Fri Feb 22 12:25:20.145 [cleanupOldData-512763b0865ed09de9e1db15] moveChunk deleted 52 documents for test.foo from { _id: 0.2448905608616769 } -> { _id: 0.2748649539425969 }
m30999| Fri Feb 22 12:25:21.124 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:21.125 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:21.125 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:21 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b1a620d63c9f282df0" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b0a620d63c9f282def" } }
m30999| Fri Feb 22 12:25:21.126 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b1a620d63c9f282df0
m30999| Fri Feb 22 12:25:21.126 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:21.126 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:21.126 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:21.127 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:21.127 [Balancer] going to move { _id: "test.foo-_id_0.2748649539425969", lastmod: Timestamp 31000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:21.127 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 31|1||000000000000000000000000min: { _id: 0.2748649539425969 }max: { _id: 0.3002202115021646 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:21.127 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:21.128 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.2748649539425969", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:21.129 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b1865ed09de9e1db17
m30000| Fri Feb 22 12:25:21.129 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:21-512763b1865ed09de9e1db18", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535921129), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:21.130 [conn7] moveChunk request accepted at version 31|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:21.130 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:21.130 [migrateThread] starting receiving-end of migration of chunk { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:21.138 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:21.138 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30001| Fri Feb 22 12:25:21.140 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30000| Fri Feb 22 12:25:21.140 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:21.140 [conn7] moveChunk setting version to: 32|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:21.140 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:21.150 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30001| Fri Feb 22 12:25:21.150 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30001| Fri Feb 22 12:25:21.150 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:21-512763b13d95f035bd477d2d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535921150), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:25:21.151 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } 
m30000| Fri Feb 22 12:25:21.151 [conn7] moveChunk updating self version to: 32|1||5127638ca620d63c9f282dd1 through { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:21.151 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:21-512763b1865ed09de9e1db19", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535921151), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:21.151 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:21.151 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:21.151 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:21.151 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:21.152 [cleanupOldData-512763b1865ed09de9e1db1a] (start) waiting to cleanup test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:21.152 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:21.152 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. 
m30000| Fri Feb 22 12:25:21.152 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:21-512763b1865ed09de9e1db1b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535921152), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.2748649539425969 }, max: { _id: 0.3002202115021646 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:21.152 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:21.152 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 31|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:21.153 [Balancer] major version query from 31|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 31000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 31000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 31000|0 } } ] }
m30999| Fri Feb 22 12:25:21.153 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 32|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:21.153 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 33 version: 32|1||5127638ca620d63c9f282dd1 based on: 31|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:21.153 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:21.154 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:21.172 [cleanupOldData-512763b1865ed09de9e1db1a] waiting to remove documents for test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30000| Fri Feb 22 12:25:21.172 [cleanupOldData-512763b1865ed09de9e1db1a] moveChunk starting delete for: test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30000| Fri Feb 22 12:25:21.176 [cleanupOldData-512763b1865ed09de9e1db1a] moveChunk deleted 52 documents for test.foo from { _id: 0.2748649539425969 } -> { _id: 0.3002202115021646 }
m30999| Fri Feb 22 12:25:22.154 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:22.155 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:22.155 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:22 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b2a620d63c9f282df1" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b1a620d63c9f282df0" } }
m30999| Fri Feb 22 12:25:22.156 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b2a620d63c9f282df1
m30999| Fri Feb 22 12:25:22.156 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:22.156 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:22.156 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:22.157 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:22.158 [Balancer] going to move { _id: "test.foo-_id_0.3002202115021646", lastmod: Timestamp 32000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:22.158 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 32|1||000000000000000000000000min: { _id: 0.3002202115021646 }max: { _id: 0.3187460692133754 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:22.158 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:22.158 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.3002202115021646", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:22.159 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b2865ed09de9e1db1c
m30000| Fri Feb 22 12:25:22.159 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:22-512763b2865ed09de9e1db1d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535922159), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:22.160 [conn7] moveChunk request accepted at version 32|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:22.160 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:22.161 [migrateThread] starting receiving-end of migration of chunk { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:22.169 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:22.169 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30000| Fri Feb 22 12:25:22.171 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:22.171 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30000| Fri Feb 22 12:25:22.181 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:22.181 [conn7] moveChunk setting version to: 33|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:22.181 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:22.181 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30001| Fri Feb 22 12:25:22.181 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30001| Fri Feb 22 12:25:22.181 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:22-512763b23d95f035bd477d2e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535922181), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 12 } }
m30000| Fri Feb 22 12:25:22.191 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:25:22.191 [conn7] moveChunk updating self version to: 33|1||5127638ca620d63c9f282dd1 through { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:22.192 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:22-512763b2865ed09de9e1db1e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535922192), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:22.192 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:22.192 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:22.192 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:22.192 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:22.192 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:22.192 [cleanupOldData-512763b2865ed09de9e1db1f] (start) waiting to cleanup test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:22.193 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked.
m30000| Fri Feb 22 12:25:22.193 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:22-512763b2865ed09de9e1db20", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535922193), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.3002202115021646 }, max: { _id: 0.3187460692133754 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:22.193 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:22.193 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 32|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:22.193 [Balancer] major version query from 32|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 32000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 32000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 32000|0 } } ] }
m30999| Fri Feb 22 12:25:22.194 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 33|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:22.194 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 34 version: 33|1||5127638ca620d63c9f282dd1 based on: 32|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:22.194 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:22.195 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:25:22.212 [cleanupOldData-512763b2865ed09de9e1db1f] waiting to remove documents for test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30000| Fri Feb 22 12:25:22.212 [cleanupOldData-512763b2865ed09de9e1db1f] moveChunk starting delete for: test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30000| Fri Feb 22 12:25:22.217 [cleanupOldData-512763b2865ed09de9e1db1f] moveChunk deleted 52 documents for test.foo from { _id: 0.3002202115021646 } -> { _id: 0.3187460692133754 }
m30999| Fri Feb 22 12:25:23.195 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:23.196 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:23.196 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:23 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b3a620d63c9f282df2" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b2a620d63c9f282df1" } }
m30999| Fri Feb 22 12:25:23.197 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b3a620d63c9f282df2
m30999| Fri Feb 22 12:25:23.197 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:23.197 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:23.197 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:23.198 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:23.198 [Balancer] going to move { _id: "test.foo-_id_0.3187460692133754", lastmod: Timestamp 33000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:23.198 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 33|1||000000000000000000000000min: { _id: 0.3187460692133754 }max: { _id: 0.3477728136349469 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:23.198 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:23.198 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.3187460692133754", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:23.199 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b3865ed09de9e1db21
m30000| Fri Feb 22 12:25:23.199 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:23-512763b3865ed09de9e1db22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535923199), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:23.200 [conn7] moveChunk request accepted at version 33|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:23.201 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:23.201 [migrateThread] starting receiving-end of migration of chunk { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:23.209 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:23.209 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30001| Fri Feb 22 12:25:23.211 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30000| Fri Feb 22 12:25:23.211 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:23.211 [conn7] moveChunk setting version to: 34|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:23.211 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:23.221 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30001| Fri Feb 22 12:25:23.221 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30001| Fri Feb 22 12:25:23.221 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:23-512763b33d95f035bd477d2f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535923221), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 12 } }
m30000| Fri Feb 22 12:25:23.221 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:25:23.221 [conn7] moveChunk updating self version to: 34|1||5127638ca620d63c9f282dd1 through { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:23.222 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:23-512763b3865ed09de9e1db23", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535923222), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:23.222 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:23.222 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:23.222 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:23.222 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:23.222 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:23.222 [cleanupOldData-512763b3865ed09de9e1db24] (start) waiting to cleanup test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:23.223 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked.
m30000| Fri Feb 22 12:25:23.223 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:23-512763b3865ed09de9e1db25", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535923223), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.3187460692133754 }, max: { _id: 0.3477728136349469 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:23.223 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:23.223 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 33|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:23.223 [Balancer] major version query from 33|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 33000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 33000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 33000|0 } } ] }
m30999| Fri Feb 22 12:25:23.224 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 34|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:23.224 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 35 version: 34|1||5127638ca620d63c9f282dd1 based on: 33|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:23.224 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:23.224 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:25:23.242 [cleanupOldData-512763b3865ed09de9e1db24] waiting to remove documents for test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30000| Fri Feb 22 12:25:23.242 [cleanupOldData-512763b3865ed09de9e1db24] moveChunk starting delete for: test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30000| Fri Feb 22 12:25:23.245 [cleanupOldData-512763b3865ed09de9e1db24] moveChunk deleted 52 documents for test.foo from { _id: 0.3187460692133754 } -> { _id: 0.3477728136349469 }
m30999| Fri Feb 22 12:25:24.225 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:24.225 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:24.225 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:24 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b4a620d63c9f282df3" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b3a620d63c9f282df2" } }
m30999| Fri Feb 22 12:25:24.226 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b4a620d63c9f282df3
m30999| Fri Feb 22 12:25:24.226 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:24.226 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:24.226 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:24.227 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:24.227 [Balancer] going to move { _id: "test.foo-_id_0.3477728136349469", lastmod: Timestamp 34000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:24.227 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 34|1||000000000000000000000000min: { _id: 0.3477728136349469 }max: { _id: 0.376576479524374 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:24.227 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:24.228 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.3477728136349469", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:24.228 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b4865ed09de9e1db26
m30000| Fri Feb 22 12:25:24.228 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:24-512763b4865ed09de9e1db27", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535924228), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:24.229 [conn7] moveChunk request accepted at version 34|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:24.229 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:24.230 [migrateThread] starting receiving-end of migration of chunk { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:24.235 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:24.235 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30001| Fri Feb 22 12:25:24.236 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30000| Fri Feb 22 12:25:24.240 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:24.240 [conn7] moveChunk setting version to: 35|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:24.240 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:24.247 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30001| Fri Feb 22 12:25:24.247 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30001| Fri Feb 22 12:25:24.247 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:24-512763b43d95f035bd477d30", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535924247), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 5, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:25:24.250 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:25:24.250 [conn7] moveChunk updating self version to: 35|1||5127638ca620d63c9f282dd1 through { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:24.251 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:24-512763b4865ed09de9e1db28", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535924251), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:24.251 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:24.251 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:24.251 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:24.251 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:24.251 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:24.251 [cleanupOldData-512763b4865ed09de9e1db29] (start) waiting to cleanup test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:24.251 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked.
m30000| Fri Feb 22 12:25:24.251 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:24-512763b4865ed09de9e1db2a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535924251), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.3477728136349469 }, max: { _id: 0.376576479524374 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:24.251 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:24.252 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 34|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:24.252 [Balancer] major version query from 34|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 34000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 34000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 34000|0 } } ] }
m30999| Fri Feb 22 12:25:24.252 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 35|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:24.252 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 36 version: 35|1||5127638ca620d63c9f282dd1 based on: 34|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:24.252 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:24.253 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:25:24.271 [cleanupOldData-512763b4865ed09de9e1db29] waiting to remove documents for test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30000| Fri Feb 22 12:25:24.271 [cleanupOldData-512763b4865ed09de9e1db29] moveChunk starting delete for: test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30000| Fri Feb 22 12:25:24.274 [cleanupOldData-512763b4865ed09de9e1db29] moveChunk deleted 52 documents for test.foo from { _id: 0.3477728136349469 } -> { _id: 0.376576479524374 }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:24.569 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.570 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 21000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0001" : 37, "shard0000" : 4 }
m30999| Fri Feb 22 12:25:24.571 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:24.571 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:24.571 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.571 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.571 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:24.571 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "_id" : "shard0001", "host" : "localhost:30001" }
{ "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" }
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0001" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0001" } }
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:24.572 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:24.573 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 37.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:25:25.253 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:25.254 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:25.254 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:25 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b5a620d63c9f282df4" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b4a620d63c9f282df3" } }
m30999| Fri Feb 22 12:25:25.255 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b5a620d63c9f282df4
m30999| Fri Feb 22 12:25:25.255 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:25.255 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:25.255 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:25.256 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:25.256 [Balancer] going to move { _id: "test.foo-_id_0.376576479524374", lastmod: Timestamp 35000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:25.256 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 35|1||000000000000000000000000min: { _id: 0.376576479524374 }max: { _id: 0.4025206139776856 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:25.257 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:25.257 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.376576479524374", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:25.258 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b5865ed09de9e1db2b
m30000| Fri Feb 22 12:25:25.258 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:25-512763b5865ed09de9e1db2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535925258), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:25.259 [conn7] moveChunk request accepted at version 35|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:25.259 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:25.260 [migrateThread] starting receiving-end of migration of chunk { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:25.268 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:25.268 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }
m30000| Fri Feb 22 12:25:25.270 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:25.270 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }
m30000| Fri Feb 22 12:25:25.280 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:25.280 [conn7] moveChunk setting version to: 36|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:25.280 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:25.280 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }
m30001| Fri Feb 22 12:25:25.280 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id:
0.376576479524374 } -> { _id: 0.4025206139776856 } m30001| Fri Feb 22 12:25:25.280 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:25-512763b53d95f035bd477d31", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535925280), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:25.290 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } m30000| Fri Feb 22 12:25:25.290 [conn7] moveChunk updating self version to: 36|1||5127638ca620d63c9f282dd1 through { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } for collection 'test.foo' m30000| Fri Feb 22 12:25:25.291 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:25-512763b5865ed09de9e1db2d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535925291), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:25.291 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:25.291 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:25.291 [conn7] forking for cleanup of chunk data m30000| Fri Feb 22 12:25:25.291 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section m30000| Fri Feb 22 12:25:25.291 [conn7] MigrateFromStatus::done Global lock acquired m30000| Fri Feb 22 12:25:25.291 
[cleanupOldData-512763b5865ed09de9e1db2e] (start) waiting to cleanup test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }, # cursors remaining: 0 m30000| Fri Feb 22 12:25:25.291 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked. m30000| Fri Feb 22 12:25:25.291 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:25-512763b5865ed09de9e1db2f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535925291), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.376576479524374 }, max: { _id: 0.4025206139776856 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:25:25.292 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:25:25.292 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 35|1||5127638ca620d63c9f282dd1 and 41 chunks m30999| Fri Feb 22 12:25:25.292 [Balancer] major version query from 35|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 35000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 35000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 35000|0 } } ] } m30999| Fri Feb 22 12:25:25.293 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 36|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:25.293 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 37 version: 36|1||5127638ca620d63c9f282dd1 based on: 35|1||5127638ca620d63c9f282dd1 m30999| Fri Feb 22 12:25:25.293 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:25.293 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked. 
m30000| Fri Feb 22 12:25:25.311 [cleanupOldData-512763b5865ed09de9e1db2e] waiting to remove documents for test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }
m30000| Fri Feb 22 12:25:25.311 [cleanupOldData-512763b5865ed09de9e1db2e] moveChunk starting delete for: test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }
m30000| Fri Feb 22 12:25:25.315 [cleanupOldData-512763b5865ed09de9e1db2e] moveChunk deleted 52 documents for test.foo from { _id: 0.376576479524374 } -> { _id: 0.4025206139776856 }
m30999| Fri Feb 22 12:25:26.294 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:26.294 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:26.294 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:26 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b6a620d63c9f282df5" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b5a620d63c9f282df4" } }
m30999| Fri Feb 22 12:25:26.295 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b6a620d63c9f282df5
m30999| Fri Feb 22 12:25:26.295 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:26.295 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:26.295 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:26.297 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:26.297 [Balancer] going to move { _id: "test.foo-_id_0.4025206139776856", lastmod: Timestamp 36000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:26.297 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 36|1||000000000000000000000000min: { _id: 0.4025206139776856 }max: { _id: 0.4263731909450144 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:26.297 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:26.297 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.4025206139776856", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:26.298 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b6865ed09de9e1db30
m30000| Fri Feb 22 12:25:26.298 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:26-512763b6865ed09de9e1db31", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535926298), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:26.299 [conn7] moveChunk request accepted at version 36|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:26.299 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:26.300 [migrateThread] starting receiving-end of migration of chunk { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:26.308 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:26.308 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30000| Fri Feb 22 12:25:26.310 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:26.310 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30000| Fri Feb 22 12:25:26.320 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:26.320 [conn7] moveChunk setting version to: 37|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:26.320 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:26.320 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30001| Fri Feb 22 12:25:26.320 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30001| Fri Feb 22 12:25:26.320 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:26-512763b63d95f035bd477d32", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535926320), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 12 } }
m30000| Fri Feb 22 12:25:26.330 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:25:26.330 [conn7] moveChunk updating self version to: 37|1||5127638ca620d63c9f282dd1 through { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:26.331 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:26-512763b6865ed09de9e1db32", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535926331), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:26.331 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:26.331 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:26.331 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:26.331 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:26.331 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:26.331 [cleanupOldData-512763b6865ed09de9e1db33] (start) waiting to cleanup test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:26.331 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked.
m30000| Fri Feb 22 12:25:26.331 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:26-512763b6865ed09de9e1db34", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535926331), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.4025206139776856 }, max: { _id: 0.4263731909450144 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 20, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:26.332 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:26.332 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 36|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:26.332 [Balancer] major version query from 36|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 36000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 36000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 36000|0 } } ] }
m30999| Fri Feb 22 12:25:26.333 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 37|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:26.333 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 38 version: 37|1||5127638ca620d63c9f282dd1 based on: 36|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:26.333 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:26.333 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:25:26.351 [cleanupOldData-512763b6865ed09de9e1db33] waiting to remove documents for test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30000| Fri Feb 22 12:25:26.351 [cleanupOldData-512763b6865ed09de9e1db33] moveChunk starting delete for: test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30000| Fri Feb 22 12:25:26.355 [cleanupOldData-512763b6865ed09de9e1db33] moveChunk deleted 52 documents for test.foo from { _id: 0.4025206139776856 } -> { _id: 0.4263731909450144 }
m30999| Fri Feb 22 12:25:27.334 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:27.334 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:27.334 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:27 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763b7a620d63c9f282df6" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763b6a620d63c9f282df5" } }
m30999| Fri Feb 22 12:25:27.335 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b7a620d63c9f282df6
m30999| Fri Feb 22 12:25:27.335 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:27.335 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:27.335 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:27.336 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:27.337 [Balancer] going to move { _id: "test.foo-_id_0.4263731909450144", lastmod: Timestamp 37000|1, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, shard: "shard0000" } from shard0000() to shard0001
m30999| Fri Feb 22 12:25:27.337 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 37|1||000000000000000000000000min: { _id: 0.4263731909450144 }max: { _id: 0.4538818108849227 }) shard0000:localhost:30000 -> shard0001:localhost:30001
m30000| Fri Feb 22 12:25:27.337 [conn7] warning: secondaryThrottle selected but no replication
m30000| Fri Feb 22 12:25:27.337 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.4263731909450144", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:25:27.338 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b7865ed09de9e1db35
m30000| Fri Feb 22 12:25:27.338 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:27-512763b7865ed09de9e1db36", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535927338), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:27.339 [conn7] moveChunk request accepted at version 37|1||5127638ca620d63c9f282dd1
m30000| Fri Feb 22 12:25:27.339 [conn7] moveChunk number of documents: 52
m30001| Fri Feb 22 12:25:27.339 [migrateThread] starting receiving-end of migration of chunk { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } for collection test.foo from localhost:30000 (0 slaves detected)
m30001| Fri Feb 22 12:25:27.347 [migrateThread] Waiting for replication to catch up before entering critical section
m30001| Fri Feb 22 12:25:27.347 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 }
m30001| Fri Feb 22 12:25:27.349 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 }
m30000| Fri Feb 22 12:25:27.349 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:25:27.349 [conn7] moveChunk setting version to: 38|0||5127638ca620d63c9f282dd1
m30001| Fri Feb 22 12:25:27.349 [conn6] Waiting for commit to finish
m30001| Fri Feb 22 12:25:27.359 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 }
m30001| Fri Feb 22 12:25:27.359 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 }
m30001| Fri Feb 22 12:25:27.359 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:27-512763b73d95f035bd477d33", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535927359), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } }
m30000| Fri Feb 22 12:25:27.359 [conn7] moveChunk migrate commit accepted by TO-shard: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 }
m30000| Fri Feb 22 12:25:27.359 [conn7] moveChunk updating self version to: 38|1||5127638ca620d63c9f282dd1 through { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } for collection 'test.foo'
m30000| Fri Feb 22 12:25:27.360 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:27-512763b7865ed09de9e1db37", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535927360), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:27.360 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:27.360 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:27.360 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:27.360 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:27.360 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:27.361 [cleanupOldData-512763b7865ed09de9e1db38] (start) waiting to cleanup test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:27.361 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked.
m30000| Fri Feb 22 12:25:27.361 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:27-512763b7865ed09de9e1db39", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535927361), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.4263731909450144 }, max: { _id: 0.4538818108849227 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:27.361 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:27.361 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 37|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:27.361 [Balancer] major version query from 37|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 37000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 37000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 37000|0 } } ] }
m30999| Fri Feb 22 12:25:27.362 [Balancer] loaded 2 chunks into new chunk manager for test.foo with version 38|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:27.362 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 39 version: 38|1||5127638ca620d63c9f282dd1 based on: 37|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:27.362 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:27.362 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:25:27.381 [cleanupOldData-512763b7865ed09de9e1db38] waiting to remove documents for test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30000| Fri Feb 22 12:25:27.381 [cleanupOldData-512763b7865ed09de9e1db38] moveChunk starting delete for: test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30000| Fri Feb 22 12:25:27.384 [cleanupOldData-512763b7865ed09de9e1db38] moveChunk deleted 52 documents for test.foo from { _id: 0.4263731909450144 } -> { _id: 0.4538818108849227 } m30999| Fri Feb 22 12:25:28.363 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:25:28.363 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 ) m30999| Fri Feb 22 12:25:28.363 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:28 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763b8a620d63c9f282df7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512763b7a620d63c9f282df6" } } m30999| Fri Feb 22 12:25:28.364 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b8a620d63c9f282df7 m30999| Fri Feb 22 12:25:28.364 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:28.364 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:28.364 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:28.365 [Balancer] shard0000 is unavailable m30999| Fri Feb 22 12:25:28.365 [Balancer] going to move { _id: "test.foo-_id_0.4538818108849227", lastmod: Timestamp 38000|1, lastmodEpoch: 
ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, shard: "shard0000" } from shard0000() to shard0001 m30999| Fri Feb 22 12:25:28.365 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0000:localhost:30000lastmod: 38|1||000000000000000000000000min: { _id: 0.4538818108849227 }max: { _id: 0.4776490097865462 }) shard0000:localhost:30000 -> shard0001:localhost:30001 m30000| Fri Feb 22 12:25:28.365 [conn7] warning: secondaryThrottle selected but no replication m30000| Fri Feb 22 12:25:28.366 [conn7] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30000", to: "localhost:30001", fromShard: "shard0000", toShard: "shard0001", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_0.4538818108849227", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30000| Fri Feb 22 12:25:28.366 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' acquired, ts : 512763b8865ed09de9e1db3a m30000| Fri Feb 22 12:25:28.366 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:28-512763b8865ed09de9e1db3b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535928366), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, from: "shard0000", to: "shard0001" } } m30000| Fri Feb 22 12:25:28.367 [conn7] moveChunk request accepted at version 38|1||5127638ca620d63c9f282dd1 m30000| Fri Feb 22 12:25:28.367 [conn7] moveChunk number of documents: 52 m30001| Fri Feb 22 12:25:28.368 [migrateThread] starting receiving-end of migration of chunk { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } for collection test.foo from localhost:30000 (0 slaves detected) m30001| Fri Feb 22 12:25:28.376 [migrateThread] Waiting for replication 
to catch up before entering critical section m30001| Fri Feb 22 12:25:28.376 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30001| Fri Feb 22 12:25:28.377 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30000| Fri Feb 22 12:25:28.378 [conn7] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:25:28.378 [conn7] moveChunk setting version to: 39|0||5127638ca620d63c9f282dd1 m30001| Fri Feb 22 12:25:28.378 [conn6] Waiting for commit to finish m30001| Fri Feb 22 12:25:28.387 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30001| Fri Feb 22 12:25:28.387 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 } m30001| Fri Feb 22 12:25:28.387 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:28-512763b83d95f035bd477d34", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535928387), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 7, step4 of 5: 0, step5 of 5: 11 } } m30000| Fri Feb 22 12:25:28.388 [conn7] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30000", min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 522548, catchup: 0, steady: 0 }, ok: 1.0 } 
m30000| Fri Feb 22 12:25:28.388 [conn7] moveChunk moved last chunk out for collection 'test.foo'
m30000| Fri Feb 22 12:25:28.389 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:28-512763b8865ed09de9e1db3c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535928389), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, from: "shard0000", to: "shard0001" } }
m30000| Fri Feb 22 12:25:28.389 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:28.389 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:28.389 [conn7] forking for cleanup of chunk data
m30000| Fri Feb 22 12:25:28.389 [conn7] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30000| Fri Feb 22 12:25:28.389 [conn7] MigrateFromStatus::done Global lock acquired
m30000| Fri Feb 22 12:25:28.389 [cleanupOldData-512763b8865ed09de9e1db3d] (start) waiting to cleanup test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 }, # cursors remaining: 0
m30000| Fri Feb 22 12:25:28.389 [conn7] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30000:1361535909:25287' unlocked.
m30000| Fri Feb 22 12:25:28.389 [conn7] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:28-512763b8865ed09de9e1db3e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:36530", time: new Date(1361535928389), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 0.4538818108849227 }, max: { _id: 0.4776490097865462 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 10, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:25:28.389 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:25:28.390 [Balancer] loading chunk manager for collection test.foo using old chunk manager w/ version 38|1||5127638ca620d63c9f282dd1 and 41 chunks
m30999| Fri Feb 22 12:25:28.390 [Balancer] major version query from 38|1||5127638ca620d63c9f282dd1 and over 2 shards is { ns: "test.foo", $or: [ { lastmod: { $gte: Timestamp 38000|1 } }, { shard: "shard0000", lastmod: { $gt: Timestamp 38000|1 } }, { shard: "shard0001", lastmod: { $gt: Timestamp 38000|0 } } ] }
m30999| Fri Feb 22 12:25:28.390 [Balancer] loaded 1 chunks into new chunk manager for test.foo with version 39|0||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:28.390 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 40 version: 39|0||5127638ca620d63c9f282dd1 based on: 38|1||5127638ca620d63c9f282dd1
m30999| Fri Feb 22 12:25:28.390 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:28.391 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30000| Fri Feb 22 12:25:28.409 [cleanupOldData-512763b8865ed09de9e1db3d] waiting to remove documents for test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 }
m30000| Fri Feb 22 12:25:28.409 [cleanupOldData-512763b8865ed09de9e1db3d] moveChunk starting delete for: test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 }
m30000| Fri Feb 22 12:25:28.412 [cleanupOldData-512763b8865ed09de9e1db3d] moveChunk deleted 52 documents for test.foo from { _id: 0.4538818108849227 } -> { _id: 0.4776490097865462 }
m30999| Fri Feb 22 12:25:29.392 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:25:29.392 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838 )
m30999| Fri Feb 22 12:25:29.392 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:25:29 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512763b9a620d63c9f282df8" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512763b8a620d63c9f282df7" } }
m30999| Fri Feb 22 12:25:29.393 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' acquired, ts : 512763b9a620d63c9f282df8
m30999| Fri Feb 22 12:25:29.393 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:25:29.393 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:25:29.393 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:25:29.395 [Balancer] shard0000 is unavailable
m30999| Fri Feb 22 12:25:29.395 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:25:29.395 [Balancer] donor : shard0001 chunks on 41
m30999| Fri Feb 22 12:25:29.395 [Balancer] receiver : shard0001 chunks on 41
m30999| Fri Feb 22 12:25:29.395 [Balancer] threshold : 2
m30999| Fri Feb 22 12:25:29.395 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:25:29.395 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:25:29.395 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535884:16838' unlocked.
m30999| Fri Feb 22 12:25:29.573 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:29.573 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:29.573 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.573 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.573 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:29.573 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.chunks", n2skip: 0, n2return: 0, options: 0, query: { ns: "test.foo" }, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.574 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "test.foo-_id_MinKey", lastmod: Timestamp 21000|0, lastmodEpoch: ObjectId('5127638ca620d63c9f282dd1'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 0.0241540502756834 }, shard: "shard0001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "shard0001" : 41, "shard0000" : 0 }
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.shards", n2skip: 0, n2return: 0, options: 0, query: {}, fields: {} } and CInfo { v_ns: "", filter: {} }
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] initialized query (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.576 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { _id: "shard0001", host: "localhost:30001" }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
{ "_id" : "shard0001", "host" : "localhost:30001" }
{ "_id" : "shard0000", "draining" : true, "host" : "localhost:30000" }
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] creating pcursor over QSpec { ns: "config.$cmd", n2skip: 0, n2return: 1, options: 0, query: { count: "chunks", query: { shard: "shard0001" } }, fields: {} } and CInfo { v_ns: "config.chunks", filter: { shard: "shard0001" } }
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] initializing over 1 shards required by [unsharded @ config:localhost:30000]
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] initializing on shard config:localhost:30000, current connection state is { state: {}, retryNext: false, init: false, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] initialized command (lazily) on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] finishing over 1 shards
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] finishing on shard config:localhost:30000, current connection state is { state: { conn: "localhost:30000", vinfo: "config:localhost:30000", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: false, errored: false }
m30999| Fri Feb 22 12:25:29.577 [conn1] [pcursor] finished on shard config:localhost:30000, current connection state is { state: { conn: "(done)", vinfo: "config:localhost:30000", cursor: { n: 41.0, ok: 1.0 }, count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false }
m30999| Fri Feb 22 12:25:29.577 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30001| Fri Feb 22 12:25:29.595 [conn4] end connection 127.0.0.1:36585 (5 connections now open)
m30001| Fri Feb 22 12:25:29.595 [conn3] end connection 127.0.0.1:48791 (5 connections now open)
m30000| Fri Feb 22 12:25:29.595 [conn3] end connection 127.0.0.1:55799 (13 connections now open)
m30000| Fri Feb 22 12:25:29.595 [conn6] end connection 127.0.0.1:46871 (13 connections now open)
m30000| Fri Feb 22 12:25:29.595 [conn5] end connection 127.0.0.1:37831 (13 connections now open)
m30000| Fri Feb 22 12:25:29.595 [conn7] end connection 127.0.0.1:36530 (13 connections now open)
Fri Feb 22 12:25:30.577 shell: stopped mongo program on port 30999
m30000| Fri Feb 22 12:25:30.578 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Fri Feb 22 12:25:30.578 [interruptThread] now exiting
m30000| Fri Feb 22 12:25:30.578 dbexit:
m30000| Fri Feb 22 12:25:30.578 [interruptThread] shutdown: going to close listening sockets...
m30000| Fri Feb 22 12:25:30.578 [interruptThread] closing listening socket: 12
m30000| Fri Feb 22 12:25:30.578 [interruptThread] closing listening socket: 13
m30000| Fri Feb 22 12:25:30.578 [interruptThread] closing listening socket: 14
m30000| Fri Feb 22 12:25:30.578 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Fri Feb 22 12:25:30.578 [interruptThread] shutdown: going to flush diaglog...
m30000| Fri Feb 22 12:25:30.578 [interruptThread] shutdown: going to close sockets...
m30000| Fri Feb 22 12:25:30.578 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Fri Feb 22 12:25:30.578 [interruptThread] shutdown: lock for final commit...
m30000| Fri Feb 22 12:25:30.578 [interruptThread] shutdown: final commit...
m30000| Fri Feb 22 12:25:30.578 [conn1] end connection 127.0.0.1:62231 (9 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn2] end connection 127.0.0.1:44355 (9 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn12] end connection 127.0.0.1:64844 (9 connections now open)
m30001| Fri Feb 22 12:25:30.579 [conn5] end connection 127.0.0.1:35004 (3 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn11] end connection 127.0.0.1:49696 (9 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn8] end connection 127.0.0.1:50229 (9 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn10] end connection 127.0.0.1:38768 (9 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn9] end connection 127.0.0.1:53631 (9 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn13] end connection 127.0.0.1:41865 (9 connections now open)
m30001| Fri Feb 22 12:25:30.579 [conn6] end connection 127.0.0.1:32980 (2 connections now open)
m30000| Fri Feb 22 12:25:30.579 [conn14] end connection 127.0.0.1:58369 (8 connections now open)
m30000| Fri Feb 22 12:25:30.617 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 12:25:30.622 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 12:25:30.622 [interruptThread] journalCleanup...
m30000| Fri Feb 22 12:25:30.622 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 12:25:30.622 dbexit: really exiting now
Fri Feb 22 12:25:31.578 shell: stopped mongo program on port 30000
m30001| Fri Feb 22 12:25:31.578 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 12:25:31.578 [interruptThread] now exiting
m30001| Fri Feb 22 12:25:31.578 dbexit:
m30001| Fri Feb 22 12:25:31.578 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 12:25:31.578 [interruptThread] closing listening socket: 15
m30001| Fri Feb 22 12:25:31.578 [interruptThread] closing listening socket: 16
m30001| Fri Feb 22 12:25:31.578 [interruptThread] closing listening socket: 17
m30001| Fri Feb 22 12:25:31.578 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 12:25:31.578 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 12:25:31.578 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 12:25:31.578 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 12:25:31.578 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 12:25:31.578 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 12:25:31.579 [conn1] end connection 127.0.0.1:51304 (1 connection now open)
m30001| Fri Feb 22 12:25:31.601 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 12:25:31.607 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 12:25:31.608 [interruptThread] journalCleanup...
m30001| Fri Feb 22 12:25:31.608 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:25:31.608 dbexit: really exiting now
Fri Feb 22 12:25:32.578 shell: stopped mongo program on port 30001
*** ShardingTest sharding_balance_randomorder1 completed successfully in 49.019 seconds ***
Fri Feb 22 12:25:32.605 [conn14] end connection 127.0.0.1:34237 (0 connections now open)
49.2333 seconds
Fri Feb 22 12:25:32.625 [initandlisten] connection accepted from 127.0.0.1:41637 #15 (1 connection now open)
Fri Feb 22 12:25:32.625 [conn15] end connection 127.0.0.1:41637 (0 connections now open)
*******************************************
Test : sharding_migrateBigObject.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_migrateBigObject.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_migrateBigObject.js";TestData.testFile = "sharding_migrateBigObject.js";TestData.testName = "sharding_migrateBigObject";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:25:32 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:25:32.808 [initandlisten] connection accepted from 127.0.0.1:53207 #16 (1 connection now open)
null
Fri Feb 22 12:25:32.822 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --shardsvr --port 30001 --dbpath /data/db/migrateBigger0 --nopreallocj --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:25:32.909 [initandlisten] MongoDB starting : pid=12576 port=30001 dbpath=/data/db/migrateBigger0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:25:32.909 [initandlisten]
m30001| Fri Feb 22 12:25:32.909 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:25:32.909 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 12:25:32.909 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:25:32.909 [initandlisten]
m30001| Fri Feb 22 12:25:32.909 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:25:32.909 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:25:32.909 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:25:32.909 [initandlisten] allocator: system
m30001| Fri Feb 22 12:25:32.909 [initandlisten] options: { dbpath: "/data/db/migrateBigger0", nopreallocj: true, port: 30001, setParameter: [ "enableTestCommands=1" ], shardsvr: true }
m30001| Fri Feb 22 12:25:32.910 [initandlisten] journal dir=/data/db/migrateBigger0/journal
m30001| Fri Feb 22 12:25:32.910 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:25:32.911 [FileAllocator] allocating new datafile /data/db/migrateBigger0/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:25:32.911 [FileAllocator] creating directory /data/db/migrateBigger0/_tmp
m30001| Fri Feb 22 12:25:32.911 [FileAllocator] done allocating datafile /data/db/migrateBigger0/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:25:32.912 [FileAllocator] allocating new datafile /data/db/migrateBigger0/local.0, filling with zeroes...
m30001| Fri Feb 22 12:25:32.912 [FileAllocator] done allocating datafile /data/db/migrateBigger0/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:25:32.915 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:25:32.915 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:25:33.025 [initandlisten] connection accepted from 127.0.0.1:44401 #1 (1 connection now open)
Fri Feb 22 12:25:33.029 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --shardsvr --port 30002 --dbpath /data/db/migrateBigger1 --nopreallocj --setParameter enableTestCommands=1
m30002| Fri Feb 22 12:25:33.119 [initandlisten] MongoDB starting : pid=12577 port=30002 dbpath=/data/db/migrateBigger1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30002| Fri Feb 22 12:25:33.120 [initandlisten]
m30002| Fri Feb 22 12:25:33.120 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30002| Fri Feb 22 12:25:33.120 [initandlisten] ** uses to detect impending page faults.
m30002| Fri Feb 22 12:25:33.120 [initandlisten] ** This may result in slower performance for certain use cases
m30002| Fri Feb 22 12:25:33.120 [initandlisten]
m30002| Fri Feb 22 12:25:33.120 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30002| Fri Feb 22 12:25:33.120 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30002| Fri Feb 22 12:25:33.120 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30002| Fri Feb 22 12:25:33.120 [initandlisten] allocator: system
m30002| Fri Feb 22 12:25:33.120 [initandlisten] options: { dbpath: "/data/db/migrateBigger1", nopreallocj: true, port: 30002, setParameter: [ "enableTestCommands=1" ], shardsvr: true }
m30002| Fri Feb 22 12:25:33.120 [initandlisten] journal dir=/data/db/migrateBigger1/journal
m30002| Fri Feb 22 12:25:33.120 [initandlisten] recover : no journal files present, no recovery needed
m30002| Fri Feb 22 12:25:33.122 [FileAllocator] allocating new datafile /data/db/migrateBigger1/local.ns, filling with zeroes...
m30002| Fri Feb 22 12:25:33.122 [FileAllocator] creating directory /data/db/migrateBigger1/_tmp
m30002| Fri Feb 22 12:25:33.122 [FileAllocator] done allocating datafile /data/db/migrateBigger1/local.ns, size: 16MB, took 0 secs
m30002| Fri Feb 22 12:25:33.122 [FileAllocator] allocating new datafile /data/db/migrateBigger1/local.0, filling with zeroes...
m30002| Fri Feb 22 12:25:33.122 [FileAllocator] done allocating datafile /data/db/migrateBigger1/local.0, size: 64MB, took 0 secs
m30002| Fri Feb 22 12:25:33.125 [websvr] admin web console waiting for connections on port 31002
m30002| Fri Feb 22 12:25:33.125 [initandlisten] waiting for connections on port 30002
m30002| Fri Feb 22 12:25:33.231 [initandlisten] connection accepted from 127.0.0.1:44767 #1 (1 connection now open)
Fri Feb 22 12:25:33.233 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --configsvr --port 29999 --dbpath /data/db/migrateBiggerC --nopreallocj --setParameter enableTestCommands=1
m29999| Fri Feb 22 12:25:33.320 [initandlisten] MongoDB starting : pid=12578 port=29999 dbpath=/data/db/migrateBiggerC master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m29999| Fri Feb 22 12:25:33.320 [initandlisten]
m29999| Fri Feb 22 12:25:33.320 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m29999| Fri Feb 22 12:25:33.321 [initandlisten] ** uses to detect impending page faults.
m29999| Fri Feb 22 12:25:33.321 [initandlisten] ** This may result in slower performance for certain use cases
m29999| Fri Feb 22 12:25:33.321 [initandlisten]
m29999| Fri Feb 22 12:25:33.321 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m29999| Fri Feb 22 12:25:33.321 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m29999| Fri Feb 22 12:25:33.321 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m29999| Fri Feb 22 12:25:33.321 [initandlisten] allocator: system
m29999| Fri Feb 22 12:25:33.321 [initandlisten] options: { configsvr: true, dbpath: "/data/db/migrateBiggerC", nopreallocj: true, port: 29999, setParameter: [ "enableTestCommands=1" ] }
m29999| Fri Feb 22 12:25:33.321 [initandlisten] journal dir=/data/db/migrateBiggerC/journal
m29999| Fri Feb 22 12:25:33.321 [initandlisten] recover : no journal files present, no recovery needed
m29999| Fri Feb 22 12:25:33.322 [FileAllocator] allocating new datafile /data/db/migrateBiggerC/local.ns, filling with zeroes...
m29999| Fri Feb 22 12:25:33.323 [FileAllocator] creating directory /data/db/migrateBiggerC/_tmp
m29999| Fri Feb 22 12:25:33.323 [FileAllocator] done allocating datafile /data/db/migrateBiggerC/local.ns, size: 16MB, took 0 secs
m29999| Fri Feb 22 12:25:33.323 [FileAllocator] allocating new datafile /data/db/migrateBiggerC/local.0, filling with zeroes...
m29999| Fri Feb 22 12:25:33.323 [FileAllocator] done allocating datafile /data/db/migrateBiggerC/local.0, size: 16MB, took 0 secs
m29999| Fri Feb 22 12:25:33.326 [initandlisten] ******
m29999| Fri Feb 22 12:25:33.326 [initandlisten] creating replication oplog of size: 5MB...
m29999| Fri Feb 22 12:25:33.330 [initandlisten] ******
m29999| Fri Feb 22 12:25:33.330 [initandlisten] waiting for connections on port 29999
m29999| Fri Feb 22 12:25:33.330 [websvr] admin web console waiting for connections on port 30999
m29999| Fri Feb 22 12:25:33.434 [initandlisten] connection accepted from 127.0.0.1:61237 #1 (1 connection now open)
Fri Feb 22 12:25:33.441 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30000 --configdb localhost:29999 --setParameter enableTestCommands=1
m30000| Fri Feb 22 12:25:33.457 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30000| Fri Feb 22 12:25:33.458 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=12579 port=30000 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30000| Fri Feb 22 12:25:33.458 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 12:25:33.458 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 12:25:33.458 [mongosMain] options: { configdb: "localhost:29999", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m29999| Fri Feb 22 12:25:33.459 [initandlisten] connection accepted from 127.0.0.1:33782 #2 (2 connections now open)
m29999| Fri Feb 22 12:25:33.460 [initandlisten] connection accepted from 127.0.0.1:33448 #3 (3 connections now open)
m29999| Fri Feb 22 12:25:33.461 [conn3] CMD fsync: sync:1 lock:0
m30000| Fri Feb 22 12:25:33.466 [LockPinger] creating distributed lock ping thread for localhost:29999 and process bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838 (sleeping for 30000ms)
m29999| Fri Feb 22 12:25:33.466 [FileAllocator] allocating new datafile /data/db/migrateBiggerC/config.ns, filling with zeroes...
m29999| Fri Feb 22 12:25:33.466 [FileAllocator] done allocating datafile /data/db/migrateBiggerC/config.ns, size: 16MB, took 0 secs
m29999| Fri Feb 22 12:25:33.467 [FileAllocator] allocating new datafile /data/db/migrateBiggerC/config.0, filling with zeroes...
m29999| Fri Feb 22 12:25:33.467 [FileAllocator] done allocating datafile /data/db/migrateBiggerC/config.0, size: 16MB, took 0 secs
m29999| Fri Feb 22 12:25:33.467 [FileAllocator] allocating new datafile /data/db/migrateBiggerC/config.1, filling with zeroes...
m29999| Fri Feb 22 12:25:33.467 [FileAllocator] done allocating datafile /data/db/migrateBiggerC/config.1, size: 32MB, took 0 secs
m29999| Fri Feb 22 12:25:33.469 [conn3] build index config.locks { _id: 1 }
m29999| Fri Feb 22 12:25:33.470 [conn3] build index done. scanned 0 total records. 0 secs
m29999| Fri Feb 22 12:25:33.472 [conn2] build index config.lockpings { _id: 1 }
m29999| Fri Feb 22 12:25:33.473 [conn2] build index done. scanned 0 total records. 0.001 secs
m29999| Fri Feb 22 12:25:33.474 [conn2] build index config.lockpings { ping: new Date(1) }
m29999| Fri Feb 22 12:25:33.475 [conn2] build index done. scanned 1 total records. 0 secs
m30000| Fri Feb 22 12:25:33.475 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763bd4919e191ad21a7af
m30000| Fri Feb 22 12:25:33.477 [mongosMain] starting upgrade of config server from v0 to v4
m30000| Fri Feb 22 12:25:33.477 [mongosMain] starting next upgrade step from v0 to v4
m30000| Fri Feb 22 12:25:33.477 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:33-512763bd4919e191ad21a7b0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535933477), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29999| Fri Feb 22 12:25:33.478 [conn3] build index config.changelog { _id: 1 }
m29999| Fri Feb 22 12:25:33.478 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:25:33.478 [mongosMain] writing initial config version at v4
m29999| Fri Feb 22 12:25:33.479 [conn3] build index config.version { _id: 1 }
m29999| Fri Feb 22 12:25:33.479 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:25:33.480 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:33-512763bd4919e191ad21a7b2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535933480), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:25:33.480 [mongosMain] upgrade of config server to v4 successful
m30000| Fri Feb 22 12:25:33.480 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked.
m29999| Fri Feb 22 12:25:33.481 [conn2] build index config.settings { _id: 1 }
m30000| Fri Feb 22 12:25:33.482 [Balancer] about to contact config servers and shards
m30000| Fri Feb 22 12:25:33.483 [websvr] admin web console waiting for connections on port 31000
m29999| Fri Feb 22 12:25:33.483 [conn2] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:25:33.483 [mongosMain] waiting for connections on port 30000
m29999| Fri Feb 22 12:25:33.483 [conn2] build index config.chunks { _id: 1 }
m29999| Fri Feb 22 12:25:33.485 [conn2] build index done. scanned 0 total records. 0.001 secs
m29999| Fri Feb 22 12:25:33.485 [conn2] info: creating collection config.chunks on add index
m29999| Fri Feb 22 12:25:33.485 [conn2] build index config.chunks { ns: 1, min: 1 }
m29999| Fri Feb 22 12:25:33.486 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Fri Feb 22 12:25:33.486 [conn2] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29999| Fri Feb 22 12:25:33.486 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Fri Feb 22 12:25:33.487 [conn2] build index config.chunks { ns: 1, lastmod: 1 }
m29999| Fri Feb 22 12:25:33.487 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Fri Feb 22 12:25:33.487 [conn2] build index config.shards { _id: 1 }
m29999| Fri Feb 22 12:25:33.488 [conn2] build index done. scanned 0 total records. 0 secs
m29999| Fri Feb 22 12:25:33.488 [conn2] info: creating collection config.shards on add index
m29999| Fri Feb 22 12:25:33.488 [conn2] build index config.shards { host: 1 }
m29999| Fri Feb 22 12:25:33.490 [conn2] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:25:33.490 [Balancer] config servers and shards contacted successfully
m30000| Fri Feb 22 12:25:33.490 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30000 started at Feb 22 12:25:33
m29999| Fri Feb 22 12:25:33.490 [initandlisten] connection accepted from 127.0.0.1:64790 #4 (4 connections now open)
m29999| Fri Feb 22 12:25:33.490 [conn2] build index config.mongos { _id: 1 }
m29999| Fri Feb 22 12:25:33.491 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:25:33.493 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763bd4919e191ad21a7b4
m30000| Fri Feb 22 12:25:33.493 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked.
m30000| Fri Feb 22 12:25:33.642 [mongosMain] connection accepted from 127.0.0.1:59671 #1 (1 connection now open)
m30000| Fri Feb 22 12:25:33.645 [conn1] couldn't find database [admin] in config db
m29999| Fri Feb 22 12:25:33.645 [conn2] build index config.databases { _id: 1 }
m29999| Fri Feb 22 12:25:33.646 [conn2] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:25:33.646 [conn1] put [admin] on: config:localhost:29999
m30001| Fri Feb 22 12:25:33.647 [initandlisten] connection accepted from 127.0.0.1:61935 #2 (2 connections now open)
m30000| Fri Feb 22 12:25:33.648 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30001" }
m30002| Fri Feb 22 12:25:33.650 [initandlisten] connection accepted from 127.0.0.1:44675 #2 (2 connections now open)
m30000| Fri Feb 22 12:25:33.651 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30002" }
m30000| Fri Feb 22 12:25:33.894 [conn1] couldn't find database [test] in config db
m30000| Fri Feb 22 12:25:33.895 [conn1] put [test] on: shard0000:localhost:30001
m30000| Fri Feb 22 12:25:33.896 [conn1] creating WriteBackListener for: localhost:30001 serverID: 512763bd4919e191ad21a7b3
m30001| Fri Feb 22 12:25:33.896 [initandlisten] connection accepted from 127.0.0.1:48136 #3 (3 connections now open)
m30000| Fri Feb 22 12:25:33.897 [conn1] creating WriteBackListener for: localhost:30002 serverID: 512763bd4919e191ad21a7b3
m30002| Fri Feb 22 12:25:33.897 [initandlisten] connection accepted from 127.0.0.1:54518 #3 (3 connections now open)
m30001| Fri Feb 22 12:25:34.014 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:25:34.015 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:25:34.015 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.0, filling with zeroes...
m30001| Fri Feb 22 12:25:34.015 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.0, size: 256MB, took 0 secs
m30001| Fri Feb 22 12:25:34.015 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.1, filling with zeroes...
m30001| Fri Feb 22 12:25:34.015 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.1, size: 256MB, took 0 secs m30001| Fri Feb 22 12:25:34.019 [conn3] build index test.stuff { _id: 1 } m30001| Fri Feb 22 12:25:34.021 [conn3] build index done. scanned 0 total records. 0.001 secs { "sharded" : false, "primary" : "shard0000", "ns" : "test.stuff", "count" : 10, "size" : 167772000, "avgObjSize" : 16777200, "storageSize" : 267448320, "numExtents" : 1, "nindexes" : 1, "lastExtentSize" : 267448320, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 0, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1 } m30001| Fri Feb 22 12:25:35.980 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.2, filling with zeroes... m30001| Fri Feb 22 12:25:35.981 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.2, size: 256MB, took 0 secs m30001| Fri Feb 22 12:25:35.981 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.3, filling with zeroes... m30001| Fri Feb 22 12:25:35.981 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.3, size: 512MB, took 0 secs m30001| Fri Feb 22 12:25:35.981 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.4, filling with zeroes... 
m30001| Fri Feb 22 12:25:35.981 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.4, size: 1024MB, took 0 secs { "sharded" : false, "primary" : "shard0000", "ns" : "test.stuff", "count" : 20, "size" : 335544000, "avgObjSize" : 16777200, "storageSize" : 628506624, "numExtents" : 2, "nindexes" : 1, "lastExtentSize" : 361058304, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 0, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1 } { "sharded" : false, "primary" : "shard0000", "ns" : "test.stuff", "count" : 30, "size" : 503316000, "avgObjSize" : 16777200, "storageSize" : 628506624, "numExtents" : 2, "nindexes" : 1, "lastExtentSize" : 361058304, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 0, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1 } m30001| Fri Feb 22 12:25:38.739 [FileAllocator] allocating new datafile /data/db/migrateBigger0/test.5, filling with zeroes... m30001| Fri Feb 22 12:25:38.740 [FileAllocator] done allocating datafile /data/db/migrateBigger0/test.5, size: 2047MB, took 0 secs { "sharded" : false, "primary" : "shard0000", "ns" : "test.stuff", "count" : 40, "size" : 671088000, "avgObjSize" : 16777200, "storageSize" : 1115938816, "numExtents" : 3, "nindexes" : 1, "lastExtentSize" : 487432192, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 0, "totalIndexSize" : 8176, "indexSizes" : { "_id_" : 8176 }, "ok" : 1 } m30000| Fri Feb 22 12:25:39.184 [conn1] enabling sharding on: test m29999| Fri Feb 22 12:25:39.186 [initandlisten] connection accepted from 127.0.0.1:60081 #5 (5 connections now open) m30000| Fri Feb 22 12:25:39.186 [conn1] creating WriteBackListener for: localhost:29999 serverID: 512763bd4919e191ad21a7b3 --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512763bd4919e191ad21a7b1") } shards: { "_id" : "shard0000", "host" : "localhost:30001" } { "_id" : "shard0001", "host" : 
"localhost:30002" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "test", "partitioned" : true, "primary" : "shard0000" } m30001| Fri Feb 22 12:25:39.190 [initandlisten] connection accepted from 127.0.0.1:61799 #4 (4 connections now open) m30000| Fri Feb 22 12:25:39.191 [conn1] CMD: shardcollection: { shardcollection: "test.stuff", key: { _id: 1.0 } } m30000| Fri Feb 22 12:25:39.192 [conn1] enable sharding on: test.stuff with shard key: { _id: 1.0 } m30001| Fri Feb 22 12:25:39.192 [conn4] request split points lookup for chunk test.stuff { : MinKey } -->> { : MaxKey } m30000| Fri Feb 22 12:25:39.192 [conn1] going to create 14 chunk(s) for: test.stuff using new epoch 512763c34919e191ad21a7b5 m30000| Fri Feb 22 12:25:39.194 [conn1] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 2 version: 1|13||512763c34919e191ad21a7b5 based on: (empty) m29999| Fri Feb 22 12:25:39.195 [conn2] build index config.collections { _id: 1 } m29999| Fri Feb 22 12:25:39.198 [conn2] build index done. scanned 0 total records. 0.002 secs m30001| Fri Feb 22 12:25:39.199 [conn3] no current chunk manager found for this shard, will initialize m29999| Fri Feb 22 12:25:39.199 [initandlisten] connection accepted from 127.0.0.1:33738 #6 (6 connections now open) [ { "shard" : "shard0000", "nChunks" : 14 } ] m30002| Fri Feb 22 12:25:39.494 [initandlisten] connection accepted from 127.0.0.1:36574 #4 (4 connections now open) m30000| Fri Feb 22 12:25:39.495 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763c34919e191ad21a7b6 m29999| Fri Feb 22 12:25:39.496 [conn2] build index config.tags { _id: 1 } m29999| Fri Feb 22 12:25:39.498 [conn2] build index done. scanned 0 total records. 
0.002 secs m29999| Fri Feb 22 12:25:39.498 [conn2] info: creating collection config.tags on add index m29999| Fri Feb 22 12:25:39.498 [conn2] build index config.tags { ns: 1, min: 1 } m29999| Fri Feb 22 12:25:39.500 [conn2] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:25:39.500 [Balancer] ns: test.stuff going to move { _id: "test.stuff-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('512763c34919e191ad21a7b5'), ns: "test.stuff", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30000| Fri Feb 22 12:25:39.500 [Balancer] moving chunk ns: test.stuff moving ( ns:test.stuffshard: shard0000:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }) shard0000:localhost:30001 -> shard0001:localhost:30002 m30001| Fri Feb 22 12:25:39.500 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:39.500 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_MinKey", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } m29999| Fri Feb 22 12:25:39.501 [initandlisten] connection accepted from 127.0.0.1:52384 #7 (7 connections now open) m30001| Fri Feb 22 12:25:39.501 [LockPinger] creating distributed lock ping thread for localhost:29999 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652 (sleeping for 30000ms) m30001| Fri Feb 22 12:25:39.503 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' acquired, ts : 512763c347b18995c720dcad m30001| Fri Feb 22 12:25:39.503 [conn4] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:39-512763c347b18995c720dcae", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535939503), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, from: "shard0000", to: "shard0001" } } m30001| Fri Feb 22 12:25:39.503 [conn4] moveChunk request accepted at version 1|13||512763c34919e191ad21a7b5 m30001| Fri Feb 22 12:25:39.503 [conn4] moveChunk number of documents: 2 m30002| Fri Feb 22 12:25:39.503 [initandlisten] connection accepted from 127.0.0.1:42915 #5 (5 connections now open) m30002| Fri Feb 22 12:25:39.504 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } for collection test.stuff from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:39.504 [initandlisten] connection accepted from 127.0.0.1:59222 #5 (5 connections now open) m30002| Fri Feb 22 12:25:39.505 [FileAllocator] allocating new datafile /data/db/migrateBigger1/test.ns, filling with zeroes... m30002| Fri Feb 22 12:25:39.505 [FileAllocator] done allocating datafile /data/db/migrateBigger1/test.ns, size: 16MB, took 0 secs m30002| Fri Feb 22 12:25:39.505 [FileAllocator] allocating new datafile /data/db/migrateBigger1/test.0, filling with zeroes... m30002| Fri Feb 22 12:25:39.505 [FileAllocator] done allocating datafile /data/db/migrateBigger1/test.0, size: 64MB, took 0 secs m30002| Fri Feb 22 12:25:39.505 [FileAllocator] allocating new datafile /data/db/migrateBigger1/test.1, filling with zeroes... m30002| Fri Feb 22 12:25:39.506 [FileAllocator] done allocating datafile /data/db/migrateBigger1/test.1, size: 128MB, took 0 secs m30002| Fri Feb 22 12:25:39.508 [migrateThread] build index test.stuff { _id: 1 } m30002| Fri Feb 22 12:25:39.509 [migrateThread] build index done. scanned 0 total records. 
0.001 secs m30002| Fri Feb 22 12:25:39.509 [migrateThread] info: creating collection test.stuff on add index m30001| Fri Feb 22 12:25:39.514 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:39.524 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:39.534 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:39.545 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:39.561 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:39.593 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 
ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 12:25:39.625 [FileAllocator] allocating new datafile /data/db/migrateBigger1/test.2, filling with zeroes... m30002| Fri Feb 22 12:25:39.626 [FileAllocator] done allocating datafile /data/db/migrateBigger1/test.2, size: 256MB, took 0 secs m30002| Fri Feb 22 12:25:39.626 [FileAllocator] allocating new datafile /data/db/migrateBigger1/test.3, filling with zeroes... m30002| Fri Feb 22 12:25:39.626 [FileAllocator] done allocating datafile /data/db/migrateBigger1/test.3, size: 512MB, took 0 secs m30001| Fri Feb 22 12:25:39.657 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:39.785 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 12:25:39.793 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 12:25:39.793 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } m30002| Fri Feb 22 12:25:39.827 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } m30001| Fri Feb 22 12:25:40.042 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { 
_id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 2, clonedBytes: 33430856, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:40.042 [conn4] moveChunk setting version to: 2|0||512763c34919e191ad21a7b5 m30002| Fri Feb 22 12:25:40.042 [initandlisten] connection accepted from 127.0.0.1:57332 #6 (6 connections now open) m30002| Fri Feb 22 12:25:40.042 [conn6] Waiting for commit to finish m30002| Fri Feb 22 12:25:40.050 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } m30002| Fri Feb 22 12:25:40.050 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } m30002| Fri Feb 22 12:25:40.050 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:40-512763c40c8c7a3f858e90f0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535940050), what: "moveChunk.to", ns: "test.stuff", details: { min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 283, step4 of 5: 0, step5 of 5: 257 } } m29999| Fri Feb 22 12:25:40.051 [initandlisten] connection accepted from 127.0.0.1:34465 #8 (8 connections now open) m30001| Fri Feb 22 12:25:40.052 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 2, clonedBytes: 33430856, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:40.052 [conn4] moveChunk updating self version to: 2|1||512763c34919e191ad21a7b5 through { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } for collection 'test.stuff' m29999| Fri Feb 22 
12:25:40.053 [initandlisten] connection accepted from 127.0.0.1:49862 #9 (9 connections now open) m30001| Fri Feb 22 12:25:40.053 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:40-512763c447b18995c720dcaf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535940053), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, from: "shard0000", to: "shard0001" } } m30001| Fri Feb 22 12:25:40.053 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:40.053 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:40.053 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:40.053 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:40.054 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:40.054 [cleanupOldData-512763c447b18995c720dcb0] (start) waiting to cleanup test.stuff from { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, # cursors remaining: 0 m30001| Fri Feb 22 12:25:40.054 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' unlocked. 
m30001| Fri Feb 22 12:25:40.054 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:40-512763c447b18995c720dcb1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535940054), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 12:25:40.054 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: MinKey }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_MinKey", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:50 w:18 reslen:37 553ms m30000| Fri Feb 22 12:25:40.055 [Balancer] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 3 version: 2|1||512763c34919e191ad21a7b5 based on: 1|13||512763c34919e191ad21a7b5 m30000| Fri Feb 22 12:25:40.055 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked. 
m30001| Fri Feb 22 12:25:40.074 [cleanupOldData-512763c447b18995c720dcb0] waiting to remove documents for test.stuff from { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } m30001| Fri Feb 22 12:25:40.074 [cleanupOldData-512763c447b18995c720dcb0] moveChunk starting delete for: test.stuff from { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } m30001| Fri Feb 22 12:25:40.074 [cleanupOldData-512763c447b18995c720dcb0] moveChunk deleted 2 documents for test.stuff from { _id: MinKey } -> { _id: ObjectId('512763bebbafd6ee3fe7ec87') } [ { "shard" : "shard0001", "nChunks" : 1 }, { "shard" : "shard0000", "nChunks" : 13 } ] m30000| Fri Feb 22 12:25:41.057 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763c54919e191ad21a7b7 m30000| Fri Feb 22 12:25:41.058 [Balancer] ns: test.stuff going to move { _id: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec87')", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('512763c34919e191ad21a7b5'), ns: "test.stuff", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30000| Fri Feb 22 12:25:41.058 [Balancer] moving chunk ns: test.stuff moving ( ns:test.stuffshard: shard0000:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }) shard0000:localhost:30001 -> shard0001:localhost:30002 m30001| Fri Feb 22 12:25:41.058 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:41.058 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, maxChunkSizeBytes: 67108864, shardId: 
"test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec87')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:41.059 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' acquired, ts : 512763c547b18995c720dcb2 m30001| Fri Feb 22 12:25:41.059 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:41-512763c547b18995c720dcb3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535941059), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, from: "shard0000", to: "shard0001" } } m30001| Fri Feb 22 12:25:41.060 [conn4] moveChunk request accepted at version 2|1||512763c34919e191ad21a7b5 m30001| Fri Feb 22 12:25:41.060 [conn4] moveChunk number of documents: 3 m30002| Fri Feb 22 12:25:41.060 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } for collection test.stuff from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:41.070 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.081 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.091 [conn4] moveChunk data transfer progress: { 
active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.101 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.118 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.150 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.214 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 [ { "shard" : "shard0001", "nChunks" : 1 }, { "shard" : "shard0000", "nChunks" : 13 } ] m30001| Fri Feb 22 12:25:41.342 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: 
"localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 33430856, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 12:25:41.391 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 12:25:41.391 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } m30002| Fri Feb 22 12:25:41.427 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } m30001| Fri Feb 22 12:25:41.598 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:41.599 [conn4] moveChunk setting version to: 3|0||512763c34919e191ad21a7b5 m30002| Fri Feb 22 12:25:41.599 [conn6] Waiting for commit to finish m30002| Fri Feb 22 12:25:41.600 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } m30002| Fri Feb 22 12:25:41.600 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } m30002| Fri Feb 22 12:25:41.600 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:41-512763c50c8c7a3f858e90f1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535941600), what: 
"moveChunk.to", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 330, step4 of 5: 0, step5 of 5: 208 } } m30001| Fri Feb 22 12:25:41.609 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:41.609 [conn4] moveChunk updating self version to: 3|1||512763c34919e191ad21a7b5 through { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } for collection 'test.stuff' m30001| Fri Feb 22 12:25:41.609 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:41-512763c547b18995c720dcb4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535941609), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, from: "shard0000", to: "shard0001" } } m30001| Fri Feb 22 12:25:41.609 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:41.609 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:41.610 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:25:41.610 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:41.610 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:41.610 [cleanupOldData-512763c547b18995c720dcb5] (start) waiting to cleanup test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') 
}, # cursors remaining: 0 m30001| Fri Feb 22 12:25:41.610 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' unlocked. m30001| Fri Feb 22 12:25:41.610 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:41-512763c547b18995c720dcb6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535941610), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 538, step5 of 6: 10, step6 of 6: 0 } } m30001| Fri Feb 22 12:25:41.610 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec87') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec87')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:47 w:16 reslen:37 552ms m30000| Fri Feb 22 12:25:41.611 [Balancer] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 4 version: 3|1||512763c34919e191ad21a7b5 based on: 2|1||512763c34919e191ad21a7b5 m30000| Fri Feb 22 12:25:41.612 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked. 
m30001| Fri Feb 22 12:25:41.630 [cleanupOldData-512763c547b18995c720dcb5] waiting to remove documents for test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }
m30001| Fri Feb 22 12:25:41.630 [cleanupOldData-512763c547b18995c720dcb5] moveChunk starting delete for: test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }
m30001| Fri Feb 22 12:25:41.630 [cleanupOldData-512763c547b18995c720dcb5] moveChunk deleted 3 documents for test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec87') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }
[ { "shard" : "shard0001", "nChunks" : 2 }, { "shard" : "shard0000", "nChunks" : 12 } ]
m30000| Fri Feb 22 12:25:42.614 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763c64919e191ad21a7b8
m30000| Fri Feb 22 12:25:42.615 [Balancer] ns: test.stuff going to move { _id: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec8a')", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('512763c34919e191ad21a7b5'), ns: "test.stuff", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30000| Fri Feb 22 12:25:42.615 [Balancer] moving chunk ns: test.stuff moving ( ns:test.stuffshard: shard0000:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }) shard0000:localhost:30001 -> shard0001:localhost:30002
m30001| Fri Feb 22 12:25:42.615 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:25:42.615 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec8a')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:25:42.616 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' acquired, ts : 512763c647b18995c720dcb7
m30001| Fri Feb 22 12:25:42.616 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:42-512763c647b18995c720dcb8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535942616), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, from: "shard0000", to: "shard0001" } }
m30001| Fri Feb 22 12:25:42.617 [conn4] moveChunk request accepted at version 3|1||512763c34919e191ad21a7b5
m30001| Fri Feb 22 12:25:42.618 [conn4] moveChunk number of documents: 3
m30002| Fri Feb 22 12:25:42.618 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } for collection test.stuff from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:25:42.628 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.638 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.649 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.659 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.675 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.707 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.772 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:42.900 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 12:25:43.123 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 12:25:43.123 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
m30001| Fri Feb 22 12:25:43.156 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 12:25:43.162 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
[ { "shard" : "shard0001", "nChunks" : 2 }, { "shard" : "shard0000", "nChunks" : 12 } ]
m30001| Fri Feb 22 12:25:43.668 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:43.669 [conn4] moveChunk setting version to: 4|0||512763c34919e191ad21a7b5
m30002| Fri Feb 22 12:25:43.669 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 12:25:43.677 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
m30002| Fri Feb 22 12:25:43.677 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
m30002| Fri Feb 22 12:25:43.678 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:43-512763c70c8c7a3f858e90f2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535943677), what: "moveChunk.to", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 504, step4 of 5: 0, step5 of 5: 554 } }
m30001| Fri Feb 22 12:25:43.679 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:25:43.679 [conn4] moveChunk updating self version to: 4|1||512763c34919e191ad21a7b5 through { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } for collection 'test.stuff'
m30001| Fri Feb 22 12:25:43.679 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:43-512763c747b18995c720dcb9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535943679), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, from: "shard0000", to: "shard0001" } }
m30001| Fri Feb 22 12:25:43.679 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:43.679 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:43.680 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:25:43.680 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:43.680 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:43.680 [cleanupOldData-512763c747b18995c720dcba] (start) waiting to cleanup test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, # cursors remaining: 0
m30001| Fri Feb 22 12:25:43.680 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' unlocked.
m30001| Fri Feb 22 12:25:43.680 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:43-512763c747b18995c720dcbb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535943680), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 10, step6 of 6: 0 } }
m30001| Fri Feb 22 12:25:43.680 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8a') }, max: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec8a')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:24 r:123 w:16 reslen:37 1065ms
m30000| Fri Feb 22 12:25:43.681 [Balancer] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 5 version: 4|1||512763c34919e191ad21a7b5 based on: 3|1||512763c34919e191ad21a7b5
m30000| Fri Feb 22 12:25:43.682 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked.
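The "moveChunk data transfer progress" document repeats throughout this trace while the destination shard clones documents, with `state` moving through clone, catchup, steady, and done. As an illustration only (not part of the test or of MongoDB itself), a minimal parser for the state and counts embedded in these lines might look like this; the format is assumed from the log above:

```python
import re

# Hypothetical helper: extract migration state and clone counts from a
# "moveChunk data transfer progress" log line as seen in this trace.
PROGRESS_RE = re.compile(
    r'state: "(?P<state>\w+)", counts: \{ cloned: (?P<cloned>\d+), '
    r'clonedBytes: (?P<bytes>\d+), catchup: (?P<catchup>\d+), '
    r'steady: (?P<steady>\d+) \}'
)

def parse_progress(line):
    """Return (state, cloned, clonedBytes), or None if the line doesn't match."""
    m = PROGRESS_RE.search(line)
    if not m:
        return None
    return m.group("state"), int(m.group("cloned")), int(m.group("bytes"))

line = ('moveChunk data transfer progress: { active: true, ns: "test.stuff", '
        'state: "clone", counts: { cloned: 1, clonedBytes: 16715428, '
        'catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0')
print(parse_progress(line))  # -> ('clone', 1, 16715428)
```

Fed the lines above, this would show each ~16 MB document arriving during the clone phase before the donor enters its critical section.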
m30001| Fri Feb 22 12:25:43.700 [cleanupOldData-512763c747b18995c720dcba] waiting to remove documents for test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
m30001| Fri Feb 22 12:25:43.700 [cleanupOldData-512763c747b18995c720dcba] moveChunk starting delete for: test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
m30001| Fri Feb 22 12:25:43.700 [cleanupOldData-512763c747b18995c720dcba] moveChunk deleted 3 documents for test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8a') } -> { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }
[ { "shard" : "shard0001", "nChunks" : 3 }, { "shard" : "shard0000", "nChunks" : 11 } ]
m30000| Fri Feb 22 12:25:44.683 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763c84919e191ad21a7b9
m30000| Fri Feb 22 12:25:44.684 [Balancer] ns: test.stuff going to move { _id: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec8d')", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('512763c34919e191ad21a7b5'), ns: "test.stuff", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30000| Fri Feb 22 12:25:44.685 [Balancer] moving chunk ns: test.stuff moving ( ns:test.stuffshard: shard0000:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }) shard0000:localhost:30001 -> shard0001:localhost:30002
m30001| Fri Feb 22 12:25:44.685 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:25:44.685 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec8d')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:25:44.686 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' acquired, ts : 512763c847b18995c720dcbc
m30001| Fri Feb 22 12:25:44.686 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:44-512763c847b18995c720dcbd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535944686), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, from: "shard0000", to: "shard0001" } }
m30001| Fri Feb 22 12:25:44.687 [conn4] moveChunk request accepted at version 4|1||512763c34919e191ad21a7b5
m30001| Fri Feb 22 12:25:44.687 [conn4] moveChunk number of documents: 3
m30002| Fri Feb 22 12:25:44.687 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } for collection test.stuff from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:25:44.697 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.707 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.718 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.728 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.744 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.776 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.840 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:44.969 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 12:25:45.180 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 12:25:45.180 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }
m30002| Fri Feb 22 12:25:45.210 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }
m30001| Fri Feb 22 12:25:45.225 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:45.225 [conn4] moveChunk setting version to: 5|0||512763c34919e191ad21a7b5
m30002| Fri Feb 22 12:25:45.225 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 12:25:45.231 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }
m30002| Fri Feb 22 12:25:45.231 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }
m30002| Fri Feb 22 12:25:45.231 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:45-512763c90c8c7a3f858e90f3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535945231), what: "moveChunk.to", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 492, step4 of 5: 0, step5 of 5: 50 } }
m30001| Fri Feb 22 12:25:45.235 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:25:45.235 [conn4] moveChunk updating self version to: 5|1||512763c34919e191ad21a7b5 through { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } for collection 'test.stuff'
m30001| Fri Feb 22 12:25:45.236 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:45-512763c947b18995c720dcbe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535945236), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, from: "shard0000", to: "shard0001" } }
m30001| Fri Feb 22 12:25:45.236 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:45.236 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:45.236 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:25:45.236 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:45.236 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:45.236 [cleanupOldData-512763c947b18995c720dcbf] (start) waiting to cleanup test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, # cursors remaining: 0
m30001| Fri Feb 22 12:25:45.236 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' unlocked.
m30001| Fri Feb 22 12:25:45.236 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:45-512763c947b18995c720dcc0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535945236), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 538, step5 of 6: 10, step6 of 6: 0 } }
m30001| Fri Feb 22 12:25:45.236 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bebbafd6ee3fe7ec8d') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bebbafd6ee3fe7ec8d')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:13 r:91 w:16 reslen:37 551ms
m30000| Fri Feb 22 12:25:45.237 [Balancer] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 6 version: 5|1||512763c34919e191ad21a7b5 based on: 4|1||512763c34919e191ad21a7b5
m30000| Fri Feb 22 12:25:45.238 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked.
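The bracketed arrays interleaved with the log are the test script printing per-shard chunk counts after each balancing round; they drift from 2/12 toward balance as the Balancer moves one chunk per round from shard0000 to shard0001. As a hedged sketch only (the threshold of 1 and the function name are illustrative, not MongoDB's actual balancing policy), that decision rule can be modeled as:

```python
# Hypothetical model of the balancing loop the test is waiting on: pick a
# donor/recipient pair whenever the chunk-count spread exceeds a threshold.
def pick_migration(chunk_counts, threshold=1):
    """chunk_counts: dict shard -> nChunks. Return (donor, recipient) or None."""
    donor = max(chunk_counts, key=chunk_counts.get)        # most-loaded shard
    recipient = min(chunk_counts, key=chunk_counts.get)    # least-loaded shard
    if chunk_counts[donor] - chunk_counts[recipient] > threshold:
        return donor, recipient
    return None

# Matches the distribution printed above after three migrations:
print(pick_migration({"shard0000": 10, "shard0001": 4}))  # -> ('shard0000', 'shard0001')
print(pick_migration({"shard0000": 7, "shard0001": 7}))   # -> None
```

Once the spread falls within the threshold the balancer round becomes a no-op, which is when the test's chunk-count printout stops changing.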
m30001| Fri Feb 22 12:25:45.256 [cleanupOldData-512763c947b18995c720dcbf] waiting to remove documents for test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } m30001| Fri Feb 22 12:25:45.256 [cleanupOldData-512763c947b18995c720dcbf] moveChunk starting delete for: test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } m30001| Fri Feb 22 12:25:45.256 [cleanupOldData-512763c947b18995c720dcbf] moveChunk deleted 3 documents for test.stuff from { _id: ObjectId('512763bebbafd6ee3fe7ec8d') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } [ { "shard" : "shard0001", "nChunks" : 4 }, { "shard" : "shard0000", "nChunks" : 10 } ] m30000| Fri Feb 22 12:25:46.240 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763ca4919e191ad21a7ba m30000| Fri Feb 22 12:25:46.241 [Balancer] ns: test.stuff going to move { _id: "test.stuff-_id_ObjectId('512763bfbbafd6ee3fe7ec90')", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('512763c34919e191ad21a7b5'), ns: "test.stuff", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shard: "shard0000" } from: shard0000 to: shard0001 tag [] m30000| Fri Feb 22 12:25:46.241 [Balancer] moving chunk ns: test.stuff moving ( ns:test.stuffshard: shard0000:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }) shard0000:localhost:30001 -> shard0001:localhost:30002 m30001| Fri Feb 22 12:25:46.241 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:25:46.241 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: 
ObjectId('512763bfbbafd6ee3fe7ec93') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bfbbafd6ee3fe7ec90')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:25:46.242 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' acquired, ts : 512763ca47b18995c720dcc1 m30001| Fri Feb 22 12:25:46.242 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:46-512763ca47b18995c720dcc2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535946242), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, from: "shard0000", to: "shard0001" } } m30001| Fri Feb 22 12:25:46.243 [conn4] moveChunk request accepted at version 5|1||512763c34919e191ad21a7b5 m30001| Fri Feb 22 12:25:46.243 [conn4] moveChunk number of documents: 3 m30002| Fri Feb 22 12:25:46.244 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } for collection test.stuff from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:25:46.254 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:46.264 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 
0 m30001| Fri Feb 22 12:25:46.274 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:46.285 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:46.301 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 [ { "shard" : "shard0001", "nChunks" : 4 }, { "shard" : "shard0000", "nChunks" : 10 } ] m30001| Fri Feb 22 12:25:46.333 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:46.397 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:46.525 [conn4] moveChunk data 
transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 12:25:46.747 [migrateThread] Waiting for replication to catch up before entering critical section m30002| Fri Feb 22 12:25:46.747 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } m30001| Fri Feb 22 12:25:46.782 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30002| Fri Feb 22 12:25:46.787 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } m30001| Fri Feb 22 12:25:47.294 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:25:47.294 [conn4] moveChunk setting version to: 6|0||512763c34919e191ad21a7b5 m30002| Fri Feb 22 12:25:47.294 [conn6] Waiting for commit to finish m30002| Fri Feb 22 12:25:47.299 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } m30002| Fri Feb 22 
12:25:47.299 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } m30002| Fri Feb 22 12:25:47.299 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:47-512763cb0c8c7a3f858e90f4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535947299), what: "moveChunk.to", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 503, step4 of 5: 0, step5 of 5: 551 } } m30001| Fri Feb 22 12:25:47.305 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:25:47.305 [conn4] moveChunk updating self version to: 6|1||512763c34919e191ad21a7b5 through { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') } for collection 'test.stuff' m30001| Fri Feb 22 12:25:47.306 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:47-512763cb47b18995c720dcc3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535947306), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, from: "shard0000", to: "shard0001" } } m30001| Fri Feb 22 12:25:47.306 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:47.306 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:47.306 [conn4] forking for 
cleanup of chunk data m30001| Fri Feb 22 12:25:47.306 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:25:47.306 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:25:47.306 [cleanupOldData-512763cb47b18995c720dcc4] (start) waiting to cleanup test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, # cursors remaining: 0 m30001| Fri Feb 22 12:25:47.306 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' unlocked. m30001| Fri Feb 22 12:25:47.307 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:47-512763cb47b18995c720dcc5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535947306), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 12:25:47.307 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec90') }, max: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bfbbafd6ee3fe7ec90')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:45 r:121 w:26 reslen:37 1065ms m30000| Fri Feb 22 12:25:47.308 [Balancer] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 7 version: 6|1||512763c34919e191ad21a7b5 based on: 5|1||512763c34919e191ad21a7b5 m30000| Fri Feb 22 12:25:47.308 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked. 
m30001| Fri Feb 22 12:25:47.326 [cleanupOldData-512763cb47b18995c720dcc4] waiting to remove documents for test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }
m30001| Fri Feb 22 12:25:47.326 [cleanupOldData-512763cb47b18995c720dcc4] moveChunk starting delete for: test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }
m30001| Fri Feb 22 12:25:47.327 [cleanupOldData-512763cb47b18995c720dcc4] moveChunk deleted 3 documents for test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec90') } -> { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }
[ { "shard" : "shard0001", "nChunks" : 5 }, { "shard" : "shard0000", "nChunks" : 9 } ]
m30000| Fri Feb 22 12:25:48.310 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' acquired, ts : 512763cc4919e191ad21a7bb
m30000| Fri Feb 22 12:25:48.311 [Balancer] ns: test.stuff going to move { _id: "test.stuff-_id_ObjectId('512763bfbbafd6ee3fe7ec93')", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('512763c34919e191ad21a7b5'), ns: "test.stuff", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shard: "shard0000" } from: shard0000 to: shard0001 tag []
m30000| Fri Feb 22 12:25:48.311 [Balancer] moving chunk ns: test.stuff moving ( ns:test.stuffshard: shard0000:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }) shard0000:localhost:30001 -> shard0001:localhost:30002
m30001| Fri Feb 22 12:25:48.311 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:25:48.312 [conn4] received moveChunk request: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bfbbafd6ee3fe7ec93')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:25:48.312 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' acquired, ts : 512763cc47b18995c720dcc6
m30001| Fri Feb 22 12:25:48.313 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:48-512763cc47b18995c720dcc7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535948313), what: "moveChunk.start", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, from: "shard0000", to: "shard0001" } }
m30001| Fri Feb 22 12:25:48.313 [conn4] moveChunk request accepted at version 6|1||512763c34919e191ad21a7b5
m30001| Fri Feb 22 12:25:48.313 [conn4] moveChunk number of documents: 3
m30002| Fri Feb 22 12:25:48.314 [migrateThread] starting receiving-end of migration of chunk { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') } for collection test.stuff from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:25:48.324 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:48.335 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
[ { "shard" : "shard0001", "nChunks" : 5 }, { "shard" : "shard0000", "nChunks" : 9 } ]
m30001| Fri Feb 22 12:25:48.345 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:48.355 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:48.371 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:48.404 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:48.468 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:48.596 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 12:25:48.671 [FileAllocator] allocating new datafile /data/db/migrateBigger1/test.4, filling with zeroes...
m30002| Fri Feb 22 12:25:48.671 [FileAllocator] done allocating datafile /data/db/migrateBigger1/test.4, size: 1024MB, took 0 secs
m30002| Fri Feb 22 12:25:48.833 [migrateThread] Waiting for replication to catch up before entering critical section
m30002| Fri Feb 22 12:25:48.833 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
m30001| Fri Feb 22 12:25:48.852 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30002| Fri Feb 22 12:25:48.869 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
[ { "shard" : "shard0001", "nChunks" : 5 }, { "shard" : "shard0000", "nChunks" : 9 } ]
m30001| Fri Feb 22 12:25:49.365 [conn4] moveChunk data transfer progress: { active: true, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:25:49.365 [conn4] moveChunk setting version to: 7|0||512763c34919e191ad21a7b5
m30002| Fri Feb 22 12:25:49.365 [conn6] Waiting for commit to finish
m30002| Fri Feb 22 12:25:49.368 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
m30002| Fri Feb 22 12:25:49.368 [migrateThread] migrate commit flushed to journal for 'test.stuff' { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
m30002| Fri Feb 22 12:25:49.368 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:49-512763cd0c8c7a3f858e90f5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535949368), what: "moveChunk.to", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 518, step4 of 5: 0, step5 of 5: 535 } }
m30001| Fri Feb 22 12:25:49.375 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.stuff", from: "localhost:30001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:25:49.375 [conn4] moveChunk updating self version to: 7|1||512763c34919e191ad21a7b5 through { _id: ObjectId('512763c0bbafd6ee3fe7ec96') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec99') } for collection 'test.stuff'
m30001| Fri Feb 22 12:25:49.382 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:49-512763cd47b18995c720dcc8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535949382), what: "moveChunk.commit", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, from: "shard0000", to: "shard0001" } }
m30001| Fri Feb 22 12:25:49.382 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:49.382 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:49.382 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:25:49.382 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:25:49.382 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:25:49.383 [cleanupOldData-512763cd47b18995c720dcc9] (start) waiting to cleanup test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, # cursors remaining: 0
m30001| Fri Feb 22 12:25:49.383 [conn4] distributed lock 'test.stuff/bs-smartos-x86-64-1.10gen.cc:30001:1361535939:30652' unlocked.
m30001| Fri Feb 22 12:25:49.383 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:49-512763cd47b18995c720dcca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:61799", time: new Date(1361535949383), what: "moveChunk.from", ns: "test.stuff", details: { min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1051, step5 of 6: 17, step6 of 6: 0 } }
m30001| Fri Feb 22 12:25:49.383 [conn4] command admin.$cmd command: { moveChunk: "test.stuff", from: "localhost:30001", to: "localhost:30002", fromShard: "shard0000", toShard: "shard0001", min: { _id: ObjectId('512763bfbbafd6ee3fe7ec93') }, max: { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }, maxChunkSizeBytes: 67108864, shardId: "test.stuff-_id_ObjectId('512763bfbbafd6ee3fe7ec93')", configdb: "localhost:29999", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:21 r:96 w:16 reslen:37 1071ms
m30000| Fri Feb 22 12:25:49.384 [Balancer] ChunkManager: time to load chunks for test.stuff: 0ms sequenceNumber: 8 version: 7|1||512763c34919e191ad21a7b5 based on: 6|1||512763c34919e191ad21a7b5
m30000| Fri Feb 22 12:25:49.384 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30000:1361535933:16838' unlocked.
m30001| Fri Feb 22 12:25:49.403 [cleanupOldData-512763cd47b18995c720dcc9] waiting to remove documents for test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
m30001| Fri Feb 22 12:25:49.403 [cleanupOldData-512763cd47b18995c720dcc9] moveChunk starting delete for: test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
m30001| Fri Feb 22 12:25:49.403 [cleanupOldData-512763cd47b18995c720dcc9] moveChunk deleted 3 documents for test.stuff from { _id: ObjectId('512763bfbbafd6ee3fe7ec93') } -> { _id: ObjectId('512763c0bbafd6ee3fe7ec96') }
[ { "shard" : "shard0001", "nChunks" : 6 }, { "shard" : "shard0000", "nChunks" : 8 } ]
m30000| Fri Feb 22 12:25:50.358 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29999| Fri Feb 22 12:25:50.376 [conn3] end connection 127.0.0.1:33448 (8 connections now open)
m29999| Fri Feb 22 12:25:50.376 [conn2] end connection 127.0.0.1:33782 (8 connections now open)
m29999| Fri Feb 22 12:25:50.376 [conn4] end connection 127.0.0.1:64790 (8 connections now open)
m29999| Fri Feb 22 12:25:50.376 [conn5] end connection 127.0.0.1:60081 (8 connections now open)
m30001| Fri Feb 22 12:25:50.377 [conn4] end connection 127.0.0.1:61799 (4 connections now open)
m30001| Fri Feb 22 12:25:50.377 [conn3] end connection 127.0.0.1:48136 (4 connections now open)
m30002| Fri Feb 22 12:25:50.377 [conn4] end connection 127.0.0.1:36574 (5 connections now open)
m30002| Fri Feb 22 12:25:50.377 [conn3] end connection 127.0.0.1:54518 (5 connections now open)
Fri Feb 22 12:25:51.358 shell: stopped mongo program on port 30000
m29999| Fri Feb 22 12:25:51.359 got signal 15 (Terminated), will terminate after current cmd ends
m29999| Fri Feb 22 12:25:51.359 [interruptThread] now exiting
m29999| Fri Feb 22 12:25:51.359 dbexit:
m29999| Fri Feb 22 12:25:51.359 [interruptThread] shutdown: going to close listening sockets...
m29999| Fri Feb 22 12:25:51.359 [interruptThread] closing listening socket: 18
m29999| Fri Feb 22 12:25:51.359 [interruptThread] closing listening socket: 19
m29999| Fri Feb 22 12:25:51.359 [interruptThread] closing listening socket: 20
m29999| Fri Feb 22 12:25:51.359 [interruptThread] removing socket file: /tmp/mongodb-29999.sock
m29999| Fri Feb 22 12:25:51.359 [interruptThread] shutdown: going to flush diaglog...
m29999| Fri Feb 22 12:25:51.359 [interruptThread] shutdown: going to close sockets...
m29999| Fri Feb 22 12:25:51.359 [interruptThread] shutdown: waiting for fs preallocator...
m29999| Fri Feb 22 12:25:51.359 [interruptThread] shutdown: lock for final commit...
m29999| Fri Feb 22 12:25:51.359 [interruptThread] shutdown: final commit...
m29999| Fri Feb 22 12:25:51.359 [conn1] end connection 127.0.0.1:61237 (4 connections now open)
m29999| Fri Feb 22 12:25:51.359 [conn6] end connection 127.0.0.1:33738 (4 connections now open)
m29999| Fri Feb 22 12:25:51.359 [conn8] end connection 127.0.0.1:34465 (4 connections now open)
m29999| Fri Feb 22 12:25:51.359 [conn7] end connection 127.0.0.1:52384 (4 connections now open)
m29999| Fri Feb 22 12:25:51.359 [conn9] end connection 127.0.0.1:49862 (3 connections now open)
m29999| Fri Feb 22 12:25:51.370 [interruptThread] shutdown: closing all files...
m29999| Fri Feb 22 12:25:51.371 [interruptThread] closeAllFiles() finished
m29999| Fri Feb 22 12:25:51.371 [interruptThread] journalCleanup...
m29999| Fri Feb 22 12:25:51.371 [interruptThread] removeJournalFiles
m29999| Fri Feb 22 12:25:51.371 dbexit: really exiting now
Fri Feb 22 12:25:52.359 shell: stopped mongo program on port 29999
m30001| Fri Feb 22 12:25:52.359 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 12:25:52.359 [interruptThread] now exiting
m30001| Fri Feb 22 12:25:52.359 dbexit:
m30001| Fri Feb 22 12:25:52.359 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 12:25:52.359 [interruptThread] closing listening socket: 12
m30001| Fri Feb 22 12:25:52.359 [interruptThread] closing listening socket: 13
m30001| Fri Feb 22 12:25:52.359 [interruptThread] closing listening socket: 14
m30001| Fri Feb 22 12:25:52.359 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 12:25:52.359 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 12:25:52.359 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 12:25:52.359 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 12:25:52.359 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 12:25:52.359 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 12:25:52.359 [conn1] end connection 127.0.0.1:44401 (2 connections now open)
m30001| Fri Feb 22 12:25:52.359 [conn5] end connection 127.0.0.1:59222 (2 connections now open)
m30002| Fri Feb 22 12:25:52.359 [conn5] end connection 127.0.0.1:42915 (3 connections now open)
m30002| Fri Feb 22 12:25:52.359 [conn6] end connection 127.0.0.1:57332 (3 connections now open)
m30001| Fri Feb 22 12:25:52.698 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 12:25:52.808 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 12:25:52.808 [interruptThread] journalCleanup...
m30001| Fri Feb 22 12:25:52.808 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:25:52.809 dbexit: really exiting now
Fri Feb 22 12:25:53.359 shell: stopped mongo program on port 30001
m30002| Fri Feb 22 12:25:53.359 got signal 15 (Terminated), will terminate after current cmd ends
m30002| Fri Feb 22 12:25:53.359 [interruptThread] now exiting
m30002| Fri Feb 22 12:25:53.359 dbexit:
m30002| Fri Feb 22 12:25:53.359 [interruptThread] shutdown: going to close listening sockets...
m30002| Fri Feb 22 12:25:53.359 [interruptThread] closing listening socket: 15
m30002| Fri Feb 22 12:25:53.359 [interruptThread] closing listening socket: 16
m30002| Fri Feb 22 12:25:53.359 [interruptThread] closing listening socket: 17
m30002| Fri Feb 22 12:25:53.359 [interruptThread] removing socket file: /tmp/mongodb-30002.sock
m30002| Fri Feb 22 12:25:53.359 [interruptThread] shutdown: going to flush diaglog...
m30002| Fri Feb 22 12:25:53.359 [interruptThread] shutdown: going to close sockets...
m30002| Fri Feb 22 12:25:53.360 [interruptThread] shutdown: waiting for fs preallocator...
m30002| Fri Feb 22 12:25:53.360 [interruptThread] shutdown: lock for final commit...
m30002| Fri Feb 22 12:25:53.360 [interruptThread] shutdown: final commit...
m30002| Fri Feb 22 12:25:53.360 [conn1] end connection 127.0.0.1:44767 (1 connection now open)
m30002| Fri Feb 22 12:25:53.482 [interruptThread] shutdown: closing all files...
m30002| Fri Feb 22 12:25:53.505 [interruptThread] closeAllFiles() finished
m30002| Fri Feb 22 12:25:53.505 [interruptThread] journalCleanup...
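The repeated `moveChunk data transfer progress` lines earlier in this run expose the receiving side's state machine (`clone` → `catchup` → `steady` → `done`) plus running document and byte counts. A small sketch, assuming only the exact wording shown in those log lines, that summarizes the latest reported progress:

```python
import re

# Sample fragments copied from the progress lines in this log.
lines = [
    'state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }',
    'state: "clone", counts: { cloned: 1, clonedBytes: 16715428, catchup: 0, steady: 0 }',
    'state: "steady", counts: { cloned: 3, clonedBytes: 50146284, catchup: 0, steady: 0 }',
]

pat = re.compile(r'state: "(\w+)", counts: { cloned: (\d+), clonedBytes: (\d+)')

def last_progress(log_lines):
    """Return (state, docs_cloned, bytes_cloned) from the most recent progress line."""
    state = docs = nbytes = None
    for ln in log_lines:
        m = pat.search(ln)
        if m:
            state, docs, nbytes = m.group(1), int(m.group(2)), int(m.group(3))
    return state, docs, nbytes

print(last_progress(lines))  # ('steady', 3, 50146284)
```

At roughly 16.7 MB per document (50146284 bytes / 3 documents), these counts match the oversized documents this slowNightly migration test moves around.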
m30002| Fri Feb 22 12:25:53.505 [interruptThread] removeJournalFiles
m30002| Fri Feb 22 12:25:53.506 dbexit: really exiting now
Fri Feb 22 12:25:54.359 shell: stopped mongo program on port 30002
Fri Feb 22 12:25:54.363 [conn16] end connection 127.0.0.1:53207 (0 connections now open)
21.7824 seconds
Fri Feb 22 12:25:54.409 [initandlisten] connection accepted from 127.0.0.1:60890 #17 (1 connection now open)
Fri Feb 22 12:25:54.410 [conn17] end connection 127.0.0.1:60890 (0 connections now open)
*******************************************
Test : sharding_migrate_cursor1.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_migrate_cursor1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_migrate_cursor1.js";TestData.testFile = "sharding_migrate_cursor1.js";TestData.testName = "sharding_migrate_cursor1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:25:54 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:25:54.593 [initandlisten] connection accepted from 127.0.0.1:53985 #18 (1 connection now open)
null
Resetting db path '/data/db/migrate_cursor10'
Fri Feb 22 12:25:54.609 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/migrate_cursor10 --setParameter enableTestCommands=1
m30000| Fri Feb 22 12:25:54.697 [initandlisten] MongoDB starting : pid=12603 port=30000 dbpath=/data/db/migrate_cursor10 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 12:25:54.698 [initandlisten]
m30000| Fri Feb 22 12:25:54.698 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 12:25:54.698 [initandlisten] **       uses to detect impending page faults.
m30000| Fri Feb 22 12:25:54.698 [initandlisten] **       This may result in slower performance for certain use cases
m30000| Fri Feb 22 12:25:54.698 [initandlisten]
m30000| Fri Feb 22 12:25:54.698 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 12:25:54.698 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 12:25:54.698 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 12:25:54.698 [initandlisten] allocator: system
m30000| Fri Feb 22 12:25:54.698 [initandlisten] options: { dbpath: "/data/db/migrate_cursor10", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 12:25:54.698 [initandlisten] journal dir=/data/db/migrate_cursor10/journal
m30000| Fri Feb 22 12:25:54.698 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 12:25:54.715 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/local.ns, filling with zeroes...
m30000| Fri Feb 22 12:25:54.715 [FileAllocator] creating directory /data/db/migrate_cursor10/_tmp
m30000| Fri Feb 22 12:25:54.715 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:25:54.715 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/local.0, filling with zeroes...
m30000| Fri Feb 22 12:25:54.716 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:25:54.718 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 12:25:54.718 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 12:25:54.812 [initandlisten] connection accepted from 127.0.0.1:57353 #1 (1 connection now open)
Resetting db path '/data/db/migrate_cursor11'
Fri Feb 22 12:25:54.815 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/migrate_cursor11 --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:25:54.906 [initandlisten] MongoDB starting : pid=12604 port=30001 dbpath=/data/db/migrate_cursor11 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:25:54.906 [initandlisten]
m30001| Fri Feb 22 12:25:54.906 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:25:54.906 [initandlisten] **       uses to detect impending page faults.
m30001| Fri Feb 22 12:25:54.906 [initandlisten] **       This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:25:54.906 [initandlisten]
m30001| Fri Feb 22 12:25:54.906 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:25:54.906 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:25:54.906 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:25:54.907 [initandlisten] allocator: system
m30001| Fri Feb 22 12:25:54.907 [initandlisten] options: { dbpath: "/data/db/migrate_cursor11", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:25:54.907 [initandlisten] journal dir=/data/db/migrate_cursor11/journal
m30001| Fri Feb 22 12:25:54.907 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:25:54.921 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:25:54.921 [FileAllocator] creating directory /data/db/migrate_cursor11/_tmp
m30001| Fri Feb 22 12:25:54.922 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:25:54.922 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/local.0, filling with zeroes...
m30001| Fri Feb 22 12:25:54.922 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:25:54.925 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:25:54.925 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:25:55.017 [initandlisten] connection accepted from 127.0.0.1:58546 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:25:55.018 [initandlisten] connection accepted from 127.0.0.1:53247 #2 (2 connections now open)
ShardingTest migrate_cursor1 : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:25:55.026 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 25 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:25:55.045 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:25:55.045 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=12605 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:25:55.045 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:25:55.045 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:25:55.045 [mongosMain] options: { chunkSize: 25, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 12:25:55.045 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 12:25:55.045 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:25:55.046 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:25:55.047 [mongosMain] connected connection!
m30000| Fri Feb 22 12:25:55.047 [initandlisten] connection accepted from 127.0.0.1:43401 #3 (3 connections now open)
m30999| Fri Feb 22 12:25:55.047 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:25:55.048 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:25:55.048 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:25:55.048 [initandlisten] connection accepted from 127.0.0.1:53640 #4 (4 connections now open)
m30999| Fri Feb 22 12:25:55.048 [mongosMain] connected connection!
m30000| Fri Feb 22 12:25:55.048 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:25:55.057 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:25:55.058 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838 )
m30999| Fri Feb 22 12:25:55.058 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:25:55.058 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30000| Fri Feb 22 12:25:55.058 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/config.ns, filling with zeroes...
m30999| Fri Feb 22 12:25:55.058 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838:mongosMain:5758",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:25:55 2013" },
m30999|   "why" : "upgrading config database to new format v4",
m30999|   "ts" : { "$oid" : "512763d3b6629e564e620a71" } }
m30999| { "_id" : "configUpgrade",
m30999|   "state" : 0 }
m30000| Fri Feb 22 12:25:55.058 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:25:55.058 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/config.0, filling with zeroes...
m30000| Fri Feb 22 12:25:55.059 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:25:55.059 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/config.1, filling with zeroes...
m30000| Fri Feb 22 12:25:55.059 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:25:55.063 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:25:55.064 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:25:55.065 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:25:55.066 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:25:55.067 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:25:55 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838', sleeping for 30000ms
m30000| Fri Feb 22 12:25:55.067 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:25:55.067 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:25:55.068 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838' acquired, ts : 512763d3b6629e564e620a71
m30999| Fri Feb 22 12:25:55.070 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:25:55.070 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:25:55.070 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:55-512763d3b6629e564e620a72", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535955070), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:25:55.070 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:25:55.071 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:25:55.071 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:25:55.072 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:25:55.072 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:25:55.073 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:25:55-512763d3b6629e564e620a74", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535955073), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:25:55.073 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:25:55.073 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838' unlocked.
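Throughout this log, output from every process the test harness manages is interleaved on one stream, with each managed `mongod`/`mongos` line carrying an `mNNNNN|` port prefix and bare lines coming from the shell itself. A minimal sketch, assuming only that prefix convention, for demultiplexing such a log:

```python
import re
from collections import defaultdict

# Sample lines in the prefix style used by this harness (paths elided).
sample = [
    "m30000| Fri Feb 22 12:25:55.083 [initandlisten] connection accepted from 127.0.0.1:37871 #5 (5 connections now open)",
    "m30999| Fri Feb 22 12:25:55.083 [Balancer] connected connection!",
    "Fri Feb 22 12:25:55.026 shell: started program mongos --port 30999",
]

def demux(lines):
    """Group harness log lines by their 'mNNNNN|' process prefix.

    Lines without a prefix come from the mongo shell itself and are
    filed under 'shell'."""
    out = defaultdict(list)
    for ln in lines:
        m = re.match(r"(m\d+)\| (.*)", ln)
        if m:
            out[m.group(1)].append(m.group(2))
        else:
            out["shell"].append(ln)
    return out

grouped = demux(sample)
print(sorted(grouped))  # ['m30000', 'm30999', 'shell']
```

Splitting the stream this way makes it much easier to follow a single process (say, the balancer on the `m30999` mongos) through a run like this one.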
m30000| Fri Feb 22 12:25:55.074 [conn3] build index config.settings { _id: 1 } m30999| Fri Feb 22 12:25:55.075 [websvr] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 12:25:55.075 BackgroundJob starting: Balancer m30999| Fri Feb 22 12:25:55.075 [Balancer] about to contact config servers and shards m30999| Fri Feb 22 12:25:55.075 BackgroundJob starting: cursorTimeout m30999| Fri Feb 22 12:25:55.075 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819 m30999| Fri Feb 22 12:25:55.075 BackgroundJob starting: PeriodicTask::Runner m30000| Fri Feb 22 12:25:55.075 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:25:55.076 [websvr] admin web console waiting for connections on port 31999 m30999| Fri Feb 22 12:25:55.076 [mongosMain] waiting for connections on port 30999 m30000| Fri Feb 22 12:25:55.076 [conn3] build index config.chunks { _id: 1 } m30000| Fri Feb 22 12:25:55.077 [conn3] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:25:55.077 [conn3] info: creating collection config.chunks on add index m30000| Fri Feb 22 12:25:55.077 [conn3] build index config.chunks { ns: 1, min: 1 } m30000| Fri Feb 22 12:25:55.078 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:25:55.078 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 } m30000| Fri Feb 22 12:25:55.079 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:25:55.079 [conn3] build index config.chunks { ns: 1, lastmod: 1 } m30000| Fri Feb 22 12:25:55.080 [conn3] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:25:55.080 [conn3] build index config.shards { _id: 1 } m30000| Fri Feb 22 12:25:55.081 [conn3] build index done. scanned 0 total records. 
0 secs m30000| Fri Feb 22 12:25:55.081 [conn3] info: creating collection config.shards on add index m30000| Fri Feb 22 12:25:55.081 [conn3] build index config.shards { host: 1 } m30000| Fri Feb 22 12:25:55.082 [conn3] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:25:55.082 [Balancer] config servers and shards contacted successfully m30999| Fri Feb 22 12:25:55.082 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:25:55 m30999| Fri Feb 22 12:25:55.082 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 12:25:55.082 [Balancer] creating new connection to:localhost:30000 m30999| Fri Feb 22 12:25:55.083 BackgroundJob starting: ConnectBG m30000| Fri Feb 22 12:25:55.083 [conn3] build index config.mongos { _id: 1 } m30000| Fri Feb 22 12:25:55.083 [initandlisten] connection accepted from 127.0.0.1:37871 #5 (5 connections now open) m30999| Fri Feb 22 12:25:55.083 [Balancer] connected connection! m30000| Fri Feb 22 12:25:55.084 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:25:55.084 [Balancer] Refreshing MaxChunkSize: 25 m30999| Fri Feb 22 12:25:55.084 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838 ) m30999| Fri Feb 22 12:25:55.084 [Balancer] inserting initial doc in config.locks for lock balancer m30999| Fri Feb 22 12:25:55.084 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:25:55 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512763d3b6629e564e620a76" } } m30999| { "_id" : "balancer", m30999| "state" : 0 } m30999| Fri Feb 22 12:25:55.085 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838' acquired, ts : 512763d3b6629e564e620a76 m30999| Fri Feb 22 12:25:55.085 [Balancer] *** start balancing round m30999| Fri Feb 22 12:25:55.085 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:25:55.085 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:25:55.085 [Balancer] no collections to balance m30999| Fri Feb 22 12:25:55.085 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:25:55.085 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:25:55.086 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838' unlocked. 
m30999| Fri Feb 22 12:25:55.228 [mongosMain] connection accepted from 127.0.0.1:61483 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 12:25:55.230 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:25:55.231 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:25:55.232 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:25:55.232 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:25:55.233 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 12:25:55.235 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:25:55.235 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:25:55.236 [conn1] connected connection!
m30001| Fri Feb 22 12:25:55.236 [initandlisten] connection accepted from 127.0.0.1:63103 #2 (2 connections now open)
m30999| Fri Feb 22 12:25:55.237 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 12:25:55.238 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:25:55.238 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:25:55.238 [initandlisten] connection accepted from 127.0.0.1:59787 #6 (6 connections now open)
m30999| Fri Feb 22 12:25:55.238 [conn1] connected connection!
m30999| Fri Feb 22 12:25:55.238 [conn1] creating WriteBackListener for: localhost:30000 serverID: 512763d3b6629e564e620a75
m30999| Fri Feb 22 12:25:55.238 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 12:25:55.238 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 12:25:55.239 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:25:55.239 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:25:55.242 [conn1] connected connection!
m30999| Fri Feb 22 12:25:55.242 [conn1] creating WriteBackListener for: localhost:30001 serverID: 512763d3b6629e564e620a75
m30001| Fri Feb 22 12:25:55.242 [initandlisten] connection accepted from 127.0.0.1:32787 #3 (3 connections now open)
m30999| Fri Feb 22 12:25:55.242 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 12:25:55.242 BackgroundJob starting: WriteBackListener-localhost:30001
m30999| Fri Feb 22 12:25:55.243 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:25:55.243 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:25:55.243 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:25:55.243 [initandlisten] connection accepted from 127.0.0.1:63176 #7 (7 connections now open)
m30999| Fri Feb 22 12:25:55.243 [conn1] connected connection!
m30999| Fri Feb 22 12:25:55.244 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:25:55.244 BackgroundJob starting: ConnectBG
m30001| Fri Feb 22 12:25:55.244 [initandlisten] connection accepted from 127.0.0.1:47752 #4 (4 connections now open)
m30999| Fri Feb 22 12:25:55.244 [conn1] connected connection!
m30999| Fri Feb 22 12:25:55.244 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:25:55.245 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:25:55.245 [conn1] enabling sharding on: test
stringSize: 1024 docsPerChunk: 25904 numDocs: 518080
m30001| Fri Feb 22 12:25:55.246 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:25:55.247 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:25:55.247 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/test.0, filling with zeroes...
m30001| Fri Feb 22 12:25:55.247 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:25:55.247 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/test.1, filling with zeroes...
m30001| Fri Feb 22 12:25:55.247 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:25:55.251 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 12:25:55.253 [conn3] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:25:58.053 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/test.2, filling with zeroes...
m30001| Fri Feb 22 12:25:58.053 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/test.2, size: 256MB, took 0 secs
m30999| Fri Feb 22 12:26:01.086 [Balancer] Refreshing MaxChunkSize: 25
m30999| Fri Feb 22 12:26:01.086 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 12:26:03.115 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/test.3, filling with zeroes...
m30001| Fri Feb 22 12:26:03.115 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/test.3, size: 512MB, took 0 secs
m30999| Fri Feb 22 12:26:07.087 [Balancer] Refreshing MaxChunkSize: 25
m30999| Fri Feb 22 12:26:07.087 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 12:26:13.088 [Balancer] Refreshing MaxChunkSize: 25
m30999| Fri Feb 22 12:26:13.088 [Balancer] skipping balancing round because balancing is disabled
m30001| Fri Feb 22 12:26:15.335 [FileAllocator] allocating new datafile /data/db/migrate_cursor11/test.4, filling with zeroes...
m30001| Fri Feb 22 12:26:15.335 [FileAllocator] done allocating datafile /data/db/migrate_cursor11/test.4, size: 1024MB, took 0 secs
m30999| Fri Feb 22 12:26:19.089 [Balancer] Refreshing MaxChunkSize: 25
m30999| Fri Feb 22 12:26:19.089 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 12:26:25.068 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:26:25 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535955:16838', sleeping for 30000ms
m30999| Fri Feb 22 12:26:25.090 [Balancer] Refreshing MaxChunkSize: 25
m30999| Fri Feb 22 12:26:25.090 [Balancer] skipping balancing round because balancing is disabled
m30999| Fri Feb 22 12:26:27.151 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:26:27.151 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:26:27.151 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 12:26:27.584 [conn4] warning: Finding the split vector for test.foo over { _id: 1.0 } keyCount: 11548 numSplits: 44 lookedAt: 9924 took 432ms
m30001| Fri Feb 22 12:26:27.584 [conn4] command admin.$cmd command: { splitVector: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, maxChunkSizeBytes: 26214400, maxSplitPoints: 0, maxChunkObjects: 0 } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) r:454573 reslen:1027 432ms
m30999| Fri Feb 22 12:26:27.584 [conn1] going to create 45 chunk(s) for: test.foo using new epoch 512763f3b6629e564e620a77
m30999| Fri Feb 22 12:26:27.590 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 1|44||512763f3b6629e564e620a77 based on: (empty)
m30000| Fri Feb 22 12:26:27.591 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 12:26:27.594 [conn3] build index done. scanned 0 total records. 0.002 secs
m30999| Fri Feb 22 12:26:27.594 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|44, versionEpoch: ObjectId('512763f3b6629e564e620a77'), serverID: ObjectId('512763d3b6629e564e620a75'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f660 2
m30999| Fri Feb 22 12:26:27.595 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:26:27.596 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|44, versionEpoch: ObjectId('512763f3b6629e564e620a77'), serverID: ObjectId('512763d3b6629e564e620a75'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x117f660 2
m30001| Fri Feb 22 12:26:27.596 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 12:26:27.596 [initandlisten] connection accepted from 127.0.0.1:55754 #8 (8 connections now open)
m30999| Fri Feb 22 12:26:27.597 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:26:27.615 [conn1] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 0.0 }, to: "localhost:30000" }
m30999| Fri Feb 22 12:26:27.615 [conn1] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 11548.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:26:27.615 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 11548.0 }, maxChunkSizeBytes: 26214400, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30000| Fri Feb 22 12:26:27.616 [initandlisten] connection accepted from 127.0.0.1:53482 #9 (9 connections now open)
m30001| Fri Feb 22 12:26:27.616 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361535987:27473 (sleeping for 30000ms)
m30001| Fri Feb 22 12:26:27.618 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535987:27473' acquired, ts : 512763f329f93006548bd68a
m30001| Fri Feb 22 12:26:27.618 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:27-512763f329f93006548bd68b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:47752", time: new Date(1361535987618), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 11548.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:26:27.618 [conn4] moveChunk request accepted at version 1|44||512763f3b6629e564e620a77
m30001| Fri Feb 22 12:26:27.634 [conn4] moveChunk number of documents: 11548
m30000| Fri Feb 22 12:26:27.635 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 11548.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:26:27.635 [initandlisten] connection accepted from 127.0.0.1:47754 #5 (5 connections now open)
m30000| Fri Feb 22 12:26:27.636 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/test.ns, filling with zeroes...
m30000| Fri Feb 22 12:26:27.636 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:26:27.636 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/test.0, filling with zeroes...
m30000| Fri Feb 22 12:26:27.637 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:26:27.637 [FileAllocator] allocating new datafile /data/db/migrate_cursor10/test.1, filling with zeroes...
m30000| Fri Feb 22 12:26:27.637 [FileAllocator] done allocating datafile /data/db/migrate_cursor10/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:26:27.639 [migrateThread] build index test.foo { _id: 1 }
m30000| Fri Feb 22 12:26:27.640 [migrateThread] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:26:27.641 [migrateThread] info: creating collection test.foo on add index
m30001| Fri Feb 22 12:26:27.645 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.655 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.666 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.676 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.692 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.724 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.755 [conn5] command admin.$cmd command: { _migrateClone: 1 } ntoreturn:1 keyUpdates:0 locks(micros) r:61568 reslen:12195177 114ms
m30001| Fri Feb 22 12:26:27.789 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:27.917 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3323, clonedBytes: 3489150, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:28.173 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9528, clonedBytes: 10004400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:26:28.244 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:26:28.244 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 11548.0 }
m30000| Fri Feb 22 12:26:28.256 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 11548.0 }
m30001| Fri Feb 22 12:26:28.685 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 11548, clonedBytes: 12125400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:28.685 [conn4] moveChunk setting version to: 2|0||512763f3b6629e564e620a77
m30000| Fri Feb 22 12:26:28.686 [initandlisten] connection accepted from 127.0.0.1:35705 #10 (10 connections now open)
m30000| Fri Feb 22 12:26:28.686 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:26:28.691 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 11548.0 }
m30000| Fri Feb 22 12:26:28.691 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 11548.0 }
m30000| Fri Feb 22 12:26:28.691 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:28-512763f4e53c3304f16c10d3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535988691), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 11548.0 }, step1 of 5: 5, step2 of 5: 0, step3 of 5: 603, step4 of 5: 0, step5 of 5: 447 } }
m30000| Fri Feb 22 12:26:28.692 [initandlisten] connection accepted from 127.0.0.1:63599 #11 (11 connections now open)
m30001| Fri Feb 22 12:26:28.696 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 11548.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 11548, clonedBytes: 12125400, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:26:28.696 [conn4] moveChunk updating self version to: 2|1||512763f3b6629e564e620a77 through { _id: 11548.0 } -> { _id: 23097.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:26:28.696 [initandlisten] connection accepted from 127.0.0.1:52379 #12 (12 connections now open)
m30001| Fri Feb 22 12:26:28.697 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:28-512763f429f93006548bd68c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:47752", time: new Date(1361535988697), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 11548.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:26:28.697 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:26:28.697 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:26:28.697 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:26:28.697 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:26:28.697 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:26:28.698 [cleanupOldData-512763f429f93006548bd68d] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 11548.0 }, # cursors remaining: 1
m30001| Fri Feb 22 12:26:28.698 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535987:27473' unlocked.
m30001| Fri Feb 22 12:26:28.698 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:28-512763f429f93006548bd68e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:47752", time: new Date(1361535988698), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 11548.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 16, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:26:28.698 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 11548.0 }, maxChunkSizeBytes: 26214400, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:45 r:16434 w:38 reslen:37 1082ms
m30999| Fri Feb 22 12:26:28.698 [conn1] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:26:28.699 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||512763f3b6629e564e620a77 based on: 1|44||512763f3b6629e564e620a77
Fri Feb 22 12:26:28.704 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_migrate_cursor1.js", "testFile" : "sharding_migrate_cursor1.js", "testName" : "sharding_migrate_cursor1", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep(5); db.x.insert( {x:1} ); db.adminCommand( { moveChunk : 'test.foo' , find : { _id : 77712 } , to : 'localhost:30000' } ) localhost:30999/admin
m30001| Fri Feb 22 12:26:28.718 [cleanupOldData-512763f429f93006548bd68d] (looping 1) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 11548.0 } # cursors:1
m30001| Fri Feb 22 12:26:28.718 [cleanupOldData-512763f429f93006548bd68d] cursors: 139818501996076
sh12657| MongoDB shell version: 2.4.0-rc1-pre-
sh12657| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:26:28.777 [mongosMain] connection accepted from 127.0.0.1:46429 #2 (2 connections now open)
m30999| Fri Feb 22 12:26:28.789 [conn2] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:26:28.789 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:28.789 [conn2] connected connection!
m30999| Fri Feb 22 12:26:28.790 [conn2] initializing shard connection to localhost:30000
m30000| Fri Feb 22 12:26:28.790 [initandlisten] connection accepted from 127.0.0.1:56073 #13 (13 connections now open)
m30999| Fri Feb 22 12:26:28.790 [conn2] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:26:28.790 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:28.790 [conn2] connected connection!
m30001| Fri Feb 22 12:26:28.790 [initandlisten] connection accepted from 127.0.0.1:62139 #6 (6 connections now open)
m30999| Fri Feb 22 12:26:28.790 [conn2] initializing shard connection to localhost:30001
m30001| Fri Feb 22 12:26:28.791 [conn6] build index test.x { _id: 1 }
m30001| Fri Feb 22 12:26:28.793 [conn6] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:26:28.793 [conn2] CMD: movechunk: { moveChunk: "test.foo", find: { _id: 77712.0 }, to: "localhost:30000" }
m30999| Fri Feb 22 12:26:28.793 [conn2] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|6||000000000000000000000000min: { _id: 69293.0 }max: { _id: 80842.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:26:28.794 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 69293.0 }, max: { _id: 80842.0 }, maxChunkSizeBytes: 26214400, shardId: "test.foo-_id_69293.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false }
m30001| Fri Feb 22 12:26:28.795 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535987:27473' acquired, ts : 512763f429f93006548bd68f
m30001| Fri Feb 22 12:26:28.795 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:28-512763f429f93006548bd690", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:47752", time: new Date(1361535988795), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 69293.0 }, max: { _id: 80842.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:26:28.796 [conn4] moveChunk request accepted at version 2|1||512763f3b6629e564e620a77
m30001| Fri Feb 22 12:26:28.822 [conn4] moveChunk number of documents: 11549
m30000| Fri Feb 22 12:26:28.823 [migrateThread] starting receiving-end of migration of chunk { _id: 69293.0 } -> { _id: 80842.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:26:28.833 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
itcount: 499
m30001| Fri Feb 22 12:26:28.838 [cleanupOldData-512763f429f93006548bd68d] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 11548.0 }
m30001| Fri Feb 22 12:26:28.838 [cleanupOldData-512763f429f93006548bd68d] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 11548.0 }
m30001| Fri Feb 22 12:26:28.843 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:28.853 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:28.863 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:28.880 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:28.912 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 27, clonedBytes: 28350, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
cursor should be gone
m30001| Fri Feb 22 12:26:28.976 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1322, clonedBytes: 1388100, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:29.105 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3563, clonedBytes: 3741150, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:29.361 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8073, clonedBytes: 8476650, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:26:29.559 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:26:29.559 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 69293.0 } -> { _id: 80842.0 }
m30000| Fri Feb 22 12:26:29.560 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 69293.0 } -> { _id: 80842.0 }
m30001| Fri Feb 22 12:26:29.873 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 11549, clonedBytes: 12126450, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:26:29.873 [conn4] moveChunk setting version to: 3|0||512763f3b6629e564e620a77
m30000| Fri Feb 22 12:26:29.873 [conn10] Waiting for commit to finish
m30000| Fri Feb 22 12:26:29.877 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 69293.0 } -> { _id: 80842.0 }
m30000| Fri Feb 22 12:26:29.877 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 69293.0 } -> { _id: 80842.0 }
m30000| Fri Feb 22 12:26:29.877 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:29-512763f5e53c3304f16c10d4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361535989877), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 69293.0 }, max: { _id: 80842.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 735, step4 of 5: 0, step5 of 5: 317 } }
m30001| Fri Feb 22 12:26:29.883 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 69293.0 }, max: { _id: 80842.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 11549, clonedBytes: 12126450, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:26:29.884 [conn4] moveChunk updating self version to: 3|1||512763f3b6629e564e620a77 through { _id: 11548.0 } -> { _id: 23097.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:26:29.885 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:29-512763f529f93006548bd691", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:47752", time: new Date(1361535989884), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 69293.0 }, max: { _id: 80842.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:26:29.885 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:26:29.885 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:26:29.885 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:26:29.885 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:26:29.885 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:26:29.885 [cleanupOldData-512763f529f93006548bd692] (start) waiting to cleanup test.foo from { _id: 69293.0 } -> { _id: 80842.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:26:29.885 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361535987:27473' unlocked.
m30001| Fri Feb 22 12:26:29.885 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:29-512763f529f93006548bd693", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:47752", time: new Date(1361535989885), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 69293.0 }, max: { _id: 80842.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 26, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:26:29.885 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 69293.0 }, max: { _id: 80842.0 }, maxChunkSizeBytes: 26214400, shardId: "test.foo-_id_69293.0", configdb: "localhost:30000", secondaryThrottle: false, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) W:37 r:43610 w:42 reslen:37 1091ms
m30999| Fri Feb 22 12:26:29.886 [conn2] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:26:29.887 [conn2] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 4 version: 3|1||512763f3b6629e564e620a77 based on: 2|1||512763f3b6629e564e620a77
sh12657| [object Object]
m30001| Fri Feb 22 12:26:29.905 [cleanupOldData-512763f529f93006548bd692] waiting to remove documents for test.foo from { _id: 69293.0 } -> { _id: 80842.0 }
m30001| Fri Feb 22 12:26:29.906 [cleanupOldData-512763f429f93006548bd68d] moveChunk deleted 11548 documents for test.foo from { _id: MinKey } -> { _id: 11548.0 }
m30001| Fri Feb 22 12:26:29.907 [cleanupOldData-512763f529f93006548bd692] moveChunk starting delete for: test.foo from { _id: 69293.0 } -> { _id: 80842.0 }
m30999| Fri Feb 22 12:26:29.913 [conn2] end connection 127.0.0.1:46429 (1 connection now open)
m30001| Fri Feb 22 12:26:30.693 
[cleanupOldData-512763f529f93006548bd692] moveChunk deleted 11549 documents for test.foo from { _id: 69293.0 } -> { _id: 80842.0 } m30999| Fri Feb 22 12:26:31.091 [Balancer] Refreshing MaxChunkSize: 25 m30999| Fri Feb 22 12:26:31.091 [Balancer] skipping balancing round because balancing is disabled m30999| Fri Feb 22 12:26:34.906 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('512763f3b6629e564e620a77'), serverID: ObjectId('512763d3b6629e564e620a75'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f2e0 4 m30999| Fri Feb 22 12:26:34.906 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" } m30999| Fri Feb 22 12:26:34.907 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|0, versionEpoch: ObjectId('512763f3b6629e564e620a77'), serverID: ObjectId('512763d3b6629e564e620a75'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f2e0 4 m30000| Fri Feb 22 12:26:34.907 [conn6] no current chunk manager found for this shard, will initialize m30999| Fri Feb 22 12:26:34.908 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 12:26:34.908 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('512763f3b6629e564e620a77'), serverID: ObjectId('512763d3b6629e564e620a75'), shard: "shard0001", shardHost: "localhost:30001" } 0x117f660 4 m30999| Fri Feb 22 12:26:34.908 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|44, oldVersionEpoch: 
ObjectId('512763f3b6629e564e620a77'), ok: 1.0 } m30999| Fri Feb 22 12:26:34.910 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30001| Fri Feb 22 12:26:34.911 [conn4] end connection 127.0.0.1:47752 (5 connections now open) m30001| Fri Feb 22 12:26:34.911 [conn6] end connection 127.0.0.1:62139 (5 connections now open) m30001| Fri Feb 22 12:26:34.911 [conn3] end connection 127.0.0.1:32787 (5 connections now open) m30000| Fri Feb 22 12:26:34.929 [conn13] end connection 127.0.0.1:56073 (12 connections now open) m30000| Fri Feb 22 12:26:34.929 [conn6] end connection 127.0.0.1:59787 (12 connections now open) m30000| Fri Feb 22 12:26:34.929 [conn7] end connection 127.0.0.1:63176 (12 connections now open) m30000| Fri Feb 22 12:26:34.929 [conn3] end connection 127.0.0.1:43401 (12 connections now open) m30000| Fri Feb 22 12:26:34.929 [conn5] end connection 127.0.0.1:37871 (12 connections now open) Fri Feb 22 12:26:35.910 shell: stopped mongo program on port 30999 m30000| Fri Feb 22 12:26:35.910 got signal 15 (Terminated), will terminate after current cmd ends m30000| Fri Feb 22 12:26:35.910 [interruptThread] now exiting m30000| Fri Feb 22 12:26:35.910 dbexit: m30000| Fri Feb 22 12:26:35.910 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 12:26:35.910 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 12:26:35.910 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 12:26:35.910 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 12:26:35.910 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 12:26:35.911 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 12:26:35.911 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 12:26:35.911 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 12:26:35.911 [interruptThread] shutdown: lock for final commit... 
m30000| Fri Feb 22 12:26:35.911 [interruptThread] shutdown: final commit...
m30000| Fri Feb 22 12:26:35.911 [conn1] end connection 127.0.0.1:57353 (7 connections now open)
m30000| Fri Feb 22 12:26:35.911 [conn2] end connection 127.0.0.1:53247 (7 connections now open)
m30000| Fri Feb 22 12:26:35.911 [conn8] end connection 127.0.0.1:55754 (7 connections now open)
m30000| Fri Feb 22 12:26:35.911 [conn9] end connection 127.0.0.1:53482 (7 connections now open)
m30000| Fri Feb 22 12:26:35.911 [conn12] end connection 127.0.0.1:52379 (7 connections now open)
m30001| Fri Feb 22 12:26:35.911 [conn5] end connection 127.0.0.1:47754 (2 connections now open)
m30000| Fri Feb 22 12:26:35.911 [conn10] end connection 127.0.0.1:35705 (7 connections now open)
m30000| Fri Feb 22 12:26:35.911 [conn11] end connection 127.0.0.1:63599 (6 connections now open)
m30000| Fri Feb 22 12:26:35.952 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 12:26:35.955 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 12:26:35.955 [interruptThread] journalCleanup...
m30000| Fri Feb 22 12:26:35.955 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 12:26:35.955 dbexit: really exiting now
Fri Feb 22 12:26:36.910 shell: stopped mongo program on port 30000
Fri Feb 22 12:26:37.106 [conn18] end connection 127.0.0.1:53985 (0 connections now open)
m30001| Fri Feb 22 12:26:36.911 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 12:26:36.911 [interruptThread] now exiting
m30001| Fri Feb 22 12:26:36.911 dbexit:
m30001| Fri Feb 22 12:26:36.911 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 12:26:36.911 [interruptThread] closing listening socket: 15
m30001| Fri Feb 22 12:26:36.911 [interruptThread] closing listening socket: 16
m30001| Fri Feb 22 12:26:36.911 [interruptThread] closing listening socket: 17
m30001| Fri Feb 22 12:26:36.911 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 12:26:36.911 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 12:26:36.911 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 12:26:36.911 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 12:26:36.911 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 12:26:36.911 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 12:26:36.911 [conn1] end connection 127.0.0.1:58546 (1 connection now open)
m30001| Fri Feb 22 12:26:37.037 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 12:26:37.104 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 12:26:37.104 [interruptThread] journalCleanup...
m30001| Fri Feb 22 12:26:37.104 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:26:37.106 dbexit: really exiting now
Fri Feb 22 12:26:37.911 shell: stopped mongo program on port 30001
*** ShardingTest migrate_cursor1 completed successfully in 43.466 seconds ***
                43.6895 seconds
Fri Feb 22 12:26:38.101 [initandlisten] connection accepted from 127.0.0.1:36137 #19 (1 connection now open)
Fri Feb 22 12:26:38.102 [conn19] end connection 127.0.0.1:36137 (0 connections now open)
*******************************************
         Test : sharding_multiple_collections.js ...
      Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_multiple_collections.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_multiple_collections.js";TestData.testFile = "sharding_multiple_collections.js";TestData.testName = "sharding_multiple_collections";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
         Date : Fri Feb 22 12:26:38 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:26:38.278 [initandlisten] connection accepted from 127.0.0.1:38771 #20 (1 connection now open)
null
Resetting db path '/data/db/multcollections0'
Fri Feb 22 12:26:38.296 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/multcollections0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 12:26:38.400 [initandlisten] MongoDB starting : pid=12667 port=30000 dbpath=/data/db/multcollections0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 12:26:38.400 [initandlisten]
m30000| Fri Feb 22 12:26:38.400 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 12:26:38.400 [initandlisten] ** uses to detect impending page faults.
m30000| Fri Feb 22 12:26:38.400 [initandlisten] ** This may result in slower performance for certain use cases
m30000| Fri Feb 22 12:26:38.400 [initandlisten]
m30000| Fri Feb 22 12:26:38.400 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 12:26:38.400 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 12:26:38.401 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 12:26:38.401 [initandlisten] allocator: system
m30000| Fri Feb 22 12:26:38.401 [initandlisten] options: { dbpath: "/data/db/multcollections0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 12:26:38.401 [initandlisten] journal dir=/data/db/multcollections0/journal
m30000| Fri Feb 22 12:26:38.401 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 12:26:38.419 [FileAllocator] allocating new datafile /data/db/multcollections0/local.ns, filling with zeroes...
m30000| Fri Feb 22 12:26:38.419 [FileAllocator] creating directory /data/db/multcollections0/_tmp
m30000| Fri Feb 22 12:26:38.419 [FileAllocator] done allocating datafile /data/db/multcollections0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:26:38.420 [FileAllocator] allocating new datafile /data/db/multcollections0/local.0, filling with zeroes...
m30000| Fri Feb 22 12:26:38.420 [FileAllocator] done allocating datafile /data/db/multcollections0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:26:38.423 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 12:26:38.423 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 12:26:38.499 [initandlisten] connection accepted from 127.0.0.1:42078 #1 (1 connection now open)
Resetting db path '/data/db/multcollections1'
Fri Feb 22 12:26:38.503 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/multcollections1 --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:26:38.592 [initandlisten] MongoDB starting : pid=12668 port=30001 dbpath=/data/db/multcollections1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:26:38.592 [initandlisten]
m30001| Fri Feb 22 12:26:38.592 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:26:38.592 [initandlisten] ** uses to detect impending page faults.
m30001| Fri Feb 22 12:26:38.592 [initandlisten] ** This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:26:38.592 [initandlisten]
m30001| Fri Feb 22 12:26:38.592 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:26:38.592 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:26:38.592 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:26:38.592 [initandlisten] allocator: system
m30001| Fri Feb 22 12:26:38.592 [initandlisten] options: { dbpath: "/data/db/multcollections1", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:26:38.593 [initandlisten] journal dir=/data/db/multcollections1/journal
m30001| Fri Feb 22 12:26:38.593 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:26:38.607 [FileAllocator] allocating new datafile /data/db/multcollections1/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:26:38.607 [FileAllocator] creating directory /data/db/multcollections1/_tmp
m30001| Fri Feb 22 12:26:38.607 [FileAllocator] done allocating datafile /data/db/multcollections1/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:26:38.607 [FileAllocator] allocating new datafile /data/db/multcollections1/local.0, filling with zeroes...
m30001| Fri Feb 22 12:26:38.607 [FileAllocator] done allocating datafile /data/db/multcollections1/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:26:38.610 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:26:38.610 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:26:38.705 [initandlisten] connection accepted from 127.0.0.1:51379 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:26:38.706 [initandlisten] connection accepted from 127.0.0.1:64252 #2 (2 connections now open)
ShardingTest multcollections : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:26:38.715 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:26:38.734 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:26:38.734 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=12669 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:26:38.734 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:26:38.734 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:26:38.734 [mongosMain] options: { chunkSize: 1, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 12:26:38.734 [mongosMain] config string : localhost:30000
m30999| Fri Feb 22 12:26:38.734 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:26:38.735 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:38.736 [mongosMain] connected connection!
m30000| Fri Feb 22 12:26:38.736 [initandlisten] connection accepted from 127.0.0.1:41716 #3 (3 connections now open)
m30999| Fri Feb 22 12:26:38.736 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:26:38.736 [mongosMain] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:26:38.737 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:26:38.737 [initandlisten] connection accepted from 127.0.0.1:63416 #4 (4 connections now open)
m30999| Fri Feb 22 12:26:38.737 [mongosMain] connected connection!
m30000| Fri Feb 22 12:26:38.737 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:26:38.750 [mongosMain] created new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:26:38.751 [mongosMain] trying to acquire new distributed lock for configUpgrade on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:26:38.751 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:26:38.751 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m30999| Fri Feb 22 12:26:38.751 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:mongosMain:5758",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:26:38 2013" },
m30999|   "why" : "upgrading config database to new format v4",
m30999|   "ts" : { "$oid" : "512763febd1f994466593642" } }
m30999| { "_id" : "configUpgrade",
m30999|   "state" : 0 }
m30000| Fri Feb 22 12:26:38.751 [FileAllocator] allocating new datafile /data/db/multcollections0/config.ns, filling with zeroes...
m30000| Fri Feb 22 12:26:38.751 [FileAllocator] done allocating datafile /data/db/multcollections0/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:26:38.751 [FileAllocator] allocating new datafile /data/db/multcollections0/config.0, filling with zeroes...
m30000| Fri Feb 22 12:26:38.752 [FileAllocator] done allocating datafile /data/db/multcollections0/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:26:38.752 [FileAllocator] allocating new datafile /data/db/multcollections0/config.1, filling with zeroes...
m30000| Fri Feb 22 12:26:38.752 [FileAllocator] done allocating datafile /data/db/multcollections0/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:26:38.754 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:26:38.755 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:26:38.756 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:26:38.757 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.757 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:26:38 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30000| Fri Feb 22 12:26:38.757 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:26:38.758 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:26:38.758 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512763febd1f994466593642
m30999| Fri Feb 22 12:26:38.761 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:26:38.761 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:26:38.761 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:38-512763febd1f994466593643", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535998761), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:26:38.761 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:26:38.761 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.762 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:26:38.762 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:26:38.763 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.763 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:26:38-512763febd1f994466593645", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361535998763), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:26:38.763 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:26:38.764 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30000| Fri Feb 22 12:26:38.765 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:26:38.765 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:26:38.765 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:26:38.765 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:26:38.765 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:26:38.765 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:26:38.765 BackgroundJob starting: PeriodicTask::Runner
m30000| Fri Feb 22 12:26:38.766 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.766 [mongosMain] waiting for connections on port 30999
m30999| Fri Feb 22 12:26:38.766 [websvr] admin web console waiting for connections on port 31999
m30000| Fri Feb 22 12:26:38.766 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 12:26:38.767 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:26:38.767 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 12:26:38.767 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 12:26:38.768 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:26:38.768 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 12:26:38.768 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:26:38.768 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 12:26:38.769 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:26:38.769 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 12:26:38.769 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:26:38.770 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 12:26:38.770 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 12:26:38.770 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.771 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:26:38.771 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:26:38
m30999| Fri Feb 22 12:26:38.771 [Balancer] created new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:26:38.771 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:26:38.771 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:26:38.771 [conn3] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 12:26:38.771 [initandlisten] connection accepted from 127.0.0.1:42877 #5 (5 connections now open)
m30999| Fri Feb 22 12:26:38.771 [Balancer] connected connection!
m30000| Fri Feb 22 12:26:38.772 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.772 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:26:38.772 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:26:38.772 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:26:38.772 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:26:38 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512763febd1f994466593647" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0 }
m30999| Fri Feb 22 12:26:38.773 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512763febd1f994466593647
m30999| Fri Feb 22 12:26:38.773 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:26:38.773 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:26:38.773 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:26:38.773 [Balancer] no collections to balance
m30999| Fri Feb 22 12:26:38.773 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:26:38.773 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:26:38.773 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:26:38.917 [mongosMain] connection accepted from 127.0.0.1:50482 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 12:26:38.919 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:26:38.919 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:26:38.920 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:26:38.920 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:26:38.921 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30999| Fri Feb 22 12:26:38.923 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:26:38.923 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:38.923 [conn1] connected connection!
m30001| Fri Feb 22 12:26:38.923 [initandlisten] connection accepted from 127.0.0.1:35157 #2 (2 connections now open)
m30999| Fri Feb 22 12:26:38.924 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 12:26:38.925 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:26:38.926 [conn1] best shard for new allocation is shard: shard0001:localhost:30001 mapped: 80 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:26:38.926 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:26:38.926 [conn1] enabling sharding on: test
m30999| Fri Feb 22 12:26:38.927 [conn1] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:26:38.927 BackgroundJob starting: ConnectBG
m30000| Fri Feb 22 12:26:38.927 [initandlisten] connection accepted from 127.0.0.1:46374 #6 (6 connections now open)
m30999| Fri Feb 22 12:26:38.927 [conn1] connected connection!
m30999| Fri Feb 22 12:26:38.927 [conn1] creating WriteBackListener for: localhost:30000 serverID: 512763febd1f994466593646
m30999| Fri Feb 22 12:26:38.927 [conn1] initializing shard connection to localhost:30000
m30999| Fri Feb 22 12:26:38.927 BackgroundJob starting: WriteBackListener-localhost:30000
m30999| Fri Feb 22 12:26:38.928 [conn1] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:26:38.928 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:38.928 [conn1] connected connection!
m30001| Fri Feb 22 12:26:38.928 [initandlisten] connection accepted from 127.0.0.1:61557 #3 (3 connections now open)
m30999| Fri Feb 22 12:26:38.928 [conn1] creating WriteBackListener for: localhost:30001 serverID: 512763febd1f994466593646
m30999| Fri Feb 22 12:26:38.928 [conn1] initializing shard connection to localhost:30001
m30999| Fri Feb 22 12:26:38.928 BackgroundJob starting: WriteBackListener-localhost:30001
m30001| Fri Feb 22 12:26:38.929 [FileAllocator] allocating new datafile /data/db/multcollections1/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:26:38.929 [FileAllocator] done allocating datafile /data/db/multcollections1/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:26:38.929 [FileAllocator] allocating new datafile /data/db/multcollections1/test.0, filling with zeroes...
m30001| Fri Feb 22 12:26:38.929 [FileAllocator] done allocating datafile /data/db/multcollections1/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:26:38.930 [FileAllocator] allocating new datafile /data/db/multcollections1/test.1, filling with zeroes...
m30001| Fri Feb 22 12:26:38.930 [FileAllocator] done allocating datafile /data/db/multcollections1/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:26:38.933 [conn3] build index test.foo { _id: 1 }
m30001| Fri Feb 22 12:26:38.934 [conn3] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:26:38.936 [conn3] build index test.bar { _id: 1 }
m30001| Fri Feb 22 12:26:38.937 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:26:44.774 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:26:44.774 [Balancer] creating new connection to:localhost:30000
m30999| Fri Feb 22 12:26:44.775 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:44.775 [Balancer] connected connection!
m30000| Fri Feb 22 12:26:44.775 [initandlisten] connection accepted from 127.0.0.1:47249 #7 (7 connections now open)
m30999| Fri Feb 22 12:26:44.775 [Balancer] creating new connection to:localhost:30001
m30999| Fri Feb 22 12:26:44.775 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:26:44.775 [Balancer] connected connection!
m30001| Fri Feb 22 12:26:44.775 [initandlisten] connection accepted from 127.0.0.1:57660 #4 (4 connections now open)
m30999| Fri Feb 22 12:26:44.776 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:26:44.776 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:26:44 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276404bd1f994466593648" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512763febd1f994466593647" } }
m30999| Fri Feb 22 12:26:44.776 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276404bd1f994466593648
m30999| Fri Feb 22 12:26:44.776 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:26:44.776 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:26:44.776 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:26:44.777 [Balancer] no collections to balance
m30999| Fri Feb 22 12:26:44.777 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:26:44.777 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:26:44.777 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:26:49.464 [FileAllocator] allocating new datafile /data/db/multcollections1/test.2, filling with zeroes...
m30001| Fri Feb 22 12:26:49.464 [FileAllocator] done allocating datafile /data/db/multcollections1/test.2, size: 256MB, took 0 secs
m30999| Fri Feb 22 12:26:50.778 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:26:50.778 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:26:50.778 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:26:50 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127640abd1f994466593649" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276404bd1f994466593648" } }
m30999| Fri Feb 22 12:26:50.779 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127640abd1f994466593649
m30999| Fri Feb 22 12:26:50.779 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:26:50.779 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:26:50.779 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:26:50.779 [Balancer] no collections to balance
m30999| Fri Feb 22 12:26:50.779 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:26:50.779 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:26:50.779 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:26:56.780 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:26:56.781 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:26:56.781 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:26:56 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276410bd1f99446659364a" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127640abd1f994466593649" } }
m30999| Fri Feb 22 12:26:56.782 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276410bd1f99446659364a
m30999| Fri Feb 22 12:26:56.782 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:26:56.782 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:26:56.782 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:26:56.782 [Balancer] no collections to balance
m30999| Fri Feb 22 12:26:56.782 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:26:56.782 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:26:56.782 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:27:02.783 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:02.783 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:02.784 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:02 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276416bd1f99446659364b" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276410bd1f99446659364a" } }
m30999| Fri Feb 22 12:27:02.784 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276416bd1f99446659364b
m30999| Fri Feb 22 12:27:02.784 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:02.784 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:02.784 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:02.784 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:02.784 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:02.784 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:02.784 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:27:08.207 [conn3] insert test.foo ninserted:1 keyUpdates:0 locks(micros) w:64 200ms
m30999| Fri Feb 22 12:27:08.758 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:27:08 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30999| Fri Feb 22 12:27:08.785 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:08.785 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:08.786 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:08 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127641cbd1f99446659364c" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276416bd1f99446659364b" } }
m30999| Fri Feb 22 12:27:08.786 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127641cbd1f99446659364c
m30999| Fri Feb 22 12:27:08.786 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:08.786 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:08.786 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:08.786 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:08.786 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:08.786 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:08.786 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:27:14.787 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:14.788 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:14.788 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:14 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276422bd1f99446659364d" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127641cbd1f99446659364c" } }
m30999| Fri Feb 22 12:27:14.789 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276422bd1f99446659364d
m30999| Fri Feb 22 12:27:14.789 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:14.789 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:14.789 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:14.789 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:14.789 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:14.789 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:14.789 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:27:16.482 [FileAllocator] allocating new datafile /data/db/multcollections1/test.3, filling with zeroes...
m30001| Fri Feb 22 12:27:16.482 [FileAllocator] done allocating datafile /data/db/multcollections1/test.3, size: 512MB, took 0 secs
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512763febd1f994466593644") }
  shards:
	{ "_id" : "shard0000", "host" : "localhost:30000" }
	{ "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
	{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
	{ "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
0 1000 2000
m30999| Fri Feb 22 12:27:20.790 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:20.790 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:20.790 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:20 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276428bd1f99446659364e" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276422bd1f99446659364d" } }
m30999| Fri Feb 22 12:27:20.791 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276428bd1f99446659364e
m30999| Fri Feb 22 12:27:20.791 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:20.791 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:20.791 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:20.791 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:20.791 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:20.791 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:20.791 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
3000 4000 5000 6000 7000 8000 9000 10000
m30999| Fri Feb 22 12:27:26.792 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:26.792 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:26.793 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:26 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127642ebd1f99446659364f" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276428bd1f99446659364e" } }
m30999| Fri Feb 22 12:27:26.793 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127642ebd1f99446659364f
m30999| Fri Feb 22 12:27:26.793 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:26.793 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:26.793 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:26.793 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:26.793 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:26.794 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:26.794 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
11000 12000 13000 14000 15000 16000 17000
m30999| Fri Feb 22 12:27:32.794 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:32.795 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:32.795 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:32 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276434bd1f994466593650" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127642ebd1f99446659364f" } }
m30999| Fri Feb 22 12:27:32.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276434bd1f994466593650
m30999| Fri Feb 22 12:27:32.796 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:32.796 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:32.796 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:32.796 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:32.796 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:32.796 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:32.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
18000 19000 20000 21000 22000 23000 24000 25000
m30999| Fri Feb 22 12:27:38.765 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:27:38 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30999| Fri Feb 22 12:27:38.796 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:38.797 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:38.797 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:38 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127643abd1f994466593651" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276434bd1f994466593650" } }
m30999| Fri Feb 22 12:27:38.798 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127643abd1f994466593651
m30999| Fri Feb 22 12:27:38.798 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:38.798 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:38.798 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:38.798 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:38.798 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:38.798 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:38.798 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
26000 27000 28000 29000 30000 31000 32000 33000
m30999| Fri Feb 22 12:27:44.799 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:44.799 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:44.799 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:44 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276440bd1f994466593652" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127643abd1f994466593651" } }
m30999| Fri Feb 22 12:27:44.800 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276440bd1f994466593652
m30999| Fri Feb 22 12:27:44.800 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:44.800 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:44.800 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:44.800 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:44.800 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:44.800 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:44.800 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
34000 35000 36000 37000 38000 39000 40000 41000
m30999| Fri Feb 22 12:27:50.801 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:50.801 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:50.802 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:50 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276446bd1f994466593653" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276440bd1f994466593652" } }
m30999| Fri Feb 22 12:27:50.802 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276446bd1f994466593653
m30999| Fri Feb 22 12:27:50.802 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:50.802 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:50.802 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:50.803 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:50.803 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:50.803 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:50.803 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
42000 43000 44000 45000 46000 47000 48000
m30999| Fri Feb 22 12:27:56.804 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:27:56.804 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:27:56.804 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:27:56 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127644cbd1f994466593654" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276446bd1f994466593653" } }
m30999| Fri Feb 22 12:27:56.805 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127644cbd1f994466593654
m30999| Fri Feb 22 12:27:56.805 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:27:56.805 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:27:56.805 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:27:56.805 [Balancer] no collections to balance
m30999| Fri Feb 22 12:27:56.805 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:27:56.805 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:27:56.805 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
49000 50000 51000 52000 53000 54000 55000 56000
m30999| Fri Feb 22 12:28:02.806 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:02.806 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:02.807 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:02 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276452bd1f994466593655" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127644cbd1f994466593654" } }
m30999| Fri Feb 22 12:28:02.808 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276452bd1f994466593655
m30999| Fri Feb 22 12:28:02.808 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:02.808 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:02.808 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:02.808 [Balancer] no collections to balance
m30999| Fri Feb 22 12:28:02.808 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:28:02.808 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:02.808 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
57000 58000 59000 60000 61000 62000 63000
m30999| Fri Feb 22 12:28:08.766 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:28:08 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30999| Fri Feb 22 12:28:08.809 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:08.809 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:08.810 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:08 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276458bd1f994466593656" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276452bd1f994466593655" } }
m30999| Fri Feb 22 12:28:08.810 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276458bd1f994466593656
m30999| Fri Feb 22 12:28:08.810 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:08.810 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:08.810 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:08.810 [Balancer] no collections to balance
m30999| Fri Feb 22 12:28:08.810 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:28:08.810 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:08.811 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
64000 65000 66000 67000 68000 69000 70000 71000
m30999| Fri Feb 22 12:28:14.811 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:14.812 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:14.812 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:14 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127645ebd1f994466593657" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276458bd1f994466593656" } }
m30999| Fri Feb 22 12:28:14.813 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127645ebd1f994466593657
m30999| Fri Feb 22 12:28:14.813 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:14.813 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:14.813 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:14.813 [Balancer] no collections to balance
m30999| Fri Feb 22 12:28:14.813 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:28:14.813 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:14.813 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
72000 73000 74000 75000 76000 77000 78000 79000
m30999| Fri Feb 22 12:28:20.814 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:20.814 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:20.814 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:20 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276464bd1f994466593658" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127645ebd1f994466593657" } }
m30999| Fri Feb 22 12:28:20.815 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276464bd1f994466593658
m30999| Fri Feb 22 12:28:20.815 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:20.815 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:20.815 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:20.815 [Balancer] no collections to balance
m30999| Fri Feb 22 12:28:20.815 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:28:20.815 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:20.816 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
80000 81000 82000 83000 84000 85000 86000
m30999| Fri Feb 22 12:28:26.816 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:26.817 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:26.817 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:26 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127646abd1f994466593659" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276464bd1f994466593658" } }
m30999| Fri Feb 22 12:28:26.818 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127646abd1f994466593659
m30999| Fri Feb 22 12:28:26.818 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:26.818 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:26.818 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:26.818 [Balancer] no collections to balance
m30999| Fri Feb 22 12:28:26.818 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:28:26.818 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:26.818 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
87000 88000 89000 90000 91000 92000 93000 94000
m30999| Fri Feb 22 12:28:32.819 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:32.819 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:32.819 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:32 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276470bd1f99446659365a" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127646abd1f994466593659" } }
m30999| Fri Feb 22 12:28:32.820 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276470bd1f99446659365a
m30999| Fri Feb 22 12:28:32.820 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:32.820 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:32.820 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:32.820 [Balancer] no collections to balance
m30999| Fri Feb 22 12:28:32.820 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:28:32.820 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:32.821 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
95000 96000 97000 98000 99000
--- Sharding Status ---
  sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512763febd1f994466593644") }
  shards:
    { "_id" : "shard0000", "host" : "localhost:30000" }
    { "_id" : "shard0001", "host" : "localhost:30001" }
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : true, "primary" : "shard0001" }
m30999| Fri Feb 22 12:28:37.417 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:28:37.418 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:28:37.418 [conn4] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:28:37.498 [conn1] going to create 107 chunk(s) for: test.foo using new epoch 51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:37.522 [conn1] ChunkManager: time to load chunks for test.foo: 16ms sequenceNumber: 2 version: 1|106||51276475bd1f99446659365b based on: (empty)
m30000| Fri Feb 22 12:28:37.523 [conn3] build index config.collections { _id: 1 }
m30000| Fri Feb 22 12:28:37.523 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:28:37.523 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|106, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 2
m30999| Fri Feb 22 12:28:37.524 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:28:37.524 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 1000|106, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 2
m30001| Fri Feb 22 12:28:37.524 [conn3] no current chunk manager found for this shard, will initialize
m30000| Fri Feb 22 12:28:37.524 [initandlisten] connection accepted from 127.0.0.1:42759 #8 (8 connections now open)
m30999| Fri Feb 22 12:28:37.528 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:28:37.529 [conn1] CMD: shardcollection: { shardcollection: "test.bar", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:28:37.529 [conn1] enable sharding on: test.bar with shard key: { _id: 1.0 }
m30001| Fri Feb 22 12:28:37.529 [conn4] request split points lookup for chunk test.bar { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:28:37.610 [conn1] going to create 217 chunk(s) for: test.bar using new epoch 51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:37.635 [conn1] ChunkManager: time to load chunks for test.bar: 5ms sequenceNumber: 3 version: 1|216||51276475bd1f99446659365c based on: (empty)
m30999| Fri Feb 22 12:28:37.635 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 1000|216, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 3
m30999| Fri Feb 22 12:28:37.635 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.bar", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.bar'" }
m30999| Fri Feb 22 12:28:37.636 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 1000|216, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), authoritative: true, shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 3
m30001| Fri Feb 22 12:28:37.636 [conn3] no current chunk manager found for this shard, will initialize
m30999| Fri Feb 22 12:28:37.641 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
ShardingTest input: { "shard0000" : 0, "shard0001" : 107 } min: 0 max: 107
0 1000
m30999| Fri Feb 22 12:28:38.778 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:28:38 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30999| Fri Feb 22 12:28:38.821 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:38.822 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:38.822 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:38 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276476bd1f99446659365d" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276470bd1f99446659365a" } }
m30999| Fri Feb 22 12:28:38.822 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276476bd1f99446659365d
m30999| Fri Feb 22 12:28:38.823 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:38.823 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:38.823 [Balancer] secondaryThrottle: 1
m30000| Fri Feb 22 12:28:38.825 [conn5] build index config.tags { _id: 1 }
m30000| Fri Feb 22 12:28:38.826 [conn5] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:28:38.826 [conn5] info: creating collection config.tags on add index
m30000| Fri Feb 22 12:28:38.826 [conn5] build index config.tags { ns: 1, min: 1 }
m30000| Fri Feb 22 12:28:38.827 [conn5] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:28:38.827 [Balancer] shard0001 has more chunks me:107 best: shard0000:0
m30999| Fri Feb 22 12:28:38.827 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:38.827 [Balancer] donor      : shard0001 chunks on 107
m30999| Fri Feb 22 12:28:38.827 [Balancer] receiver   : shard0000 chunks on 0
m30999| Fri Feb 22 12:28:38.827 [Balancer] threshold  : 8
m30999| Fri Feb 22 12:28:38.827 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 936.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:38.829 [Balancer] shard0001 has more chunks me:217 best: shard0000:0
m30999| Fri Feb 22 12:28:38.829 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:38.829 [Balancer] donor      : shard0001 chunks on 217
m30999| Fri Feb 22 12:28:38.829 [Balancer] receiver   : shard0000 chunks on 0
m30999| Fri Feb 22 12:28:38.829 [Balancer] threshold  : 8
m30999| Fri Feb 22 12:28:38.829 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: MinKey }, max: { _id: 461.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:38.830 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 936.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:38.830 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:38.830 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 936.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30000| Fri Feb 22 12:28:38.830 [initandlisten] connection accepted from 127.0.0.1:33730 #9 (9 connections now open)
m30001| Fri Feb 22 12:28:38.831 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113 (sleeping for 30000ms)
m30001| Fri Feb 22 12:28:38.833 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647699334798f3e47d44
m30001| Fri Feb 22 12:28:38.833 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-5127647699334798f3e47d45", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536118833), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 936.0 }, from: "shard0001", to: "shard0000" } }
m30000| Fri Feb 22 12:28:38.833 [initandlisten] connection accepted from 127.0.0.1:41381 #10 (10 connections now open)
m30001| Fri Feb 22 12:28:38.834 [conn4] moveChunk request accepted at version 1|106||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:38.836 [conn4] moveChunk number of documents: 936
m30000| Fri Feb 22 12:28:38.837 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 936.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:38.837 [initandlisten] connection accepted from 127.0.0.1:59513 #5 (5 connections now open)
m30000| Fri Feb 22 12:28:38.838 [FileAllocator] allocating new datafile /data/db/multcollections0/test.ns, filling with zeroes...
m30000| Fri Feb 22 12:28:38.838 [FileAllocator] done allocating datafile /data/db/multcollections0/test.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:28:38.838 [FileAllocator] allocating new datafile /data/db/multcollections0/test.0, filling with zeroes...
m30000| Fri Feb 22 12:28:38.839 [FileAllocator] done allocating datafile /data/db/multcollections0/test.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:28:38.839 [FileAllocator] allocating new datafile /data/db/multcollections0/test.1, filling with zeroes...
m30000| Fri Feb 22 12:28:38.839 [FileAllocator] done allocating datafile /data/db/multcollections0/test.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:28:38.842 [migrateThread] build index test.foo { _id: 1 }
m30000| Fri Feb 22 12:28:38.844 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:28:38.844 [migrateThread] info: creating collection test.foo on add index
m30001| Fri Feb 22 12:28:38.847 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.857 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 85, clonedBytes: 45050, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.867 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 310, clonedBytes: 164300, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.878 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 525, clonedBytes: 278250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.894 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 878, clonedBytes: 465340, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:38.897 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:38.897 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 936.0 }
m30000| Fri Feb 22 12:28:38.901 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 936.0 }
m30001| Fri Feb 22 12:28:38.926 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 936, clonedBytes: 496080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.926 [conn4] moveChunk setting version to: 2|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:38.927 [initandlisten] connection accepted from 127.0.0.1:34115 #11 (11 connections now open)
m30000| Fri Feb 22 12:28:38.927 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:38.932 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 936.0 }
m30000| Fri Feb 22 12:28:38.932 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 936.0 }
m30000| Fri Feb 22 12:28:38.932 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-51276476c49297cf54df55d4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536118932), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 936.0 }, step1 of 5: 7, step2 of 5: 0, step3 of 5: 52, step4 of 5: 0, step5 of 5: 35 } }
m30000| Fri Feb 22 12:28:38.932 [initandlisten] connection accepted from 127.0.0.1:46366 #12 (12 connections now open)
m30001| Fri Feb 22 12:28:38.937 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 936.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 936, clonedBytes: 496080, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:38.937 [conn4] moveChunk updating self version to: 2|1||51276475bd1f99446659365b through { _id: 936.0 } -> { _id: 1873.0 } for collection 'test.foo'
m30000| Fri Feb 22 12:28:38.937 [initandlisten] connection accepted from 127.0.0.1:47848 #13 (13 connections now open)
m30001| Fri Feb 22 12:28:38.938 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-5127647699334798f3e47d46", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536118938), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 936.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:38.938 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:38.938 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:38.938 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:38.938 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:38.938 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:38.938 [cleanupOldData-5127647699334798f3e47d47] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 936.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:38.939 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:38.939 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-5127647699334798f3e47d48", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536118939), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 936.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 2, step4 of 6: 89, step5 of 6: 12, step6 of 6: 0 } }
m30001| Fri Feb 22 12:28:38.939 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 936.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:17 r:2424 w:58 reslen:37 109ms
m30999| Fri Feb 22 12:28:38.939 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:38.940 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 2|1||51276475bd1f99446659365b based on: 1|106||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:38.940 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 461.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:38.940 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:38.941 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: MinKey }, max: { _id: 461.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_MinKey", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:38.941 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 4
m30999| Fri Feb 22 12:28:38.941 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|106, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:38.941 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647699334798f3e47d49
m30001| Fri Feb 22 12:28:38.941 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-5127647699334798f3e47d4a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536118941), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: MinKey }, max: { _id: 461.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:38.942 [conn4] moveChunk request accepted at version 1|216||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:38.943 [conn4] moveChunk number of documents: 461
m30000| Fri Feb 22 12:28:38.943 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 461.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30000| Fri Feb 22 12:28:38.945 [migrateThread] build index test.bar { _id: 1 }
m30000| Fri Feb 22 12:28:38.946 [migrateThread] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:28:38.946 [migrateThread] info: creating collection test.bar on add index
m30001| Fri Feb 22 12:28:38.953 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 95, clonedBytes: 99085, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.958 [cleanupOldData-5127647699334798f3e47d47] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 936.0 }
m30001| Fri Feb 22 12:28:38.959 [cleanupOldData-5127647699334798f3e47d47] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 936.0 }
m30001| Fri Feb 22 12:28:38.964 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 306, clonedBytes: 319158, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:38.972 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:38.972 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: MinKey } -> { _id: 461.0 }
m30001| Fri Feb 22 12:28:38.974 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 461, clonedBytes: 480823, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:38.974 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: MinKey } -> { _id: 461.0 }
m30001| Fri Feb 22 12:28:38.984 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 461, clonedBytes: 480823, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:38.984 [conn4] moveChunk setting version to: 2|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:38.984 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:38.985 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: MinKey } -> { _id: 461.0 }
m30000| Fri Feb 22 12:28:38.985 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: MinKey } -> { _id: 461.0 }
m30000| Fri Feb 22 12:28:38.985 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-51276476c49297cf54df55d5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536118985), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: MinKey }, max: { _id: 461.0 }, step1 of 5: 2, step2 of 5: 0, step3 of 5: 25, step4 of 5: 0, step5 of 5: 13 } }
m30999| Fri Feb 22 12:28:38.987 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 5 version: 1|216||51276475bd1f99446659365c based on: 1|216||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:38.987 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 1|216||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:38.987 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 1000|216, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 5
m30001| Fri Feb 22 12:28:38.987 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:28:38.995 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: MinKey }, max: { _id: 461.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 461, clonedBytes: 480823, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:38.995 [conn4] moveChunk updating self version to: 2|1||51276475bd1f99446659365c through { _id: 461.0 } -> { _id: 923.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:38.996 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-5127647699334798f3e47d4b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536118996), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: MinKey }, max: { _id: 461.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:38.996 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:38.996 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:28:38.996 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 1000|216, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 1000|216, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 2000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:28:38.996 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:38.996 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:38.996 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:38.996 [cleanupOldData-5127647699334798f3e47d4c] (start) waiting to cleanup test.bar from { _id: MinKey } -> { _id: 461.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:38.996 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:38.996 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:38-5127647699334798f3e47d4d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536118996), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: MinKey }, max: { _id: 461.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:38.997 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:38.998 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 6 version: 2|1||51276475bd1f99446659365c based on: 1|216||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:38.999 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 7 version: 2|1||51276475bd1f99446659365c based on: 1|216||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:39.000 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:39.000 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:39.002 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 8 version: 2|1||51276475bd1f99446659365c based on: 2|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:39.002 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 2|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:39.002 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 8
m30999| Fri Feb 22 12:28:39.002 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|216, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:39.016 [cleanupOldData-5127647699334798f3e47d4c] waiting to remove documents for test.bar from { _id: MinKey } -> { _id: 461.0 }
2000
m30999| Fri Feb 22 12:28:40.001 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:40.001 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:40.002 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:40 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276478bd1f99446659365e" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276476bd1f99446659365d" } }
m30999| Fri Feb 22 12:28:40.002 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276478bd1f99446659365e
m30999| Fri Feb 22 12:28:40.002 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:40.002 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:40.002 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:40.005 [Balancer] shard0001 has more chunks me:106 best: shard0000:1
m30999| Fri Feb 22 12:28:40.005 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:40.005 [Balancer] donor      : shard0001 chunks on 106
m30999| Fri Feb 22 12:28:40.005 [Balancer] receiver   : shard0000 chunks on 1
m30999| Fri Feb 22 12:28:40.005 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:28:40.005 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_936.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 936.0 }, max: { _id: 1873.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:40.008 [Balancer] shard0001 has more chunks me:216 best: shard0000:1
m30999| Fri Feb 22 12:28:40.008 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:40.008 [Balancer] donor      : shard0001 chunks on 216
m30999| Fri Feb 22 12:28:40.008 [Balancer] receiver   : shard0000 chunks on 1
m30999| Fri Feb 22 12:28:40.008 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:28:40.008 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_461.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 461.0 }, max: { _id: 923.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:40.008 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 936.0 }max: { _id: 1873.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:40.008 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:40.008 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 936.0 }, max: { _id: 1873.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_936.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:40.009 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647899334798f3e47d4e
m30001| Fri Feb 22 12:28:40.009 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-5127647899334798f3e47d4f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536120009), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 936.0 }, max: { _id: 1873.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:40.010 [conn4] moveChunk request accepted at version 2|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:40.013 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:40.014 [migrateThread] starting receiving-end of migration of chunk { _id: 936.0 } -> { _id: 1873.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:40.024 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 936.0 }, max: { _id: 1873.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 122, clonedBytes: 64660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:40.034 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 936.0 }, max: { _id: 1873.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 323, clonedBytes: 171190, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:40.044 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 936.0 }, max: { _id: 1873.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 524, clonedBytes: 277720, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:40.055 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 936.0 }, max: { _id: 1873.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 728, clonedBytes: 385840, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:40.069 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:40.069 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 936.0 } -> { _id: 1873.0 }
m30000| Fri Feb 22 12:28:40.069 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 936.0 } -> { _id: 1873.0 }
m30001| Fri Feb 22 12:28:40.071 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 936.0 }, max: { _id: 1873.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:40.071 [conn4] moveChunk setting version to: 3|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:40.071 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:40.073 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 9 version: 2|1||51276475bd1f99446659365b based on: 2|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:40.073 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 2|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:40.073 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 9
m30001| Fri Feb 22 12:28:40.073 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:40.079 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 936.0 } -> { _id: 1873.0 }
m30000| Fri Feb 22 12:28:40.079 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 936.0 } -> { _id: 1873.0 }
m30000| Fri Feb 22 12:28:40.080 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-51276478c49297cf54df55d6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536120079), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 936.0 }, max: { _id: 1873.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 54, step4 of 5: 0, step5 of 5: 10 } }
m30001| Fri Feb 22 12:28:40.081 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 936.0 }, max: { _id: 1873.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:40.081 [conn4] moveChunk updating self version to: 3|1||51276475bd1f99446659365b through { _id: 1873.0 } -> { _id: 2810.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:40.082 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-5127647899334798f3e47d50", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536120082), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 936.0 }, max: { _id: 1873.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:40.082 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:40.082 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:28:40.082
[conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 2000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 3000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:28:40.082 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:40.082 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:40.082 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:40.082 [cleanupOldData-5127647899334798f3e47d51] (start) waiting to cleanup test.foo from { _id: 936.0 } -> { _id: 1873.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:40.083 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
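[Note: the `setShardVersion failed!` entry above, with `reloadConfig: true` and errmsg "shard global version for collection is higher than trying to set to 'test.foo'", shows the shard rejecting a stale version from mongos. A minimal sketch of that check, assuming versions are simple (major, minor) pairs like the `2000|1` / `3000|0` timestamps in the log (illustrative only, not MongoDB source):]

```python
# Sketch of the version check behind "setShardVersion failed!" above:
# the shard refuses a requested version lower than its current global
# version and tells the router to reload config metadata and retry.
def check_shard_version(requested, global_version):
    """Versions are (major, minor) tuples, e.g. (2, 1) for log's 2|1."""
    if requested < global_version:
        return {
            "ok": 0,
            "reloadConfig": True,
            "errmsg": "shard global version for collection is higher",
        }
    return {"ok": 1}

# Mirrors the log: mongos sends 2|1 while the shard is at 3|0 -> rejected;
# after the forced chunk manager reload, 3|1 against 3|0 succeeds.
print(check_shard_version((2, 1), (3, 0)))
print(check_shard_version((3, 1), (3, 0)))
```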
m30001| Fri Feb 22 12:28:40.083 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-5127647899334798f3e47d52", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536120083), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 936.0 }, max: { _id: 1873.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:40.083 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:40.084 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 10 version: 3|1||51276475bd1f99446659365b based on: 2|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:40.085 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 11 version: 3|1||51276475bd1f99446659365b based on: 2|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:40.085 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 2|1||000000000000000000000000min: { _id: 461.0 }max: { _id: 923.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:40.086 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:40.086 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 461.0 }, max: { _id: 923.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_461.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:28:40.086 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 12 version: 3|1||51276475bd1f99446659365b based on: 3|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:40.087 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 3|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:40.087 [conn1] 
setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 12 m30999| Fri Feb 22 12:28:40.087 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:40.087 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647899334798f3e47d53 m30001| Fri Feb 22 12:28:40.087 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-5127647899334798f3e47d54", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536120087), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 461.0 }, max: { _id: 923.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:40.088 [conn4] moveChunk request accepted at version 2|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:40.090 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:40.090 [migrateThread] starting receiving-end of migration of chunk { _id: 461.0 } -> { _id: 923.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:40.100 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 923.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 127, clonedBytes: 132461, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:40.102 [cleanupOldData-5127647899334798f3e47d51] waiting to remove documents for test.foo from { _id: 936.0 } -> { _id: 1873.0 } m30001| Fri Feb 22 12:28:40.110 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 
461.0 }, max: { _id: 923.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 304, clonedBytes: 317072, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:40.120 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:40.120 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 461.0 } -> { _id: 923.0 } m30001| Fri Feb 22 12:28:40.121 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 923.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:40.122 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 461.0 } -> { _id: 923.0 } m30001| Fri Feb 22 12:28:40.131 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 923.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:40.131 [conn4] moveChunk setting version to: 3|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:40.131 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:28:40.133 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 461.0 } -> { _id: 923.0 } m30000| Fri Feb 22 12:28:40.133 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 461.0 } -> { _id: 923.0 } m30000| Fri Feb 22 12:28:40.133 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-51276478c49297cf54df55d7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536120133), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 461.0 }, max: { _id: 923.0 }, step1 of 5: 0, 
step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } } m30999| Fri Feb 22 12:28:40.140 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 13 version: 2|1||51276475bd1f99446659365c based on: 2|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:40.140 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 2|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:40.140 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 2000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 13 m30001| Fri Feb 22 12:28:40.140 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:40.141 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 461.0 }, max: { _id: 923.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:40.141 [conn4] moveChunk updating self version to: 3|1||51276475bd1f99446659365c through { _id: 923.0 } -> { _id: 1385.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:40.142 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-5127647899334798f3e47d55", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536120142), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 461.0 }, max: { _id: 923.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:40.142 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:28:40.142 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:28:40.142 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 2000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 3000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:40.142 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:40.142 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:40.142 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:40.142 [cleanupOldData-5127647899334798f3e47d56] (start) waiting to cleanup test.bar from { _id: 461.0 } -> { _id: 923.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:40.143 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:40.143 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:40-5127647899334798f3e47d57", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536120143), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 461.0 }, max: { _id: 923.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:40.143 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:40.144 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 14 version: 3|1||51276475bd1f99446659365c based on: 2|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:40.146 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 15 version: 3|1||51276475bd1f99446659365c based on: 2|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:40.146 [cleanupOldData-5127647699334798f3e47d47] moveChunk deleted 936 documents for test.foo from { _id: MinKey } -> { _id: 936.0 } m30001| Fri Feb 22 12:28:40.146 [cleanupOldData-5127647899334798f3e47d51] moveChunk starting delete for: test.foo from { _id: 936.0 } -> { _id: 1873.0 } m30999| Fri Feb 22 12:28:40.147 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:40.147 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:28:40.148 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 16 version: 3|1||51276475bd1f99446659365c based on: 3|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:40.148 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 3|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:40.149 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 16 m30999| Fri Feb 22 12:28:40.149 [conn1] setShardVersion success: { oldVersion: Timestamp 2000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:28:40.162 [cleanupOldData-5127647899334798f3e47d56] waiting to remove documents for test.bar from { _id: 461.0 } -> { _id: 923.0 } 3000 m30999| Fri Feb 22 12:28:41.148 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:28:41.148 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:28:41.149 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:28:41 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276479bd1f99446659365f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276478bd1f99446659365e" } } m30999| Fri Feb 22 12:28:41.150 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276479bd1f99446659365f m30999| Fri Feb 22 12:28:41.150 [Balancer] *** start balancing round m30999| Fri Feb 22 12:28:41.150 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:28:41.150 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:28:41.152 [Balancer] shard0001 has more chunks me:105 best: shard0000:2 m30999| Fri Feb 22 12:28:41.152 [Balancer] collection : test.foo m30999| Fri Feb 22 12:28:41.152 [Balancer] donor : shard0001 chunks on 105 m30999| Fri Feb 22 12:28:41.152 [Balancer] receiver : shard0000 chunks on 2 m30999| Fri Feb 22 12:28:41.152 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:41.152 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1873.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:41.155 [Balancer] shard0001 has more chunks me:215 best: shard0000:2 m30999| Fri Feb 22 12:28:41.155 [Balancer] collection : test.bar m30999| Fri Feb 22 12:28:41.155 [Balancer] donor : shard0001 chunks on 215 m30999| Fri Feb 22 12:28:41.155 [Balancer] receiver : shard0000 chunks on 2 m30999| Fri Feb 22 12:28:41.155 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:41.155 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_923.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 923.0 }, max: { _id: 1385.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:41.156 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 1873.0 }max: { _id: 2810.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:41.156 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:28:41.156 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1873.0 }, max: { _id: 2810.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1873.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:41.157 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647999334798f3e47d58 m30001| Fri Feb 22 12:28:41.157 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-5127647999334798f3e47d59", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536121157), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1873.0 }, max: { _id: 2810.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:41.158 [conn4] moveChunk request accepted at version 3|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:41.161 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:28:41.162 [migrateThread] starting receiving-end of migration of chunk { _id: 1873.0 } -> { _id: 2810.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:41.172 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 68370, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:41.182 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 331, clonedBytes: 175430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:41.192 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 492, clonedBytes: 260760, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:41.203 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 689, clonedBytes: 365170, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:41.216 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:41.216 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1873.0 } -> { _id: 2810.0 } m30000| Fri Feb 22 12:28:41.218 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1873.0 } -> { _id: 2810.0 } m30001| Fri Feb 22 12:28:41.219 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:41.219 [conn4] moveChunk setting version to: 4|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:28:41.219 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:28:41.228 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1873.0 } -> { _id: 2810.0 } m30000| Fri Feb 22 12:28:41.228 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1873.0 } -> { _id: 2810.0 } m30000| Fri Feb 22 12:28:41.229 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-51276479c49297cf54df55d8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536121229), what: "moveChunk.to", ns: "test.foo", 
details: { min: { _id: 1873.0 }, max: { _id: 2810.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 54, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:28:41.229 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 1873.0 }, max: { _id: 2810.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:41.229 [conn4] moveChunk updating self version to: 4|1||51276475bd1f99446659365b through { _id: 2810.0 } -> { _id: 3747.0 } for collection 'test.foo' m30001| Fri Feb 22 12:28:41.230 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-5127647999334798f3e47d5a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536121230), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1873.0 }, max: { _id: 2810.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:41.230 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:41.230 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:41.230 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:41.230 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:41.230 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:41.230 [cleanupOldData-5127647999334798f3e47d5b] (start) waiting to cleanup test.foo from { _id: 1873.0 } -> { _id: 2810.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:41.231 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
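[Note: the commit above bumps versions in a fixed pattern: "moveChunk setting version to: 4|0" for the migrated chunk on the receiver, then "moveChunk updating self version to: 4|1" for a remaining chunk on the donor. A sketch of that bump under the same (major, minor) reading of the log's `N|M` versions (illustrative, not MongoDB source):]

```python
# After a successful migration the collection's major version increments:
# the moved chunk takes (major+1)|0 on the receiving shard, and the donor
# bumps one of its remaining chunks to (major+1)|1 so its shard version
# stays current and stale routers get rejected.
def bump_versions(collection_major):
    new_major = collection_major + 1
    moved_chunk_version = (new_major, 0)  # e.g. 4|0 in the round above
    donor_self_version = (new_major, 1)   # e.g. 4|1 in the round above
    return moved_chunk_version, donor_self_version

print(bump_versions(3))  # the 3|1 -> 4|0 / 4|1 transition logged above
```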
m30001| Fri Feb 22 12:28:41.231 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-5127647999334798f3e47d5c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536121231), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1873.0 }, max: { _id: 2810.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:41.231 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:41.232 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 17 version: 4|1||51276475bd1f99446659365b based on: 3|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:41.233 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 3|1||000000000000000000000000min: { _id: 923.0 }max: { _id: 1385.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:41.233 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:41.233 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 923.0 }, max: { _id: 1385.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_923.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:41.234 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647999334798f3e47d5d m30001| Fri Feb 22 12:28:41.234 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-5127647999334798f3e47d5e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536121234), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 923.0 }, max: { _id: 1385.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 
12:28:41.235 [conn4] moveChunk request accepted at version 3|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:41.237 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:41.237 [migrateThread] starting receiving-end of migration of chunk { _id: 923.0 } -> { _id: 1385.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30999| Fri Feb 22 12:28:41.237 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 17 m30999| Fri Feb 22 12:28:41.237 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:41.247 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 923.0 }, max: { _id: 1385.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 134547, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:41.250 [cleanupOldData-5127647999334798f3e47d5b] waiting to remove documents for test.foo from { _id: 1873.0 } -> { _id: 2810.0 } m30001| Fri Feb 22 12:28:41.257 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 923.0 }, max: { _id: 1385.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 308, clonedBytes: 321244, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:41.265 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:41.265 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 923.0 } -> { _id: 1385.0 } m30001| Fri Feb 22 12:28:41.268 [conn4] moveChunk data transfer progress: { active: true, 
ns: "test.bar", from: "localhost:30001", min: { _id: 923.0 }, max: { _id: 1385.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:41.268 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 923.0 } -> { _id: 1385.0 } m30001| Fri Feb 22 12:28:41.278 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 923.0 }, max: { _id: 1385.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:41.278 [conn4] moveChunk setting version to: 4|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:41.278 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:28:41.278 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 923.0 } -> { _id: 1385.0 } m30000| Fri Feb 22 12:28:41.278 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 923.0 } -> { _id: 1385.0 } m30000| Fri Feb 22 12:28:41.278 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-51276479c49297cf54df55d9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536121278), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 923.0 }, max: { _id: 1385.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 27, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:28:41.280 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 18 version: 3|1||51276475bd1f99446659365c based on: 3|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:41.280 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 3|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:41.280 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: 
"test.bar", configdb: "localhost:30000", version: Timestamp 3000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 18 m30001| Fri Feb 22 12:28:41.281 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:41.285 [cleanupOldData-5127647899334798f3e47d51] moveChunk deleted 937 documents for test.foo from { _id: 936.0 } -> { _id: 1873.0 } m30001| Fri Feb 22 12:28:41.285 [cleanupOldData-5127647999334798f3e47d5b] moveChunk starting delete for: test.foo from { _id: 1873.0 } -> { _id: 2810.0 } m30001| Fri Feb 22 12:28:41.288 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 923.0 }, max: { _id: 1385.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:41.288 [conn4] moveChunk updating self version to: 4|1||51276475bd1f99446659365c through { _id: 1385.0 } -> { _id: 1847.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:41.289 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-5127647999334798f3e47d5f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536121289), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 923.0 }, max: { _id: 1385.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:41.289 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:28:41.289 [conn1] setShardVersion failed! 
 m30001| Fri Feb 22 12:28:41.289 [conn4] MigrateFromStatus::done Global lock acquired
 m30999| { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 3000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 4000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:28:41.289 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:28:41.289 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:41.289 [cleanupOldData-5127647999334798f3e47d60] (start) waiting to cleanup test.bar from { _id: 923.0 } -> { _id: 1385.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:41.289 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:41.290 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:41.290 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:41-5127647999334798f3e47d61", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536121290), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 923.0 }, max: { _id: 1385.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:41.290 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:41.291 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 19 version: 4|1||51276475bd1f99446659365c based on: 3|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:41.293 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 20 version: 4|1||51276475bd1f99446659365c based on: 3|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:41.294 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:28:41.294 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
 m30999| Fri Feb 22 12:28:41.295 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 21 version: 4|1||51276475bd1f99446659365c based on: 4|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:41.295 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 4|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:41.295 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 21
 m30999| Fri Feb 22 12:28:41.296 [conn1] setShardVersion success: { oldVersion: Timestamp 3000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:28:41.309 [cleanupOldData-5127647999334798f3e47d60] waiting to remove documents for test.bar from { _id: 923.0 } -> { _id: 1385.0 }
 m30001| Fri Feb 22 12:28:42.119 [cleanupOldData-5127647999334798f3e47d5b] moveChunk deleted 937 documents for test.foo from { _id: 1873.0 } -> { _id: 2810.0 }
 m30001| Fri Feb 22 12:28:42.119 [cleanupOldData-5127647999334798f3e47d60] moveChunk starting delete for: test.bar from { _id: 923.0 } -> { _id: 1385.0 }
4000
 m30999| Fri Feb 22 12:28:42.294 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:28:42.295 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:28:42.295 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:28:42 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127647abd1f994466593660" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "51276479bd1f99446659365f" } }
 m30999| Fri Feb 22 12:28:42.295 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127647abd1f994466593660
 m30999| Fri Feb 22 12:28:42.295 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:28:42.295 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:28:42.295 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:28:42.297 [Balancer] shard0001 has more chunks me:104 best: shard0000:3
 m30999| Fri Feb 22 12:28:42.297 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:28:42.297 [Balancer] donor      : shard0001 chunks on 104
 m30999| Fri Feb 22 12:28:42.297 [Balancer] receiver   : shard0000 chunks on 3
 m30999| Fri Feb 22 12:28:42.297 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:28:42.297 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_2810.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:28:42.299 [Balancer] shard0001 has more chunks me:214 best: shard0000:3
 m30999| Fri Feb 22 12:28:42.299 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:28:42.299 [Balancer] donor      : shard0001 chunks on 214
 m30999| Fri Feb 22 12:28:42.299 [Balancer] receiver   : shard0000 chunks on 3
 m30999| Fri Feb 22 12:28:42.299 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:28:42.299 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_1385.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 1385.0 }, max: { _id: 1847.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:28:42.299 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 2810.0 }max: { _id: 3747.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:28:42.300 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:28:42.300 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2810.0 }, max: { _id: 3747.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_2810.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:28:42.300 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647a99334798f3e47d62
 m30001| Fri Feb 22 12:28:42.300 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647a99334798f3e47d63", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536122300), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 2810.0 }, max: { _id: 3747.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:42.301 [conn4] moveChunk request accepted at version 4|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:28:42.303 [conn4] moveChunk number of documents: 937
 m30000| Fri Feb 22 12:28:42.304 [migrateThread] starting receiving-end of migration of chunk { _id: 2810.0 } -> { _id: 3747.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:28:42.314 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 132, clonedBytes: 69960, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:42.324 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 314, clonedBytes: 166420, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:42.334 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 516, clonedBytes: 273480, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:42.344 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 722, clonedBytes: 382660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:28:42.355 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:28:42.355 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2810.0 } -> { _id: 3747.0 }
 m30000| Fri Feb 22 12:28:42.359 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2810.0 } -> { _id: 3747.0 }
 m30001| Fri Feb 22 12:28:42.361 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:42.361 [conn4] moveChunk setting version to: 5|0||51276475bd1f99446659365b
 m30000| Fri Feb 22 12:28:42.361 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:28:42.363 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 22 version: 4|1||51276475bd1f99446659365b based on: 4|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:42.363 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 4|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:42.363 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 22
 m30001| Fri Feb 22 12:28:42.363 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:28:42.369 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 2810.0 } -> { _id: 3747.0 }
 m30000| Fri Feb 22 12:28:42.369 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 2810.0 } -> { _id: 3747.0 }
 m30000| Fri Feb 22 12:28:42.369 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647ac49297cf54df55da", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536122369), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 2810.0 }, max: { _id: 3747.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } }
 m30001| Fri Feb 22 12:28:42.371 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 2810.0 }, max: { _id: 3747.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:28:42.371 [conn4] moveChunk updating self version to: 5|1||51276475bd1f99446659365b through { _id: 3747.0 } -> { _id: 4684.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:28:42.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647a99334798f3e47d64", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536122372), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 2810.0 }, max: { _id: 3747.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:28:42.372 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 4000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 5000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
 m30001| Fri Feb 22 12:28:42.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:42.372 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:42.372 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:28:42.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:42.372 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:42.372 [cleanupOldData-5127647a99334798f3e47d65] (start) waiting to cleanup test.foo from { _id: 2810.0 } -> { _id: 3747.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:42.372 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:42.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647a99334798f3e47d66", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536122372), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 2810.0 }, max: { _id: 3747.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:42.372 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:42.373 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 23 version: 5|1||51276475bd1f99446659365b based on: 4|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:42.374 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 24 version: 5|1||51276475bd1f99446659365b based on: 4|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:42.374 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 4|1||000000000000000000000000min: { _id: 1385.0 }max: { _id: 1847.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:28:42.375 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:28:42.375 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1385.0 }, max: { _id: 1847.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_1385.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30999| Fri Feb 22 12:28:42.375 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 25 version: 5|1||51276475bd1f99446659365b based on: 5|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:28:42.375 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647a99334798f3e47d67
 m30001| Fri Feb 22 12:28:42.375 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647a99334798f3e47d68", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536122375), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 1385.0 }, max: { _id: 1847.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:28:42.376 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 5|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:42.376 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 25
 m30999| Fri Feb 22 12:28:42.376 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
 m30001| Fri Feb 22 12:28:42.376 [conn4] moveChunk request accepted at version 4|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:28:42.377 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:28:42.377 [migrateThread] starting receiving-end of migration of chunk { _id: 1385.0 } -> { _id: 1847.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:28:42.388 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 1385.0 }, max: { _id: 1847.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 186, clonedBytes: 193998, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:42.392 [cleanupOldData-5127647a99334798f3e47d65] waiting to remove documents for test.foo from { _id: 2810.0 } -> { _id: 3747.0 }
 m30001| Fri Feb 22 12:28:42.398 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 1385.0 }, max: { _id: 1847.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 451, clonedBytes: 470393, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:28:42.398 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:28:42.398 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 1385.0 } -> { _id: 1847.0 }
 m30000| Fri Feb 22 12:28:42.400 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 1385.0 } -> { _id: 1847.0 }
 m30001| Fri Feb 22 12:28:42.408 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 1385.0 }, max: { _id: 1847.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:42.408 [conn4] moveChunk setting version to: 5|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:28:42.408 [conn11] Waiting for commit to finish
 m30000| Fri Feb 22 12:28:42.411 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 1385.0 } -> { _id: 1847.0 }
 m30000| Fri Feb 22 12:28:42.411 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 1385.0 } -> { _id: 1847.0 }
 m30999| Fri Feb 22 12:28:42.411 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 26 version: 4|1||51276475bd1f99446659365c based on: 4|1||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:28:42.411 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647ac49297cf54df55db", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536122411), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 1385.0 }, max: { _id: 1847.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } }
 m30999| Fri Feb 22 12:28:42.411 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 4|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:42.411 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 4000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 26
 m30001| Fri Feb 22 12:28:42.411 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:28:42.418 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 1385.0 }, max: { _id: 1847.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:28:42.418 [conn4] moveChunk updating self version to: 5|1||51276475bd1f99446659365c through { _id: 1847.0 } -> { _id: 2309.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:28:42.419 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647a99334798f3e47d69", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536122419), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 1385.0 }, max: { _id: 1847.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:42.419 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:42.419 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:42.419 [conn4] forking for cleanup of chunk data
 m30999| Fri Feb 22 12:28:42.419 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 4000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 5000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:28:42.419 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:42.419 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:42.419 [cleanupOldData-5127647a99334798f3e47d6a] (start) waiting to cleanup test.bar from { _id: 1385.0 } -> { _id: 1847.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:42.420 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:42.420 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:42-5127647a99334798f3e47d6b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536122420), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 1385.0 }, max: { _id: 1847.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:42.420 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:42.421 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 27 version: 5|1||51276475bd1f99446659365c based on: 4|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:42.422 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 28 version: 5|1||51276475bd1f99446659365c based on: 4|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:42.423 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:28:42.423 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
 m30999| Fri Feb 22 12:28:42.424 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 29 version: 5|1||51276475bd1f99446659365c based on: 5|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:42.425 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 5|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:42.425 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 29
 m30999| Fri Feb 22 12:28:42.425 [conn1] setShardVersion success: { oldVersion: Timestamp 4000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:28:42.439 [cleanupOldData-5127647a99334798f3e47d6a] waiting to remove documents for test.bar from { _id: 1385.0 } -> { _id: 1847.0 }
 m30001| Fri Feb 22 12:28:42.919 [cleanupOldData-5127647999334798f3e47d60] moveChunk deleted 462 documents for test.bar from { _id: 923.0 } -> { _id: 1385.0 }
 m30001| Fri Feb 22 12:28:42.919 [cleanupOldData-5127647a99334798f3e47d6a] moveChunk starting delete for: test.bar from { _id: 1385.0 } -> { _id: 1847.0 }
5000
 m30999| Fri Feb 22 12:28:43.424 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:28:43.424 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:28:43.424 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:28:43 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127647bbd1f994466593661" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "5127647abd1f994466593660" } }
 m30999| Fri Feb 22 12:28:43.425 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127647bbd1f994466593661
 m30999| Fri Feb 22 12:28:43.425 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:28:43.425 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:28:43.425 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:28:43.427 [Balancer] shard0001 has more chunks me:103 best: shard0000:4
 m30999| Fri Feb 22 12:28:43.427 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:28:43.427 [Balancer] donor      : shard0001 chunks on 103
 m30999| Fri Feb 22 12:28:43.427 [Balancer] receiver   : shard0000 chunks on 4
 m30999| Fri Feb 22 12:28:43.427 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:28:43.427 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_3747.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 3747.0 }, max: { _id: 4684.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:28:43.429 [Balancer] shard0001 has more chunks me:213 best: shard0000:4
 m30999| Fri Feb 22 12:28:43.429 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:28:43.429 [Balancer] donor      : shard0001 chunks on 213
 m30999| Fri Feb 22 12:28:43.429 [Balancer] receiver   : shard0000 chunks on 4
 m30999| Fri Feb 22 12:28:43.429 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:28:43.429 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_1847.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 1847.0 }, max: { _id: 2309.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:28:43.429 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 3747.0 }max: { _id: 4684.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:28:43.429 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:28:43.429 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 3747.0 }, max: { _id: 4684.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_3747.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:28:43.430 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647b99334798f3e47d6c
 m30001| Fri Feb 22 12:28:43.430 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647b99334798f3e47d6d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536123430), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 3747.0 }, max: { _id: 4684.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:43.431 [conn4] moveChunk request accepted at version 5|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:28:43.433 [conn4] moveChunk number of documents: 937
 m30000| Fri Feb 22 12:28:43.433 [migrateThread] starting receiving-end of migration of chunk { _id: 3747.0 } -> { _id: 4684.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:28:43.444 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 3747.0 }, max: { _id: 4684.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 197, clonedBytes: 104410, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:43.454 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 3747.0 }, max: { _id: 4684.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 458, clonedBytes: 242740, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:43.464 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 3747.0 }, max: { _id: 4684.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 756, clonedBytes: 400680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:28:43.470 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:28:43.470 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 3747.0 } -> { _id: 4684.0 }
 m30000| Fri Feb 22 12:28:43.472 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 3747.0 } -> { _id: 4684.0 }
 m30001| Fri Feb 22 12:28:43.474 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 3747.0 }, max: { _id: 4684.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:28:43.474 [conn4] moveChunk setting version to: 6|0||51276475bd1f99446659365b
 m30000| Fri Feb 22 12:28:43.474 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:28:43.477 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 30 version: 5|1||51276475bd1f99446659365b based on: 5|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:43.477 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 5|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:43.477 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 30
 m30001| Fri Feb 22 12:28:43.477 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:28:43.479 [cleanupOldData-5127647a99334798f3e47d6a] moveChunk deleted 462 documents for test.bar from { _id: 1385.0 } -> { _id: 1847.0 }
 m30001| Fri Feb 22 12:28:43.480 [cleanupOldData-5127647a99334798f3e47d65] moveChunk starting delete for: test.foo from { _id: 2810.0 } -> { _id: 3747.0 }
 m30000| Fri Feb 22 12:28:43.482 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 3747.0 } -> { _id: 4684.0 }
 m30000| Fri Feb 22 12:28:43.482 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 3747.0 } -> { _id: 4684.0 }
 m30000| Fri Feb 22 12:28:43.482 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647bc49297cf54df55dc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536123482), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 3747.0 }, max: { _id: 4684.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 36, step4 of 5: 0, step5 of 5: 12 } }
 m30001| Fri Feb 22 12:28:43.484 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 3747.0 }, max: { _id: 4684.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:28:43.484 [conn4] moveChunk updating self version to: 6|1||51276475bd1f99446659365b through { _id: 4684.0 } -> { _id: 5621.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:28:43.485 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647b99334798f3e47d6e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536123485), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 3747.0 }, max: { _id: 4684.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:43.485 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:43.485 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:43.485 [conn4] forking for cleanup of chunk data
 m30999| Fri Feb 22 12:28:43.485 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:28:43.485 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 5000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 6000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
 m30001| Fri Feb 22 12:28:43.485 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:43.485 [cleanupOldData-5127647b99334798f3e47d6f] (start) waiting to cleanup test.foo from { _id: 3747.0 } -> { _id: 4684.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:43.486 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:43.486 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647b99334798f3e47d70", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536123486), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 3747.0 }, max: { _id: 4684.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:43.486 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:43.487 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 31 version: 6|1||51276475bd1f99446659365b based on: 5|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:43.488 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 32 version: 6|1||51276475bd1f99446659365b based on: 5|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:43.488 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 5|1||000000000000000000000000min: { _id: 1847.0 }max: { _id: 2309.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:43.488 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:43.488 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 1847.0 }, max: { _id: 2309.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_1847.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:43.489 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647b99334798f3e47d71 m30001| Fri Feb 22 12:28:43.489 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647b99334798f3e47d72", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: 
"127.0.0.1:57660", time: new Date(1361536123489), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 1847.0 }, max: { _id: 2309.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:28:43.489 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 33 version: 6|1||51276475bd1f99446659365b based on: 6|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:43.489 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 6|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:43.490 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 33 m30999| Fri Feb 22 12:28:43.490 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:43.490 [conn4] moveChunk request accepted at version 5|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:43.491 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:43.491 [migrateThread] starting receiving-end of migration of chunk { _id: 1847.0 } -> { _id: 2309.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:43.501 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 1847.0 }, max: { _id: 2309.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 173, clonedBytes: 180439, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:43.505 [cleanupOldData-5127647b99334798f3e47d6f] waiting to remove documents for test.foo from { _id: 3747.0 } -> { _id: 4684.0 } m30001| Fri Feb 22 12:28:43.511 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 1847.0 }, max: { _id: 2309.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 419, clonedBytes: 437017, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:43.513 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:43.513 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 1847.0 } -> { _id: 2309.0 } m30000| Fri Feb 22 12:28:43.515 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 1847.0 } -> { _id: 2309.0 } m30001| Fri Feb 22 12:28:43.522 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 1847.0 }, max: { _id: 2309.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:43.522 [conn4] moveChunk setting version to: 6|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:43.522 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:43.525 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 34 version: 5|1||51276475bd1f99446659365c based on: 5|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:43.525 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 5|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:43.525 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 5000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 34 m30001| Fri Feb 22 12:28:43.525 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:43.525 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 1847.0 } -> { _id: 2309.0 } 
m30000| Fri Feb 22 12:28:43.525 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 1847.0 } -> { _id: 2309.0 } m30000| Fri Feb 22 12:28:43.525 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647bc49297cf54df55dd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536123525), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 1847.0 }, max: { _id: 2309.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:28:43.532 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 1847.0 }, max: { _id: 2309.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:43.532 [conn4] moveChunk updating self version to: 6|1||51276475bd1f99446659365c through { _id: 2309.0 } -> { _id: 2771.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:43.533 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647b99334798f3e47d73", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536123533), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 1847.0 }, max: { _id: 2309.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:43.533 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:43.533 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:43.533 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:43.533 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:28:43.533 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 5000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 6000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:43.533 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:43.533 [cleanupOldData-5127647b99334798f3e47d74] (start) waiting to cleanup test.bar from { _id: 1847.0 } -> { _id: 2309.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:43.534 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:28:43.534 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:43-5127647b99334798f3e47d75", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536123534), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 1847.0 }, max: { _id: 2309.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:43.534 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:43.535 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 35 version: 6|1||51276475bd1f99446659365c based on: 5|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:43.537 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 36 version: 6|1||51276475bd1f99446659365c based on: 5|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:43.537 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:43.538 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:28:43.539 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 37 version: 6|1||51276475bd1f99446659365c based on: 6|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:43.539 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 6|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:43.539 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 37
m30999| Fri Feb 22 12:28:43.540 [conn1] setShardVersion success: { oldVersion: Timestamp 5000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:43.553 [cleanupOldData-5127647b99334798f3e47d74] waiting to remove documents for test.bar from { _id: 1847.0 } -> { _id: 2309.0 }
6000
m30999| Fri Feb 22 12:28:44.538 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:44.539 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:44.539 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:28:44 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127647cbd1f994466593662" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127647bbd1f994466593661" } }
m30999| Fri Feb 22 12:28:44.540 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127647cbd1f994466593662
m30999| Fri Feb 22 12:28:44.540 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:44.540 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:44.540 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:44.542 [Balancer] shard0001 has more chunks me:102 best: shard0000:5
m30999| Fri Feb 22 12:28:44.542 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:44.542 [Balancer] donor : shard0001 chunks on 102
m30999| Fri Feb 22 12:28:44.542 [Balancer] receiver : shard0000 chunks on 5
m30999| Fri Feb 22 12:28:44.542 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:44.542 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_4684.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:44.545 [Balancer] shard0001 has more chunks me:212 best: shard0000:5
m30999| Fri Feb 22 12:28:44.545 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:44.545 [Balancer] donor : shard0001 chunks on 212
m30999| Fri Feb 22 12:28:44.545 [Balancer] receiver : shard0000 chunks on 5
m30999| Fri Feb 22 12:28:44.545 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:44.545 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_2309.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 2309.0 }, max: { _id: 2771.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:44.545 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: 4684.0 }max: { _id: 5621.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:44.545 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22
12:28:44.546 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 4684.0 }, max: { _id: 5621.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_4684.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:44.546 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647c99334798f3e47d76 m30001| Fri Feb 22 12:28:44.547 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647c99334798f3e47d77", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536124547), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 4684.0 }, max: { _id: 5621.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:44.548 [conn4] moveChunk request accepted at version 6|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:44.551 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:28:44.551 [migrateThread] starting receiving-end of migration of chunk { _id: 4684.0 } -> { _id: 5621.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:44.561 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 63600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:44.571 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 296, clonedBytes: 156880, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:44.581 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 496, clonedBytes: 262880, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:44.592 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 701, clonedBytes: 371530, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:44.604 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:44.605 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4684.0 } -> { _id: 5621.0 } m30000| Fri Feb 22 12:28:44.607 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4684.0 } -> { _id: 5621.0 } m30001| Fri Feb 22 12:28:44.608 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:44.608 [conn4] moveChunk setting version to: 7|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:28:44.608 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:44.610 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 38 version: 6|1||51276475bd1f99446659365b based on: 6|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:44.610 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 6|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:44.610 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), 
serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 38 m30001| Fri Feb 22 12:28:44.610 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:44.617 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 4684.0 } -> { _id: 5621.0 } m30000| Fri Feb 22 12:28:44.617 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 4684.0 } -> { _id: 5621.0 } m30000| Fri Feb 22 12:28:44.617 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647cc49297cf54df55de", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536124617), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 4684.0 }, max: { _id: 5621.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 52, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:28:44.618 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 4684.0 }, max: { _id: 5621.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:44.618 [conn4] moveChunk updating self version to: 7|1||51276475bd1f99446659365b through { _id: 5621.0 } -> { _id: 6558.0 } for collection 'test.foo' m30001| Fri Feb 22 12:28:44.619 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647c99334798f3e47d78", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536124619), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 4684.0 }, max: { _id: 5621.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:44.619 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:44.619 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 
22 12:28:44.619 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 6000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 7000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:28:44.619 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:44.619 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:44.619 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:44.619 [cleanupOldData-5127647c99334798f3e47d79] (start) waiting to cleanup test.foo from { _id: 4684.0 } -> { _id: 5621.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:44.619 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:44.620 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647c99334798f3e47d7a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536124619), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 4684.0 }, max: { _id: 5621.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:44.620 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:44.620 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 39 version: 7|1||51276475bd1f99446659365b based on: 6|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:44.622 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 40 version: 7|1||51276475bd1f99446659365b based on: 6|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:44.622 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 6|1||000000000000000000000000min: { _id: 2309.0 }max: { _id: 2771.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:44.623 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:44.623 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2309.0 }, max: { _id: 2771.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_2309.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:28:44.623 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 41 version: 7|1||51276475bd1f99446659365b based on: 7|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:44.623 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 7|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:44.624 
[conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 41 m30999| Fri Feb 22 12:28:44.624 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:44.624 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647c99334798f3e47d7b m30001| Fri Feb 22 12:28:44.624 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647c99334798f3e47d7c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536124624), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 2309.0 }, max: { _id: 2771.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:44.625 [conn4] moveChunk request accepted at version 6|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:44.627 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:44.627 [migrateThread] starting receiving-end of migration of chunk { _id: 2309.0 } -> { _id: 2771.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:44.637 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 2309.0 }, max: { _id: 2771.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 126, clonedBytes: 131418, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:44.639 [cleanupOldData-5127647c99334798f3e47d79] waiting to remove documents for test.foo from { _id: 4684.0 } -> { _id: 5621.0 } m30001| Fri Feb 22 12:28:44.647 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", 
min: { _id: 2309.0 }, max: { _id: 2771.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 308, clonedBytes: 321244, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:44.656 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:44.656 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 2309.0 } -> { _id: 2771.0 } m30001| Fri Feb 22 12:28:44.658 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 2309.0 }, max: { _id: 2771.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:44.659 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 2309.0 } -> { _id: 2771.0 } m30001| Fri Feb 22 12:28:44.668 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 2309.0 }, max: { _id: 2771.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:44.668 [conn4] moveChunk setting version to: 7|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:44.668 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:28:44.669 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 2309.0 } -> { _id: 2771.0 } m30000| Fri Feb 22 12:28:44.669 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 2309.0 } -> { _id: 2771.0 } m30000| Fri Feb 22 12:28:44.670 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647cc49297cf54df55df", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536124670), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 2309.0 }, max: { 
_id: 2771.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 13 } } m30999| Fri Feb 22 12:28:44.671 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 42 version: 6|1||51276475bd1f99446659365c based on: 6|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:44.671 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 6|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:44.671 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 6000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 42 m30001| Fri Feb 22 12:28:44.671 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:44.678 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 2309.0 }, max: { _id: 2771.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:44.678 [conn4] moveChunk updating self version to: 7|1||51276475bd1f99446659365c through { _id: 2771.0 } -> { _id: 3233.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:44.679 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647c99334798f3e47d7d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536124679), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 2309.0 }, max: { _id: 2771.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:44.679 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:28:44.679 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:28:44.679 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 6000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 7000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:44.679 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:44.679 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:44.679 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:44.679 [cleanupOldData-5127647c99334798f3e47d7e] (start) waiting to cleanup test.bar from { _id: 2309.0 } -> { _id: 2771.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:44.680 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:44.680 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:44-5127647c99334798f3e47d7f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536124680), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 2309.0 }, max: { _id: 2771.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:44.680 [Balancer] moveChunk result: { ok: 1.0 } m30001| Fri Feb 22 12:28:44.681 [cleanupOldData-5127647a99334798f3e47d65] moveChunk deleted 937 documents for test.foo from { _id: 2810.0 } -> { _id: 3747.0 } m30001| Fri Feb 22 12:28:44.681 [cleanupOldData-5127647c99334798f3e47d79] moveChunk starting delete for: test.foo from { _id: 4684.0 } -> { _id: 5621.0 } m30999| Fri Feb 22 12:28:44.681 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 43 version: 7|1||51276475bd1f99446659365c based on: 6|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:44.683 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 44 version: 7|1||51276475bd1f99446659365c based on: 6|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:44.684 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:44.684 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:28:44.685 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 45 version: 7|1||51276475bd1f99446659365c based on: 7|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:44.686 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 7|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:44.686 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 45
m30999| Fri Feb 22 12:28:44.686 [conn1] setShardVersion success: { oldVersion: Timestamp 6000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:44.699 [cleanupOldData-5127647c99334798f3e47d7e] waiting to remove documents for test.bar from { _id: 2309.0 } -> { _id: 2771.0 }
7000
m30001| Fri Feb 22 12:28:45.594 [cleanupOldData-5127647c99334798f3e47d79] moveChunk deleted 937 documents for test.foo from { _id: 4684.0 } -> { _id: 5621.0 }
m30001| Fri Feb 22 12:28:45.594 [cleanupOldData-5127647c99334798f3e47d7e] moveChunk starting delete for: test.bar from { _id: 2309.0 } -> { _id: 2771.0 }
m30999| Fri Feb 22 12:28:45.685 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:45.685 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:45.686 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:45 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127647dbd1f994466593663" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127647cbd1f994466593662" } }
m30999| Fri Feb 22 12:28:45.686 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127647dbd1f994466593663
m30999| Fri Feb 22 12:28:45.686 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:45.686 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:45.686 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:45.688 [Balancer] shard0001 has more chunks me:101 best: shard0000:6
m30999| Fri Feb 22 12:28:45.689 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:45.689 [Balancer] donor : shard0001 chunks on 101
m30999| Fri Feb 22 12:28:45.689 [Balancer] receiver : shard0000 chunks on 6
m30999| Fri Feb 22 12:28:45.689 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:45.689 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_5621.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:45.691 [Balancer] shard0001 has more chunks me:211 best: shard0000:6
m30999| Fri Feb 22 12:28:45.691 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:45.691 [Balancer] donor : shard0001 chunks on 211
m30999| Fri Feb 22 12:28:45.691 [Balancer] receiver : shard0000 chunks on 6
m30999| Fri Feb 22 12:28:45.691 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:45.691 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_2771.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 2771.0 }, max: { _id: 3233.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:45.691 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { _id: 5621.0 }max: { _id: 6558.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:45.691 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:45.692 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 5621.0 }, max: { _id: 6558.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_5621.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:45.693 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647d99334798f3e47d80
m30001| Fri Feb 22 12:28:45.693 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647d99334798f3e47d81", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536125693), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 5621.0 }, max: { _id: 6558.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:45.694 [conn4] moveChunk request accepted at version 7|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:45.697 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:45.697 [migrateThread] starting receiving-end of migration of chunk { _id: 5621.0 } -> { _id: 6558.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:45.707 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 110, clonedBytes: 58300, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:45.717 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 297, clonedBytes: 157410, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:45.727 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 496, clonedBytes: 262880, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:45.738 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 694, clonedBytes: 367820, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:45.750 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:45.750 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5621.0 } -> { _id: 6558.0 }
m30000| Fri Feb 22 12:28:45.753 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 5621.0 } -> { _id: 6558.0 }
m30001| Fri Feb 22 12:28:45.754 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:45.754 [conn4] moveChunk setting version to: 8|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:45.754 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:45.756 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 46 version: 7|1||51276475bd1f99446659365b based on: 7|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:45.756 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 7|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:45.756 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 46
m30001| Fri Feb 22 12:28:45.756 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:45.763 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 5621.0 } -> { _id: 6558.0 }
m30000| Fri Feb 22 12:28:45.763 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 5621.0 } -> { _id: 6558.0 }
m30000| Fri Feb 22 12:28:45.764 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647dc49297cf54df55e0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536125763), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 5621.0 }, max: { _id: 6558.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 52, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:28:45.764 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 5621.0 }, max: { _id: 6558.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:45.764 [conn4] moveChunk updating self version to: 8|1||51276475bd1f99446659365b through { _id: 6558.0 } -> { _id: 7495.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:45.765 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647d99334798f3e47d82", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536125765), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 5621.0 }, max: { _id: 6558.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:45.765 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:45.765 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:28:45.765 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 7000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 8000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:45.765 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:45.765 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:45.765 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:45.765 [cleanupOldData-5127647d99334798f3e47d83] (start) waiting to cleanup test.foo from { _id: 5621.0 } -> { _id: 6558.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:45.765 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:45.765 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647d99334798f3e47d84", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536125765), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 5621.0 }, max: { _id: 6558.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:45.766 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:45.766 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 47 version: 8|1||51276475bd1f99446659365b based on: 7|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:45.768 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 48 version: 8|1||51276475bd1f99446659365b based on: 7|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:45.768 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 7|1||000000000000000000000000min: { _id: 2771.0 }max: { _id: 3233.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:45.768 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:45.768 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 2771.0 }, max: { _id: 3233.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_2771.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:45.769 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 49 version: 8|1||51276475bd1f99446659365b based on: 8|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:45.769 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 8|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:45.769 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 8000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 49
m30999| Fri Feb 22 12:28:45.769 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:45.770 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647d99334798f3e47d85
m30001| Fri Feb 22 12:28:45.770 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647d99334798f3e47d86", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536125770), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 2771.0 }, max: { _id: 3233.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:45.771 [conn4] moveChunk request accepted at version 7|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:45.772 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:28:45.772 [migrateThread] starting receiving-end of migration of chunk { _id: 2771.0 } -> { _id: 3233.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:45.783 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 2771.0 }, max: { _id: 3233.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 191, clonedBytes: 199213, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:45.785 [cleanupOldData-5127647d99334798f3e47d83] waiting to remove documents for test.foo from { _id: 5621.0 } -> { _id: 6558.0 }
m30001| Fri Feb 22 12:28:45.793 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 2771.0 }, max: { _id: 3233.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 448, clonedBytes: 467264, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:45.793 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:45.794 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 2771.0 } -> { _id: 3233.0 }
m30000| Fri Feb 22 12:28:45.796 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 2771.0 } -> { _id: 3233.0 }
m30001| Fri Feb 22 12:28:45.803 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 2771.0 }, max: { _id: 3233.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:45.803 [conn4] moveChunk setting version to: 8|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:45.803 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:45.805 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 50 version: 7|1||51276475bd1f99446659365c based on: 7|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:45.806 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 7|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:45.806 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 7000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 50
m30001| Fri Feb 22 12:28:45.806 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:45.806 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 2771.0 } -> { _id: 3233.0 }
m30000| Fri Feb 22 12:28:45.806 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 2771.0 } -> { _id: 3233.0 }
m30000| Fri Feb 22 12:28:45.806 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647dc49297cf54df55e1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536125806), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 2771.0 }, max: { _id: 3233.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:28:45.813 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 2771.0 }, max: { _id: 3233.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:45.813 [conn4] moveChunk updating self version to: 8|1||51276475bd1f99446659365c through { _id: 3233.0 } -> { _id: 3695.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:45.814 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647d99334798f3e47d87", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536125814), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 2771.0 }, max: { _id: 3233.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:45.814 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:45.814 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:45.814 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:28:45.814 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 7000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 8000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:28:45.814 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:45.814 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:45.814 [cleanupOldData-5127647d99334798f3e47d88] (start) waiting to cleanup test.bar from { _id: 2771.0 } -> { _id: 3233.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:45.815 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:45.815 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:45-5127647d99334798f3e47d89", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536125815), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 2771.0 }, max: { _id: 3233.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:45.815 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:45.817 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 51 version: 8|1||51276475bd1f99446659365c based on: 7|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:45.819 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 52 version: 8|1||51276475bd1f99446659365c based on: 7|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:45.819 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:45.820 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:45.821 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 53 version: 8|1||51276475bd1f99446659365c based on: 8|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:45.821 [cleanupOldData-5127647c99334798f3e47d7e] moveChunk deleted 462 documents for test.bar from { _id: 2309.0 } -> { _id: 2771.0 }
m30001| Fri Feb 22 12:28:45.821 [cleanupOldData-5127647d99334798f3e47d83] moveChunk starting delete for: test.foo from { _id: 5621.0 } -> { _id: 6558.0 }
m30999| Fri Feb 22 12:28:45.821 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 8|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:45.822 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 8000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 53
m30999| Fri Feb 22 12:28:45.822 [conn1] setShardVersion success: { oldVersion: Timestamp 7000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:45.834 [cleanupOldData-5127647d99334798f3e47d88] waiting to remove documents for test.bar from { _id: 2771.0 } -> { _id: 3233.0 }
8000
m30999| Fri Feb 22 12:28:46.820 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:46.821 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:46.821 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:46 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127647ebd1f994466593664" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127647dbd1f994466593663" } }
m30999| Fri Feb 22 12:28:46.822 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127647ebd1f994466593664
m30999| Fri Feb 22 12:28:46.822 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:46.822 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:46.822 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:46.824 [Balancer] shard0001 has more chunks me:100 best: shard0000:7
m30999| Fri Feb 22 12:28:46.824 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:46.824 [Balancer] donor : shard0001 chunks on 100
m30999| Fri Feb 22 12:28:46.824 [Balancer] receiver : shard0000 chunks on 7
m30999| Fri Feb 22 12:28:46.824 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:46.824 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_6558.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 6558.0 }, max: { _id: 7495.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:46.826 [Balancer] shard0001 has more chunks me:210 best: shard0000:7
m30999| Fri Feb 22 12:28:46.826 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:46.826 [Balancer] donor : shard0001 chunks on 210
m30999| Fri Feb 22 12:28:46.826 [Balancer] receiver : shard0000 chunks on 7
m30999| Fri Feb 22 12:28:46.826 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:46.826 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_3233.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 3233.0 }, max: { _id: 3695.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:46.826 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { _id: 6558.0 }max: { _id: 7495.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:46.826 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:46.827 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 6558.0 }, max: { _id: 7495.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_6558.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:46.828 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647e99334798f3e47d8a
m30001| Fri Feb 22 12:28:46.828 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647e99334798f3e47d8b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536126828), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 6558.0 }, max: { _id: 7495.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:46.829 [conn4] moveChunk request accepted at version 8|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:46.832 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:46.832 [migrateThread] starting receiving-end of migration of chunk { _id: 6558.0 } -> { _id: 7495.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:46.842 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 6558.0 }, max: { _id: 7495.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 213, clonedBytes: 112890, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:46.852 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 6558.0 }, max: { _id: 7495.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 502, clonedBytes: 266060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:46.863 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 6558.0 }, max: { _id: 7495.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 801, clonedBytes: 424530, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:46.867 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:46.867 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6558.0 } -> { _id: 7495.0 }
m30000| Fri Feb 22 12:28:46.870 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6558.0 } -> { _id: 7495.0 }
m30001| Fri Feb 22 12:28:46.873 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 6558.0 }, max: { _id: 7495.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:46.873 [conn4] moveChunk setting version to: 9|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:46.873 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:46.875 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 54 version: 8|1||51276475bd1f99446659365b based on: 8|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:46.875 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 8|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:46.875 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 8000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 54
m30001| Fri Feb 22 12:28:46.875 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:46.880 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 6558.0 } -> { _id: 7495.0 }
m30000| Fri Feb 22 12:28:46.880 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 6558.0 } -> { _id: 7495.0 }
m30000| Fri Feb 22 12:28:46.880 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647ec49297cf54df55e2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536126880), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 6558.0 }, max: { _id: 7495.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 34, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:28:46.883 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 6558.0 }, max: { _id: 7495.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:46.883 [conn4] moveChunk updating self version to: 9|1||51276475bd1f99446659365b through { _id: 7495.0 } -> { _id: 8432.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:46.884 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647e99334798f3e47d8c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536126884), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 6558.0 }, max: { _id: 7495.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:28:46.884 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 8000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 9000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:46.884 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:46.884 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:46.884 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:46.884 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:46.884 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:46.884 [cleanupOldData-5127647e99334798f3e47d8d] (start) waiting to cleanup test.foo from { _id: 6558.0 } -> { _id: 7495.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:46.884 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:46.885 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647e99334798f3e47d8e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536126885), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 6558.0 }, max: { _id: 7495.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:46.885 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:46.885 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 55 version: 9|1||51276475bd1f99446659365b based on: 8|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:46.887 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 56 version: 9|1||51276475bd1f99446659365b based on: 8|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:46.887 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 8|1||000000000000000000000000min: { _id: 3233.0 }max: { _id: 3695.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:28:46.888 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:28:46.888 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 3233.0 }, max: { _id: 3695.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_3233.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30999| Fri Feb 22 12:28:46.888 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 57 version: 9|1||51276475bd1f99446659365b based on: 9|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:46.888 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 9|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:46.889 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 9000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 57
 m30999| Fri Feb 22 12:28:46.889 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
 m30001| Fri Feb 22 12:28:46.889 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647e99334798f3e47d8f
 m30001| Fri Feb 22 12:28:46.889 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647e99334798f3e47d90", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536126889), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 3233.0 }, max: { _id: 3695.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:46.890 [conn4] moveChunk request accepted at version 8|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:28:46.892 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:28:46.892 [migrateThread] starting receiving-end of migration of chunk { _id: 3233.0 } -> { _id: 3695.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:28:46.902 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3233.0 }, max: { _id: 3695.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 134, clonedBytes: 139762, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:46.904 [cleanupOldData-5127647e99334798f3e47d8d] waiting to remove documents for test.foo from { _id: 6558.0 } -> { _id: 7495.0 }
 m30001| Fri Feb 22 12:28:46.912 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3233.0 }, max: { _id: 3695.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 306, clonedBytes: 319158, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:28:46.922 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:28:46.922 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 3233.0 } -> { _id: 3695.0 }
 m30001| Fri Feb 22 12:28:46.922 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3233.0 }, max: { _id: 3695.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:28:46.924 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 3233.0 } -> { _id: 3695.0 }
 m30001| Fri Feb 22 12:28:46.933 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3233.0 }, max: { _id: 3695.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:46.933 [conn4] moveChunk setting version to: 9|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:28:46.933 [conn11] Waiting for commit to finish
 m30000| Fri Feb 22 12:28:46.934 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 3233.0 } -> { _id: 3695.0 }
 m30000| Fri Feb 22 12:28:46.934 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 3233.0 } -> { _id: 3695.0 }
 m30000| Fri Feb 22 12:28:46.935 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647ec49297cf54df55e3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536126935), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 3233.0 }, max: { _id: 3695.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
 m30999| Fri Feb 22 12:28:46.935 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 58 version: 8|1||51276475bd1f99446659365c based on: 8|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:46.935 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 8|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:46.935 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 8000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 58
 m30001| Fri Feb 22 12:28:46.936 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:28:46.943 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 3233.0 }, max: { _id: 3695.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:28:46.943 [conn4] moveChunk updating self version to: 9|1||51276475bd1f99446659365c through { _id: 3695.0 } -> { _id: 4157.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:28:46.944 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647e99334798f3e47d91", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536126944), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 3233.0 }, max: { _id: 3695.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:46.944 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:46.944 [conn4] MigrateFromStatus::done Global lock acquired
 m30999| Fri Feb 22 12:28:46.944 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:28:46.944 [conn4] forking for cleanup of chunk data
 m30999| { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 8000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 9000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:28:46.944 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:46.944 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:46.944 [cleanupOldData-5127647e99334798f3e47d92] (start) waiting to cleanup test.bar from { _id: 3233.0 } -> { _id: 3695.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:46.944 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:46.944 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:46-5127647e99334798f3e47d93", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536126944), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 3233.0 }, max: { _id: 3695.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:46.944 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:46.945 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 59 version: 9|1||51276475bd1f99446659365c based on: 8|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:46.947 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 60 version: 9|1||51276475bd1f99446659365c based on: 8|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:46.948 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:28:46.949 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
 m30999| Fri Feb 22 12:28:46.949 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 61 version: 9|1||51276475bd1f99446659365c based on: 9|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:46.950 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 9|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:46.950 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 9000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 61
 m30999| Fri Feb 22 12:28:46.950 [conn1] setShardVersion success: { oldVersion: Timestamp 8000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:28:46.964 [cleanupOldData-5127647e99334798f3e47d92] waiting to remove documents for test.bar from { _id: 3233.0 } -> { _id: 3695.0 }
 m30001| Fri Feb 22 12:28:47.176 [cleanupOldData-5127647d99334798f3e47d83] moveChunk deleted 937 documents for test.foo from { _id: 5621.0 } -> { _id: 6558.0 }
 m30001| Fri Feb 22 12:28:47.176 [cleanupOldData-5127647e99334798f3e47d92] moveChunk starting delete for: test.bar from { _id: 3233.0 } -> { _id: 3695.0 }
9000
 m30001| Fri Feb 22 12:28:47.885 [cleanupOldData-5127647e99334798f3e47d92] moveChunk deleted 462 documents for test.bar from { _id: 3233.0 } -> { _id: 3695.0 }
 m30001| Fri Feb 22 12:28:47.885 [cleanupOldData-5127647e99334798f3e47d8d] moveChunk starting delete for: test.foo from { _id: 6558.0 } -> { _id: 7495.0 }
 m30999| Fri Feb 22 12:28:47.949 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:28:47.950 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:28:47.950 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:28:47 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127647fbd1f994466593665" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "5127647ebd1f994466593664" } }
 m30999| Fri Feb 22 12:28:47.951 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127647fbd1f994466593665
 m30999| Fri Feb 22 12:28:47.951 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:28:47.951 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:28:47.951 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:28:47.953 [Balancer] shard0001 has more chunks me:99 best: shard0000:8
 m30999| Fri Feb 22 12:28:47.953 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:28:47.953 [Balancer] donor      : shard0001 chunks on 99
 m30999| Fri Feb 22 12:28:47.953 [Balancer] receiver   : shard0000 chunks on 8
 m30999| Fri Feb 22 12:28:47.953 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:28:47.953 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_7495.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:28:47.955 [Balancer] shard0001 has more chunks me:209 best: shard0000:8
 m30999| Fri Feb 22 12:28:47.955 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:28:47.955 [Balancer] donor      : shard0001 chunks on 209
 m30999| Fri Feb 22 12:28:47.955 [Balancer] receiver   : shard0000 chunks on 8
 m30999| Fri Feb 22 12:28:47.955 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:28:47.955 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_3695.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 3695.0 }, max: { _id: 4157.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:28:47.955 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { _id: 7495.0 }max: { _id: 8432.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:28:47.955 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:28:47.955 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 7495.0 }, max: { _id: 8432.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_7495.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:28:47.956 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127647f99334798f3e47d94
 m30001| Fri Feb 22 12:28:47.956 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:47-5127647f99334798f3e47d95", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536127956), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 7495.0 }, max: { _id: 8432.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:47.957 [conn4] moveChunk request accepted at version 9|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:28:47.960 [conn4] moveChunk number of documents: 937
 m30000| Fri Feb 22 12:28:47.961 [migrateThread] starting receiving-end of migration of chunk { _id: 7495.0 } -> { _id: 8432.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:28:47.971 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 137, clonedBytes: 72610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:47.981 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 338, clonedBytes: 179140, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:47.991 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 543, clonedBytes: 287790, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:48.001 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 749, clonedBytes: 396970, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:28:48.011 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:28:48.011 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 7495.0 } -> { _id: 8432.0 }
 m30000| Fri Feb 22 12:28:48.015 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 7495.0 } -> { _id: 8432.0 }
 m30001| Fri Feb 22 12:28:48.018 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:48.018 [conn4] moveChunk setting version to: 10|0||51276475bd1f99446659365b
 m30000| Fri Feb 22 12:28:48.018 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:28:48.020 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 62 version: 9|1||51276475bd1f99446659365b based on: 9|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:48.020 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 9|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:48.020 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 9000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 62
 m30001| Fri Feb 22 12:28:48.020 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:28:48.025 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 7495.0 } -> { _id: 8432.0 }
 m30000| Fri Feb 22 12:28:48.025 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 7495.0 } -> { _id: 8432.0 }
 m30000| Fri Feb 22 12:28:48.025 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-51276480c49297cf54df55e4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536128025), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 7495.0 }, max: { _id: 8432.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 49, step4 of 5: 0, step5 of 5: 13 } }
 m30001| Fri Feb 22 12:28:48.028 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 7495.0 }, max: { _id: 8432.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:28:48.028 [conn4] moveChunk updating self version to: 10|1||51276475bd1f99446659365b through { _id: 8432.0 } -> { _id: 9369.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:28:48.028 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-5127648099334798f3e47d96", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536128028), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 7495.0 }, max: { _id: 8432.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:48.029 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:48.029 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:48.029 [conn4] forking for cleanup of chunk data
 m30999| Fri Feb 22 12:28:48.029 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 9000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 10000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
 m30001| Fri Feb 22 12:28:48.029 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:48.029 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:48.029 [cleanupOldData-5127648099334798f3e47d97] (start) waiting to cleanup test.foo from { _id: 7495.0 } -> { _id: 8432.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:48.029 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:48.029 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-5127648099334798f3e47d98", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536128029), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 7495.0 }, max: { _id: 8432.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:48.029 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:48.030 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 63 version: 10|1||51276475bd1f99446659365b based on: 9|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:48.031 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 64 version: 10|1||51276475bd1f99446659365b based on: 9|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:48.032 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 9|1||000000000000000000000000min: { _id: 3695.0 }max: { _id: 4157.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:28:48.032 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:28:48.032 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 3695.0 }, max: { _id: 4157.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_3695.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30999| Fri Feb 22 12:28:48.033 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 65 version: 10|1||51276475bd1f99446659365b based on: 10|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:48.033 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 10|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:28:48.033 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 10000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 65
 m30999| Fri Feb 22 12:28:48.033 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
 m30001| Fri Feb 22 12:28:48.033 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648099334798f3e47d99
 m30001| Fri Feb 22 12:28:48.033 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-5127648099334798f3e47d9a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536128033), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 3695.0 }, max: { _id: 4157.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:48.035 [conn4] moveChunk request accepted at version 9|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:28:48.037 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:28:48.037 [migrateThread] starting receiving-end of migration of chunk { _id: 3695.0 } -> { _id: 4157.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:28:48.047 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3695.0 }, max: { _id: 4157.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 202, clonedBytes: 210686, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:48.049 [cleanupOldData-5127648099334798f3e47d97] waiting to remove documents for test.foo from { _id: 7495.0 } -> { _id: 8432.0 }
 m30001| Fri Feb 22 12:28:48.058 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3695.0 }, max: { _id: 4157.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 427, clonedBytes: 445361, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:28:48.059 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:28:48.059 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 3695.0 } -> { _id: 4157.0 }
 m30000| Fri Feb 22 12:28:48.060 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 3695.0 } -> { _id: 4157.0 }
 m30001| Fri Feb 22 12:28:48.068 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 3695.0 }, max: { _id: 4157.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:28:48.068 [conn4] moveChunk setting version to: 10|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:28:48.068 [conn11] Waiting for commit to finish
 m30001| Fri Feb 22 12:28:48.068 [conn3] assertion 13388 [test.bar] shard version not ok in Client::Context: version mismatch detected for test.bar, stored major version 10 does not match received 9 ( ns : test.bar, received : 9|1||51276475bd1f99446659365c, wanted : 10|0||51276475bd1f99446659365c, send ) ( ns : test.bar, received : 9|1||51276475bd1f99446659365c, wanted : 10|0||51276475bd1f99446659365c, send ) ns:test.bar query:{ query: { _id: 9601.0 }, $explain: true }
 m30001| Fri Feb 22 12:28:48.068 [conn3] stale version detected during query over test.bar : { $err: "[test.bar] shard version not ok in Client::Context: version mismatch detected for test.bar, stored major version 10 does not match received 9 ( ns : t...", code: 13388, ns: "test.bar", vReceived: Timestamp 9000|1, vReceivedEpoch: ObjectId('51276475bd1f99446659365c'), vWanted: Timestamp 10000|0, vWantedEpoch: ObjectId('51276475bd1f99446659365c') }
 m30999| Fri Feb 22 12:28:48.070 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 66 version: 9|1||51276475bd1f99446659365c based on: 9|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:48.070 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 9|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:48.070 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 9000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 66
 m30001| Fri Feb 22 12:28:48.070 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:28:48.070 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 3695.0 } -> { _id: 4157.0 }
 m30000| Fri Feb 22 12:28:48.070 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 3695.0 } -> { _id: 4157.0 }
 m30000| Fri Feb 22 12:28:48.070 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-51276480c49297cf54df55e5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536128070), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 3695.0 }, max: { _id: 4157.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 11 } }
 m30001| Fri Feb 22 12:28:48.078 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 3695.0 }, max: { _id: 4157.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:28:48.078 [conn4] moveChunk updating self version to: 10|1||51276475bd1f99446659365c through { _id: 4157.0 } -> { _id: 4619.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:28:48.079 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-5127648099334798f3e47d9b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536128079), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 3695.0 }, max: { _id: 4157.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:28:48.079 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| Fri Feb 22 12:28:48.079 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 9000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 10000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:28:48.079 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:48.079 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:28:48.079 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:28:48.079 [cleanupOldData-5127648099334798f3e47d9c] (start) waiting to cleanup test.bar from { _id: 3695.0 } -> { _id: 4157.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:28:48.079 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:28:48.079 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:28:48.080 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:48-5127648099334798f3e47d9d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536128080), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 3695.0 }, max: { _id: 4157.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:28:48.080 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:28:48.080 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 67 version: 10|1||51276475bd1f99446659365c based on: 9|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:48.082 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 68 version: 10|1||51276475bd1f99446659365c based on: 9|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:28:48.083 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:28:48.084 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:48.085 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 69 version: 10|1||51276475bd1f99446659365c based on: 10|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:48.085 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 10|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:48.085 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 10000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 69
m30999| Fri Feb 22 12:28:48.085 [conn1] setShardVersion success: { oldVersion: Timestamp 9000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:48.099 [cleanupOldData-5127648099334798f3e47d9c] waiting to remove documents for test.bar from { _id: 3695.0 } -> { _id: 4157.0 }
m30001| Fri Feb 22 12:28:48.134 [cleanupOldData-5127647e99334798f3e47d8d] moveChunk deleted 937 documents for test.foo from { _id: 6558.0 } -> { _id: 7495.0 }
m30001| Fri Feb 22 12:28:48.134 [cleanupOldData-5127648099334798f3e47d97] moveChunk starting delete for: test.foo from { _id: 7495.0 } -> { _id: 8432.0 }
10000
m30999| Fri Feb 22 12:28:49.084 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:49.085 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:49.085 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:49 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276481bd1f994466593666" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127647fbd1f994466593665" } }
m30999| Fri Feb 22 12:28:49.086 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276481bd1f994466593666
m30999| Fri Feb 22 12:28:49.086 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:49.086 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:49.086 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:49.089 [Balancer] shard0001 has more chunks me:98 best: shard0000:9
m30999| Fri Feb 22 12:28:49.089 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:49.089 [Balancer] donor : shard0001 chunks on 98
m30999| Fri Feb 22 12:28:49.089 [Balancer] receiver : shard0000 chunks on 9
m30999| Fri Feb 22 12:28:49.089 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:49.089 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_8432.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:49.091 [Balancer] shard0001 has more chunks me:208 best: shard0000:9
m30999| Fri Feb 22 12:28:49.091 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:49.091 [Balancer] donor : shard0001 chunks on 208
m30999| Fri Feb 22 12:28:49.091 [Balancer] receiver : shard0000 chunks on 9
m30999| Fri Feb 22 12:28:49.091 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:49.091 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_4157.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 4157.0 }, max: { _id: 4619.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:49.092 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 10|1||000000000000000000000000min: { _id: 8432.0 }max: { _id: 9369.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:49.092 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:49.092 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 8432.0 }, max: { _id: 9369.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_8432.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:49.093 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648199334798f3e47d9e
m30001| Fri Feb 22 12:28:49.093 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-5127648199334798f3e47d9f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536129093), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 8432.0 }, max: { _id: 9369.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:49.095 [conn4] moveChunk request accepted at version 10|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:49.098 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:49.098 [migrateThread] starting receiving-end of migration of chunk { _id: 8432.0 } -> { _id: 9369.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:49.108 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 136, clonedBytes: 72080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.119 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 343, clonedBytes: 181790, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.129 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 548, clonedBytes: 290440, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.139 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 751, clonedBytes: 398030, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:49.149 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:49.149 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8432.0 } -> { _id: 9369.0 }
m30000| Fri Feb 22 12:28:49.152 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8432.0 } -> { _id: 9369.0 }
m30001| Fri Feb 22 12:28:49.156 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.156 [conn4] moveChunk setting version to: 11|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:49.156 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:49.158 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 70 version: 10|1||51276475bd1f99446659365b based on: 10|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:49.158 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 10|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:49.158 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 10000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 70
m30001| Fri Feb 22 12:28:49.158 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:49.162 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 8432.0 } -> { _id: 9369.0 }
m30000| Fri Feb 22 12:28:49.162 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 8432.0 } -> { _id: 9369.0 }
m30000| Fri Feb 22 12:28:49.162 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-51276481c49297cf54df55e6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536129162), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 8432.0 }, max: { _id: 9369.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 49, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:28:49.166 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 8432.0 }, max: { _id: 9369.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:49.166 [conn4] moveChunk updating self version to: 11|1||51276475bd1f99446659365b through { _id: 9369.0 } -> { _id: 10306.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:49.167 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-5127648199334798f3e47da0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536129167), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 8432.0 }, max: { _id: 9369.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:49.167 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:49.167 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:28:49.167 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 10000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 11000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:49.167 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:49.167 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:49.167 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:49.167 [cleanupOldData-5127648199334798f3e47da1] (start) waiting to cleanup test.foo from { _id: 8432.0 } -> { _id: 9369.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:49.167 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:49.168 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-5127648199334798f3e47da2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536129168), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 8432.0 }, max: { _id: 9369.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:49.168 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:49.169 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 71 version: 11|1||51276475bd1f99446659365b based on: 10|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:49.170 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 72 version: 11|1||51276475bd1f99446659365b based on: 10|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:49.170 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 10|1||000000000000000000000000min: { _id: 4157.0 }max: { _id: 4619.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:49.171 [cleanupOldData-5127648099334798f3e47d97] moveChunk deleted 937 documents for test.foo from { _id: 7495.0 } -> { _id: 8432.0 }
m30001| Fri Feb 22 12:28:49.171 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:49.171 [cleanupOldData-5127647d99334798f3e47d88] moveChunk starting delete for: test.bar from { _id: 2771.0 } -> { _id: 3233.0 }
m30001| Fri Feb 22 12:28:49.171 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 4157.0 }, max: { _id: 4619.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_4157.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:49.172 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 73 version: 11|1||51276475bd1f99446659365b based on: 11|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:49.172 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 11|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:49.172 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 73
m30999| Fri Feb 22 12:28:49.172 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:49.172 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648199334798f3e47da3
m30001| Fri Feb 22 12:28:49.172 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-5127648199334798f3e47da4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536129172), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 4157.0 }, max: { _id: 4619.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:49.174 [conn4] moveChunk request accepted at version 10|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:49.175 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:28:49.175 [migrateThread] starting receiving-end of migration of chunk { _id: 4157.0 } -> { _id: 4619.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:49.186 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4157.0 }, max: { _id: 4619.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 122, clonedBytes: 127246, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.187 [cleanupOldData-5127648199334798f3e47da1] waiting to remove documents for test.foo from { _id: 8432.0 } -> { _id: 9369.0 }
m30001| Fri Feb 22 12:28:49.196 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4157.0 }, max: { _id: 4619.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 281, clonedBytes: 293083, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.206 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4157.0 }, max: { _id: 4619.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 459, clonedBytes: 478737, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:49.207 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:49.207 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 4157.0 } -> { _id: 4619.0 }
m30000| Fri Feb 22 12:28:49.209 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 4157.0 } -> { _id: 4619.0 }
m30001| Fri Feb 22 12:28:49.217 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4157.0 }, max: { _id: 4619.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:49.217 [conn4] moveChunk setting version to: 11|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:49.217 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:49.219 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 4157.0 } -> { _id: 4619.0 }
m30000| Fri Feb 22 12:28:49.219 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 4157.0 } -> { _id: 4619.0 }
m30000| Fri Feb 22 12:28:49.220 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-51276481c49297cf54df55e7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536129220), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 4157.0 }, max: { _id: 4619.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:28:49.220 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 74 version: 10|1||51276475bd1f99446659365c based on: 10|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:49.220 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 10|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:49.220 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 10000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 74
m30001| Fri Feb 22 12:28:49.221 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:28:49.227 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 4157.0 }, max: { _id: 4619.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:49.227 [conn4] moveChunk updating self version to: 11|1||51276475bd1f99446659365c through { _id: 4619.0 } -> { _id: 5081.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:49.228 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-5127648199334798f3e47da5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536129228), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 4157.0 }, max: { _id: 4619.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:49.228 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:28:49.228 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 10000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 11000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:28:49.228 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:49.228 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:49.228 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:49.228 [cleanupOldData-5127648199334798f3e47da6] (start) waiting to cleanup test.bar from { _id: 4157.0 } -> { _id: 4619.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:49.228 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:49.229 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:49.229 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:49-5127648199334798f3e47da7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536129229), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 4157.0 }, max: { _id: 4619.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:49.229 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:49.230 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 75 version: 11|1||51276475bd1f99446659365c based on: 10|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:49.232 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 76 version: 11|1||51276475bd1f99446659365c based on: 10|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:49.233 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:49.234 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:49.235 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 77 version: 11|1||51276475bd1f99446659365c based on: 11|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:49.235 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 11|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:49.235 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 77
m30999| Fri Feb 22 12:28:49.235 [conn1] setShardVersion success: { oldVersion: Timestamp 10000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:49.249 [cleanupOldData-5127648199334798f3e47da6] waiting to remove documents for test.bar from { _id: 4157.0 } -> { _id: 4619.0 }
11000
m30001| Fri Feb 22 12:28:49.831 [cleanupOldData-5127647d99334798f3e47d88] moveChunk deleted 462 documents for test.bar from { _id: 2771.0 } -> { _id: 3233.0 }
m30001| Fri Feb 22 12:28:49.831 [cleanupOldData-5127648199334798f3e47da6] moveChunk starting delete for: test.bar from { _id: 4157.0 } -> { _id: 4619.0 }
m30999| Fri Feb 22 12:28:50.234 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:50.235 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:50.235 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:50 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276482bd1f994466593667" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276481bd1f994466593666" } }
m30999| Fri Feb 22 12:28:50.235 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276482bd1f994466593667
m30999| Fri Feb 22 12:28:50.235 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:50.235 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:50.235 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:50.237 [Balancer] shard0001 has more chunks me:97 best: shard0000:10
m30999| Fri Feb 22 12:28:50.237 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:50.237 [Balancer] donor : shard0001 chunks on 97
m30999| Fri Feb 22 12:28:50.237 [Balancer] receiver : shard0000 chunks on 10
m30999| Fri Feb 22 12:28:50.237 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:50.237 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_9369.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 9369.0 }, max: { _id: 10306.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30001| Fri Feb 22 12:28:50.239 [cleanupOldData-5127648199334798f3e47da6] moveChunk deleted 462 documents for test.bar from { _id: 4157.0 } -> { _id: 4619.0 }
m30001| Fri Feb 22 12:28:50.239 [cleanupOldData-5127648199334798f3e47da1] moveChunk starting delete for: test.foo from { _id: 8432.0 } -> { _id: 9369.0 }
m30999| Fri Feb 22 12:28:50.240 [Balancer] shard0001 has more chunks me:207 best: shard0000:10
m30999| Fri Feb 22 12:28:50.240 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:50.240 [Balancer] donor : shard0001 chunks on 207
m30999| Fri Feb 22 12:28:50.240 [Balancer] receiver : shard0000 chunks on 10
m30999| Fri Feb 22 12:28:50.240 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:50.240 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_4619.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 4619.0 }, max: { _id: 5081.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:50.240 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: 9369.0 }max: { _id: 10306.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:50.240 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:50.241 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 9369.0 }, max: { _id: 10306.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_9369.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:50.241 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648299334798f3e47da8
m30001| Fri Feb 22 12:28:50.241 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-5127648299334798f3e47da9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536130241), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 9369.0 }, max: { _id: 10306.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:50.242 [conn4] moveChunk request accepted at version 11|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:50.245 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:50.245 [migrateThread] starting receiving-end of migration of chunk { _id: 9369.0 } -> { _id: 10306.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:50.255 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 9369.0 }, max: { _id: 10306.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 172, clonedBytes: 91160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:50.265 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 9369.0 }, max: { _id: 10306.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 477, clonedBytes: 252810, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:50.275 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 9369.0 }, max: { _id: 10306.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 782, clonedBytes: 414460, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:50.281 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:50.281 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 9369.0 } -> { _id: 10306.0 }
m30000| Fri Feb 22 12:28:50.283 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 9369.0 } -> { _id: 10306.0 }
m30001| Fri Feb 22 12:28:50.285 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 9369.0 }, max: { _id: 10306.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:50.286 [conn4] moveChunk setting version to: 12|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:50.286 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:50.288 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 78 version: 11|1||51276475bd1f99446659365b based on: 11|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:50.288 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 11|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:50.288 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 78
m30001| Fri Feb 22 12:28:50.288 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:50.294 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 9369.0 } -> { _id: 10306.0 }
m30000| Fri Feb 22 12:28:50.294 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 9369.0 } -> { _id: 10306.0 }
m30000| Fri Feb 22 12:28:50.294 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-51276482c49297cf54df55e8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536130294), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 9369.0 }, max: { _id: 10306.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 35, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:28:50.296 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 9369.0 }, max: { _id: 10306.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:50.296 [conn4] moveChunk updating self version to: 12|1||51276475bd1f99446659365b through { _id: 10306.0 } -> { _id: 11243.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:50.297 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-5127648299334798f3e47daa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536130297), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 9369.0 }, max: { _id: 10306.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:28:50.297 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 11000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 12000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:50.297 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:50.297 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:50.297 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:50.297 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:50.297 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:50.297 [cleanupOldData-5127648299334798f3e47dab] (start) waiting to cleanup test.foo from { _id: 9369.0 } -> { _id: 10306.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:50.297 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:50.297 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-5127648299334798f3e47dac", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536130297), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 9369.0 }, max: { _id: 10306.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:50.297 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:50.298 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 79 version: 12|1||51276475bd1f99446659365b based on: 11|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:50.299 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 80 version: 12|1||51276475bd1f99446659365b based on: 11|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:50.300 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 11|1||000000000000000000000000min: { _id: 4619.0 }max: { _id: 5081.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:50.300 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:50.300 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 4619.0 }, max: { _id: 5081.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_4619.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:50.301 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 81 version: 12|1||51276475bd1f99446659365b based on: 12|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:50.301 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648299334798f3e47dad
m30001| Fri Feb 22 12:28:50.301 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-5127648299334798f3e47dae", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536130301), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 4619.0 }, max: { _id: 5081.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:28:50.301 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 12|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:50.301 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 81
m30999| Fri Feb 22 12:28:50.301 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:50.302 [conn4] moveChunk request accepted at version 11|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:50.303 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:28:50.303 [migrateThread] starting receiving-end of migration of chunk { _id: 4619.0 } -> { _id: 5081.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:50.314 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4619.0 }, max: { _id: 5081.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 187, clonedBytes: 195041, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:50.317 [cleanupOldData-5127648299334798f3e47dab] waiting to remove documents for test.foo from { _id: 9369.0 } -> { _id: 10306.0 }
m30001| Fri Feb 22 12:28:50.324 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4619.0 }, max: { _id: 5081.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 450, clonedBytes: 469350, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:50.324 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:50.324 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 4619.0 } -> { _id: 5081.0 }
m30000| Fri Feb 22 12:28:50.326 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 4619.0 } -> { _id: 5081.0 }
m30001| Fri Feb 22 12:28:50.334 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 4619.0 }, max: { _id: 5081.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:50.334 [conn4] moveChunk setting version to: 12|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:50.334 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:50.336 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 4619.0 } -> { _id: 5081.0 }
m30000| Fri Feb 22 12:28:50.336 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 4619.0 } -> { _id: 5081.0 }
m30000| Fri Feb 22 12:28:50.337 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-51276482c49297cf54df55e9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536130336), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 4619.0 }, max: { _id: 5081.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:28:50.337 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 82 version: 11|1||51276475bd1f99446659365c based on: 11|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:50.337 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 11|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:50.337 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 11000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 82
m30001| Fri Feb 22 12:28:50.337 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:28:50.344 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 4619.0 }, max: { _id: 5081.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:50.344 [conn4] moveChunk updating self version to: 12|1||51276475bd1f99446659365c through { _id: 5081.0 } -> { _id: 5543.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:50.345 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-5127648299334798f3e47daf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536130345), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 4619.0 }, max: { _id: 5081.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:50.345 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:28:50.345 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 11000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 12000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:28:50.345 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:50.345 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:50.345 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:50.346 [cleanupOldData-5127648299334798f3e47db0] (start) waiting to cleanup test.bar from { _id: 4619.0 } -> { _id: 5081.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:50.346 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:50.346 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:50.346 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:50-5127648299334798f3e47db1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536130346), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 4619.0 }, max: { _id: 5081.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:50.346 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:50.347 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 83 version: 12|1||51276475bd1f99446659365c based on: 11|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:50.349 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 84 version: 12|1||51276475bd1f99446659365c based on: 11|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:50.349 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:50.350 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:50.351 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 85 version: 12|1||51276475bd1f99446659365c based on: 12|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:50.351 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 12|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:50.351 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 85
m30999| Fri Feb 22 12:28:50.351 [conn1] setShardVersion success: { oldVersion: Timestamp 11000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:50.366 [cleanupOldData-5127648299334798f3e47db0] waiting to remove documents for test.bar from { _id: 4619.0 } -> { _id: 5081.0 } 12000
m30001| Fri Feb 22 12:28:51.213 [cleanupOldData-5127648199334798f3e47da1] moveChunk deleted 937 documents for test.foo from { _id: 8432.0 } -> { _id: 9369.0 }
m30001| Fri Feb 22 12:28:51.213 [cleanupOldData-5127648299334798f3e47db0] moveChunk starting delete for: test.bar from { _id: 4619.0 } -> { _id: 5081.0 }
m30999| Fri Feb 22 12:28:51.350 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:51.351 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:51.351 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:51 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276483bd1f994466593668" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276482bd1f994466593667" } }
m30999| Fri Feb 22 12:28:51.351 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276483bd1f994466593668
m30999| Fri Feb 22 12:28:51.351 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:51.351 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:51.351 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:51.353 [Balancer] shard0001 has more chunks me:96 best: shard0000:11
m30999| Fri Feb 22 12:28:51.353 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:51.353 [Balancer] donor : shard0001 chunks on 96
m30999| Fri Feb 22 12:28:51.353 [Balancer] receiver : shard0000 chunks on 11
m30999| Fri Feb 22 12:28:51.353 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:51.353 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_10306.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:51.356 [Balancer] shard0001 has more chunks me:206 best: shard0000:11
m30999| Fri Feb 22 12:28:51.356 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:51.356 [Balancer] donor : shard0001 chunks on 206
m30999| Fri Feb 22 12:28:51.356 [Balancer] receiver : shard0000 chunks on 11
m30999| Fri Feb 22 12:28:51.356 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:51.356 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_5081.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 5081.0 }, max: { _id: 5543.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:51.356 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 12|1||000000000000000000000000min: { _id: 10306.0 }max: { _id: 11243.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:51.356 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:51.357 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10306.0 }, max: { _id: 11243.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_10306.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:51.357 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648399334798f3e47db2
m30001| Fri Feb 22 12:28:51.357 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-5127648399334798f3e47db3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536131357), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 10306.0 }, max: { _id: 11243.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:51.358 [conn4] moveChunk request accepted at version 12|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:51.360 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:51.361 [migrateThread] starting receiving-end of migration of chunk { _id: 10306.0 } -> { _id: 11243.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:51.371 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 168, clonedBytes: 89040, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:51.381 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 345, clonedBytes: 182850, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:51.391 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 524, clonedBytes: 277720, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:51.401 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 769, clonedBytes: 407570, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:51.407 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:51.408 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10306.0 } -> { _id: 11243.0 }
m30000| Fri Feb 22 12:28:51.410 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10306.0 } -> { _id: 11243.0 }
m30001| Fri Feb 22 12:28:51.417 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:51.417 [conn4] moveChunk setting version to: 13|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:51.418 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:51.420 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 86 version: 12|1||51276475bd1f99446659365b based on: 12|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:51.420 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 12|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:51.420 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 86
m30001| Fri Feb 22 12:28:51.420 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:51.421 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 10306.0 } -> { _id: 11243.0 }
m30000| Fri Feb 22 12:28:51.421 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 10306.0 } -> { _id: 11243.0 }
m30000| Fri Feb 22 12:28:51.421 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-51276483c49297cf54df55ea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536131421), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 10306.0 }, max: { _id: 11243.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 46, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:28:51.428 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 10306.0 }, max: { _id: 11243.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:51.428 [conn4] moveChunk updating self version to: 13|1||51276475bd1f99446659365b through { _id: 11243.0 } -> { _id: 12180.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:51.429 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-5127648399334798f3e47db4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536131429), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 10306.0 }, max: { _id: 11243.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:51.429 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:51.429 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:28:51.429 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 12000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 13000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:51.429 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:51.429 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:51.429 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:51.429 [cleanupOldData-5127648399334798f3e47db5] (start) waiting to cleanup test.foo from { _id: 10306.0 } -> { _id: 11243.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:51.429 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:51.429 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-5127648399334798f3e47db6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536131429), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 10306.0 }, max: { _id: 11243.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:51.429 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:51.430 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 87 version: 13|1||51276475bd1f99446659365b based on: 12|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:51.431 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 88 version: 13|1||51276475bd1f99446659365b based on: 12|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:51.432 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 12|1||000000000000000000000000min: { _id: 5081.0 }max: { _id: 5543.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:51.432 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:51.432 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 5081.0 }, max: { _id: 5543.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_5081.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:51.433 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 89 version: 13|1||51276475bd1f99446659365b based on: 13|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:51.433 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648399334798f3e47db7
m30999| Fri Feb 22 12:28:51.433 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 13|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:51.433 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-5127648399334798f3e47db8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536131433), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 5081.0 }, max: { _id: 5543.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:28:51.433 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 13000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 89
m30999| Fri Feb 22 12:28:51.434 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:51.434 [conn4] moveChunk request accepted at version 12|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:51.435 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:28:51.435 [migrateThread] starting receiving-end of migration of chunk { _id: 5081.0 } -> { _id: 5543.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:51.445 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 5081.0 }, max: { _id: 5543.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 175, clonedBytes: 182525, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:51.449 [cleanupOldData-5127648399334798f3e47db5] waiting to remove documents for test.foo from { _id: 10306.0 } -> { _id: 11243.0 }
m30001| Fri Feb 22 12:28:51.456 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 5081.0 }, max: { _id: 5543.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 419, clonedBytes: 437017, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:51.458 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:51.458 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 5081.0 } -> { _id: 5543.0 }
m30000| Fri Feb 22 12:28:51.460 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 5081.0 } -> { _id: 5543.0 }
m30001| Fri Feb 22 12:28:51.466 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 5081.0 }, max: { _id: 5543.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:51.466 [conn4] moveChunk setting version to: 13|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:51.466 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:51.470 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 90 version: 12|1||51276475bd1f99446659365c based on: 12|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:51.470 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 12|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:51.470 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 12000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 90
m30001| Fri Feb 22 12:28:51.470 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:51.470 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 5081.0 } -> { _id: 5543.0 }
m30000| Fri Feb 22 12:28:51.470 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 5081.0 } -> { _id: 5543.0 }
m30000| Fri Feb 22 12:28:51.470 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-51276483c49297cf54df55eb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536131470), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 5081.0 }, max: { _id: 5543.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:28:51.476 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 5081.0 }, max: { _id: 5543.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:51.476 [conn4] moveChunk updating self version to: 13|1||51276475bd1f99446659365c through { _id: 5543.0 } -> { _id: 6005.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:51.477 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-5127648399334798f3e47db9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536131477), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 5081.0 }, max: { _id: 5543.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:51.477 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:51.477 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:51.477 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:28:51.477 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 12000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 13000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:51.477 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:51.477 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:51.477 [cleanupOldData-5127648399334798f3e47dba] (start) waiting to cleanup test.bar from { _id: 5081.0 } -> { _id: 5543.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:51.478 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:28:51.478 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:51-5127648399334798f3e47dbb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536131478), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 5081.0 }, max: { _id: 5543.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:51.478 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:51.479 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 91 version: 13|1||51276475bd1f99446659365c based on: 12|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:51.479 [cleanupOldData-5127648299334798f3e47db0] moveChunk deleted 462 documents for test.bar from { _id: 4619.0 } -> { _id: 5081.0 } m30001| Fri Feb 22 12:28:51.480 [cleanupOldData-5127648399334798f3e47db5] moveChunk starting delete for: test.foo from { _id: 10306.0 } -> { _id: 11243.0 } m30999| Fri Feb 22 12:28:51.481 [Balancer] ChunkManager: 
time to load chunks for test.bar: 1ms sequenceNumber: 92 version: 13|1||51276475bd1f99446659365c based on: 12|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:51.481 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:51.482 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:28:51.483 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 93 version: 13|1||51276475bd1f99446659365c based on: 13|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:51.483 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 13|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:51.483 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 13000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 93 m30999| Fri Feb 22 12:28:51.484 [conn1] setShardVersion success: { oldVersion: Timestamp 12000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:28:51.497 [cleanupOldData-5127648399334798f3e47dba] waiting to remove documents for test.bar from { _id: 5081.0 } -> { _id: 5543.0 } 13000 m30001| Fri Feb 22 12:28:52.110 [cleanupOldData-5127648399334798f3e47db5] moveChunk deleted 937 documents for test.foo from { _id: 10306.0 } -> { _id: 11243.0 } m30001| Fri Feb 22 12:28:52.110 [cleanupOldData-5127648399334798f3e47dba] moveChunk starting delete for: test.bar from { _id: 5081.0 } -> { _id: 5543.0 } m30999| Fri Feb 22 12:28:52.482 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:28:52.483 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:28:52.483 
[Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:52 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276484bd1f994466593669" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276483bd1f994466593668" } }
m30999| Fri Feb 22 12:28:52.484 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276484bd1f994466593669
m30999| Fri Feb 22 12:28:52.484 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:52.484 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:52.484 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:52.486 [Balancer] shard0001 has more chunks me:95 best: shard0000:12
m30999| Fri Feb 22 12:28:52.486 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:52.486 [Balancer] donor : shard0001 chunks on 95
m30999| Fri Feb 22 12:28:52.486 [Balancer] receiver : shard0000 chunks on 12
m30999| Fri Feb 22 12:28:52.486 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:52.486 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_11243.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:52.489 [Balancer] shard0001 has more chunks me:205 best: shard0000:12
m30999| Fri Feb 22 12:28:52.489 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:52.489 [Balancer] donor : shard0001 chunks on 205
m30999| Fri Feb 22 12:28:52.489 [Balancer] receiver : shard0000 chunks on 12
m30999| Fri Feb 22 12:28:52.489 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:52.489 [Balancer] ns: 
test.bar going to move { _id: "test.bar-_id_5543.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 5543.0 }, max: { _id: 6005.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:52.489 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { _id: 11243.0 }max: { _id: 12180.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:52.489 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:52.489 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 11243.0 }, max: { _id: 12180.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_11243.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:52.490 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648499334798f3e47dbc m30001| Fri Feb 22 12:28:52.490 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-5127648499334798f3e47dbd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536132490), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 11243.0 }, max: { _id: 12180.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:52.492 [conn4] moveChunk request accepted at version 13|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:52.494 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:28:52.494 [migrateThread] starting receiving-end of migration of chunk { _id: 11243.0 } -> { _id: 12180.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:52.504 [conn4] moveChunk data transfer progress: { active: true, 
ns: "test.foo", from: "localhost:30001", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 68370, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.515 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 326, clonedBytes: 172780, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.525 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 531, clonedBytes: 281430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.535 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 733, clonedBytes: 388490, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:52.546 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:52.546 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 11243.0 } -> { _id: 12180.0 } m30000| Fri Feb 22 12:28:52.549 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 11243.0 } -> { _id: 12180.0 } m30001| Fri Feb 22 12:28:52.551 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.551 [conn4] moveChunk setting version to: 
14|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:28:52.552 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:52.553 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 94 version: 13|1||51276475bd1f99446659365b based on: 13|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:52.553 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 13|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:52.554 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 13000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 94 m30001| Fri Feb 22 12:28:52.554 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:52.559 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 11243.0 } -> { _id: 12180.0 } m30000| Fri Feb 22 12:28:52.559 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 11243.0 } -> { _id: 12180.0 } m30000| Fri Feb 22 12:28:52.559 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-51276484c49297cf54df55ec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536132559), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 11243.0 }, max: { _id: 12180.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:28:52.562 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 11243.0 }, max: { _id: 12180.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:52.562 [conn4] moveChunk updating self version to: 
14|1||51276475bd1f99446659365b through { _id: 12180.0 } -> { _id: 13117.0 } for collection 'test.foo' m30001| Fri Feb 22 12:28:52.563 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-5127648499334798f3e47dbe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536132563), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 11243.0 }, max: { _id: 12180.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:52.563 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:52.563 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:52.563 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:28:52.563 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 13000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 14000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:28:52.563 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:52.563 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:52.563 [cleanupOldData-5127648499334798f3e47dbf] (start) waiting to cleanup test.foo from { _id: 11243.0 } -> { _id: 12180.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:52.563 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:52.563 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-5127648499334798f3e47dc0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536132563), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 11243.0 }, max: { _id: 12180.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:52.564 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:52.564 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 95 version: 14|1||51276475bd1f99446659365b based on: 13|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:52.566 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 96 version: 14|1||51276475bd1f99446659365b based on: 13|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:52.566 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 13|1||000000000000000000000000min: { _id: 5543.0 }max: { _id: 6005.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:52.567 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:52.567 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 5543.0 }, max: { _id: 6005.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_5543.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:28:52.567 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 97 version: 14|1||51276475bd1f99446659365b based on: 14|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:52.568 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 14|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:28:52.568 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 97 m30999| Fri Feb 22 12:28:52.568 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:52.568 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648499334798f3e47dc1 m30001| Fri Feb 22 12:28:52.568 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-5127648499334798f3e47dc2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536132568), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 5543.0 }, max: { _id: 6005.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:52.570 [conn4] moveChunk request accepted at version 13|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:52.571 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:52.571 [migrateThread] starting receiving-end of migration of chunk { _id: 5543.0 } -> { _id: 6005.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:52.582 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 5543.0 }, max: { _id: 6005.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 115, clonedBytes: 119945, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.583 [cleanupOldData-5127648499334798f3e47dbf] waiting to remove documents for test.foo from { _id: 11243.0 } -> { _id: 12180.0 } m30001| Fri Feb 22 12:28:52.592 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 5543.0 }, max: { _id: 6005.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 293, clonedBytes: 305599, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.602 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 5543.0 }, max: { _id: 6005.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:52.602 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:52.602 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 5543.0 } -> { _id: 6005.0 } m30000| Fri Feb 22 12:28:52.605 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 5543.0 } -> { _id: 6005.0 } m30001| Fri Feb 22 12:28:52.612 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 5543.0 }, max: { _id: 6005.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:52.613 [conn4] moveChunk setting version to: 14|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:52.613 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:52.615 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 98 version: 13|1||51276475bd1f99446659365c based on: 13|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:52.615 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 13|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:52.615 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 5543.0 } -> { _id: 6005.0 } m30000| Fri Feb 22 12:28:52.615 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 
5543.0 } -> { _id: 6005.0 } m30999| Fri Feb 22 12:28:52.615 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 13000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 98 m30000| Fri Feb 22 12:28:52.615 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-51276484c49297cf54df55ed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536132615), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 5543.0 }, max: { _id: 6005.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:28:52.615 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:52.623 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 5543.0 }, max: { _id: 6005.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:52.623 [conn4] moveChunk updating self version to: 14|1||51276475bd1f99446659365c through { _id: 6005.0 } -> { _id: 6467.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:52.624 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-5127648499334798f3e47dc3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536132624), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 5543.0 }, max: { _id: 6005.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:52.624 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:52.624 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:52.624 
[conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:28:52.624 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 13000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 14000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:52.624 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:52.624 [cleanupOldData-5127648499334798f3e47dc4] (start) waiting to cleanup test.bar from { _id: 5543.0 } -> { _id: 6005.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:52.624 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:52.625 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:52.625 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:52-5127648499334798f3e47dc5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536132625), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 5543.0 }, max: { _id: 6005.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:52.625 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:52.626 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 99 version: 14|1||51276475bd1f99446659365c based on: 13|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:52.628 [cleanupOldData-5127648399334798f3e47dba] moveChunk deleted 462 documents for test.bar from { _id: 5081.0 } -> { _id: 5543.0 } m30999| Fri Feb 22 12:28:52.628 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 100 version: 14|1||51276475bd1f99446659365c based on: 13|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:52.628 [cleanupOldData-5127648499334798f3e47dbf] moveChunk starting delete for: test.foo from { _id: 11243.0 } -> { _id: 12180.0 } m30999| Fri Feb 22 12:28:52.629 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:52.629 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
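The `moveChunk data transfer progress` entries report running `cloned`/`clonedBytes` counts, from which the per-document size falls out directly: the final "steady" counts above are 937 docs / 496610 bytes for test.foo and 462 docs / 481866 bytes for test.bar. A throwaway sketch (not part of the harness) to make that arithmetic explicit:

```python
def avg_doc_bytes(cloned, cloned_bytes):
    """Average document size implied by a migration progress entry."""
    return cloned_bytes / cloned if cloned else 0.0

# Final "steady" counts reported in the rounds above:
print(avg_doc_bytes(937, 496610))  # test.foo → 530.0
print(avg_doc_bytes(462, 481866))  # test.bar → 1043.0
```

With a 1 MB `maxChunkSizeBytes` in force, those document sizes are consistent with the observed chunk boundaries of ~937 and ~462 documents per chunk.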
m30999| Fri Feb 22 12:28:52.630 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 101 version: 14|1||51276475bd1f99446659365c based on: 14|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:52.631 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 14|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:52.631 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 101
m30999| Fri Feb 22 12:28:52.631 [conn1] setShardVersion success: { oldVersion: Timestamp 13000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:52.644 [cleanupOldData-5127648499334798f3e47dc4] waiting to remove documents for test.bar from { _id: 5543.0 } -> { _id: 6005.0 }
14000
m30999| Fri Feb 22 12:28:53.630 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:53.630 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:53.631 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:53 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276485bd1f99446659366a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276484bd1f994466593669" } }
m30999| Fri Feb 22 12:28:53.631 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276485bd1f99446659366a m30999| Fri Feb 22 12:28:53.631 [Balancer] *** start balancing round m30999| Fri Feb 22 12:28:53.631 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:28:53.631 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:28:53.633 [Balancer] shard0001 has more chunks me:94 best: shard0000:13 m30999| Fri Feb 22 12:28:53.633 [Balancer] collection : test.foo m30999| Fri Feb 22 12:28:53.633 [Balancer] donor : shard0001 chunks on 94 m30999| Fri Feb 22 12:28:53.633 [Balancer] receiver : shard0000 chunks on 13 m30999| Fri Feb 22 12:28:53.633 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:53.633 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_12180.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:53.636 [Balancer] shard0001 has more chunks me:204 best: shard0000:13 m30999| Fri Feb 22 12:28:53.636 [Balancer] collection : test.bar m30999| Fri Feb 22 12:28:53.636 [Balancer] donor : shard0001 chunks on 204 m30999| Fri Feb 22 12:28:53.636 [Balancer] receiver : shard0000 chunks on 13 m30999| Fri Feb 22 12:28:53.636 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:53.636 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_6005.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 6005.0 }, max: { _id: 6467.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:53.636 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 14|1||000000000000000000000000min: { _id: 12180.0 }max: { _id: 13117.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:53.636 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri 
Feb 22 12:28:53.636 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 12180.0 }, max: { _id: 13117.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_12180.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:53.637 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648599334798f3e47dc6 m30001| Fri Feb 22 12:28:53.637 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-5127648599334798f3e47dc7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536133637), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 12180.0 }, max: { _id: 13117.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:53.639 [conn4] moveChunk request accepted at version 14|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:53.642 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:28:53.642 [migrateThread] starting receiving-end of migration of chunk { _id: 12180.0 } -> { _id: 13117.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:53.652 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 137, clonedBytes: 72610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:53.663 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 355, clonedBytes: 188150, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:53.673 [conn4] moveChunk data transfer progress: 
{ active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 572, clonedBytes: 303160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:53.683 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 741, clonedBytes: 392730, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:53.693 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:53.693 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12180.0 } -> { _id: 13117.0 } m30000| Fri Feb 22 12:28:53.694 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12180.0 } -> { _id: 13117.0 } m30001| Fri Feb 22 12:28:53.699 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:53.700 [conn4] moveChunk setting version to: 15|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:28:53.700 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:53.702 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 102 version: 14|1||51276475bd1f99446659365b based on: 14|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:53.702 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 14|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:53.702 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: 
ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 102 m30001| Fri Feb 22 12:28:53.702 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:53.704 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 12180.0 } -> { _id: 13117.0 } m30000| Fri Feb 22 12:28:53.704 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 12180.0 } -> { _id: 13117.0 } m30000| Fri Feb 22 12:28:53.704 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-51276485c49297cf54df55ee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536133704), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 12180.0 }, max: { _id: 13117.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:28:53.709 [cleanupOldData-5127648499334798f3e47dbf] moveChunk deleted 937 documents for test.foo from { _id: 11243.0 } -> { _id: 12180.0 } m30001| Fri Feb 22 12:28:53.709 [cleanupOldData-5127648499334798f3e47dc4] moveChunk starting delete for: test.bar from { _id: 5543.0 } -> { _id: 6005.0 } m30001| Fri Feb 22 12:28:53.710 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 12180.0 }, max: { _id: 13117.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:53.710 [conn4] moveChunk updating self version to: 15|1||51276475bd1f99446659365b through { _id: 13117.0 } -> { _id: 14054.0 } for collection 'test.foo' m30001| Fri Feb 22 12:28:53.711 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-5127648599334798f3e47dc8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new 
Date(1361536133711), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 12180.0 }, max: { _id: 13117.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:53.711 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:28:53.711 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 14000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 15000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:28:53.711 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:53.711 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:53.711 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:53.711 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:53.711 [cleanupOldData-5127648599334798f3e47dc9] (start) waiting to cleanup test.foo from { _id: 12180.0 } -> { _id: 13117.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:53.711 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
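Each `moveChunk.from` event above carries a six-step timing breakdown in milliseconds (e.g. `step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0`), and step 4 dominates in every round logged here. A regex sketch for pulling those timings out, assuming exactly this textual shape (the function name is mine, not from the test suite):

```python
import re

def parse_steps(details):
    """Extract {step_number: millis} from a moveChunk.from details string."""
    return {int(m.group(1)): int(m.group(2))
            for m in re.finditer(r"step(\d+) of \d+: (\d+)", details)}

sample = ("step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, "
          "step4 of 6: 57, step5 of 6: 11, step6 of 6: 0")
print(parse_steps(sample))
print(sum(parse_steps(sample).values()))  # total migration time in ms → 73
```

Summing the steps gives the end-to-end migration cost per chunk, which is useful when comparing rounds across a long run like this one.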
m30001| Fri Feb 22 12:28:53.711 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-5127648599334798f3e47dca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536133711), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 12180.0 }, max: { _id: 13117.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:53.712 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:53.712 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 103 version: 15|1||51276475bd1f99446659365b based on: 14|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:53.714 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 104 version: 15|1||51276475bd1f99446659365b based on: 14|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:53.714 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 14|1||000000000000000000000000min: { _id: 6005.0 }max: { _id: 6467.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:53.714 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:53.715 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 6005.0 }, max: { _id: 6467.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_6005.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:28:53.715 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 105 version: 15|1||51276475bd1f99446659365b based on: 15|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:53.715 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 15|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:28:53.716 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 15000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 105 m30999| Fri Feb 22 12:28:53.716 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:53.716 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648599334798f3e47dcb m30001| Fri Feb 22 12:28:53.716 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-5127648599334798f3e47dcc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536133716), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 6005.0 }, max: { _id: 6467.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:53.717 [conn4] moveChunk request accepted at version 14|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:53.718 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:53.718 [migrateThread] starting receiving-end of migration of chunk { _id: 6005.0 } -> { _id: 6467.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:53.729 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6005.0 }, max: { _id: 6467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 135, clonedBytes: 140805, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:53.731 [cleanupOldData-5127648599334798f3e47dc9] waiting to remove documents for test.foo from { _id: 12180.0 } -> { _id: 13117.0 } m30001| Fri Feb 22 12:28:53.739 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 6005.0 }, max: { _id: 6467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 314, clonedBytes: 327502, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:53.747 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:53.748 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 6005.0 } -> { _id: 6467.0 } m30001| Fri Feb 22 12:28:53.749 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6005.0 }, max: { _id: 6467.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:53.750 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 6005.0 } -> { _id: 6467.0 } m30001| Fri Feb 22 12:28:53.759 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6005.0 }, max: { _id: 6467.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:53.760 [conn4] moveChunk setting version to: 15|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:53.760 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:28:53.760 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 6005.0 } -> { _id: 6467.0 } m30000| Fri Feb 22 12:28:53.760 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 6005.0 } -> { _id: 6467.0 } m30000| Fri Feb 22 12:28:53.761 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-51276485c49297cf54df55ef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536133761), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 
6005.0 }, max: { _id: 6467.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:28:53.762 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 106 version: 14|1||51276475bd1f99446659365c based on: 14|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:53.763 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 14|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:53.763 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 14000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 106 m30001| Fri Feb 22 12:28:53.763 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:53.770 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 6005.0 }, max: { _id: 6467.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:53.770 [conn4] moveChunk updating self version to: 15|1||51276475bd1f99446659365c through { _id: 6467.0 } -> { _id: 6929.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:53.771 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-5127648599334798f3e47dcd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536133771), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 6005.0 }, max: { _id: 6467.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:53.771 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:53.771 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 
12:28:53.771 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:28:53.771 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:28:53.771 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 14000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 15000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:53.771 [cleanupOldData-5127648599334798f3e47dce] (start) waiting to cleanup test.bar from { _id: 6005.0 } -> { _id: 6467.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:53.771 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:53.771 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:53.771 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:53-5127648599334798f3e47dcf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536133771), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 6005.0 }, max: { _id: 6467.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:53.771 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:53.773 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 107 version: 15|1||51276475bd1f99446659365c based on: 14|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:53.774 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 108 version: 15|1||51276475bd1f99446659365c based on: 14|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:53.775 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:53.775 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:28:53.776 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 109 version: 15|1||51276475bd1f99446659365c based on: 15|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:53.776 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 15|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:53.777 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 15000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 109 m30999| Fri Feb 22 12:28:53.777 [conn1] setShardVersion success: { oldVersion: Timestamp 14000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:28:53.791 [cleanupOldData-5127648599334798f3e47dce] waiting to remove documents for test.bar from { _id: 6005.0 } -> { _id: 6467.0 } 15000 m30001| Fri Feb 22 12:28:54.301 [cleanupOldData-5127648499334798f3e47dc4] moveChunk deleted 462 documents for test.bar from { _id: 5543.0 } -> { _id: 6005.0 } m30001| Fri Feb 22 12:28:54.301 [cleanupOldData-5127648599334798f3e47dce] moveChunk starting delete for: test.bar from { _id: 6005.0 } -> { _id: 6467.0 } m30999| Fri Feb 22 12:28:54.776 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:28:54.776 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:28:54.776 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:28:54 
2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276486bd1f99446659366b" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276485bd1f99446659366a" } } m30999| Fri Feb 22 12:28:54.777 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276486bd1f99446659366b m30999| Fri Feb 22 12:28:54.777 [Balancer] *** start balancing round m30999| Fri Feb 22 12:28:54.777 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:28:54.777 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:28:54.779 [Balancer] shard0001 has more chunks me:93 best: shard0000:14 m30999| Fri Feb 22 12:28:54.779 [Balancer] collection : test.foo m30999| Fri Feb 22 12:28:54.779 [Balancer] donor : shard0001 chunks on 93 m30999| Fri Feb 22 12:28:54.779 [Balancer] receiver : shard0000 chunks on 14 m30999| Fri Feb 22 12:28:54.779 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:54.779 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_13117.0", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:54.781 [Balancer] shard0001 has more chunks me:203 best: shard0000:14 m30999| Fri Feb 22 12:28:54.781 [Balancer] collection : test.bar m30999| Fri Feb 22 12:28:54.781 [Balancer] donor : shard0001 chunks on 203 m30999| Fri Feb 22 12:28:54.781 [Balancer] receiver : shard0000 chunks on 14 m30999| Fri Feb 22 12:28:54.781 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:54.781 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_6467.0", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 6467.0 }, max: { _id: 6929.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:54.781 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
shard0001:localhost:30001lastmod: 15|1||000000000000000000000000min: { _id: 13117.0 }max: { _id: 14054.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:54.781 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:54.781 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 13117.0 }, max: { _id: 14054.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_13117.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:54.782 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648699334798f3e47dd0 m30001| Fri Feb 22 12:28:54.782 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-5127648699334798f3e47dd1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536134782), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 13117.0 }, max: { _id: 14054.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:54.783 [conn4] moveChunk request accepted at version 15|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:54.785 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:28:54.785 [migrateThread] starting receiving-end of migration of chunk { _id: 13117.0 } -> { _id: 14054.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:54.796 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 63600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:54.806 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 296, clonedBytes: 156880, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:54.816 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 501, clonedBytes: 265530, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:54.826 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 706, clonedBytes: 374180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:54.838 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:54.838 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 13117.0 } -> { _id: 14054.0 } m30000| Fri Feb 22 12:28:54.840 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 13117.0 } -> { _id: 14054.0 } m30001| Fri Feb 22 12:28:54.843 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:54.843 [conn4] moveChunk setting version to: 16|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:28:54.843 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:54.845 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 110 version: 15|1||51276475bd1f99446659365b based on: 15|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:54.845 [conn1] warning: chunk manager reload 
forced for collection 'test.foo', config version is 15|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:54.845 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 15000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 110 m30001| Fri Feb 22 12:28:54.845 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:54.849 [cleanupOldData-5127648599334798f3e47dce] moveChunk deleted 462 documents for test.bar from { _id: 6005.0 } -> { _id: 6467.0 } m30001| Fri Feb 22 12:28:54.849 [cleanupOldData-5127648599334798f3e47dc9] moveChunk starting delete for: test.foo from { _id: 12180.0 } -> { _id: 13117.0 } m30000| Fri Feb 22 12:28:54.851 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 13117.0 } -> { _id: 14054.0 } m30000| Fri Feb 22 12:28:54.851 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 13117.0 } -> { _id: 14054.0 } m30000| Fri Feb 22 12:28:54.851 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-51276486c49297cf54df55f0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536134851), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 13117.0 }, max: { _id: 14054.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:28:54.853 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 13117.0 }, max: { _id: 14054.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:54.853 [conn4] moveChunk updating self version to: 16|1||51276475bd1f99446659365b through { _id: 14054.0 } -> { _id: 14991.0 } 
for collection 'test.foo' m30001| Fri Feb 22 12:28:54.854 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-5127648699334798f3e47dd2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536134854), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 13117.0 }, max: { _id: 14054.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:54.854 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:54.854 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:28:54.854 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:28:54.854 [conn4] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 15000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 16000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:28:54.854 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:54.854 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:54.854 [cleanupOldData-5127648699334798f3e47dd3] (start) waiting to cleanup test.foo from { _id: 13117.0 } -> { _id: 14054.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:54.854 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:28:54.854 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-5127648699334798f3e47dd4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536134854), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 13117.0 }, max: { _id: 14054.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 57, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:54.854 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:54.855 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 111 version: 16|1||51276475bd1f99446659365b based on: 15|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:54.856 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 112 version: 16|1||51276475bd1f99446659365b based on: 15|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:54.856 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 15|1||000000000000000000000000min: { _id: 6467.0 }max: { _id: 6929.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:54.856 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:54.857 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 6467.0 }, max: { _id: 6929.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_6467.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:28:54.857 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 113 version: 16|1||51276475bd1f99446659365b based on: 16|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:54.857 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648699334798f3e47dd5 m30001| Fri Feb 
22 12:28:54.857 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-5127648699334798f3e47dd6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536134857), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 6467.0 }, max: { _id: 6929.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:28:54.857 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 16|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:54.858 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 16000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 113 m30999| Fri Feb 22 12:28:54.858 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:54.858 [conn4] moveChunk request accepted at version 15|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:54.859 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:54.859 [migrateThread] starting receiving-end of migration of chunk { _id: 6467.0 } -> { _id: 6929.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:54.870 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6467.0 }, max: { _id: 6929.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 186, clonedBytes: 193998, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:54.874 [cleanupOldData-5127648699334798f3e47dd3] waiting to remove documents for test.foo from { _id: 13117.0 } -> { _id: 14054.0 } m30001| Fri Feb 22 12:28:54.880 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 6467.0 }, max: { _id: 6929.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 450, clonedBytes: 469350, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:54.880 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:54.880 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 6467.0 } -> { _id: 6929.0 } m30000| Fri Feb 22 12:28:54.882 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 6467.0 } -> { _id: 6929.0 } m30001| Fri Feb 22 12:28:54.890 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6467.0 }, max: { _id: 6929.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:54.890 [conn4] moveChunk setting version to: 16|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:54.890 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:54.892 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 114 version: 15|1||51276475bd1f99446659365c based on: 15|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:54.893 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 15|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:54.893 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 15000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 114 m30000| Fri Feb 22 12:28:54.893 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 6467.0 } -> { _id: 6929.0 } m30000| Fri Feb 22 12:28:54.893 [migrateThread] migrate commit flushed to 
journal for 'test.bar' { _id: 6467.0 } -> { _id: 6929.0 } m30000| Fri Feb 22 12:28:54.893 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-51276486c49297cf54df55f1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536134893), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 6467.0 }, max: { _id: 6929.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:28:54.893 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:28:54.900 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 6467.0 }, max: { _id: 6929.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:54.901 [conn4] moveChunk updating self version to: 16|1||51276475bd1f99446659365c through { _id: 6929.0 } -> { _id: 7391.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:54.901 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-5127648699334798f3e47dd7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536134901), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 6467.0 }, max: { _id: 6929.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:54.901 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:54.901 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:54.901 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:28:54.901 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 15000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 16000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:54.901 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:54.901 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:54.901 [cleanupOldData-5127648699334798f3e47dd8] (start) waiting to cleanup test.bar from { _id: 6467.0 } -> { _id: 6929.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:54.902 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:28:54.902 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:54-5127648699334798f3e47dd9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536134902), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 6467.0 }, max: { _id: 6929.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:54.902 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:54.903 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 115 version: 16|1||51276475bd1f99446659365c based on: 15|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:54.904 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 116 version: 16|1||51276475bd1f99446659365c based on: 15|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:54.905 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:54.905 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:28:54.907 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 117 version: 16|1||51276475bd1f99446659365c based on: 16|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:54.907 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 16|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:54.907 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 16000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 117 m30999| Fri Feb 22 12:28:54.907 [conn1] setShardVersion success: { oldVersion: Timestamp 15000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:28:54.921 [cleanupOldData-5127648699334798f3e47dd8] waiting to remove documents for test.bar from { _id: 6467.0 } -> { _id: 6929.0 } 16000 m30001| Fri Feb 22 12:28:55.802 [cleanupOldData-5127648599334798f3e47dc9] moveChunk deleted 937 documents for test.foo from { _id: 12180.0 } -> { _id: 13117.0 } m30001| Fri Feb 22 12:28:55.802 [cleanupOldData-5127648699334798f3e47dd8] moveChunk starting delete for: test.bar from { _id: 6467.0 } -> { _id: 6929.0 } m30999| Fri Feb 22 12:28:55.906 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:28:55.907 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:28:55.907 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:28:55 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276487bd1f99446659366c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276486bd1f99446659366b" } } m30999| Fri Feb 22 12:28:55.908 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276487bd1f99446659366c m30999| Fri Feb 22 12:28:55.908 [Balancer] *** start balancing round m30999| Fri Feb 22 12:28:55.908 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:28:55.908 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:28:55.910 [Balancer] shard0001 has more chunks me:92 best: shard0000:15 m30999| Fri Feb 22 12:28:55.910 [Balancer] collection : test.foo m30999| Fri Feb 22 12:28:55.910 [Balancer] donor : shard0001 chunks on 92 m30999| Fri Feb 22 12:28:55.910 [Balancer] receiver : shard0000 chunks on 15 m30999| Fri Feb 22 12:28:55.910 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:55.910 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_14054.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:55.912 [Balancer] shard0001 has more chunks me:202 best: shard0000:15 m30999| Fri Feb 22 12:28:55.913 [Balancer] collection : test.bar m30999| Fri Feb 22 12:28:55.913 [Balancer] donor : shard0001 chunks on 202 m30999| Fri Feb 22 12:28:55.913 [Balancer] receiver : shard0000 chunks on 15 m30999| Fri Feb 22 12:28:55.913 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:55.913 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_6929.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 6929.0 }, max: { _id: 7391.0 }, shard: "shard0001" } from: shard0001 to: 
shard0000 tag []
m30999| Fri Feb 22 12:28:55.913 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 16|1||000000000000000000000000min: { _id: 14054.0 }max: { _id: 14991.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:55.913 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:55.913 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 14054.0 }, max: { _id: 14991.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_14054.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:55.914 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648799334798f3e47dda
m30001| Fri Feb 22 12:28:55.914 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:55-5127648799334798f3e47ddb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536135914), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 14054.0 }, max: { _id: 14991.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:55.916 [conn4] moveChunk request accepted at version 16|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:55.919 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:55.919 [migrateThread] starting receiving-end of migration of chunk { _id: 14054.0 } -> { _id: 14991.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:55.929 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 127, clonedBytes: 67310, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001|
Fri Feb 22 12:28:55.940 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 320, clonedBytes: 169600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:55.950 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 524, clonedBytes: 277720, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:55.960 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 731, clonedBytes: 387430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:55.971 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:55.971 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 14054.0 } -> { _id: 14991.0 }
m30000| Fri Feb 22 12:28:55.974 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 14054.0 } -> { _id: 14991.0 }
m30001| Fri Feb 22 12:28:55.976 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:55.977 [conn4] moveChunk setting version to: 17|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:55.977 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:55.979 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 118 version: 16|1||51276475bd1f99446659365b based on:
16|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:55.979 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 16|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:55.979 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 16000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 118
m30001| Fri Feb 22 12:28:55.979 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:55.984 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 14054.0 } -> { _id: 14991.0 }
m30000| Fri Feb 22 12:28:55.984 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 14054.0 } -> { _id: 14991.0 }
m30000| Fri Feb 22 12:28:55.984 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:55-51276487c49297cf54df55f2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536135984), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 14054.0 }, max: { _id: 14991.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:28:55.987 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 14054.0 }, max: { _id: 14991.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:55.987 [conn4] moveChunk updating self version to: 17|1||51276475bd1f99446659365b through { _id: 14991.0 } -> { _id: 15928.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:55.988 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:55-5127648799334798f3e47ddc", server:
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536135988), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 14054.0 }, max: { _id: 14991.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:55.988 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:28:55.988 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:28:55.988 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 16000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 17000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:55.988 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:55.988 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:55.988 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:55.988 [cleanupOldData-5127648799334798f3e47ddd] (start) waiting to cleanup test.foo from { _id: 14054.0 } -> { _id: 14991.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:55.988 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:55.988 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:55-5127648799334798f3e47dde", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536135988), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 14054.0 }, max: { _id: 14991.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:55.988 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:55.989 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 119 version: 17|1||51276475bd1f99446659365b based on: 16|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:55.991 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 120 version: 17|1||51276475bd1f99446659365b based on: 16|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:55.991 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 16|1||000000000000000000000000min: { _id: 6929.0 }max: { _id: 7391.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:55.991 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:55.992 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 6929.0 }, max: { _id: 7391.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_6929.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:55.993 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 121 version: 17|1||51276475bd1f99446659365b based on: 17|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:55.993 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648799334798f3e47ddf
m30999| Fri Feb
22 12:28:55.993 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 17|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:55.993 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:55-5127648799334798f3e47de0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536135993), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 6929.0 }, max: { _id: 7391.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:28:55.993 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 17000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 121
m30999| Fri Feb 22 12:28:55.993 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:55.994 [conn4] moveChunk request accepted at version 16|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:55.996 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:28:55.996 [migrateThread] starting receiving-end of migration of chunk { _id: 6929.0 } -> { _id: 7391.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:56.006 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6929.0 }, max: { _id: 7391.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 133, clonedBytes: 138719, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:56.008 [cleanupOldData-5127648799334798f3e47ddd] waiting to remove documents for test.foo from { _id: 14054.0 } -> { _id: 14991.0 }
m30001| Fri Feb 22 12:28:56.017 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from:
"localhost:30001", min: { _id: 6929.0 }, max: { _id: 7391.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 313, clonedBytes: 326459, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:56.025 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:56.025 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 6929.0 } -> { _id: 7391.0 }
m30001| Fri Feb 22 12:28:56.027 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6929.0 }, max: { _id: 7391.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:56.028 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 6929.0 } -> { _id: 7391.0 }
m30001| Fri Feb 22 12:28:56.037 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 6929.0 }, max: { _id: 7391.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:56.038 [conn4] moveChunk setting version to: 17|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:56.038 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:56.038 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 6929.0 } -> { _id: 7391.0 }
m30000| Fri Feb 22 12:28:56.038 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 6929.0 } -> { _id: 7391.0 }
m30000| Fri Feb 22 12:28:56.038 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:56-51276488c49297cf54df55f3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536136038), what: "moveChunk.to", ns: "test.bar", details: { min: { _id:
6929.0 }, max: { _id: 7391.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:28:56.041 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 122 version: 16|1||51276475bd1f99446659365c based on: 16|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:56.041 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 16|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:56.041 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 16000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 122
m30001| Fri Feb 22 12:28:56.041 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:28:56.048 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 6929.0 }, max: { _id: 7391.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:56.048 [conn4] moveChunk updating self version to: 17|1||51276475bd1f99446659365c through { _id: 7391.0 } -> { _id: 7853.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:56.049 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:56-5127648899334798f3e47de1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536136049), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 6929.0 }, max: { _id: 7391.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:56.049 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:28:56.049 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 16000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 17000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:28:56.049 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:56.049 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:56.049 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:56.049 [cleanupOldData-5127648899334798f3e47de2] (start) waiting to cleanup test.bar from { _id: 6929.0 } -> { _id: 7391.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:56.049 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:56.050 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:56.050 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:56-5127648899334798f3e47de3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536136050), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 6929.0 }, max: { _id: 7391.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:56.050 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:56.051 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 123 version: 17|1||51276475bd1f99446659365c based on: 16|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:56.053 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 124 version: 17|1||51276475bd1f99446659365c based on: 16|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:56.054 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:56.054 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:56.056 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 125 version: 17|1||51276475bd1f99446659365c based on: 17|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:56.056 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 17|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:56.056 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 17000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 125
m30999| Fri Feb 22 12:28:56.056 [conn1] setShardVersion success: { oldVersion: Timestamp 16000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:56.069 [cleanupOldData-5127648899334798f3e47de2] waiting to remove documents for test.bar from { _id: 6929.0 } -> { _id: 7391.0 }
m30001| Fri Feb 22 12:28:56.169 [cleanupOldData-5127648699334798f3e47dd8] moveChunk deleted 462 documents for test.bar from { _id: 6467.0 } -> { _id: 6929.0 }
m30001| Fri Feb 22 12:28:56.170 [cleanupOldData-5127648899334798f3e47de2] moveChunk starting delete for: test.bar from { _id: 6929.0 } -> { _id: 7391.0 }
17000
m30001| Fri Feb 22 12:28:56.801 [cleanupOldData-5127648899334798f3e47de2] moveChunk deleted 462 documents for test.bar from { _id: 6929.0 } -> { _id: 7391.0 }
m30001| Fri Feb 22 12:28:56.802 [cleanupOldData-5127648799334798f3e47ddd] moveChunk starting delete for: test.foo from { _id: 14054.0 } -> { _id: 14991.0 }
m30999| Fri Feb 22 12:28:57.055 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:57.056 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:57.056 [Balancer] about to acquire
distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:57 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276489bd1f99446659366d" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276487bd1f99446659366c" } }
m30999| Fri Feb 22 12:28:57.057 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276489bd1f99446659366d
m30999| Fri Feb 22 12:28:57.057 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:57.057 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:57.057 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:57.060 [Balancer] shard0001 has more chunks me:91 best: shard0000:16
m30999| Fri Feb 22 12:28:57.060 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:57.060 [Balancer] donor : shard0001 chunks on 91
m30999| Fri Feb 22 12:28:57.060 [Balancer] receiver : shard0000 chunks on 16
m30999| Fri Feb 22 12:28:57.060 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:57.060 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_14991.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:57.062 [Balancer] shard0001 has more chunks me:201 best: shard0000:16
m30999| Fri Feb 22 12:28:57.062 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:57.062 [Balancer] donor : shard0001 chunks on 201
m30999| Fri Feb 22 12:28:57.062 [Balancer] receiver : shard0000 chunks on 16
m30999| Fri Feb 22 12:28:57.062 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:57.062 [Balancer] ns: test.bar going to move { _id:
"test.bar-_id_7391.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 7391.0 }, max: { _id: 7853.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:57.063 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 17|1||000000000000000000000000min: { _id: 14991.0 }max: { _id: 15928.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:57.063 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:57.063 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 14991.0 }, max: { _id: 15928.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_14991.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:57.064 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648999334798f3e47de4
m30001| Fri Feb 22 12:28:57.064 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-5127648999334798f3e47de5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536137064), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 14991.0 }, max: { _id: 15928.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:57.065 [conn4] moveChunk request accepted at version 17|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:57.068 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:57.069 [migrateThread] starting receiving-end of migration of chunk { _id: 14991.0 } -> { _id: 15928.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:57.079 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from:
"localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 95, clonedBytes: 50350, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:57.089 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 299, clonedBytes: 158470, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:57.100 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 501, clonedBytes: 265530, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:57.110 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 704, clonedBytes: 373120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:57.123 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:57.123 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 14991.0 } -> { _id: 15928.0 }
m30001| Fri Feb 22 12:28:57.126 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:57.127 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 14991.0 } -> { _id: 15928.0 }
m30001| Fri Feb 22 12:28:57.158 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from:
"localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:57.158 [conn4] moveChunk setting version to: 18|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:57.159 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:28:57.160 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 126 version: 17|1||51276475bd1f99446659365b based on: 17|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:57.161 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 17|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:57.161 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 17000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 126
m30001| Fri Feb 22 12:28:57.161 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:28:57.168 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 14991.0 } -> { _id: 15928.0 }
m30000| Fri Feb 22 12:28:57.168 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 14991.0 } -> { _id: 15928.0 }
m30000| Fri Feb 22 12:28:57.168 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-51276489c49297cf54df55f4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536137168), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 14991.0 }, max: { _id: 15928.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 53, step4 of 5: 0, step5 of 5: 44 } }
m30001| Fri Feb 22 12:28:57.169 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from:
"localhost:30001", min: { _id: 14991.0 }, max: { _id: 15928.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:57.169 [conn4] moveChunk updating self version to: 18|1||51276475bd1f99446659365b through { _id: 15928.0 } -> { _id: 16865.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:57.169 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-5127648999334798f3e47de6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536137169), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 14991.0 }, max: { _id: 15928.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:57.170 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:57.170 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:57.170 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:28:57.170 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 17000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 18000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:57.170 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:57.170 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:57.170 [cleanupOldData-5127648999334798f3e47de7] (start) waiting to cleanup test.foo from { _id: 14991.0 } -> { _id: 15928.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:57.170 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:57.170 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-5127648999334798f3e47de8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536137170), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 14991.0 }, max: { _id: 15928.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:28:57.170 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 14991.0 }, max: { _id: 15928.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_14991.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:2980 w:64 reslen:37 107ms
m30999| Fri Feb 22 12:28:57.170 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:57.172 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 127
version: 18|1||51276475bd1f99446659365b based on: 17|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:57.173 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 128 version: 18|1||51276475bd1f99446659365b based on: 17|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:57.174 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 17|1||000000000000000000000000min: { _id: 7391.0 }max: { _id: 7853.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:57.174 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:57.174 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 7391.0 }, max: { _id: 7853.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_7391.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:57.175 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 129 version: 18|1||51276475bd1f99446659365b based on: 18|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:57.175 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 18|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:57.175 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 18000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 129
m30999| Fri Feb 22 12:28:57.175 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:57.175 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts :
5127648999334798f3e47de9 m30001| Fri Feb 22 12:28:57.175 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-5127648999334798f3e47dea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536137175), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 7391.0 }, max: { _id: 7853.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:57.177 [conn4] moveChunk request accepted at version 17|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:57.179 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:57.179 [migrateThread] starting receiving-end of migration of chunk { _id: 7391.0 } -> { _id: 7853.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:57.189 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 7391.0 }, max: { _id: 7853.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 103, clonedBytes: 107429, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:57.190 [cleanupOldData-5127648999334798f3e47de7] waiting to remove documents for test.foo from { _id: 14991.0 } -> { _id: 15928.0 } m30001| Fri Feb 22 12:28:57.199 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 7391.0 }, max: { _id: 7853.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 280, clonedBytes: 292040, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:57.210 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 7391.0 }, max: { _id: 7853.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 454, clonedBytes: 473522, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:57.210 [migrateThread] Waiting for replication to catch up before entering 
critical section m30000| Fri Feb 22 12:28:57.210 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 7391.0 } -> { _id: 7853.0 } m30000| Fri Feb 22 12:28:57.213 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 7391.0 } -> { _id: 7853.0 } m30001| Fri Feb 22 12:28:57.220 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 7391.0 }, max: { _id: 7853.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:57.220 [conn4] moveChunk setting version to: 18|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:57.220 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:57.223 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 130 version: 17|1||51276475bd1f99446659365c based on: 17|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:57.223 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 17|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:57.223 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 17000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 130 m30001| Fri Feb 22 12:28:57.223 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:57.224 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 7391.0 } -> { _id: 7853.0 } m30000| Fri Feb 22 12:28:57.224 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 7391.0 } -> { _id: 7853.0 } m30000| Fri Feb 22 12:28:57.224 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-51276489c49297cf54df55f5", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536137224), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 7391.0 }, max: { _id: 7853.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:28:57.230 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 7391.0 }, max: { _id: 7853.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:57.231 [conn4] moveChunk updating self version to: 18|1||51276475bd1f99446659365c through { _id: 7853.0 } -> { _id: 8315.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:57.231 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-5127648999334798f3e47deb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536137231), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 7391.0 }, max: { _id: 7853.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:28:57.231 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:28:57.231 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 17000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 18000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:57.231 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:57.231 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:28:57.234 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 131 version: 18|1||51276475bd1f99446659365c based on: 17|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:57.236 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 132 version: 18|1||51276475bd1f99446659365c based on: 18|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:57.236 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 18|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:57.236 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 18000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 132 m30999| Fri Feb 22 12:28:57.237 [conn1] setShardVersion success: { oldVersion: Timestamp 17000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:28:57.243 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:57.243 [cleanupOldData-5127648999334798f3e47dec] (start) waiting to cleanup test.bar from { _id: 7391.0 } -> { _id: 
7853.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:57.243 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:57.244 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:28:57.244 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:57-5127648999334798f3e47ded", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536137244), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 7391.0 }, max: { _id: 7853.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 11 } } m30999| Fri Feb 22 12:28:57.244 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:57.244 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:57.245 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30001| Fri Feb 22 12:28:57.263 [cleanupOldData-5127648999334798f3e47dec] waiting to remove documents for test.bar from { _id: 7391.0 } -> { _id: 7853.0 } m30001| Fri Feb 22 12:28:57.544 [cleanupOldData-5127648799334798f3e47ddd] moveChunk deleted 937 documents for test.foo from { _id: 14054.0 } -> { _id: 14991.0 } m30001| Fri Feb 22 12:28:57.544 [cleanupOldData-5127648999334798f3e47de7] moveChunk starting delete for: test.foo from { _id: 14991.0 } -> { _id: 15928.0 } 18000 m30999| Fri Feb 22 12:28:58.245 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:28:58.246 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:28:58.246 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", 
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:28:58 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127648abd1f99446659366e" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276489bd1f99446659366d" } } m30999| Fri Feb 22 12:28:58.247 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127648abd1f99446659366e m30999| Fri Feb 22 12:28:58.247 [Balancer] *** start balancing round m30999| Fri Feb 22 12:28:58.247 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:28:58.247 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:28:58.249 [Balancer] shard0001 has more chunks me:90 best: shard0000:17 m30999| Fri Feb 22 12:28:58.249 [Balancer] collection : test.foo m30999| Fri Feb 22 12:28:58.249 [Balancer] donor : shard0001 chunks on 90 m30999| Fri Feb 22 12:28:58.249 [Balancer] receiver : shard0000 chunks on 17 m30999| Fri Feb 22 12:28:58.249 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:58.249 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_15928.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 15928.0 }, max: { _id: 16865.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:58.251 [Balancer] shard0001 has more chunks me:200 best: shard0000:17 m30999| Fri Feb 22 12:28:58.251 [Balancer] collection : test.bar m30999| Fri Feb 22 12:28:58.251 [Balancer] donor : shard0001 chunks on 200 m30999| Fri Feb 22 12:28:58.251 [Balancer] receiver : shard0000 chunks on 17 m30999| Fri Feb 22 12:28:58.251 [Balancer] threshold : 2 m30999| Fri Feb 22 12:28:58.251 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_7853.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 7853.0 }, max: { _id: 8315.0 }, shard: "shard0001" } from: 
shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:28:58.251 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 18|1||000000000000000000000000min: { _id: 15928.0 }max: { _id: 16865.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:58.251 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:58.251 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 15928.0 }, max: { _id: 16865.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_15928.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:28:58.252 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648a99334798f3e47dee m30001| Fri Feb 22 12:28:58.252 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648a99334798f3e47def", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536138252), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 15928.0 }, max: { _id: 16865.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:58.253 [conn4] moveChunk request accepted at version 18|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:28:58.256 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:28:58.256 [migrateThread] starting receiving-end of migration of chunk { _id: 15928.0 } -> { _id: 16865.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:58.266 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 15928.0 }, max: { _id: 16865.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 201, clonedBytes: 106530, catchup: 0, steady: 0 }, ok: 1.0 } my mem 
used: 0 m30001| Fri Feb 22 12:28:58.276 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 15928.0 }, max: { _id: 16865.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 485, clonedBytes: 257050, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:58.286 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 15928.0 }, max: { _id: 16865.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 713, clonedBytes: 377890, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:58.295 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:58.295 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 15928.0 } -> { _id: 16865.0 } m30000| Fri Feb 22 12:28:58.296 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 15928.0 } -> { _id: 16865.0 } m30001| Fri Feb 22 12:28:58.297 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 15928.0 }, max: { _id: 16865.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:58.297 [conn4] moveChunk setting version to: 19|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:28:58.297 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:58.299 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 133 version: 18|1||51276475bd1f99446659365b based on: 18|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:58.299 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 18|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:58.299 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: 
"test.foo", configdb: "localhost:30000", version: Timestamp 18000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 133 m30001| Fri Feb 22 12:28:58.299 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:58.306 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 15928.0 } -> { _id: 16865.0 } m30000| Fri Feb 22 12:28:58.306 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 15928.0 } -> { _id: 16865.0 } m30000| Fri Feb 22 12:28:58.306 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648ac49297cf54df55f6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536138306), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 15928.0 }, max: { _id: 16865.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 38, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:28:58.307 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 15928.0 }, max: { _id: 16865.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:58.307 [conn4] moveChunk updating self version to: 19|1||51276475bd1f99446659365b through { _id: 16865.0 } -> { _id: 17802.0 } for collection 'test.foo' m30001| Fri Feb 22 12:28:58.308 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648a99334798f3e47df0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536138308), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 15928.0 }, max: { _id: 16865.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:58.308 [conn4] MigrateFromStatus::done About to acquire global 
write lock to exit critical section m30001| Fri Feb 22 12:28:58.308 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:58.308 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:28:58.308 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 18000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 19000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:28:58.308 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:58.308 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:58.308 [cleanupOldData-5127648a99334798f3e47df1] (start) waiting to cleanup test.foo from { _id: 15928.0 } -> { _id: 16865.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:58.308 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
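Each balancing round above picks the shard holding the most chunks as donor and the one holding the fewest as receiver, and moves a single chunk only when the spread exceeds the logged `threshold : 2`. A rough sketch of that selection, inferred from log lines like `shard0001 has more chunks me:90 best: shard0000:17` (an assumed reconstruction, not the Balancer's real implementation):

```python
# Rough sketch of the balancer's donor/receiver choice, inferred from the log
# ("shard0001 has more chunks me:90 best: shard0000:17 ... threshold : 2").

def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) if the imbalance exceeds the threshold, else None."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] <= threshold:
        return None  # balanced enough; no chunk is moved this round
    return donor, receiver
```

With the counts from this round (`{"shard0001": 90, "shard0000": 17}`) this yields `("shard0001", "shard0000")`, which is why every `moveChunk` in this section flows from shard0001 to shard0000, one chunk per collection per round.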
m30001| Fri Feb 22 12:28:58.308 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648a99334798f3e47df2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536138308), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 15928.0 }, max: { _id: 16865.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:58.308 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:58.309 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 134 version: 19|1||51276475bd1f99446659365b based on: 18|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:58.310 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 135 version: 19|1||51276475bd1f99446659365b based on: 18|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:58.310 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 18|1||000000000000000000000000min: { _id: 7853.0 }max: { _id: 8315.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:28:58.311 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:28:58.311 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 7853.0 }, max: { _id: 8315.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_7853.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:28:58.311 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 136 version: 19|1||51276475bd1f99446659365b based on: 19|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:28:58.311 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 19|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:28:58.311 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 19000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 136 m30999| Fri Feb 22 12:28:58.312 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:28:58.312 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648a99334798f3e47df3 m30001| Fri Feb 22 12:28:58.312 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648a99334798f3e47df4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536138312), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 7853.0 }, max: { _id: 8315.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:58.312 [conn4] moveChunk request accepted at version 18|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:28:58.313 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:28:58.313 [migrateThread] starting receiving-end of migration of chunk { _id: 7853.0 } -> { _id: 8315.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:28:58.324 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 7853.0 }, max: { _id: 8315.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 180, clonedBytes: 187740, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:58.328 [cleanupOldData-5127648a99334798f3e47df1] waiting to remove documents for test.foo from { _id: 15928.0 } -> { _id: 16865.0 } m30001| Fri Feb 22 12:28:58.334 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 7853.0 }, max: { _id: 8315.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 419, clonedBytes: 437017, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:28:58.336 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:28:58.336 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 7853.0 } -> { _id: 8315.0 } m30000| Fri Feb 22 12:28:58.338 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 7853.0 } -> { _id: 8315.0 } m30001| Fri Feb 22 12:28:58.344 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 7853.0 }, max: { _id: 8315.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:28:58.344 [conn4] moveChunk setting version to: 19|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:28:58.344 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:28:58.346 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 137 version: 18|1||51276475bd1f99446659365c based on: 18|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:58.346 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 18|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:58.346 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 18000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 137 m30001| Fri Feb 22 12:28:58.346 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:28:58.348 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 7853.0 } -> { _id: 8315.0 
} m30000| Fri Feb 22 12:28:58.348 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 7853.0 } -> { _id: 8315.0 } m30000| Fri Feb 22 12:28:58.348 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648ac49297cf54df55f7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536138348), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 7853.0 }, max: { _id: 8315.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:28:58.354 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 7853.0 }, max: { _id: 8315.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:28:58.354 [conn4] moveChunk updating self version to: 19|1||51276475bd1f99446659365c through { _id: 8315.0 } -> { _id: 8777.0 } for collection 'test.bar' m30001| Fri Feb 22 12:28:58.355 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648a99334798f3e47df5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536138355), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 7853.0 }, max: { _id: 8315.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:28:58.355 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:58.355 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:58.355 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:28:58.355 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 18000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 19000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:28:58.355 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:28:58.355 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:28:58.355 [cleanupOldData-5127648a99334798f3e47df6] (start) waiting to cleanup test.bar from { _id: 7853.0 } -> { _id: 8315.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:28:58.356 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:28:58.356 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:58-5127648a99334798f3e47df7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536138356), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 7853.0 }, max: { _id: 8315.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:28:58.356 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:28:58.357 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 138 version: 19|1||51276475bd1f99446659365c based on: 18|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:58.358 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 139 version: 19|1||51276475bd1f99446659365c based on: 18|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:28:58.359 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:28:58.359 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:58.360 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 140 version: 19|1||51276475bd1f99446659365c based on: 19|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:58.360 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 19|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:58.361 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 19000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 140
m30999| Fri Feb 22 12:28:58.361 [conn1] setShardVersion success: { oldVersion: Timestamp 18000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:58.375 [cleanupOldData-5127648a99334798f3e47df6] waiting to remove documents for test.bar from { _id: 7853.0 } -> { _id: 8315.0 }
19000
m30001| Fri Feb 22 12:28:58.659 [cleanupOldData-5127648999334798f3e47de7] moveChunk deleted 937 documents for test.foo from { _id: 14991.0 } -> { _id: 15928.0 }
m30001| Fri Feb 22 12:28:58.659 [cleanupOldData-5127648a99334798f3e47df6] moveChunk starting delete for: test.bar from { _id: 7853.0 } -> { _id: 8315.0 }
m30001| Fri Feb 22 12:28:58.982 [cleanupOldData-5127648a99334798f3e47df6] moveChunk deleted 462 documents for test.bar from { _id: 7853.0 } -> { _id: 8315.0 }
m30001| Fri Feb 22 12:28:58.982 [cleanupOldData-5127648a99334798f3e47df1] moveChunk starting delete for: test.foo from { _id: 15928.0 } -> { _id: 16865.0 }
m30999| Fri Feb 22 12:28:59.360 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:28:59.360 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:28:59.361 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:28:59 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127648bbd1f99446659366f" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127648abd1f99446659366e" } }
m30999| Fri Feb 22 12:28:59.361 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127648bbd1f99446659366f
m30999| Fri Feb 22 12:28:59.361 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:28:59.361 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:28:59.361 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:28:59.363 [Balancer] shard0001 has more chunks me:89 best: shard0000:18
m30999| Fri Feb 22 12:28:59.363 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:28:59.363 [Balancer] donor : shard0001 chunks on 89
m30999| Fri Feb 22 12:28:59.363 [Balancer] receiver : shard0000 chunks on 18
m30999| Fri Feb 22 12:28:59.363 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:59.364 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_16865.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:59.365 [Balancer] shard0001 has more chunks me:199 best: shard0000:18
m30999| Fri Feb 22 12:28:59.366 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:28:59.366 [Balancer] donor : shard0001 chunks on 199
m30999| Fri Feb 22 12:28:59.366 [Balancer] receiver : shard0000 chunks on 18
m30999| Fri Feb 22 12:28:59.366 [Balancer] threshold : 2
m30999| Fri Feb 22 12:28:59.366 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_8315.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 8315.0 }, max: { _id: 8777.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:28:59.366 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 19|1||000000000000000000000000min: { _id: 16865.0 }max: { _id: 17802.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:59.366 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:59.366 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 16865.0 }, max: { _id: 17802.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_16865.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:28:59.367 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648b99334798f3e47df8
m30001| Fri Feb 22 12:28:59.367 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648b99334798f3e47df9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536139367), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 16865.0 }, max: { _id: 17802.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:59.368 [conn4] moveChunk request accepted at version 19|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:28:59.371 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:28:59.371 [migrateThread] starting receiving-end of migration of chunk { _id: 16865.0 } -> { _id: 17802.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:59.381 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 63600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.391 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 276, clonedBytes: 146280, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.402 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 435, clonedBytes: 230550, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.412 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 538, clonedBytes: 285140, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.428 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 779, clonedBytes: 412870, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:59.440 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:59.440 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 16865.0 } -> { _id: 17802.0 }
m30000| Fri Feb 22 12:28:59.441 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 16865.0 } -> { _id: 17802.0 }
m30001| Fri Feb 22 12:28:59.460 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.461 [conn4] moveChunk setting version to: 20|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:28:59.461 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:59.461 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 16865.0 } -> { _id: 17802.0 }
m30000| Fri Feb 22 12:28:59.461 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 16865.0 } -> { _id: 17802.0 }
m30000| Fri Feb 22 12:28:59.461 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648bc49297cf54df55f8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536139461), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 16865.0 }, max: { _id: 17802.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 68, step4 of 5: 0, step5 of 5: 21 } }
m30999| Fri Feb 22 12:28:59.463 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 141 version: 19|1||51276475bd1f99446659365b based on: 19|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:59.464 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 19|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:59.464 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 19000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 141
m30001| Fri Feb 22 12:28:59.464 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:28:59.471 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 16865.0 }, max: { _id: 17802.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:59.471 [conn4] moveChunk updating self version to: 20|1||51276475bd1f99446659365b through { _id: 17802.0 } -> { _id: 18739.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:28:59.472 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648b99334798f3e47dfa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536139472), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 16865.0 }, max: { _id: 17802.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:59.472 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:28:59.472 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 19000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 20000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:28:59.472 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:59.472 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:59.472 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:59.472 [cleanupOldData-5127648b99334798f3e47dfb] (start) waiting to cleanup test.foo from { _id: 16865.0 } -> { _id: 17802.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:59.472 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:59.473 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:59.473 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648b99334798f3e47dfc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536139473), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 16865.0 }, max: { _id: 17802.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:28:59.473 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 16865.0 }, max: { _id: 17802.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_16865.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:67 r:2565 w:104 reslen:37 106ms
m30999| Fri Feb 22 12:28:59.473 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:59.474 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 142 version: 20|1||51276475bd1f99446659365b based on: 19|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:59.475 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 143 version: 20|1||51276475bd1f99446659365b based on: 19|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:59.475 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 19|1||000000000000000000000000min: { _id: 8315.0 }max: { _id: 8777.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:28:59.476 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:28:59.476 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 8315.0 }, max: { _id: 8777.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_8315.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:28:59.476 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 144 version: 20|1||51276475bd1f99446659365b based on: 20|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:59.476 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 20|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:28:59.477 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 20000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 144
m30999| Fri Feb 22 12:28:59.477 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:28:59.477 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648b99334798f3e47dfd
m30001| Fri Feb 22 12:28:59.477 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648b99334798f3e47dfe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536139477), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 8315.0 }, max: { _id: 8777.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:59.479 [conn4] moveChunk request accepted at version 19|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:28:59.480 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:28:59.480 [migrateThread] starting receiving-end of migration of chunk { _id: 8315.0 } -> { _id: 8777.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:28:59.491 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8315.0 }, max: { _id: 8777.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 132, clonedBytes: 137676, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.492 [cleanupOldData-5127648b99334798f3e47dfb] waiting to remove documents for test.foo from { _id: 16865.0 } -> { _id: 17802.0 }
m30001| Fri Feb 22 12:28:59.501 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8315.0 }, max: { _id: 8777.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 312, clonedBytes: 325416, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:59.509 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:28:59.509 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 8315.0 } -> { _id: 8777.0 }
m30001| Fri Feb 22 12:28:59.511 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8315.0 }, max: { _id: 8777.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:28:59.512 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 8315.0 } -> { _id: 8777.0 }
20000
m30001| Fri Feb 22 12:28:59.521 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8315.0 }, max: { _id: 8777.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:28:59.521 [conn4] moveChunk setting version to: 20|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:28:59.522 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:28:59.522 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 8315.0 } -> { _id: 8777.0 }
m30000| Fri Feb 22 12:28:59.522 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 8315.0 } -> { _id: 8777.0 }
m30000| Fri Feb 22 12:28:59.522 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648bc49297cf54df55f9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536139522), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 8315.0 }, max: { _id: 8777.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:28:59.524 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 145 version: 19|1||51276475bd1f99446659365c based on: 19|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:59.525 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 19|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:59.525 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 19000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 145
m30001| Fri Feb 22 12:28:59.525 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:28:59.532 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 8315.0 }, max: { _id: 8777.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:28:59.532 [conn4] moveChunk updating self version to: 20|1||51276475bd1f99446659365c through { _id: 8777.0 } -> { _id: 9239.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:28:59.533 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648b99334798f3e47dff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536139533), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 8315.0 }, max: { _id: 8777.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:28:59.533 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:28:59.533 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:28:59.533 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 19000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 20000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:28:59.533 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:28:59.533 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:28:59.533 [cleanupOldData-5127648b99334798f3e47e00] (start) waiting to cleanup test.bar from { _id: 8315.0 } -> { _id: 8777.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:28:59.533 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:28:59.533 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:28:59.534 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:28:59-5127648b99334798f3e47e01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536139534), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 8315.0 }, max: { _id: 8777.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:28:59.534 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:28:59.534 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 146 version: 20|1||51276475bd1f99446659365c based on: 19|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:59.536 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 147 version: 20|1||51276475bd1f99446659365c based on: 19|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:59.537 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:28:59.538 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:28:59.539 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 148 version: 20|1||51276475bd1f99446659365c based on: 20|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:59.539 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 20|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:28:59.539 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 20000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 148
m30999| Fri Feb 22 12:28:59.539 [conn1] setShardVersion success: { oldVersion: Timestamp 19000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:28:59.553 [cleanupOldData-5127648b99334798f3e47e00] waiting to remove documents for test.bar from { _id: 8315.0 } -> { _id: 8777.0 }
m30001| Fri Feb 22 12:28:59.638 [cleanupOldData-5127648a99334798f3e47df1] moveChunk deleted 937 documents for test.foo from { _id: 15928.0 } -> { _id: 16865.0 }
m30001| Fri Feb 22 12:28:59.638 [cleanupOldData-5127648b99334798f3e47e00] moveChunk starting delete for: test.bar from { _id: 8315.0 } -> { _id: 8777.0 }
m30001| Fri Feb 22 12:29:00.219 [cleanupOldData-5127648b99334798f3e47e00] moveChunk deleted 462 documents for test.bar from { _id: 8315.0 } -> { _id: 8777.0 }
m30001| Fri Feb 22 12:29:00.219 [cleanupOldData-5127648b99334798f3e47dfb] moveChunk starting delete for: test.foo from { _id: 16865.0 } -> { _id: 17802.0 }
m30999| Fri Feb 22 12:29:00.538 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:00.539 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:00.539 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:00 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127648cbd1f994466593670" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127648bbd1f99446659366f" } }
m30999| Fri Feb 22 12:29:00.540 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127648cbd1f994466593670
m30999| Fri Feb 22 12:29:00.540 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:00.540 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:00.540 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:00.542 [Balancer] shard0001 has more chunks me:88 best: shard0000:19
m30999| Fri Feb 22 12:29:00.542 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:00.542 [Balancer] donor : shard0001 chunks on 88
m30999| Fri Feb 22 12:29:00.542 [Balancer] receiver : shard0000 chunks on 19
m30999| Fri Feb 22 12:29:00.542 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:00.542 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_17802.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:00.545 [Balancer] shard0001 has more chunks me:198 best: shard0000:19
m30999| Fri Feb 22 12:29:00.545 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:00.545 [Balancer] donor : shard0001 chunks on 198
m30999| Fri Feb 22 12:29:00.545 [Balancer] receiver : shard0000 chunks on 19
m30999| Fri Feb 22 12:29:00.545 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:00.545 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_8777.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 8777.0 }, max: { _id: 9239.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:00.545 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 20|1||000000000000000000000000min: { _id: 17802.0 }max: { _id: 18739.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:00.545 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:00.545 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 17802.0 }, max: { _id: 18739.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_17802.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:00.546 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648c99334798f3e47e02
m30001| Fri Feb 22 12:29:00.546 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648c99334798f3e47e03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536140546), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 17802.0 }, max: { _id: 18739.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:00.548 [conn4] moveChunk request accepted at version 20|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:00.551 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:00.551 [migrateThread] starting receiving-end of migration of chunk { _id: 17802.0 } -> { _id: 18739.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:00.561 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 63600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:00.571 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 327, clonedBytes: 173310, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:00.582 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 531, clonedBytes: 281430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:00.592 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 736, clonedBytes: 390080, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:00.602 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:00.602 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 17802.0 } -> { _id: 18739.0 }
m30000| Fri Feb 22 12:29:00.606 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 17802.0 } -> { _id: 18739.0 }
m30001| Fri Feb 22 12:29:00.608 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:00.608 [conn4] moveChunk setting version to: 21|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:00.608 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:00.610 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 149 version: 20|1||51276475bd1f99446659365b based on: 20|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:00.610 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 20|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:00.610 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 20000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 149
m30001| Fri Feb 22 12:29:00.611 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:00.616 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 17802.0 } -> { _id: 18739.0 }
m30000| Fri Feb 22 12:29:00.616 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 17802.0 } -> { _id: 18739.0 }
m30000| Fri Feb 22 12:29:00.616 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648cc49297cf54df55fa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536140616), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 17802.0 }, max: { _id: 18739.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:00.618 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 17802.0 }, max: { _id: 18739.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:00.619 [conn4] moveChunk updating self version to: 21|1||51276475bd1f99446659365b through { _id: 18739.0 } -> { _id: 19676.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:00.619 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648c99334798f3e47e04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536140619), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 17802.0 }, max: { _id: 18739.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:00.619 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:00.619 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 20000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 21000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:00.620 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:00.620 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:00.620 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:00.620 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:00.620 [cleanupOldData-5127648c99334798f3e47e05] (start) waiting to cleanup test.foo from { _id: 17802.0 } -> { _id: 18739.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:00.620 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:00.620 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648c99334798f3e47e06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536140620), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 17802.0 }, max: { _id: 18739.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:00.620 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:00.621 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 150 version: 21|1||51276475bd1f99446659365b based on: 20|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:00.622 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 151 version: 21|1||51276475bd1f99446659365b based on: 21|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:00.622 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 21|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:00.623 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 21000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 151
m30999| Fri Feb 22 12:29:00.623 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30999| Fri Feb 22 12:29:00.624 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 152 version: 21|1||51276475bd1f99446659365b based on: 20|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:00.625 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 20|1||000000000000000000000000min: { _id: 8777.0 }max: { _id: 9239.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:00.625 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:00.625 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 8777.0 }, max: { _id: 9239.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_8777.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:00.626 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648c99334798f3e47e07
m30001| Fri Feb 22 12:29:00.626 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648c99334798f3e47e08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536140626), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 8777.0 }, max: { _id: 9239.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:00.628 [conn4] moveChunk request accepted at version 20|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:00.629 [conn4] moveChunk number of documents: 462
m30001| Fri Feb 22 12:29:00.640 [cleanupOldData-5127648c99334798f3e47e05] waiting to remove documents for test.foo from { _id: 17802.0 } -> { _id: 18739.0 }
m30000| Fri Feb 22 12:29:00.642 [migrateThread] starting receiving-end of migration of chunk { _id: 8777.0 } -> { _id: 9239.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:00.653 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8777.0 }, max: { _id: 9239.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 138, clonedBytes: 143934, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:00.663 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8777.0 }, max: { _id: 9239.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 335, clonedBytes: 349405, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
21000
m30000| Fri Feb 22 12:29:00.670 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:00.670 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 8777.0 } -> { _id: 9239.0 }
m30000| Fri Feb 22 12:29:00.672 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 8777.0 } -> { _id: 9239.0 }
m30001| Fri Feb 22 12:29:00.673 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 8777.0 }, max: { _id: 9239.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:00.674 [conn4] moveChunk setting version to: 21|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:00.674 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:00.676 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 153 version: 20|1||51276475bd1f99446659365c based on: 20|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:00.677 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 20|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:00.677 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 20000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 153
m30001| Fri Feb 22 12:29:00.677 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:00.683 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 8777.0 } -> {
_id: 9239.0 } m30000| Fri Feb 22 12:29:00.683 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 8777.0 } -> { _id: 9239.0 } m30000| Fri Feb 22 12:29:00.683 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648cc49297cf54df55fb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536140683), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 8777.0 }, max: { _id: 9239.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 27, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:00.684 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 8777.0 }, max: { _id: 9239.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:00.684 [conn4] moveChunk updating self version to: 21|1||51276475bd1f99446659365c through { _id: 9239.0 } -> { _id: 9701.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:00.685 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648c99334798f3e47e09", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536140684), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 8777.0 }, max: { _id: 9239.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:00.685 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:00.685 [conn1] setShardVersion failed! 
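The "moveChunk data transfer progress" entries walk the receiving end through its states: "clone" while documents are copied (cloned: 138 → 335 → 462 of 462), "steady" once the clone catches up, and "done" after commit. A minimal sketch of that state progression, as inferred from the counters in the log (not MongoDB's actual implementation):

```python
# Sketch of the migrate-thread states reported in the progress lines:
# "clone" until all documents are copied, then "steady" while waiting
# for the commit, then "done" once the TO-shard accepts the commit.
def migrate_state(cloned, total, committed=False):
    if committed:
        return "done"
    return "steady" if cloned >= total else "clone"
```

For the test.bar chunk above (462 documents), the three logged snapshots map to `migrate_state(138, 462)`, `migrate_state(462, 462)`, and finally `migrate_state(462, 462, committed=True)`.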
m30001| Fri Feb 22 12:29:00.685 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 20000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 21000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:00.685 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:00.685 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:00.685 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:00.685 [cleanupOldData-5127648c99334798f3e47e0a] (start) waiting to cleanup test.bar from { _id: 8777.0 } -> { _id: 9239.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:00.685 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
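The "setShardVersion failed!" / "setShardVersion success" pairs in this log follow a retry pattern: mongos sends its cached version (20|1), the shard rejects it because its global version already advanced (21|0) and sets reloadConfig, mongos forces a chunk-manager reload, and the retry with the fresh version succeeds. A toy model of that exchange, using (major, minor) tuples as an assumption for the Timestamp values:

```python
# Toy model (not MongoDB code) of the stale-version retry seen above.
def set_shard_version(cached, shard_global):
    # Shard refuses a version older than its own global version.
    if shard_global > cached:
        return {"ok": 0, "reloadConfig": True,
                "errmsg": "shard global version for collection is higher"}
    return {"ok": 1}

cached = (20, 1)         # mongos's stale Timestamp 20000|1
shard_global = (21, 0)   # shard already at Timestamp 21000|0
resp = set_shard_version(cached, shard_global)
if resp["ok"] == 0 and resp.get("reloadConfig"):
    cached = (21, 1)     # forced chunk-manager reload picks up 21|1
    resp = set_shard_version(cached, shard_global)
```

This mirrors the log's sequence: failure with reloadConfig: true, a "chunk manager reload forced" warning, then success on the resend.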
m30001| Fri Feb 22 12:29:00.685 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:00-5127648c99334798f3e47e0b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536140685), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 8777.0 }, max: { _id: 9239.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 14, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:00.685 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:00.686 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 154 version: 21|1||51276475bd1f99446659365c based on: 20|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:00.688 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 155 version: 21|1||51276475bd1f99446659365c based on: 20|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:00.689 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:00.690 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
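Each "[Balancer]" round above logs the same decision inputs: donor and receiver chunk counts per collection and a threshold (e.g. "donor : shard0001 chunks on 87", "receiver : shard0000 chunks on 20", "threshold : 2"). A minimal sketch of that selection rule, assuming only what the log states:

```python
# Minimal sketch (not MongoDB source) of the per-collection balancing
# decision logged each round: migrate one chunk from the most-loaded
# shard to the least-loaded one when the imbalance meets the threshold.
def pick_migration(chunk_counts, threshold=2):
    """chunk_counts: dict mapping shard name -> chunk count for one collection."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None  # balanced enough; skip this collection this round
    return donor, receiver
```

For the round above, `pick_migration({"shard0001": 87, "shard0000": 20})` selects shard0001 as donor and shard0000 as receiver, matching the logged move of test.foo's next chunk.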
m30999| Fri Feb 22 12:29:00.691 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 156 version: 21|1||51276475bd1f99446659365c based on: 21|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:00.691 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 21|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:00.691 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 21000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 156 m30999| Fri Feb 22 12:29:00.691 [conn1] setShardVersion success: { oldVersion: Timestamp 20000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:00.705 [cleanupOldData-5127648c99334798f3e47e0a] waiting to remove documents for test.bar from { _id: 8777.0 } -> { _id: 9239.0 } m30001| Fri Feb 22 12:29:01.028 [cleanupOldData-5127648b99334798f3e47dfb] moveChunk deleted 937 documents for test.foo from { _id: 16865.0 } -> { _id: 17802.0 } m30001| Fri Feb 22 12:29:01.028 [cleanupOldData-5127648c99334798f3e47e0a] moveChunk starting delete for: test.bar from { _id: 8777.0 } -> { _id: 9239.0 } m30001| Fri Feb 22 12:29:01.480 [cleanupOldData-5127648c99334798f3e47e0a] moveChunk deleted 462 documents for test.bar from { _id: 8777.0 } -> { _id: 9239.0 } m30001| Fri Feb 22 12:29:01.480 [cleanupOldData-5127648c99334798f3e47e05] moveChunk starting delete for: test.foo from { _id: 17802.0 } -> { _id: 18739.0 } m30999| Fri Feb 22 12:29:01.690 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:01.691 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:01.691 [Balancer] about to acquire 
distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:01 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127648dbd1f994466593671" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127648cbd1f994466593670" } } m30999| Fri Feb 22 12:29:01.691 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127648dbd1f994466593671 m30999| Fri Feb 22 12:29:01.691 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:01.691 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:01.691 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:01.693 [Balancer] shard0001 has more chunks me:87 best: shard0000:20 m30999| Fri Feb 22 12:29:01.693 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:01.693 [Balancer] donor : shard0001 chunks on 87 m30999| Fri Feb 22 12:29:01.693 [Balancer] receiver : shard0000 chunks on 20 m30999| Fri Feb 22 12:29:01.693 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:01.693 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_18739.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:01.695 [Balancer] shard0001 has more chunks me:197 best: shard0000:20 m30999| Fri Feb 22 12:29:01.695 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:01.695 [Balancer] donor : shard0001 chunks on 197 m30999| Fri Feb 22 12:29:01.695 [Balancer] receiver : shard0000 chunks on 20 m30999| Fri Feb 22 12:29:01.695 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:01.695 [Balancer] ns: test.bar going to move { _id: 
"test.bar-_id_9239.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 9239.0 }, max: { _id: 9701.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:01.695 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 21|1||000000000000000000000000min: { _id: 18739.0 }max: { _id: 19676.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:01.695 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:01.696 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 18739.0 }, max: { _id: 19676.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_18739.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:01.696 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648d99334798f3e47e0c m30001| Fri Feb 22 12:29:01.696 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648d99334798f3e47e0d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536141696), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 18739.0 }, max: { _id: 19676.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:01.697 [conn4] moveChunk request accepted at version 21|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:01.699 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:01.700 [migrateThread] starting receiving-end of migration of chunk { _id: 18739.0 } -> { _id: 19676.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:01.710 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 116, clonedBytes: 61480, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:01.720 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 319, clonedBytes: 169070, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:01.730 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 522, clonedBytes: 276660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:01.741 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 719, clonedBytes: 381070, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:01.752 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:01.752 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 18739.0 } -> { _id: 19676.0 } m30000| Fri Feb 22 12:29:01.755 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 18739.0 } -> { _id: 19676.0 } m30001| Fri Feb 22 12:29:01.757 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:01.757 [conn4] moveChunk setting version to: 22|0||51276475bd1f99446659365b m30000| Fri Feb 22 
12:29:01.757 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:01.759 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 157 version: 21|1||51276475bd1f99446659365b based on: 21|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:01.759 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 21|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:01.759 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 21000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 157 m30001| Fri Feb 22 12:29:01.759 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:01.766 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 18739.0 } -> { _id: 19676.0 } m30000| Fri Feb 22 12:29:01.766 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 18739.0 } -> { _id: 19676.0 } m30000| Fri Feb 22 12:29:01.766 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648dc49297cf54df55fc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536141766), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 18739.0 }, max: { _id: 19676.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:01.767 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 18739.0 }, max: { _id: 19676.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:01.767 [conn4] moveChunk updating self version to: 22|1||51276475bd1f99446659365b through { _id: 19676.0 } -> { _id: 20613.0 } 
for collection 'test.foo' m30001| Fri Feb 22 12:29:01.768 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648d99334798f3e47e0e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536141768), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 18739.0 }, max: { _id: 19676.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:01.768 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:01.768 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:01.768 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:01.768 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:01.768 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 21000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 21000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 22000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:01.768 [cleanupOldData-5127648d99334798f3e47e0f] (start) waiting to cleanup test.foo from { _id: 18739.0 } -> { _id: 19676.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:01.768 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:01.768 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
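Every "moveChunk.from" metadata event above carries a six-step timing breakdown in milliseconds ("step1 of 6: 0, step2 of 6: 1, ..."), and step 4 (the data transfer) consistently dominates. A hypothetical helper (not part of MongoDB's tooling) for pulling those timings out of a details string when analyzing a log like this one:

```python
import re

# Hypothetical log-analysis helper: extract the per-step millisecond
# timings from a moveChunk.from details string like the one logged above.
def parse_move_steps(details: str) -> dict:
    return {int(m.group(1)): int(m.group(2))
            for m in re.finditer(r"step(\d+) of 6: (\d+)", details)}

line = ("step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, "
        "step4 of 6: 57, step5 of 6: 11, step6 of 6: 0")
steps = parse_move_steps(line)  # {1: 0, 2: 2, 3: 3, 4: 57, 5: 11, 6: 0}
```

Summing the parsed values gives the total migration time (73 ms here), and comparing step 4 across events shows the clone phase is where these ~1 MB chunks spend most of their time.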
m30001| Fri Feb 22 12:29:01.768 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648d99334798f3e47e10", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536141768), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 18739.0 }, max: { _id: 19676.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:01.768 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:01.769 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 158 version: 22|1||51276475bd1f99446659365b based on: 21|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:01.770 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 159 version: 22|1||51276475bd1f99446659365b based on: 21|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:01.771 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 21|1||000000000000000000000000min: { _id: 9239.0 }max: { _id: 9701.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:01.771 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:01.771 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 9239.0 }, max: { _id: 9701.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_9239.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:01.772 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 160 version: 22|1||51276475bd1f99446659365b based on: 22|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:01.772 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648d99334798f3e47e11 m30001| Fri Feb 
22 12:29:01.772 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648d99334798f3e47e12", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536141772), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 9239.0 }, max: { _id: 9701.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:01.772 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 22|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:01.772 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 22000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 160 m30999| Fri Feb 22 12:29:01.772 [conn1] setShardVersion success: { oldVersion: Timestamp 21000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:01.773 [conn4] moveChunk request accepted at version 21|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:01.774 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:01.774 [migrateThread] starting receiving-end of migration of chunk { _id: 9239.0 } -> { _id: 9701.0 } for collection test.bar from localhost:30001 (0 slaves detected) 22000 m30001| Fri Feb 22 12:29:01.784 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 9239.0 }, max: { _id: 9701.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 185, clonedBytes: 192955, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:01.788 [cleanupOldData-5127648d99334798f3e47e0f] waiting to remove documents for test.foo from { _id: 18739.0 } -> { _id: 19676.0 } m30001| Fri Feb 22 12:29:01.794 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", 
from: "localhost:30001", min: { _id: 9239.0 }, max: { _id: 9701.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 450, clonedBytes: 469350, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:01.795 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:01.795 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 9239.0 } -> { _id: 9701.0 } m30000| Fri Feb 22 12:29:01.797 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 9239.0 } -> { _id: 9701.0 } m30001| Fri Feb 22 12:29:01.805 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 9239.0 }, max: { _id: 9701.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:01.805 [conn4] moveChunk setting version to: 22|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:01.805 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:01.807 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 161 version: 21|1||51276475bd1f99446659365c based on: 21|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:01.807 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 9239.0 } -> { _id: 9701.0 } m30000| Fri Feb 22 12:29:01.807 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 9239.0 } -> { _id: 9701.0 } m30000| Fri Feb 22 12:29:01.807 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648dc49297cf54df55fd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536141807), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 9239.0 }, max: { _id: 9701.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } } 
m30999| Fri Feb 22 12:29:01.807 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 21|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:01.807 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 21000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 161 m30001| Fri Feb 22 12:29:01.807 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:01.815 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 9239.0 }, max: { _id: 9701.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:01.815 [conn4] moveChunk updating self version to: 22|1||51276475bd1f99446659365c through { _id: 9701.0 } -> { _id: 10163.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:01.816 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648d99334798f3e47e13", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536141816), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 9239.0 }, max: { _id: 9701.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:01.816 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:01.816 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:01.816 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:01.816 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 21000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 21000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 22000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:01.816 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:01.816 [cleanupOldData-5127648d99334798f3e47e14] (start) waiting to cleanup test.bar from { _id: 9239.0 } -> { _id: 9701.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:01.816 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:01.816 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:01.816 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:01-5127648d99334798f3e47e15", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536141816), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 9239.0 }, max: { _id: 9701.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:01.816 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:01.818 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 162 version: 22|1||51276475bd1f99446659365c based on: 21|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:01.819 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 163 version: 22|1||51276475bd1f99446659365c based on: 21|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:01.820 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:01.820 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:29:01.822 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 164 version: 22|1||51276475bd1f99446659365c based on: 22|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:01.822 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 22|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:01.822 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 22000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 164 m30999| Fri Feb 22 12:29:01.822 [conn1] setShardVersion success: { oldVersion: Timestamp 21000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:01.836 [cleanupOldData-5127648d99334798f3e47e14] waiting to remove documents for test.bar from { _id: 9239.0 } -> { _id: 9701.0 } m30001| Fri Feb 22 12:29:02.310 [cleanupOldData-5127648c99334798f3e47e05] moveChunk deleted 937 documents for test.foo from { _id: 17802.0 } -> { _id: 18739.0 } m30001| Fri Feb 22 12:29:02.310 [cleanupOldData-5127648d99334798f3e47e14] moveChunk starting delete for: test.bar from { _id: 9239.0 } -> { _id: 9701.0 } m30001| Fri Feb 22 12:29:02.738 [cleanupOldData-5127648d99334798f3e47e14] moveChunk deleted 462 documents for test.bar from { _id: 9239.0 } -> { _id: 9701.0 } m30001| Fri Feb 22 12:29:02.738 [cleanupOldData-5127648d99334798f3e47e0f] moveChunk starting delete for: test.foo from { _id: 18739.0 } -> { _id: 19676.0 } m30999| Fri Feb 22 12:29:02.821 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:02.822 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 
bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:02.822 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:29:02 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127648ebd1f994466593672" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127648dbd1f994466593671" } }
m30999| Fri Feb 22 12:29:02.822 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127648ebd1f994466593672
m30999| Fri Feb 22 12:29:02.822 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:02.822 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:02.822 [Balancer] secondaryThrottle: 1
23000
m30999| Fri Feb 22 12:29:02.825 [Balancer] shard0001 has more chunks me:86 best: shard0000:21
m30999| Fri Feb 22 12:29:02.825 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:02.825 [Balancer] donor      : shard0001 chunks on 86
m30999| Fri Feb 22 12:29:02.825 [Balancer] receiver   : shard0000 chunks on 21
m30999| Fri Feb 22 12:29:02.825 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:02.825 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_19676.0", lastmod: Timestamp 22000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:02.827 [Balancer] shard0001 has more chunks me:196 best: shard0000:21
m30999| Fri Feb 22 12:29:02.827 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:02.827 [Balancer] donor      : shard0001 chunks on 196
m30999| Fri Feb 22 12:29:02.827 [Balancer] receiver   : shard0000 chunks on 21
m30999| Fri Feb 22 12:29:02.827 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:02.827 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_9701.0", lastmod: Timestamp 22000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 9701.0 }, max: { _id: 10163.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:02.827 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 22|1||000000000000000000000000min: { _id: 19676.0 }max: { _id: 20613.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:02.827 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:02.828 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 19676.0 }, max: { _id: 20613.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_19676.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:02.828 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648e99334798f3e47e16
m30001| Fri Feb 22 12:29:02.828 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648e99334798f3e47e17", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536142828), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 19676.0 }, max: { _id: 20613.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:02.829 [conn4] moveChunk request accepted at version 22|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:02.832 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:02.832 [migrateThread] starting receiving-end of migration of chunk { _id: 19676.0 } -> { _id: 20613.0 } for collection test.foo from localhost:30001 (0 slaves detected)
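The `[Balancer]` lines above show the per-collection decision: the shard with the most chunks becomes the donor, the one with the fewest becomes the receiver, and a chunk moves only while the gap exceeds a threshold (here donor 86, receiver 21, threshold 2). A minimal sketch of that check; the function and dict names are illustrative, not MongoDB's actual implementation:

```python
# Sketch of the chunk-imbalance check reported by the [Balancer] log lines.
# Names (pick_donor_receiver, should_balance) are illustrative only.

def pick_donor_receiver(chunk_counts):
    """Pick the shard with the most chunks as donor, fewest as receiver."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    return donor, receiver

def should_balance(chunk_counts, threshold=2):
    """Return (donor, receiver) if the imbalance warrants moving a chunk."""
    donor, receiver = pick_donor_receiver(chunk_counts)
    imbalance = chunk_counts[donor] - chunk_counts[receiver]
    return (donor, receiver) if imbalance >= threshold else None

# The test.foo round above: shard0001 holds 86 chunks, shard0000 holds 21.
print(should_balance({"shard0001": 86, "shard0000": 21}))
```

One chunk is moved per collection per round, which is why the counts only shift by one (86→85, 196→195) between the rounds in this log.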
m30001| Fri Feb 22 12:29:02.842 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 119, clonedBytes: 63070, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:02.852 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 321, clonedBytes: 170130, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:02.863 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 521, clonedBytes: 276130, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:02.873 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 720, clonedBytes: 381600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:02.884 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:02.884 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 19676.0 } -> { _id: 20613.0 }
m30000| Fri Feb 22 12:29:02.888 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 19676.0 } -> { _id: 20613.0 }
m30001| Fri Feb 22 12:29:02.889 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:02.889 [conn4] moveChunk setting version to: 23|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:02.889 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:02.891 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 165 version: 22|1||51276475bd1f99446659365b based on: 22|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:02.891 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 22|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:02.892 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 22000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 165
m30001| Fri Feb 22 12:29:02.892 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:02.898 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 19676.0 } -> { _id: 20613.0 }
m30000| Fri Feb 22 12:29:02.898 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 19676.0 } -> { _id: 20613.0 }
m30000| Fri Feb 22 12:29:02.898 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648ec49297cf54df55fe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536142898), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 19676.0 }, max: { _id: 20613.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:02.899 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 19676.0 }, max: { _id: 20613.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:02.899 [conn4] moveChunk updating self version to: 23|1||51276475bd1f99446659365b through { _id: 20613.0 } -> { _id: 21550.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:02.900 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648e99334798f3e47e18", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536142900), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 19676.0 }, max: { _id: 20613.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:02.900 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:02.900 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:02.900 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 22000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 22000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 23000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:02.900 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:02.900 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:02.900 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:02.900 [cleanupOldData-5127648e99334798f3e47e19] (start) waiting to cleanup test.foo from { _id: 19676.0 } -> { _id: 20613.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:02.901 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:02.901 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648e99334798f3e47e1a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536142901), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 19676.0 }, max: { _id: 20613.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:02.901 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:02.901 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 166 version: 23|1||51276475bd1f99446659365b based on: 22|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:02.902 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 167 version: 23|1||51276475bd1f99446659365b based on: 22|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:02.903 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 22|1||000000000000000000000000min: { _id: 9701.0 }max: { _id: 10163.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:02.903 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:02.903 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 9701.0 }, max: { _id: 10163.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_9701.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:02.904 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 168 version: 23|1||51276475bd1f99446659365b based on: 23|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:02.904 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 23|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:02.904 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 23000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 168
m30999| Fri Feb 22 12:29:02.904 [conn1] setShardVersion success: { oldVersion: Timestamp 22000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:02.904 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648e99334798f3e47e1b
m30001| Fri Feb 22 12:29:02.904 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648e99334798f3e47e1c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536142904), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 9701.0 }, max: { _id: 10163.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:02.905 [conn4] moveChunk request accepted at version 22|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:02.906 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:02.906 [migrateThread] starting receiving-end of migration of chunk { _id: 9701.0 } -> { _id: 10163.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:02.916 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 9701.0 }, max: { _id: 10163.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 119, clonedBytes: 124117, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:02.920 [cleanupOldData-5127648e99334798f3e47e19] waiting to remove documents for test.foo from { _id: 19676.0 } -> { _id: 20613.0 }
m30001| Fri Feb 22 12:29:02.926 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 9701.0 }, max: { _id: 10163.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 300, clonedBytes: 312900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:02.936 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:02.936 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 9701.0 } -> { _id: 10163.0 }
m30001| Fri Feb 22 12:29:02.937 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 9701.0 }, max: { _id: 10163.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:02.938 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 9701.0 } -> { _id: 10163.0 }
m30001| Fri Feb 22 12:29:02.947 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 9701.0 }, max: { _id: 10163.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:02.947 [conn4] moveChunk setting version to: 23|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:02.947 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:02.949 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 9701.0 } -> { _id: 10163.0 }
m30000| Fri Feb 22 12:29:02.949 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 9701.0 } -> { _id: 10163.0 }
m30000| Fri Feb 22 12:29:02.949 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648ec49297cf54df55ff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536142949), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 9701.0 }, max: { _id: 10163.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:02.950 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 169 version: 22|1||51276475bd1f99446659365c based on: 22|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:02.950 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 22|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:02.950 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 22000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 169
m30001| Fri Feb 22 12:29:02.950 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:02.957 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 9701.0 }, max: { _id: 10163.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:02.957 [conn4] moveChunk updating self version to: 23|1||51276475bd1f99446659365c through { _id: 10163.0 } -> { _id: 10625.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:02.958 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648e99334798f3e47e1d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536142958), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 9701.0 }, max: { _id: 10163.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:02.958 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:02.958 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 22000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 22000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 23000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:02.958 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:02.958 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:02.958 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:02.958 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:02.958 [cleanupOldData-5127648e99334798f3e47e1e] (start) waiting to cleanup test.bar from { _id: 9701.0 } -> { _id: 10163.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:02.959 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:02.959 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:02-5127648e99334798f3e47e1f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536142959), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 9701.0 }, max: { _id: 10163.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:02.959 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:02.960 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 170 version: 23|1||51276475bd1f99446659365c based on: 22|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:02.961 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 171 version: 23|1||51276475bd1f99446659365c based on: 22|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:02.962 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:02.962 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
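Within each round, the `moveChunk data transfer progress` documents show the receiver advancing through the migration states: `clone` while documents are bulk-copied, `catchup` while writes that arrived during the copy are applied, `steady` once the donor may enter its critical section, and finally `done` when the commit is accepted. A simplified model of that progression (illustrative only, not MongoDB's implementation):

```python
# Simplified walk through the migration states visible in the
# "moveChunk data transfer progress" log entries. Illustrative model only.

def run_migration(total_docs, batch):
    """Yield (state, cloned) pairs the way the donor polls the receiver."""
    cloned = 0
    while cloned < total_docs:              # clone: bulk-copy the chunk
        cloned = min(total_docs, cloned + batch)
        yield ("clone", cloned)
    yield ("catchup", cloned)               # apply writes made during the copy
    yield ("steady", cloned)                # donor may enter critical section
    yield ("done", cloned)                  # receiver acknowledges the commit

# The 937-document test.foo chunk, polled in batches of roughly 200 docs:
states = [state for state, _ in run_migration(937, 200)]
print(states)
```

This matches the log's ordering: only after the receiver reports `steady` does the donor log `moveChunk setting version to: …` and block writers with "waiting till out of critical section".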
m30999| Fri Feb 22 12:29:02.964 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 172 version: 23|1||51276475bd1f99446659365c based on: 23|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:02.964 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 23|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:02.964 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 23000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 172
m30999| Fri Feb 22 12:29:02.964 [conn1] setShardVersion success: { oldVersion: Timestamp 22000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:02.978 [cleanupOldData-5127648e99334798f3e47e1e] waiting to remove documents for test.bar from { _id: 9701.0 } -> { _id: 10163.0 }
m30001| Fri Feb 22 12:29:03.686 [cleanupOldData-5127648d99334798f3e47e0f] moveChunk deleted 937 documents for test.foo from { _id: 18739.0 } -> { _id: 19676.0 }
m30001| Fri Feb 22 12:29:03.686 [cleanupOldData-5127648e99334798f3e47e1e] moveChunk starting delete for: test.bar from { _id: 9701.0 } -> { _id: 10163.0 }
24000
m30999| Fri Feb 22 12:29:03.963 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:03.963 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:03.964 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:29:03 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127648fbd1f994466593673" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127648ebd1f994466593672" } }
m30999| Fri Feb 22 12:29:03.964 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127648fbd1f994466593673
m30999| Fri Feb 22 12:29:03.964 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:03.964 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:03.964 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:03.966 [Balancer] shard0001 has more chunks me:85 best: shard0000:22
m30999| Fri Feb 22 12:29:03.966 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:03.966 [Balancer] donor      : shard0001 chunks on 85
m30999| Fri Feb 22 12:29:03.966 [Balancer] receiver   : shard0000 chunks on 22
m30999| Fri Feb 22 12:29:03.966 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:03.966 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_20613.0", lastmod: Timestamp 23000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:03.968 [Balancer] shard0001 has more chunks me:195 best: shard0000:22
m30999| Fri Feb 22 12:29:03.969 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:03.969 [Balancer] donor      : shard0001 chunks on 195
m30999| Fri Feb 22 12:29:03.969 [Balancer] receiver   : shard0000 chunks on 22
m30999| Fri Feb 22 12:29:03.969 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:03.969 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_10163.0", lastmod: Timestamp 23000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 10163.0 }, max: { _id: 10625.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:03.969 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 23|1||000000000000000000000000min: { _id: 20613.0 }max: { _id: 21550.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:03.969 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:03.969 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 20613.0 }, max: { _id: 21550.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_20613.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:03.970 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127648f99334798f3e47e20
m30001| Fri Feb 22 12:29:03.970 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:03-5127648f99334798f3e47e21", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536143970), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 20613.0 }, max: { _id: 21550.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:03.971 [conn4] moveChunk request accepted at version 23|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:03.974 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:03.974 [migrateThread] starting receiving-end of migration of chunk { _id: 20613.0 } -> { _id: 21550.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:03.984 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 125, clonedBytes: 66250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:03.994 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 322, clonedBytes: 170660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:04.005 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 525, clonedBytes: 278250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:04.015 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 712, clonedBytes: 377360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:04.030 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:04.030 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 20613.0 } -> { _id: 21550.0 }
m30001| Fri Feb 22 12:29:04.031 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:04.031 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 20613.0 } -> { _id: 21550.0 }
m30001| Fri Feb 22 12:29:04.063 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:04.064 [conn4] moveChunk setting version to: 24|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:04.064 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:04.066 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 173 version: 23|1||51276475bd1f99446659365b based on: 23|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:04.066 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 23|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:04.066 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 23000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 173
m30001| Fri Feb 22 12:29:04.067 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:04.072 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 20613.0 } -> { _id: 21550.0 }
m30000| Fri Feb 22 12:29:04.072 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 20613.0 } -> { _id: 21550.0 }
m30000| Fri Feb 22 12:29:04.072 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-51276490c49297cf54df5600", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536144072), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 20613.0 }, max: { _id: 21550.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 55, step4 of 5: 0, step5 of 5: 41 } }
m30001| Fri Feb 22 12:29:04.074 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 20613.0 }, max: { _id: 21550.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:04.074 [conn4] moveChunk updating self version to: 24|1||51276475bd1f99446659365b through { _id: 21550.0 } -> { _id: 22487.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:04.075 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-5127649099334798f3e47e22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536144075), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 20613.0 }, max: { _id: 21550.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:04.075 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:04.075 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:04.075 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:04.075 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:04.075 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 23000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 23000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 24000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:04.075 [cleanupOldData-5127649099334798f3e47e23] (start) waiting to cleanup test.foo from { _id: 20613.0 } -> { _id: 21550.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:04.075 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:04.075 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
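The `setShardVersion failed!` entries above are expected during a migration: each successful moveChunk bumps the collection's major version, the migrated chunk lands on the receiver at (major+1)|0, and the donor marks its next remaining chunk (major+1)|1 ("moveChunk updating self version to"), so a mongos still offering the old version is refused and forced to reload. A sketch of that versioning rule; the function names are illustrative, not MongoDB's:

```python
# Sketch of the chunk-version bump seen in the log: a committed migration
# raises the collection's major version, so a router's stale setShardVersion
# (e.g. 23|1 against globalVersion 24|0) is rejected with reloadConfig.
# Illustrative model only; names are not MongoDB's.

def commit_migration(collection_version):
    """Return (new_global, receiver_chunk, donor_chunk) (major, minor) versions."""
    major, _minor = collection_version
    new_global = (major + 1, 0)        # e.g. "moveChunk setting version to: 24|0"
    donor_chunk = (major + 1, 1)       # "moveChunk updating self version to: 24|1"
    return new_global, new_global, donor_chunk

def set_shard_version_ok(requested, global_version):
    """A router's requested version must not lag the shard's global version."""
    return requested >= global_version

new_global, receiver, donor = commit_migration((23, 1))
print(new_global, receiver, donor, set_shard_version_ok((23, 1), new_global))
```

On rejection the mongos reloads chunk metadata (the "chunk manager reload forced" lines) and retries with the fresh version, which then succeeds.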
m30001| Fri Feb 22 12:29:04.075 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-5127649099334798f3e47e24", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536144075), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 20613.0 }, max: { _id: 21550.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:29:04.075 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 20613.0 }, max: { _id: 21550.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_20613.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:22 r:2608 w:64 reslen:37 106ms
m30999| Fri Feb 22 12:29:04.076 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:04.076 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 174 version: 24|1||51276475bd1f99446659365b based on: 23|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:04.077 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 175 version: 24|1||51276475bd1f99446659365b based on: 23|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:04.078 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 23|1||000000000000000000000000min: { _id: 10163.0 }max: { _id: 10625.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:04.078 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:04.078 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10163.0 }, max: { _id: 10625.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_10163.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:04.079 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 176 version: 24|1||51276475bd1f99446659365b based on: 24|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:04.079 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 24|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:04.079 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649099334798f3e47e25
m30001| Fri Feb 22 12:29:04.079 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-5127649099334798f3e47e26", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536144079), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 10163.0 }, max: { _id: 10625.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:04.079 [conn1] setShardVersion  shard0001 localhost:30001  test.foo  { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 24000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 176
m30999| Fri Feb 22 12:29:04.080 [conn1] setShardVersion success: { oldVersion: Timestamp 23000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:04.080 [conn4] moveChunk request accepted at version 23|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:04.081 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:04.082 [migrateThread] starting receiving-end of migration of chunk { _id: 10163.0 } -> { _id: 10625.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:04.092 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10163.0 }, max: { _id: 10625.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 134547, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:04.095 [cleanupOldData-5127649099334798f3e47e23] waiting to remove documents for test.foo from { _id: 20613.0 } -> { _id: 21550.0 }
m30001| Fri Feb 22 12:29:04.102 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10163.0 }, max: { _id: 10625.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 309, clonedBytes: 322287, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:04.111 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:04.111 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 10163.0 } -> { _id: 10625.0 }
m30001| Fri Feb 22 12:29:04.112 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10163.0 }, max: { _id: 10625.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:04.113 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 10163.0 } -> { _id: 10625.0 }
m30001| Fri Feb 22 12:29:04.122 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10163.0 }, max: { _id: 10625.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:04.123 [conn4] moveChunk setting version to: 24|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:04.123 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:04.124 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 10163.0 } -> {
_id: 10625.0 } m30000| Fri Feb 22 12:29:04.124 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 10163.0 } -> { _id: 10625.0 } m30000| Fri Feb 22 12:29:04.124 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-51276490c49297cf54df5601", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536144124), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 10163.0 }, max: { _id: 10625.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:04.126 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 177 version: 23|1||51276475bd1f99446659365c based on: 23|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:04.126 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 23|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:04.126 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 23000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 177 m30001| Fri Feb 22 12:29:04.126 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:04.133 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 10163.0 }, max: { _id: 10625.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:04.133 [conn4] moveChunk updating self version to: 24|1||51276475bd1f99446659365c through { _id: 10625.0 } -> { _id: 11087.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:04.134 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-5127649099334798f3e47e27", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536144134), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 10163.0 }, max: { _id: 10625.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:04.134 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:04.134 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 23000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 23000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 24000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:04.134 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:04.134 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:04.134 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:04.134 [cleanupOldData-5127649099334798f3e47e28] (start) waiting to cleanup test.bar from { _id: 10163.0 } -> { _id: 10625.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:04.134 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:04.134 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:04.135 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:04-5127649099334798f3e47e29", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536144135), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 10163.0 }, max: { _id: 10625.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:04.135 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:04.136 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 178 version: 24|1||51276475bd1f99446659365c based on: 23|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:04.137 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 179 version: 24|1||51276475bd1f99446659365c based on: 23|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:04.138 [cleanupOldData-5127648e99334798f3e47e1e] moveChunk deleted 462 documents for test.bar from { _id: 9701.0 } -> { _id: 10163.0 }
m30001| Fri Feb 22 12:29:04.138 [cleanupOldData-5127649099334798f3e47e23] moveChunk starting delete for: test.foo from { _id: 20613.0 } -> { _id: 21550.0 }
m30999| Fri Feb 22 12:29:04.138 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:04.139 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:04.140 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 180 version: 24|1||51276475bd1f99446659365c based on: 24|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:04.140 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 24|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:04.140 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 24000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 180
m30999| Fri Feb 22 12:29:04.140 [conn1] setShardVersion success: { oldVersion: Timestamp 23000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:04.154 [cleanupOldData-5127649099334798f3e47e28] waiting to remove documents for test.bar from { _id: 10163.0 } -> { _id: 10625.0 }
m30001| Fri Feb 22 12:29:05.065 [cleanupOldData-5127649099334798f3e47e23] moveChunk deleted 937 documents for test.foo from { _id: 20613.0 } -> { _id: 21550.0 }
m30001| Fri Feb 22 12:29:05.065 [cleanupOldData-5127649099334798f3e47e28] moveChunk starting delete for: test.bar from { _id: 10163.0 } -> { _id: 10625.0 }
25000
m30999| Fri Feb 22 12:29:05.139 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:05.140 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:05.140 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:05 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276491bd1f994466593674" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127648fbd1f994466593673" } }
m30999| Fri Feb 22 12:29:05.141 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276491bd1f994466593674
m30999| Fri Feb 22 12:29:05.141 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:05.141 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:05.141 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:05.143 [Balancer] shard0001 has more chunks me:84 best: shard0000:23
m30999| Fri Feb 22 12:29:05.143 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:05.143 [Balancer] donor : shard0001 chunks on 84
m30999| Fri Feb 22 12:29:05.143 [Balancer] receiver : shard0000 chunks on 23
m30999| Fri Feb 22 12:29:05.143 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:05.143 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_21550.0", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:05.146 [Balancer] shard0001 has more chunks me:194 best: shard0000:23
m30999| Fri Feb 22 12:29:05.146 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:05.146 [Balancer] donor : shard0001 chunks on 194
m30999| Fri Feb 22 12:29:05.146 [Balancer] receiver : shard0000 chunks on 23
m30999| Fri Feb 22 12:29:05.146 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:05.146 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_10625.0", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 10625.0 }, max: { _id: 11087.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:05.146 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 24|1||000000000000000000000000min: { _id: 21550.0 }max: { _id: 22487.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:05.146 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:05.146 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 21550.0 }, max: { _id: 22487.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_21550.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:05.147 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649199334798f3e47e2a
m30001| Fri Feb 22 12:29:05.147 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-5127649199334798f3e47e2b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536145147), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 21550.0 }, max: { _id: 22487.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:05.149 [conn4] moveChunk request accepted at version 24|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:05.152 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:05.152 [migrateThread] starting receiving-end of migration of chunk { _id: 21550.0 } -> { _id: 22487.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:05.162 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 118, clonedBytes: 62540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:05.172 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 315, clonedBytes: 166950, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:05.183 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 521, clonedBytes: 276130, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:05.193 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 730, clonedBytes: 386900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:05.204 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:05.204 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 21550.0 } -> { _id: 22487.0 }
m30000| Fri Feb 22 12:29:05.207 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 21550.0 } -> { _id: 22487.0 }
m30001| Fri Feb 22 12:29:05.209 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:05.209 [conn4] moveChunk setting version to: 25|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:05.210 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:05.211 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 181 version: 24|1||51276475bd1f99446659365b based on: 24|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:05.211 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 24|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:05.211 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 24000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 181
m30001| Fri Feb 22 12:29:05.212 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:05.217 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 21550.0 } -> { _id: 22487.0 }
m30000| Fri Feb 22 12:29:05.217 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 21550.0 } -> { _id: 22487.0 }
m30000| Fri Feb 22 12:29:05.218 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-51276491c49297cf54df5602", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536145218), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 21550.0 }, max: { _id: 22487.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 14 } }
m30001| Fri Feb 22 12:29:05.220 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 21550.0 }, max: { _id: 22487.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:05.220 [conn4] moveChunk updating self version to: 25|1||51276475bd1f99446659365b through { _id: 22487.0 } -> { _id: 23424.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:05.221 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-5127649199334798f3e47e2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536145220), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 21550.0 }, max: { _id: 22487.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:05.221 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:05.221 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:05.221 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 24000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 24000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 25000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:05.221 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:05.221 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:05.221 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:05.221 [cleanupOldData-5127649199334798f3e47e2d] (start) waiting to cleanup test.foo from { _id: 21550.0 } -> { _id: 22487.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:05.221 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:05.221 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-5127649199334798f3e47e2e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536145221), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 21550.0 }, max: { _id: 22487.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:05.221 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:05.222 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 182 version: 25|1||51276475bd1f99446659365b based on: 24|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:05.223 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 183 version: 25|1||51276475bd1f99446659365b based on: 24|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:05.224 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 24|1||000000000000000000000000min: { _id: 10625.0 }max: { _id: 11087.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:05.224 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:05.224 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 10625.0 }, max: { _id: 11087.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_10625.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:05.225 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 184 version: 25|1||51276475bd1f99446659365b based on: 25|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:05.225 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 25|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:05.225 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 25000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 184
m30999| Fri Feb 22 12:29:05.226 [conn1] setShardVersion success: { oldVersion: Timestamp 24000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:05.226 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649199334798f3e47e2f
m30001| Fri Feb 22 12:29:05.226 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-5127649199334798f3e47e30", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536145226), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 10625.0 }, max: { _id: 11087.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:05.227 [conn4] moveChunk request accepted at version 24|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:05.228 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:05.229 [migrateThread] starting receiving-end of migration of chunk { _id: 10625.0 } -> { _id: 11087.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:05.239 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10625.0 }, max: { _id: 11087.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 134, clonedBytes: 139762, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:05.241 [cleanupOldData-5127649199334798f3e47e2d] waiting to remove documents for test.foo from { _id: 21550.0 } -> { _id: 22487.0 }
m30001| Fri Feb 22 12:29:05.249 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10625.0 }, max: { _id: 11087.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 301, clonedBytes: 313943, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:05.258 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:05.258 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 10625.0 } -> { _id: 11087.0 }
m30000| Fri Feb 22 12:29:05.258 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 10625.0 } -> { _id: 11087.0 }
m30001| Fri Feb 22 12:29:05.259 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 10625.0 }, max: { _id: 11087.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:05.259 [conn4] moveChunk setting version to: 25|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:05.260 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:05.262 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 185 version: 24|1||51276475bd1f99446659365c based on: 24|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:05.262 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 24|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:05.263 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 24000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 185
m30001| Fri Feb 22 12:29:05.263 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:05.268 [cleanupOldData-5127649099334798f3e47e28] moveChunk deleted 462 documents for test.bar from { _id: 10163.0 } -> { _id: 10625.0 }
m30001| Fri Feb 22 12:29:05.268 [cleanupOldData-5127649199334798f3e47e2d] moveChunk starting delete for: test.foo from { _id: 21550.0 } -> { _id: 22487.0 }
m30000| Fri Feb 22 12:29:05.269 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 10625.0 } -> { _id: 11087.0 }
m30000| Fri Feb 22 12:29:05.269 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 10625.0 } -> { _id: 11087.0 }
m30000| Fri Feb 22 12:29:05.269 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-51276491c49297cf54df5603", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536145269), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 10625.0 }, max: { _id: 11087.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:29:05.270 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 10625.0 }, max: { _id: 11087.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:05.270 [conn4] moveChunk updating self version to: 25|1||51276475bd1f99446659365c through { _id: 11087.0 } -> { _id: 11549.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:05.270 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-5127649199334798f3e47e31", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536145270), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 10625.0 }, max: { _id: 11087.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:05.270 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:05.270 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 24000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 24000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 25000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:05.271 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:05.271 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:05.272 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 186 version: 25|1||51276475bd1f99446659365c based on: 24|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:05.274 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 187 version: 25|1||51276475bd1f99446659365c based on: 25|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:05.274 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 25|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:05.274 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 25000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 187
m30999| Fri Feb 22 12:29:05.274 [conn1] setShardVersion success: { oldVersion: Timestamp 24000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:05.276 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:05.276 [cleanupOldData-5127649199334798f3e47e32] (start) waiting to cleanup test.bar from { _id: 10625.0 } -> { _id: 11087.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:05.276 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:05.277 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:05.277 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:05-5127649199334798f3e47e33", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536145277), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 10625.0 }, max: { _id: 11087.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 5 } }
m30999| Fri Feb 22 12:29:05.277 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:05.278 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:05.278 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:29:05.297 [cleanupOldData-5127649199334798f3e47e32] waiting to remove documents for test.bar from { _id: 10625.0 } -> { _id: 11087.0 }
26000
m30999| Fri Feb 22 12:29:06.279 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:06.279 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:06.279 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:06 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276492bd1f994466593675" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276491bd1f994466593674" } }
m30999| Fri Feb 22 12:29:06.280 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276492bd1f994466593675
m30999| Fri Feb 22 12:29:06.280 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:06.280 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:06.280 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:06.282 [Balancer] shard0001 has more chunks me:83 best: shard0000:24
m30999| Fri Feb 22 12:29:06.283 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:06.283 [Balancer] donor : shard0001 chunks on 83
m30999| Fri Feb 22 12:29:06.283 [Balancer] receiver : shard0000 chunks on 24
m30999| Fri Feb 22 12:29:06.283 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:06.283 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_22487.0", lastmod: Timestamp 25000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30001| Fri Feb 22 12:29:06.283 [cleanupOldData-5127649199334798f3e47e2d] moveChunk deleted 937 documents for test.foo from { _id: 21550.0 } -> { _id: 22487.0 }
m30001| Fri Feb 22 12:29:06.283 [cleanupOldData-5127649199334798f3e47e32] moveChunk starting delete for: test.bar from { _id: 10625.0 } -> { _id: 11087.0 }
m30999| Fri Feb 22 12:29:06.285 [Balancer] shard0001 has more chunks me:193 best: shard0000:24
m30999| Fri Feb 22 12:29:06.285 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:06.285 [Balancer] donor : shard0001 chunks on 193
m30999| Fri Feb 22 12:29:06.285 [Balancer] receiver : shard0000 chunks on 24
m30999| Fri Feb 22 12:29:06.285 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:06.285 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_11087.0", lastmod: Timestamp 25000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 11087.0 }, max: { _id: 11549.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:06.285 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 25|1||000000000000000000000000min: { _id: 22487.0 }max: { _id: 23424.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:06.285 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:06.286 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 22487.0 }, max: { _id: 23424.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_22487.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:06.286 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649299334798f3e47e34
m30001| Fri Feb 22 12:29:06.287 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-5127649299334798f3e47e35", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536146287), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 22487.0 }, max: { _id: 23424.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:06.288 [conn4] moveChunk request accepted at version 25|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:06.291 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:06.291 [migrateThread] starting receiving-end of migration of chunk { _id: 22487.0 } -> { _id: 23424.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:06.301 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 67840, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:06.311 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 330, clonedBytes: 174900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:06.322 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 531, clonedBytes: 281430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:06.332 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 734, clonedBytes: 389020, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:06.342 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:06.342 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 22487.0 } -> { _id: 23424.0 }
m30000| Fri Feb 22 12:29:06.346 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 22487.0 } -> { _id: 23424.0 }
m30001| Fri Feb 22 12:29:06.348 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:06.348 [conn4] moveChunk setting version to: 26|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:06.348 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:06.350 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 188 version: 25|1||51276475bd1f99446659365b based on: 25|1||51276475bd1f99446659365b
m30999| Fri Feb 22
12:29:06.350 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 25|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:06.351 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 25000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 188 m30001| Fri Feb 22 12:29:06.351 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:06.356 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 22487.0 } -> { _id: 23424.0 } m30000| Fri Feb 22 12:29:06.356 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 22487.0 } -> { _id: 23424.0 } m30000| Fri Feb 22 12:29:06.356 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-51276492c49297cf54df5604", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536146356), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 22487.0 }, max: { _id: 23424.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:06.358 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 22487.0 }, max: { _id: 23424.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:06.359 [conn4] moveChunk updating self version to: 26|1||51276475bd1f99446659365b through { _id: 23424.0 } -> { _id: 24361.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:06.359 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-5127649299334798f3e47e36", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new 
Date(1361536146359), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 22487.0 }, max: { _id: 23424.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:06.359 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:06.359 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:06.359 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:06.359 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 25000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 25000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 26000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:06.359 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:06.359 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:06.359 [cleanupOldData-5127649299334798f3e47e37] (start) waiting to cleanup test.foo from { _id: 22487.0 } -> { _id: 23424.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:06.360 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:06.360 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-5127649299334798f3e47e38", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536146360), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 22487.0 }, max: { _id: 23424.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:06.360 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:06.361 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 189 version: 26|1||51276475bd1f99446659365b based on: 25|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:06.362 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 190 version: 26|1||51276475bd1f99446659365b based on: 25|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:06.363 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 25|1||000000000000000000000000min: { _id: 11087.0 }max: { _id: 11549.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:06.363 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:06.363 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 11087.0 }, max: { _id: 11549.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_11087.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:06.364 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 191 version: 26|1||51276475bd1f99446659365b based on: 26|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:06.364 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 26|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:29:06.364 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 26000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 191 m30999| Fri Feb 22 12:29:06.364 [conn1] setShardVersion success: { oldVersion: Timestamp 25000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:06.365 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649299334798f3e47e39 m30001| Fri Feb 22 12:29:06.365 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-5127649299334798f3e47e3a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536146365), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 11087.0 }, max: { _id: 11549.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:06.366 [conn4] moveChunk request accepted at version 25|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:06.368 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:06.368 [migrateThread] starting receiving-end of migration of chunk { _id: 11087.0 } -> { _id: 11549.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:06.378 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 11087.0 }, max: { _id: 11549.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 111, clonedBytes: 115773, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:06.380 [cleanupOldData-5127649299334798f3e47e37] waiting to remove documents for test.foo from { _id: 22487.0 } -> { _id: 23424.0 } m30001| Fri Feb 22 12:29:06.388 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", 
from: "localhost:30001", min: { _id: 11087.0 }, max: { _id: 11549.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 292, clonedBytes: 304556, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:06.398 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:06.398 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 11087.0 } -> { _id: 11549.0 } m30001| Fri Feb 22 12:29:06.399 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 11087.0 }, max: { _id: 11549.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:06.400 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 11087.0 } -> { _id: 11549.0 } m30001| Fri Feb 22 12:29:06.409 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 11087.0 }, max: { _id: 11549.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:06.409 [conn4] moveChunk setting version to: 26|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:06.409 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:06.411 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 11087.0 } -> { _id: 11549.0 } m30000| Fri Feb 22 12:29:06.411 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 11087.0 } -> { _id: 11549.0 } m30000| Fri Feb 22 12:29:06.411 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-51276492c49297cf54df5605", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536146411), what: "moveChunk.to", ns: "test.bar", 
details: { min: { _id: 11087.0 }, max: { _id: 11549.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:06.412 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 192 version: 25|1||51276475bd1f99446659365c based on: 25|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:06.413 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 25|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:06.413 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 25000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 192 m30001| Fri Feb 22 12:29:06.413 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:06.419 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 11087.0 }, max: { _id: 11549.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:06.420 [conn4] moveChunk updating self version to: 26|1||51276475bd1f99446659365c through { _id: 11549.0 } -> { _id: 12011.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:06.420 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-5127649299334798f3e47e3b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536146420), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 11087.0 }, max: { _id: 11549.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:06.420 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:06.420 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 25000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 25000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 26000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:06.420 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:06.420 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:06.421 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:06.421 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:06.421 [cleanupOldData-5127649299334798f3e47e3c] (start) waiting to cleanup test.bar from { _id: 11087.0 } -> { _id: 11549.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:06.421 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:06.421 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:06-5127649299334798f3e47e3d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536146421), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 11087.0 }, max: { _id: 11549.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:06.421 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:06.423 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 193 version: 26|1||51276475bd1f99446659365c based on: 25|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:06.425 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 194 version: 26|1||51276475bd1f99446659365c based on: 25|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:06.426 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:06.426 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:06.427 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 195 version: 26|1||51276475bd1f99446659365c based on: 26|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:06.427 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 26|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:06.428 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 26000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 195 m30999| Fri Feb 22 12:29:06.428 [conn1] setShardVersion success: { oldVersion: Timestamp 25000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:06.441 [cleanupOldData-5127649299334798f3e47e3c] waiting to remove documents for test.bar from { _id: 11087.0 } -> { _id: 11549.0 } m30001| Fri Feb 22 12:29:06.617 [cleanupOldData-5127649199334798f3e47e32] moveChunk deleted 462 documents for test.bar from { _id: 10625.0 } -> { _id: 11087.0 } m30001| Fri Feb 22 12:29:06.617 [cleanupOldData-5127649299334798f3e47e37] moveChunk starting delete for: test.foo from { _id: 22487.0 } -> { _id: 23424.0 } 27000 m30999| Fri Feb 22 12:29:07.427 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:07.427 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:07.427 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:07 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276493bd1f994466593676" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276492bd1f994466593675" } } m30999| Fri Feb 22 12:29:07.428 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276493bd1f994466593676 m30999| Fri Feb 22 12:29:07.428 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:07.428 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:07.428 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:07.430 [Balancer] shard0001 has more chunks me:82 best: shard0000:25 m30999| Fri Feb 22 12:29:07.430 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:07.430 [Balancer] donor : shard0001 chunks on 82 m30999| Fri Feb 22 12:29:07.430 [Balancer] receiver : shard0000 chunks on 25 m30999| Fri Feb 22 12:29:07.430 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:07.430 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_23424.0", lastmod: Timestamp 26000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:07.432 [Balancer] shard0001 has more chunks me:192 best: shard0000:25 m30999| Fri Feb 22 12:29:07.432 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:07.432 [Balancer] donor : shard0001 chunks on 192 m30999| Fri Feb 22 12:29:07.432 [Balancer] receiver : shard0000 chunks on 25 m30999| Fri Feb 22 12:29:07.432 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:07.432 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_11549.0", lastmod: Timestamp 26000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 11549.0 }, max: { _id: 12011.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:07.432 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 26|1||000000000000000000000000min: { _id: 23424.0 }max: { _id: 24361.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:07.432 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:07.432 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 23424.0 }, max: { _id: 24361.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_23424.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:07.433 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649399334798f3e47e3e m30001| Fri Feb 22 12:29:07.433 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-5127649399334798f3e47e3f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536147433), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 23424.0 }, max: { _id: 24361.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:07.434 [conn4] moveChunk request accepted at version 26|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:07.436 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:07.437 [migrateThread] starting receiving-end of migration of chunk { _id: 23424.0 } -> { _id: 24361.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:07.447 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 105, clonedBytes: 55650, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:07.457 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 307, clonedBytes: 162710, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:07.467 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 510, clonedBytes: 270300, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:07.477 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 712, clonedBytes: 377360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:07.489 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:07.489 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 23424.0 } -> { _id: 24361.0 } m30000| Fri Feb 22 12:29:07.492 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 23424.0 } -> { _id: 24361.0 } m30001| Fri Feb 22 12:29:07.493 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:07.494 [conn4] moveChunk setting version to: 27|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:07.494 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:07.496 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 196 version: 26|1||51276475bd1f99446659365b based on: 26|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:07.496 [conn1] warning: chunk 
manager reload forced for collection 'test.foo', config version is 26|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:07.496 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 26000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 196 m30001| Fri Feb 22 12:29:07.496 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:07.502 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 23424.0 } -> { _id: 24361.0 } m30000| Fri Feb 22 12:29:07.502 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 23424.0 } -> { _id: 24361.0 } m30000| Fri Feb 22 12:29:07.503 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-51276493c49297cf54df5606", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536147503), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 23424.0 }, max: { _id: 24361.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:07.504 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 23424.0 }, max: { _id: 24361.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:07.504 [conn4] moveChunk updating self version to: 27|1||51276475bd1f99446659365b through { _id: 24361.0 } -> { _id: 25298.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:07.505 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-5127649399334798f3e47e40", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536147505), what: 
"moveChunk.commit", ns: "test.foo", details: { min: { _id: 23424.0 }, max: { _id: 24361.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:07.505 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:07.505 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:07.505 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:07.505 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:29:07.505 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 26000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 26000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 27000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:07.505 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:07.505 [cleanupOldData-5127649399334798f3e47e41] (start) waiting to cleanup test.foo from { _id: 23424.0 } -> { _id: 24361.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:07.505 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:07.505 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-5127649399334798f3e47e42", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536147505), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 23424.0 }, max: { _id: 24361.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:07.505 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:07.506 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 197 version: 27|1||51276475bd1f99446659365b based on: 26|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:07.507 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 198 version: 27|1||51276475bd1f99446659365b based on: 26|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:07.508 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 26|1||000000000000000000000000min: { _id: 11549.0 }max: { _id: 12011.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:07.508 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:07.508 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 11549.0 }, max: { _id: 12011.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_11549.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:07.509 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 199 version: 27|1||51276475bd1f99446659365b based on: 27|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:07.509 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649399334798f3e47e43 m30001| Fri 
Feb 22 12:29:07.509 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-5127649399334798f3e47e44", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536147509), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 11549.0 }, max: { _id: 12011.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:07.509 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 27|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:07.509 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 27000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 199
m30999| Fri Feb 22 12:29:07.509 [conn1] setShardVersion success: { oldVersion: Timestamp 26000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:07.510 [conn4] moveChunk request accepted at version 26|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:07.511 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:07.511 [migrateThread] starting receiving-end of migration of chunk { _id: 11549.0 } -> { _id: 12011.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:07.521 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 11549.0 }, max: { _id: 12011.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 189, clonedBytes: 197127, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:07.525 [cleanupOldData-5127649399334798f3e47e41] waiting to remove documents for test.foo from { _id: 23424.0 } -> { _id: 24361.0 }
m30001| Fri Feb 22 12:29:07.531 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 11549.0 }, max: { _id: 12011.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 444, clonedBytes: 463092, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:07.532 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:07.532 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 11549.0 } -> { _id: 12011.0 }
m30000| Fri Feb 22 12:29:07.534 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 11549.0 } -> { _id: 12011.0 }
m30001| Fri Feb 22 12:29:07.541 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 11549.0 }, max: { _id: 12011.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:07.541 [conn4] moveChunk setting version to: 27|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:07.542 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:07.544 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 11549.0 } -> { _id: 12011.0 }
m30000| Fri Feb 22 12:29:07.544 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 11549.0 } -> { _id: 12011.0 }
m30000| Fri Feb 22 12:29:07.544 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-51276493c49297cf54df5607", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536147544), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 11549.0 }, max: { _id: 12011.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:07.544 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 200 version: 26|1||51276475bd1f99446659365c based on: 26|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:07.545 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 26|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:07.545 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 26000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 200
m30001| Fri Feb 22 12:29:07.545 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:07.551 [cleanupOldData-5127649299334798f3e47e37] moveChunk deleted 937 documents for test.foo from { _id: 22487.0 } -> { _id: 23424.0 }
m30001| Fri Feb 22 12:29:07.551 [cleanupOldData-5127649399334798f3e47e41] moveChunk starting delete for: test.foo from { _id: 23424.0 } -> { _id: 24361.0 }
m30001| Fri Feb 22 12:29:07.552 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 11549.0 }, max: { _id: 12011.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:07.552 [conn4] moveChunk updating self version to: 27|1||51276475bd1f99446659365c through { _id: 12011.0 } -> { _id: 12473.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:07.552 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-5127649399334798f3e47e45", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536147552), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 11549.0 }, max: { _id: 12011.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:07.552 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:07.553 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 26000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 26000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 27000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:07.553 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:07.553 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:07.553 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:07.553 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:07.553 [cleanupOldData-5127649399334798f3e47e46] (start) waiting to cleanup test.bar from { _id: 11549.0 } -> { _id: 12011.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:07.553 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:07.553 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:07-5127649399334798f3e47e47", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536147553), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 11549.0 }, max: { _id: 12011.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:07.553 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:07.554 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 201 version: 27|1||51276475bd1f99446659365c based on: 26|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:07.556 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 202 version: 27|1||51276475bd1f99446659365c based on: 26|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:07.557 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:07.557 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:07.558 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 203 version: 27|1||51276475bd1f99446659365c based on: 27|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:07.558 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 27|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:07.559 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 27000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 203
m30999| Fri Feb 22 12:29:07.559 [conn1] setShardVersion success: { oldVersion: Timestamp 26000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:07.573 [cleanupOldData-5127649399334798f3e47e46] waiting to remove documents for test.bar from { _id: 11549.0 } -> { _id: 12011.0 }
m30001| Fri Feb 22 12:29:08.387 [cleanupOldData-5127649399334798f3e47e41] moveChunk deleted 937 documents for test.foo from { _id: 23424.0 } -> { _id: 24361.0 }
m30001| Fri Feb 22 12:29:08.387 [cleanupOldData-5127649399334798f3e47e46] moveChunk starting delete for: test.bar from { _id: 11549.0 } -> { _id: 12011.0 }
m30999| Fri Feb 22 12:29:08.558 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:08.558 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:08.558 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:08 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276494bd1f994466593677" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276493bd1f994466593676" } }
m30999| Fri Feb 22 12:29:08.559 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276494bd1f994466593677
m30999| Fri Feb 22 12:29:08.559 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:08.559 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:08.559 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:08.562 [Balancer] shard0001 has more chunks me:81 best: shard0000:26
m30999| Fri Feb 22 12:29:08.562 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:08.562 [Balancer] donor : shard0001 chunks on 81
m30999| Fri Feb 22 12:29:08.562 [Balancer] receiver : shard0000 chunks on 26
m30999| Fri Feb 22 12:29:08.562 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:08.562 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_24361.0", lastmod: Timestamp 27000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:08.565 [Balancer] shard0001 has more chunks me:191 best: shard0000:26
m30999| Fri Feb 22 12:29:08.565 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:08.565 [Balancer] donor : shard0001 chunks on 191
m30999| Fri Feb 22 12:29:08.565 [Balancer] receiver : shard0000 chunks on 26
m30999| Fri Feb 22 12:29:08.565 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:08.565 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_12011.0", lastmod: Timestamp 27000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 12011.0 }, max: { _id: 12473.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:08.565 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 27|1||000000000000000000000000min: { _id: 24361.0 }max: { _id: 25298.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:08.565 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:08.565 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 24361.0 }, max: { _id: 25298.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_24361.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:08.566 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649499334798f3e47e48
m30001| Fri Feb 22 12:29:08.567 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-5127649499334798f3e47e49", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536148567), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 24361.0 }, max: { _id: 25298.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:08.568 [conn4] moveChunk request accepted at version 27|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:08.571 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:08.571 [migrateThread] starting receiving-end of migration of chunk { _id: 24361.0 } -> { _id: 25298.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:08.581 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 119, clonedBytes: 63070, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
28000
m30001| Fri Feb 22 12:29:08.592 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 322, clonedBytes: 170660, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:08.602 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 525, clonedBytes: 278250, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:08.612 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 729, clonedBytes: 386370, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:08.623 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:08.623 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 24361.0 } -> { _id: 25298.0 }
m30000| Fri Feb 22 12:29:08.626 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 24361.0 } -> { _id: 25298.0 }
m30001| Fri Feb 22 12:29:08.628 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:08.629 [conn4] moveChunk setting version to: 28|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:08.629 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:08.631 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 204 version: 27|1||51276475bd1f99446659365b based on: 27|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:08.631 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 27|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:08.631 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 27000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 204
m30001| Fri Feb 22 12:29:08.631 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:08.637 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 24361.0 } -> { _id: 25298.0 }
m30000| Fri Feb 22 12:29:08.637 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 24361.0 } -> { _id: 25298.0 }
m30000| Fri Feb 22 12:29:08.637 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-51276494c49297cf54df5608", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536148637), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 24361.0 }, max: { _id: 25298.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:08.639 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 24361.0 }, max: { _id: 25298.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:08.639 [conn4] moveChunk updating self version to: 28|1||51276475bd1f99446659365b through { _id: 25298.0 } -> { _id: 26235.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:08.640 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-5127649499334798f3e47e4a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536148640), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 24361.0 }, max: { _id: 25298.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:08.640 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:08.640 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:08.640 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:08.640 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:08.640 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 27000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 27000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 28000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:08.640 [cleanupOldData-5127649499334798f3e47e4b] (start) waiting to cleanup test.foo from { _id: 24361.0 } -> { _id: 25298.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:08.640 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:08.640 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:08.640 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-5127649499334798f3e47e4c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536148640), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 24361.0 }, max: { _id: 25298.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:08.641 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:08.642 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 205 version: 28|1||51276475bd1f99446659365b based on: 27|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:08.643 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 206 version: 28|1||51276475bd1f99446659365b based on: 27|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:08.644 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 27|1||000000000000000000000000min: { _id: 12011.0 }max: { _id: 12473.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:08.644 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:08.644 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 12011.0 }, max: { _id: 12473.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_12011.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:08.645 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 207 version: 28|1||51276475bd1f99446659365b based on: 28|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:08.645 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 28|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:08.646 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649499334798f3e47e4d
m30001| Fri Feb 22 12:29:08.646 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-5127649499334798f3e47e4e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536148646), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 12011.0 }, max: { _id: 12473.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:08.646 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 28000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 207
m30999| Fri Feb 22 12:29:08.646 [conn1] setShardVersion success: { oldVersion: Timestamp 27000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:08.647 [conn4] moveChunk request accepted at version 27|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:08.648 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:08.649 [migrateThread] starting receiving-end of migration of chunk { _id: 12011.0 } -> { _id: 12473.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:08.658 [cleanupOldData-5127649399334798f3e47e46] moveChunk deleted 462 documents for test.bar from { _id: 11549.0 } -> { _id: 12011.0 }
m30001| Fri Feb 22 12:29:08.658 [cleanupOldData-5127648e99334798f3e47e19] moveChunk starting delete for: test.foo from { _id: 19676.0 } -> { _id: 20613.0 }
m30001| Fri Feb 22 12:29:08.659 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12011.0 }, max: { _id: 12473.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 71, clonedBytes: 74053, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:08.660 [cleanupOldData-5127649499334798f3e47e4b] waiting to remove documents for test.foo from { _id: 24361.0 } -> { _id: 25298.0 }
m30001| Fri Feb 22 12:29:08.669 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12011.0 }, max: { _id: 12473.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 203, clonedBytes: 211729, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:08.679 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12011.0 }, max: { _id: 12473.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 370, clonedBytes: 385910, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:08.686 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:08.686 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 12011.0 } -> { _id: 12473.0 }
m30000| Fri Feb 22 12:29:08.689 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 12011.0 } -> { _id: 12473.0 }
m30001| Fri Feb 22 12:29:08.690 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12011.0 }, max: { _id: 12473.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:08.690 [conn4] moveChunk setting version to: 28|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:08.690 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:08.693 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 208 version: 27|1||51276475bd1f99446659365c based on: 27|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:08.693 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 27|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:08.693 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 27000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 208
m30001| Fri Feb 22 12:29:08.693 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:08.699 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 12011.0 } -> { _id: 12473.0 }
m30000| Fri Feb 22 12:29:08.699 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 12011.0 } -> { _id: 12473.0 }
m30000| Fri Feb 22 12:29:08.699 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-51276494c49297cf54df5609", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536148699), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 12011.0 }, max: { _id: 12473.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 36, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:08.700 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 12011.0 }, max: { _id: 12473.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:08.700 [conn4] moveChunk updating self version to: 28|1||51276475bd1f99446659365c through { _id: 12473.0 } -> { _id: 12935.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:08.701 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-5127649499334798f3e47e4f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536148701), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 12011.0 }, max: { _id: 12473.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:08.701 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:08.701 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:08.701 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 27000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 27000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 28000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:08.701 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:08.701 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:08.701 [cleanupOldData-5127649499334798f3e47e50] (start) waiting to cleanup test.bar from { _id: 12011.0 } -> { _id: 12473.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:08.701 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:08.702 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:08.702 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:08-5127649499334798f3e47e51", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536148702), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 12011.0 }, max: { _id: 12473.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:08.702 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:08.703 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 209 version: 28|1||51276475bd1f99446659365c based on: 27|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:08.705 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 210 version: 28|1||51276475bd1f99446659365c based on: 27|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:08.706 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:08.707 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:08.708 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 211 version: 28|1||51276475bd1f99446659365c based on: 28|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:08.708 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 28|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:08.709 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 28000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 211
m30999| Fri Feb 22 12:29:08.709 [conn1] setShardVersion success: { oldVersion: Timestamp 27000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:08.721 [cleanupOldData-5127649499334798f3e47e50] waiting to remove documents for test.bar from { _id: 12011.0 } -> { _id: 12473.0 }
m30999| Fri Feb 22 12:29:08.779 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:29:08 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30001| Fri Feb 22 12:29:09.612 [cleanupOldData-5127648e99334798f3e47e19] moveChunk deleted 937 documents for test.foo from { _id: 19676.0 } -> { _id: 20613.0 }
m30001| Fri Feb 22 12:29:09.612 [cleanupOldData-5127649499334798f3e47e50] moveChunk starting delete for: test.bar from { _id: 12011.0 } -> { _id: 12473.0 }
m30999| Fri Feb 22 12:29:09.708 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:09.708 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:09.708 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:09 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276495bd1f994466593678" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276494bd1f994466593677" } }
m30999| Fri Feb 22 12:29:09.709 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276495bd1f994466593678
m30999| Fri Feb 22 12:29:09.709 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:09.709 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:09.709 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:09.712 [Balancer] shard0001 has more chunks me:80 best: shard0000:27
m30999| Fri Feb 22 12:29:09.712 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:09.712 [Balancer] donor : shard0001 chunks on 80
m30999| Fri Feb 22 12:29:09.712 [Balancer] receiver : shard0000 chunks on 27
m30999| Fri Feb 22 12:29:09.712 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:09.712 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_25298.0", lastmod: Timestamp 28000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:09.715 [Balancer] shard0001 has more chunks me:190 best: shard0000:27
m30999| Fri Feb 22 12:29:09.715 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:09.715 [Balancer] donor : shard0001 chunks on 190
m30999| Fri Feb 22 12:29:09.715 [Balancer] receiver : shard0000 chunks on 27
m30999| Fri Feb 22 12:29:09.715 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:09.715 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_12473.0", lastmod: Timestamp 28000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 12473.0 }, max: { _id: 12935.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:09.715 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 28|1||000000000000000000000000min: { _id: 25298.0 }max: { _id: 26235.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:09.715 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:09.715 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 25298.0 }, max: { _id: 26235.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_25298.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:09.716 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649599334798f3e47e52
m30001| Fri Feb 22 12:29:09.716 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-5127649599334798f3e47e53", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536149716), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 25298.0 }, max: { _id: 26235.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:09.718 [conn4] moveChunk request accepted at version 28|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:09.721 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:09.721 [migrateThread] starting receiving-end of migration of chunk { _id: 25298.0 } -> { _id: 26235.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:09.731 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 95, clonedBytes: 50350, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:09.742 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 292, clonedBytes: 154760, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:09.752 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 487, clonedBytes: 258110, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:09.762 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 645, clonedBytes: 341850, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:09.777 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:09.777 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 25298.0 } -> { _id: 26235.0 }
m30001| Fri Feb 22 12:29:09.778 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:09.779 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 25298.0 } -> { _id: 26235.0 }
m30001| Fri Feb 22 12:29:09.811 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 },
shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:09.811 [conn4] moveChunk setting version to: 29|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:09.811 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:09.813 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 212 version: 28|1||51276475bd1f99446659365b based on: 28|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:09.813 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 28|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:09.813 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 28000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 212 m30001| Fri Feb 22 12:29:09.813 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:09.820 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 25298.0 } -> { _id: 26235.0 } m30000| Fri Feb 22 12:29:09.820 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 25298.0 } -> { _id: 26235.0 } m30000| Fri Feb 22 12:29:09.820 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-51276495c49297cf54df560a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536149820), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 25298.0 }, max: { _id: 26235.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 55, step4 of 5: 0, step5 of 5: 42 } } m30001| Fri Feb 22 12:29:09.821 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 25298.0 }, max: { _id: 26235.0 }, shardKeyPattern: 
{ _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:09.821 [conn4] moveChunk updating self version to: 29|1||51276475bd1f99446659365b through { _id: 26235.0 } -> { _id: 27172.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:09.822 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-5127649599334798f3e47e54", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536149822), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 25298.0 }, max: { _id: 26235.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:09.822 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:09.822 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:09.822 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:09.822 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 28000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 28000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 29000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:09.822 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:09.822 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:09.822 [cleanupOldData-5127649599334798f3e47e55] (start) waiting to cleanup test.foo from { _id: 25298.0 } -> { _id: 26235.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:09.822 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
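The `setShardVersion failed!` entries above are a benign race during migration: mongos still holds the pre-migration collection version (28|1) while the donor shard's global version has already advanced to 29|0 at commit, so the shard rejects the stale request and tells mongos to reload (`reloadConfig: true`). A minimal Python sketch of that version check, under the assumption that chunk versions order by (major, minor) within the same epoch (helper names are illustrative, not MongoDB source):

```python
# Hypothetical sketch of the shard-side check behind the
# "shard global version for collection is higher" error seen in the log.

def check_set_shard_version(requested, global_version, epoch_matches=True):
    """requested/global_version are (major, minor) tuples, e.g. 28|1 -> (28, 1).

    The shard refuses to move its view of the collection version backwards;
    mongos is told to refresh its chunk manager instead.
    """
    if epoch_matches and requested < global_version:
        return {
            "ok": 0.0,
            "reloadConfig": True,
            "errmsg": "shard global version for collection is higher "
                      "than trying to set to",
        }
    return {"ok": 1.0}

# mongos tries to set 28|1 after the shard committed 29|0, as in the log:
result = check_set_shard_version((28, 1), (29, 0))
```

On the failure path mongos reloads the chunk manager (the forced-reload warnings in the log) and retries with the fresh version, which then succeeds.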
m30001| Fri Feb 22 12:29:09.822 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-5127649599334798f3e47e56", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536149822), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 25298.0 }, max: { _id: 26235.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 12:29:09.822 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 25298.0 }, max: { _id: 26235.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_25298.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:27 r:3857 w:58 reslen:37 107ms m30999| Fri Feb 22 12:29:09.823 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:09.823 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 213 version: 29|1||51276475bd1f99446659365b based on: 28|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:09.825 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 214 version: 29|1||51276475bd1f99446659365b based on: 28|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:09.825 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 28|1||000000000000000000000000min: { _id: 12473.0 }max: { _id: 12935.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:09.825 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:09.826 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 12473.0 }, max: { _id: 12935.0 }, maxChunkSizeBytes: 1048576, shardId: 
"test.bar-_id_12473.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:09.826 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 215 version: 29|1||51276475bd1f99446659365b based on: 29|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:09.826 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 29|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:09.827 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 29000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 215 m30999| Fri Feb 22 12:29:09.827 [conn1] setShardVersion success: { oldVersion: Timestamp 28000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:09.827 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649599334798f3e47e57 m30001| Fri Feb 22 12:29:09.827 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-5127649599334798f3e47e58", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536149827), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 12473.0 }, max: { _id: 12935.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:09.828 [conn4] moveChunk request accepted at version 28|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:09.829 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:09.830 [migrateThread] starting receiving-end of migration of chunk { _id: 12473.0 } -> { _id: 12935.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:09.840 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 12473.0 }, max: { _id: 12935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 135, clonedBytes: 140805, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:09.842 [cleanupOldData-5127649599334798f3e47e55] waiting to remove documents for test.foo from { _id: 25298.0 } -> { _id: 26235.0 } m30001| Fri Feb 22 12:29:09.850 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12473.0 }, max: { _id: 12935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 302, clonedBytes: 314986, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 29000 m30001| Fri Feb 22 12:29:09.860 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12473.0 }, max: { _id: 12935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 430, clonedBytes: 448490, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:09.862 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:09.863 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 12473.0 } -> { _id: 12935.0 } m30000| Fri Feb 22 12:29:09.863 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 12473.0 } -> { _id: 12935.0 } m30001| Fri Feb 22 12:29:09.870 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12473.0 }, max: { _id: 12935.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:09.871 [conn4] moveChunk setting version to: 29|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:09.871 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:09.873 [conn1] ChunkManager: time to load chunks for test.bar: 1ms 
sequenceNumber: 216 version: 28|1||51276475bd1f99446659365c based on: 28|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:09.873 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 12473.0 } -> { _id: 12935.0 } m30000| Fri Feb 22 12:29:09.873 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 12473.0 } -> { _id: 12935.0 } m30999| Fri Feb 22 12:29:09.873 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 28|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:09.873 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-51276495c49297cf54df560b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536149873), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 12473.0 }, max: { _id: 12935.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 32, step4 of 5: 0, step5 of 5: 10 } } m30999| Fri Feb 22 12:29:09.874 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 28000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 216 m30001| Fri Feb 22 12:29:09.874 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:09.881 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 12473.0 }, max: { _id: 12935.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:09.881 [conn4] moveChunk updating self version to: 29|1||51276475bd1f99446659365c through { _id: 12935.0 } -> { _id: 13397.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:09.882 [conn4] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-5127649599334798f3e47e59", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536149882), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 12473.0 }, max: { _id: 12935.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:09.882 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:09.882 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:09.882 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:09.882 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 28000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 28000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 29000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:09.882 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:09.882 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:09.882 [cleanupOldData-5127649599334798f3e47e5a] (start) waiting to cleanup test.bar from { _id: 12473.0 } -> { _id: 12935.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:09.883 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:09.883 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:09-5127649599334798f3e47e5b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536149883), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 12473.0 }, max: { _id: 12935.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:09.883 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:09.884 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 217 version: 29|1||51276475bd1f99446659365c based on: 28|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:09.886 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 218 version: 29|1||51276475bd1f99446659365c based on: 28|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:09.887 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:09.888 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
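The `[Balancer]` lines in each round follow a simple imbalance rule: the shard holding the most chunks for a collection donates one chunk to the shard holding the fewest, as long as the gap exceeds the logged threshold. A hedged sketch of that selection (illustrative names, not the mongos implementation):

```python
# Sketch of the donor/receiver choice visible in lines like
#   donor : shard0001 chunks on 190
#   receiver : shard0000 chunks on 27
#   threshold : 2

def pick_migration(chunk_counts, threshold=2):
    """chunk_counts maps shard name -> chunk count for one collection.

    Returns (donor, receiver) if a move is warranted, else None.
    """
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None  # balanced enough; skip this collection this round
    return donor, receiver

# Matches the test.bar round above: 190 chunks vs. 27.
move = pick_migration({"shard0001": 190, "shard0000": 27})
```

One chunk moves per collection per round, which is why consecutive rounds in the log each shave a single chunk off the donor's count (80→79 for test.foo, 190→189 for test.bar).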
m30999| Fri Feb 22 12:29:09.889 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 219 version: 29|1||51276475bd1f99446659365c based on: 29|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:09.889 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 29|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:09.889 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 29000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 219 m30999| Fri Feb 22 12:29:09.889 [conn1] setShardVersion success: { oldVersion: Timestamp 28000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:09.902 [cleanupOldData-5127649599334798f3e47e5a] waiting to remove documents for test.bar from { _id: 12473.0 } -> { _id: 12935.0 } m30001| Fri Feb 22 12:29:10.019 [cleanupOldData-5127649499334798f3e47e50] moveChunk deleted 462 documents for test.bar from { _id: 12011.0 } -> { _id: 12473.0 } m30001| Fri Feb 22 12:29:10.019 [cleanupOldData-5127649599334798f3e47e5a] moveChunk starting delete for: test.bar from { _id: 12473.0 } -> { _id: 12935.0 } m30001| Fri Feb 22 12:29:10.437 [cleanupOldData-5127649599334798f3e47e5a] moveChunk deleted 462 documents for test.bar from { _id: 12473.0 } -> { _id: 12935.0 } m30001| Fri Feb 22 12:29:10.437 [cleanupOldData-5127649599334798f3e47e55] moveChunk starting delete for: test.foo from { _id: 25298.0 } -> { _id: 26235.0 } m30999| Fri Feb 22 12:29:10.888 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:10.889 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:10.889 [Balancer] about to acquire 
distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:29:10 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "51276496bd1f994466593679" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276495bd1f994466593678" } }
m30999| Fri Feb 22 12:29:10.890 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276496bd1f994466593679
m30999| Fri Feb 22 12:29:10.890 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:10.890 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:10.890 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:10.892 [Balancer] shard0001 has more chunks me:79 best: shard0000:28
m30999| Fri Feb 22 12:29:10.892 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:10.892 [Balancer] donor : shard0001 chunks on 79
m30999| Fri Feb 22 12:29:10.892 [Balancer] receiver : shard0000 chunks on 28
m30999| Fri Feb 22 12:29:10.892 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:10.892 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_26235.0", lastmod: Timestamp 29000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 26235.0 }, max: { _id: 27172.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:10.894 [Balancer] shard0001 has more chunks me:189 best: shard0000:28
m30999| Fri Feb 22 12:29:10.894 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:10.894 [Balancer] donor : shard0001 chunks on 189
m30999| Fri Feb 22 12:29:10.894 [Balancer] receiver : shard0000 chunks on 28
m30999| Fri Feb 22 12:29:10.894 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:10.894 [Balancer] ns: test.bar going to move { _id: 
"test.bar-_id_12935.0", lastmod: Timestamp 29000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 12935.0 }, max: { _id: 13397.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:10.894 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 29|1||000000000000000000000000min: { _id: 26235.0 }max: { _id: 27172.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:10.894 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:10.894 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 26235.0 }, max: { _id: 27172.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_26235.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:10.895 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649699334798f3e47e5c m30001| Fri Feb 22 12:29:10.895 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-5127649699334798f3e47e5d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536150895), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 26235.0 }, max: { _id: 27172.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:10.896 [conn4] moveChunk request accepted at version 29|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:10.898 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:10.898 [migrateThread] starting receiving-end of migration of chunk { _id: 26235.0 } -> { _id: 27172.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:10.909 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { _id: 26235.0 }, max: { _id: 27172.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 188, clonedBytes: 99640, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:10.919 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 26235.0 }, max: { _id: 27172.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 493, clonedBytes: 261290, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:10.929 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 26235.0 }, max: { _id: 27172.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 790, clonedBytes: 418700, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:10.934 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:10.934 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 26235.0 } -> { _id: 27172.0 } m30000| Fri Feb 22 12:29:10.936 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 26235.0 } -> { _id: 27172.0 } m30001| Fri Feb 22 12:29:10.939 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 26235.0 }, max: { _id: 27172.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:10.939 [conn4] moveChunk setting version to: 30|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:10.939 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:10.941 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 220 version: 29|1||51276475bd1f99446659365b based on: 29|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:10.941 [conn1] warning: chunk manager reload 
forced for collection 'test.foo', config version is 29|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:10.941 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 29000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 220 m30001| Fri Feb 22 12:29:10.942 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:10.947 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 26235.0 } -> { _id: 27172.0 } m30000| Fri Feb 22 12:29:10.947 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 26235.0 } -> { _id: 27172.0 } m30000| Fri Feb 22 12:29:10.947 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-51276496c49297cf54df560c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536150947), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 26235.0 }, max: { _id: 27172.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 35, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:10.949 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 26235.0 }, max: { _id: 27172.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:10.950 [conn4] moveChunk updating self version to: 30|1||51276475bd1f99446659365b through { _id: 27172.0 } -> { _id: 28109.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:10.950 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-5127649699334798f3e47e5e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536150950), what: "moveChunk.commit", ns: 
"test.foo", details: { min: { _id: 26235.0 }, max: { _id: 27172.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:10.950 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:10.950 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:10.950 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 29000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 29000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 30000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:10.950 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:10.950 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:10.950 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:10.950 [cleanupOldData-5127649699334798f3e47e5f] (start) waiting to cleanup test.foo from { _id: 26235.0 } -> { _id: 27172.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:10.951 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
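The repeated `moveChunk data transfer progress` entries step through the migration states visible above: `clone` while documents are copied, `catchup` while queued modifications drain, `steady` once the recipient is caught up, after which the donor enters the critical section, sets the new version, and the recipient reports `done`. A hedged sketch of that donor-side progression (illustrative names and simplifications, not the mongod implementation):

```python
# Sketch of the state sequence polled by the donor in the progress lines:
# clone -> catchup -> steady -> done.

def next_state(state, cloned, total, mods_pending):
    """Advance the migration state based on transfer progress.

    cloned/total: documents copied so far vs. chunk document count.
    mods_pending: whether writes made during cloning still need replay.
    """
    if state == "clone":
        return "clone" if cloned < total else "catchup"
    if state == "catchup":
        return "catchup" if mods_pending else "steady"
    if state == "steady":
        # Donor enters the critical section, commits, bumps the version.
        return "done"
    return "done"

# Mirrors the test.foo migration above: 937 documents, no pending mods.
states = ["clone"]
for cloned, mods in [(645, True), (937, True), (937, False), (937, False)]:
    states.append(next_state(states[-1], cloned, 937, mods))
```

The per-step timings in the `moveChunk.from` metadata events (step1 through step6) roughly correspond to these phases plus lock acquisition and cleanup.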
m30001| Fri Feb 22 12:29:10.951 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-5127649699334798f3e47e60", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536150951), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 26235.0 }, max: { _id: 27172.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:10.951 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:10.952 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 221 version: 30|1||51276475bd1f99446659365b based on: 29|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:10.953 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 222 version: 30|1||51276475bd1f99446659365b based on: 29|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:10.953 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 29|1||000000000000000000000000min: { _id: 12935.0 }max: { _id: 13397.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:10.953 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:10.953 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 12935.0 }, max: { _id: 13397.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_12935.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:10.954 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 223 version: 30|1||51276475bd1f99446659365b based on: 30|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:10.954 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649699334798f3e47e61
m30001| Fri Feb 22 12:29:10.954 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-5127649699334798f3e47e62", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536150954), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 12935.0 }, max: { _id: 13397.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:10.954 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 30|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:10.954 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 30000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 223
m30999| Fri Feb 22 12:29:10.954 [conn1] setShardVersion success: { oldVersion: Timestamp 29000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:10.955 [conn4] moveChunk request accepted at version 29|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:10.956 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:10.956 [migrateThread] starting receiving-end of migration of chunk { _id: 12935.0 } -> { _id: 13397.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:10.966 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12935.0 }, max: { _id: 13397.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 188, clonedBytes: 196084, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
30000
m30001| Fri Feb 22 12:29:10.971 [cleanupOldData-5127649699334798f3e47e5f] waiting to remove documents for test.foo from { _id: 26235.0 } -> { _id: 27172.0 }
m30001| Fri Feb 22 12:29:10.976 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12935.0 }, max: { _id: 13397.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 450, clonedBytes: 469350, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:10.977 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:10.977 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 12935.0 } -> { _id: 13397.0 }
m30000| Fri Feb 22 12:29:10.979 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 12935.0 } -> { _id: 13397.0 }
m30001| Fri Feb 22 12:29:10.987 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 12935.0 }, max: { _id: 13397.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:10.987 [conn4] moveChunk setting version to: 30|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:10.987 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:10.989 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 224 version: 29|1||51276475bd1f99446659365c based on: 29|1||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:10.989 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 12935.0 } -> { _id: 13397.0 }
m30000| Fri Feb 22 12:29:10.989 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 12935.0 } -> { _id: 13397.0 }
m30999| Fri Feb 22 12:29:10.989 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 29|1||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:10.989 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-51276496c49297cf54df560d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536150989), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 12935.0 }, max: { _id: 13397.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:10.989 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 29000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 224
m30001| Fri Feb 22 12:29:10.990 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:10.997 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 12935.0 }, max: { _id: 13397.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:10.997 [conn4] moveChunk updating self version to: 30|1||51276475bd1f99446659365c through { _id: 13397.0 } -> { _id: 13859.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:10.998 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-5127649699334798f3e47e63", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536150998), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 12935.0 }, max: { _id: 13397.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:10.998 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:10.998 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:10.998 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:10.998 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 29000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 29000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 30000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:10.998 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:10.998 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:10.998 [cleanupOldData-5127649699334798f3e47e64] (start) waiting to cleanup test.bar from { _id: 12935.0 } -> { _id: 13397.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:10.999 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:10.999 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:10-5127649699334798f3e47e65", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536150999), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 12935.0 }, max: { _id: 13397.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:10.999 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:11.000 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 225 version: 30|1||51276475bd1f99446659365c based on: 29|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:11.002 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 226 version: 30|1||51276475bd1f99446659365c based on: 29|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:11.002 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:11.003 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:29:11.003 [cleanupOldData-5127649599334798f3e47e55] moveChunk deleted 937 documents for test.foo from { _id: 25298.0 } -> { _id: 26235.0 }
m30001| Fri Feb 22 12:29:11.003 [cleanupOldData-5127649699334798f3e47e5f] moveChunk starting delete for: test.foo from { _id: 26235.0 } -> { _id: 27172.0 }
m30999| Fri Feb 22 12:29:11.004 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 227 version: 30|1||51276475bd1f99446659365c based on: 30|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:11.005 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 30|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:11.005 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 30000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 227
m30999| Fri Feb 22 12:29:11.005 [conn1] setShardVersion success: { oldVersion: Timestamp 29000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:11.018 [cleanupOldData-5127649699334798f3e47e64] waiting to remove documents for test.bar from { _id: 12935.0 } -> { _id: 13397.0 }
m30999| Fri Feb 22 12:29:12.003 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:12.004 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:12.004 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:12 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276498bd1f99446659367a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276496bd1f994466593679" } }
m30999| Fri Feb 22 12:29:12.004 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276498bd1f99446659367a
m30999| Fri Feb 22 12:29:12.004 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:12.004 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:12.004 [Balancer] secondaryThrottle: 1
31000
m30999| Fri Feb 22 12:29:12.007 [Balancer] shard0001 has more chunks me:78 best: shard0000:29
m30999| Fri Feb 22 12:29:12.007 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:12.007 [Balancer] donor : shard0001 chunks on 78
m30999| Fri Feb 22 12:29:12.007 [Balancer] receiver : shard0000 chunks on 29
m30999| Fri Feb 22 12:29:12.007 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:12.007 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_27172.0", lastmod: Timestamp 30000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:12.009 [Balancer] shard0001 has more chunks me:188 best: shard0000:29
m30999| Fri Feb 22 12:29:12.009 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:12.009 [Balancer] donor : shard0001 chunks on 188
m30999| Fri Feb 22 12:29:12.009 [Balancer] receiver : shard0000 chunks on 29
m30999| Fri Feb 22 12:29:12.009 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:12.009 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_13397.0", lastmod: Timestamp 30000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 13397.0 }, max: { _id: 13859.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:12.010 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 30|1||000000000000000000000000min: { _id: 27172.0 }max: { _id: 28109.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:12.010 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:12.010 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 27172.0 }, max: { _id: 28109.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_27172.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:12.011 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649899334798f3e47e66
m30001| Fri Feb 22 12:29:12.011 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-5127649899334798f3e47e67", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536152011), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 27172.0 }, max: { _id: 28109.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:12.012 [conn4] moveChunk request accepted at version 30|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:12.014 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:12.014 [migrateThread] starting receiving-end of migration of chunk { _id: 27172.0 } -> { _id: 28109.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:12.024 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 110, clonedBytes: 58300, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:12.035 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 299, clonedBytes: 158470, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:12.045 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 493, clonedBytes: 261290, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:12.055 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 687, clonedBytes: 364110, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:12.068 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:12.068 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 27172.0 } -> { _id: 28109.0 }
m30001| Fri Feb 22 12:29:12.071 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:12.072 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 27172.0 } -> { _id: 28109.0 }
m30001| Fri Feb 22 12:29:12.103 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:12.104 [conn4] moveChunk setting version to: 31|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:12.104 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:12.106 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 228 version: 30|1||51276475bd1f99446659365b based on: 30|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:12.106 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 30|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:12.106 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 30000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 228
m30001| Fri Feb 22 12:29:12.106 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:12.113 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 27172.0 } -> { _id: 28109.0 }
m30000| Fri Feb 22 12:29:12.113 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 27172.0 } -> { _id: 28109.0 }
m30000| Fri Feb 22 12:29:12.113 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-51276498c49297cf54df560e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536152113), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 27172.0 }, max: { _id: 28109.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 53, step4 of 5: 0, step5 of 5: 44 } }
m30001| Fri Feb 22 12:29:12.114 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 27172.0 }, max: { _id: 28109.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:12.114 [conn4] moveChunk updating self version to: 31|1||51276475bd1f99446659365b through { _id: 28109.0 } -> { _id: 29046.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:12.115 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-5127649899334798f3e47e68", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536152115), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 27172.0 }, max: { _id: 28109.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:12.115 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:12.115 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:12.115 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:12.115 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 30000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 30000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 31000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:12.115 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:12.115 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:12.115 [cleanupOldData-5127649899334798f3e47e69] (start) waiting to cleanup test.foo from { _id: 27172.0 } -> { _id: 28109.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:12.115 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:12.115 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-5127649899334798f3e47e6a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536152115), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 27172.0 }, max: { _id: 28109.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:29:12.115 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 27172.0 }, max: { _id: 28109.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_27172.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:32 r:2314 w:65 reslen:37 105ms
m30999| Fri Feb 22 12:29:12.115 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:12.116 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 229 version: 31|1||51276475bd1f99446659365b based on: 30|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:12.117 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 230 version: 31|1||51276475bd1f99446659365b based on: 30|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:12.117 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 30|1||000000000000000000000000min: { _id: 13397.0 }max: { _id: 13859.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:12.118 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:12.118 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 13397.0 }, max: { _id: 13859.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_13397.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:12.118 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 231 version: 31|1||51276475bd1f99446659365b based on: 31|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:12.118 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649899334798f3e47e6b
m30001| Fri Feb 22 12:29:12.118 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-5127649899334798f3e47e6c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536152118), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 13397.0 }, max: { _id: 13859.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:12.118 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 31|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:12.119 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 31000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 231
m30999| Fri Feb 22 12:29:12.119 [conn1] setShardVersion success: { oldVersion: Timestamp 30000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:12.119 [conn4] moveChunk request accepted at version 30|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:12.120 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:12.120 [migrateThread] starting receiving-end of migration of chunk { _id: 13397.0 } -> { _id: 13859.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:12.131 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 13397.0 }, max: { _id: 13859.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 191, clonedBytes: 199213, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:12.135 [cleanupOldData-5127649899334798f3e47e69] waiting to remove documents for test.foo from { _id: 27172.0 } -> { _id: 28109.0 }
m30001| Fri Feb 22 12:29:12.141 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 13397.0 }, max: { _id: 13859.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 457, clonedBytes: 476651, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:12.141 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:12.141 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 13397.0 } -> { _id: 13859.0 }
m30000| Fri Feb 22 12:29:12.143 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 13397.0 } -> { _id: 13859.0 }
m30001| Fri Feb 22 12:29:12.151 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 13397.0 }, max: { _id: 13859.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:12.151 [conn4] moveChunk setting version to: 31|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:12.151 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:12.153 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 13397.0 } -> { _id: 13859.0 }
m30000| Fri Feb 22 12:29:12.153 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 13397.0 } -> { _id: 13859.0 }
m30000| Fri Feb 22 12:29:12.153 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-51276498c49297cf54df560f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536152153), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 13397.0 }, max: { _id: 13859.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 11 } }
m30999| Fri Feb 22 12:29:12.153 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 232 version: 30|1||51276475bd1f99446659365c based on: 30|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:12.153 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 30|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:12.153 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 30000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 232
m30001| Fri Feb 22 12:29:12.153 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:12.157 [cleanupOldData-5127649699334798f3e47e5f] moveChunk deleted 937 documents for test.foo from { _id: 26235.0 } -> { _id: 27172.0 }
m30001| Fri Feb 22 12:29:12.157 [cleanupOldData-5127649899334798f3e47e69] moveChunk starting delete for: test.foo from { _id: 27172.0 } -> { _id: 28109.0 }
m30001| Fri Feb 22 12:29:12.161 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 13397.0 }, max: { _id: 13859.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:12.161 [conn4] moveChunk updating self version to: 31|1||51276475bd1f99446659365c through { _id: 13859.0 } -> { _id: 14321.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:12.162 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-5127649899334798f3e47e6d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536152162), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 13397.0 }, max: { _id: 13859.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:12.162 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:12.162 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:12.162 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:12.162 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 30000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 30000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 31000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:12.162 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:12.162 [cleanupOldData-5127649899334798f3e47e6e] (start) waiting to cleanup test.bar from { _id: 13397.0 } -> { _id: 13859.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:12.162 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:12.162 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:12.162 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:12-5127649899334798f3e47e6f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536152162), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 13397.0 }, max: { _id: 13859.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:12.163 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:12.164 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 233 version: 31|1||51276475bd1f99446659365c based on: 30|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:12.165 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 234 version: 31|1||51276475bd1f99446659365c based on: 30|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:12.166 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:12.166 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:12.167 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 235 version: 31|1||51276475bd1f99446659365c based on: 31|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:12.168 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 31|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:12.168 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 31000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 235
m30999| Fri Feb 22 12:29:12.168 [conn1] setShardVersion success: { oldVersion: Timestamp 30000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:12.182 [cleanupOldData-5127649899334798f3e47e6e] waiting to remove documents for test.bar from { _id: 13397.0 } -> { _id: 13859.0 }
m30999| Fri Feb 22 12:29:13.167 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:13.167 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:13.168 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:13 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "51276499bd1f99446659367b" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "51276498bd1f99446659367a" } }
32000
m30999| Fri Feb 22 12:29:13.168 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276499bd1f99446659367b
m30999| Fri Feb 22 12:29:13.168 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:13.168 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:13.168 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:13.170 [Balancer] shard0001 has more chunks me:77 best: shard0000:30
m30999| Fri Feb 22 12:29:13.170 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:13.170 [Balancer] donor : shard0001 chunks on 77
m30999| Fri Feb 22 12:29:13.171 [Balancer] receiver : shard0000 chunks on 30
m30999| Fri Feb 22 12:29:13.171 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:13.171 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_28109.0", lastmod: Timestamp 31000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:13.173 [Balancer] shard0001 has more chunks me:187 best: shard0000:30
m30999| Fri Feb 22 12:29:13.173 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:13.173 [Balancer] donor : shard0001 chunks on 187
m30999| Fri Feb 22 12:29:13.173 [Balancer] receiver : shard0000 chunks on 30
m30999| Fri Feb 22 12:29:13.173 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:13.173 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_13859.0", lastmod: Timestamp 31000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 13859.0 }, max: { _id: 14321.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:13.173 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 31|1||000000000000000000000000min: { _id: 28109.0 }max: { _id: 29046.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:13.173 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:13.174 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 28109.0 }, max: { _id: 29046.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_28109.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:13.175 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649999334798f3e47e70
m30001| Fri Feb 22 12:29:13.175 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-5127649999334798f3e47e71", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536153175), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 28109.0 }, max: { _id: 29046.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:13.176 [conn4] moveChunk request accepted at version 31|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:13.179 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:13.179 [migrateThread] starting receiving-end of migration of chunk { _id: 28109.0 } -> { _id: 29046.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:13.189 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 123, clonedBytes: 65190, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:13.199 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 324, clonedBytes: 171720, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:13.210 [conn4] moveChunk data transfer progress:
{ active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 524, clonedBytes: 277720, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:13.220 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 729, clonedBytes: 386370, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:13.230 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:13.230 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 28109.0 } -> { _id: 29046.0 } m30000| Fri Feb 22 12:29:13.234 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 28109.0 } -> { _id: 29046.0 } m30001| Fri Feb 22 12:29:13.236 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:13.236 [conn4] moveChunk setting version to: 32|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:13.236 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:13.238 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 236 version: 31|1||51276475bd1f99446659365b based on: 31|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:13.238 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 31|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:13.238 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 31000|1, versionEpoch: 
ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 236 m30001| Fri Feb 22 12:29:13.238 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:13.244 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 28109.0 } -> { _id: 29046.0 } m30000| Fri Feb 22 12:29:13.244 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 28109.0 } -> { _id: 29046.0 } m30000| Fri Feb 22 12:29:13.244 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-51276499c49297cf54df5610", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536153244), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 28109.0 }, max: { _id: 29046.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:13.246 [cleanupOldData-5127649899334798f3e47e69] moveChunk deleted 937 documents for test.foo from { _id: 27172.0 } -> { _id: 28109.0 } m30001| Fri Feb 22 12:29:13.246 [cleanupOldData-5127649899334798f3e47e6e] moveChunk starting delete for: test.bar from { _id: 13397.0 } -> { _id: 13859.0 } m30001| Fri Feb 22 12:29:13.247 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 28109.0 }, max: { _id: 29046.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:13.247 [conn4] moveChunk updating self version to: 32|1||51276475bd1f99446659365b through { _id: 29046.0 } -> { _id: 29983.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:13.247 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-5127649999334798f3e47e72", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new 
Date(1361536153247), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 28109.0 }, max: { _id: 29046.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:13.247 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:13.247 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:13.247 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:13.247 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 31000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 31000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 32000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:13.247 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:13.248 [cleanupOldData-5127649999334798f3e47e73] (start) waiting to cleanup test.foo from { _id: 28109.0 } -> { _id: 29046.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:13.248 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:13.248 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:13.248 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-5127649999334798f3e47e74", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536153248), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 28109.0 }, max: { _id: 29046.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:13.248 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:13.249 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 237 version: 32|1||51276475bd1f99446659365b based on: 31|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:13.250 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 238 version: 32|1||51276475bd1f99446659365b based on: 31|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:13.251 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 31|1||000000000000000000000000min: { _id: 13859.0 }max: { _id: 14321.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:13.251 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:13.251 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 13859.0 }, max: { _id: 14321.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_13859.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:13.252 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 239 version: 32|1||51276475bd1f99446659365b based on: 32|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:13.252 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 32|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:29:13.252 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 32000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 239 m30999| Fri Feb 22 12:29:13.253 [conn1] setShardVersion success: { oldVersion: Timestamp 31000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:13.253 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649999334798f3e47e75 m30001| Fri Feb 22 12:29:13.253 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-5127649999334798f3e47e76", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536153253), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 13859.0 }, max: { _id: 14321.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:13.254 [conn4] moveChunk request accepted at version 31|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:13.255 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:13.256 [migrateThread] starting receiving-end of migration of chunk { _id: 13859.0 } -> { _id: 14321.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:13.266 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 13859.0 }, max: { _id: 14321.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 127, clonedBytes: 132461, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:13.268 [cleanupOldData-5127649999334798f3e47e73] waiting to remove documents for test.foo from { _id: 28109.0 } -> { _id: 29046.0 } m30001| Fri Feb 22 12:29:13.276 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", 
from: "localhost:30001", min: { _id: 13859.0 }, max: { _id: 14321.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 307, clonedBytes: 320201, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:13.285 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:13.285 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 13859.0 } -> { _id: 14321.0 } m30001| Fri Feb 22 12:29:13.286 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 13859.0 }, max: { _id: 14321.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:13.287 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 13859.0 } -> { _id: 14321.0 } m30001| Fri Feb 22 12:29:13.296 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 13859.0 }, max: { _id: 14321.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:13.297 [conn4] moveChunk setting version to: 32|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:13.297 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:13.297 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 13859.0 } -> { _id: 14321.0 } m30000| Fri Feb 22 12:29:13.297 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 13859.0 } -> { _id: 14321.0 } m30000| Fri Feb 22 12:29:13.297 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-51276499c49297cf54df5611", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536153297), what: "moveChunk.to", ns: "test.bar", 
details: { min: { _id: 13859.0 }, max: { _id: 14321.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:13.299 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 240 version: 31|1||51276475bd1f99446659365c based on: 31|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:13.299 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 31|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:13.299 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 31000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 240 m30001| Fri Feb 22 12:29:13.299 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:13.307 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 13859.0 }, max: { _id: 14321.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:13.307 [conn4] moveChunk updating self version to: 32|1||51276475bd1f99446659365c through { _id: 14321.0 } -> { _id: 14783.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:13.308 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-5127649999334798f3e47e77", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536153308), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 13859.0 }, max: { _id: 14321.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:13.308 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:13.308 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 31000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 31000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 32000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:13.308 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:13.308 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:13.308 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:13.308 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:13.308 [cleanupOldData-5127649999334798f3e47e78] (start) waiting to cleanup test.bar from { _id: 13859.0 } -> { _id: 14321.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:13.308 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:13.309 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:13-5127649999334798f3e47e79", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536153308), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 13859.0 }, max: { _id: 14321.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:13.309 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:13.310 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 241 version: 32|1||51276475bd1f99446659365c based on: 31|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:13.312 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 242 version: 32|1||51276475bd1f99446659365c based on: 31|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:13.313 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:13.313 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:13.314 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 243 version: 32|1||51276475bd1f99446659365c based on: 32|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:13.315 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 32|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:13.315 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 32000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 243 m30999| Fri Feb 22 12:29:13.315 [conn1] setShardVersion success: { oldVersion: Timestamp 31000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:13.328 [cleanupOldData-5127649999334798f3e47e78] waiting to remove documents for test.bar from { _id: 13859.0 } -> { _id: 14321.0 } m30001| Fri Feb 22 12:29:13.663 [cleanupOldData-5127649899334798f3e47e6e] moveChunk deleted 462 documents for test.bar from { _id: 13397.0 } -> { _id: 13859.0 } m30001| Fri Feb 22 12:29:13.663 [cleanupOldData-5127649999334798f3e47e78] moveChunk starting delete for: test.bar from { _id: 13859.0 } -> { _id: 14321.0 } m30001| Fri Feb 22 12:29:14.036 [cleanupOldData-5127649999334798f3e47e78] moveChunk deleted 462 documents for test.bar from { _id: 13859.0 } -> { _id: 14321.0 } m30001| Fri Feb 22 12:29:14.036 [cleanupOldData-5127649999334798f3e47e73] moveChunk starting delete for: test.foo from { _id: 28109.0 } -> { _id: 29046.0 } 33000 m30999| Fri Feb 22 12:29:14.314 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:14.314 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:14.314 [Balancer] about to 
acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:14 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127649abd1f99446659367c" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276499bd1f99446659367b" } } m30999| Fri Feb 22 12:29:14.315 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127649abd1f99446659367c m30999| Fri Feb 22 12:29:14.315 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:14.315 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:14.315 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:14.317 [Balancer] shard0001 has more chunks me:76 best: shard0000:31 m30999| Fri Feb 22 12:29:14.317 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:14.317 [Balancer] donor : shard0001 chunks on 76 m30999| Fri Feb 22 12:29:14.317 [Balancer] receiver : shard0000 chunks on 31 m30999| Fri Feb 22 12:29:14.317 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:14.317 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_29046.0", lastmod: Timestamp 32000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:14.319 [Balancer] shard0001 has more chunks me:186 best: shard0000:31 m30999| Fri Feb 22 12:29:14.319 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:14.319 [Balancer] donor : shard0001 chunks on 186 m30999| Fri Feb 22 12:29:14.319 [Balancer] receiver : shard0000 chunks on 31 m30999| Fri Feb 22 12:29:14.319 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:14.319 [Balancer] ns: test.bar going to move { 
_id: "test.bar-_id_14321.0", lastmod: Timestamp 32000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 14321.0 }, max: { _id: 14783.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:14.319 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 32|1||000000000000000000000000min: { _id: 29046.0 }max: { _id: 29983.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:14.320 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:14.320 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 29046.0 }, max: { _id: 29983.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_29046.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:14.321 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649a99334798f3e47e7a m30001| Fri Feb 22 12:29:14.321 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649a99334798f3e47e7b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536154321), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 29046.0 }, max: { _id: 29983.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:14.321 [conn4] moveChunk request accepted at version 32|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:14.324 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:14.324 [migrateThread] starting receiving-end of migration of chunk { _id: 29046.0 } -> { _id: 29983.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:14.334 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 114, clonedBytes: 60420, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:14.344 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 318, clonedBytes: 168540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:14.354 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 502, clonedBytes: 266060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:14.365 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 631, clonedBytes: 334430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:14.380 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:14.380 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 29046.0 } -> { _id: 29983.0 } m30001| Fri Feb 22 12:29:14.381 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:14.382 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 29046.0 } -> { _id: 29983.0 } m30001| Fri Feb 22 12:29:14.413 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:14.413 [conn4] moveChunk setting version to: 33|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:14.413 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:14.415 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 244 version: 32|1||51276475bd1f99446659365b based on: 32|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:14.415 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 32|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:14.416 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 32000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 244 m30001| Fri Feb 22 12:29:14.416 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:14.423 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 29046.0 } -> { _id: 29983.0 } m30000| Fri Feb 22 12:29:14.423 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 29046.0 } -> { _id: 29983.0 } m30000| Fri Feb 22 12:29:14.423 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649ac49297cf54df5612", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536154423), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 29046.0 }, max: { _id: 29983.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 55, step4 of 5: 0, step5 of 5: 42 } } m30001| Fri Feb 22 12:29:14.423 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: 
"localhost:30001", min: { _id: 29046.0 }, max: { _id: 29983.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:14.423 [conn4] moveChunk updating self version to: 33|1||51276475bd1f99446659365b through { _id: 29983.0 } -> { _id: 30920.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:14.424 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649a99334798f3e47e7c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536154424), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 29046.0 }, max: { _id: 29983.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:14.424 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:14.424 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:14.424 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:29:14.424 [conn4] forking for cleanup of chunk data
m30999| { oldVersion: Timestamp 32000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 32000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 33000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:14.424 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:14.424 [cleanupOldData-5127649a99334798f3e47e7d] (start) waiting to cleanup test.foo from { _id: 29046.0 } -> { _id: 29983.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:14.424 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:14.425 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:14.425 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649a99334798f3e47e7e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536154425), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 29046.0 }, max: { _id: 29983.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:29:14.425 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 29046.0 }, max: { _id: 29983.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_29046.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:2044 w:44 reslen:37 105ms
m30999| Fri Feb 22 12:29:14.425 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:14.426 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 245 version: 33|1||51276475bd1f99446659365b based on: 32|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:14.427 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 246 version: 33|1||51276475bd1f99446659365b based on: 32|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:14.428 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 32|1||000000000000000000000000min: { _id: 14321.0 }max: { _id: 14783.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:14.428 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:14.428 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 14321.0 }, max: { _id: 14783.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_14321.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:14.428 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 247 version: 33|1||51276475bd1f99446659365b based on: 33|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:14.429 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649a99334798f3e47e7f
m30001| Fri Feb 22 12:29:14.429 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649a99334798f3e47e80", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536154429), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 14321.0 }, max: { _id: 14783.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:14.429 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 33|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:14.429 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 33000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 247
m30999| Fri Feb 22 12:29:14.429 [conn1] setShardVersion success: { oldVersion: Timestamp 32000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:14.429 [conn4] moveChunk request accepted at version 32|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:14.430 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:14.431 [migrateThread] starting receiving-end of migration of chunk { _id: 14321.0 } -> { _id: 14783.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:14.441 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14321.0 }, max: { _id: 14783.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 118, clonedBytes: 123074, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:14.445 [cleanupOldData-5127649a99334798f3e47e7d] waiting to remove documents for test.foo from { _id: 29046.0 } -> { _id: 29983.0 }
m30001| Fri Feb 22 12:29:14.451 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14321.0 }, max: { _id: 14783.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 296, clonedBytes: 308728, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:14.461 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:14.461 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 14321.0 } -> { _id: 14783.0 }
m30001| Fri Feb 22 12:29:14.461 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14321.0 }, max: { _id: 14783.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:14.463 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 14321.0 } -> { _id: 14783.0 }
m30001| Fri Feb 22 12:29:14.471 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14321.0 }, max: { _id: 14783.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:14.472 [conn4] moveChunk setting version to: 33|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:14.472 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:14.474 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 14321.0 } -> { _id: 14783.0 }
m30000| Fri Feb 22 12:29:14.474 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 14321.0 } -> { _id: 14783.0 }
m30000| Fri Feb 22 12:29:14.474 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649ac49297cf54df5613", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536154474), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 14321.0 }, max: { _id: 14783.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } }
m30999| Fri Feb 22 12:29:14.474 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 248 version: 32|1||51276475bd1f99446659365c based on: 32|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:14.474 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 32|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:14.474 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 32000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 248
m30001| Fri Feb 22 12:29:14.477 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:14.482 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 14321.0 }, max: { _id: 14783.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:14.482 [conn4] moveChunk updating self version to: 33|1||51276475bd1f99446659365c through { _id: 14783.0 } -> { _id: 15245.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:14.483 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649a99334798f3e47e81", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536154483), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 14321.0 }, max: { _id: 14783.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:14.483 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:14.483 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:14.483 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 32000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 32000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 33000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:14.483 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:14.483 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:14.483 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:14.483 [cleanupOldData-5127649a99334798f3e47e82] (start) waiting to cleanup test.bar from { _id: 14321.0 } -> { _id: 14783.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:14.483 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:14.483 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:14-5127649a99334798f3e47e83", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536154483), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 14321.0 }, max: { _id: 14783.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:14.483 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:14.485 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 249 version: 33|1||51276475bd1f99446659365c based on: 32|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:14.486 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 250 version: 33|1||51276475bd1f99446659365c based on: 32|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:14.487 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:14.487 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:14.489 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 251 version: 33|1||51276475bd1f99446659365c based on: 33|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:14.489 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 33|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:14.489 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 33000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 251
m30999| Fri Feb 22 12:29:14.489 [conn1] setShardVersion success: { oldVersion: Timestamp 32000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:14.503 [cleanupOldData-5127649a99334798f3e47e82] waiting to remove documents for test.bar from { _id: 14321.0 } -> { _id: 14783.0 }
m30001| Fri Feb 22 12:29:15.174 [cleanupOldData-5127649999334798f3e47e73] moveChunk deleted 937 documents for test.foo from { _id: 28109.0 } -> { _id: 29046.0 }
m30001| Fri Feb 22 12:29:15.174 [cleanupOldData-5127649a99334798f3e47e82] moveChunk starting delete for: test.bar from { _id: 14321.0 } -> { _id: 14783.0 }
34000
m30999| Fri Feb 22 12:29:15.488 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:15.488 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:15.488 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:15 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127649bbd1f99446659367d" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127649abd1f99446659367c" } }
m30999| Fri Feb 22 12:29:15.489 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127649bbd1f99446659367d
m30999| Fri Feb 22 12:29:15.489 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:15.489 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:15.489 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:15.491 [Balancer] shard0001 has more chunks me:75 best: shard0000:32
m30999| Fri Feb 22 12:29:15.491 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:15.491 [Balancer] donor : shard0001 chunks on 75
m30999| Fri Feb 22 12:29:15.491 [Balancer] receiver : shard0000 chunks on 32
m30999| Fri Feb 22 12:29:15.492 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:15.492 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_29983.0", lastmod: Timestamp 33000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 29983.0 }, max: { _id: 30920.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:15.493 [Balancer] shard0001 has more chunks me:185 best: shard0000:32
m30999| Fri Feb 22 12:29:15.493 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:15.493 [Balancer] donor : shard0001 chunks on 185
m30999| Fri Feb 22 12:29:15.493 [Balancer] receiver : shard0000 chunks on 32
m30999| Fri Feb 22 12:29:15.493 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:15.493 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_14783.0", lastmod: Timestamp 33000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 14783.0 }, max: { _id: 15245.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:15.494 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 33|1||000000000000000000000000min: { _id: 29983.0 }max: { _id: 30920.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:15.494 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:15.494 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 29983.0 }, max: { _id: 30920.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_29983.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:15.495 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649b99334798f3e47e84
m30001| Fri Feb 22 12:29:15.495 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649b99334798f3e47e85", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536155495), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 29983.0 }, max: { _id: 30920.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:15.496 [conn4] moveChunk request accepted at version 33|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:15.498 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:15.498 [migrateThread] starting receiving-end of migration of chunk { _id: 29983.0 } -> { _id: 30920.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:15.508 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29983.0 }, max: { _id: 30920.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 189, clonedBytes: 100170, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:15.519 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29983.0 }, max: { _id: 30920.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 496, clonedBytes: 262880, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:15.529 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29983.0 }, max: { _id: 30920.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 802, clonedBytes: 425060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:15.534 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:15.534 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 29983.0 } -> { _id: 30920.0 }
m30000| Fri Feb 22 12:29:15.536 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 29983.0 } -> { _id: 30920.0 }
m30001| Fri Feb 22 12:29:15.539 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 29983.0 }, max: { _id: 30920.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:15.539 [conn4] moveChunk setting version to: 34|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:15.539 [conn11] Waiting for commit to finish
m30001| Fri Feb 22 12:29:15.541 [cleanupOldData-5127649a99334798f3e47e82] moveChunk deleted 462 documents for test.bar from { _id: 14321.0 } -> { _id: 14783.0 }
m30001| Fri Feb 22 12:29:15.541 [cleanupOldData-5127649a99334798f3e47e7d] moveChunk starting delete for: test.foo from { _id: 29046.0 } -> { _id: 29983.0 }
m30999| Fri Feb 22 12:29:15.541 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 252 version: 33|1||51276475bd1f99446659365b based on: 33|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:15.541 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 33|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:15.542 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 33000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 252
m30001| Fri Feb 22 12:29:15.542 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:15.546 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 29983.0 } -> { _id: 30920.0 }
m30000| Fri Feb 22 12:29:15.546 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 29983.0 } -> { _id: 30920.0 }
m30000| Fri Feb 22 12:29:15.546 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649bc49297cf54df5614", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536155546), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 29983.0 }, max: { _id: 30920.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 34, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:29:15.549 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 29983.0 }, max: { _id: 30920.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:15.549 [conn4] moveChunk updating self version to: 34|1||51276475bd1f99446659365b through { _id: 30920.0 } -> { _id: 31857.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:15.550 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649b99334798f3e47e86", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536155550), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 29983.0 }, max: { _id: 30920.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:15.550 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:15.550 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:15.550 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:15.550 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 33000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 33000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 34000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:15.550 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:15.550 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:15.550 [cleanupOldData-5127649b99334798f3e47e87] (start) waiting to cleanup test.foo from { _id: 29983.0 } -> { _id: 30920.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:15.551 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:15.551 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649b99334798f3e47e88", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536155551), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 29983.0 }, max: { _id: 30920.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:15.551 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:15.552 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 253 version: 34|1||51276475bd1f99446659365b based on: 33|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:15.553 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 254 version: 34|1||51276475bd1f99446659365b based on: 33|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:15.553 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 33|1||000000000000000000000000min: { _id: 14783.0 }max: { _id: 15245.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:15.553 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:15.553 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 14783.0 }, max: { _id: 15245.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_14783.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:15.554 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 255 version: 34|1||51276475bd1f99446659365b based on: 34|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:15.554 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649b99334798f3e47e89
m30001| Fri Feb 22 12:29:15.554 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649b99334798f3e47e8a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536155554), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 14783.0 }, max: { _id: 15245.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:15.554 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 34|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:15.554 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 34000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 255
m30999| Fri Feb 22 12:29:15.555 [conn1] setShardVersion success: { oldVersion: Timestamp 33000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:15.555 [conn4] moveChunk request accepted at version 33|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:15.556 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:15.556 [migrateThread] starting receiving-end of migration of chunk { _id: 14783.0 } -> { _id: 15245.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:15.566 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14783.0 }, max: { _id: 15245.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 122, clonedBytes: 127246, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:15.570 [cleanupOldData-5127649b99334798f3e47e87] waiting to remove documents for test.foo from { _id: 29983.0 } -> { _id: 30920.0 }
m30001| Fri Feb 22 12:29:15.576 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14783.0 }, max: { _id: 15245.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 299, clonedBytes: 311857, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:15.586 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:15.586 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 14783.0 } -> { _id: 15245.0 }
m30001| Fri Feb 22 12:29:15.587 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14783.0 }, max: { _id: 15245.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:15.589 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 14783.0 } -> { _id: 15245.0 }
m30001| Fri Feb 22 12:29:15.597 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 14783.0 }, max: { _id: 15245.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:15.597 [conn4] moveChunk setting version to: 34|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:15.597 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:15.599 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 14783.0 } -> { _id: 15245.0 }
m30000| Fri Feb 22 12:29:15.599 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 14783.0 } -> { _id: 15245.0 }
m30000| Fri Feb 22 12:29:15.599 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649bc49297cf54df5615", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536155599), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 14783.0 }, max: { _id: 15245.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } }
m30999| Fri Feb 22 12:29:15.600 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 256 version: 33|1||51276475bd1f99446659365c based on: 33|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:15.600 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 33|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:15.600 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 33000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 256
m30001| Fri Feb 22 12:29:15.600 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:15.607 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 14783.0 }, max: { _id: 15245.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:15.607 [conn4] moveChunk updating self version to: 34|1||51276475bd1f99446659365c through { _id: 15245.0 } -> { _id: 15707.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:15.608 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649b99334798f3e47e8b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536155608), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 14783.0 }, max: { _id: 15245.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:15.608 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:15.608 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:15.608 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:15.608 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 33000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 33000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 34000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:15.608 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:15.608 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:15.608 [cleanupOldData-5127649b99334798f3e47e8c] (start) waiting to cleanup test.bar from { _id: 14783.0 } -> { _id: 15245.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:15.608 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:15.609 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:15-5127649b99334798f3e47e8d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536155608), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 14783.0 }, max: { _id: 15245.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:15.609 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:15.610 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 257 version: 34|1||51276475bd1f99446659365c based on: 33|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:15.611 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 258 version: 34|1||51276475bd1f99446659365c based on: 33|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:15.612 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:15.612 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:15.613 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 259 version: 34|1||51276475bd1f99446659365c based on: 34|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:15.614 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 34|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:15.614 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 34000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 259
m30999| Fri Feb 22 12:29:15.614 [conn1] setShardVersion success: { oldVersion: Timestamp 33000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:15.628 [cleanupOldData-5127649b99334798f3e47e8c] waiting to remove documents for test.bar from { _id: 14783.0 } -> { _id: 15245.0 }
35000
m30001| Fri Feb 22 12:29:16.482 [cleanupOldData-5127649a99334798f3e47e7d] moveChunk deleted 937 documents for test.foo from { _id: 29046.0 } -> { _id: 29983.0 }
m30001| Fri Feb 22 12:29:16.482 [cleanupOldData-5127649b99334798f3e47e87] moveChunk starting delete for: test.foo from { _id: 29983.0 } -> { _id: 30920.0 }
m30999| Fri Feb 22 12:29:16.613 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:16.613 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:16.614 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:16 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127649cbd1f99446659367e" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "5127649bbd1f99446659367d" } }
m30999| Fri Feb 22 12:29:16.614 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127649cbd1f99446659367e
m30999| Fri Feb 22 12:29:16.614 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:16.614 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:16.614 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:16.616 [Balancer] shard0001 has more chunks me:74 best: shard0000:33
m30999| Fri Feb 22 12:29:16.616 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:16.616 [Balancer] donor : shard0001 chunks on 74
m30999| Fri Feb 22 12:29:16.616 [Balancer] receiver : shard0000 chunks on 33
m30999| Fri Feb 22 12:29:16.616 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:16.616 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_30920.0", lastmod: Timestamp 34000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 30920.0 }, max: { _id: 31857.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:16.618 [Balancer] shard0001 has more chunks me:184 best: shard0000:33
m30999| Fri Feb 22 12:29:16.618 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:16.618 [Balancer] donor : shard0001 chunks on 184
m30999| Fri Feb 22 12:29:16.618 [Balancer] receiver : shard0000 chunks on 33
m30999| Fri Feb 22 12:29:16.618 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:16.618 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_15245.0", lastmod: Timestamp 34000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 15245.0 }, max: { _id: 15707.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:16.618 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 34|1||000000000000000000000000min: { _id: 30920.0 }max: { _id: 31857.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:16.619 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:16.619 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 30920.0 }, max: { _id: 31857.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_30920.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:16.620 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649c99334798f3e47e8e
m30001| Fri Feb 22 12:29:16.620 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649c99334798f3e47e8f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536156620), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 30920.0 }, max: { _id: 31857.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:16.621 [conn4] moveChunk request accepted at version 34|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:16.624 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:16.624 [migrateThread] starting receiving-end of migration of chunk { _id: 30920.0 } -> { _id: 31857.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:16.634 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 30920.0 }, max: { _id: 31857.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 218, clonedBytes: 115540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:16.644 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 30920.0 }, max: { _id: 31857.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 530, clonedBytes: 280900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:16.655 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 30920.0 }, max: { _id: 31857.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 842, clonedBytes: 446260, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:16.658 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:16.658 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 30920.0 } -> { _id: 31857.0 }
m30000| Fri Feb 22 12:29:16.660 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 30920.0 } -> { _id: 31857.0 }
m30001| Fri Feb 22 12:29:16.665 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 30920.0 }, max: { _id: 31857.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:16.665 [conn4] moveChunk setting version to: 35|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:16.665 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:16.667 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 260 version: 34|1||51276475bd1f99446659365b based on: 34|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:16.667 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 34|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:16.667 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 34000|1, versionEpoch:
ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 260 m30001| Fri Feb 22 12:29:16.667 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:16.671 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 30920.0 } -> { _id: 31857.0 } m30000| Fri Feb 22 12:29:16.671 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 30920.0 } -> { _id: 31857.0 } m30000| Fri Feb 22 12:29:16.671 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649cc49297cf54df5616", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536156671), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 30920.0 }, max: { _id: 31857.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 33, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:16.675 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 30920.0 }, max: { _id: 31857.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:16.675 [conn4] moveChunk updating self version to: 35|1||51276475bd1f99446659365b through { _id: 31857.0 } -> { _id: 32794.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:16.676 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649c99334798f3e47e90", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536156676), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 30920.0 }, max: { _id: 31857.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:16.676 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:29:16.676 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 34000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 34000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 35000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:16.676 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:16.676 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:16.676 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:16.676 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:16.676 [cleanupOldData-5127649c99334798f3e47e91] (start) waiting to cleanup test.foo from { _id: 30920.0 } -> { _id: 31857.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:16.676 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:16.676 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649c99334798f3e47e92", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536156676), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 30920.0 }, max: { _id: 31857.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 3, step4 of 6: 41, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:16.677 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:16.677 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 261 version: 35|1||51276475bd1f99446659365b based on: 34|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:16.679 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 262 version: 35|1||51276475bd1f99446659365b based on: 34|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:16.679 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 34|1||000000000000000000000000min: { _id: 15245.0 }max: { _id: 15707.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:16.679 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:16.679 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 15245.0 }, max: { _id: 15707.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_15245.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:16.680 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 263 version: 35|1||51276475bd1f99446659365b based on: 35|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:16.680 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 35|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:29:16.680 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 35000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 263 m30999| Fri Feb 22 12:29:16.680 [conn1] setShardVersion success: { oldVersion: Timestamp 34000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:16.681 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649c99334798f3e47e93 m30001| Fri Feb 22 12:29:16.681 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649c99334798f3e47e94", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536156681), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 15245.0 }, max: { _id: 15707.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:16.682 [conn4] moveChunk request accepted at version 34|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:16.683 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:16.683 [migrateThread] starting receiving-end of migration of chunk { _id: 15245.0 } -> { _id: 15707.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:16.694 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 15245.0 }, max: { _id: 15707.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 210, clonedBytes: 219030, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:16.696 [cleanupOldData-5127649c99334798f3e47e91] waiting to remove documents for test.foo from { _id: 30920.0 } -> { _id: 31857.0 } m30000| Fri Feb 22 12:29:16.703 [migrateThread] Waiting for replication to catch up before entering 
critical section m30000| Fri Feb 22 12:29:16.703 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 15245.0 } -> { _id: 15707.0 } m30001| Fri Feb 22 12:29:16.704 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 15245.0 }, max: { _id: 15707.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:16.705 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 15245.0 } -> { _id: 15707.0 } m30001| Fri Feb 22 12:29:16.714 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 15245.0 }, max: { _id: 15707.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:16.714 [conn4] moveChunk setting version to: 35|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:16.714 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:16.716 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 15245.0 } -> { _id: 15707.0 } m30000| Fri Feb 22 12:29:16.716 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 15245.0 } -> { _id: 15707.0 } m30000| Fri Feb 22 12:29:16.716 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649cc49297cf54df5617", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536156716), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 15245.0 }, max: { _id: 15707.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 19, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:16.717 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 264 version: 34|1||51276475bd1f99446659365c based on: 
34|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:16.717 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 34|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:16.717 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 34000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 264 m30001| Fri Feb 22 12:29:16.717 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:16.724 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 15245.0 }, max: { _id: 15707.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:16.724 [conn4] moveChunk updating self version to: 35|1||51276475bd1f99446659365c through { _id: 15707.0 } -> { _id: 16169.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:16.725 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649c99334798f3e47e95", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536156725), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 15245.0 }, max: { _id: 15707.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:16.725 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:16.725 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 34000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 34000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 35000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:16.725 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:16.725 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:16.725 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:16.725 [cleanupOldData-5127649c99334798f3e47e96] (start) waiting to cleanup test.bar from { _id: 15245.0 } -> { _id: 15707.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:16.725 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:16.726 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:16.726 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:16-5127649c99334798f3e47e97", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536156726), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 15245.0 }, max: { _id: 15707.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:16.726 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:16.727 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 265 version: 35|1||51276475bd1f99446659365c based on: 34|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:16.729 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 266 version: 35|1||51276475bd1f99446659365c based on: 34|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:16.730 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:16.730 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:16.731 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 267 version: 35|1||51276475bd1f99446659365c based on: 35|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:16.731 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 35|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:16.731 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 35000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 267 m30999| Fri Feb 22 12:29:16.732 [conn1] setShardVersion success: { oldVersion: Timestamp 34000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:16.745 [cleanupOldData-5127649c99334798f3e47e96] waiting to remove documents for test.bar from { _id: 15245.0 } -> { _id: 15707.0 } m30001| Fri Feb 22 12:29:17.409 [cleanupOldData-5127649b99334798f3e47e87] moveChunk deleted 937 documents for test.foo from { _id: 29983.0 } -> { _id: 30920.0 } m30001| Fri Feb 22 12:29:17.410 [cleanupOldData-5127649c99334798f3e47e96] moveChunk starting delete for: test.bar from { _id: 15245.0 } -> { _id: 15707.0 } 36000 m30999| Fri Feb 22 12:29:17.731 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:17.731 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:17.732 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:17 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127649dbd1f99446659367f" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127649cbd1f99446659367e" } } m30999| Fri Feb 22 12:29:17.732 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127649dbd1f99446659367f m30999| Fri Feb 22 12:29:17.733 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:17.733 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:17.733 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:17.735 [Balancer] shard0001 has more chunks me:73 best: shard0000:34 m30999| Fri Feb 22 12:29:17.735 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:17.735 [Balancer] donor : shard0001 chunks on 73 m30999| Fri Feb 22 12:29:17.735 [Balancer] receiver : shard0000 chunks on 34 m30999| Fri Feb 22 12:29:17.735 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:17.735 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_31857.0", lastmod: Timestamp 35000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:17.737 [Balancer] shard0001 has more chunks me:183 best: shard0000:34 m30999| Fri Feb 22 12:29:17.737 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:17.737 [Balancer] donor : shard0001 chunks on 183 m30999| Fri Feb 22 12:29:17.737 [Balancer] receiver : shard0000 chunks on 34 m30999| Fri Feb 22 12:29:17.737 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:17.737 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_15707.0", lastmod: Timestamp 35000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 15707.0 }, max: { _id: 16169.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:17.737 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 35|1||000000000000000000000000min: { _id: 31857.0 }max: { _id: 32794.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:17.738 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:17.738 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 31857.0 }, max: { _id: 32794.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_31857.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:17.739 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649d99334798f3e47e98 m30001| Fri Feb 22 12:29:17.739 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649d99334798f3e47e99", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536157739), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 31857.0 }, max: { _id: 32794.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:17.740 [conn4] moveChunk request accepted at version 35|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:17.743 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:17.743 [migrateThread] starting receiving-end of migration of chunk { _id: 31857.0 } -> { _id: 32794.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:17.775 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:17.785 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", 
from: "localhost:30001", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 298, clonedBytes: 157940, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:17.796 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 605, clonedBytes: 320650, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:17.806 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 913, clonedBytes: 483890, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:17.807 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:17.807 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 31857.0 } -> { _id: 32794.0 } m30000| Fri Feb 22 12:29:17.809 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 31857.0 } -> { _id: 32794.0 } m30001| Fri Feb 22 12:29:17.822 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:17.822 [conn4] moveChunk setting version to: 36|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:17.822 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:17.824 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 268 version: 35|1||51276475bd1f99446659365b based on: 35|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:17.825 [conn1] warning: chunk manager 
reload forced for collection 'test.foo', config version is 35|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:17.825 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 35000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 268 m30001| Fri Feb 22 12:29:17.825 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:17.829 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 31857.0 } -> { _id: 32794.0 } m30000| Fri Feb 22 12:29:17.829 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 31857.0 } -> { _id: 32794.0 } m30000| Fri Feb 22 12:29:17.830 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649dc49297cf54df5618", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536157829), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 31857.0 }, max: { _id: 32794.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 62, step4 of 5: 0, step5 of 5: 22 } } m30001| Fri Feb 22 12:29:17.832 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 31857.0 }, max: { _id: 32794.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:17.832 [conn4] moveChunk updating self version to: 36|1||51276475bd1f99446659365b through { _id: 32794.0 } -> { _id: 33731.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:17.833 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649d99334798f3e47e9a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536157833), what: "moveChunk.commit", 
ns: "test.foo", details: { min: { _id: 31857.0 }, max: { _id: 32794.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:17.833 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:17.833 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 35000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 35000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 36000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:17.833 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:17.833 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:17.833 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:17.833 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:17.833 [cleanupOldData-5127649d99334798f3e47e9b] (start) waiting to cleanup test.foo from { _id: 31857.0 } -> { _id: 32794.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:17.834 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:17.834 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649d99334798f3e47e9c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536157834), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 31857.0 }, max: { _id: 32794.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 78, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:17.834 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:17.835 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 269 version: 36|1||51276475bd1f99446659365b based on: 35|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:17.836 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 270 version: 36|1||51276475bd1f99446659365b based on: 35|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:17.836 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 35|1||000000000000000000000000min: { _id: 15707.0 }max: { _id: 16169.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:17.836 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:17.836 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 15707.0 }, max: { _id: 16169.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_15707.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:17.837 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 271 version: 36|1||51276475bd1f99446659365b based on: 36|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:17.837 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649d99334798f3e47e9d m30999| Fri 
Feb 22 12:29:17.837 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 36|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:29:17.837 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649d99334798f3e47e9e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536157837), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 15707.0 }, max: { _id: 16169.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:29:17.837 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 36000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 271
 m30999| Fri Feb 22 12:29:17.837 [conn1] setShardVersion success: { oldVersion: Timestamp 35000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:17.838 [conn4] moveChunk request accepted at version 35|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:29:17.839 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:29:17.839 [migrateThread] starting receiving-end of migration of chunk { _id: 15707.0 } -> { _id: 16169.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:17.849 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 15707.0 }, max: { _id: 16169.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 172, clonedBytes: 179396, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:17.854 [cleanupOldData-5127649d99334798f3e47e9b] waiting to remove documents for test.foo from { _id: 31857.0 } -> { _id: 32794.0 }
 m30001| Fri Feb 22 12:29:17.859 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 15707.0 }, max: { _id: 16169.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 431, clonedBytes: 449533, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:29:17.861 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:17.861 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 15707.0 } -> { _id: 16169.0 }
 m30000| Fri Feb 22 12:29:17.863 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 15707.0 } -> { _id: 16169.0 }
 m30001| Fri Feb 22 12:29:17.870 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 15707.0 }, max: { _id: 16169.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:17.870 [conn4] moveChunk setting version to: 36|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:17.870 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:29:17.872 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 272 version: 35|1||51276475bd1f99446659365c based on: 35|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:17.873 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 35|1||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:17.873 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 15707.0 } -> { _id: 16169.0 }
 m30000| Fri Feb 22 12:29:17.873 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 15707.0 } -> { _id: 16169.0 }
 m30000| Fri Feb 22 12:29:17.873 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649dc49297cf54df5619", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536157873), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 15707.0 }, max: { _id: 16169.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 11 } }
 m30999| Fri Feb 22 12:29:17.873 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 35000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 272
 m30001| Fri Feb 22 12:29:17.873 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:29:17.880 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 15707.0 }, max: { _id: 16169.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:17.880 [conn4] moveChunk updating self version to: 36|1||51276475bd1f99446659365c through { _id: 16169.0 } -> { _id: 16631.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:29:17.881 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649d99334798f3e47e9f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536157881), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 15707.0 }, max: { _id: 16169.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:17.881 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| Fri Feb 22 12:29:17.881 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 35000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 35000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 36000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:29:17.881 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:17.881 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:29:17.881 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:17.881 [cleanupOldData-5127649d99334798f3e47ea0] (start) waiting to cleanup test.bar from { _id: 15707.0 } -> { _id: 16169.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:17.881 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:17.881 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:17.882 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:17-5127649d99334798f3e47ea1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536157882), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 15707.0 }, max: { _id: 16169.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:17.882 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:29:17.883 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 273 version: 36|1||51276475bd1f99446659365c based on: 35|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:17.884 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 274 version: 36|1||51276475bd1f99446659365c based on: 35|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:17.885 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:29:17.885 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
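The `setShardVersion failed!` / `reloadConfig: true` exchange above is the normal stale-routing-table recovery, not an error in the test: mongos still holds version 35|1 for the collection while the donor shard's global version has already advanced to 36|0 after the migration commit, so the shard rejects the request and mongos reloads its chunk manager and retries. A toy sketch of that version comparison (a hypothetical illustration in the shell's own JavaScript, not the server's actual code — the real ChunkVersion also carries the epoch ObjectId shown after `||` in the log):

```javascript
// Hypothetical sketch: a chunk version like "35|1||5127...365c" has a major
// part (bumped on migration), a minor part (bumped on split), and an epoch
// identifying the collection incarnation.
function isStaleVersion(requested, global) {
  // Different epochs mean the collection was dropped and recreated;
  // major/minor comparison is only meaningful within one epoch.
  if (requested.epoch !== global.epoch) return false;
  if (requested.major !== global.major) return requested.major < global.major;
  return requested.minor < global.minor;
}

// mongos asks for 35|1, but the shard already committed the move at 36|0:
const requested = { major: 35, minor: 1, epoch: "51276475bd1f99446659365c" };
const globalVer = { major: 36, minor: 0, epoch: "51276475bd1f99446659365c" };
console.log(isStaleVersion(requested, globalVer)); // true -> shard answers reloadConfig: true
```

Consistent with this, the log shows mongos forcing a chunk manager reload and then succeeding with `setShardVersion ... Timestamp 36000|1` a few milliseconds later.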
 m30001| Fri Feb 22 12:29:17.886 [cleanupOldData-5127649c99334798f3e47e96] moveChunk deleted 462 documents for test.bar from { _id: 15245.0 } -> { _id: 15707.0 }
 m30001| Fri Feb 22 12:29:17.886 [cleanupOldData-5127649d99334798f3e47e9b] moveChunk starting delete for: test.foo from { _id: 31857.0 } -> { _id: 32794.0 }
 m30999| Fri Feb 22 12:29:17.886 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 275 version: 36|1||51276475bd1f99446659365c based on: 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:17.886 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:17.887 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 36000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 275
 m30999| Fri Feb 22 12:29:17.887 [conn1] setShardVersion success: { oldVersion: Timestamp 35000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:17.901 [cleanupOldData-5127649d99334798f3e47ea0] waiting to remove documents for test.bar from { _id: 15707.0 } -> { _id: 16169.0 }
37000
 m30999| Fri Feb 22 12:29:18.886 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:18.886 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:18.886 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:29:18 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "5127649ebd1f994466593680" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "5127649dbd1f99446659367f" } }
 m30999| Fri Feb 22 12:29:18.887 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127649ebd1f994466593680
 m30999| Fri Feb 22 12:29:18.887 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:18.887 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:18.887 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:18.889 [Balancer] shard0001 has more chunks me:72 best: shard0000:35
 m30999| Fri Feb 22 12:29:18.889 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:18.889 [Balancer] donor      : shard0001 chunks on 72
 m30999| Fri Feb 22 12:29:18.889 [Balancer] receiver   : shard0000 chunks on 35
 m30999| Fri Feb 22 12:29:18.889 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:29:18.889 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_32794.0", lastmod: Timestamp 36000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 32794.0 }, max: { _id: 33731.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:18.891 [Balancer] shard0001 has more chunks me:182 best: shard0000:35
 m30999| Fri Feb 22 12:29:18.891 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:18.891 [Balancer] donor      : shard0001 chunks on 182
 m30999| Fri Feb 22 12:29:18.891 [Balancer] receiver   : shard0000 chunks on 35
 m30999| Fri Feb 22 12:29:18.891 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:29:18.891 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_16169.0", lastmod: Timestamp 36000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 16169.0 }, max: { _id: 16631.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:18.891 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 36|1||000000000000000000000000min: { _id: 32794.0 }max: { _id: 33731.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:18.891 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:18.891 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 32794.0 }, max: { _id: 33731.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_32794.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:18.892 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649e99334798f3e47ea2
 m30001| Fri Feb 22 12:29:18.892 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649e99334798f3e47ea3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536158892), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 32794.0 }, max: { _id: 33731.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:18.893 [conn4] moveChunk request accepted at version 36|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:29:18.895 [conn4] moveChunk number of documents: 937
 m30000| Fri Feb 22 12:29:18.895 [migrateThread] starting receiving-end of migration of chunk { _id: 32794.0 } -> { _id: 33731.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:18.906 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 32794.0 }, max: { _id: 33731.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 178, clonedBytes: 94340, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:18.916 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 32794.0 }, max: { _id: 33731.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 486, clonedBytes: 257580, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:18.926 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 32794.0 }, max: { _id: 33731.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 794, clonedBytes: 420820, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:29:18.931 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:18.931 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 32794.0 } -> { _id: 33731.0 }
 m30000| Fri Feb 22 12:29:18.933 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 32794.0 } -> { _id: 33731.0 }
 m30001| Fri Feb 22 12:29:18.936 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 32794.0 }, max: { _id: 33731.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:18.936 [conn4] moveChunk setting version to: 37|0||51276475bd1f99446659365b
 m30000| Fri Feb 22 12:29:18.936 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:29:18.938 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 276 version: 36|1||51276475bd1f99446659365b based on: 36|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:18.938 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 36|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:18.939 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 36000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 276
 m30001| Fri Feb 22 12:29:18.939 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:29:18.944 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 32794.0 } -> { _id: 33731.0 }
 m30000| Fri Feb 22 12:29:18.944 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 32794.0 } -> { _id: 33731.0 }
 m30000| Fri Feb 22 12:29:18.944 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649ec49297cf54df561a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536158944), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 32794.0 }, max: { _id: 33731.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 34, step4 of 5: 0, step5 of 5: 12 } }
 m30001| Fri Feb 22 12:29:18.947 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 32794.0 }, max: { _id: 33731.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:18.947 [conn4] moveChunk updating self version to: 37|1||51276475bd1f99446659365b through { _id: 33731.0 } -> { _id: 34668.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:29:18.948 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649e99334798f3e47ea4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536158948), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 32794.0 }, max: { _id: 33731.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:18.948 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| Fri Feb 22 12:29:18.948 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:29:18.948 [conn4] MigrateFromStatus::done Global lock acquired
 m30999| { oldVersion: Timestamp 36000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 36000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 37000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
 m30001| Fri Feb 22 12:29:18.948 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:29:18.948 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:18.948 [cleanupOldData-5127649e99334798f3e47ea5] (start) waiting to cleanup test.foo from { _id: 32794.0 } -> { _id: 33731.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:18.948 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:18.948 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:18.948 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649e99334798f3e47ea6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536158948), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 32794.0 }, max: { _id: 33731.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:18.948 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:29:18.950 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 277 version: 37|1||51276475bd1f99446659365b based on: 36|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:18.951 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 278 version: 37|1||51276475bd1f99446659365b based on: 36|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:18.951 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 36|1||000000000000000000000000min: { _id: 16169.0 }max: { _id: 16631.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:18.951 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:18.951 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 16169.0 }, max: { _id: 16631.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_16169.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:18.952 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 5127649e99334798f3e47ea7
 m30001| Fri Feb 22 12:29:18.952 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649e99334798f3e47ea8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536158952), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 16169.0 }, max: { _id: 16631.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:29:18.952 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 279 version: 37|1||51276475bd1f99446659365b based on: 37|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:18.953 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 37|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:18.953 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 37000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 279
 m30001| Fri Feb 22 12:29:18.953 [conn4] moveChunk request accepted at version 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:18.953 [conn1] setShardVersion success: { oldVersion: Timestamp 36000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:18.954 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:29:18.954 [migrateThread] starting receiving-end of migration of chunk { _id: 16169.0 } -> { _id: 16631.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:18.964 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16169.0 }, max: { _id: 16631.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 181, clonedBytes: 188783, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:18.968 [cleanupOldData-5127649e99334798f3e47ea5] waiting to remove documents for test.foo from { _id: 32794.0 } -> { _id: 33731.0 }
 m30001| Fri Feb 22 12:29:18.975 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16169.0 }, max: { _id: 16631.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 425, clonedBytes: 443275, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:29:18.976 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:18.976 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 16169.0 } -> { _id: 16631.0 }
 m30000| Fri Feb 22 12:29:18.979 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 16169.0 } -> { _id: 16631.0 }
 m30001| Fri Feb 22 12:29:18.985 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16169.0 }, max: { _id: 16631.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:18.985 [conn4] moveChunk setting version to: 37|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:18.985 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:29:18.988 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 280 version: 36|1||51276475bd1f99446659365c based on: 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:18.988 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:18.989 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 36000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 280
 m30001| Fri Feb 22 12:29:18.989 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:29:18.989 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 16169.0 } -> { _id: 16631.0 }
 m30000| Fri Feb 22 12:29:18.989 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 16169.0 } -> { _id: 16631.0 }
 m30000| Fri Feb 22 12:29:18.989 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649ec49297cf54df561b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536158989), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 16169.0 }, max: { _id: 16631.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } }
 m30001| Fri Feb 22 12:29:18.996 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 16169.0 }, max: { _id: 16631.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:18.996 [conn4] moveChunk updating self version to: 37|1||51276475bd1f99446659365c through { _id: 16631.0 } -> { _id: 17093.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:29:18.996 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649e99334798f3e47ea9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536158996), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 16169.0 }, max: { _id: 16631.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:18.996 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| Fri Feb 22 12:29:18.997 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:29:18.997 [conn4] MigrateFromStatus::done Global lock acquired
 m30999| { oldVersion: Timestamp 36000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 36000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 37000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:29:18.997 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:29:18.997 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:18.997 [cleanupOldData-5127649e99334798f3e47eaa] (start) waiting to cleanup test.bar from { _id: 16169.0 } -> { _id: 16631.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:18.997 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:18.997 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:18.997 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:18-5127649e99334798f3e47eab", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536158997), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 16169.0 }, max: { _id: 16631.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:18.997 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:29:18.998 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 281 version: 37|1||51276475bd1f99446659365c based on: 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:19.000 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 282 version: 37|1||51276475bd1f99446659365c based on: 36|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:19.001 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:29:19.001 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
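Each balancing round above follows the same pattern the `[Balancer]` lines spell out per collection: the shard holding the most chunks becomes the donor, the shard holding the fewest becomes the receiver, and one chunk is moved per collection per round when the imbalance exceeds the logged threshold (2 here). A hypothetical sketch of that per-collection decision, not the server's actual code (the real threshold also varies with total chunk count and draining/tagged shards):

```javascript
// Hypothetical sketch of the decision behind the Balancer's
// "donor : ... / receiver : ... / threshold : 2" log lines.
function pickMigration(chunkCounts, threshold) {
  const shards = Object.keys(chunkCounts);
  // Donor: shard with the most chunks; receiver: shard with the fewest.
  const donor = shards.reduce((a, b) => (chunkCounts[b] > chunkCounts[a] ? b : a));
  const receiver = shards.reduce((a, b) => (chunkCounts[b] < chunkCounts[a] ? b : a));
  // Move at most one chunk per round, and only if the spread is large enough.
  if (chunkCounts[donor] - chunkCounts[receiver] < threshold) return null;
  return { from: donor, to: receiver };
}

// The round logged below for test.foo: shard0001 holds 71 chunks, shard0000 holds 36.
console.log(pickMigration({ shard0000: 36, shard0001: 71 }, 2)); // { from: 'shard0001', to: 'shard0000' }
```

Because only one chunk moves per collection per round, the log keeps alternating one `test.foo` move and one `test.bar` move in every round until the counts converge.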
 m30999| Fri Feb 22 12:29:19.002 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 283 version: 37|1||51276475bd1f99446659365c based on: 37|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:19.002 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 37|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:19.002 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 37000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 283
 m30999| Fri Feb 22 12:29:19.003 [conn1] setShardVersion success: { oldVersion: Timestamp 36000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:19.017 [cleanupOldData-5127649e99334798f3e47eaa] waiting to remove documents for test.bar from { _id: 16169.0 } -> { _id: 16631.0 }
 m30001| Fri Feb 22 12:29:19.035 [cleanupOldData-5127649d99334798f3e47e9b] moveChunk deleted 937 documents for test.foo from { _id: 31857.0 } -> { _id: 32794.0 }
 m30001| Fri Feb 22 12:29:19.035 [cleanupOldData-5127649e99334798f3e47eaa] moveChunk starting delete for: test.bar from { _id: 16169.0 } -> { _id: 16631.0 }
 m30001| Fri Feb 22 12:29:19.663 [cleanupOldData-5127649e99334798f3e47eaa] moveChunk deleted 462 documents for test.bar from { _id: 16169.0 } -> { _id: 16631.0 }
 m30001| Fri Feb 22 12:29:19.663 [cleanupOldData-5127649e99334798f3e47ea5] moveChunk starting delete for: test.foo from { _id: 32794.0 } -> { _id: 33731.0 }
38000
 m30999| Fri Feb 22 12:29:20.002 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:20.002 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:20.002 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:29:20 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "512764a0bd1f994466593681" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "5127649ebd1f994466593680" } }
 m30999| Fri Feb 22 12:29:20.003 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a0bd1f994466593681
 m30999| Fri Feb 22 12:29:20.003 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:20.003 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:20.003 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:20.005 [Balancer] shard0001 has more chunks me:71 best: shard0000:36
 m30999| Fri Feb 22 12:29:20.005 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:20.005 [Balancer] donor      : shard0001 chunks on 71
 m30999| Fri Feb 22 12:29:20.005 [Balancer] receiver   : shard0000 chunks on 36
 m30999| Fri Feb 22 12:29:20.005 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:29:20.005 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_33731.0", lastmod: Timestamp 37000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:20.007 [Balancer] shard0001 has more chunks me:181 best: shard0000:36
 m30999| Fri Feb 22 12:29:20.007 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:20.007 [Balancer] donor      : shard0001 chunks on 181
 m30999| Fri Feb 22 12:29:20.007 [Balancer] receiver   : shard0000 chunks on 36
 m30999| Fri Feb 22 12:29:20.007 [Balancer] threshold  : 2
 m30999| Fri Feb 22 12:29:20.007 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_16631.0", lastmod: Timestamp 37000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 16631.0 }, max: { _id: 17093.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:20.008 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 37|1||000000000000000000000000min: { _id: 33731.0 }max: { _id: 34668.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:20.008 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:20.008 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 33731.0 }, max: { _id: 34668.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_33731.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:20.009 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a099334798f3e47eac
 m30001| Fri Feb 22 12:29:20.009 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a099334798f3e47ead", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536160009), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 33731.0 }, max: { _id: 34668.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:20.010 [conn4] moveChunk request accepted at version 37|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:29:20.013 [conn4] moveChunk number of documents: 937
 m30000| Fri Feb 22 12:29:20.013 [migrateThread] starting receiving-end of migration of chunk { _id: 33731.0 } -> { _id: 34668.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:20.023 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 107, clonedBytes: 56710, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:20.034 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 287, clonedBytes: 152110, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:20.044 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 486, clonedBytes: 257580, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:20.054 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 692, clonedBytes: 366760, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:29:20.067 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:20.067 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 33731.0 } -> { _id: 34668.0 }
 m30000| Fri Feb 22 12:29:20.070 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 33731.0 } -> { _id: 34668.0 }
 m30001| Fri Feb 22 12:29:20.070 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:20.070 [conn4] moveChunk setting version to: 38|0||51276475bd1f99446659365b
 m30000| Fri Feb 22 12:29:20.071 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:29:20.073 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 284 version: 37|1||51276475bd1f99446659365b based on: 37|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:20.073 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 37|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:20.073 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 37000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 284
 m30001| Fri Feb 22 12:29:20.073 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:29:20.080 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 33731.0 } -> { _id: 34668.0 }
 m30000| Fri Feb 22 12:29:20.080 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 33731.0 } -> { _id: 34668.0 }
 m30000| Fri Feb 22 12:29:20.080 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a0c49297cf54df561c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536160080), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 33731.0 }, max: { _id: 34668.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 52, step4 of 5: 0, step5 of 5: 13 } }
 m30001| Fri Feb 22 12:29:20.081 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 33731.0 }, max: { _id: 34668.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:20.081 [conn4] moveChunk updating self version to: 38|1||51276475bd1f99446659365b through { _id: 34668.0 } -> { _id: 35605.0 }
for collection 'test.foo' m30001| Fri Feb 22 12:29:20.081 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a099334798f3e47eae", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536160081), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 33731.0 }, max: { _id: 34668.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:20.081 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:20.081 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:20.081 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 37000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 37000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 38000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:20.081 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:20.081 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:20.081 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:20.082 [cleanupOldData-512764a099334798f3e47eaf] (start) waiting to cleanup test.foo from { _id: 33731.0 } -> { _id: 34668.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:20.082 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
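The moveChunk round logged above walks the receiving shard through the states reported in the "moveChunk data transfer progress" lines: "clone" while documents are copied, "catchup" while pending modifications drain, "steady" once the recipient is caught up, then "done" after the donor's critical-section commit. A minimal sketch of that lifecycle, assuming a simple linear state machine (the names mirror the log; the transition table is illustrative, not MongoDB source):

```python
# States in the order they appear in the migration progress lines above.
MIGRATION_STATES = ["clone", "catchup", "steady", "done"]

def next_state(state):
    """Advance one step through the migration lifecycle; 'done' is terminal."""
    i = MIGRATION_STATES.index(state)
    return MIGRATION_STATES[min(i + 1, len(MIGRATION_STATES) - 1)]

# Walk a migration from initial clone to completion, as the donor does while
# polling the recipient's progress.
state = "clone"
while state != "done":
    state = next_state(state)
print(state)  # → done
```

Note that a fast migration may skip straight from "clone" to "steady" in the polls, as the first test.foo round above does; the sketch only fixes the ordering, not which states a poll happens to observe.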
m30001| Fri Feb 22 12:29:20.082 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a099334798f3e47eb0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536160082), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 33731.0 }, max: { _id: 34668.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:20.082 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:20.083 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 285 version: 38|1||51276475bd1f99446659365b based on: 37|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:20.084 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 286 version: 38|1||51276475bd1f99446659365b based on: 37|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:20.084 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 37|1||000000000000000000000000min: { _id: 16631.0 }max: { _id: 17093.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:20.084 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:20.085 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 16631.0 }, max: { _id: 17093.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_16631.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:20.085 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 287 version: 38|1||51276475bd1f99446659365b based on: 38|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:20.086 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 38|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:20.085 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a099334798f3e47eb1
m30001| Fri Feb 22 12:29:20.086 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a099334798f3e47eb2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536160086), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 16631.0 }, max: { _id: 17093.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:20.086 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 38000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 287
m30999| Fri Feb 22 12:29:20.086 [conn1] setShardVersion success: { oldVersion: Timestamp 37000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:20.086 [conn4] moveChunk request accepted at version 37|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:20.088 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:20.088 [migrateThread] starting receiving-end of migration of chunk { _id: 16631.0 } -> { _id: 17093.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:20.098 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16631.0 }, max: { _id: 17093.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 127, clonedBytes: 132461, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:20.102 [cleanupOldData-512764a099334798f3e47eaf] waiting to remove documents for test.foo from { _id: 33731.0 } -> { _id: 34668.0 }
m30001| Fri Feb 22 12:29:20.108 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16631.0 }, max: { _id: 17093.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 279, clonedBytes: 290997, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:20.118 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16631.0 }, max: { _id: 17093.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 453, clonedBytes: 472479, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:20.119 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:20.119 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 16631.0 } -> { _id: 17093.0 }
m30000| Fri Feb 22 12:29:20.121 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 16631.0 } -> { _id: 17093.0 }
m30001| Fri Feb 22 12:29:20.128 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 16631.0 }, max: { _id: 17093.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:20.129 [conn4] moveChunk setting version to: 38|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:20.129 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:20.131 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 288 version: 37|1||51276475bd1f99446659365c based on: 37|1||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:20.131 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 16631.0 } -> { _id: 17093.0 }
m30000| Fri Feb 22 12:29:20.131 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 16631.0 } -> { _id: 17093.0 }
m30000| Fri Feb 22 12:29:20.131 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a0c49297cf54df561d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536160131), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 16631.0 }, max: { _id: 17093.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 11 } }
m30999| Fri Feb 22 12:29:20.131 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 37|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:20.131 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 37000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 288
m30001| Fri Feb 22 12:29:20.131 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:20.139 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 16631.0 }, max: { _id: 17093.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:20.139 [conn4] moveChunk updating self version to: 38|1||51276475bd1f99446659365c through { _id: 17093.0 } -> { _id: 17555.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:20.140 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a099334798f3e47eb3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536160139), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 16631.0 }, max: { _id: 17093.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:20.140 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:20.140 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:20.140 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:20.140 [conn4] forking for cleanup of chunk data
m30999| { oldVersion: Timestamp 37000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 37000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 38000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:20.140 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:20.140 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:20.140 [cleanupOldData-512764a099334798f3e47eb4] (start) waiting to cleanup test.bar from { _id: 16631.0 } -> { _id: 17093.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:20.140 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:20.140 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:20-512764a099334798f3e47eb5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536160140), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 16631.0 }, max: { _id: 17093.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:20.140 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:20.141 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 289 version: 38|1||51276475bd1f99446659365c based on: 37|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:20.143 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 290 version: 38|1||51276475bd1f99446659365c based on: 37|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:20.144 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:20.144 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
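The balancing round that just ended shows the selection logic in its own output: "donor : shard0001 chunks on 71", "receiver : shard0000 chunks on 36", "threshold : 2". A minimal sketch of that decision, assuming the simple most-loaded/least-loaded rule the log lines describe (the function name is illustrative, not MongoDB's implementation):

```python
def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) if the chunk imbalance exceeds the
    threshold, else None. chunk_counts maps shard name -> chunk count."""
    donor = max(chunk_counts, key=chunk_counts.get)      # most chunks
    receiver = min(chunk_counts, key=chunk_counts.get)   # fewest chunks
    if chunk_counts[donor] - chunk_counts[receiver] <= threshold:
        return None  # balanced enough; nothing to move this round
    return donor, receiver

# The test.foo round logged above: 71 chunks on shard0001, 36 on shard0000.
print(pick_migration({"shard0000": 36, "shard0001": 71}))
# → ('shard0001', 'shard0000')
```

Each round moves only one chunk per collection, which is why the counts shift by exactly one (71/36 to 70/37) between the rounds in this log.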
m30999| Fri Feb 22 12:29:20.145 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 291 version: 38|1||51276475bd1f99446659365c based on: 38|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:20.146 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 38|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:20.146 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 38000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 291
m30999| Fri Feb 22 12:29:20.146 [conn1] setShardVersion success: { oldVersion: Timestamp 37000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:20.160 [cleanupOldData-512764a099334798f3e47eb4] waiting to remove documents for test.bar from { _id: 16631.0 } -> { _id: 17093.0 }
m30001| Fri Feb 22 12:29:20.726 [cleanupOldData-5127649e99334798f3e47ea5] moveChunk deleted 937 documents for test.foo from { _id: 32794.0 } -> { _id: 33731.0 }
m30001| Fri Feb 22 12:29:20.726 [cleanupOldData-512764a099334798f3e47eb4] moveChunk starting delete for: test.bar from { _id: 16631.0 } -> { _id: 17093.0 }
39000
m30999| Fri Feb 22 12:29:21.145 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:21.145 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:21.146 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:21 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764a1bd1f994466593682" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764a0bd1f994466593681" } }
m30999| Fri Feb 22 12:29:21.146 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a1bd1f994466593682
m30999| Fri Feb 22 12:29:21.146 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:21.146 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:21.146 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:21.149 [Balancer] shard0001 has more chunks me:70 best: shard0000:37
m30999| Fri Feb 22 12:29:21.149 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:21.149 [Balancer] donor : shard0001 chunks on 70
m30999| Fri Feb 22 12:29:21.149 [Balancer] receiver : shard0000 chunks on 37
m30999| Fri Feb 22 12:29:21.149 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:21.149 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_34668.0", lastmod: Timestamp 38000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:21.151 [Balancer] shard0001 has more chunks me:180 best: shard0000:37
m30999| Fri Feb 22 12:29:21.151 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:21.151 [Balancer] donor : shard0001 chunks on 180
m30999| Fri Feb 22 12:29:21.151 [Balancer] receiver : shard0000 chunks on 37
m30999| Fri Feb 22 12:29:21.151 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:21.151 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_17093.0", lastmod: Timestamp 38000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 17093.0 }, max: { _id: 17555.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:21.152 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 38|1||000000000000000000000000min: { _id: 34668.0 }max: { _id: 35605.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:21.152 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:21.152 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 34668.0 }, max: { _id: 35605.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_34668.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:21.153 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a199334798f3e47eb6
m30001| Fri Feb 22 12:29:21.153 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a199334798f3e47eb7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536161153), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 34668.0 }, max: { _id: 35605.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:21.154 [conn4] moveChunk request accepted at version 38|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:21.157 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:21.158 [migrateThread] starting receiving-end of migration of chunk { _id: 34668.0 } -> { _id: 35605.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:21.168 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 111, clonedBytes: 58830, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:21.178 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 313, clonedBytes: 165890, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:21.188 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 512, clonedBytes: 271360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:21.198 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 713, clonedBytes: 377890, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:21.210 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:21.210 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 34668.0 } -> { _id: 35605.0 }
m30000| Fri Feb 22 12:29:21.213 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 34668.0 } -> { _id: 35605.0 }
m30001| Fri Feb 22 12:29:21.214 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:21.215 [conn4] moveChunk setting version to: 39|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:21.215 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:21.217 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 292 version: 38|1||51276475bd1f99446659365b based on: 38|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:21.217 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 38|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:21.217 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 38000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 292
m30001| Fri Feb 22 12:29:21.217 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:21.223 [cleanupOldData-512764a099334798f3e47eb4] moveChunk deleted 462 documents for test.bar from { _id: 16631.0 } -> { _id: 17093.0 }
m30001| Fri Feb 22 12:29:21.223 [cleanupOldData-512764a099334798f3e47eaf] moveChunk starting delete for: test.foo from { _id: 33731.0 } -> { _id: 34668.0 }
m30000| Fri Feb 22 12:29:21.224 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 34668.0 } -> { _id: 35605.0 }
m30000| Fri Feb 22 12:29:21.224 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 34668.0 } -> { _id: 35605.0 }
m30000| Fri Feb 22 12:29:21.224 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a1c49297cf54df561e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536161224), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 34668.0 }, max: { _id: 35605.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:21.225 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 34668.0 }, max: { _id: 35605.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:21.225 [conn4] moveChunk updating self version to: 39|1||51276475bd1f99446659365b through { _id: 35605.0 } -> { _id: 36542.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:21.226 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a199334798f3e47eb8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536161226), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 34668.0 }, max: { _id: 35605.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:21.226 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:21.226 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:21.226 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 38000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 38000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 39000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:21.226 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:21.226 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:21.226 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:21.226 [cleanupOldData-512764a199334798f3e47eb9] (start) waiting to cleanup test.foo from { _id: 34668.0 } -> { _id: 35605.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:21.226 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:21.226 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a199334798f3e47eba", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536161226), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 34668.0 }, max: { _id: 35605.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:21.226 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:21.227 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 293 version: 39|1||51276475bd1f99446659365b based on: 38|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:21.229 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 294 version: 39|1||51276475bd1f99446659365b based on: 38|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:21.229 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 38|1||000000000000000000000000min: { _id: 17093.0 }max: { _id: 17555.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:21.229 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:21.230 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 17093.0 }, max: { _id: 17555.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_17093.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:21.230 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 295 version: 39|1||51276475bd1f99446659365b based on: 39|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:21.231 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 39|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:21.231 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 39000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 295
m30999| Fri Feb 22 12:29:21.231 [conn1] setShardVersion success: { oldVersion: Timestamp 38000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:21.231 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a199334798f3e47ebb
m30001| Fri Feb 22 12:29:21.231 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a199334798f3e47ebc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536161231), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 17093.0 }, max: { _id: 17555.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:21.232 [conn4] moveChunk request accepted at version 38|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:21.234 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:21.234 [migrateThread] starting receiving-end of migration of chunk { _id: 17093.0 } -> { _id: 17555.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:21.244 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17093.0 }, max: { _id: 17555.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 121, clonedBytes: 126203, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:21.246 [cleanupOldData-512764a199334798f3e47eb9] waiting to remove documents for test.foo from { _id: 34668.0 } -> { _id: 35605.0 }
m30001| Fri Feb 22 12:29:21.254 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17093.0 }, max: { _id: 17555.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 293, clonedBytes: 305599, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:21.264 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:21.264 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 17093.0 } -> { _id: 17555.0 }
m30001| Fri Feb 22 12:29:21.264 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17093.0 }, max: { _id: 17555.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:21.266 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 17093.0 } -> { _id: 17555.0 }
m30001| Fri Feb 22 12:29:21.274 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17093.0 }, max: { _id: 17555.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:21.275 [conn4] moveChunk setting version to: 39|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:21.275 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:21.277 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 17093.0 } -> { _id: 17555.0 }
m30000| Fri Feb 22 12:29:21.277 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 17093.0 } -> { _id: 17555.0 }
m30000| Fri Feb 22 12:29:21.277 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a1c49297cf54df561f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536161277), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 17093.0 }, max: { _id: 17555.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:21.278 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 296 version: 38|1||51276475bd1f99446659365c based on: 38|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:21.278 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 38|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:21.278 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 38000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 296
m30001| Fri Feb 22 12:29:21.278 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:21.285 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 17093.0 }, max: { _id: 17555.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:21.285 [conn4] moveChunk updating self version to: 39|1||51276475bd1f99446659365c through { _id: 17555.0 } -> { _id: 18017.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:21.286 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a199334798f3e47ebd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536161286), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 17093.0 }, max: { _id: 17555.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:21.286 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:21.286 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:21.286 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 38000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 38000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 39000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:21.286 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:21.286 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:21.286 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:21.286 [cleanupOldData-512764a199334798f3e47ebe] (start) waiting to cleanup test.bar from { _id: 17093.0 } -> { _id: 17555.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:21.286 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:21.286 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:21-512764a199334798f3e47ebf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536161286), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 17093.0 }, max: { _id: 17555.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:21.286 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:21.288 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 297 version: 39|1||51276475bd1f99446659365c based on: 38|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:21.290 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 298 version: 39|1||51276475bd1f99446659365c based on: 38|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:21.291 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:21.291 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
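The round above walks one chunk of `test.bar` through the full migration lifecycle, and each phase is recorded as a `changelog` metadata event (`moveChunk.start`, `moveChunk.to`, `moveChunk.commit`, `moveChunk.from`). A minimal Python sketch (a hypothetical helper, not part of the test suite) for pulling those lifecycle events out of a log stream like this one:

```python
import re

# Match the metadata events the shards log during a migration, e.g.
#   ... what: "moveChunk.start", ns: "test.bar", details: ...
# The "what" field always precedes "ns" in these entries.
EVENT_RE = re.compile(r'what: "(moveChunk\.\w+)", ns: "([^"]+)"')

def migration_events(log_text):
    """Return (event, namespace) pairs in the order they were logged."""
    return EVENT_RE.findall(log_text)

sample = (
    'm30001| Fri Feb 22 12:29:21.231 [conn4] about to log metadata event: '
    '{ _id: "...", what: "moveChunk.start", ns: "test.bar", details: { ... } }'
)
print(migration_events(sample))  # → [('moveChunk.start', 'test.bar')]
```

Grouping the events per namespace makes it easy to confirm that every `moveChunk.start` in a run is paired with a matching `moveChunk.commit` and `moveChunk.from`.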
m30999| Fri Feb 22 12:29:21.292 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 299 version: 39|1||51276475bd1f99446659365c based on: 39|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:21.292 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 39|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:21.293 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 39000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 299 m30999| Fri Feb 22 12:29:21.293 [conn1] setShardVersion success: { oldVersion: Timestamp 38000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:21.306 [cleanupOldData-512764a199334798f3e47ebe] waiting to remove documents for test.bar from { _id: 17093.0 } -> { _id: 17555.0 } 40000 m30999| Fri Feb 22 12:29:22.292 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:22.292 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:22.292 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:22 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764a2bd1f994466593683" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764a1bd1f994466593682" } } m30999| Fri Feb 22 12:29:22.293 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a2bd1f994466593683 m30999| Fri Feb 22 12:29:22.293 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:22.293 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:22.293 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:22.295 [Balancer] shard0001 has more chunks me:69 best: shard0000:38 m30999| Fri Feb 22 12:29:22.295 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:22.295 [Balancer] donor : shard0001 chunks on 69 m30999| Fri Feb 22 12:29:22.296 [Balancer] receiver : shard0000 chunks on 38 m30999| Fri Feb 22 12:29:22.296 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:22.296 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_35605.0", lastmod: Timestamp 39000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:22.298 [Balancer] shard0001 has more chunks me:179 best: shard0000:38 m30999| Fri Feb 22 12:29:22.298 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:22.298 [Balancer] donor : shard0001 chunks on 179 m30999| Fri Feb 22 12:29:22.298 [Balancer] receiver : shard0000 chunks on 38 m30999| Fri Feb 22 12:29:22.298 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:22.298 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_17555.0", lastmod: Timestamp 39000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 17555.0 }, max: { _id: 18017.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:22.298 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 39|1||000000000000000000000000min: { _id: 35605.0 }max: { _id: 36542.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:22.298 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri 
Feb 22 12:29:22.298 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 35605.0 }, max: { _id: 36542.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_35605.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:22.299 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a299334798f3e47ec0 m30001| Fri Feb 22 12:29:22.299 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a299334798f3e47ec1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536162299), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 35605.0 }, max: { _id: 36542.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:22.301 [conn4] moveChunk request accepted at version 39|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:22.303 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:22.304 [migrateThread] starting receiving-end of migration of chunk { _id: 35605.0 } -> { _id: 36542.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:22.314 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 124, clonedBytes: 65720, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.324 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 329, clonedBytes: 174370, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.334 [conn4] moveChunk data transfer progress: 
{ active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 541, clonedBytes: 286730, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.345 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 745, clonedBytes: 394850, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.357 [cleanupOldData-512764a099334798f3e47eaf] moveChunk deleted 937 documents for test.foo from { _id: 33731.0 } -> { _id: 34668.0 } m30001| Fri Feb 22 12:29:22.357 [cleanupOldData-512764a199334798f3e47ebe] moveChunk starting delete for: test.bar from { _id: 17093.0 } -> { _id: 17555.0 } m30000| Fri Feb 22 12:29:22.358 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:22.358 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 35605.0 } -> { _id: 36542.0 } m30000| Fri Feb 22 12:29:22.358 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 35605.0 } -> { _id: 36542.0 } m30001| Fri Feb 22 12:29:22.361 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.361 [conn4] moveChunk setting version to: 40|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:22.361 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:22.363 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 300 version: 39|1||51276475bd1f99446659365b based on: 39|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:22.363 
[conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 39|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:22.364 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 39000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 300 m30001| Fri Feb 22 12:29:22.364 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:22.369 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 35605.0 } -> { _id: 36542.0 } m30000| Fri Feb 22 12:29:22.369 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 35605.0 } -> { _id: 36542.0 } m30000| Fri Feb 22 12:29:22.369 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a2c49297cf54df5620", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536162369), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 35605.0 }, max: { _id: 36542.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 53, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 12:29:22.371 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 35605.0 }, max: { _id: 36542.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:22.371 [conn4] moveChunk updating self version to: 40|1||51276475bd1f99446659365b through { _id: 36542.0 } -> { _id: 37479.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:22.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a299334798f3e47ec2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new 
Date(1361536162372), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 35605.0 }, max: { _id: 36542.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:22.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:22.372 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:22.372 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:22.372 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 39000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 39000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 40000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:22.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:22.372 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:22.372 [cleanupOldData-512764a299334798f3e47ec3] (start) waiting to cleanup test.foo from { _id: 35605.0 } -> { _id: 36542.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:22.372 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:22.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a299334798f3e47ec4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536162372), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 35605.0 }, max: { _id: 36542.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:22.373 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:22.373 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 301 version: 40|1||51276475bd1f99446659365b based on: 39|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:22.375 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 302 version: 40|1||51276475bd1f99446659365b based on: 39|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:22.375 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 39|1||000000000000000000000000min: { _id: 17555.0 }max: { _id: 18017.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:22.375 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:22.376 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 17555.0 }, max: { _id: 18017.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_17555.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:22.376 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 303 version: 40|1||51276475bd1f99446659365b based on: 40|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:22.377 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 40|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:29:22.377 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 40000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 303 m30001| Fri Feb 22 12:29:22.377 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a299334798f3e47ec5 m30999| Fri Feb 22 12:29:22.377 [conn1] setShardVersion success: { oldVersion: Timestamp 39000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:22.377 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a299334798f3e47ec6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536162377), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 17555.0 }, max: { _id: 18017.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:22.378 [conn4] moveChunk request accepted at version 39|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:22.380 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:22.380 [migrateThread] starting receiving-end of migration of chunk { _id: 17555.0 } -> { _id: 18017.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:22.390 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17555.0 }, max: { _id: 18017.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 134, clonedBytes: 139762, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.392 [cleanupOldData-512764a299334798f3e47ec3] waiting to remove documents for test.foo from { _id: 35605.0 } -> { _id: 36542.0 } m30001| Fri Feb 22 12:29:22.400 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", 
from: "localhost:30001", min: { _id: 17555.0 }, max: { _id: 18017.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 315, clonedBytes: 328545, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:22.409 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:22.409 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 17555.0 } -> { _id: 18017.0 } m30001| Fri Feb 22 12:29:22.411 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17555.0 }, max: { _id: 18017.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:22.411 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 17555.0 } -> { _id: 18017.0 } m30001| Fri Feb 22 12:29:22.421 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 17555.0 }, max: { _id: 18017.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:22.421 [conn4] moveChunk setting version to: 40|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:22.421 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:22.422 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 17555.0 } -> { _id: 18017.0 } m30000| Fri Feb 22 12:29:22.422 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 17555.0 } -> { _id: 18017.0 } m30000| Fri Feb 22 12:29:22.422 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a2c49297cf54df5621", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536162422), what: "moveChunk.to", ns: "test.bar", 
details: { min: { _id: 17555.0 }, max: { _id: 18017.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:22.424 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 304 version: 39|1||51276475bd1f99446659365c based on: 39|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:22.425 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 39|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:22.425 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 39000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 304 m30001| Fri Feb 22 12:29:22.425 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:22.431 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 17555.0 }, max: { _id: 18017.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:22.431 [conn4] moveChunk updating self version to: 40|1||51276475bd1f99446659365c through { _id: 18017.0 } -> { _id: 18479.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:22.432 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a299334798f3e47ec7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536162432), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 17555.0 }, max: { _id: 18017.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:22.432 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:22.433 [conn4] MigrateFromStatus::done Global lock acquired 
m30999| Fri Feb 22 12:29:22.433 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:29:22.433 [conn4] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 39000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 39000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 40000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:22.433 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:22.433 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:22.433 [cleanupOldData-512764a299334798f3e47ec8] (start) waiting to cleanup test.bar from { _id: 17555.0 } -> { _id: 18017.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:22.433 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:22.433 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:22-512764a299334798f3e47ec9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536162433), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 17555.0 }, max: { _id: 18017.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:22.433 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:22.435 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 305 version: 40|1||51276475bd1f99446659365c based on: 39|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:22.437 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 306 version: 40|1||51276475bd1f99446659365c based on: 39|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:22.438 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:22.438 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
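The `setShardVersion failed!` lines in this round are the expected stale-config path: mongos offers version `39|1||<epoch>` while the donor's global version has already advanced to `40|0||<epoch>`, so the shard rejects the request with `reloadConfig: true` and mongos reloads the chunk manager and retries successfully. A rough sketch of that ordering rule (an illustrative model, not the actual server code), treating a chunk version as `major|minor||epoch`:

```python
# Illustrative model of the comparison behind
# "shard global version for collection is higher than trying to set to ...":
# a setShardVersion is rejected when its (major, minor) counters are behind
# the shard's current global version for the same epoch.

def parse_version(v):
    """'39|1||51276475bd...' -> ((39, 1), '51276475bd...')."""
    counters, _, epoch = v.partition("||")
    major, _, minor = counters.partition("|")
    return (int(major), int(minor)), epoch

def accepts(requested, global_version):
    """True if the requested version is at or ahead of the global one."""
    req, req_epoch = parse_version(requested)
    cur, cur_epoch = parse_version(global_version)
    return req_epoch == cur_epoch and req >= cur

print(accepts("39|1||e1", "40|0||e1"))  # False: stale, mongos must reload
print(accepts("40|1||e1", "40|0||e1"))  # True: at or ahead of global
```

This is why the failure is immediately followed in the log by a forced chunk manager reload and a second `setShardVersion` that succeeds at `40|1`.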
m30999| Fri Feb 22 12:29:22.439 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 307 version: 40|1||51276475bd1f99446659365c based on: 40|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:22.440 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 40|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:22.440 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 40000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 307 m30999| Fri Feb 22 12:29:22.440 [conn1] setShardVersion success: { oldVersion: Timestamp 39000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:22.453 [cleanupOldData-512764a299334798f3e47ec8] waiting to remove documents for test.bar from { _id: 17555.0 } -> { _id: 18017.0 } m30001| Fri Feb 22 12:29:22.685 [cleanupOldData-512764a199334798f3e47ebe] moveChunk deleted 462 documents for test.bar from { _id: 17093.0 } -> { _id: 17555.0 } m30001| Fri Feb 22 12:29:22.685 [cleanupOldData-512764a299334798f3e47ec8] moveChunk starting delete for: test.bar from { _id: 17555.0 } -> { _id: 18017.0 } 41000 m30001| Fri Feb 22 12:29:23.302 [cleanupOldData-512764a299334798f3e47ec8] moveChunk deleted 462 documents for test.bar from { _id: 17555.0 } -> { _id: 18017.0 } m30001| Fri Feb 22 12:29:23.302 [cleanupOldData-512764a299334798f3e47ec3] moveChunk starting delete for: test.foo from { _id: 35605.0 } -> { _id: 36542.0 } m30999| Fri Feb 22 12:29:23.439 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:23.439 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:23.440 [Balancer] about to 
acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:23 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764a3bd1f994466593684" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764a2bd1f994466593683" } } m30999| Fri Feb 22 12:29:23.440 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a3bd1f994466593684 m30999| Fri Feb 22 12:29:23.440 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:23.440 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:23.440 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:23.442 [Balancer] shard0001 has more chunks me:68 best: shard0000:39 m30999| Fri Feb 22 12:29:23.443 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:23.443 [Balancer] donor : shard0001 chunks on 68 m30999| Fri Feb 22 12:29:23.443 [Balancer] receiver : shard0000 chunks on 39 m30999| Fri Feb 22 12:29:23.443 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:23.443 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_36542.0", lastmod: Timestamp 40000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:23.445 [Balancer] shard0001 has more chunks me:178 best: shard0000:39 m30999| Fri Feb 22 12:29:23.445 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:23.445 [Balancer] donor : shard0001 chunks on 178 m30999| Fri Feb 22 12:29:23.445 [Balancer] receiver : shard0000 chunks on 39 m30999| Fri Feb 22 12:29:23.445 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:23.445 [Balancer] ns: test.bar going to move { 
_id: "test.bar-_id_18017.0", lastmod: Timestamp 40000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 18017.0 }, max: { _id: 18479.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:23.446 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 40|1||000000000000000000000000min: { _id: 36542.0 }max: { _id: 37479.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:23.446 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:23.446 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 36542.0 }, max: { _id: 37479.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_36542.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:23.447 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a399334798f3e47eca
m30001| Fri Feb 22 12:29:23.447 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a399334798f3e47ecb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536163447), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 36542.0 }, max: { _id: 37479.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:23.448 [conn4] moveChunk request accepted at version 40|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:23.451 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:23.451 [migrateThread] starting receiving-end of migration of chunk { _id: 36542.0 } -> { _id: 37479.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:23.462 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 172, clonedBytes: 91160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:23.472 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 431, clonedBytes: 228430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:23.482 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 630, clonedBytes: 333900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:23.492 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 846, clonedBytes: 448380, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:23.496 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:23.496 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 36542.0 } -> { _id: 37479.0 }
m30000| Fri Feb 22 12:29:23.497 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 36542.0 } -> { _id: 37479.0 }
m30001| Fri Feb 22 12:29:23.508 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:23.508 [conn4] moveChunk setting version to: 41|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:23.509 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:23.510 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 308 version: 40|1||51276475bd1f99446659365b based on: 40|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:23.511 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 40|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:23.511 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 40000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 308
m30001| Fri Feb 22 12:29:23.511 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:23.517 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 36542.0 } -> { _id: 37479.0 }
m30000| Fri Feb 22 12:29:23.517 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 36542.0 } -> { _id: 37479.0 }
m30000| Fri Feb 22 12:29:23.517 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a3c49297cf54df5622", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536163517), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 36542.0 }, max: { _id: 37479.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 43, step4 of 5: 0, step5 of 5: 21 } }
m30001| Fri Feb 22 12:29:23.519 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 36542.0 }, max: { _id: 37479.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:23.519 [conn4] moveChunk updating self version to: 41|1||51276475bd1f99446659365b through { _id: 37479.0 } -> { _id: 38416.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:23.519 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a399334798f3e47ecc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536163519), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 36542.0 }, max: { _id: 37479.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:23.519 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:23.519 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:23.519 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:23.519 [conn4] forking for cleanup of chunk data
m30999| { oldVersion: Timestamp 40000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 40000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 41000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:23.519 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:23.519 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:23.519 [cleanupOldData-512764a399334798f3e47ecd] (start) waiting to cleanup test.foo from { _id: 36542.0 } -> { _id: 37479.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:23.520 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:23.520 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a399334798f3e47ece", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536163520), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 36542.0 }, max: { _id: 37479.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 56, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:23.520 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:23.521 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 309 version: 41|1||51276475bd1f99446659365b based on: 40|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:23.522 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 310 version: 41|1||51276475bd1f99446659365b based on: 40|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:23.523 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 40|1||000000000000000000000000min: { _id: 18017.0 }max: { _id: 18479.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:23.523 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:23.523 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 18017.0 }, max: { _id: 18479.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_18017.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:23.524 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 311 version: 41|1||51276475bd1f99446659365b based on: 41|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:23.524 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 41|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:23.524 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 41000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 311
m30999| Fri Feb 22 12:29:23.524 [conn1] setShardVersion success: { oldVersion: Timestamp 40000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:23.524 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a399334798f3e47ecf
m30001| Fri Feb 22 12:29:23.524 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a399334798f3e47ed0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536163524), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 18017.0 }, max: { _id: 18479.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:23.526 [conn4] moveChunk request accepted at version 40|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:23.527 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:23.527 [migrateThread] starting receiving-end of migration of chunk { _id: 18017.0 } -> { _id: 18479.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:23.537 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18017.0 }, max: { _id: 18479.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 138, clonedBytes: 143934, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:23.540 [cleanupOldData-512764a399334798f3e47ecd] waiting to remove documents for test.foo from { _id: 36542.0 } -> { _id: 37479.0 }
m30001| Fri Feb 22 12:29:23.548 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18017.0 }, max: { _id: 18479.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 314, clonedBytes: 327502, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:23.556 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:23.556 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 18017.0 } -> { _id: 18479.0 }
m30001| Fri Feb 22 12:29:23.558 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18017.0 }, max: { _id: 18479.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:23.559 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 18017.0 } -> { _id: 18479.0 }
m30001| Fri Feb 22 12:29:23.568 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18017.0 }, max: { _id: 18479.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:23.568 [conn4] moveChunk setting version to: 41|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:23.568 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:23.569 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 18017.0 } -> { _id: 18479.0 }
m30000| Fri Feb 22 12:29:23.569 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 18017.0 } -> { _id: 18479.0 }
m30000| Fri Feb 22 12:29:23.569 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a3c49297cf54df5623", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536163569), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 18017.0 }, max: { _id: 18479.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:23.570 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 312 version: 40|1||51276475bd1f99446659365c based on: 40|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:23.570 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 40|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:23.570 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 40000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 312
m30001| Fri Feb 22 12:29:23.570 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:23.578 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 18017.0 }, max: { _id: 18479.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:23.578 [conn4] moveChunk updating self version to: 41|1||51276475bd1f99446659365c through { _id: 18479.0 } -> { _id: 18941.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:23.579 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a399334798f3e47ed1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536163579), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 18017.0 }, max: { _id: 18479.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:23.579 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:23.579 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 40000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 40000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 41000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:23.579 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:23.579 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:23.579 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:23.579 [cleanupOldData-512764a399334798f3e47ed2] (start) waiting to cleanup test.bar from { _id: 18017.0 } -> { _id: 18479.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:23.579 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:23.580 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:23.580 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:23-512764a399334798f3e47ed3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536163580), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 18017.0 }, max: { _id: 18479.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:23.580 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:23.581 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 313 version: 41|1||51276475bd1f99446659365c based on: 40|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:23.583 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 314 version: 41|1||51276475bd1f99446659365c based on: 40|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:23.584 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:23.584 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:23.585 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 315 version: 41|1||51276475bd1f99446659365c based on: 41|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:23.585 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 41|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:23.585 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 41000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 315
m30999| Fri Feb 22 12:29:23.586 [conn1] setShardVersion success: { oldVersion: Timestamp 40000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:23.599 [cleanupOldData-512764a399334798f3e47ed2] waiting to remove documents for test.bar from { _id: 18017.0 } -> { _id: 18479.0 }
m30001| Fri Feb 22 12:29:23.765 [cleanupOldData-512764a299334798f3e47ec3] moveChunk deleted 937 documents for test.foo from { _id: 35605.0 } -> { _id: 36542.0 }
m30001| Fri Feb 22 12:29:23.765 [cleanupOldData-512764a399334798f3e47ed2] moveChunk starting delete for: test.bar from { _id: 18017.0 } -> { _id: 18479.0 }
42000
m30001| Fri Feb 22 12:29:24.332 [cleanupOldData-512764a399334798f3e47ed2] moveChunk deleted 462 documents for test.bar from { _id: 18017.0 } -> { _id: 18479.0 }
m30001| Fri Feb 22 12:29:24.332 [cleanupOldData-512764a399334798f3e47ecd] moveChunk starting delete for: test.foo from { _id: 36542.0 } -> { _id: 37479.0 }
m30999| Fri Feb 22 12:29:24.585 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:24.585 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:24.586 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:24 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764a4bd1f994466593685" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764a3bd1f994466593684" } }
m30999| Fri Feb 22 12:29:24.586 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a4bd1f994466593685
m30999| Fri Feb 22 12:29:24.586 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:24.586 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:24.586 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:24.589 [Balancer] shard0001 has more chunks me:67 best: shard0000:40
m30999| Fri Feb 22 12:29:24.589 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:24.589 [Balancer] donor : shard0001 chunks on 67
m30999| Fri Feb 22 12:29:24.589 [Balancer] receiver : shard0000 chunks on 40
m30999| Fri Feb 22 12:29:24.589 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:24.589 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_37479.0", lastmod: Timestamp 41000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:24.591 [Balancer] shard0001 has more chunks me:177 best: shard0000:40
m30999| Fri Feb 22 12:29:24.591 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:24.591 [Balancer] donor : shard0001 chunks on 177
m30999| Fri Feb 22 12:29:24.591 [Balancer] receiver : shard0000 chunks on 40
m30999| Fri Feb 22 12:29:24.591 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:24.591 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_18479.0", lastmod: Timestamp 41000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 18479.0 }, max: { _id: 18941.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:24.591 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 41|1||000000000000000000000000min: { _id: 37479.0 }max: { _id: 38416.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:24.592 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:24.592 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 37479.0 }, max: { _id: 38416.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_37479.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:24.593 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a499334798f3e47ed4
m30001| Fri Feb 22 12:29:24.593 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a499334798f3e47ed5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536164593), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 37479.0 }, max: { _id: 38416.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:24.594 [conn4] moveChunk request accepted at version 41|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:24.597 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:24.597 [migrateThread] starting receiving-end of migration of chunk { _id: 37479.0 } -> { _id: 38416.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:24.607 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 125, clonedBytes: 66250, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:24.618 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 311, clonedBytes: 164830, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:24.628 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 516, clonedBytes: 273480, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:24.638 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 722, clonedBytes: 382660, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:24.649 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:24.649 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 37479.0 } -> { _id: 38416.0 }
m30000| Fri Feb 22 12:29:24.652 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 37479.0 } -> { _id: 38416.0 }
m30001| Fri Feb 22 12:29:24.654 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:24.654 [conn4] moveChunk setting version to: 42|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:24.654 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:24.657 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 316 version: 41|1||51276475bd1f99446659365b based on: 41|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:24.657 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 41|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:24.657 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 41000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 316
m30001| Fri Feb 22 12:29:24.657 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:24.662 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 37479.0 } -> { _id: 38416.0 }
m30000| Fri Feb 22 12:29:24.662 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 37479.0 } -> { _id: 38416.0 }
m30000| Fri Feb 22 12:29:24.662 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a4c49297cf54df5624", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536164662), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 37479.0 }, max: { _id: 38416.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 51, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:24.665 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 37479.0 }, max: { _id: 38416.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:24.665 [conn4] moveChunk updating self version to: 42|1||51276475bd1f99446659365b through { _id: 38416.0 } -> { _id: 39353.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:24.665 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a499334798f3e47ed6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536164665), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 37479.0 }, max: { _id: 38416.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:24.665 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:24.665 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:24.665 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 41000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 41000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 42000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:24.666 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:24.666 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:24.666 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:24.666 [cleanupOldData-512764a499334798f3e47ed7] (start) waiting to cleanup test.foo from { _id: 37479.0 } -> { _id: 38416.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:24.666 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:24.666 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a499334798f3e47ed8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536164666), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 37479.0 }, max: { _id: 38416.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:24.666 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:24.667 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 317 version: 42|1||51276475bd1f99446659365b based on: 41|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:24.668 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 318 version: 42|1||51276475bd1f99446659365b based on: 41|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:24.669 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 41|1||000000000000000000000000min: { _id: 18479.0 }max: { _id: 18941.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:24.669 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:24.669 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 18479.0 }, max: { _id: 18941.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_18479.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:24.670 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 319 version: 42|1||51276475bd1f99446659365b based on: 42|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:24.670 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 42|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:24.670 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 42000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 319
m30999| Fri Feb 22 12:29:24.670 [conn1] setShardVersion success: { oldVersion: Timestamp 41000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:24.671 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a499334798f3e47ed9
m30001| Fri Feb 22 12:29:24.671 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a499334798f3e47eda", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536164671), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 18479.0 }, max: { _id: 18941.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:24.672 [conn4] moveChunk request accepted at version 41|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:24.673 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:24.673 [migrateThread] starting receiving-end of migration of chunk { _id: 18479.0 } -> { _id: 18941.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:24.684 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18479.0 }, max: { _id: 18941.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 130, clonedBytes: 135590, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:24.686 [cleanupOldData-512764a499334798f3e47ed7] waiting to remove documents for test.foo from { _id: 37479.0 } -> { _id: 38416.0 }
m30001| Fri Feb 22 12:29:24.694 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18479.0 }, max: { _id: 18941.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 311, clonedBytes: 324373, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:24.702 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:24.703 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 18479.0 } -> { _id: 18941.0 }
m30001| Fri Feb 22 12:29:24.704 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18479.0 }, max: { _id: 18941.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:24.705 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 18479.0 } -> { _id: 18941.0 }
m30001| Fri Feb 22 12:29:24.714 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18479.0 }, max: { _id: 18941.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:24.714 [conn4] moveChunk setting version to: 42|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:24.714 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:24.715 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 18479.0 } -> { _id: 18941.0 }
m30000| Fri Feb 22 12:29:24.715 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 18479.0 } -> { _id: 18941.0 }
m30000| Fri Feb 22 12:29:24.715 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a4c49297cf54df5625", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536164715), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 18479.0 }, max: { _id: 18941.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:24.717 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 320 version: 41|1||51276475bd1f99446659365c based on: 41|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:24.718 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 41|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:24.718 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 41000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 320
m30001| Fri Feb 22 12:29:24.718 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:24.725 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 18479.0 }, max: { _id: 18941.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:24.725 [conn4] moveChunk updating self version to: 42|1||51276475bd1f99446659365c through { _id: 18941.0 } -> { _id: 19403.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:24.725 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a499334798f3e47edb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536164725), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 18479.0 }, max: { _id: 18941.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:24.725 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:24.725 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 41000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 41000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 42000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:24.725 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:24.725 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:24.726 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:24.726 [cleanupOldData-512764a499334798f3e47edc] (start) waiting to cleanup test.bar from { _id: 18479.0 } -> { _id: 18941.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:24.726 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:24.726 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:24.726 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:24-512764a499334798f3e47edd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536164726), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 18479.0 }, max: { _id: 18941.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:24.726 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:24.727 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 321 version: 42|1||51276475bd1f99446659365c based on: 41|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:24.729 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 322 version: 42|1||51276475bd1f99446659365c based on: 41|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:24.730 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:24.731 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:24.732 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 323 version: 42|1||51276475bd1f99446659365c based on: 42|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:24.732 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 42|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:24.732 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 42000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 323 m30999| Fri Feb 22 12:29:24.732 [conn1] setShardVersion success: { oldVersion: Timestamp 41000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:24.746 [cleanupOldData-512764a499334798f3e47edc] waiting to remove documents for test.bar from { _id: 18479.0 } -> { _id: 18941.0 } m30001| Fri Feb 22 12:29:25.174 [cleanupOldData-512764a399334798f3e47ecd] moveChunk deleted 937 documents for test.foo from { _id: 36542.0 } -> { _id: 37479.0 } m30001| Fri Feb 22 12:29:25.174 [cleanupOldData-512764a499334798f3e47ed7] moveChunk starting delete for: test.foo from { _id: 37479.0 } -> { _id: 38416.0 } 43000 m30999| Fri Feb 22 12:29:25.731 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:25.732 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:25.732 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:25 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764a5bd1f994466593686" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764a4bd1f994466593685" } } m30999| Fri Feb 22 12:29:25.733 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a5bd1f994466593686 m30999| Fri Feb 22 12:29:25.733 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:25.733 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:25.733 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:25.735 [Balancer] shard0001 has more chunks me:66 best: shard0000:41 m30999| Fri Feb 22 12:29:25.735 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:25.735 [Balancer] donor : shard0001 chunks on 66 m30999| Fri Feb 22 12:29:25.735 [Balancer] receiver : shard0000 chunks on 41 m30999| Fri Feb 22 12:29:25.735 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:25.735 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_38416.0", lastmod: Timestamp 42000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 38416.0 }, max: { _id: 39353.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:25.738 [Balancer] shard0001 has more chunks me:176 best: shard0000:41 m30999| Fri Feb 22 12:29:25.738 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:25.738 [Balancer] donor : shard0001 chunks on 176 m30999| Fri Feb 22 12:29:25.738 [Balancer] receiver : shard0000 chunks on 41 m30999| Fri Feb 22 12:29:25.738 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:25.738 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_18941.0", lastmod: Timestamp 42000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 18941.0 }, max: { _id: 19403.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:25.738 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 42|1||000000000000000000000000min: { _id: 38416.0 }max: { _id: 39353.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:25.738 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:25.738 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 38416.0 }, max: { _id: 39353.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_38416.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:25.739 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a599334798f3e47ede m30001| Fri Feb 22 12:29:25.739 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a599334798f3e47edf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536165739), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 38416.0 }, max: { _id: 39353.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:25.741 [conn4] moveChunk request accepted at version 42|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:25.744 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:25.744 [migrateThread] starting receiving-end of migration of chunk { _id: 38416.0 } -> { _id: 39353.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:25.754 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 38416.0 }, max: { _id: 39353.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 131, clonedBytes: 69430, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:25.765 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 38416.0 }, max: { _id: 39353.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 432, clonedBytes: 228960, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:25.775 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 38416.0 }, max: { _id: 39353.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 739, clonedBytes: 391670, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:25.781 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:25.781 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 38416.0 } -> { _id: 39353.0 } m30000| Fri Feb 22 12:29:25.784 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 38416.0 } -> { _id: 39353.0 } m30001| Fri Feb 22 12:29:25.785 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 38416.0 }, max: { _id: 39353.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:25.785 [conn4] moveChunk setting version to: 43|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:25.785 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:25.787 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 324 version: 42|1||51276475bd1f99446659365b based on: 42|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:25.787 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 42|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:25.787 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 42000|1, versionEpoch: 
ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 324 m30001| Fri Feb 22 12:29:25.787 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:25.794 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 38416.0 } -> { _id: 39353.0 } m30000| Fri Feb 22 12:29:25.794 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 38416.0 } -> { _id: 39353.0 } m30000| Fri Feb 22 12:29:25.794 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a5c49297cf54df5626", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536165794), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 38416.0 }, max: { _id: 39353.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 36, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:25.795 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 38416.0 }, max: { _id: 39353.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:25.795 [conn4] moveChunk updating self version to: 43|1||51276475bd1f99446659365b through { _id: 39353.0 } -> { _id: 40290.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:25.796 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a599334798f3e47ee0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536165796), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 38416.0 }, max: { _id: 39353.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:25.796 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:25.796 [conn4] 
MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:25.796 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:25.796 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 42000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 42000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 43000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:25.796 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:25.796 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:25.796 [cleanupOldData-512764a599334798f3e47ee1] (start) waiting to cleanup test.foo from { _id: 38416.0 } -> { _id: 39353.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:25.796 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:25.796 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a599334798f3e47ee2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536165796), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 38416.0 }, max: { _id: 39353.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 40, step5 of 6: 10, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:25.796 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:25.797 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 325 version: 43|1||51276475bd1f99446659365b based on: 42|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:25.798 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 326 version: 43|1||51276475bd1f99446659365b based on: 42|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:25.799 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 42|1||000000000000000000000000min: { _id: 18941.0 }max: { _id: 19403.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:25.799 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:25.799 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 18941.0 }, max: { _id: 19403.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_18941.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:25.800 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 327 version: 43|1||51276475bd1f99446659365b based on: 43|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:25.800 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a599334798f3e47ee3 m30001| Fri 
Feb 22 12:29:25.800 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a599334798f3e47ee4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536165800), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 18941.0 }, max: { _id: 19403.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:25.800 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 43|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:25.800 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 43000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 327 m30999| Fri Feb 22 12:29:25.800 [conn1] setShardVersion success: { oldVersion: Timestamp 42000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:25.800 [conn4] moveChunk request accepted at version 42|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:25.801 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:25.802 [migrateThread] starting receiving-end of migration of chunk { _id: 18941.0 } -> { _id: 19403.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:25.812 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18941.0 }, max: { _id: 19403.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 133, clonedBytes: 138719, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:25.816 [cleanupOldData-512764a599334798f3e47ee1] waiting to remove documents for test.foo from { _id: 38416.0 } -> { _id: 39353.0 } m30001| Fri Feb 22 12:29:25.822 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", 
from: "localhost:30001", min: { _id: 18941.0 }, max: { _id: 19403.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 313, clonedBytes: 326459, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:25.831 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:25.831 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 18941.0 } -> { _id: 19403.0 } m30001| Fri Feb 22 12:29:25.832 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18941.0 }, max: { _id: 19403.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:25.833 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 18941.0 } -> { _id: 19403.0 } m30001| Fri Feb 22 12:29:25.842 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 18941.0 }, max: { _id: 19403.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:25.843 [conn4] moveChunk setting version to: 43|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:25.843 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:25.843 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 18941.0 } -> { _id: 19403.0 } m30000| Fri Feb 22 12:29:25.843 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 18941.0 } -> { _id: 19403.0 } m30000| Fri Feb 22 12:29:25.843 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a5c49297cf54df5627", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536165843), what: "moveChunk.to", ns: "test.bar", 
details: { min: { _id: 18941.0 }, max: { _id: 19403.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:25.845 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 328 version: 42|1||51276475bd1f99446659365c based on: 42|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:25.845 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 42|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:25.845 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 42000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 328 m30001| Fri Feb 22 12:29:25.845 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:25.853 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 18941.0 }, max: { _id: 19403.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:25.853 [conn4] moveChunk updating self version to: 43|1||51276475bd1f99446659365c through { _id: 19403.0 } -> { _id: 19865.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:25.854 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a599334798f3e47ee5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536165854), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 18941.0 }, max: { _id: 19403.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:25.854 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 42000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 42000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 43000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:25.854 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:25.854 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:25.854 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:25.854 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:25.854 [cleanupOldData-512764a599334798f3e47ee6] (start) waiting to cleanup test.bar from { _id: 18941.0 } -> { _id: 19403.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:25.854 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:25.854 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:25.854 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:25-512764a599334798f3e47ee7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536165854), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 18941.0 }, max: { _id: 19403.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:25.854 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:25.855 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 329 version: 43|1||51276475bd1f99446659365c based on: 42|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:25.857 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 330 version: 43|1||51276475bd1f99446659365c based on: 42|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:25.858 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:25.858 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:25.859 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 331 version: 43|1||51276475bd1f99446659365c based on: 43|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:25.859 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 43|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:25.860 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 43000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 331 m30999| Fri Feb 22 12:29:25.860 [conn1] setShardVersion success: { oldVersion: Timestamp 42000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:25.874 [cleanupOldData-512764a599334798f3e47ee6] waiting to remove documents for test.bar from { _id: 18941.0 } -> { _id: 19403.0 } m30001| Fri Feb 22 12:29:26.292 [cleanupOldData-512764a499334798f3e47ed7] moveChunk deleted 937 documents for test.foo from { _id: 37479.0 } -> { _id: 38416.0 } m30001| Fri Feb 22 12:29:26.292 [cleanupOldData-512764a599334798f3e47ee6] moveChunk starting delete for: test.bar from { _id: 18941.0 } -> { _id: 19403.0 } 44000 m30999| Fri Feb 22 12:29:26.859 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:26.859 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:26.859 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:26 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764a6bd1f994466593687" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764a5bd1f994466593686" } } m30999| Fri Feb 22 12:29:26.860 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a6bd1f994466593687 m30999| Fri Feb 22 12:29:26.860 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:26.860 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:26.860 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:26.862 [Balancer] shard0001 has more chunks me:65 best: shard0000:42 m30999| Fri Feb 22 12:29:26.862 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:26.862 [Balancer] donor : shard0001 chunks on 65 m30999| Fri Feb 22 12:29:26.862 [Balancer] receiver : shard0000 chunks on 42 m30999| Fri Feb 22 12:29:26.862 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:26.862 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_39353.0", lastmod: Timestamp 43000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:26.864 [Balancer] shard0001 has more chunks me:175 best: shard0000:42 m30999| Fri Feb 22 12:29:26.864 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:26.864 [Balancer] donor : shard0001 chunks on 175 m30999| Fri Feb 22 12:29:26.864 [Balancer] receiver : shard0000 chunks on 42 m30999| Fri Feb 22 12:29:26.864 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:26.864 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_19403.0", lastmod: Timestamp 43000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 19403.0 }, max: { _id: 19865.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:26.864 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 43|1||000000000000000000000000min: { _id: 39353.0 }max: { _id: 40290.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:26.864 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:26.865 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 39353.0 }, max: { _id: 40290.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_39353.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:26.865 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a699334798f3e47ee8 m30001| Fri Feb 22 12:29:26.865 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:26-512764a699334798f3e47ee9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536166865), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 39353.0 }, max: { _id: 40290.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:26.866 [conn4] moveChunk request accepted at version 43|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:26.869 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:26.869 [migrateThread] starting receiving-end of migration of chunk { _id: 39353.0 } -> { _id: 40290.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:26.879 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 104, clonedBytes: 55120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:26.889 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 284, clonedBytes: 150520, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:26.899 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 476, clonedBytes: 252280, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:26.910 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 680, clonedBytes: 360400, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:26.923 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:26.923 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 39353.0 } -> { _id: 40290.0 } m30001| Fri Feb 22 12:29:26.926 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:26.927 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 39353.0 } -> { _id: 40290.0 } m30001| Fri Feb 22 12:29:26.958 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:26.958 [conn4] moveChunk setting version to: 44|0||51276475bd1f99446659365b 
m30000| Fri Feb 22 12:29:26.958 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:26.960 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 332 version: 43|1||51276475bd1f99446659365b based on: 43|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:26.960 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 43|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:26.960 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 43000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 332 m30001| Fri Feb 22 12:29:26.960 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:26.965 [cleanupOldData-512764a599334798f3e47ee6] moveChunk deleted 462 documents for test.bar from { _id: 18941.0 } -> { _id: 19403.0 } m30001| Fri Feb 22 12:29:26.965 [cleanupOldData-512764a599334798f3e47ee1] moveChunk starting delete for: test.foo from { _id: 38416.0 } -> { _id: 39353.0 } m30000| Fri Feb 22 12:29:26.967 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 39353.0 } -> { _id: 40290.0 } m30000| Fri Feb 22 12:29:26.967 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 39353.0 } -> { _id: 40290.0 } m30000| Fri Feb 22 12:29:26.967 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:26-512764a6c49297cf54df5628", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536166967), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 39353.0 }, max: { _id: 40290.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 53, step4 of 5: 0, step5 of 5: 44 } } m30001| Fri Feb 22 12:29:26.968 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: 
"localhost:30001", min: { _id: 39353.0 }, max: { _id: 40290.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:26.968 [conn4] moveChunk updating self version to: 44|1||51276475bd1f99446659365b through { _id: 40290.0 } -> { _id: 41227.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:26.969 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:26-512764a699334798f3e47eea", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536166969), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 39353.0 }, max: { _id: 40290.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:26.969 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 43000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 43000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 44000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:26.969 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:26.969 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:26.969 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:26.969 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:26.969 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:26.969 [cleanupOldData-512764a699334798f3e47eeb] (start) waiting to cleanup test.foo from { _id: 39353.0 } -> { _id: 40290.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:26.970 [conn4] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:26.970 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:26-512764a699334798f3e47eec", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536166970), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 39353.0 }, max: { _id: 40290.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 12:29:26.970 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 39353.0 }, max: { _id: 40290.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_39353.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:40 r:2215 w:41 reslen:37 105ms m30999| Fri Feb 22 12:29:26.970 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:26.971 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 333 version: 44|1||51276475bd1f99446659365b based on: 43|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:26.972 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 334 version: 44|1||51276475bd1f99446659365b based on: 43|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:26.972 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 43|1||000000000000000000000000min: { _id: 19403.0 }max: { _id: 19865.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:26.972 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:26.972 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 19403.0 }, max: { 
_id: 19865.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_19403.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:26.973 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 335 version: 44|1||51276475bd1f99446659365b based on: 44|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:26.973 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a699334798f3e47eed m30001| Fri Feb 22 12:29:26.973 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:26-512764a699334798f3e47eee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536166973), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 19403.0 }, max: { _id: 19865.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:26.973 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 44|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:26.973 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 44000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 335 m30999| Fri Feb 22 12:29:26.973 [conn1] setShardVersion success: { oldVersion: Timestamp 43000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:26.974 [conn4] moveChunk request accepted at version 43|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:26.975 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:26.975 [migrateThread] starting receiving-end of migration of chunk { _id: 19403.0 } -> { _id: 19865.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:26.985 [conn4] moveChunk data 
transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19403.0 }, max: { _id: 19865.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 132, clonedBytes: 137676, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:26.989 [cleanupOldData-512764a699334798f3e47eeb] waiting to remove documents for test.foo from { _id: 39353.0 } -> { _id: 40290.0 } m30001| Fri Feb 22 12:29:26.995 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19403.0 }, max: { _id: 19865.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 305, clonedBytes: 318115, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:27.005 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:27.005 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 19403.0 } -> { _id: 19865.0 } m30001| Fri Feb 22 12:29:27.006 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19403.0 }, max: { _id: 19865.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:27.008 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 19403.0 } -> { _id: 19865.0 } m30001| Fri Feb 22 12:29:27.016 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19403.0 }, max: { _id: 19865.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:27.016 [conn4] moveChunk setting version to: 44|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:27.016 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:27.018 [migrateThread] 
migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 19403.0 } -> { _id: 19865.0 } m30000| Fri Feb 22 12:29:27.018 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 19403.0 } -> { _id: 19865.0 } m30000| Fri Feb 22 12:29:27.018 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:27-512764a7c49297cf54df5629", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536167018), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 19403.0 }, max: { _id: 19865.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 13 } } m30999| Fri Feb 22 12:29:27.019 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 336 version: 43|1||51276475bd1f99446659365c based on: 43|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:27.019 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 43|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:27.019 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 43000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 336 m30001| Fri Feb 22 12:29:27.019 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:27.026 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 19403.0 }, max: { _id: 19865.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:27.026 [conn4] moveChunk updating self version to: 44|1||51276475bd1f99446659365c through { _id: 19865.0 } -> { _id: 20327.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:27.027 [conn4] about to log metadata event: { 
_id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:27-512764a799334798f3e47eef", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536167027), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 19403.0 }, max: { _id: 19865.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:27.027 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:27.027 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:27.027 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:27.027 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:27.027 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 43000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 43000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 44000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:27.027 [cleanupOldData-512764a799334798f3e47ef0] (start) waiting to cleanup test.bar from { _id: 19403.0 } -> { _id: 19865.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:27.027 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:27.028 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:27.028 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:27-512764a799334798f3e47ef1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536167028), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 19403.0 }, max: { _id: 19865.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:27.028 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:27.029 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 337 version: 44|1||51276475bd1f99446659365c based on: 43|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:27.031 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 338 version: 44|1||51276475bd1f99446659365c based on: 43|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:27.032 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:27.032 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:27.033 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 339 version: 44|1||51276475bd1f99446659365c based on: 44|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:27.033 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 44|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:27.034 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 44000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 339 m30999| Fri Feb 22 12:29:27.034 [conn1] setShardVersion success: { oldVersion: Timestamp 43000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:27.047 [cleanupOldData-512764a799334798f3e47ef0] waiting to remove documents for test.bar from { _id: 19403.0 } -> { _id: 19865.0 } 45000 m30001| Fri Feb 22 12:29:27.868 [cleanupOldData-512764a599334798f3e47ee1] moveChunk deleted 937 documents for test.foo from { _id: 38416.0 } -> { _id: 39353.0 } m30001| Fri Feb 22 12:29:27.868 [cleanupOldData-512764a799334798f3e47ef0] moveChunk starting delete for: test.bar from { _id: 19403.0 } -> { _id: 19865.0 } m30999| Fri Feb 22 12:29:28.032 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:28.033 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:28.033 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:28 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764a8bd1f994466593688" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764a6bd1f994466593687" } } m30999| Fri Feb 22 12:29:28.034 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a8bd1f994466593688 m30999| Fri Feb 22 12:29:28.034 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:28.034 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:28.034 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:28.035 [Balancer] shard0001 has more chunks me:64 best: shard0000:43 m30999| Fri Feb 22 12:29:28.035 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:28.035 [Balancer] donor : shard0001 chunks on 64 m30999| Fri Feb 22 12:29:28.035 [Balancer] receiver : shard0000 chunks on 43 m30999| Fri Feb 22 12:29:28.035 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:28.035 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_40290.0", lastmod: Timestamp 44000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:28.038 [Balancer] shard0001 has more chunks me:174 best: shard0000:43 m30999| Fri Feb 22 12:29:28.038 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:28.038 [Balancer] donor : shard0001 chunks on 174 m30999| Fri Feb 22 12:29:28.038 [Balancer] receiver : shard0000 chunks on 43 m30999| Fri Feb 22 12:29:28.038 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:28.038 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_19865.0", lastmod: Timestamp 44000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 19865.0 }, max: { _id: 20327.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:28.038 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 44|1||000000000000000000000000min: { _id: 40290.0 }max: { _id: 41227.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:28.038 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:28.038 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 40290.0 }, max: { _id: 41227.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_40290.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:28.038 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a899334798f3e47ef2 m30001| Fri Feb 22 12:29:28.039 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a899334798f3e47ef3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536168039), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 40290.0 }, max: { _id: 41227.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:28.039 [conn4] moveChunk request accepted at version 44|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:28.041 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:28.042 [migrateThread] starting receiving-end of migration of chunk { _id: 40290.0 } -> { _id: 41227.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:28.052 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 63600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.062 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 318, clonedBytes: 168540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.072 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 519, clonedBytes: 275070, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.082 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 700, clonedBytes: 371000, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.098 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 911, clonedBytes: 482830, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:28.100 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:28.100 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 40290.0 } -> { _id: 41227.0 } m30000| Fri Feb 22 12:29:28.101 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 40290.0 } -> { _id: 41227.0 } m30001| Fri Feb 22 12:29:28.131 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.131 [conn4] moveChunk setting version to: 45|0||51276475bd1f99446659365b 
m30000| Fri Feb 22 12:29:28.131 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:28.131 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 40290.0 } -> { _id: 41227.0 } m30000| Fri Feb 22 12:29:28.131 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 40290.0 } -> { _id: 41227.0 } m30000| Fri Feb 22 12:29:28.132 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a8c49297cf54df562a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536168131), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 40290.0 }, max: { _id: 41227.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 57, step4 of 5: 0, step5 of 5: 31 } } m30999| Fri Feb 22 12:29:28.133 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 340 version: 44|1||51276475bd1f99446659365b based on: 44|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:28.133 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 44|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:28.134 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 44000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 340 m30001| Fri Feb 22 12:29:28.134 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:28.140 [cleanupOldData-512764a799334798f3e47ef0] moveChunk deleted 462 documents for test.bar from { _id: 19403.0 } -> { _id: 19865.0 } m30001| Fri Feb 22 12:29:28.140 [cleanupOldData-512764a699334798f3e47eeb] moveChunk starting delete for: test.foo from { _id: 39353.0 } -> { _id: 40290.0 } m30001| Fri Feb 22 12:29:28.141 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: 
"localhost:30001", min: { _id: 40290.0 }, max: { _id: 41227.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:28.141 [conn4] moveChunk updating self version to: 45|1||51276475bd1f99446659365b through { _id: 41227.0 } -> { _id: 42164.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:28.142 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a899334798f3e47ef4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536168142), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 40290.0 }, max: { _id: 41227.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:28.142 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:28.142 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:28.142 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:28.142 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 44000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 44000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 45000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:28.142 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:28.142 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:28.142 [cleanupOldData-512764a899334798f3e47ef5] (start) waiting to cleanup test.foo from { _id: 40290.0 } -> { _id: 41227.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:28.142 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:28.142 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a899334798f3e47ef6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536168142), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 40290.0 }, max: { _id: 41227.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } } m30001| Fri Feb 22 12:29:28.142 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 40290.0 }, max: { _id: 41227.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_40290.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:37 r:2035 w:39 reslen:37 104ms m30999| Fri Feb 22 12:29:28.143 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:28.143 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 341 
version: 45|1||51276475bd1f99446659365b based on: 44|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:28.144 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 342 version: 45|1||51276475bd1f99446659365b based on: 44|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:28.145 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 44|1||000000000000000000000000min: { _id: 19865.0 }max: { _id: 20327.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:28.145 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:28.145 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 19865.0 }, max: { _id: 20327.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_19865.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:28.146 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 343 version: 45|1||51276475bd1f99446659365b based on: 45|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:28.146 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a899334798f3e47ef7 m30999| Fri Feb 22 12:29:28.146 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 45|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:28.146 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a899334798f3e47ef8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536168146), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 19865.0 }, max: { _id: 20327.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:28.146 [conn1] setShardVersion shard0001 localhost:30001 test.foo { 
setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 45000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 343 m30999| Fri Feb 22 12:29:28.146 [conn1] setShardVersion success: { oldVersion: Timestamp 44000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:28.147 [conn4] moveChunk request accepted at version 44|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:28.149 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:28.149 [migrateThread] starting receiving-end of migration of chunk { _id: 19865.0 } -> { _id: 20327.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:28.159 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19865.0 }, max: { _id: 20327.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 111, clonedBytes: 115773, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.162 [cleanupOldData-512764a899334798f3e47ef5] waiting to remove documents for test.foo from { _id: 40290.0 } -> { _id: 41227.0 } m30001| Fri Feb 22 12:29:28.169 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19865.0 }, max: { _id: 20327.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 265, clonedBytes: 276395, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:28.179 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19865.0 }, max: { _id: 20327.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 430, clonedBytes: 448490, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:28.182 [migrateThread] Waiting for replication to catch up before 
entering critical section
m30000| Fri Feb 22 12:29:28.182 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 19865.0 } -> { _id: 20327.0 }
m30000| Fri Feb 22 12:29:28.184 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 19865.0 } -> { _id: 20327.0 }
m30001| Fri Feb 22 12:29:28.189 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 19865.0 }, max: { _id: 20327.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:28.190 [conn4] moveChunk setting version to: 45|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:28.190 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:28.192 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 344 version: 44|1||51276475bd1f99446659365c based on: 44|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:28.192 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 44|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:28.192 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 44000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 344
m30001| Fri Feb 22 12:29:28.192 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:28.195 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 19865.0 } -> { _id: 20327.0 }
m30000| Fri Feb 22 12:29:28.195 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 19865.0 } -> { _id: 20327.0 }
m30000| Fri Feb 22 12:29:28.195 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a8c49297cf54df562b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536168195), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 19865.0 }, max: { _id: 20327.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 32, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:28.200 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 19865.0 }, max: { _id: 20327.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:28.200 [conn4] moveChunk updating self version to: 45|1||51276475bd1f99446659365c through { _id: 20327.0 } -> { _id: 20789.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:28.201 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a899334798f3e47ef9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536168201), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 19865.0 }, max: { _id: 20327.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:28.201 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:28.201 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 44000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 44000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 45000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:28.201 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:28.201 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:28.201 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:28.201 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:28.201 [cleanupOldData-512764a899334798f3e47efa] (start) waiting to cleanup test.bar from { _id: 19865.0 } -> { _id: 20327.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:28.201 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:28.201 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:28-512764a899334798f3e47efb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536168201), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 19865.0 }, max: { _id: 20327.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:28.202 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:28.203 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 345 version: 45|1||51276475bd1f99446659365c based on: 44|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:28.204 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 346 version: 45|1||51276475bd1f99446659365c based on: 44|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:28.205 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:28.205 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:28.206 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 347 version: 45|1||51276475bd1f99446659365c based on: 45|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:28.207 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 45|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:28.207 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 45000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 347
m30999| Fri Feb 22 12:29:28.207 [conn1] setShardVersion success: { oldVersion: Timestamp 44000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:28.221 [cleanupOldData-512764a899334798f3e47efa] waiting to remove documents for test.bar from { _id: 19865.0 } -> { _id: 20327.0 }
46000
m30999| Fri Feb 22 12:29:29.206 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:29.206 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:29.207 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:29 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764a9bd1f994466593689" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764a8bd1f994466593688" } }
m30999| Fri Feb 22 12:29:29.207 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764a9bd1f994466593689
m30999| Fri Feb 22 12:29:29.207 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:29.207 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:29.207 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:29.210 [Balancer] shard0001 has more chunks me:63 best: shard0000:44
m30999| Fri Feb 22 12:29:29.210 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:29.210 [Balancer] donor : shard0001 chunks on 63
m30999| Fri Feb 22 12:29:29.210 [Balancer] receiver : shard0000 chunks on 44
m30999| Fri Feb 22 12:29:29.210 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:29.210 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_41227.0", lastmod: Timestamp 45000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:29.212 [Balancer] shard0001 has more chunks me:173 best: shard0000:44
m30999| Fri Feb 22 12:29:29.212 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:29.212 [Balancer] donor : shard0001 chunks on 173
m30999| Fri Feb 22 12:29:29.212 [Balancer] receiver : shard0000 chunks on 44
m30999| Fri Feb 22 12:29:29.212 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:29.212 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_20327.0", lastmod: Timestamp 45000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 20327.0 }, max: { _id: 20789.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:29.212 [Balancer] moving chunk ns: test.foo moving ( ns: test.foo shard: shard0001:localhost:30001 lastmod: 45|1||000000000000000000000000 min: { _id: 41227.0 } max: { _id: 42164.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:29.213 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:29.213 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 41227.0 }, max: { _id: 42164.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_41227.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:29.214 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a999334798f3e47efc
m30001| Fri Feb 22 12:29:29.214 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a999334798f3e47efd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536169214), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 41227.0 }, max: { _id: 42164.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:29.215 [conn4] moveChunk request accepted at version 45|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:29.218 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:29.218 [migrateThread] starting receiving-end of migration of chunk { _id: 41227.0 } -> { _id: 42164.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:29.229 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 125, clonedBytes: 66250, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:29.233 [cleanupOldData-512764a699334798f3e47eeb] moveChunk deleted 937 documents for test.foo from { _id: 39353.0 } -> { _id: 40290.0 }
m30001| Fri Feb 22 12:29:29.233 [cleanupOldData-512764a899334798f3e47efa] moveChunk starting delete for: test.bar from { _id: 19865.0 } -> { _id: 20327.0 }
m30001| Fri Feb 22 12:29:29.239 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 319, clonedBytes: 169070, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:29.249 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 514, clonedBytes: 272420, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:29.259 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 698, clonedBytes: 369940, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:29.273 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:29.273 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 41227.0 } -> { _id: 42164.0 }
m30001| Fri Feb 22 12:29:29.276 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:29.276 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 41227.0 } -> { _id: 42164.0 }
m30001| Fri Feb 22 12:29:29.308 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:29.308 [conn4] moveChunk setting version to: 46|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:29.308 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:29.311 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 348 version: 45|1||51276475bd1f99446659365b based on: 45|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:29.311 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 45|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:29.311 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 45000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 348
m30001| Fri Feb 22 12:29:29.311 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:29.317 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 41227.0 } -> { _id: 42164.0 }
m30000| Fri Feb 22 12:29:29.317 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 41227.0 } -> { _id: 42164.0 }
m30000| Fri Feb 22 12:29:29.317 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a9c49297cf54df562c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536169317), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 41227.0 }, max: { _id: 42164.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 53, step4 of 5: 0, step5 of 5: 44 } }
m30001| Fri Feb 22 12:29:29.318 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 41227.0 }, max: { _id: 42164.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:29.318 [conn4] moveChunk updating self version to: 46|1||51276475bd1f99446659365b through { _id: 42164.0 } -> { _id: 43101.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:29.319 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a999334798f3e47efe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536169319), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 41227.0 }, max: { _id: 42164.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:29.319 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:29.319 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:29.319 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 45000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 45000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 46000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:29.319 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:29.319 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:29.319 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:29.319 [cleanupOldData-512764a999334798f3e47eff] (start) waiting to cleanup test.foo from { _id: 41227.0 } -> { _id: 42164.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:29.320 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:29.320 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a999334798f3e47f00", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536169320), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 41227.0 }, max: { _id: 42164.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 89, step5 of 6: 11, step6 of 6: 0 } }
m30001| Fri Feb 22 12:29:29.320 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 41227.0 }, max: { _id: 42164.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_41227.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:39 r:2868 w:54 reslen:37 107ms
m30999| Fri Feb 22 12:29:29.320 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:29.321 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 349 version: 46|1||51276475bd1f99446659365b based on: 45|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:29.322 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 350 version: 46|1||51276475bd1f99446659365b based on: 45|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:29.323 [Balancer] moving chunk ns: test.bar moving ( ns: test.bar shard: shard0001:localhost:30001 lastmod: 45|1||000000000000000000000000 min: { _id: 20327.0 } max: { _id: 20789.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:29.323 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:29.323 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 20327.0 }, max: { _id: 20789.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_20327.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:29.324 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 351 version: 46|1||51276475bd1f99446659365b based on: 46|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:29.324 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 46|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:29.324 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 46000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 351
m30999| Fri Feb 22 12:29:29.324 [conn1] setShardVersion success: { oldVersion: Timestamp 45000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:29.325 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764a999334798f3e47f01
m30001| Fri Feb 22 12:29:29.325 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a999334798f3e47f02", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536169325), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 20327.0 }, max: { _id: 20789.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:29.326 [conn4] moveChunk request accepted at version 45|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:29.328 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:29.328 [migrateThread] starting receiving-end of migration of chunk { _id: 20327.0 } -> { _id: 20789.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:29.339 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20327.0 }, max: { _id: 20789.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 133504, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:29.339 [cleanupOldData-512764a999334798f3e47eff] waiting to remove documents for test.foo from { _id: 41227.0 } -> { _id: 42164.0 }
m30001| Fri Feb 22 12:29:29.349 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20327.0 }, max: { _id: 20789.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 306, clonedBytes: 319158, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:29.358 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:29.358 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 20327.0 } -> { _id: 20789.0 }
m30001| Fri Feb 22 12:29:29.359 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20327.0 }, max: { _id: 20789.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:29.361 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 20327.0 } -> { _id: 20789.0 }
m30001| Fri Feb 22 12:29:29.369 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20327.0 }, max: { _id: 20789.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:29.369 [conn4] moveChunk setting version to: 46|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:29.370 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:29.371 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 20327.0 } -> { _id: 20789.0 }
m30000| Fri Feb 22 12:29:29.371 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 20327.0 } -> { _id: 20789.0 }
m30000| Fri Feb 22 12:29:29.371 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a9c49297cf54df562d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536169371), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 20327.0 }, max: { _id: 20789.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:29.372 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 352 version: 45|1||51276475bd1f99446659365c based on: 45|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:29.373 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 45|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:29.373 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 45000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 352
m30001| Fri Feb 22 12:29:29.373 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:29.380 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 20327.0 }, max: { _id: 20789.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:29.380 [conn4] moveChunk updating self version to: 46|1||51276475bd1f99446659365c through { _id: 20789.0 } -> { _id: 21251.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:29.381 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a999334798f3e47f03", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536169380), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 20327.0 }, max: { _id: 20789.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:29.381 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:29.381 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:29.381 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 45000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 45000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 46000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:29.381 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:29.381 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:29.381 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:29.381 [cleanupOldData-512764a999334798f3e47f04] (start) waiting to cleanup test.bar from { _id: 20327.0 } -> { _id: 20789.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:29.381 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:29.381 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:29-512764a999334798f3e47f05", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536169381), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 20327.0 }, max: { _id: 20789.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:29.381 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:29.383 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 353 version: 46|1||51276475bd1f99446659365c based on: 45|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:29.385 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 354 version: 46|1||51276475bd1f99446659365c based on: 45|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:29.386 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:29.386 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:29.387 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 355 version: 46|1||51276475bd1f99446659365c based on: 46|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:29.387 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 46|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:29.388 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 46000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 355
m30999| Fri Feb 22 12:29:29.388 [conn1] setShardVersion success: { oldVersion: Timestamp 45000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:29.401 [cleanupOldData-512764a999334798f3e47f04] waiting to remove documents for test.bar from { _id: 20327.0 } -> { _id: 20789.0 }
47000
m30001| Fri Feb 22 12:29:29.770 [cleanupOldData-512764a899334798f3e47efa] moveChunk deleted 462 documents for test.bar from { _id: 19865.0 } -> { _id: 20327.0 }
m30001| Fri Feb 22 12:29:29.770 [cleanupOldData-512764a999334798f3e47f04] moveChunk starting delete for: test.bar from { _id: 20327.0 } -> { _id: 20789.0 }
m30999| Fri Feb 22 12:29:30.387 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:30.387 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:30.387 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:30 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764aabd1f99446659368a" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764a9bd1f994466593689" } }
m30999| Fri Feb 22 12:29:30.387 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764aabd1f99446659368a
m30999| Fri Feb 22 12:29:30.387 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:30.388 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:30.388 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:30.389 [Balancer] shard0001 has more chunks me:62 best: shard0000:45
m30999| Fri Feb 22 12:29:30.389 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:30.389 [Balancer] donor : shard0001 chunks on 62
m30999| Fri Feb 22 12:29:30.390 [Balancer] receiver : shard0000 chunks on 45
m30999| Fri Feb 22 12:29:30.390 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:30.390 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_42164.0", lastmod: Timestamp 46000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:30.392 [Balancer] shard0001 has more chunks me:172 best: shard0000:45
m30999| Fri Feb 22 12:29:30.392 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:30.392 [Balancer] donor : shard0001 chunks on 172
m30999| Fri Feb 22 12:29:30.392 [Balancer] receiver : shard0000 chunks on 45
m30999| Fri Feb 22 12:29:30.392 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:30.392 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_20789.0", lastmod: Timestamp 46000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 20789.0 }, max: { _id: 21251.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:30.392 [Balancer] moving chunk ns: test.foo moving ( ns: test.foo shard: shard0001:localhost:30001 lastmod: 46|1||000000000000000000000000 min: { _id: 42164.0 } max: { _id: 43101.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:30.392 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:30.392 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 42164.0 }, max: { _id: 43101.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_42164.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:30.393 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764aa99334798f3e47f06
m30001| Fri Feb 22 12:29:30.393 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aa99334798f3e47f07", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536170393), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 42164.0 }, max: { _id: 43101.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:30.394 [conn4] moveChunk request accepted at version 46|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:30.396 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:30.396 [migrateThread] starting receiving-end of migration of chunk { _id: 42164.0 } -> { _id: 43101.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:30.406 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 117, clonedBytes: 62010, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:30.417 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 315, clonedBytes: 166950, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:30.427 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 513, clonedBytes: 271890, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:30.433 [cleanupOldData-512764a999334798f3e47f04] moveChunk deleted 462 documents for test.bar from { _id: 20327.0 } -> { _id: 20789.0 }
m30001| Fri Feb 22 12:29:30.433 [cleanupOldData-512764a999334798f3e47eff] moveChunk starting delete for: test.foo from { _id: 41227.0 } -> { _id: 42164.0 }
m30001| Fri Feb 22 12:29:30.437 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 716, clonedBytes: 379480, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:30.451 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:30.451 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 42164.0 } -> { _id: 43101.0 }
m30000| Fri Feb 22 12:29:30.452 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 42164.0 } -> { _id: 43101.0 }
m30001| Fri Feb 22 12:29:30.453 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:30.453 [conn4] moveChunk setting version to: 47|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:30.453 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:30.455 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 356 version: 46|1||51276475bd1f99446659365b based on: 46|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:30.455 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 46|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:30.455 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 46000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 356 m30001| Fri Feb 22 12:29:30.455 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:30.462 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 42164.0 } -> { _id: 43101.0 } m30000| Fri Feb 22 12:29:30.462 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 42164.0 } -> { _id: 43101.0 } m30000| Fri Feb 22 12:29:30.462 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aac49297cf54df562e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536170462), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 42164.0 }, max: { _id: 43101.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 54, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:29:30.463 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 42164.0 }, max: { _id: 43101.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:30.463 [conn4] moveChunk updating self version to: 47|1||51276475bd1f99446659365b through { _id: 43101.0 } -> 
{ _id: 44038.0 } for collection 'test.foo' m30999| Fri Feb 22 12:29:30.464 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 46000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 46000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 47000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:30.464 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aa99334798f3e47f08", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536170464), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 42164.0 }, max: { _id: 43101.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:30.465 [Balancer] moveChunk result: { ok: 1.0 } m30001| Fri Feb 22 12:29:30.464 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:30.464 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:30.464 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:30.464 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:30.464 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:30.464 [cleanupOldData-512764aa99334798f3e47f09] (start) waiting to cleanup test.foo from { _id: 42164.0 } -> { _id: 43101.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:30.465 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:30.465 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aa99334798f3e47f0a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536170465), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 42164.0 }, max: { _id: 43101.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:30.465 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 357 version: 47|1||51276475bd1f99446659365b based on: 46|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:30.466 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 358 version: 47|1||51276475bd1f99446659365b based on: 46|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:30.467 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 46|1||000000000000000000000000min: { _id: 20789.0 }max: { _id: 21251.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:30.467 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:30.467 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 20789.0 }, max: { _id: 21251.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_20789.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:30.468 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 359 version: 47|1||51276475bd1f99446659365b based on: 47|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:30.468 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764aa99334798f3e47f0b m30001| Fri Feb 22 12:29:30.468 [conn4] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aa99334798f3e47f0c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536170468), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 20789.0 }, max: { _id: 21251.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:30.468 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 47|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:30.468 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 47000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 359 m30999| Fri Feb 22 12:29:30.468 [conn1] setShardVersion success: { oldVersion: Timestamp 46000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:30.469 [conn4] moveChunk request accepted at version 46|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:30.470 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:30.470 [migrateThread] starting receiving-end of migration of chunk { _id: 20789.0 } -> { _id: 21251.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:30.480 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20789.0 }, max: { _id: 21251.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 117, clonedBytes: 122031, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:30.484 [cleanupOldData-512764aa99334798f3e47f09] waiting to remove documents for test.foo from { _id: 42164.0 } -> { _id: 43101.0 } m30001| Fri Feb 22 12:29:30.490 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20789.0 }, max: { _id: 
21251.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 296, clonedBytes: 308728, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:30.500 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:30.500 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 20789.0 } -> { _id: 21251.0 } m30001| Fri Feb 22 12:29:30.500 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20789.0 }, max: { _id: 21251.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:30.503 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 20789.0 } -> { _id: 21251.0 } m30001| Fri Feb 22 12:29:30.511 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 20789.0 }, max: { _id: 21251.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:30.511 [conn4] moveChunk setting version to: 47|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:30.511 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:30.513 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 20789.0 } -> { _id: 21251.0 } m30000| Fri Feb 22 12:29:30.513 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 20789.0 } -> { _id: 21251.0 } m30000| Fri Feb 22 12:29:30.513 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aac49297cf54df562f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536170513), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 20789.0 }, max: { _id: 21251.0 }, step1 
of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:30.513 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 360 version: 46|1||51276475bd1f99446659365c based on: 46|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:30.514 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 46|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:30.514 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 46000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 360 m30001| Fri Feb 22 12:29:30.514 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:30.521 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 20789.0 }, max: { _id: 21251.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:30.521 [conn4] moveChunk updating self version to: 47|1||51276475bd1f99446659365c through { _id: 21251.0 } -> { _id: 21713.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:30.522 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aa99334798f3e47f0d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536170522), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 20789.0 }, max: { _id: 21251.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:30.522 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:30.522 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:30.522 [conn1] setShardVersion 
failed! m30999| { oldVersion: Timestamp 46000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 46000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 47000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:30.522 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:30.522 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:30.522 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:30.522 [cleanupOldData-512764aa99334798f3e47f0e] (start) waiting to cleanup test.bar from { _id: 20789.0 } -> { _id: 21251.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:30.523 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:30.523 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:30-512764aa99334798f3e47f0f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536170523), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 20789.0 }, max: { _id: 21251.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:30.523 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:30.524 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 361 version: 47|1||51276475bd1f99446659365c based on: 46|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:30.525 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 362 version: 47|1||51276475bd1f99446659365c based on: 46|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:30.526 [Balancer] *** end of balancing round m30999| Fri Feb 22 
12:29:30.526 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:29:30.528 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 363 version: 47|1||51276475bd1f99446659365c based on: 47|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:30.528 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 47|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:30.528 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 47000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 363 m30999| Fri Feb 22 12:29:30.528 [conn1] setShardVersion success: { oldVersion: Timestamp 46000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:30.542 [cleanupOldData-512764aa99334798f3e47f0e] waiting to remove documents for test.bar from { _id: 20789.0 } -> { _id: 21251.0 } 48000 m30001| Fri Feb 22 12:29:31.326 [cleanupOldData-512764a999334798f3e47eff] moveChunk deleted 937 documents for test.foo from { _id: 41227.0 } -> { _id: 42164.0 } m30001| Fri Feb 22 12:29:31.327 [cleanupOldData-512764aa99334798f3e47f0e] moveChunk starting delete for: test.bar from { _id: 20789.0 } -> { _id: 21251.0 } m30999| Fri Feb 22 12:29:31.527 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:31.527 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:31.528 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| 
"process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:31 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764abbd1f99446659368b" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764aabd1f99446659368a" } } m30999| Fri Feb 22 12:29:31.528 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764abbd1f99446659368b m30999| Fri Feb 22 12:29:31.528 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:31.528 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:31.528 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:31.530 [Balancer] shard0001 has more chunks me:61 best: shard0000:46 m30999| Fri Feb 22 12:29:31.530 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:31.530 [Balancer] donor : shard0001 chunks on 61 m30999| Fri Feb 22 12:29:31.530 [Balancer] receiver : shard0000 chunks on 46 m30999| Fri Feb 22 12:29:31.530 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:31.530 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_43101.0", lastmod: Timestamp 47000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 43101.0 }, max: { _id: 44038.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:31.532 [Balancer] shard0001 has more chunks me:171 best: shard0000:46 m30999| Fri Feb 22 12:29:31.532 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:31.532 [Balancer] donor : shard0001 chunks on 171 m30999| Fri Feb 22 12:29:31.532 [Balancer] receiver : shard0000 chunks on 46 m30999| Fri Feb 22 12:29:31.532 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:31.532 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_21251.0", lastmod: Timestamp 47000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 21251.0 }, max: { _id: 21713.0 }, shard: "shard0001" } from: 
shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:31.533 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 47|1||000000000000000000000000min: { _id: 43101.0 }max: { _id: 44038.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:31.533 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:31.533 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 43101.0 }, max: { _id: 44038.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_43101.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:31.534 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ab99334798f3e47f10 m30001| Fri Feb 22 12:29:31.534 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764ab99334798f3e47f11", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536171534), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 43101.0 }, max: { _id: 44038.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:31.535 [conn4] moveChunk request accepted at version 47|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:31.537 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:31.537 [migrateThread] starting receiving-end of migration of chunk { _id: 43101.0 } -> { _id: 44038.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:31.548 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 43101.0 }, max: { _id: 44038.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 189, clonedBytes: 100170, catchup: 0, steady: 0 }, ok: 1.0 } my mem 
used: 0 m30001| Fri Feb 22 12:29:31.558 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 43101.0 }, max: { _id: 44038.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 486, clonedBytes: 257580, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:31.568 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 43101.0 }, max: { _id: 44038.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 803, clonedBytes: 425590, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:31.573 [FileAllocator] allocating new datafile /data/db/multcollections0/test.2, filling with zeroes... m30000| Fri Feb 22 12:29:31.573 [FileAllocator] done allocating datafile /data/db/multcollections0/test.2, size: 256MB, took 0 secs m30000| Fri Feb 22 12:29:31.576 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:31.576 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 43101.0 } -> { _id: 44038.0 } m30000| Fri Feb 22 12:29:31.577 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 43101.0 } -> { _id: 44038.0 } m30001| Fri Feb 22 12:29:31.578 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 43101.0 }, max: { _id: 44038.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:31.578 [conn4] moveChunk setting version to: 48|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:31.578 [conn11] Waiting for commit to finish m30001| Fri Feb 22 12:29:31.579 [conn3] assertion 13388 [test.foo] shard version not ok in Client::Context: version mismatch detected for test.foo, stored major version 48 does not match received 47 ( ns 
: test.foo, received : 47|1||51276475bd1f99446659365b, wanted : 48|0||51276475bd1f99446659365b, send ) ( ns : test.foo, received : 47|1||51276475bd1f99446659365b, wanted : 48|0||51276475bd1f99446659365b, send ) ns:test.foo query:{ query: { _id: 48803.0 }, $explain: true } m30001| Fri Feb 22 12:29:31.579 [conn3] stale version detected during query over test.foo : { $err: "[test.foo] shard version not ok in Client::Context: version mismatch detected for test.foo, stored major version 48 does not match received 47 ( ns : ...", code: 13388, ns: "test.foo", vReceived: Timestamp 47000|1, vReceivedEpoch: ObjectId('51276475bd1f99446659365b'), vWanted: Timestamp 48000|0, vWantedEpoch: ObjectId('51276475bd1f99446659365b') } m30999| Fri Feb 22 12:29:31.580 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 364 version: 47|1||51276475bd1f99446659365b based on: 47|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:31.580 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 47|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:31.580 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 47000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 364 m30001| Fri Feb 22 12:29:31.581 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:31.587 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 43101.0 } -> { _id: 44038.0 } m30000| Fri Feb 22 12:29:31.587 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 43101.0 } -> { _id: 44038.0 } m30000| Fri Feb 22 12:29:31.587 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764abc49297cf54df5630", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", 
time: new Date(1361536171587), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 43101.0 }, max: { _id: 44038.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 38, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 12:29:31.588 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 43101.0 }, max: { _id: 44038.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:31.589 [conn4] moveChunk updating self version to: 48|1||51276475bd1f99446659365b through { _id: 44038.0 } -> { _id: 44975.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:31.590 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764ab99334798f3e47f12", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536171590), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 43101.0 }, max: { _id: 44038.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:31.590 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:31.590 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:31.590 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:31.590 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:29:31.590 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 47000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 47000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 48000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:31.590 [cleanupOldData-512764ab99334798f3e47f13] (start) waiting to cleanup test.foo from { _id: 43101.0 } -> { _id: 44038.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:31.590 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:31.590 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:31.590 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764ab99334798f3e47f14", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536171590), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 43101.0 }, max: { _id: 44038.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:31.590 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:31.591 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 365 version: 48|1||51276475bd1f99446659365b based on: 47|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:31.592 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 366 version: 48|1||51276475bd1f99446659365b based on: 47|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:31.593 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 47|1||000000000000000000000000min: { _id: 
21251.0 }max: { _id: 21713.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:31.593 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:31.593 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 21251.0 }, max: { _id: 21713.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_21251.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:31.594 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 367 version: 48|1||51276475bd1f99446659365b based on: 48|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:31.594 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ab99334798f3e47f15 m30999| Fri Feb 22 12:29:31.594 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 48|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:31.594 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764ab99334798f3e47f16", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536171594), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 21251.0 }, max: { _id: 21713.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:31.594 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 48000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 367 m30999| Fri Feb 22 12:29:31.594 [conn1] setShardVersion success: { oldVersion: Timestamp 47000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:31.595 [conn4] 
moveChunk request accepted at version 47|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:31.596 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:31.596 [migrateThread] starting receiving-end of migration of chunk { _id: 21251.0 } -> { _id: 21713.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:31.606 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21251.0 }, max: { _id: 21713.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 134547, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:31.610 [cleanupOldData-512764ab99334798f3e47f13] waiting to remove documents for test.foo from { _id: 43101.0 } -> { _id: 44038.0 } m30001| Fri Feb 22 12:29:31.616 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21251.0 }, max: { _id: 21713.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 306, clonedBytes: 319158, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:31.625 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:31.625 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 21251.0 } -> { _id: 21713.0 } m30001| Fri Feb 22 12:29:31.627 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21251.0 }, max: { _id: 21713.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:31.628 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 21251.0 } -> { _id: 21713.0 } m30001| Fri Feb 22 12:29:31.637 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 
21251.0 }, max: { _id: 21713.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:31.637 [conn4] moveChunk setting version to: 48|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:31.637 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:31.638 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 21251.0 } -> { _id: 21713.0 }
m30000| Fri Feb 22 12:29:31.638 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 21251.0 } -> { _id: 21713.0 }
m30000| Fri Feb 22 12:29:31.638 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764abc49297cf54df5631", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536171638), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 21251.0 }, max: { _id: 21713.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:31.640 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 368 version: 47|1||51276475bd1f99446659365c based on: 47|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:31.640 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 47|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:31.640 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 47000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 368
m30001| Fri Feb 22 12:29:31.641 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:31.647 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 21251.0 }, max: { _id: 21713.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:31.647 [conn4] moveChunk updating self version to: 48|1||51276475bd1f99446659365c through { _id: 21713.0 } -> { _id: 22175.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:31.648 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764ab99334798f3e47f17", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536171648), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 21251.0 }, max: { _id: 21713.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:31.648 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:31.648 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:31.648 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 47000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 47000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 48000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:31.648 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:31.648 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:31.648 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:31.649 [cleanupOldData-512764ab99334798f3e47f18] (start) waiting to cleanup test.bar from { _id: 21251.0 } -> { _id: 21713.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:31.649 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:31.649 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:31-512764ab99334798f3e47f19", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536171649), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 21251.0 }, max: { _id: 21713.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:31.649 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:31.650 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 369 version: 48|1||51276475bd1f99446659365c based on: 47|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:31.652 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 370 version: 48|1||51276475bd1f99446659365c based on: 47|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:31.652 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:31.653 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:31.654 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 371 version: 48|1||51276475bd1f99446659365c based on: 48|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:31.654 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 48|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:31.654 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 48000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 371
m30999| Fri Feb 22 12:29:31.654 [conn1] setShardVersion success: { oldVersion: Timestamp 47000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:31.669 [cleanupOldData-512764ab99334798f3e47f18] waiting to remove documents for test.bar from { _id: 21251.0 } -> { _id: 21713.0 }
m30001| Fri Feb 22 12:29:31.732 [cleanupOldData-512764aa99334798f3e47f0e] moveChunk deleted 462 documents for test.bar from { _id: 20789.0 } -> { _id: 21251.0 }
m30001| Fri Feb 22 12:29:31.732 [cleanupOldData-512764ab99334798f3e47f18] moveChunk starting delete for: test.bar from { _id: 21251.0 } -> { _id: 21713.0 }
49000
m30001| Fri Feb 22 12:29:32.393 [cleanupOldData-512764ab99334798f3e47f18] moveChunk deleted 462 documents for test.bar from { _id: 21251.0 } -> { _id: 21713.0 }
m30001| Fri Feb 22 12:29:32.393 [cleanupOldData-512764ab99334798f3e47f13] moveChunk starting delete for: test.foo from { _id: 43101.0 } -> { _id: 44038.0 }
m30999| Fri Feb 22 12:29:32.653 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:32.654 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:32.654 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:32 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764acbd1f99446659368c" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764abbd1f99446659368b" } }
m30999| Fri Feb 22 12:29:32.654 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764acbd1f99446659368c
m30999| Fri Feb 22 12:29:32.654 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:32.654 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:32.654 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:32.656 [Balancer] shard0001 has more chunks me:60 best: shard0000:47
m30999| Fri Feb 22 12:29:32.656 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:32.656 [Balancer] donor : shard0001 chunks on 60
m30999| Fri Feb 22 12:29:32.656 [Balancer] receiver : shard0000 chunks on 47
m30999| Fri Feb 22 12:29:32.656 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:32.656 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_44038.0", lastmod: Timestamp 48000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:32.658 [Balancer] shard0001 has more chunks me:170 best: shard0000:47
m30999| Fri Feb 22 12:29:32.658 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:32.658 [Balancer] donor : shard0001 chunks on 170
m30999| Fri Feb 22 12:29:32.658 [Balancer] receiver : shard0000 chunks on 47
m30999| Fri Feb 22 12:29:32.658 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:32.658 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_21713.0", lastmod: Timestamp 48000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 21713.0 }, max: { _id: 22175.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:32.659 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 48|1||000000000000000000000000min: { _id: 44038.0 }max: { _id: 44975.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:32.659 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:32.659 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 44038.0 }, max: { _id: 44975.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_44038.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:32.660 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ac99334798f3e47f1a
m30001| Fri Feb 22 12:29:32.660 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764ac99334798f3e47f1b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536172660), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 44038.0 }, max: { _id: 44975.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:32.661 [conn4] moveChunk request accepted at version 48|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:32.663 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:32.663 [migrateThread] starting receiving-end of migration of chunk { _id: 44038.0 } -> { _id: 44975.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:32.673 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 121, clonedBytes: 64130, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:32.683 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 336, clonedBytes: 178080, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:32.693 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 549, clonedBytes: 290970, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:32.704 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 707, clonedBytes: 374710, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:32.715 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:32.715 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 44038.0 } -> { _id: 44975.0 }
m30000| Fri Feb 22 12:29:32.716 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 44038.0 } -> { _id: 44975.0 }
m30001| Fri Feb 22 12:29:32.720 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:32.720 [conn4] moveChunk setting version to: 49|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:32.720 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:32.722 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 372 version: 48|1||51276475bd1f99446659365b based on: 48|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:32.722 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 48|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:32.722 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 48000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 372
m30001| Fri Feb 22 12:29:32.723 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:32.726 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 44038.0 } -> { _id: 44975.0 }
m30000| Fri Feb 22 12:29:32.726 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 44038.0 } -> { _id: 44975.0 }
m30000| Fri Feb 22 12:29:32.726 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764acc49297cf54df5632", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536172726), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 44038.0 }, max: { _id: 44975.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 11 } }
m30001| Fri Feb 22 12:29:32.730 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 44038.0 }, max: { _id: 44975.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:32.730 [conn4] moveChunk updating self version to: 49|1||51276475bd1f99446659365b through { _id: 44975.0 } -> { _id: 45912.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:32.731 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764ac99334798f3e47f1c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536172731), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 44038.0 }, max: { _id: 44975.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:32.731 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:32.731 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:32.731 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:32.731 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 48000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 48000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 49000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:32.731 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:32.731 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:32.731 [cleanupOldData-512764ac99334798f3e47f1d] (start) waiting to cleanup test.foo from { _id: 44038.0 } -> { _id: 44975.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:32.731 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:32.731 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764ac99334798f3e47f1e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536172731), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 44038.0 }, max: { _id: 44975.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:32.731 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:32.732 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 373 version: 49|1||51276475bd1f99446659365b based on: 48|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:32.733 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 374 version: 49|1||51276475bd1f99446659365b based on: 48|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:32.734 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 48|1||000000000000000000000000min: { _id: 21713.0 }max: { _id: 22175.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:32.734 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:32.734 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 21713.0 }, max: { _id: 22175.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_21713.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:32.735 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 375 version: 49|1||51276475bd1f99446659365b based on: 49|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:32.735 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ac99334798f3e47f1f
m30001| Fri Feb 22 12:29:32.735 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764ac99334798f3e47f20", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536172735), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 21713.0 }, max: { _id: 22175.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:32.735 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 49|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:32.735 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 49000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 375
m30999| Fri Feb 22 12:29:32.735 [conn1] setShardVersion success: { oldVersion: Timestamp 48000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:32.736 [conn4] moveChunk request accepted at version 48|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:32.737 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:32.737 [migrateThread] starting receiving-end of migration of chunk { _id: 21713.0 } -> { _id: 22175.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:32.747 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21713.0 }, max: { _id: 22175.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 115, clonedBytes: 119945, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:32.751 [cleanupOldData-512764ac99334798f3e47f1d] waiting to remove documents for test.foo from { _id: 44038.0 } -> { _id: 44975.0 }
m30001| Fri Feb 22 12:29:32.757 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21713.0 }, max: { _id: 22175.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 291, clonedBytes: 303513, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:32.767 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:32.767 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 21713.0 } -> { _id: 22175.0 }
m30001| Fri Feb 22 12:29:32.768 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21713.0 }, max: { _id: 22175.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:32.770 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 21713.0 } -> { _id: 22175.0 }
m30001| Fri Feb 22 12:29:32.778 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 21713.0 }, max: { _id: 22175.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:32.778 [conn4] moveChunk setting version to: 49|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:32.778 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:32.780 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 21713.0 } -> { _id: 22175.0 }
m30000| Fri Feb 22 12:29:32.780 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 21713.0 } -> { _id: 22175.0 }
m30000| Fri Feb 22 12:29:32.781 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764acc49297cf54df5633", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536172781), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 21713.0 }, max: { _id: 22175.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } }
m30999| Fri Feb 22 12:29:32.781 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 376 version: 48|1||51276475bd1f99446659365c based on: 48|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:32.781 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 48|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:32.781 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 48000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 376
m30001| Fri Feb 22 12:29:32.781 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:32.788 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 21713.0 }, max: { _id: 22175.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:32.788 [conn4] moveChunk updating self version to: 49|1||51276475bd1f99446659365c through { _id: 22175.0 } -> { _id: 22637.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:32.789 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764ac99334798f3e47f21", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536172789), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 21713.0 }, max: { _id: 22175.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:32.789 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:32.789 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:32.789 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:32.789 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 48000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 48000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 49000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:32.789 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:32.789 [cleanupOldData-512764ac99334798f3e47f22] (start) waiting to cleanup test.bar from { _id: 21713.0 } -> { _id: 22175.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:32.789 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:32.790 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:32.790 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:32-512764ac99334798f3e47f23", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536172790), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 21713.0 }, max: { _id: 22175.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:32.790 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:32.791 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 377 version: 49|1||51276475bd1f99446659365c based on: 48|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:32.793 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 378 version: 49|1||51276475bd1f99446659365c based on: 48|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:32.794 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:32.794 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:32.796 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 379 version: 49|1||51276475bd1f99446659365c based on: 49|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:32.796 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 49|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:32.796 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 49000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 379
m30999| Fri Feb 22 12:29:32.796 [conn1] setShardVersion success: { oldVersion: Timestamp 48000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:32.809 [cleanupOldData-512764ac99334798f3e47f22] waiting to remove documents for test.bar from { _id: 21713.0 } -> { _id: 22175.0 }
50000
m30001| Fri Feb 22 12:29:33.298 [cleanupOldData-512764ab99334798f3e47f13] moveChunk deleted 937 documents for test.foo from { _id: 43101.0 } -> { _id: 44038.0 }
m30001| Fri Feb 22 12:29:33.298 [cleanupOldData-512764ac99334798f3e47f22] moveChunk starting delete for: test.bar from { _id: 21713.0 } -> { _id: 22175.0 }
m30999| Fri Feb 22 12:29:33.795 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:33.796 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:33.796 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:33 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764adbd1f99446659368d" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764acbd1f99446659368c" } }
m30999| Fri Feb 22 12:29:33.797 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764adbd1f99446659368d
m30999| Fri Feb 22 12:29:33.797 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:33.797 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:33.797 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:33.799 [Balancer] shard0001 has more chunks me:59 best: shard0000:48
m30999| Fri Feb 22 12:29:33.799 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:33.799 [Balancer] donor : shard0001 chunks on 59
m30999| Fri Feb 22 12:29:33.799 [Balancer] receiver : shard0000 chunks on 48
m30999| Fri Feb 22 12:29:33.799 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:33.799 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_44975.0", lastmod: Timestamp 49000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:33.802 [Balancer] shard0001 has more chunks me:169 best: shard0000:48
m30999| Fri Feb 22 12:29:33.802 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:33.802 [Balancer] donor : shard0001 chunks on 169
m30999| Fri Feb 22 12:29:33.802 [Balancer] receiver : shard0000 chunks on 48
m30999| Fri Feb 22 12:29:33.802 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:33.802 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_22175.0", lastmod: Timestamp 49000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 22175.0 }, max: { _id: 22637.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:33.802 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 49|1||000000000000000000000000min: { _id: 44975.0 }max: { _id: 45912.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:33.802 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:33.803 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 44975.0 }, max: { _id: 45912.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_44975.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:33.804 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ad99334798f3e47f24
m30001| Fri Feb 22 12:29:33.804 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764ad99334798f3e47f25", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536173804), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 44975.0 }, max: { _id: 45912.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:33.805 [conn4] moveChunk request accepted at version 49|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:33.808 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:33.808 [migrateThread] starting receiving-end of migration of chunk { _id: 44975.0 } -> { _id: 45912.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:33.819 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 68370, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:33.829 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 304, clonedBytes: 161120, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:33.839 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 507, clonedBytes: 268710, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:33.850 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 709, clonedBytes: 375770, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:33.861 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:33.861 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 44975.0 } -> { _id: 45912.0 }
m30000| Fri Feb 22 12:29:33.864 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 44975.0 } -> { _id: 45912.0 }
m30001| Fri Feb 22 12:29:33.866 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:33.866 [conn4] moveChunk setting version to: 50|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:33.866 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:33.868 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 380 version: 49|1||51276475bd1f99446659365b based on: 49|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:33.868 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 49|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:33.868 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 49000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 380
m30001| Fri Feb 22 12:29:33.869 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:33.874 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 44975.0 } -> { _id: 45912.0 }
m30000| Fri Feb 22 12:29:33.874 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 44975.0 } -> { _id: 45912.0 }
m30000| Fri Feb 22 12:29:33.874 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764adc49297cf54df5634", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536173874), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 44975.0 }, max: { _id: 45912.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 52, step4 of 5: 0, step5 of 5: 13 } }
m30001| Fri Feb 22 12:29:33.876 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 44975.0 }, max: { _id: 45912.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:33.876 [conn4] moveChunk updating self version to: 50|1||51276475bd1f99446659365b through { _id: 45912.0 } -> { _id: 46849.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:33.877 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764ad99334798f3e47f26", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536173877), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 44975.0 }, max: { _id: 45912.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:33.877 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 49000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 49000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 50000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:33.877 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:33.877 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:33.877 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:33.878 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 381 version: 50|1||51276475bd1f99446659365b based on: 49|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:33.880 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 382 version: 50|1||51276475bd1f99446659365b based on: 50|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:33.880 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 50|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:33.880 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 50000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 382
m30999| Fri Feb 22 12:29:33.880 [conn1] setShardVersion success: { oldVersion: Timestamp 49000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:33.903 [conn4] MigrateFromStatus::done About
to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:33.903 [cleanupOldData-512764ad99334798f3e47f27] (start) waiting to cleanup test.foo from { _id: 44975.0 } -> { _id: 45912.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:33.903 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:33.904 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:33.904 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764ad99334798f3e47f28", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536173904), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 44975.0 }, max: { _id: 45912.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 57, step5 of 6: 11, step6 of 6: 26 } } m30001| Fri Feb 22 12:29:33.904 [conn4] command admin.$cmd command: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 44975.0 }, max: { _id: 45912.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_44975.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 numYields: 1 locks(micros) W:52 r:5854 w:52 reslen:37 101ms m30999| Fri Feb 22 12:29:33.904 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:33.905 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 49|1||000000000000000000000000min: { _id: 22175.0 }max: { _id: 22637.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:33.905 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:33.905 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 22175.0 }, max: { _id: 
22637.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_22175.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:33.906 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ad99334798f3e47f29 m30001| Fri Feb 22 12:29:33.906 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764ad99334798f3e47f2a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536173906), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 22175.0 }, max: { _id: 22637.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:33.907 [conn4] moveChunk request accepted at version 49|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:33.909 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:33.909 [migrateThread] starting receiving-end of migration of chunk { _id: 22175.0 } -> { _id: 22637.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:33.919 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22175.0 }, max: { _id: 22637.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 127, clonedBytes: 132461, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:33.924 [cleanupOldData-512764ad99334798f3e47f27] waiting to remove documents for test.foo from { _id: 44975.0 } -> { _id: 45912.0 } m30001| Fri Feb 22 12:29:33.930 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22175.0 }, max: { _id: 22637.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 304, clonedBytes: 317072, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:33.940 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 22175.0 }, max: { _id: 22637.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 445, clonedBytes: 464135, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:33.941 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:33.942 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 22175.0 } -> { _id: 22637.0 } m30000| Fri Feb 22 12:29:33.943 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 22175.0 } -> { _id: 22637.0 } m30001| Fri Feb 22 12:29:33.950 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22175.0 }, max: { _id: 22637.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:33.951 [conn4] moveChunk setting version to: 50|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:33.951 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:33.953 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 22175.0 } -> { _id: 22637.0 } m30000| Fri Feb 22 12:29:33.953 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 22175.0 } -> { _id: 22637.0 } m30000| Fri Feb 22 12:29:33.953 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764adc49297cf54df5635", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536173953), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 22175.0 }, max: { _id: 22637.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 11 } } m30999| Fri Feb 22 12:29:33.953 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 383 version: 49|1||51276475bd1f99446659365c based on: 
49|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:33.954 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 49|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:33.954 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 49000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 383 m30001| Fri Feb 22 12:29:33.954 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:33.959 [cleanupOldData-512764ac99334798f3e47f22] moveChunk deleted 462 documents for test.bar from { _id: 21713.0 } -> { _id: 22175.0 } m30001| Fri Feb 22 12:29:33.959 [cleanupOldData-512764ad99334798f3e47f27] moveChunk starting delete for: test.foo from { _id: 44975.0 } -> { _id: 45912.0 } m30001| Fri Feb 22 12:29:33.961 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 22175.0 }, max: { _id: 22637.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:33.961 [conn4] moveChunk updating self version to: 50|1||51276475bd1f99446659365c through { _id: 22637.0 } -> { _id: 23099.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:33.962 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764ad99334798f3e47f2b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536173962), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 22175.0 }, max: { _id: 22637.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:33.962 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:33.962 [conn1] setShardVersion failed! 
 m30999| { oldVersion: Timestamp 49000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 49000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 50000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:29:33.962 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:33.962 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:29:33.962 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:33.962 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:33.962 [cleanupOldData-512764ad99334798f3e47f2c] (start) waiting to cleanup test.bar from { _id: 22175.0 } -> { _id: 22637.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:33.962 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:33.962 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:33-512764ad99334798f3e47f2d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536173962), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 22175.0 }, max: { _id: 22637.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:33.963 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:29:33.964 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 384 version: 50|1||51276475bd1f99446659365c based on: 49|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:33.966 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 385 version: 50|1||51276475bd1f99446659365c based on: 49|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:33.967 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:29:33.967 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
 m30999| Fri Feb 22 12:29:33.969 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 386 version: 50|1||51276475bd1f99446659365c based on: 50|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:33.969 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 50|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:33.969 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 50000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 386
 m30999| Fri Feb 22 12:29:33.969 [conn1] setShardVersion success: { oldVersion: Timestamp 49000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:33.982 [cleanupOldData-512764ad99334798f3e47f2c] waiting to remove documents for test.bar from { _id: 22175.0 } -> { _id: 22637.0 }
51000
 m30999| Fri Feb 22 12:29:34.968 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:34.968 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:34.968 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:29:34 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "512764aebd1f99446659368e" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "512764adbd1f99446659368d" } }
 m30999| Fri Feb 22 12:29:34.969 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764aebd1f99446659368e
 m30999| Fri Feb 22 12:29:34.969 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:34.969 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:34.969 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:34.971 [Balancer] shard0001 has more chunks me:58 best: shard0000:49
 m30999| Fri Feb 22 12:29:34.971 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:34.971 [Balancer] donor : shard0001 chunks on 58
 m30999| Fri Feb 22 12:29:34.971 [Balancer] receiver : shard0000 chunks on 49
 m30999| Fri Feb 22 12:29:34.971 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:34.971 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_45912.0", lastmod: Timestamp 50000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 45912.0 }, max: { _id: 46849.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:34.973 [Balancer] shard0001 has more chunks me:168 best: shard0000:49
 m30999| Fri Feb 22 12:29:34.973 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:34.973 [Balancer] donor : shard0001 chunks on 168
 m30999| Fri Feb 22 12:29:34.973 [Balancer] receiver : shard0000 chunks on 49
 m30999| Fri Feb 22 12:29:34.973 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:34.973 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_22637.0", lastmod: Timestamp 50000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:34.973 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 50|1||000000000000000000000000min: { _id: 45912.0 }max: { _id: 46849.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:34.973 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:34.974 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 45912.0 }, max: { _id: 46849.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_45912.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:34.974 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ae99334798f3e47f2e
 m30001| Fri Feb 22 12:29:34.974 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:34-512764ae99334798f3e47f2f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536174974), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 45912.0 }, max: { _id: 46849.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:34.975 [conn4] moveChunk request accepted at version 50|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:29:34.977 [conn4] moveChunk number of documents: 937
 m30000| Fri Feb 22 12:29:34.978 [migrateThread] starting receiving-end of migration of chunk { _id: 45912.0 } -> { _id: 46849.0 } for collection test.foo from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:34.988 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 45912.0 }, max: { _id: 46849.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 203, clonedBytes: 107590, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:34.998 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 45912.0 }, max: { _id: 46849.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 504, clonedBytes: 267120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:35.008 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 45912.0 }, max: { _id: 46849.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 808, clonedBytes: 428240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:29:35.013 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:35.013 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 45912.0 } -> { _id: 46849.0 }
 m30000| Fri Feb 22 12:29:35.015 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 45912.0 } -> { _id: 46849.0 }
 m30001| Fri Feb 22 12:29:35.018 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 45912.0 }, max: { _id: 46849.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:35.018 [conn4] moveChunk setting version to: 51|0||51276475bd1f99446659365b
 m30000| Fri Feb 22 12:29:35.019 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:29:35.021 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 387 version: 50|1||51276475bd1f99446659365b based on: 50|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:35.021 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 50|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:35.021 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 50000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 387
 m30001| Fri Feb 22 12:29:35.021 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:29:35.026 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 45912.0 } -> { _id: 46849.0 }
 m30000| Fri Feb 22 12:29:35.026 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 45912.0 } -> { _id: 46849.0 }
 m30000| Fri Feb 22 12:29:35.026 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764afc49297cf54df5636", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536175026), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 45912.0 }, max: { _id: 46849.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 34, step4 of 5: 0, step5 of 5: 12 } }
 m30001| Fri Feb 22 12:29:35.029 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 45912.0 }, max: { _id: 46849.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:35.029 [conn4] moveChunk updating self version to: 51|1||51276475bd1f99446659365b through { _id: 46849.0 } -> { _id: 47786.0 } for collection 'test.foo'
 m30001| Fri Feb 22 12:29:35.029 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764af99334798f3e47f30", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536175029), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 45912.0 }, max: { _id: 46849.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:35.030 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:35.030 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:35.030 [conn4] forking for cleanup of chunk data
 m30999| Fri Feb 22 12:29:35.030 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:29:35.030 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| { oldVersion: Timestamp 50000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 50000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 51000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
 m30001| Fri Feb 22 12:29:35.030 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:35.030 [cleanupOldData-512764af99334798f3e47f31] (start) waiting to cleanup test.foo from { _id: 45912.0 } -> { _id: 46849.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:35.030 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:35.030 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764af99334798f3e47f32", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536175030), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 45912.0 }, max: { _id: 46849.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:35.030 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:29:35.031 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 388 version: 51|1||51276475bd1f99446659365b based on: 50|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:35.032 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 389 version: 51|1||51276475bd1f99446659365b based on: 50|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:35.032 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 50|1||000000000000000000000000min: { _id: 22637.0 }max: { _id: 23099.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:35.032 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:35.033 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 22637.0 }, max: { _id: 23099.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_22637.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30999| Fri Feb 22 12:29:35.033 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 390 version: 51|1||51276475bd1f99446659365b based on: 51|1||51276475bd1f99446659365b
 m30001| Fri Feb 22 12:29:35.033 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764af99334798f3e47f33
 m30001| Fri Feb 22 12:29:35.033 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764af99334798f3e47f34", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536175033), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 22637.0 }, max: { _id: 23099.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:29:35.033 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 51|1||51276475bd1f99446659365b
 m30999| Fri Feb 22 12:29:35.034 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 51000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 390
 m30999| Fri Feb 22 12:29:35.034 [conn1] setShardVersion success: { oldVersion: Timestamp 50000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:35.034 [conn4] moveChunk request accepted at version 50|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:29:35.035 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:29:35.035 [migrateThread] starting receiving-end of migration of chunk { _id: 22637.0 } -> { _id: 23099.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:35.046 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 178, clonedBytes: 185654, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:35.050 [cleanupOldData-512764af99334798f3e47f31] waiting to remove documents for test.foo from { _id: 45912.0 } -> { _id: 46849.0 }
 m30001| Fri Feb 22 12:29:35.056 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 439, clonedBytes: 457877, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:35.066 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:35.076 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30000| Fri Feb 22 12:29:35.086 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:35.086 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 22637.0 } -> { _id: 23099.0 }
 m30000| Fri Feb 22 12:29:35.086 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 22637.0 } -> { _id: 23099.0 }
 m30001| Fri Feb 22 12:29:35.092 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:35.092 [conn4] moveChunk setting version to: 51|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:35.093 [conn11] Waiting for commit to finish
 m30999| Fri Feb 22 12:29:35.095 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 391 version: 50|1||51276475bd1f99446659365c based on: 50|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:35.095 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 50|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:35.095 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 50000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 391
 m30001| Fri Feb 22 12:29:35.096 [conn3] waiting till out of critical section
 m30000| Fri Feb 22 12:29:35.097 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 22637.0 } -> { _id: 23099.0 }
 m30000| Fri Feb 22 12:29:35.097 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 22637.0 } -> { _id: 23099.0 }
 m30000| Fri Feb 22 12:29:35.097 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764afc49297cf54df5637", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536175097), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 22637.0 }, max: { _id: 23099.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 10 } }
 m30001| Fri Feb 22 12:29:35.103 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 22637.0 }, max: { _id: 23099.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:35.103 [conn4] moveChunk updating self version to: 51|1||51276475bd1f99446659365c through { _id: 23099.0 } -> { _id: 23561.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:29:35.103 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764af99334798f3e47f35", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536175103), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 22637.0 }, max: { _id: 23099.0 }, from: "shard0001", to: "shard0000" } }
 m30999| Fri Feb 22 12:29:35.103 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 50000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 50000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 51000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30999| Fri Feb 22 12:29:35.105 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 392 version: 51|1||51276475bd1f99446659365c based on: 50|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:35.107 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 393 version: 51|1||51276475bd1f99446659365c based on: 51|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:35.107 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 51|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:35.107 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 51000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 393
m30999| Fri Feb 22 12:29:35.108 [conn1] setShardVersion success: { oldVersion: Timestamp 50000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:35.113 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:35.113 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:35.113 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:35.113 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:35.113 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:35.113 [cleanupOldData-512764af99334798f3e47f36] (start) waiting to cleanup test.bar from { _id: 22637.0 } -> { _id: 23099.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:35.114 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:35.114 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:35-512764af99334798f3e47f37", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536175114), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 22637.0 }, max: { _id: 23099.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 57, step5 of 6: 20, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:35.114 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:35.114 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:35.115 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:29:35.134 [cleanupOldData-512764af99334798f3e47f36] waiting to remove documents for test.bar from { _id: 22637.0 } -> { _id: 23099.0 }
m30001| Fri Feb 22 12:29:35.141 [cleanupOldData-512764ad99334798f3e47f27] moveChunk deleted 937 documents for test.foo from { _id: 44975.0 } -> { _id: 45912.0 }
m30001| Fri Feb 22 12:29:35.141 [cleanupOldData-512764af99334798f3e47f36] moveChunk starting delete for: test.bar from { _id: 22637.0 } -> { _id: 23099.0 }
52000
m30001| Fri Feb 22 12:29:35.810 [cleanupOldData-512764af99334798f3e47f36] moveChunk deleted 462 documents for test.bar from { _id: 22637.0 } -> { _id: 23099.0 }
m30001| Fri Feb 22 12:29:35.810 [cleanupOldData-512764af99334798f3e47f31] moveChunk starting delete for: test.foo from { _id: 45912.0 } -> { _id: 46849.0 }
m30999| Fri Feb 22 12:29:36.115 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:36.116 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:36.116 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:36 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764b0bd1f99446659368f" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764aebd1f99446659368e" } }
m30999| Fri Feb 22 12:29:36.116 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b0bd1f99446659368f
m30999| Fri Feb 22 12:29:36.116 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:36.116 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:36.116 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:36.119 [Balancer] shard0001 has more chunks me:57 best: shard0000:50
m30999| Fri Feb 22 12:29:36.119 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:36.119 [Balancer] donor : shard0001 chunks on 57
m30999| Fri Feb 22 12:29:36.119 [Balancer] receiver : shard0000 chunks on 50
m30999| Fri Feb 22 12:29:36.119 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:36.119 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_46849.0", lastmod: Timestamp 51000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 46849.0 }, max: { _id: 47786.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:36.121 [Balancer] shard0001 has more chunks me:167 best: shard0000:50
m30999| Fri Feb 22 12:29:36.121 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:36.121 [Balancer] donor : shard0001 chunks on 167
m30999| Fri Feb 22 12:29:36.121 [Balancer] receiver : shard0000 chunks on 50
m30999| Fri Feb 22 12:29:36.121 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:36.121 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_23099.0", lastmod: Timestamp 51000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 23099.0 }, max: { _id: 23561.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:36.121 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 51|1||000000000000000000000000min: { _id: 46849.0 }max: { _id: 47786.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:36.121 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:36.121 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 46849.0 }, max: { _id: 47786.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_46849.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:36.122 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b099334798f3e47f38
m30001| Fri Feb 22 12:29:36.122 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b099334798f3e47f39", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536176122), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 46849.0 }, max: { _id: 47786.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:36.123 [conn4] moveChunk request accepted at version 51|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:36.125 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:36.125 [migrateThread] starting receiving-end of migration of chunk { _id: 46849.0 } -> { _id: 47786.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:36.136 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 46849.0 }, max: { _id: 47786.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 181, clonedBytes: 95930, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:36.146 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 46849.0 }, max: { _id: 47786.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 482, clonedBytes: 255460, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:36.156 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 46849.0 }, max: { _id: 47786.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 781, clonedBytes: 413930, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:36.162 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:36.162 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 46849.0 } -> { _id: 47786.0 }
m30000| Fri Feb 22 12:29:36.164 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 46849.0 } -> { _id: 47786.0 }
m30001| Fri Feb 22 12:29:36.166 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 46849.0 }, max: { _id: 47786.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:36.166 [conn4] moveChunk setting version to: 52|0||51276475bd1f99446659365b
m30000| Fri Feb 22 12:29:36.166 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:36.169 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 394 version: 51|1||51276475bd1f99446659365b based on: 51|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:36.169 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 51|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:36.169 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 51000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 394
m30001| Fri Feb 22 12:29:36.169 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:36.174 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 46849.0 } -> { _id: 47786.0 }
m30000| Fri Feb 22 12:29:36.174 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 46849.0 } -> { _id: 47786.0 }
m30000| Fri Feb 22 12:29:36.174 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b0c49297cf54df5638", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536176174), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 46849.0 }, max: { _id: 47786.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 35, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:29:36.176 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 46849.0 }, max: { _id: 47786.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:36.177 [conn4] moveChunk updating self version to: 52|1||51276475bd1f99446659365b through { _id: 47786.0 } -> { _id: 48723.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:36.177 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b099334798f3e47f3a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536176177), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 46849.0 }, max: { _id: 47786.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:36.177 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:36.177 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:36.177 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:36.177 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 51000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 51000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 52000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" }
m30001| Fri Feb 22 12:29:36.177 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:36.177 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:36.177 [cleanupOldData-512764b099334798f3e47f3b] (start) waiting to cleanup test.foo from { _id: 46849.0 } -> { _id: 47786.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:36.178 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:36.178 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b099334798f3e47f3c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536176178), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 46849.0 }, max: { _id: 47786.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:36.178 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:36.179 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 395 version: 52|1||51276475bd1f99446659365b based on: 51|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:36.180 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 396 version: 52|1||51276475bd1f99446659365b based on: 51|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:36.180 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 51|1||000000000000000000000000min: { _id: 23099.0 }max: { _id: 23561.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:36.180 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:36.180 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 23099.0 }, max: { _id: 23561.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_23099.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30999| Fri Feb 22 12:29:36.181 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 397 version: 52|1||51276475bd1f99446659365b based on: 52|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:36.181 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b099334798f3e47f3d
m30001| Fri Feb 22 12:29:36.181 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b099334798f3e47f3e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536176181), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 23099.0 }, max: { _id: 23561.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:36.181 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 52|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:36.181 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 52000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 397
m30999| Fri Feb 22 12:29:36.182 [conn1] setShardVersion success: { oldVersion: Timestamp 51000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 }
m30001| Fri Feb 22 12:29:36.182 [conn4] moveChunk request accepted at version 51|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:36.183 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:36.183 [migrateThread] starting receiving-end of migration of chunk { _id: 23099.0 } -> { _id: 23561.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:36.193 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23099.0 }, max: { _id: 23561.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 126, clonedBytes: 131418, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:36.197 [cleanupOldData-512764b099334798f3e47f3b] waiting to remove documents for test.foo from { _id: 46849.0 } -> { _id: 47786.0 }
m30001| Fri Feb 22 12:29:36.203 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23099.0 }, max: { _id: 23561.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 306, clonedBytes: 319158, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:36.212 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:36.212 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 23099.0 } -> { _id: 23561.0 }
m30001| Fri Feb 22 12:29:36.213 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23099.0 }, max: { _id: 23561.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:36.215 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 23099.0 } -> { _id: 23561.0 }
m30001| Fri Feb 22 12:29:36.224 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23099.0 }, max: { _id: 23561.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:36.224 [conn4] moveChunk setting version to: 52|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:36.224 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:36.225 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 23099.0 } -> { _id: 23561.0 }
m30000| Fri Feb 22 12:29:36.225 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 23099.0 } -> { _id: 23561.0 }
m30000| Fri Feb 22 12:29:36.225 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b0c49297cf54df5639", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536176225), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 23099.0 }, max: { _id: 23561.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:36.226 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 398 version: 51|1||51276475bd1f99446659365c based on: 51|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:36.226 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 51|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:36.227 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 51000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 398
m30001| Fri Feb 22 12:29:36.227 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:36.234 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 23099.0 }, max: { _id: 23561.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:36.234 [conn4] moveChunk updating self version to: 52|1||51276475bd1f99446659365c through { _id: 23561.0 } -> { _id: 24023.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:36.235 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b099334798f3e47f3f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536176235), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 23099.0 }, max: { _id: 23561.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:36.235 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:36.235 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:36.235 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:36.235 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 51000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 51000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 52000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:36.235 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:36.235 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:36.235 [cleanupOldData-512764b099334798f3e47f40] (start) waiting to cleanup test.bar from { _id: 23099.0 } -> { _id: 23561.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:36.235 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:36.235 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:36-512764b099334798f3e47f41", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536176235), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 23099.0 }, max: { _id: 23561.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:36.235 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:36.236 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 399 version: 52|1||51276475bd1f99446659365c based on: 51|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:36.238 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 400 version: 52|1||51276475bd1f99446659365c based on: 51|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:36.239 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:36.239 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:36.240 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 401 version: 52|1||51276475bd1f99446659365c based on: 52|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:36.240 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 52|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:36.241 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 52000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 401
m30999| Fri Feb 22 12:29:36.241 [conn1] setShardVersion success: { oldVersion: Timestamp 51000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
53000
m30001| Fri Feb 22 12:29:36.255 [cleanupOldData-512764b099334798f3e47f40] waiting to remove documents for test.bar from { _id: 23099.0 } -> { _id: 23561.0 }
m30001| Fri Feb 22 12:29:36.763 [cleanupOldData-512764af99334798f3e47f31] moveChunk deleted 937 documents for test.foo from { _id: 45912.0 } -> { _id: 46849.0 }
m30001| Fri Feb 22 12:29:36.763 [cleanupOldData-512764b099334798f3e47f40] moveChunk starting delete for: test.bar from { _id: 23099.0 } -> { _id: 23561.0 }
m30999| Fri Feb 22 12:29:37.240 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:37.240 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:37.240 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:37 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764b1bd1f994466593690" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764b0bd1f99446659368f" } }
m30999| Fri Feb 22 12:29:37.241 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b1bd1f994466593690
m30999| Fri Feb 22 12:29:37.241 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:37.241 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:37.241 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:37.243 [Balancer] shard0001 has more chunks me:56 best: shard0000:51
m30999| Fri Feb 22 12:29:37.243 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:37.243 [Balancer] donor : shard0001 chunks on 56
m30999| Fri Feb 22 12:29:37.243 [Balancer] receiver : shard0000 chunks on 51
m30999| Fri Feb 22 12:29:37.243 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:37.244 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_47786.0", lastmod: Timestamp 52000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:37.246 [Balancer] shard0001 has more chunks me:166 best: shard0000:51
m30999| Fri Feb 22 12:29:37.246 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:37.246 [Balancer] donor : shard0001 chunks on 166
m30999| Fri Feb 22 12:29:37.246 [Balancer] receiver : shard0000 chunks on 51
m30999| Fri Feb 22 12:29:37.246 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:37.246 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_23561.0", lastmod: Timestamp 52000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 23561.0 }, max: { _id: 24023.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:37.246 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: shard0001:localhost:30001lastmod: 52|1||000000000000000000000000min: { _id: 47786.0 }max: { _id: 48723.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:37.246 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:37.247 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 47786.0 }, max: { _id: 48723.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_47786.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:37.247 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b199334798f3e47f42
m30001| Fri Feb 22 12:29:37.248 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b199334798f3e47f43", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536177248), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 47786.0 }, max: { _id: 48723.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:37.249 [conn4] moveChunk request accepted at version 52|1||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:37.252 [conn4] moveChunk number of documents: 937
m30000| Fri Feb 22 12:29:37.252 [migrateThread] starting receiving-end of migration of chunk { _id: 47786.0 } -> { _id: 48723.0 } for collection test.foo from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:37.262 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 116, clonedBytes: 61480, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:37.272 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 317, clonedBytes: 168010, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:37.283 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 522, clonedBytes: 276660, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:37.293 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 725, clonedBytes: 384250, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:29:37.304 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:37.304 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 47786.0 } -> { _id: 48723.0 }
m30000| Fri Feb 22 12:29:37.308 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 47786.0 } -> { _id: 48723.0 }
m30001| Fri Feb 22 12:29:37.309 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:29:37.309 [conn4] moveChunk setting version to: 53|0||51276475bd1f99446659365b
m30001| Fri Feb 22 12:29:37.309 [conn3] assertion 13388 [test.foo] shard version not ok in Client::Context: version mismatch detected for test.foo, stored major version 53 does not match received 52 ( ns : test.foo, received : 52|1||51276475bd1f99446659365b, wanted : 53|0||51276475bd1f99446659365b, send ) ( ns : test.foo, received : 52|1||51276475bd1f99446659365b, wanted : 53|0||51276475bd1f99446659365b, send ) ns:test.foo query:{ query: { _id: 53967.0 }, $explain: true }
m30000| Fri Feb 22 12:29:37.309 [conn11] Waiting for commit to finish
m30001| Fri Feb 22 12:29:37.309 [conn3] stale version detected during query over test.foo : { $err: "[test.foo] shard version not ok in Client::Context: version mismatch detected for test.foo, stored major version 53 does not match received 52 ( ns : ...", code: 13388, ns: "test.foo", vReceived: Timestamp 52000|1, vReceivedEpoch: ObjectId('51276475bd1f99446659365b'), vWanted: Timestamp 53000|0, vWantedEpoch: ObjectId('51276475bd1f99446659365b') }
m30999| Fri Feb 22 12:29:37.312 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 402 version: 52|1||51276475bd1f99446659365b based on: 52|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:37.312 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 52|1||51276475bd1f99446659365b
m30999| Fri Feb 22 12:29:37.312 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 52000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 402
m30001| Fri Feb 22 12:29:37.312 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:37.318 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 47786.0 } -> { _id: 48723.0 }
m30000| Fri Feb 22 12:29:37.318 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 47786.0 } -> { _id: 48723.0 }
m30000| Fri Feb 22 12:29:37.318 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b1c49297cf54df563a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536177318), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 47786.0 }, max: { _id: 48723.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 50, step4 of 5: 0, step5 of 5: 14 } }
m30001| Fri Feb 22 12:29:37.319 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 47786.0 }, max: { _id: 48723.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:37.319 [conn4] moveChunk updating self version to: 53|1||51276475bd1f99446659365b through { _id: 48723.0 } -> { _id: 49660.0 } for collection 'test.foo'
m30001| Fri Feb 22 12:29:37.320 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b199334798f3e47f44", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536177320), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 47786.0 }, max: { _id: 48723.0 }, from: "shard0001", to: "shard0000" } }
m30999| Fri Feb 22 12:29:37.320 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:37.320 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 52000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 52000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 53000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:37.320 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:37.320 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:37.320 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:37.320 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:37.320 [cleanupOldData-512764b199334798f3e47f45] (start) waiting to cleanup test.foo from { _id: 47786.0 } -> { _id: 48723.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:37.321 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:37.321 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b199334798f3e47f46", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536177321), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 47786.0 }, max: { _id: 48723.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 3, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:37.321 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:37.322 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 403 version: 53|1||51276475bd1f99446659365b based on: 52|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:37.323 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 404 version: 53|1||51276475bd1f99446659365b based on: 52|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:37.324 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 52|1||000000000000000000000000min: { _id: 23561.0 }max: { _id: 24023.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:37.324 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:37.324 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 23561.0 }, max: { _id: 24023.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_23561.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:37.325 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 405 version: 53|1||51276475bd1f99446659365b based on: 53|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:37.325 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 53|1||51276475bd1f99446659365b m30999| Fri Feb 22 
12:29:37.325 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 53000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 405 m30999| Fri Feb 22 12:29:37.325 [conn1] setShardVersion success: { oldVersion: Timestamp 52000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:37.325 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b199334798f3e47f47 m30001| Fri Feb 22 12:29:37.325 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b199334798f3e47f48", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536177325), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 23561.0 }, max: { _id: 24023.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:37.327 [conn4] moveChunk request accepted at version 52|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:37.328 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:37.328 [migrateThread] starting receiving-end of migration of chunk { _id: 23561.0 } -> { _id: 24023.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:37.339 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23561.0 }, max: { _id: 24023.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 131, clonedBytes: 136633, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:37.340 [cleanupOldData-512764b199334798f3e47f45] waiting to remove documents for test.foo from { _id: 47786.0 } -> { _id: 48723.0 } m30001| Fri Feb 22 12:29:37.349 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", 
from: "localhost:30001", min: { _id: 23561.0 }, max: { _id: 24023.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 305, clonedBytes: 318115, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:37.358 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:37.358 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 23561.0 } -> { _id: 24023.0 } m30001| Fri Feb 22 12:29:37.359 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23561.0 }, max: { _id: 24023.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:37.360 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 23561.0 } -> { _id: 24023.0 } 54000 m30001| Fri Feb 22 12:29:37.369 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 23561.0 }, max: { _id: 24023.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:37.369 [conn4] moveChunk setting version to: 53|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:37.369 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:37.371 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 23561.0 } -> { _id: 24023.0 } m30000| Fri Feb 22 12:29:37.371 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 23561.0 } -> { _id: 24023.0 } m30000| Fri Feb 22 12:29:37.371 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b1c49297cf54df563b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536177371), what: "moveChunk.to", ns: 
"test.bar", details: { min: { _id: 23561.0 }, max: { _id: 24023.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:37.372 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 406 version: 52|1||51276475bd1f99446659365c based on: 52|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:37.372 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 52|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:37.372 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 52000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 406 m30001| Fri Feb 22 12:29:37.372 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:37.373 [cleanupOldData-512764b099334798f3e47f40] moveChunk deleted 462 documents for test.bar from { _id: 23099.0 } -> { _id: 23561.0 } m30001| Fri Feb 22 12:29:37.373 [cleanupOldData-512764b199334798f3e47f45] moveChunk starting delete for: test.foo from { _id: 47786.0 } -> { _id: 48723.0 } m30001| Fri Feb 22 12:29:37.380 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 23561.0 }, max: { _id: 24023.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:37.380 [conn4] moveChunk updating self version to: 53|1||51276475bd1f99446659365c through { _id: 24023.0 } -> { _id: 24485.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:37.380 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b199334798f3e47f49", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536177380), what: 
"moveChunk.commit", ns: "test.bar", details: { min: { _id: 23561.0 }, max: { _id: 24023.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:37.380 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:37.380 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:37.381 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:37.381 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 52000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 52000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 53000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:37.381 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:37.381 [cleanupOldData-512764b199334798f3e47f4a] (start) waiting to cleanup test.bar from { _id: 23561.0 } -> { _id: 24023.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:37.381 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:37.381 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:37.381 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:37-512764b199334798f3e47f4b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536177381), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 23561.0 }, max: { _id: 24023.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:37.381 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:37.383 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 407 version: 53|1||51276475bd1f99446659365c based on: 52|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:37.385 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 408 version: 53|1||51276475bd1f99446659365c based on: 52|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:37.386 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:37.386 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:37.388 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 409 version: 53|1||51276475bd1f99446659365c based on: 53|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:37.388 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 53|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:37.388 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 53000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 409 m30999| Fri Feb 22 12:29:37.388 [conn1] setShardVersion success: { oldVersion: Timestamp 52000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:37.401 [cleanupOldData-512764b199334798f3e47f4a] waiting to remove documents for test.bar from { _id: 23561.0 } -> { _id: 24023.0 } m30001| Fri Feb 22 12:29:37.972 [cleanupOldData-512764b199334798f3e47f45] moveChunk deleted 937 documents for test.foo from { _id: 47786.0 } -> { _id: 48723.0 } m30001| Fri Feb 22 12:29:37.972 [cleanupOldData-512764b199334798f3e47f4a] moveChunk starting delete for: test.bar from { _id: 23561.0 } -> { _id: 24023.0 } m30999| Fri Feb 22 12:29:38.387 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:38.387 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:38.388 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:38 
2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764b2bd1f994466593691" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764b1bd1f994466593690" } } m30999| Fri Feb 22 12:29:38.388 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b2bd1f994466593691 m30999| Fri Feb 22 12:29:38.388 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:38.388 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:38.388 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:38.390 [Balancer] shard0001 has more chunks me:55 best: shard0000:52 m30999| Fri Feb 22 12:29:38.390 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:38.390 [Balancer] donor : shard0001 chunks on 55 m30999| Fri Feb 22 12:29:38.390 [Balancer] receiver : shard0000 chunks on 52 m30999| Fri Feb 22 12:29:38.390 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:38.390 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_48723.0", lastmod: Timestamp 53000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:38.392 [Balancer] shard0001 has more chunks me:165 best: shard0000:52 m30999| Fri Feb 22 12:29:38.392 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:38.392 [Balancer] donor : shard0001 chunks on 165 m30999| Fri Feb 22 12:29:38.392 [Balancer] receiver : shard0000 chunks on 52 m30999| Fri Feb 22 12:29:38.392 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:38.392 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_24023.0", lastmod: Timestamp 53000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 24023.0 }, max: { _id: 24485.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:38.392 [Balancer] moving chunk ns: test.foo moving ( 
ns:test.fooshard: shard0001:localhost:30001lastmod: 53|1||000000000000000000000000min: { _id: 48723.0 }max: { _id: 49660.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:38.393 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:38.393 [conn4] received moveChunk request: { moveChunk: "test.foo", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 48723.0 }, max: { _id: 49660.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_48723.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:38.394 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b299334798f3e47f4c m30001| Fri Feb 22 12:29:38.394 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b299334798f3e47f4d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536178394), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 48723.0 }, max: { _id: 49660.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:38.394 [conn4] moveChunk request accepted at version 53|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:38.397 [conn4] moveChunk number of documents: 937 m30000| Fri Feb 22 12:29:38.397 [migrateThread] starting receiving-end of migration of chunk { _id: 48723.0 } -> { _id: 49660.0 } for collection test.foo from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:38.407 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 178, clonedBytes: 94340, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 55000 m30001| Fri Feb 22 12:29:38.417 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "localhost:30001", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 482, clonedBytes: 255460, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:38.427 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 746, clonedBytes: 395380, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:38.435 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:38.435 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 48723.0 } -> { _id: 49660.0 } m30001| Fri Feb 22 12:29:38.438 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:38.438 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 48723.0 } -> { _id: 49660.0 } m30001| Fri Feb 22 12:29:38.454 [conn4] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "localhost:30001", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:38.454 [conn4] moveChunk setting version to: 54|0||51276475bd1f99446659365b m30000| Fri Feb 22 12:29:38.454 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:38.456 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 410 version: 53|1||51276475bd1f99446659365b based on: 53|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:38.456 [conn1] warning: chunk 
manager reload forced for collection 'test.foo', config version is 53|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:38.456 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 53000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 410 m30001| Fri Feb 22 12:29:38.456 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:38.459 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 48723.0 } -> { _id: 49660.0 } m30000| Fri Feb 22 12:29:38.459 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 48723.0 } -> { _id: 49660.0 } m30000| Fri Feb 22 12:29:38.459 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b2c49297cf54df563c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536178459), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 48723.0 }, max: { _id: 49660.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 37, step4 of 5: 0, step5 of 5: 23 } } m30001| Fri Feb 22 12:29:38.464 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "localhost:30001", min: { _id: 48723.0 }, max: { _id: 49660.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 937, clonedBytes: 496610, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:38.464 [conn4] moveChunk updating self version to: 54|1||51276475bd1f99446659365b through { _id: 49660.0 } -> { _id: 50597.0 } for collection 'test.foo' m30001| Fri Feb 22 12:29:38.465 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b299334798f3e47f4e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536178465), what: 
"moveChunk.commit", ns: "test.foo", details: { min: { _id: 48723.0 }, max: { _id: 49660.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:38.465 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:38.465 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 53000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ns: "test.foo", version: Timestamp 53000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), globalVersion: Timestamp 54000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365b'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.foo'" } m30001| Fri Feb 22 12:29:38.465 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:38.465 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:38.465 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:38.465 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:38.465 [cleanupOldData-512764b299334798f3e47f4f] (start) waiting to cleanup test.foo from { _id: 48723.0 } -> { _id: 49660.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:38.466 [conn4] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:38.466 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b299334798f3e47f50", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536178466), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 48723.0 }, max: { _id: 49660.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 2, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:38.466 [Balancer] moveChunk result: { ok: 1.0 } m30001| Fri Feb 22 12:29:38.466 [cleanupOldData-512764b199334798f3e47f4a] moveChunk deleted 462 documents for test.bar from { _id: 23561.0 } -> { _id: 24023.0 } m30001| Fri Feb 22 12:29:38.466 [cleanupOldData-512764b099334798f3e47f3b] moveChunk starting delete for: test.foo from { _id: 46849.0 } -> { _id: 47786.0 } m30999| Fri Feb 22 12:29:38.466 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 411 version: 54|1||51276475bd1f99446659365b based on: 53|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:38.467 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 412 version: 54|1||51276475bd1f99446659365b based on: 53|1||51276475bd1f99446659365b m30999| Fri Feb 22 12:29:38.468 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 53|1||000000000000000000000000min: { _id: 24023.0 }max: { _id: 24485.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:38.468 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:38.468 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 24023.0 }, max: { _id: 24485.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_24023.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30999| Fri Feb 22 12:29:38.469 [conn1] 
ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 413 version: 54|1||51276475bd1f99446659365b based on: 54|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:38.469 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b299334798f3e47f51 m30999| Fri Feb 22 12:29:38.469 [conn1] warning: chunk manager reload forced for collection 'test.foo', config version is 54|1||51276475bd1f99446659365b m30001| Fri Feb 22 12:29:38.469 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b299334798f3e47f52", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536178469), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 24023.0 }, max: { _id: 24485.0 }, from: "shard0001", to: "shard0000" } } m30999| Fri Feb 22 12:29:38.469 [conn1] setShardVersion shard0001 localhost:30001 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 54000|1, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 413 m30999| Fri Feb 22 12:29:38.469 [conn1] setShardVersion success: { oldVersion: Timestamp 53000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365b'), ok: 1.0 } m30001| Fri Feb 22 12:29:38.470 [conn4] moveChunk request accepted at version 53|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:38.471 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:38.471 [migrateThread] starting receiving-end of migration of chunk { _id: 24023.0 } -> { _id: 24485.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:38.481 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24023.0 }, max: { _id: 24485.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 197, 
clonedBytes: 205471, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:38.485 [cleanupOldData-512764b299334798f3e47f4f] waiting to remove documents for test.foo from { _id: 48723.0 } -> { _id: 49660.0 } m30001| Fri Feb 22 12:29:38.491 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24023.0 }, max: { _id: 24485.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:38.492 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:38.492 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 24023.0 } -> { _id: 24485.0 } m30000| Fri Feb 22 12:29:38.493 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 24023.0 } -> { _id: 24485.0 } m30001| Fri Feb 22 12:29:38.501 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24023.0 }, max: { _id: 24485.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:38.502 [conn4] moveChunk setting version to: 54|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:38.502 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:38.504 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 24023.0 } -> { _id: 24485.0 } m30000| Fri Feb 22 12:29:38.504 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 24023.0 } -> { _id: 24485.0 } m30000| Fri Feb 22 12:29:38.504 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b2c49297cf54df563d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536178504), what: "moveChunk.to", ns: 
"test.bar", details: { min: { _id: 24023.0 }, max: { _id: 24485.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 19, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:38.504 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 414 version: 53|1||51276475bd1f99446659365c based on: 53|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:38.505 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 53|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:38.505 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 53000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 414 m30001| Fri Feb 22 12:29:38.505 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:38.512 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 24023.0 }, max: { _id: 24485.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:38.512 [conn4] moveChunk updating self version to: 54|1||51276475bd1f99446659365c through { _id: 24485.0 } -> { _id: 24947.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:38.513 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b299334798f3e47f53", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536178512), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 24023.0 }, max: { _id: 24485.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:38.513 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:38.513 [conn4] MigrateFromStatus::done Global 
lock acquired m30001| Fri Feb 22 12:29:38.513 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:38.513 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:29:38.513 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 53000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 53000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 54000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:38.513 [cleanupOldData-512764b299334798f3e47f54] (start) waiting to cleanup test.bar from { _id: 24023.0 } -> { _id: 24485.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:38.513 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:38.513 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
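The "setShardVersion failed!" record above is the expected stale-routing handshake: mongos still holds version 53|1 for test.bar, the donor shard's global version was just bumped to 54|0 by the migration, so the shard rejects the call with "shard global version for collection is higher" and `reloadConfig: true`, and mongos reloads the chunk manager and retries with 54|1. A minimal Python sketch of that comparison (illustrative only, not MongoDB's actual C++ implementation; all names are made up):

```python
from collections import namedtuple

# A chunk version is (major, minor) plus a collection epoch.
ChunkVersion = namedtuple("ChunkVersion", ["major", "minor", "epoch"])

def check_shard_version(requested, global_version):
    """Shard-side check: reject a stale config version and tell mongos to reload."""
    if requested.epoch != global_version.epoch:
        return False, {"reloadConfig": True, "errmsg": "epoch mismatch"}
    if (requested.major, requested.minor) < (global_version.major, global_version.minor):
        return False, {
            "reloadConfig": True,
            "errmsg": "shard global version for collection is higher than trying to set to",
        }
    return True, {"ok": 1.0}

epoch = "51276475bd1f99446659365c"
# mongos sends 53|1 after the shard already moved to 54|0 -> rejected:
ok, resp = check_shard_version(ChunkVersion(53, 1, epoch), ChunkVersion(54, 0, epoch))
# After reloading, mongos retries with 54|1 -> accepted.
```

After the forced reload, the retry at 12:29:38.519 in the log succeeds with `setShardVersion success`, matching the second branch never firing for 54|1 against 54|0.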
m30001| Fri Feb 22 12:29:38.513 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:38-512764b299334798f3e47f55", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536178513), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 24023.0 }, max: { _id: 24485.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:38.513 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:38.514 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 415 version: 54|1||51276475bd1f99446659365c based on: 53|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:38.516 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 416 version: 54|1||51276475bd1f99446659365c based on: 53|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:38.517 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:38.517 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:38.518 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 417 version: 54|1||51276475bd1f99446659365c based on: 54|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:38.518 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 54|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:38.519 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 54000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 417 m30999| Fri Feb 22 12:29:38.519 [conn1] setShardVersion success: { oldVersion: Timestamp 53000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:38.533 [cleanupOldData-512764b299334798f3e47f54] waiting to remove documents for test.bar from { _id: 24023.0 } -> { _id: 24485.0 } m30999| Fri Feb 22 12:29:38.790 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:29:38 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms m30001| Fri Feb 22 12:29:39.357 [cleanupOldData-512764b099334798f3e47f3b] moveChunk deleted 937 documents for test.foo from { _id: 46849.0 } -> { _id: 47786.0 } m30001| Fri Feb 22 12:29:39.357 [cleanupOldData-512764b299334798f3e47f54] moveChunk starting delete for: test.bar from { _id: 24023.0 } -> { _id: 24485.0 } 56000 m30999| Fri Feb 22 12:29:39.518 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:39.518 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:39.518 [Balancer] about to acquire distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:39 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764b3bd1f994466593692" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764b2bd1f994466593691" } } m30999| Fri Feb 22 12:29:39.519 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b3bd1f994466593692 m30999| Fri Feb 22 12:29:39.519 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:39.519 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:39.519 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:39.521 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:39.521 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:39.521 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:39.521 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:39.521 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:39.523 [Balancer] shard0001 has more chunks me:164 best: shard0000:53 m30999| Fri Feb 22 12:29:39.523 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:39.523 [Balancer] donor : shard0001 chunks on 164 m30999| Fri Feb 22 12:29:39.523 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:39.523 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:39.523 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_24485.0", lastmod: Timestamp 54000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 24485.0 }, max: { _id: 24947.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:39.523 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: 
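The balancer records above show the selection inputs for each collection: donor chunk count, receiver chunk count, and a threshold of 2. In the round at 12:29:39, test.foo (54 vs 53) is left alone while test.bar (164 vs 53) gets one chunk moved. A hedged sketch of that decision (the server's exact comparison and threshold policy may differ; this just reproduces the behavior visible in the log):

```python
def pick_migration(chunk_counts, threshold=2):
    """Pick one chunk to move from the most-loaded shard to the least-loaded
    shard, but only when the imbalance reaches the threshold."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] >= threshold:
        return donor, receiver
    return None

# From the log: test.foo stays put, test.bar migrates shard0001 -> shard0000.
foo_move = pick_migration({"shard0001": 54, "shard0000": 53})
bar_move = pick_migration({"shard0001": 164, "shard0000": 53})
```

Each successful move shifts one chunk, which is why the counts tick from 164/53 to 163/54 to 162/55 across the three rounds in this excerpt.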
shard0001:localhost:30001lastmod: 54|1||000000000000000000000000min: { _id: 24485.0 }max: { _id: 24947.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:39.523 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:39.523 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 24485.0 }, max: { _id: 24947.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_24485.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:39.524 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b399334798f3e47f56 m30001| Fri Feb 22 12:29:39.524 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:39-512764b399334798f3e47f57", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536179524), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 24485.0 }, max: { _id: 24947.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:39.526 [conn4] moveChunk request accepted at version 54|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:39.527 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:39.527 [migrateThread] starting receiving-end of migration of chunk { _id: 24485.0 } -> { _id: 24947.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:39.537 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24485.0 }, max: { _id: 24947.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 133504, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:39.547 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 24485.0 }, max: { _id: 24947.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 307, clonedBytes: 320201, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:39.556 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:39.556 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 24485.0 } -> { _id: 24947.0 } m30001| Fri Feb 22 12:29:39.558 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24485.0 }, max: { _id: 24947.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:39.559 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 24485.0 } -> { _id: 24947.0 } m30001| Fri Feb 22 12:29:39.568 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24485.0 }, max: { _id: 24947.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:39.568 [conn4] moveChunk setting version to: 55|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:39.568 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:39.569 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 24485.0 } -> { _id: 24947.0 } m30000| Fri Feb 22 12:29:39.569 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 24485.0 } -> { _id: 24947.0 } m30000| Fri Feb 22 12:29:39.569 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:39-512764b3c49297cf54df563e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536179569), what: "moveChunk.to", ns: "test.bar", details: 
{ min: { _id: 24485.0 }, max: { _id: 24947.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:39.571 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 418 version: 54|1||51276475bd1f99446659365c based on: 54|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:39.571 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 54|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:39.571 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 54000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 418 m30001| Fri Feb 22 12:29:39.571 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:39.578 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 24485.0 }, max: { _id: 24947.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:39.578 [conn4] moveChunk updating self version to: 55|1||51276475bd1f99446659365c through { _id: 24947.0 } -> { _id: 25409.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:39.579 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:39-512764b399334798f3e47f58", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536179579), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 24485.0 }, max: { _id: 24947.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:39.579 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:39.579 [conn4] MigrateFromStatus::done Global lock acquired m30999| 
Fri Feb 22 12:29:39.579 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 54000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 54000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 55000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:39.579 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:39.579 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:39.579 [cleanupOldData-512764b399334798f3e47f59] (start) waiting to cleanup test.bar from { _id: 24485.0 } -> { _id: 24947.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:39.579 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:39.579 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
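The "moveChunk data transfer progress" documents polled by the donor above walk the recipient through clone, catchup, and steady before the donor sets the new chunk version and enters its critical section. A minimal sketch of the donor-side gate (illustrative names and structure, not the real migration code):

```python
# Recipient-side migration states as they appear in the progress documents.
MIGRATE_STATES = ("ready", "clone", "catchup", "steady", "commit", "done")

def ready_for_commit(progress, expected_docs):
    """Donor-side test: only enter the critical section and bump the chunk
    version once the recipient reports 'steady' with every document cloned."""
    return (progress["state"] == "steady"
            and progress["counts"]["cloned"] == expected_docs)

# Mirrors the log: 128/462 cloned is not enough; steady with 462/462 is.
early = ready_for_commit({"state": "clone", "counts": {"cloned": 128}}, 462)
done = ready_for_commit({"state": "steady", "counts": {"cloned": 462}}, 462)
```

Once this gate passes, the log shows "moveChunk setting version to: 55|0" on the donor and "Waiting for commit to finish" on the recipient.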
m30001| Fri Feb 22 12:29:39.579 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:39-512764b399334798f3e47f5a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536179579), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 24485.0 }, max: { _id: 24947.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:39.580 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:39.581 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 419 version: 55|1||51276475bd1f99446659365c based on: 54|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:39.582 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 420 version: 55|1||51276475bd1f99446659365c based on: 54|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:39.583 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:39.583 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
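Note the six step timings in each "moveChunk.from" event above (lock acquisition through cleanup handoff); step4, the data transfer, dominates at 30-40ms per chunk here. The five "moveChunk.to" steps on the recipient tell the same story from the other side, with step3 (cloning) taking most of the time.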
m30999| Fri Feb 22 12:29:39.584 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 421 version: 55|1||51276475bd1f99446659365c based on: 55|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:39.585 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 55|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:39.585 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 55000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 421 m30999| Fri Feb 22 12:29:39.585 [conn1] setShardVersion success: { oldVersion: Timestamp 54000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:39.599 [cleanupOldData-512764b399334798f3e47f59] waiting to remove documents for test.bar from { _id: 24485.0 } -> { _id: 24947.0 } m30001| Fri Feb 22 12:29:39.825 [cleanupOldData-512764b299334798f3e47f54] moveChunk deleted 462 documents for test.bar from { _id: 24023.0 } -> { _id: 24485.0 } m30001| Fri Feb 22 12:29:39.825 [cleanupOldData-512764b399334798f3e47f59] moveChunk starting delete for: test.bar from { _id: 24485.0 } -> { _id: 24947.0 } m30001| Fri Feb 22 12:29:40.302 [cleanupOldData-512764b399334798f3e47f59] moveChunk deleted 462 documents for test.bar from { _id: 24485.0 } -> { _id: 24947.0 } m30001| Fri Feb 22 12:29:40.302 [cleanupOldData-512764b299334798f3e47f4f] moveChunk starting delete for: test.foo from { _id: 48723.0 } -> { _id: 49660.0 } 57000 m30999| Fri Feb 22 12:29:40.584 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:40.584 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:40.584 [Balancer] about to 
acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:40 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764b4bd1f994466593693" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764b3bd1f994466593692" } }
m30999| Fri Feb 22 12:29:40.585 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b4bd1f994466593693
m30999| Fri Feb 22 12:29:40.585 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:40.585 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:40.585 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:40.587 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:29:40.587 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:40.587 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:29:40.587 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:29:40.587 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:40.589 [Balancer] shard0001 has more chunks me:163 best: shard0000:54
m30999| Fri Feb 22 12:29:40.589 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:40.589 [Balancer] donor : shard0001 chunks on 163
m30999| Fri Feb 22 12:29:40.589 [Balancer] receiver : shard0000 chunks on 54
m30999| Fri Feb 22 12:29:40.589 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:40.589 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_24947.0", lastmod: Timestamp 55000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 24947.0 }, max: { _id: 25409.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:40.589 [Balancer] moving chunk ns: test.bar
moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 55|1||000000000000000000000000min: { _id: 24947.0 }max: { _id: 25409.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:40.589 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:40.589 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 24947.0 }, max: { _id: 25409.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_24947.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:40.590 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b499334798f3e47f5b m30001| Fri Feb 22 12:29:40.590 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:40-512764b499334798f3e47f5c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536180590), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 24947.0 }, max: { _id: 25409.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:40.591 [conn4] moveChunk request accepted at version 55|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:40.593 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:40.593 [migrateThread] starting receiving-end of migration of chunk { _id: 24947.0 } -> { _id: 25409.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:40.603 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24947.0 }, max: { _id: 25409.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 105, clonedBytes: 109515, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:40.613 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.bar", from: "localhost:30001", min: { _id: 24947.0 }, max: { _id: 25409.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 281, clonedBytes: 293083, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:40.623 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24947.0 }, max: { _id: 25409.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 451, clonedBytes: 470393, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:40.624 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:40.624 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 24947.0 } -> { _id: 25409.0 } m30000| Fri Feb 22 12:29:40.626 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 24947.0 } -> { _id: 25409.0 } m30001| Fri Feb 22 12:29:40.633 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 24947.0 }, max: { _id: 25409.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:40.634 [conn4] moveChunk setting version to: 56|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:40.634 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:40.636 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 422 version: 55|1||51276475bd1f99446659365c based on: 55|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:40.636 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 55|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:40.636 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 55000|1, versionEpoch: 
ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 422 m30001| Fri Feb 22 12:29:40.636 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:40.637 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 24947.0 } -> { _id: 25409.0 } m30000| Fri Feb 22 12:29:40.637 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 24947.0 } -> { _id: 25409.0 } m30000| Fri Feb 22 12:29:40.637 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:40-512764b4c49297cf54df563f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536180637), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 24947.0 }, max: { _id: 25409.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:40.644 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 24947.0 }, max: { _id: 25409.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:40.644 [conn4] moveChunk updating self version to: 56|1||51276475bd1f99446659365c through { _id: 25409.0 } -> { _id: 25871.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:40.644 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:40-512764b499334798f3e47f5d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536180644), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 24947.0 }, max: { _id: 25409.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:40.645 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:40.645 [conn4] 
MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:40.645 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:40.645 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 55000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 55000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 56000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:40.645 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:40.645 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:40.645 [cleanupOldData-512764b499334798f3e47f5e] (start) waiting to cleanup test.bar from { _id: 24947.0 } -> { _id: 25409.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:40.645 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:40.645 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:40-512764b499334798f3e47f5f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536180645), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 24947.0 }, max: { _id: 25409.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:40.645 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:40.646 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 423 version: 56|1||51276475bd1f99446659365c based on: 55|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:40.648 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 424 version: 56|1||51276475bd1f99446659365c based on: 55|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:40.648 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:40.648 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
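Because the moveChunk requests above carry `waitForDelete: false`, the donor returns success and "fork[s] for cleanup of chunk data": deletion of the moved range is queued and runs later on a cleanupOldData thread, which is why "(start) waiting to cleanup", "starting delete for", and "deleted 462 documents" appear well after the migration commits. A minimal sketch of that deferred-deletion queue (illustrative class and names, not MongoDB's range deleter):

```python
from collections import deque

class RangeDeleter:
    """Queue of moved ranges to delete asynchronously after a migration."""
    def __init__(self):
        self.queue = deque()

    def enqueue(self, ns, min_key, max_key):
        # "(start) waiting to cleanup <ns> from { _id: min } -> { _id: max }"
        self.queue.append((ns, min_key, max_key))

    def run_one(self, delete_range):
        # "moveChunk starting delete for: <ns> ..." then "deleted N documents"
        ns, lo, hi = self.queue.popleft()
        return delete_range(ns, lo, hi)

d = RangeDeleter()
d.enqueue("test.bar", 24023.0, 24485.0)
# Stand-in for the real deletion; the log reports 462 documents per chunk here.
deleted = d.run_one(lambda ns, lo, hi: 462)
```

The trade-off `waitForDelete: false` makes is visible in the log: the balancer starts the next round at 12:29:40.584 while the previous chunk's documents are still being deleted on shard0001.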
m30999| Fri Feb 22 12:29:40.650 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 425 version: 56|1||51276475bd1f99446659365c based on: 56|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:40.650 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 56|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:40.650 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 56000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 425
m30999| Fri Feb 22 12:29:40.650 [conn1] setShardVersion success: { oldVersion: Timestamp 55000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:40.665 [cleanupOldData-512764b499334798f3e47f5e] waiting to remove documents for test.bar from { _id: 24947.0 } -> { _id: 25409.0 }
m30001| Fri Feb 22 12:29:41.444 [cleanupOldData-512764b299334798f3e47f4f] moveChunk deleted 937 documents for test.foo from { _id: 48723.0 } -> { _id: 49660.0 }
m30001| Fri Feb 22 12:29:41.444 [cleanupOldData-512764b499334798f3e47f5e] moveChunk starting delete for: test.bar from { _id: 24947.0 } -> { _id: 25409.0 }
58000
m30999| Fri Feb 22 12:29:41.649 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:41.650 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:41.650 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:29:41 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764b5bd1f994466593694" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764b4bd1f994466593693" } }
m30999| Fri Feb 22 12:29:41.651 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b5bd1f994466593694
m30999| Fri Feb 22 12:29:41.651 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:41.651 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:41.651 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:41.653 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:29:41.653 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:41.653 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:29:41.653 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:29:41.653 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:41.655 [Balancer] shard0001 has more chunks me:162 best: shard0000:55
m30999| Fri Feb 22 12:29:41.655 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:41.655 [Balancer] donor : shard0001 chunks on 162
m30999| Fri Feb 22 12:29:41.655 [Balancer] receiver : shard0000 chunks on 55
m30999| Fri Feb 22 12:29:41.655 [Balancer] threshold : 2
m30999| Fri Feb 22 12:29:41.655 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_25409.0", lastmod: Timestamp 56000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 25409.0 }, max: { _id: 25871.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:41.656 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 56|1||000000000000000000000000min: { _id: 25409.0 }max: { _id: 25871.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:41.656 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22
12:29:41.656 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 25409.0 }, max: { _id: 25871.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_25409.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:41.657 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b599334798f3e47f60
m30001| Fri Feb 22 12:29:41.657 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:41-512764b599334798f3e47f61", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536181657), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 25409.0 }, max: { _id: 25871.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:41.658 [conn4] moveChunk request accepted at version 56|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:41.660 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:41.660 [migrateThread] starting receiving-end of migration of chunk { _id: 25409.0 } -> { _id: 25871.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:41.670 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25409.0 }, max: { _id: 25871.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 130, clonedBytes: 135590, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:41.681 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25409.0 }, max: { _id: 25871.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 311, clonedBytes: 324373, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:41.689 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:41.689 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 25409.0 } -> { _id: 25871.0 }
m30001| Fri Feb 22 12:29:41.691 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25409.0 }, max: { _id: 25871.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:41.692 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 25409.0 } -> { _id: 25871.0 }
m30001| Fri Feb 22 12:29:41.701 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25409.0 }, max: { _id: 25871.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:41.701 [conn4] moveChunk setting version to: 57|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:41.701 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:41.702 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 25409.0 } -> { _id: 25871.0 }
m30000| Fri Feb 22 12:29:41.702 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 25409.0 } -> { _id: 25871.0 }
m30000| Fri Feb 22 12:29:41.702 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:41-512764b5c49297cf54df5640", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536181702), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 25409.0 }, max: { _id: 25871.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:41.703 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 426 version: 56|1||51276475bd1f99446659365c based on: 56|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:41.704 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 56|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:41.704 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 56000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 426
m30001| Fri Feb 22 12:29:41.704 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:41.711 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 25409.0 }, max: { _id: 25871.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:41.712 [conn4] moveChunk updating self version to: 57|1||51276475bd1f99446659365c through { _id: 25871.0 } -> { _id: 26333.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:41.713 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:41-512764b599334798f3e47f62", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536181713), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 25409.0 }, max: { _id: 25871.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:41.713 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:29:41.713 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 56000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 56000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 57000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:41.713 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:41.713 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:29:41.713 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:41.713 [cleanupOldData-512764b599334798f3e47f63] (start) waiting to cleanup test.bar from { _id: 25409.0 } -> { _id: 25871.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:41.713 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:41.714 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:41.714 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:41-512764b599334798f3e47f64", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536181714), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 25409.0 }, max: { _id: 25871.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:41.714 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:41.715 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 427 version: 57|1||51276475bd1f99446659365c based on: 56|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:41.717 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 428 version: 57|1||51276475bd1f99446659365c based on: 56|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:41.718 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:41.718 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:41.719 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 429 version: 57|1||51276475bd1f99446659365c based on: 57|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:41.720 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 57|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:41.720 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 57000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 429
m30999| Fri Feb 22 12:29:41.720 [conn1] setShardVersion success: { oldVersion: Timestamp 56000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:41.733 [cleanupOldData-512764b599334798f3e47f63] waiting to remove documents for test.bar from { _id: 25409.0 } -> { _id: 25871.0 }
m30001| Fri Feb 22 12:29:41.900 [cleanupOldData-512764b499334798f3e47f5e] moveChunk deleted 462 documents for test.bar from { _id: 24947.0 } -> { _id: 25409.0 }
m30001| Fri Feb 22 12:29:41.900 [cleanupOldData-512764b599334798f3e47f63] moveChunk starting delete for: test.bar from { _id: 25409.0 } -> { _id: 25871.0 }
m30001| Fri Feb 22 12:29:42.196 [cleanupOldData-512764b599334798f3e47f63] moveChunk deleted 462 documents for test.bar from { _id: 25409.0 } -> { _id: 25871.0 }
m30001| Fri Feb 22 12:29:42.196 [cleanupOldData-512764ac99334798f3e47f1d] moveChunk starting delete for: test.foo from { _id: 44038.0 } -> { _id: 44975.0 }
m30999| Fri Feb 22 12:29:42.719 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:42.719 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:42.719 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:29:42 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512764b6bd1f994466593695" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512764b5bd1f994466593694" } }
m30999| Fri Feb 22 12:29:42.720 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b6bd1f994466593695
m30999| Fri Feb 22 12:29:42.720 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:42.720 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:42.720 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:42.722 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:29:42.722 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:42.722 [Balancer] donor      : shard0001 chunks on 54
m30999| Fri Feb 22 12:29:42.722 [Balancer] receiver   : shard0000 chunks on 53
m30999| Fri Feb 22 12:29:42.722 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:42.724 [Balancer] shard0001 has more chunks me:161 best: shard0000:56
m30999| Fri Feb 22 12:29:42.724 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:42.724 [Balancer] donor      : shard0001 chunks on 161
m30999| Fri Feb 22 12:29:42.724 [Balancer] receiver   : shard0000 chunks on 56
m30999| Fri Feb 22 12:29:42.724 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:42.724 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_25871.0", lastmod: Timestamp 57000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 25871.0 }, max: { _id: 26333.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:42.724 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 57|1||000000000000000000000000min: { _id: 25871.0 }max: { _id: 26333.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:42.724 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:42.725 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 25871.0 }, max: { _id: 26333.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_25871.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:42.725 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b699334798f3e47f65
m30001| Fri Feb 22 12:29:42.725 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:42-512764b699334798f3e47f66", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536182725), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 25871.0 }, max: { _id: 26333.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:42.726 [conn4] moveChunk request accepted at version 57|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:42.728 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:42.728 [migrateThread] starting receiving-end of migration of chunk { _id: 25871.0 } -> { _id: 26333.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:42.738 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25871.0 }, max: { _id: 26333.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 102, clonedBytes: 106386, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
 m30001| Fri Feb 22 12:29:42.748 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25871.0 }, max: { _id: 26333.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 281, clonedBytes: 293083, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:42.759 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25871.0 }, max: { _id: 26333.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 457, clonedBytes: 476651, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:42.759 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:42.759 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 25871.0 } -> { _id: 26333.0 }
m30000| Fri Feb 22 12:29:42.761 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 25871.0 } -> { _id: 26333.0 }
m30001| Fri Feb 22 12:29:42.769 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 25871.0 }, max: { _id: 26333.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:42.769 [conn4] moveChunk setting version to: 58|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:42.769 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:29:42.771 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 25871.0 } -> { _id: 26333.0 }
m30000| Fri Feb 22 12:29:42.771 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 25871.0 } -> { _id: 26333.0 }
m30000| Fri Feb 22 12:29:42.771 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:42-512764b6c49297cf54df5641", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536182771), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 25871.0 }, max: { _id: 26333.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:29:42.772 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 430 version: 57|1||51276475bd1f99446659365c based on: 57|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:42.772 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 57|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:42.772 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 57000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 430
m30001| Fri Feb 22 12:29:42.772 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:29:42.779 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 25871.0 }, max: { _id: 26333.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:42.779 [conn4] moveChunk updating self version to: 58|1||51276475bd1f99446659365c through { _id: 26333.0 } -> { _id: 26795.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:42.780 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:42-512764b699334798f3e47f67", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536182780), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 25871.0 }, max: { _id: 26333.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:42.780 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:42.780 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:29:42.780 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:29:42.780 [conn4] forking for cleanup of chunk data
m30999| { oldVersion: Timestamp 57000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 57000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 58000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:42.780 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:42.780 [cleanupOldData-512764b699334798f3e47f68] (start) waiting to cleanup test.bar from { _id: 25871.0 } -> { _id: 26333.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:42.780 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:42.780 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:42.781 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:42-512764b699334798f3e47f69", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536182781), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 25871.0 }, max: { _id: 26333.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:42.781 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:42.782 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 431 version: 58|1||51276475bd1f99446659365c based on: 57|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:42.783 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 432 version: 58|1||51276475bd1f99446659365c based on: 57|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:42.784 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:42.784 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:42.786 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 433 version: 58|1||51276475bd1f99446659365c based on: 58|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:42.786 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 58|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:42.786 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 58000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 433
m30999| Fri Feb 22 12:29:42.786 [conn1] setShardVersion success: { oldVersion: Timestamp 57000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:42.800 [cleanupOldData-512764b699334798f3e47f68] waiting to remove documents for test.bar from { _id: 25871.0 } -> { _id: 26333.0 }
m30001| Fri Feb 22 12:29:43.393 [cleanupOldData-512764ac99334798f3e47f1d] moveChunk deleted 937 documents for test.foo from { _id: 44038.0 } -> { _id: 44975.0 }
m30001| Fri Feb 22 12:29:43.393 [cleanupOldData-512764b699334798f3e47f68] moveChunk starting delete for: test.bar from { _id: 25871.0 } -> { _id: 26333.0 }
60000
m30999| Fri Feb 22 12:29:43.785 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:43.785 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:43.786 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:29:43 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512764b7bd1f994466593696" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512764b6bd1f994466593695" } }
m30999| Fri Feb 22 12:29:43.786 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b7bd1f994466593696
m30999| Fri Feb 22 12:29:43.786 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:43.786 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:43.786 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:43.788 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:29:43.788 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:43.788 [Balancer] donor      : shard0001 chunks on 54
m30999| Fri Feb 22 12:29:43.788 [Balancer] receiver   : shard0000 chunks on 53
m30999| Fri Feb 22 12:29:43.788 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:43.790 [Balancer] shard0001 has more chunks me:160 best: shard0000:57
m30999| Fri Feb 22 12:29:43.790 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:43.790 [Balancer] donor      : shard0001 chunks on 160
m30999| Fri Feb 22 12:29:43.790 [Balancer] receiver   : shard0000 chunks on 57
m30999| Fri Feb 22 12:29:43.790 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:43.790 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_26333.0", lastmod: Timestamp 58000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 26333.0 }, max: { _id: 26795.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:43.791 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 58|1||000000000000000000000000min: { _id: 26333.0 }max: { _id: 26795.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:43.791 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:43.791 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 26333.0 }, max: { _id: 26795.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_26333.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:43.792 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b799334798f3e47f6a
m30001| Fri Feb 22 12:29:43.792 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:43-512764b799334798f3e47f6b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536183792), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 26333.0 }, max: { _id: 26795.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:43.793 [conn4] moveChunk request accepted at version 58|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:43.794 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:43.795 [migrateThread] starting receiving-end of migration of chunk { _id: 26333.0 } -> { _id: 26795.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:43.805 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26333.0 }, max: { _id: 26795.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 184, clonedBytes: 191912, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:43.815 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26333.0 }, max: { _id: 26795.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 446, clonedBytes: 465178, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:29:43.816 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:29:43.816 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 26333.0 } -> { _id: 26795.0 }
m30000| Fri Feb 22 12:29:43.818 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 26333.0 } -> { _id: 26795.0 }
m30001| Fri Feb 22 12:29:43.825 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26333.0 }, max: { _id: 26795.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:29:43.825 [conn4] moveChunk setting version to: 59|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:29:43.826 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:29:43.828 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 434 version: 58|1||51276475bd1f99446659365c based on: 58|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:43.828 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 58|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:43.828 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 58000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 434
m30001| Fri Feb 22 12:29:43.828 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:29:43.829 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 26333.0 } -> { _id: 26795.0 }
m30000| Fri Feb 22 12:29:43.829 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 26333.0 } -> { _id: 26795.0 }
m30000| Fri Feb 22 12:29:43.829 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:43-512764b7c49297cf54df5642", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536183829), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 26333.0 }, max: { _id: 26795.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:29:43.836 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 26333.0 }, max: { _id: 26795.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:29:43.836 [conn4] moveChunk updating self version to: 59|1||51276475bd1f99446659365c through { _id: 26795.0 } -> { _id: 27257.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:29:43.836 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:43-512764b799334798f3e47f6c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536183836), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 26333.0 }, max: { _id: 26795.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:43.837 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:43.837 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:43.837 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:29:43.837 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 58000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 58000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 59000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:29:43.837 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:29:43.837 [cleanupOldData-512764b799334798f3e47f6d] (start) waiting to cleanup test.bar from { _id: 26333.0 } -> { _id: 26795.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:29:43.837 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:29:43.837 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:43.837 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:43-512764b799334798f3e47f6e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536183837), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 26333.0 }, max: { _id: 26795.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:29:43.837 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:29:43.838 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 435 version: 59|1||51276475bd1f99446659365c based on: 58|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:43.840 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 436 version: 59|1||51276475bd1f99446659365c based on: 58|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:43.841 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:29:43.841 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:29:43.842 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 437 version: 59|1||51276475bd1f99446659365c based on: 59|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:43.842 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 59|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:29:43.843 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 59000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 437
m30999| Fri Feb 22 12:29:43.843 [conn1] setShardVersion success: { oldVersion: Timestamp 58000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:29:43.857 [cleanupOldData-512764b799334798f3e47f6d] waiting to remove documents for test.bar from { _id: 26333.0 } -> { _id: 26795.0 }
m30001| Fri Feb 22 12:29:43.865 [cleanupOldData-512764b699334798f3e47f68] moveChunk deleted 462 documents for test.bar from { _id: 25871.0 } -> { _id: 26333.0 }
m30001| Fri Feb 22 12:29:43.865 [cleanupOldData-512764b799334798f3e47f6d] moveChunk starting delete for: test.bar from { _id: 26333.0 } -> { _id: 26795.0 }
m30001| Fri Feb 22 12:29:44.495 [cleanupOldData-512764b799334798f3e47f6d] moveChunk deleted 462 documents for test.bar from { _id: 26333.0 } -> { _id: 26795.0 }
m30001| Fri Feb 22 12:29:44.495 [cleanupOldData-512764aa99334798f3e47f09] moveChunk starting delete for: test.foo from { _id: 42164.0 } -> { _id: 43101.0 }
m30999| Fri Feb 22 12:29:44.842 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:29:44.842 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:29:44.842 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:29:44 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "512764b8bd1f994466593697" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512764b7bd1f994466593696" } }
m30999| Fri Feb 22 12:29:44.843 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b8bd1f994466593697
m30999| Fri Feb 22 12:29:44.843 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:29:44.843 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:29:44.843 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:29:44.845 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:29:44.845 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:29:44.845 [Balancer] donor      : shard0001 chunks on 54
m30999| Fri Feb 22 12:29:44.845 [Balancer] receiver   : shard0000 chunks on 53
m30999| Fri Feb 22 12:29:44.845 [Balancer] threshold  : 2
61000
m30999| Fri Feb 22 12:29:44.847 [Balancer] shard0001 has more chunks me:159 best: shard0000:58
m30999| Fri Feb 22 12:29:44.847 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:29:44.847 [Balancer] donor      : shard0001 chunks on 159
m30999| Fri Feb 22 12:29:44.847 [Balancer] receiver   : shard0000 chunks on 58
m30999| Fri Feb 22 12:29:44.847 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:29:44.847 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_26795.0", lastmod: Timestamp 59000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 26795.0 }, max: { _id: 27257.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:29:44.847 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 59|1||000000000000000000000000min: { _id: 26795.0 }max: { _id: 27257.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:29:44.847 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:29:44.848 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 26795.0 }, max: { _id: 27257.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_26795.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:29:44.849 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b899334798f3e47f6f
m30001| Fri Feb 22 12:29:44.849 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:44-512764b899334798f3e47f70", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536184849), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 26795.0 }, max: { _id: 27257.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:29:44.850 [conn4] moveChunk request accepted at version 59|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:29:44.851 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:29:44.851 [migrateThread] starting receiving-end of migration of chunk { _id: 26795.0 } -> { _id: 27257.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:29:44.861 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26795.0 }, max: { _id: 27257.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 110, clonedBytes: 114730, catchup: 0, steady: 0 },
ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:44.872 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26795.0 }, max: { _id: 27257.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 281, clonedBytes: 293083, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:44.882 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26795.0 }, max: { _id: 27257.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 448, clonedBytes: 467264, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:44.883 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:44.883 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 26795.0 } -> { _id: 27257.0 } m30000| Fri Feb 22 12:29:44.886 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 26795.0 } -> { _id: 27257.0 } m30001| Fri Feb 22 12:29:44.892 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 26795.0 }, max: { _id: 27257.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:44.892 [conn4] moveChunk setting version to: 60|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:44.892 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:44.895 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 438 version: 59|1||51276475bd1f99446659365c based on: 59|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:44.895 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 59|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:44.895 [conn1] setShardVersion shard0001 localhost:30001 test.bar { 
setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 59000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 438 m30001| Fri Feb 22 12:29:44.895 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:44.896 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 26795.0 } -> { _id: 27257.0 } m30000| Fri Feb 22 12:29:44.896 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 26795.0 } -> { _id: 27257.0 } m30000| Fri Feb 22 12:29:44.896 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:44-512764b8c49297cf54df5643", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536184896), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 26795.0 }, max: { _id: 27257.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:44.903 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 26795.0 }, max: { _id: 27257.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:44.903 [conn4] moveChunk updating self version to: 60|1||51276475bd1f99446659365c through { _id: 27257.0 } -> { _id: 27719.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:44.903 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:44-512764b899334798f3e47f71", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536184903), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 26795.0 }, max: { _id: 27257.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:44.903 [conn4] MigrateFromStatus::done About to 
acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:44.903 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:44.903 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:44.903 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 59000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 59000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 60000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:44.903 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:44.903 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:44.903 [cleanupOldData-512764b899334798f3e47f72] (start) waiting to cleanup test.bar from { _id: 26795.0 } -> { _id: 27257.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:44.904 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:44.904 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:44-512764b899334798f3e47f73", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536184904), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 26795.0 }, max: { _id: 27257.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:44.904 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:44.905 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 439 version: 60|1||51276475bd1f99446659365c based on: 59|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:44.907 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 440 version: 60|1||51276475bd1f99446659365c based on: 59|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:44.907 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:44.908 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:44.909 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 441 version: 60|1||51276475bd1f99446659365c based on: 60|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:44.909 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 60|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:44.909 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 60000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 441 m30999| Fri Feb 22 12:29:44.909 [conn1] setShardVersion success: { oldVersion: Timestamp 59000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:44.924 [cleanupOldData-512764b899334798f3e47f72] waiting to remove documents for test.bar from { _id: 26795.0 } -> { _id: 27257.0 } m30001| Fri Feb 22 12:29:45.721 [cleanupOldData-512764aa99334798f3e47f09] moveChunk deleted 937 documents for test.foo from { _id: 42164.0 } -> { _id: 43101.0 } m30001| Fri Feb 22 12:29:45.721 [cleanupOldData-512764b899334798f3e47f72] moveChunk starting delete for: test.bar from { _id: 26795.0 } -> { _id: 27257.0 } 62000 m30999| Fri Feb 22 12:29:45.908 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:45.909 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:45.909 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:45 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764b9bd1f994466593698" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764b8bd1f994466593697" } } m30999| Fri Feb 22 12:29:45.910 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764b9bd1f994466593698 m30999| Fri Feb 22 12:29:45.910 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:45.910 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:45.910 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:45.912 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:45.912 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:45.912 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:45.912 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:45.912 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:45.914 [Balancer] shard0001 has more chunks me:158 best: shard0000:59 m30999| Fri Feb 22 12:29:45.914 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:45.914 [Balancer] donor : shard0001 chunks on 158 m30999| Fri Feb 22 12:29:45.914 [Balancer] receiver : shard0000 chunks on 59 m30999| Fri Feb 22 12:29:45.914 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:45.915 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_27257.0", lastmod: Timestamp 60000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 27257.0 }, max: { _id: 27719.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:45.915 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 60|1||000000000000000000000000min: { _id: 27257.0 }max: { _id: 27719.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:45.915 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:29:45.915 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 27257.0 }, max: { _id: 27719.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_27257.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:45.916 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764b999334798f3e47f74 m30001| Fri Feb 22 12:29:45.916 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:45-512764b999334798f3e47f75", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536185916), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 27257.0 }, max: { _id: 27719.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:45.918 [conn4] moveChunk request accepted at version 60|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:45.919 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:45.919 [migrateThread] starting receiving-end of migration of chunk { _id: 27257.0 } -> { _id: 27719.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:45.929 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27257.0 }, max: { _id: 27719.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 110, clonedBytes: 114730, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:45.940 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27257.0 }, max: { _id: 27719.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 277, clonedBytes: 288911, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:45.950 [conn4] moveChunk data transfer progress: { 
active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27257.0 }, max: { _id: 27719.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 437, clonedBytes: 455791, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:45.952 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:45.952 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 27257.0 } -> { _id: 27719.0 } m30000| Fri Feb 22 12:29:45.955 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 27257.0 } -> { _id: 27719.0 } m30001| Fri Feb 22 12:29:45.960 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27257.0 }, max: { _id: 27719.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:45.960 [conn4] moveChunk setting version to: 61|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:45.961 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:45.964 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 442 version: 60|1||51276475bd1f99446659365c based on: 60|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:45.964 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 60|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:45.964 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 60000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 442 m30001| Fri Feb 22 12:29:45.964 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:45.965 [migrateThread] migrate commit succeeded flushing to secondaries for 
'test.bar' { _id: 27257.0 } -> { _id: 27719.0 } m30000| Fri Feb 22 12:29:45.965 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 27257.0 } -> { _id: 27719.0 } m30000| Fri Feb 22 12:29:45.965 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:45-512764b9c49297cf54df5644", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536185965), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 27257.0 }, max: { _id: 27719.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:45.971 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 27257.0 }, max: { _id: 27719.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:45.971 [conn4] moveChunk updating self version to: 61|1||51276475bd1f99446659365c through { _id: 27719.0 } -> { _id: 28181.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:45.972 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:45-512764b999334798f3e47f76", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536185972), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 27257.0 }, max: { _id: 27719.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:45.972 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:45.972 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:45.972 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:29:45.972 [conn4] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 60000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 60000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 61000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:45.972 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:45.972 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:45.972 [cleanupOldData-512764b999334798f3e47f77] (start) waiting to cleanup test.bar from { _id: 27257.0 } -> { _id: 27719.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:45.972 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:45.972 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:45-512764b999334798f3e47f78", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536185972), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 27257.0 }, max: { _id: 27719.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:45.972 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:45.974 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 443 version: 61|1||51276475bd1f99446659365c based on: 60|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:45.975 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 444 version: 61|1||51276475bd1f99446659365c based on: 60|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:45.976 [Balancer] *** end of balancing round m30999| Fri Feb 22 
12:29:45.977 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:29:45.978 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 445 version: 61|1||51276475bd1f99446659365c based on: 61|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:45.978 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 61|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:45.978 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 61000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 445 m30999| Fri Feb 22 12:29:45.979 [conn1] setShardVersion success: { oldVersion: Timestamp 60000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:45.992 [cleanupOldData-512764b999334798f3e47f77] waiting to remove documents for test.bar from { _id: 27257.0 } -> { _id: 27719.0 } m30001| Fri Feb 22 12:29:46.326 [cleanupOldData-512764b899334798f3e47f72] moveChunk deleted 462 documents for test.bar from { _id: 26795.0 } -> { _id: 27257.0 } m30001| Fri Feb 22 12:29:46.326 [cleanupOldData-512764b999334798f3e47f77] moveChunk starting delete for: test.bar from { _id: 27257.0 } -> { _id: 27719.0 } 63000 m30999| Fri Feb 22 12:29:46.977 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:46.978 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:46.978 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| 
"process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:46 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764babd1f994466593699" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764b9bd1f994466593698" } } m30999| Fri Feb 22 12:29:46.978 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764babd1f994466593699 m30999| Fri Feb 22 12:29:46.978 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:46.979 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:46.979 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:46.980 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:46.980 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:46.980 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:46.980 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:46.981 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:46.983 [Balancer] shard0001 has more chunks me:157 best: shard0000:60 m30999| Fri Feb 22 12:29:46.983 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:46.983 [Balancer] donor : shard0001 chunks on 157 m30999| Fri Feb 22 12:29:46.983 [Balancer] receiver : shard0000 chunks on 60 m30999| Fri Feb 22 12:29:46.983 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:46.983 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_27719.0", lastmod: Timestamp 61000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 27719.0 }, max: { _id: 28181.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:46.983 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 61|1||000000000000000000000000min: { _id: 27719.0 }max: { _id: 28181.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 
22 12:29:46.983 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:46.983 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 27719.0 }, max: { _id: 28181.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_27719.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:46.984 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ba99334798f3e47f79 m30001| Fri Feb 22 12:29:46.984 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:46-512764ba99334798f3e47f7a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536186984), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 27719.0 }, max: { _id: 28181.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:46.985 [conn4] moveChunk request accepted at version 61|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:46.986 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:46.987 [migrateThread] starting receiving-end of migration of chunk { _id: 27719.0 } -> { _id: 28181.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:46.997 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27719.0 }, max: { _id: 28181.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 125, clonedBytes: 130375, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:47.007 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27719.0 }, max: { _id: 28181.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 299, clonedBytes: 311857, catchup: 0, steady: 0 }, ok: 1.0 
} my mem used: 0 m30000| Fri Feb 22 12:29:47.017 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:47.017 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 27719.0 } -> { _id: 28181.0 } m30001| Fri Feb 22 12:29:47.017 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27719.0 }, max: { _id: 28181.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:47.020 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 27719.0 } -> { _id: 28181.0 } m30001| Fri Feb 22 12:29:47.028 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 27719.0 }, max: { _id: 28181.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:47.028 [conn4] moveChunk setting version to: 62|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:47.028 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:47.030 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 446 version: 61|1||51276475bd1f99446659365c based on: 61|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:47.030 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 61|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:47.030 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 27719.0 } -> { _id: 28181.0 } m30000| Fri Feb 22 12:29:47.030 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 27719.0 } -> { _id: 28181.0 } m30999| Fri Feb 22 12:29:47.031 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: 
"localhost:30000", version: Timestamp 61000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 446 m30000| Fri Feb 22 12:29:47.031 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:47-512764bbc49297cf54df5645", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536187031), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 27719.0 }, max: { _id: 28181.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:47.031 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:47.038 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 27719.0 }, max: { _id: 28181.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:47.038 [conn4] moveChunk updating self version to: 62|1||51276475bd1f99446659365c through { _id: 28181.0 } -> { _id: 28643.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:47.039 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:47-512764bb99334798f3e47f7b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536187039), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 27719.0 }, max: { _id: 28181.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:47.039 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:47.039 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:47.039 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:47.039 [conn4] MigrateFromStatus::done About to acquire global write lock to exit 
critical section m30001| Fri Feb 22 12:29:47.039 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:47.039 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 61000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 61000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 62000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:47.039 [cleanupOldData-512764bb99334798f3e47f7c] (start) waiting to cleanup test.bar from { _id: 27719.0 } -> { _id: 28181.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:47.039 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:47.039 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:47-512764bb99334798f3e47f7d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536187039), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 27719.0 }, max: { _id: 28181.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:47.039 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:47.041 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 447 version: 62|1||51276475bd1f99446659365c based on: 61|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:47.042 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 448 version: 62|1||51276475bd1f99446659365c based on: 61|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:47.043 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:47.043 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
 m30001| Fri Feb 22 12:29:47.044 [cleanupOldData-512764b999334798f3e47f77] moveChunk deleted 462 documents for test.bar from { _id: 27257.0 } -> { _id: 27719.0 }
 m30001| Fri Feb 22 12:29:47.044 [cleanupOldData-512764a899334798f3e47ef5] moveChunk starting delete for: test.foo from { _id: 40290.0 } -> { _id: 41227.0 }
 m30999| Fri Feb 22 12:29:47.045 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 449 version: 62|1||51276475bd1f99446659365c based on: 62|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:47.045 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 62|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:47.045 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 62000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 449
 m30999| Fri Feb 22 12:29:47.045 [conn1] setShardVersion success: { oldVersion: Timestamp 61000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:47.059 [cleanupOldData-512764bb99334798f3e47f7c] waiting to remove documents for test.bar from { _id: 27719.0 } -> { _id: 28181.0 }
64000
 m30999| Fri Feb 22 12:29:48.044 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:48.044 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:48.045 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:29:48 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "512764bcbd1f99446659369a" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "512764babd1f994466593699" } }
 m30999| Fri Feb 22 12:29:48.045 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764bcbd1f99446659369a
 m30999| Fri Feb 22 12:29:48.045 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:48.045 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:48.045 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:48.047 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
 m30999| Fri Feb 22 12:29:48.047 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:48.047 [Balancer] donor : shard0001 chunks on 54
 m30999| Fri Feb 22 12:29:48.047 [Balancer] receiver : shard0000 chunks on 53
 m30999| Fri Feb 22 12:29:48.048 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:48.049 [Balancer] shard0001 has more chunks me:156 best: shard0000:61
 m30999| Fri Feb 22 12:29:48.049 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:48.049 [Balancer] donor : shard0001 chunks on 156
 m30999| Fri Feb 22 12:29:48.049 [Balancer] receiver : shard0000 chunks on 61
 m30999| Fri Feb 22 12:29:48.049 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:48.050 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_28181.0", lastmod: Timestamp 62000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 28181.0 }, max: { _id: 28643.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:48.050 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 62|1||000000000000000000000000min: { _id: 28181.0 }max: { _id: 28643.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:48.050 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:48.050 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 28181.0 }, max: { _id: 28643.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_28181.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:48.051 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764bc99334798f3e47f7e
 m30001| Fri Feb 22 12:29:48.051 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:48-512764bc99334798f3e47f7f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536188051), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 28181.0 }, max: { _id: 28643.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:48.052 [conn4] moveChunk request accepted at version 62|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:29:48.053 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:29:48.053 [migrateThread] starting receiving-end of migration of chunk { _id: 28181.0 } -> { _id: 28643.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:48.064 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28181.0 }, max: { _id: 28643.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 129, clonedBytes: 134547, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:48.074 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28181.0 }, max: { _id: 28643.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 312, clonedBytes: 325416, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:29:48.083 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:48.083 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 28181.0 } -> { _id: 28643.0 }
 m30001| Fri Feb 22 12:29:48.084 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28181.0 }, max: { _id: 28643.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:29:48.086 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 28181.0 } -> { _id: 28643.0 }
 m30001| Fri Feb 22 12:29:48.095 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28181.0 }, max: { _id: 28643.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:48.095 [conn4] moveChunk setting version to: 63|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:48.095 [conn11] Waiting for commit to finish
 m30000| Fri Feb 22 12:29:48.096 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 28181.0 } -> { _id: 28643.0 }
 m30000| Fri Feb 22 12:29:48.096 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 28181.0 } -> { _id: 28643.0 }
 m30000| Fri Feb 22 12:29:48.097 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:48-512764bcc49297cf54df5646", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536188097), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 28181.0 }, max: { _id: 28643.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 13 } }
 m30999| Fri Feb 22 12:29:48.097 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 450 version: 62|1||51276475bd1f99446659365c based on: 62|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:48.098 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 62|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:48.098 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 62000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 450
 m30001| Fri Feb 22 12:29:48.098 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:29:48.105 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 28181.0 }, max: { _id: 28643.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:48.105 [conn4] moveChunk updating self version to: 63|1||51276475bd1f99446659365c through { _id: 28643.0 } -> { _id: 29105.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:29:48.106 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:48-512764bc99334798f3e47f80", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536188106), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 28181.0 }, max: { _id: 28643.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:48.106 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| Fri Feb 22 12:29:48.106 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 62000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 62000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 63000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:29:48.106 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:48.106 [conn4] forking for cleanup of chunk data
 m30001| Fri Feb 22 12:29:48.106 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:48.106 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:48.106 [cleanupOldData-512764bc99334798f3e47f81] (start) waiting to cleanup test.bar from { _id: 28181.0 } -> { _id: 28643.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:48.107 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:48.107 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:48-512764bc99334798f3e47f82", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536188107), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 28181.0 }, max: { _id: 28643.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:48.107 [Balancer] moveChunk result: { ok: 1.0 }
 m30001| Fri Feb 22 12:29:48.107 [cleanupOldData-512764a899334798f3e47ef5] moveChunk deleted 937 documents for test.foo from { _id: 40290.0 } -> { _id: 41227.0 }
 m30001| Fri Feb 22 12:29:48.107 [cleanupOldData-512764a199334798f3e47eb9] moveChunk starting delete for: test.foo from { _id: 34668.0 } -> { _id: 35605.0 }
 m30999| Fri Feb 22 12:29:48.108 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 451 version: 63|1||51276475bd1f99446659365c based on: 62|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:48.110 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 452 version: 63|1||51276475bd1f99446659365c based on: 62|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:48.111 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:29:48.112 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
 m30999| Fri Feb 22 12:29:48.113 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 453 version: 63|1||51276475bd1f99446659365c based on: 63|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:48.113 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 63|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:48.113 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 63000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 453
 m30999| Fri Feb 22 12:29:48.113 [conn1] setShardVersion success: { oldVersion: Timestamp 62000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:48.127 [cleanupOldData-512764bc99334798f3e47f81] waiting to remove documents for test.bar from { _id: 28181.0 } -> { _id: 28643.0 }
65000
 m30999| Fri Feb 22 12:29:49.112 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:49.112 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:49.113 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:29:49 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "512764bdbd1f99446659369b" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "512764bcbd1f99446659369a" } }
 m30999| Fri Feb 22 12:29:49.113 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764bdbd1f99446659369b
 m30999| Fri Feb 22 12:29:49.113 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:49.113 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:49.113 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:49.115 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
 m30999| Fri Feb 22 12:29:49.115 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:49.115 [Balancer] donor : shard0001 chunks on 54
 m30999| Fri Feb 22 12:29:49.115 [Balancer] receiver : shard0000 chunks on 53
 m30999| Fri Feb 22 12:29:49.115 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:49.117 [Balancer] shard0001 has more chunks me:155 best: shard0000:62
 m30999| Fri Feb 22 12:29:49.117 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:49.117 [Balancer] donor : shard0001 chunks on 155
 m30999| Fri Feb 22 12:29:49.118 [Balancer] receiver : shard0000 chunks on 62
 m30999| Fri Feb 22 12:29:49.118 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:49.118 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_28643.0", lastmod: Timestamp 63000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 28643.0 }, max: { _id: 29105.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:49.118 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 63|1||000000000000000000000000min: { _id: 28643.0 }max: { _id: 29105.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:49.118 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:49.118 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 28643.0 }, max: { _id: 29105.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_28643.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:49.119 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764bd99334798f3e47f83
 m30001| Fri Feb 22 12:29:49.119 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:49-512764bd99334798f3e47f84", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536189119), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 28643.0 }, max: { _id: 29105.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:49.120 [conn4] moveChunk request accepted at version 63|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:29:49.121 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:29:49.121 [migrateThread] starting receiving-end of migration of chunk { _id: 28643.0 } -> { _id: 29105.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:49.131 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28643.0 }, max: { _id: 29105.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 125, clonedBytes: 130375, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:49.142 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28643.0 }, max: { _id: 29105.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 303, clonedBytes: 316029, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:29:49.151 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:49.151 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 28643.0 } -> { _id: 29105.0 }
 m30001| Fri Feb 22 12:29:49.152 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28643.0 }, max: { _id: 29105.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:29:49.153 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 28643.0 } -> { _id: 29105.0 }
 m30001| Fri Feb 22 12:29:49.162 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 28643.0 }, max: { _id: 29105.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:49.162 [conn4] moveChunk setting version to: 64|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:49.162 [conn11] Waiting for commit to finish
 m30000| Fri Feb 22 12:29:49.164 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 28643.0 } -> { _id: 29105.0 }
 m30000| Fri Feb 22 12:29:49.164 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 28643.0 } -> { _id: 29105.0 }
 m30000| Fri Feb 22 12:29:49.164 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:49-512764bdc49297cf54df5647", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536189164), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 28643.0 }, max: { _id: 29105.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
 m30999| Fri Feb 22 12:29:49.164 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 454 version: 63|1||51276475bd1f99446659365c based on: 63|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:49.165 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 63|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:49.165 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 63000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 454
 m30001| Fri Feb 22 12:29:49.165 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:29:49.171 [cleanupOldData-512764a199334798f3e47eb9] moveChunk deleted 937 documents for test.foo from { _id: 34668.0 } -> { _id: 35605.0 }
 m30001| Fri Feb 22 12:29:49.172 [cleanupOldData-512764bc99334798f3e47f81] moveChunk starting delete for: test.bar from { _id: 28181.0 } -> { _id: 28643.0 }
 m30001| Fri Feb 22 12:29:49.172 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 28643.0 }, max: { _id: 29105.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:49.172 [conn4] moveChunk updating self version to: 64|1||51276475bd1f99446659365c through { _id: 29105.0 } -> { _id: 29567.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:29:49.173 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:49-512764bd99334798f3e47f85", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536189173), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 28643.0 }, max: { _id: 29105.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:49.173 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:49.173 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:49.173 [conn4] forking for cleanup of chunk data
 m30999| Fri Feb 22 12:29:49.173 [conn1] setShardVersion failed!
 m30001| Fri Feb 22 12:29:49.173 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30999| { oldVersion: Timestamp 63000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 63000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 64000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:29:49.173 [cleanupOldData-512764bd99334798f3e47f86] (start) waiting to cleanup test.bar from { _id: 28643.0 } -> { _id: 29105.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:49.173 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:49.174 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
 m30001| Fri Feb 22 12:29:49.174 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:49-512764bd99334798f3e47f87", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536189174), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 28643.0 }, max: { _id: 29105.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
 m30999| Fri Feb 22 12:29:49.174 [Balancer] moveChunk result: { ok: 1.0 }
 m30999| Fri Feb 22 12:29:49.175 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 455 version: 64|1||51276475bd1f99446659365c based on: 63|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:49.177 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 456 version: 64|1||51276475bd1f99446659365c based on: 63|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:49.177 [Balancer] *** end of balancing round
 m30999| Fri Feb 22 12:29:49.178 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
 m30999| Fri Feb 22 12:29:49.179 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 457 version: 64|1||51276475bd1f99446659365c based on: 64|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:49.179 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 64|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:49.179 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 64000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 457
 m30999| Fri Feb 22 12:29:49.179 [conn1] setShardVersion success: { oldVersion: Timestamp 63000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:49.193 [cleanupOldData-512764bd99334798f3e47f86] waiting to remove documents for test.bar from { _id: 28643.0 } -> { _id: 29105.0 }
 m30001| Fri Feb 22 12:29:49.736 [cleanupOldData-512764bc99334798f3e47f81] moveChunk deleted 462 documents for test.bar from { _id: 28181.0 } -> { _id: 28643.0 }
 m30001| Fri Feb 22 12:29:49.736 [cleanupOldData-512764bd99334798f3e47f86] moveChunk starting delete for: test.bar from { _id: 28643.0 } -> { _id: 29105.0 }
 m30001| Fri Feb 22 12:29:50.090 [cleanupOldData-512764bd99334798f3e47f86] moveChunk deleted 462 documents for test.bar from { _id: 28643.0 } -> { _id: 29105.0 }
 m30001| Fri Feb 22 12:29:50.090 [cleanupOldData-5127649d99334798f3e47ea0] moveChunk starting delete for: test.bar from { _id: 15707.0 } -> { _id: 16169.0 }
66000
 m30999| Fri Feb 22 12:29:50.178 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:50.179 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:50.179 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999| "when" : { "$date" : "Fri Feb 22 12:29:50 2013" },
 m30999| "why" : "doing balance round",
 m30999| "ts" : { "$oid" : "512764bebd1f99446659369c" } }
 m30999| { "_id" : "balancer",
 m30999| "state" : 0,
 m30999| "ts" : { "$oid" : "512764bdbd1f99446659369b" } }
 m30999| Fri Feb 22 12:29:50.180 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764bebd1f99446659369c
 m30999| Fri Feb 22 12:29:50.180 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:50.180 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:50.180 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:50.182 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
 m30999| Fri Feb 22 12:29:50.182 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:50.182 [Balancer] donor : shard0001 chunks on 54
 m30999| Fri Feb 22 12:29:50.182 [Balancer] receiver : shard0000 chunks on 53
 m30999| Fri Feb 22 12:29:50.182 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:50.185 [Balancer] shard0001 has more chunks me:154 best: shard0000:63
 m30999| Fri Feb 22 12:29:50.185 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:50.185 [Balancer] donor : shard0001 chunks on 154
 m30999| Fri Feb 22 12:29:50.185 [Balancer] receiver : shard0000 chunks on 63
 m30999| Fri Feb 22 12:29:50.185 [Balancer] threshold : 2
 m30999| Fri Feb 22 12:29:50.185 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_29105.0", lastmod: Timestamp 64000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 29105.0 }, max: { _id: 29567.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:50.185 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 64|1||000000000000000000000000min: { _id: 29105.0 }max: { _id: 29567.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:50.185 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22 12:29:50.185 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 29105.0 }, max: { _id: 29567.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_29105.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
 m30001| Fri Feb 22 12:29:50.186 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764be99334798f3e47f88
 m30001| Fri Feb 22 12:29:50.187 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:50-512764be99334798f3e47f89", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536190187), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 29105.0 }, max: { _id: 29567.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:50.188 [conn4] moveChunk request accepted at version 64|1||51276475bd1f99446659365c
 m30001| Fri Feb 22 12:29:50.189 [conn4] moveChunk number of documents: 462
 m30000| Fri Feb 22 12:29:50.190 [migrateThread] starting receiving-end of migration of chunk { _id: 29105.0 } -> { _id: 29567.0 } for collection test.bar from localhost:30001 (0 slaves detected)
 m30001| Fri Feb 22 12:29:50.200 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29105.0 }, max: { _id: 29567.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 124, clonedBytes: 129332, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:50.210 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29105.0 }, max: { _id: 29567.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 284, clonedBytes: 296212, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:50.220 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29105.0 }, max: { _id: 29567.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30000| Fri Feb 22 12:29:50.221 [migrateThread] Waiting for replication to catch up before entering critical section
 m30000| Fri Feb 22 12:29:50.221 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 29105.0 } -> { _id: 29567.0 }
 m30000| Fri Feb 22 12:29:50.222 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 29105.0 } -> { _id: 29567.0 }
 m30001| Fri Feb 22 12:29:50.231 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29105.0 }, max: { _id: 29567.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
 m30001| Fri Feb 22 12:29:50.231 [conn4] moveChunk setting version to: 65|0||51276475bd1f99446659365c
 m30000| Fri Feb 22 12:29:50.231 [conn11] Waiting for commit to finish
 m30000| Fri Feb 22 12:29:50.233 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 29105.0 } -> { _id: 29567.0 }
 m30000| Fri Feb 22 12:29:50.233 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 29105.0 } -> { _id: 29567.0 }
 m30000| Fri Feb 22 12:29:50.233 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:50-512764bec49297cf54df5648", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536190233), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 29105.0 }, max: { _id: 29567.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 12 } }
 m30999| Fri Feb 22 12:29:50.234 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 458 version: 64|1||51276475bd1f99446659365c based on: 64|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:50.234 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 64|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:50.234 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 64000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 458
 m30001| Fri Feb 22 12:29:50.234 [conn3] waiting till out of critical section
 m30001| Fri Feb 22 12:29:50.241 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 29105.0 }, max: { _id: 29567.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
 m30001| Fri Feb 22 12:29:50.241 [conn4] moveChunk updating self version to: 65|1||51276475bd1f99446659365c through { _id: 29567.0 } -> { _id: 30029.0 } for collection 'test.bar'
 m30001| Fri Feb 22 12:29:50.242 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:50-512764be99334798f3e47f8a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536190242), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 29105.0 }, max: { _id: 29567.0 }, from: "shard0001", to: "shard0000" } }
 m30001| Fri Feb 22 12:29:50.242 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:50.242 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:50.242 [conn4] forking for cleanup of chunk data
 m30999| Fri Feb 22 12:29:50.242 [conn1] setShardVersion failed!
 m30999| { oldVersion: Timestamp 64000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 64000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 65000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
 m30001| Fri Feb 22 12:29:50.242 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
 m30001| Fri Feb 22 12:29:50.242 [conn4] MigrateFromStatus::done Global lock acquired
 m30001| Fri Feb 22 12:29:50.242 [cleanupOldData-512764be99334798f3e47f8b] (start) waiting to cleanup test.bar from { _id: 29105.0 } -> { _id: 29567.0 }, # cursors remaining: 0
 m30001| Fri Feb 22 12:29:50.243 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:29:50.243 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:50-512764be99334798f3e47f8c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536190243), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 29105.0 }, max: { _id: 29567.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:50.243 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:50.244 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 459 version: 65|1||51276475bd1f99446659365c based on: 64|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:50.246 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 460 version: 65|1||51276475bd1f99446659365c based on: 64|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:50.247 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:50.248 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
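The balancing rounds above show mongos comparing per-shard chunk counts for each collection ("donor : shard0001 chunks on 153 ... receiver : shard0000 chunks on 64 ... threshold : 2") and moving one chunk only when the spread exceeds the threshold. The selection printed in those lines can be sketched as follows (an illustration only; `pick_migration` is a hypothetical helper, not the actual mongos Balancer code):

```python
def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) when the most-loaded shard exceeds the
    least-loaded shard by at least `threshold` chunks, else None.
    Mirrors the donor/receiver/threshold lines in the balancer log."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] >= threshold:
        return donor, receiver
    return None

# Counts from the log: test.bar is badly skewed, so a move is scheduled...
print(pick_migration({"shard0001": 153, "shard0000": 64}))  # ('shard0001', 'shard0000')
# ...while test.foo at 54 vs 53 sits inside the threshold, so no move:
print(pick_migration({"shard0001": 54, "shard0000": 53}))   # None
```

This matches the behavior in the log, where each round migrates a `test.bar` chunk but leaves `test.foo` alone.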
m30999| Fri Feb 22 12:29:50.249 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 461 version: 65|1||51276475bd1f99446659365c based on: 65|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:50.249 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 65|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:50.249 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 65000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 461 m30999| Fri Feb 22 12:29:50.250 [conn1] setShardVersion success: { oldVersion: Timestamp 64000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:50.263 [cleanupOldData-512764be99334798f3e47f8b] waiting to remove documents for test.bar from { _id: 29105.0 } -> { _id: 29567.0 } m30001| Fri Feb 22 12:29:50.675 [cleanupOldData-5127649d99334798f3e47ea0] moveChunk deleted 462 documents for test.bar from { _id: 15707.0 } -> { _id: 16169.0 } m30001| Fri Feb 22 12:29:50.675 [cleanupOldData-512764be99334798f3e47f8b] moveChunk starting delete for: test.bar from { _id: 29105.0 } -> { _id: 29567.0 } m30001| Fri Feb 22 12:29:51.175 [cleanupOldData-512764be99334798f3e47f8b] moveChunk deleted 462 documents for test.bar from { _id: 29105.0 } -> { _id: 29567.0 } m30001| Fri Feb 22 12:29:51.175 [cleanupOldData-5127649c99334798f3e47e91] moveChunk starting delete for: test.foo from { _id: 30920.0 } -> { _id: 31857.0 } m30999| Fri Feb 22 12:29:51.248 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:51.249 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:51.249 [Balancer] about to acquire 
distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:29:51 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "512764bfbd1f99446659369d" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "512764bebd1f99446659369c" } }
 m30999| Fri Feb 22 12:29:51.249 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764bfbd1f99446659369d
 m30999| Fri Feb 22 12:29:51.249 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:51.250 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:51.250 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:51.252 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
 m30999| Fri Feb 22 12:29:51.252 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:51.252 [Balancer] 	donor      : shard0001 chunks on 54
 m30999| Fri Feb 22 12:29:51.252 [Balancer] 	receiver   : shard0000 chunks on 53
 m30999| Fri Feb 22 12:29:51.252 [Balancer] 	threshold  : 2
 m30999| Fri Feb 22 12:29:51.253 [Balancer] shard0001 has more chunks me:153 best: shard0000:64
 m30999| Fri Feb 22 12:29:51.254 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:51.254 [Balancer] 	donor      : shard0001 chunks on 153
 m30999| Fri Feb 22 12:29:51.254 [Balancer] 	receiver   : shard0000 chunks on 64
 m30999| Fri Feb 22 12:29:51.254 [Balancer] 	threshold  : 2
 m30999| Fri Feb 22 12:29:51.254 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_29567.0", lastmod: Timestamp 65000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 29567.0 }, max: { _id: 30029.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:51.254 [Balancer] moving chunk ns: test.bar moving (

ns:test.barshard: shard0001:localhost:30001lastmod: 65|1||000000000000000000000000min: { _id: 29567.0 }max: { _id: 30029.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:51.254 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:51.254 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 29567.0 }, max: { _id: 30029.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_29567.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:51.255 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764bf99334798f3e47f8d m30001| Fri Feb 22 12:29:51.255 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:51-512764bf99334798f3e47f8e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536191255), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 29567.0 }, max: { _id: 30029.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:51.256 [conn4] moveChunk request accepted at version 65|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:51.257 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:51.257 [migrateThread] starting receiving-end of migration of chunk { _id: 29567.0 } -> { _id: 30029.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:51.267 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29567.0 }, max: { _id: 30029.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 113, clonedBytes: 117859, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 67000 m30001| Fri Feb 22 12:29:51.278 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.bar", from: "localhost:30001", min: { _id: 29567.0 }, max: { _id: 30029.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 271, clonedBytes: 282653, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:51.288 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29567.0 }, max: { _id: 30029.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 446, clonedBytes: 465178, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:51.289 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:51.289 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 29567.0 } -> { _id: 30029.0 } m30000| Fri Feb 22 12:29:51.292 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 29567.0 } -> { _id: 30029.0 } m30001| Fri Feb 22 12:29:51.298 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 29567.0 }, max: { _id: 30029.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:51.299 [conn4] moveChunk setting version to: 66|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:51.299 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:51.302 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 462 version: 65|1||51276475bd1f99446659365c based on: 65|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:51.302 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 65|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:51.302 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 65000|1, versionEpoch: 
ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 462 m30001| Fri Feb 22 12:29:51.302 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:51.302 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 29567.0 } -> { _id: 30029.0 } m30000| Fri Feb 22 12:29:51.302 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 29567.0 } -> { _id: 30029.0 } m30000| Fri Feb 22 12:29:51.302 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:51-512764bfc49297cf54df5649", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536191302), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 29567.0 }, max: { _id: 30029.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:51.309 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 29567.0 }, max: { _id: 30029.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:51.309 [conn4] moveChunk updating self version to: 66|1||51276475bd1f99446659365c through { _id: 30029.0 } -> { _id: 30491.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:51.310 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:51-512764bf99334798f3e47f8f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536191310), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 29567.0 }, max: { _id: 30029.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:51.310 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:51.310 [conn4] 
MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:29:51.310 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:29:51.310 [conn4] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 65000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 65000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 66000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:51.310 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:51.310 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:51.310 [cleanupOldData-512764bf99334798f3e47f90] (start) waiting to cleanup test.bar from { _id: 29567.0 } -> { _id: 30029.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:51.310 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:51.310 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:51-512764bf99334798f3e47f91", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536191310), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 29567.0 }, max: { _id: 30029.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:51.310 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:51.311 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 463 version: 66|1||51276475bd1f99446659365c based on: 65|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:51.313 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 464 version: 66|1||51276475bd1f99446659365c based on: 65|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:51.314 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:51.314 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
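The repeated "setShardVersion failed!" replies above are the expected stale-routing path: mongos sends the chunk version it last knew (e.g. 65|1), the shard has already advanced to the post-migration version (66|0), so it rejects the request with `reloadConfig: true`; mongos then reloads its chunk manager and retries at the new version, which succeeds. A minimal sketch of that version check (hypothetical function name; versions reduced to (major, minor) tuples for illustration):

```python
def set_shard_version(requested, global_version):
    """Sketch of the check behind 'setShardVersion failed!': a shard
    refuses a version older than the one it already holds and asks
    mongos to reload its chunk manager before retrying."""
    if requested < global_version:  # tuple comparison: (65,1) < (66,0)
        return {"ok": 0.0, "reloadConfig": True,
                "globalVersion": global_version,
                "errmsg": "shard global version for collection is higher"}
    return {"ok": 1.0, "oldVersion": requested}

# mongos still holds 65|1 while the shard moved on to 66|0:
reply = set_shard_version((65, 1), (66, 0))
if reply.get("reloadConfig"):            # reload chunks, retry at 66|1
    reply = set_shard_version((66, 1), (66, 0))
print(reply["ok"])  # 1.0
```

The same reject-reload-retry cycle repeats in the log after every migration (65|1 vs 66|0, then 66|1 vs 67|0).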
m30999| Fri Feb 22 12:29:51.315 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 465 version: 66|1||51276475bd1f99446659365c based on: 66|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:51.315 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 66|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:51.316 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 66000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 465
 m30999| Fri Feb 22 12:29:51.316 [conn1]       setShardVersion success: { oldVersion: Timestamp 65000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:51.330 [cleanupOldData-512764bf99334798f3e47f90] waiting to remove documents for test.bar from { _id: 29567.0 } -> { _id: 30029.0 }
68000
 m30999| Fri Feb 22 12:29:52.315 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:52.315 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:52.315 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:29:52 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "512764c0bd1f99446659369e" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "512764bfbd1f99446659369d" } }
 m30999| Fri Feb 22 12:29:52.316 [Balancer] distributed lock
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c0bd1f99446659369e m30999| Fri Feb 22 12:29:52.316 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:52.316 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:52.316 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:52.318 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:52.318 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:52.318 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:52.318 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:52.318 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:52.320 [Balancer] shard0001 has more chunks me:152 best: shard0000:65 m30999| Fri Feb 22 12:29:52.320 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:52.320 [Balancer] donor : shard0001 chunks on 152 m30999| Fri Feb 22 12:29:52.320 [Balancer] receiver : shard0000 chunks on 65 m30999| Fri Feb 22 12:29:52.320 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:52.320 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_30029.0", lastmod: Timestamp 66000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 30029.0 }, max: { _id: 30491.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:52.320 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 66|1||000000000000000000000000min: { _id: 30029.0 }max: { _id: 30491.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:52.320 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:52.320 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 30029.0 }, max: { _id: 30491.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_30029.0", configdb: 
"localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:52.321 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c099334798f3e47f92 m30001| Fri Feb 22 12:29:52.321 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:52-512764c099334798f3e47f93", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536192321), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 30029.0 }, max: { _id: 30491.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:52.322 [conn4] moveChunk request accepted at version 66|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:52.323 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:52.324 [migrateThread] starting receiving-end of migration of chunk { _id: 30029.0 } -> { _id: 30491.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:52.334 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30029.0 }, max: { _id: 30491.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 130, clonedBytes: 135590, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:52.344 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30029.0 }, max: { _id: 30491.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 310, clonedBytes: 323330, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:52.354 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30029.0 }, max: { _id: 30491.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 453, clonedBytes: 472479, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:52.355 [migrateThread] 
Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:52.355 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 30029.0 } -> { _id: 30491.0 } m30000| Fri Feb 22 12:29:52.355 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 30029.0 } -> { _id: 30491.0 } m30001| Fri Feb 22 12:29:52.364 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30029.0 }, max: { _id: 30491.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:52.364 [conn4] moveChunk setting version to: 67|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:52.365 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:52.366 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 30029.0 } -> { _id: 30491.0 } m30000| Fri Feb 22 12:29:52.366 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 30029.0 } -> { _id: 30491.0 } m30000| Fri Feb 22 12:29:52.366 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:52-512764c0c49297cf54df564a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536192366), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 30029.0 }, max: { _id: 30491.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 10 } } m30999| Fri Feb 22 12:29:52.367 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 466 version: 66|1||51276475bd1f99446659365c based on: 66|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:52.367 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 66|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:52.368 [conn1] setShardVersion shard0001 localhost:30001 test.bar { 
setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 66000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 466 m30001| Fri Feb 22 12:29:52.368 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:52.371 [cleanupOldData-5127649c99334798f3e47e91] moveChunk deleted 937 documents for test.foo from { _id: 30920.0 } -> { _id: 31857.0 } m30001| Fri Feb 22 12:29:52.371 [cleanupOldData-512764bf99334798f3e47f90] moveChunk starting delete for: test.bar from { _id: 29567.0 } -> { _id: 30029.0 } m30001| Fri Feb 22 12:29:52.375 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 30029.0 }, max: { _id: 30491.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:52.375 [conn4] moveChunk updating self version to: 67|1||51276475bd1f99446659365c through { _id: 30491.0 } -> { _id: 30953.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:52.376 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:52-512764c099334798f3e47f94", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536192376), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 30029.0 }, max: { _id: 30491.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:52.376 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:52.376 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:52.376 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:52.376 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:52.376 [conn1] setShardVersion 
failed! m30999| { oldVersion: Timestamp 66000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 66000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 67000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:52.376 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:52.376 [cleanupOldData-512764c099334798f3e47f95] (start) waiting to cleanup test.bar from { _id: 30029.0 } -> { _id: 30491.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:52.376 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:52.376 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:52-512764c099334798f3e47f96", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536192376), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 30029.0 }, max: { _id: 30491.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:52.376 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:52.378 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 467 version: 67|1||51276475bd1f99446659365c based on: 66|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:52.379 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 468 version: 67|1||51276475bd1f99446659365c based on: 66|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:52.380 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:52.380 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
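Each migration above follows the same donor-side loop: the donor repeatedly polls the recipient's transfer status, watching the reported state move through "clone" (with growing `cloned`/`clonedBytes` counts) until it reaches "steady", and only then sets the new chunk version and enters the commit/critical section. A small simulation of that polling loop (hypothetical names; the real protocol lives in the mongod migration code, this only mirrors the states visible in the log):

```python
def drive_migration(status_reports):
    """Consume recipient status reports until the recipient reports
    'steady', then append the commit step. Returns the ordered list
    of observed states, mirroring the 'moveChunk data transfer
    progress' lines followed by 'moveChunk setting version'."""
    seen = []
    for status in status_reports:
        seen.append(status["state"])
        if status["state"] == "steady":
            seen.append("commit")
            break
    return seen

# Shape of one migration in the log: three clone polls, then steady:
reports = [{"state": "clone", "cloned": 130},
           {"state": "clone", "cloned": 310},
           {"state": "clone", "cloned": 453},
           {"state": "steady", "cloned": 462}]
print(drive_migration(reports))  # ['clone', 'clone', 'clone', 'steady', 'commit']
```

After the commit, the log shows the version bump, the metadata events (`moveChunk.commit`, `moveChunk.from`, `moveChunk.to`), and the forked `cleanupOldData` delete of the donor's copy.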
m30999| Fri Feb 22 12:29:52.381 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 469 version: 67|1||51276475bd1f99446659365c based on: 67|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:52.382 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 67|1||51276475bd1f99446659365c
 m30999| Fri Feb 22 12:29:52.382 [conn1] setShardVersion  shard0001 localhost:30001  test.bar  { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 67000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 469
 m30999| Fri Feb 22 12:29:52.382 [conn1]       setShardVersion success: { oldVersion: Timestamp 66000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
 m30001| Fri Feb 22 12:29:52.396 [cleanupOldData-512764c099334798f3e47f95] waiting to remove documents for test.bar from { _id: 30029.0 } -> { _id: 30491.0 }
 m30001| Fri Feb 22 12:29:52.654 [cleanupOldData-512764bf99334798f3e47f90] moveChunk deleted 462 documents for test.bar from { _id: 29567.0 } -> { _id: 30029.0 }
 m30001| Fri Feb 22 12:29:52.654 [cleanupOldData-512764c099334798f3e47f95] moveChunk starting delete for: test.bar from { _id: 30029.0 } -> { _id: 30491.0 }
69000
 m30999| Fri Feb 22 12:29:53.381 [Balancer] Refreshing MaxChunkSize: 1
 m30999| Fri Feb 22 12:29:53.381 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
 m30999| Fri Feb 22 12:29:53.382 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
 m30999| { "state" : 1,
 m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
 m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
 m30999|   "when" : { "$date" : "Fri Feb 22 12:29:53 2013" },
 m30999|   "why" : "doing balance round",
 m30999|   "ts" : { "$oid" : "512764c1bd1f99446659369f" } }
 m30999| { "_id" : "balancer",
 m30999|   "state" : 0,
 m30999|   "ts" : { "$oid" : "512764c0bd1f99446659369e" } }
 m30999| Fri Feb 22 12:29:53.382 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c1bd1f99446659369f
 m30999| Fri Feb 22 12:29:53.382 [Balancer] *** start balancing round
 m30999| Fri Feb 22 12:29:53.382 [Balancer] waitForDelete: 0
 m30999| Fri Feb 22 12:29:53.382 [Balancer] secondaryThrottle: 1
 m30999| Fri Feb 22 12:29:53.385 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
 m30999| Fri Feb 22 12:29:53.385 [Balancer] collection : test.foo
 m30999| Fri Feb 22 12:29:53.385 [Balancer] 	donor      : shard0001 chunks on 54
 m30999| Fri Feb 22 12:29:53.385 [Balancer] 	receiver   : shard0000 chunks on 53
 m30999| Fri Feb 22 12:29:53.385 [Balancer] 	threshold  : 2
 m30999| Fri Feb 22 12:29:53.387 [Balancer] shard0001 has more chunks me:151 best: shard0000:66
 m30999| Fri Feb 22 12:29:53.387 [Balancer] collection : test.bar
 m30999| Fri Feb 22 12:29:53.387 [Balancer] 	donor      : shard0001 chunks on 151
 m30999| Fri Feb 22 12:29:53.387 [Balancer] 	receiver   : shard0000 chunks on 66
 m30999| Fri Feb 22 12:29:53.387 [Balancer] 	threshold  : 2
 m30999| Fri Feb 22 12:29:53.387 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_30491.0", lastmod: Timestamp 67000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 30491.0 }, max: { _id: 30953.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
 m30999| Fri Feb 22 12:29:53.387 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 67|1||000000000000000000000000min: { _id: 30491.0 }max: { _id: 30953.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
 m30001| Fri Feb 22 12:29:53.388 [conn4] warning: secondaryThrottle selected but no replication
 m30001| Fri Feb 22
12:29:53.388 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 30491.0 }, max: { _id: 30953.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_30491.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:53.389 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c199334798f3e47f97 m30001| Fri Feb 22 12:29:53.389 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:53-512764c199334798f3e47f98", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536193389), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 30491.0 }, max: { _id: 30953.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:53.390 [conn4] moveChunk request accepted at version 67|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:53.392 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:53.392 [migrateThread] starting receiving-end of migration of chunk { _id: 30491.0 } -> { _id: 30953.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:53.402 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30491.0 }, max: { _id: 30953.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 125160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:53.412 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30491.0 }, max: { _id: 30953.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 301, clonedBytes: 313943, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:53.413 [cleanupOldData-512764c099334798f3e47f95] 
moveChunk deleted 462 documents for test.bar from { _id: 30029.0 } -> { _id: 30491.0 } m30001| Fri Feb 22 12:29:53.413 [cleanupOldData-5127649699334798f3e47e64] moveChunk starting delete for: test.bar from { _id: 12935.0 } -> { _id: 13397.0 } m30001| Fri Feb 22 12:29:53.422 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30491.0 }, max: { _id: 30953.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 457, clonedBytes: 476651, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:53.423 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:53.423 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 30491.0 } -> { _id: 30953.0 } m30000| Fri Feb 22 12:29:53.426 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 30491.0 } -> { _id: 30953.0 } m30001| Fri Feb 22 12:29:53.433 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30491.0 }, max: { _id: 30953.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:53.433 [conn4] moveChunk setting version to: 68|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:53.433 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:53.436 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 470 version: 67|1||51276475bd1f99446659365c based on: 67|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:53.436 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 67|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:53.436 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 30491.0 } -> { _id: 30953.0 } m30000| Fri Feb 22 12:29:53.436 [migrateThread] 
migrate commit flushed to journal for 'test.bar' { _id: 30491.0 } -> { _id: 30953.0 } m30999| Fri Feb 22 12:29:53.436 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 67000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 470 m30000| Fri Feb 22 12:29:53.436 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:53-512764c1c49297cf54df564b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536193436), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 30491.0 }, max: { _id: 30953.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:53.436 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:53.443 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 30491.0 }, max: { _id: 30953.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:53.443 [conn4] moveChunk updating self version to: 68|1||51276475bd1f99446659365c through { _id: 30953.0 } -> { _id: 31415.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:53.444 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:53-512764c199334798f3e47f99", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536193444), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 30491.0 }, max: { _id: 30953.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:53.444 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:53.444 [conn1] 
setShardVersion failed! m30001| Fri Feb 22 12:29:53.444 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 67000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 67000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 68000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:53.444 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:53.444 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:53.444 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:53.444 [cleanupOldData-512764c199334798f3e47f9a] (start) waiting to cleanup test.bar from { _id: 30491.0 } -> { _id: 30953.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:53.444 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:53.445 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:53-512764c199334798f3e47f9b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536193444), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 30491.0 }, max: { _id: 30953.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:53.445 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:53.446 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 471 version: 68|1||51276475bd1f99446659365c based on: 67|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:53.448 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 472 version: 68|1||51276475bd1f99446659365c based on: 67|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:53.449 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:53.449 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:53.450 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 473 version: 68|1||51276475bd1f99446659365c based on: 68|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:53.451 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 68|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:53.451 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 68000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 473 m30999| Fri Feb 22 12:29:53.451 [conn1] setShardVersion success: { oldVersion: Timestamp 67000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:53.464 [cleanupOldData-512764c199334798f3e47f9a] waiting to remove documents for test.bar from { _id: 30491.0 } -> { _id: 30953.0 } m30001| Fri Feb 22 12:29:54.013 [cleanupOldData-5127649699334798f3e47e64] moveChunk deleted 462 documents for test.bar from { _id: 12935.0 } -> { _id: 13397.0 } m30001| Fri Feb 22 12:29:54.014 [cleanupOldData-512764c199334798f3e47f9a] moveChunk starting delete for: test.bar from { _id: 30491.0 } -> { _id: 30953.0 } m30001| Fri Feb 22 12:29:54.355 [cleanupOldData-512764c199334798f3e47f9a] moveChunk deleted 462 documents for test.bar from { _id: 30491.0 } -> { _id: 30953.0 } m30001| Fri Feb 22 12:29:54.355 [cleanupOldData-5127649499334798f3e47e4b] moveChunk starting delete for: test.foo from { _id: 24361.0 } -> { _id: 25298.0 } m30999| Fri Feb 22 12:29:54.450 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:54.450 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:54.451 [Balancer] about to acquire 
distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:54 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c2bd1f9944665936a0" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c1bd1f99446659369f" } } m30999| Fri Feb 22 12:29:54.452 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c2bd1f9944665936a0 m30999| Fri Feb 22 12:29:54.452 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:54.452 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:54.452 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:54.454 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:54.454 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:54.454 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:54.454 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:54.454 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:54.456 [Balancer] shard0001 has more chunks me:150 best: shard0000:67 m30999| Fri Feb 22 12:29:54.457 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:54.457 [Balancer] donor : shard0001 chunks on 150 m30999| Fri Feb 22 12:29:54.457 [Balancer] receiver : shard0000 chunks on 67 m30999| Fri Feb 22 12:29:54.457 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:54.457 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_30953.0", lastmod: Timestamp 68000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 30953.0 }, max: { _id: 31415.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:54.457 [Balancer] moving chunk ns: test.bar moving ( 
ns:test.barshard: shard0001:localhost:30001lastmod: 68|1||000000000000000000000000min: { _id: 30953.0 }max: { _id: 31415.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:54.457 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:54.457 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 30953.0 }, max: { _id: 31415.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_30953.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:54.458 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c299334798f3e47f9c m30001| Fri Feb 22 12:29:54.458 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:54-512764c299334798f3e47f9d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536194458), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 30953.0 }, max: { _id: 31415.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:54.460 [conn4] moveChunk request accepted at version 68|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:54.461 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:54.461 [migrateThread] starting receiving-end of migration of chunk { _id: 30953.0 } -> { _id: 31415.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:54.472 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30953.0 }, max: { _id: 31415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 136, clonedBytes: 141848, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:54.482 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.bar", from: "localhost:30001", min: { _id: 30953.0 }, max: { _id: 31415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 315, clonedBytes: 328545, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:54.490 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:54.490 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 30953.0 } -> { _id: 31415.0 } m30001| Fri Feb 22 12:29:54.492 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30953.0 }, max: { _id: 31415.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:54.493 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 30953.0 } -> { _id: 31415.0 } m30001| Fri Feb 22 12:29:54.503 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 30953.0 }, max: { _id: 31415.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:54.503 [conn4] moveChunk setting version to: 69|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:54.503 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:54.503 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 30953.0 } -> { _id: 31415.0 } m30000| Fri Feb 22 12:29:54.503 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 30953.0 } -> { _id: 31415.0 } m30000| Fri Feb 22 12:29:54.503 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:54-512764c2c49297cf54df564c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536194503), what: "moveChunk.to", ns: 
"test.bar", details: { min: { _id: 30953.0 }, max: { _id: 31415.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:54.506 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 474 version: 68|1||51276475bd1f99446659365c based on: 68|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:54.506 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 68|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:54.506 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 68000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 474 m30001| Fri Feb 22 12:29:54.506 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:54.513 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 30953.0 }, max: { _id: 31415.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:54.513 [conn4] moveChunk updating self version to: 69|1||51276475bd1f99446659365c through { _id: 31415.0 } -> { _id: 31877.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:54.514 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:54-512764c299334798f3e47f9e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536194514), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 30953.0 }, max: { _id: 31415.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:54.514 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:54.514 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 68000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 68000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 69000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:54.514 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:54.514 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:54.514 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:54.514 [cleanupOldData-512764c299334798f3e47f9f] (start) waiting to cleanup test.bar from { _id: 30953.0 } -> { _id: 31415.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:54.514 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:54.515 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:54.515 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:54-512764c299334798f3e47fa0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536194515), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 30953.0 }, max: { _id: 31415.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:54.515 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:54.516 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 475 version: 69|1||51276475bd1f99446659365c based on: 68|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:54.518 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 476 version: 69|1||51276475bd1f99446659365c based on: 68|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:54.519 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:54.520 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:54.521 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 477 version: 69|1||51276475bd1f99446659365c based on: 69|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:54.521 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 69|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:54.521 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 69000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 477 m30999| Fri Feb 22 12:29:54.522 [conn1] setShardVersion success: { oldVersion: Timestamp 68000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:54.534 [cleanupOldData-512764c299334798f3e47f9f] waiting to remove documents for test.bar from { _id: 30953.0 } -> { _id: 31415.0 } 70000 m30001| Fri Feb 22 12:29:55.147 [cleanupOldData-5127649499334798f3e47e4b] moveChunk deleted 937 documents for test.foo from { _id: 24361.0 } -> { _id: 25298.0 } m30001| Fri Feb 22 12:29:55.147 [cleanupOldData-512764c299334798f3e47f9f] moveChunk starting delete for: test.bar from { _id: 30953.0 } -> { _id: 31415.0 } m30999| Fri Feb 22 12:29:55.520 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:55.521 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:55.521 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:29:55 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c3bd1f9944665936a1" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c2bd1f9944665936a0" } } m30999| Fri Feb 22 12:29:55.522 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c3bd1f9944665936a1 m30999| Fri Feb 22 12:29:55.522 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:55.522 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:55.522 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:55.524 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:55.524 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:55.524 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:55.524 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:55.524 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:55.526 [Balancer] shard0001 has more chunks me:149 best: shard0000:68 m30999| Fri Feb 22 12:29:55.526 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:55.526 [Balancer] donor : shard0001 chunks on 149 m30999| Fri Feb 22 12:29:55.526 [Balancer] receiver : shard0000 chunks on 68 m30999| Fri Feb 22 12:29:55.526 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:55.526 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_31415.0", lastmod: Timestamp 69000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 31415.0 }, max: { _id: 31877.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:55.526 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 69|1||000000000000000000000000min: { _id: 31415.0 }max: { _id: 31877.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:55.526 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:29:55.526 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 31415.0 }, max: { _id: 31877.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_31415.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:55.527 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c399334798f3e47fa1 m30001| Fri Feb 22 12:29:55.527 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:55-512764c399334798f3e47fa2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536195527), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 31415.0 }, max: { _id: 31877.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:55.528 [conn4] moveChunk request accepted at version 69|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:55.529 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:55.529 [migrateThread] starting receiving-end of migration of chunk { _id: 31415.0 } -> { _id: 31877.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:55.540 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31415.0 }, max: { _id: 31877.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 119, clonedBytes: 124117, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:55.550 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31415.0 }, max: { _id: 31877.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 297, clonedBytes: 309771, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:55.560 [migrateThread] Waiting for replication to catch 
up before entering critical section m30000| Fri Feb 22 12:29:55.560 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 31415.0 } -> { _id: 31877.0 } m30001| Fri Feb 22 12:29:55.560 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31415.0 }, max: { _id: 31877.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:55.563 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 31415.0 } -> { _id: 31877.0 } m30001| Fri Feb 22 12:29:55.570 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31415.0 }, max: { _id: 31877.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:55.570 [conn4] moveChunk setting version to: 70|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:55.570 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:55.572 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 478 version: 69|1||51276475bd1f99446659365c based on: 69|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:55.573 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 69|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:55.573 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 69000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 478 m30000| Fri Feb 22 12:29:55.573 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 31415.0 } -> { _id: 31877.0 } m30000| Fri Feb 22 12:29:55.573 
[migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 31415.0 } -> { _id: 31877.0 } m30001| Fri Feb 22 12:29:55.573 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:55.573 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:55-512764c3c49297cf54df564d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536195573), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 31415.0 }, max: { _id: 31877.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 13 } } m30001| Fri Feb 22 12:29:55.580 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 31415.0 }, max: { _id: 31877.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:55.581 [conn4] moveChunk updating self version to: 70|1||51276475bd1f99446659365c through { _id: 31877.0 } -> { _id: 32339.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:55.581 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:55-512764c399334798f3e47fa3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536195581), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 31415.0 }, max: { _id: 31877.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:55.581 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:55.581 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:55.581 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:55.581 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 69000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 69000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 70000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:55.581 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:55.581 [cleanupOldData-512764c399334798f3e47fa4] (start) waiting to cleanup test.bar from { _id: 31415.0 } -> { _id: 31877.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:55.581 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:55.582 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:55.582 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:55-512764c399334798f3e47fa5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536195582), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 31415.0 }, max: { _id: 31877.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:55.582 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:55.583 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 479 version: 70|1||51276475bd1f99446659365c based on: 69|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:55.585 [cleanupOldData-512764c299334798f3e47f9f] moveChunk deleted 462 documents for test.bar from { _id: 30953.0 } -> { _id: 31415.0 } m30001| Fri Feb 22 12:29:55.585 [cleanupOldData-5127648699334798f3e47dd3] moveChunk starting delete for: test.foo from { _id: 13117.0 } -> { _id: 14054.0 } m30999| Fri Feb 22 12:29:55.585 [Balancer] 
ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 480 version: 70|1||51276475bd1f99446659365c based on: 69|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:55.585 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:55.586 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:29:55.587 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 481 version: 70|1||51276475bd1f99446659365c based on: 70|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:55.587 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 70|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:55.587 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 70000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 481 m30999| Fri Feb 22 12:29:55.587 [conn1] setShardVersion success: { oldVersion: Timestamp 69000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:55.601 [cleanupOldData-512764c399334798f3e47fa4] waiting to remove documents for test.bar from { _id: 31415.0 } -> { _id: 31877.0 } 71000 m30999| Fri Feb 22 12:29:56.586 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:56.587 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:56.587 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| 
"when" : { "$date" : "Fri Feb 22 12:29:56 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c4bd1f9944665936a2" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c3bd1f9944665936a1" } } m30999| Fri Feb 22 12:29:56.587 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c4bd1f9944665936a2 m30999| Fri Feb 22 12:29:56.587 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:56.587 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:56.587 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:56.589 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:56.589 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:56.589 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:56.589 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:56.589 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:56.591 [Balancer] shard0001 has more chunks me:148 best: shard0000:69 m30999| Fri Feb 22 12:29:56.591 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:56.591 [Balancer] donor : shard0001 chunks on 148 m30999| Fri Feb 22 12:29:56.591 [Balancer] receiver : shard0000 chunks on 69 m30999| Fri Feb 22 12:29:56.591 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:56.591 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_31877.0", lastmod: Timestamp 70000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 31877.0 }, max: { _id: 32339.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:56.592 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 70|1||000000000000000000000000min: { _id: 31877.0 }max: { _id: 32339.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:56.592 [conn4] warning: secondaryThrottle selected but no 
replication m30001| Fri Feb 22 12:29:56.592 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 31877.0 }, max: { _id: 32339.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_31877.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:56.593 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c499334798f3e47fa6 m30001| Fri Feb 22 12:29:56.593 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:56-512764c499334798f3e47fa7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536196593), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 31877.0 }, max: { _id: 32339.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:56.594 [conn4] moveChunk request accepted at version 70|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:56.595 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:56.595 [migrateThread] starting receiving-end of migration of chunk { _id: 31877.0 } -> { _id: 32339.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:56.606 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31877.0 }, max: { _id: 32339.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 133504, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:56.616 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31877.0 }, max: { _id: 32339.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 308, clonedBytes: 321244, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:56.625 [migrateThread] 
Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:56.625 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 31877.0 } -> { _id: 32339.0 } m30001| Fri Feb 22 12:29:56.626 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31877.0 }, max: { _id: 32339.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:56.627 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 31877.0 } -> { _id: 32339.0 } m30001| Fri Feb 22 12:29:56.636 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 31877.0 }, max: { _id: 32339.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:56.636 [conn4] moveChunk setting version to: 71|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:56.636 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:56.637 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 31877.0 } -> { _id: 32339.0 } m30000| Fri Feb 22 12:29:56.637 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 31877.0 } -> { _id: 32339.0 } m30000| Fri Feb 22 12:29:56.637 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:56-512764c4c49297cf54df564e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536196637), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 31877.0 }, max: { _id: 32339.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:56.639 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 482 version: 
70|1||51276475bd1f99446659365c based on: 70|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:56.639 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 70|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:56.639 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 70000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 482 m30001| Fri Feb 22 12:29:56.640 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:56.646 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 31877.0 }, max: { _id: 32339.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:56.647 [conn4] moveChunk updating self version to: 71|1||51276475bd1f99446659365c through { _id: 32339.0 } -> { _id: 32801.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:56.647 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:56-512764c499334798f3e47fa8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536196647), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 31877.0 }, max: { _id: 32339.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:56.647 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:56.647 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 70000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 70000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 71000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:56.648 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:56.648 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:56.648 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:56.648 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:56.648 [cleanupOldData-512764c499334798f3e47fa9] (start) waiting to cleanup test.bar from { _id: 31877.0 } -> { _id: 32339.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:56.648 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:56.648 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:56-512764c499334798f3e47faa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536196648), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 31877.0 }, max: { _id: 32339.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:56.648 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:56.649 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 483 version: 71|1||51276475bd1f99446659365c based on: 70|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:56.651 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 484 version: 71|1||51276475bd1f99446659365c based on: 70|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:56.652 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:56.652 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:56.653 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 485 version: 71|1||51276475bd1f99446659365c based on: 71|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:56.653 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 71|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:56.653 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 71000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 485 m30999| Fri Feb 22 12:29:56.654 [conn1] setShardVersion success: { oldVersion: Timestamp 70000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:56.668 [cleanupOldData-512764c499334798f3e47fa9] waiting to remove documents for test.bar from { _id: 31877.0 } -> { _id: 32339.0 } 72000 m30001| Fri Feb 22 12:29:56.757 [cleanupOldData-5127648699334798f3e47dd3] moveChunk deleted 937 documents for test.foo from { _id: 13117.0 } -> { _id: 14054.0 } m30001| Fri Feb 22 12:29:56.757 [cleanupOldData-512764c499334798f3e47fa9] moveChunk starting delete for: test.bar from { _id: 31877.0 } -> { _id: 32339.0 } m30001| Fri Feb 22 12:29:57.638 [cleanupOldData-512764c499334798f3e47fa9] moveChunk deleted 462 documents for test.bar from { _id: 31877.0 } -> { _id: 32339.0 } m30001| Fri Feb 22 12:29:57.638 [cleanupOldData-512764c399334798f3e47fa4] moveChunk starting delete for: test.bar from { _id: 31415.0 } -> { _id: 31877.0 } m30999| Fri Feb 22 12:29:57.653 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:57.653 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:57.653 [Balancer] about to 
acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:57 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c5bd1f9944665936a3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c4bd1f9944665936a2" } } m30999| Fri Feb 22 12:29:57.654 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c5bd1f9944665936a3 m30999| Fri Feb 22 12:29:57.654 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:57.654 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:57.654 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:57.656 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:57.656 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:57.656 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:57.656 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:57.656 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:57.659 [Balancer] shard0001 has more chunks me:147 best: shard0000:70 m30999| Fri Feb 22 12:29:57.659 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:57.659 [Balancer] donor : shard0001 chunks on 147 m30999| Fri Feb 22 12:29:57.659 [Balancer] receiver : shard0000 chunks on 70 m30999| Fri Feb 22 12:29:57.659 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:57.659 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_32339.0", lastmod: Timestamp 71000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 32339.0 }, max: { _id: 32801.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:57.659 [Balancer] moving chunk ns: test.bar 
moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 71|1||000000000000000000000000min: { _id: 32339.0 }max: { _id: 32801.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:57.659 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:57.660 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 32339.0 }, max: { _id: 32801.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_32339.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:57.661 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c599334798f3e47fab m30001| Fri Feb 22 12:29:57.661 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:57-512764c599334798f3e47fac", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536197661), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 32339.0 }, max: { _id: 32801.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:57.662 [conn4] moveChunk request accepted at version 71|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:57.663 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:57.664 [migrateThread] starting receiving-end of migration of chunk { _id: 32339.0 } -> { _id: 32801.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:57.674 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 32339.0 }, max: { _id: 32801.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 114, clonedBytes: 118902, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:57.684 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.bar", from: "localhost:30001", min: { _id: 32339.0 }, max: { _id: 32801.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 297, clonedBytes: 309771, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:57.694 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:57.694 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 32339.0 } -> { _id: 32801.0 } m30001| Fri Feb 22 12:29:57.694 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 32339.0 }, max: { _id: 32801.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:57.696 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 32339.0 } -> { _id: 32801.0 } m30001| Fri Feb 22 12:29:57.704 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 32339.0 }, max: { _id: 32801.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:57.705 [conn4] moveChunk setting version to: 72|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:57.705 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:29:57.706 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 32339.0 } -> { _id: 32801.0 } m30000| Fri Feb 22 12:29:57.706 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 32339.0 } -> { _id: 32801.0 } m30000| Fri Feb 22 12:29:57.706 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:57-512764c5c49297cf54df564f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536197706), what: "moveChunk.to", ns: 
"test.bar", details: { min: { _id: 32339.0 }, max: { _id: 32801.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:29:57.708 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 486 version: 71|1||51276475bd1f99446659365c based on: 71|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:57.708 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 71|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:57.708 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 71000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 486 m30001| Fri Feb 22 12:29:57.708 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:29:57.715 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 32339.0 }, max: { _id: 32801.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:57.715 [conn4] moveChunk updating self version to: 72|1||51276475bd1f99446659365c through { _id: 32801.0 } -> { _id: 33263.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:57.716 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:57-512764c599334798f3e47fad", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536197716), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 32339.0 }, max: { _id: 32801.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:57.716 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:29:57.716 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 71000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 71000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 72000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:57.716 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:57.716 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:29:57.716 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:57.716 [cleanupOldData-512764c599334798f3e47fae] (start) waiting to cleanup test.bar from { _id: 32339.0 } -> { _id: 32801.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:57.716 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:57.716 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:29:57.716 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:57-512764c599334798f3e47faf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536197716), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 32339.0 }, max: { _id: 32801.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:57.716 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:57.718 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 487 version: 72|1||51276475bd1f99446659365c based on: 71|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:57.720 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 488 version: 72|1||51276475bd1f99446659365c based on: 71|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:57.721 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:57.721 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:29:57.722 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 489 version: 72|1||51276475bd1f99446659365c based on: 72|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:57.723 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 72|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:57.723 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 72000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 489 m30999| Fri Feb 22 12:29:57.723 [conn1] setShardVersion success: { oldVersion: Timestamp 71000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:57.736 [cleanupOldData-512764c599334798f3e47fae] waiting to remove documents for test.bar from { _id: 32339.0 } -> { _id: 32801.0 } 73000 m30001| Fri Feb 22 12:29:57.893 [cleanupOldData-512764c399334798f3e47fa4] moveChunk deleted 462 documents for test.bar from { _id: 31415.0 } -> { _id: 31877.0 } m30001| Fri Feb 22 12:29:57.893 [cleanupOldData-512764c599334798f3e47fae] moveChunk starting delete for: test.bar from { _id: 32339.0 } -> { _id: 32801.0 } m30001| Fri Feb 22 12:29:58.612 [cleanupOldData-512764c599334798f3e47fae] moveChunk deleted 462 documents for test.bar from { _id: 32339.0 } -> { _id: 32801.0 } m30001| Fri Feb 22 12:29:58.612 [cleanupOldData-5127648299334798f3e47dab] moveChunk starting delete for: test.foo from { _id: 9369.0 } -> { _id: 10306.0 } m30999| Fri Feb 22 12:29:58.722 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:58.722 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:58.723 [Balancer] about to 
acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:58 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c6bd1f9944665936a4" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c5bd1f9944665936a3" } } m30999| Fri Feb 22 12:29:58.723 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c6bd1f9944665936a4 m30999| Fri Feb 22 12:29:58.723 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:58.723 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:58.723 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:58.726 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:58.726 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:58.726 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:58.726 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:58.726 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:58.728 [Balancer] shard0001 has more chunks me:146 best: shard0000:71 m30999| Fri Feb 22 12:29:58.728 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:58.728 [Balancer] donor : shard0001 chunks on 146 m30999| Fri Feb 22 12:29:58.728 [Balancer] receiver : shard0000 chunks on 71 m30999| Fri Feb 22 12:29:58.728 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:58.728 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_32801.0", lastmod: Timestamp 72000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 32801.0 }, max: { _id: 33263.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:58.728 [Balancer] moving chunk ns: test.bar 
moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 72|1||000000000000000000000000min: { _id: 32801.0 }max: { _id: 33263.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:29:58.729 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:58.729 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 32801.0 }, max: { _id: 33263.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_32801.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:58.730 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c699334798f3e47fb0 m30001| Fri Feb 22 12:29:58.730 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:58-512764c699334798f3e47fb1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536198730), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 32801.0 }, max: { _id: 33263.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:58.731 [conn4] moveChunk request accepted at version 72|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:58.733 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:58.733 [migrateThread] starting receiving-end of migration of chunk { _id: 32801.0 } -> { _id: 33263.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:58.743 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 32801.0 }, max: { _id: 33263.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 113, clonedBytes: 117859, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:58.753 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.bar", from: "localhost:30001", min: { _id: 32801.0 }, max: { _id: 33263.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 290, clonedBytes: 302470, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:29:58.761 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:58.761 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 32801.0 } -> { _id: 33263.0 } m30000| Fri Feb 22 12:29:58.763 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 32801.0 } -> { _id: 33263.0 } m30001| Fri Feb 22 12:29:58.764 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 32801.0 }, max: { _id: 33263.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:58.764 [conn4] moveChunk setting version to: 73|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:58.764 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:58.767 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 490 version: 72|1||51276475bd1f99446659365c based on: 72|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:58.767 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 72|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:58.767 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 72000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 490 m30001| Fri Feb 22 12:29:58.767 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:58.773 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 
32801.0 } -> { _id: 33263.0 } m30000| Fri Feb 22 12:29:58.773 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 32801.0 } -> { _id: 33263.0 } m30000| Fri Feb 22 12:29:58.773 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:58-512764c6c49297cf54df5650", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536198773), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 32801.0 }, max: { _id: 33263.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 27, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:29:58.774 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 32801.0 }, max: { _id: 33263.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:58.774 [conn4] moveChunk updating self version to: 73|1||51276475bd1f99446659365c through { _id: 33263.0 } -> { _id: 33725.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:58.775 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:58-512764c699334798f3e47fb2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536198775), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 32801.0 }, max: { _id: 33263.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:58.775 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:58.775 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:58.775 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:58.775 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 72000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 72000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 73000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:58.775 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:58.775 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:58.775 [cleanupOldData-512764c699334798f3e47fb3] (start) waiting to cleanup test.bar from { _id: 32801.0 } -> { _id: 33263.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:58.775 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:58.776 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:58-512764c699334798f3e47fb4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536198775), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 32801.0 }, max: { _id: 33263.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:58.776 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:58.777 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 491 version: 73|1||51276475bd1f99446659365c based on: 72|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:58.778 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 492 version: 73|1||51276475bd1f99446659365c based on: 72|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:58.779 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:58.779 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:29:58.780 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 493 version: 73|1||51276475bd1f99446659365c based on: 73|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:58.781 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 73|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:58.781 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 73000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 493 m30999| Fri Feb 22 12:29:58.781 [conn1] setShardVersion success: { oldVersion: Timestamp 72000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:58.795 [cleanupOldData-512764c699334798f3e47fb3] waiting to remove documents for test.bar from { _id: 32801.0 } -> { _id: 33263.0 } 74000 m30001| Fri Feb 22 12:29:59.643 [cleanupOldData-5127648299334798f3e47dab] moveChunk deleted 937 documents for test.foo from { _id: 9369.0 } -> { _id: 10306.0 } m30001| Fri Feb 22 12:29:59.644 [cleanupOldData-512764c699334798f3e47fb3] moveChunk starting delete for: test.bar from { _id: 32801.0 } -> { _id: 33263.0 } m30999| Fri Feb 22 12:29:59.780 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:29:59.780 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:29:59.780 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:29:59 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c7bd1f9944665936a5" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c6bd1f9944665936a4" } } m30999| Fri Feb 22 12:29:59.781 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c7bd1f9944665936a5 m30999| Fri Feb 22 12:29:59.781 [Balancer] *** start balancing round m30999| Fri Feb 22 12:29:59.781 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:29:59.781 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:29:59.783 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:29:59.783 [Balancer] collection : test.foo m30999| Fri Feb 22 12:29:59.783 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:29:59.783 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:29:59.783 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:59.785 [Balancer] shard0001 has more chunks me:145 best: shard0000:72 m30999| Fri Feb 22 12:29:59.785 [Balancer] collection : test.bar m30999| Fri Feb 22 12:29:59.785 [Balancer] donor : shard0001 chunks on 145 m30999| Fri Feb 22 12:29:59.785 [Balancer] receiver : shard0000 chunks on 72 m30999| Fri Feb 22 12:29:59.785 [Balancer] threshold : 2 m30999| Fri Feb 22 12:29:59.785 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_33263.0", lastmod: Timestamp 73000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 33263.0 }, max: { _id: 33725.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:29:59.785 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 73|1||000000000000000000000000min: { _id: 33263.0 }max: { _id: 33725.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 
12:29:59.785 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:29:59.786 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 33263.0 }, max: { _id: 33725.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_33263.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:29:59.786 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c799334798f3e47fb5 m30001| Fri Feb 22 12:29:59.786 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:59-512764c799334798f3e47fb6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536199786), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 33263.0 }, max: { _id: 33725.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:59.787 [conn4] moveChunk request accepted at version 73|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:29:59.789 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:29:59.789 [migrateThread] starting receiving-end of migration of chunk { _id: 33263.0 } -> { _id: 33725.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:29:59.799 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 33263.0 }, max: { _id: 33725.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 190, clonedBytes: 198170, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:59.809 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 33263.0 }, max: { _id: 33725.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 450, clonedBytes: 469350, catchup: 0, steady: 0 }, ok: 1.0 } 
my mem used: 0 m30000| Fri Feb 22 12:29:59.810 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:29:59.810 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 33263.0 } -> { _id: 33725.0 } m30000| Fri Feb 22 12:29:59.812 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 33263.0 } -> { _id: 33725.0 } m30001| Fri Feb 22 12:29:59.819 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 33263.0 }, max: { _id: 33725.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:29:59.819 [conn4] moveChunk setting version to: 74|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:29:59.820 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:29:59.822 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 494 version: 73|1||51276475bd1f99446659365c based on: 73|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:59.822 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 73|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:59.822 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 73000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 494 m30001| Fri Feb 22 12:29:59.824 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:29:59.824 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 33263.0 } -> { _id: 33725.0 } m30000| Fri Feb 22 12:29:59.824 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 33263.0 } -> { _id: 33725.0 } m30000| Fri Feb 22 12:29:59.824 
[migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:59-512764c7c49297cf54df5651", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536199824), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 33263.0 }, max: { _id: 33725.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 14 } } m30001| Fri Feb 22 12:29:59.830 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 33263.0 }, max: { _id: 33725.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:29:59.830 [conn4] moveChunk updating self version to: 74|1||51276475bd1f99446659365c through { _id: 33725.0 } -> { _id: 34187.0 } for collection 'test.bar' m30001| Fri Feb 22 12:29:59.830 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:59-512764c799334798f3e47fb7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536199830), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 33263.0 }, max: { _id: 33725.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:29:59.831 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:59.831 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:59.831 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:29:59.831 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 73000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 73000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 74000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:29:59.831 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:29:59.831 [cleanupOldData-512764c799334798f3e47fb8] (start) waiting to cleanup test.bar from { _id: 33263.0 } -> { _id: 33725.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:29:59.831 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:29:59.831 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:29:59.831 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:29:59-512764c799334798f3e47fb9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536199831), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 33263.0 }, max: { _id: 33725.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:29:59.831 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:29:59.832 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 495 version: 74|1||51276475bd1f99446659365c based on: 73|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:59.834 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 496 version: 74|1||51276475bd1f99446659365c based on: 73|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:59.835 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:29:59.835 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:29:59.836 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 497 version: 74|1||51276475bd1f99446659365c based on: 74|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:59.836 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 74|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:29:59.836 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 74000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 497 m30999| Fri Feb 22 12:29:59.837 [conn1] setShardVersion success: { oldVersion: Timestamp 73000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:29:59.851 [cleanupOldData-512764c799334798f3e47fb8] waiting to remove documents for test.bar from { _id: 33263.0 } -> { _id: 33725.0 } m30001| Fri Feb 22 12:29:59.909 [cleanupOldData-512764c699334798f3e47fb3] moveChunk deleted 462 documents for test.bar from { _id: 32801.0 } -> { _id: 33263.0 } m30001| Fri Feb 22 12:29:59.909 [cleanupOldData-512764c799334798f3e47fb8] moveChunk starting delete for: test.bar from { _id: 33263.0 } -> { _id: 33725.0 } 75000 m30001| Fri Feb 22 12:30:00.635 [cleanupOldData-512764c799334798f3e47fb8] moveChunk deleted 462 documents for test.bar from { _id: 33263.0 } -> { _id: 33725.0 } m30001| Fri Feb 22 12:30:00.636 [cleanupOldData-5127647b99334798f3e47d74] moveChunk starting delete for: test.bar from { _id: 1847.0 } -> { _id: 2309.0 } m30999| Fri Feb 22 12:30:00.835 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:00.836 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : 
bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:00.836 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:00 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c8bd1f9944665936a6" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c7bd1f9944665936a5" } } m30999| Fri Feb 22 12:30:00.837 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c8bd1f9944665936a6 m30999| Fri Feb 22 12:30:00.837 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:00.837 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:00.837 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:00.839 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:00.839 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:00.839 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:00.839 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:00.839 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:00.841 [Balancer] shard0001 has more chunks me:144 best: shard0000:73 m30999| Fri Feb 22 12:30:00.841 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:00.841 [Balancer] donor : shard0001 chunks on 144 m30999| Fri Feb 22 12:30:00.841 [Balancer] receiver : shard0000 chunks on 73 m30999| Fri Feb 22 12:30:00.841 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:00.841 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_33725.0", lastmod: Timestamp 74000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 33725.0 }, max: { _id: 34187.0 }, shard: "shard0001" } 
from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:00.841 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 74|1||000000000000000000000000min: { _id: 33725.0 }max: { _id: 34187.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:00.841 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:00.842 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 33725.0 }, max: { _id: 34187.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_33725.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:00.842 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c899334798f3e47fba m30001| Fri Feb 22 12:30:00.842 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:00-512764c899334798f3e47fbb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536200842), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 33725.0 }, max: { _id: 34187.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:00.843 [conn4] moveChunk request accepted at version 74|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:00.844 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:00.845 [migrateThread] starting receiving-end of migration of chunk { _id: 33725.0 } -> { _id: 34187.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:00.855 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 33725.0 }, max: { _id: 34187.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 176, clonedBytes: 183568, catchup: 0, steady: 0 }, ok: 1.0 } my 
mem used: 0 m30001| Fri Feb 22 12:30:00.865 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 33725.0 }, max: { _id: 34187.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 423, clonedBytes: 441189, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:00.867 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:00.867 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 33725.0 } -> { _id: 34187.0 } m30000| Fri Feb 22 12:30:00.869 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 33725.0 } -> { _id: 34187.0 } m30001| Fri Feb 22 12:30:00.875 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 33725.0 }, max: { _id: 34187.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:00.875 [conn4] moveChunk setting version to: 75|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:00.876 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:30:00.878 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 498 version: 74|1||51276475bd1f99446659365c based on: 74|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:00.878 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 74|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:00.878 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 74000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 498 m30001| Fri Feb 22 12:30:00.879 [conn3] waiting till out of critical section m30000| Fri 
Feb 22 12:30:00.879 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 33725.0 } -> { _id: 34187.0 } m30000| Fri Feb 22 12:30:00.879 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 33725.0 } -> { _id: 34187.0 } m30000| Fri Feb 22 12:30:00.879 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:00-512764c8c49297cf54df5652", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536200879), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 33725.0 }, max: { _id: 34187.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:30:00.886 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 33725.0 }, max: { _id: 34187.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:00.886 [conn4] moveChunk updating self version to: 75|1||51276475bd1f99446659365c through { _id: 34187.0 } -> { _id: 34649.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:00.887 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:00-512764c899334798f3e47fbc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536200887), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 33725.0 }, max: { _id: 34187.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:00.887 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:00.887 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:00.887 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:00.887 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section 
m30999| Fri Feb 22 12:30:00.887 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:30:00.887 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 74000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 74000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 75000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:00.887 [cleanupOldData-512764c899334798f3e47fbd] (start) waiting to cleanup test.bar from { _id: 33725.0 } -> { _id: 34187.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:00.887 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:00.887 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:00-512764c899334798f3e47fbe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536200887), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 33725.0 }, max: { _id: 34187.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:00.887 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:00.889 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 499 version: 75|1||51276475bd1f99446659365c based on: 74|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:00.891 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 500 version: 75|1||51276475bd1f99446659365c based on: 74|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:00.892 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:00.892 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:00.894 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 501 version: 75|1||51276475bd1f99446659365c based on: 75|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:00.894 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 75|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:00.894 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 75000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 501 m30999| Fri Feb 22 12:30:00.895 [conn1] setShardVersion success: { oldVersion: Timestamp 74000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:00.907 [cleanupOldData-512764c899334798f3e47fbd] waiting to remove documents for test.bar from { _id: 33725.0 } -> { _id: 34187.0 } 76000 m30001| Fri Feb 22 12:30:01.400 [cleanupOldData-5127647b99334798f3e47d74] moveChunk deleted 462 documents for test.bar from { _id: 1847.0 } -> { _id: 2309.0 } m30001| Fri Feb 22 12:30:01.400 [cleanupOldData-512764c899334798f3e47fbd] moveChunk starting delete for: test.bar from { _id: 33725.0 } -> { _id: 34187.0 } m30999| Fri Feb 22 12:30:01.893 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:01.893 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:01.893 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:30:01 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764c9bd1f9944665936a7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c8bd1f9944665936a6" } } m30999| Fri Feb 22 12:30:01.894 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764c9bd1f9944665936a7 m30999| Fri Feb 22 12:30:01.894 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:01.894 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:01.895 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:01.897 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:01.897 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:01.897 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:01.897 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:01.898 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:01.900 [Balancer] shard0001 has more chunks me:143 best: shard0000:74 m30999| Fri Feb 22 12:30:01.900 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:01.900 [Balancer] donor : shard0001 chunks on 143 m30999| Fri Feb 22 12:30:01.900 [Balancer] receiver : shard0000 chunks on 74 m30999| Fri Feb 22 12:30:01.900 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:01.900 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_34187.0", lastmod: Timestamp 75000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 34187.0 }, max: { _id: 34649.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:01.900 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 75|1||000000000000000000000000min: { _id: 34187.0 }max: { _id: 34649.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:01.901 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:30:01.901 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 34187.0 }, max: { _id: 34649.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_34187.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:01.902 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764c999334798f3e47fbf m30001| Fri Feb 22 12:30:01.902 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:01-512764c999334798f3e47fc0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536201902), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 34187.0 }, max: { _id: 34649.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:01.904 [conn4] moveChunk request accepted at version 75|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:01.906 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:01.907 [migrateThread] starting receiving-end of migration of chunk { _id: 34187.0 } -> { _id: 34649.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:01.917 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 34187.0 }, max: { _id: 34649.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 115, clonedBytes: 119945, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:01.918 [cleanupOldData-512764c899334798f3e47fbd] moveChunk deleted 462 documents for test.bar from { _id: 33725.0 } -> { _id: 34187.0 } m30001| Fri Feb 22 12:30:01.919 [cleanupOldData-5127647b99334798f3e47d6f] moveChunk starting delete for: test.foo from { _id: 3747.0 } -> { _id: 4684.0 } m30001| Fri Feb 22 12:30:01.927 [conn4] moveChunk data transfer progress: { active: 
true, ns: "test.bar", from: "localhost:30001", min: { _id: 34187.0 }, max: { _id: 34649.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 276, clonedBytes: 287868, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:01.938 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 34187.0 }, max: { _id: 34649.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 422, clonedBytes: 440146, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:01.941 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:01.941 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 34187.0 } -> { _id: 34649.0 } m30000| Fri Feb 22 12:30:01.945 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 34187.0 } -> { _id: 34649.0 } m30001| Fri Feb 22 12:30:01.948 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 34187.0 }, max: { _id: 34649.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:01.948 [conn11] Waiting for commit to finish m30001| Fri Feb 22 12:30:01.948 [conn4] moveChunk setting version to: 76|0||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.951 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 502 version: 75|1||51276475bd1f99446659365c based on: 75|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.951 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 75|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.951 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 75000|1, versionEpoch: 
ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 502 m30001| Fri Feb 22 12:30:01.951 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:30:01.956 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 34187.0 } -> { _id: 34649.0 } m30000| Fri Feb 22 12:30:01.956 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 34187.0 } -> { _id: 34649.0 } m30000| Fri Feb 22 12:30:01.956 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:01-512764c9c49297cf54df5653", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536201956), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 34187.0 }, max: { _id: 34649.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 33, step4 of 5: 0, step5 of 5: 14 } } m30001| Fri Feb 22 12:30:01.958 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 34187.0 }, max: { _id: 34649.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:01.958 [conn4] moveChunk updating self version to: 76|1||51276475bd1f99446659365c through { _id: 34649.0 } -> { _id: 35111.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:01.959 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:01-512764c999334798f3e47fc1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536201959), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 34187.0 }, max: { _id: 34649.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:01.959 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:30:01.959 [conn1] 
setShardVersion failed! m30999| { oldVersion: Timestamp 75000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 75000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 76000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:01.959 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:01.960 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:01.960 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:01.960 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:01.960 [cleanupOldData-512764c999334798f3e47fc2] (start) waiting to cleanup test.bar from { _id: 34187.0 } -> { _id: 34649.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:01.960 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
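The setShardVersion failure above is the expected stale-config path, not a test error: mongos sent version 75000|1 while the shard's global version had already advanced to 76000|0 during the migration commit, so the shard rejects the set with "shard global version for collection is higher" and asks mongos to reload (reloadConfig: true). A minimal sketch of that comparison, using the values from the log (the helper name `is_stale` is hypothetical, not MongoDB source):

```python
def is_stale(requested, global_version):
    """Return True when the requested (major, minor) shard version lags
    behind the shard's current global version, triggering the
    'shard global version for collection is higher' rejection above."""
    return requested < global_version  # tuple compare: major first, then minor

# Values from the setShardVersion failure logged above:
requested = (75000, 1)   # version mongos tried to set
current = (76000, 0)     # globalVersion already on the shard
print(is_stale(requested, current))
```

After the rejection, mongos reloads the chunk manager (the "chunk manager reload forced" lines) and retries with the newer version, which then succeeds.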
m30001| Fri Feb 22 12:30:01.960 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:01-512764c999334798f3e47fc3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536201960), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 34187.0 }, max: { _id: 34649.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 2, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:01.961 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:01.962 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 503 version: 76|1||51276475bd1f99446659365c based on: 75|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.965 [Balancer] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 504 version: 76|1||51276475bd1f99446659365c based on: 75|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.966 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:01.966 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
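Chunk versions throughout these lines use the compact notation major|minor||epoch, e.g. 76|1||51276475bd1f99446659365c: the major component bumps on each migration, the minor on splits, and the epoch identifies the collection incarnation. A small illustrative parser for reading that notation (`parse_chunk_version` is a hypothetical helper for log analysis, not part of MongoDB):

```python
def parse_chunk_version(s):
    """Split a log-format chunk version 'major|minor||epoch'
    into its (major, minor, epoch) components."""
    version, epoch = s.split("||")
    major, minor = version.split("|")
    return int(major), int(minor), epoch

# A version string taken from the log above:
print(parse_chunk_version("76|1||51276475bd1f99446659365c"))
```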
m30999| Fri Feb 22 12:30:01.968 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 505 version: 76|1||51276475bd1f99446659365c based on: 76|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.968 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 76|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:01.968 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 76000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 505 m30999| Fri Feb 22 12:30:01.968 [conn1] setShardVersion success: { oldVersion: Timestamp 75000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:01.980 [cleanupOldData-512764c999334798f3e47fc2] waiting to remove documents for test.bar from { _id: 34187.0 } -> { _id: 34649.0 } 77000 m30001| Fri Feb 22 12:30:02.674 [cleanupOldData-5127647b99334798f3e47d6f] moveChunk deleted 937 documents for test.foo from { _id: 3747.0 } -> { _id: 4684.0 } m30001| Fri Feb 22 12:30:02.674 [cleanupOldData-512764c999334798f3e47fc2] moveChunk starting delete for: test.bar from { _id: 34187.0 } -> { _id: 34649.0 } m30999| Fri Feb 22 12:30:02.967 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:02.967 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:02.968 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:30:02 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764cabd1f9944665936a8" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764c9bd1f9944665936a7" } } m30999| Fri Feb 22 12:30:02.968 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764cabd1f9944665936a8 m30999| Fri Feb 22 12:30:02.968 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:02.968 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:02.968 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:02.970 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:02.970 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:02.970 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:02.970 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:02.970 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:02.972 [Balancer] shard0001 has more chunks me:142 best: shard0000:75 m30999| Fri Feb 22 12:30:02.972 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:02.972 [Balancer] donor : shard0001 chunks on 142 m30999| Fri Feb 22 12:30:02.972 [Balancer] receiver : shard0000 chunks on 75 m30999| Fri Feb 22 12:30:02.972 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:02.972 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_34649.0", lastmod: Timestamp 76000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 34649.0 }, max: { _id: 35111.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:02.973 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 76|1||000000000000000000000000min: { _id: 34649.0 }max: { _id: 35111.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:02.973 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:30:02.973 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 34649.0 }, max: { _id: 35111.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_34649.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:02.974 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ca99334798f3e47fc4 m30001| Fri Feb 22 12:30:02.974 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:02-512764ca99334798f3e47fc5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536202974), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 34649.0 }, max: { _id: 35111.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:02.975 [conn4] moveChunk request accepted at version 76|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:02.977 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:02.977 [migrateThread] starting receiving-end of migration of chunk { _id: 34649.0 } -> { _id: 35111.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:02.987 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 34649.0 }, max: { _id: 35111.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 132, clonedBytes: 137676, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:02.997 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 34649.0 }, max: { _id: 35111.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 383, clonedBytes: 399469, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:03.000 [migrateThread] Waiting for replication to catch 
up before entering critical section m30000| Fri Feb 22 12:30:03.001 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 34649.0 } -> { _id: 35111.0 } m30000| Fri Feb 22 12:30:03.002 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 34649.0 } -> { _id: 35111.0 } m30001| Fri Feb 22 12:30:03.007 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 34649.0 }, max: { _id: 35111.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:03.008 [conn4] moveChunk setting version to: 77|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:03.008 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:30:03.011 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 506 version: 76|1||51276475bd1f99446659365c based on: 76|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:03.011 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 76|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:03.011 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 76000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 506 m30001| Fri Feb 22 12:30:03.012 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:30:03.012 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 34649.0 } -> { _id: 35111.0 } m30000| Fri Feb 22 12:30:03.012 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 34649.0 } -> { _id: 35111.0 } m30000| Fri Feb 22 12:30:03.012 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:03-512764cbc49297cf54df5654", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536203012), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 34649.0 }, max: { _id: 35111.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 22, step4 of 5: 0, step5 of 5: 11 } } m30001| Fri Feb 22 12:30:03.018 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 34649.0 }, max: { _id: 35111.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:03.018 [conn4] moveChunk updating self version to: 77|1||51276475bd1f99446659365c through { _id: 35111.0 } -> { _id: 35573.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:03.019 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:03-512764cb99334798f3e47fc6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536203019), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 34649.0 }, max: { _id: 35111.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:03.019 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:30:03.019 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:30:03.019 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 76000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 76000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 77000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:03.019 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:03.019 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:03.019 [cleanupOldData-512764cb99334798f3e47fc7] (start) waiting to cleanup test.bar from { _id: 34649.0 } -> { _id: 35111.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:03.019 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:03.020 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:03.020 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:03-512764cb99334798f3e47fc8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536203020), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 34649.0 }, max: { _id: 35111.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:03.020 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:03.022 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 507 version: 77|1||51276475bd1f99446659365c based on: 76|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:03.024 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 508 version: 77|1||51276475bd1f99446659365c based on: 76|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:03.025 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:03.025 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
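The Balancer lines in these rounds show the per-collection decision: test.foo (donor 54 chunks vs receiver 53) stays put, while test.bar (142 vs 75) has one chunk moved, because only the latter imbalance reaches the logged threshold of 2. A sketch of that check under those assumptions (`should_move` is a hypothetical name, not the balancer's actual code):

```python
def should_move(donor_chunks, receiver_chunks, threshold=2):
    """Move one chunk only when the donor exceeds the receiver by at
    least the imbalance threshold, matching the decisions logged above."""
    return donor_chunks - receiver_chunks >= threshold

print(should_move(54, 53))    # test.foo: imbalance 1, below threshold
print(should_move(142, 75))   # test.bar: imbalance 67, migrate one chunk
```

Each balancing round moves at most one chunk per collection, which is why the donor/receiver counts change by exactly one between successive rounds in this log.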
m30999| Fri Feb 22 12:30:03.026 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 509 version: 77|1||51276475bd1f99446659365c based on: 77|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:03.026 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 77|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:03.027 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 77000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 509 m30999| Fri Feb 22 12:30:03.027 [conn1] setShardVersion success: { oldVersion: Timestamp 76000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:03.039 [cleanupOldData-512764cb99334798f3e47fc7] waiting to remove documents for test.bar from { _id: 34649.0 } -> { _id: 35111.0 } m30001| Fri Feb 22 12:30:03.122 [cleanupOldData-512764c999334798f3e47fc2] moveChunk deleted 462 documents for test.bar from { _id: 34187.0 } -> { _id: 34649.0 } m30001| Fri Feb 22 12:30:03.122 [cleanupOldData-512764cb99334798f3e47fc7] moveChunk starting delete for: test.bar from { _id: 34649.0 } -> { _id: 35111.0 } m30001| Fri Feb 22 12:30:03.406 [cleanupOldData-512764cb99334798f3e47fc7] moveChunk deleted 462 documents for test.bar from { _id: 34649.0 } -> { _id: 35111.0 } m30001| Fri Feb 22 12:30:03.406 [cleanupOldData-5127647899334798f3e47d56] moveChunk starting delete for: test.bar from { _id: 461.0 } -> { _id: 923.0 } 78000 m30999| Fri Feb 22 12:30:04.026 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:04.026 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:04.026 [Balancer] about to 
acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:04 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764ccbd1f9944665936a9" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764cabd1f9944665936a8" } } m30999| Fri Feb 22 12:30:04.027 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764ccbd1f9944665936a9 m30999| Fri Feb 22 12:30:04.027 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:04.027 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:04.027 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:04.029 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:04.029 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:04.029 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:04.029 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:04.029 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:04.031 [Balancer] shard0001 has more chunks me:141 best: shard0000:76 m30999| Fri Feb 22 12:30:04.032 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:04.032 [Balancer] donor : shard0001 chunks on 141 m30999| Fri Feb 22 12:30:04.032 [Balancer] receiver : shard0000 chunks on 76 m30999| Fri Feb 22 12:30:04.032 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:04.032 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_35111.0", lastmod: Timestamp 77000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 35111.0 }, max: { _id: 35573.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:04.032 [Balancer] moving chunk ns: test.bar 
moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 77|1||000000000000000000000000min: { _id: 35111.0 }max: { _id: 35573.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:04.032 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:04.032 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 35111.0 }, max: { _id: 35573.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_35111.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:04.033 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764cc99334798f3e47fc9 m30001| Fri Feb 22 12:30:04.033 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:04-512764cc99334798f3e47fca", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536204033), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 35111.0 }, max: { _id: 35573.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:04.034 [conn4] moveChunk request accepted at version 77|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:04.036 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:04.036 [migrateThread] starting receiving-end of migration of chunk { _id: 35111.0 } -> { _id: 35573.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:04.046 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 35111.0 }, max: { _id: 35573.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 184, clonedBytes: 191912, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:04.056 [conn4] moveChunk data transfer progress: { active: true, ns: 
"test.bar", from: "localhost:30001", min: { _id: 35111.0 }, max: { _id: 35573.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 444, clonedBytes: 463092, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:04.057 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:04.057 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 35111.0 } -> { _id: 35573.0 } m30000| Fri Feb 22 12:30:04.059 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 35111.0 } -> { _id: 35573.0 } m30001| Fri Feb 22 12:30:04.066 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 35111.0 }, max: { _id: 35573.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:04.067 [conn4] moveChunk setting version to: 78|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:04.067 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:30:04.069 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 510 version: 77|1||51276475bd1f99446659365c based on: 77|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:04.069 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 77|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:04.069 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 77000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 510 m30000| Fri Feb 22 12:30:04.069 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 35111.0 } -> { _id: 35573.0 } m30000| Fri Feb 22 12:30:04.069 
[migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 35111.0 } -> { _id: 35573.0 } m30001| Fri Feb 22 12:30:04.069 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:30:04.069 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:04-512764ccc49297cf54df5655", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536204069), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 35111.0 }, max: { _id: 35573.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 20, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:30:04.077 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 35111.0 }, max: { _id: 35573.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:04.077 [conn4] moveChunk updating self version to: 78|1||51276475bd1f99446659365c through { _id: 35573.0 } -> { _id: 36035.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:04.078 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:04-512764cc99334798f3e47fcb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536204078), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 35111.0 }, max: { _id: 35573.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:04.078 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:30:04.078 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:30:04.078 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 77000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 77000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 78000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:04.078 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:04.078 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:04.078 [cleanupOldData-512764cc99334798f3e47fcc] (start) waiting to cleanup test.bar from { _id: 35111.0 } -> { _id: 35573.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:04.078 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:04.079 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:04.079 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:04-512764cc99334798f3e47fcd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536204079), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 35111.0 }, max: { _id: 35573.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:04.079 [Balancer] moveChunk result: { ok: 1.0 } m30001| Fri Feb 22 12:30:04.079 [cleanupOldData-5127647899334798f3e47d56] moveChunk deleted 462 documents for test.bar from { _id: 461.0 } -> { _id: 923.0 } m30001| Fri Feb 22 12:30:04.079 [cleanupOldData-5127647699334798f3e47d4c] moveChunk starting delete for: test.bar from { _id: MinKey } -> { _id: 461.0 } m30999| Fri Feb 22 12:30:04.080 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 511 version: 78|1||51276475bd1f99446659365c based on: 77|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:04.082 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 512 version: 78|1||51276475bd1f99446659365c based on: 77|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:04.082 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:04.083 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
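The moveChunk.from events above record per-step timings in milliseconds (step1 of 6 through step6 of 6; step4, the data transfer, dominates). A throwaway helper for totaling them from such a details document, useful when reading these logs (hypothetical, not a MongoDB API):

```python
def total_move_ms(details, steps=6):
    """Sum the 'stepN of M' millisecond fields of a moveChunk event."""
    return sum(details["step%d of %d" % (n, steps)] for n in range(1, steps + 1))

# Step timings from the moveChunk.from event logged just above:
details = {"step1 of 6": 0, "step2 of 6": 2, "step3 of 6": 1,
           "step4 of 6": 30, "step5 of 6": 11, "step6 of 6": 0}
print(total_move_ms(details))  # total milliseconds spent in the migration
```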
m30999| Fri Feb 22 12:30:04.084 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 513 version: 78|1||51276475bd1f99446659365c based on: 78|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:04.084 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 78|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:04.084 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 78000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 513 m30999| Fri Feb 22 12:30:04.084 [conn1] setShardVersion success: { oldVersion: Timestamp 77000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:04.098 [cleanupOldData-512764cc99334798f3e47fcc] waiting to remove documents for test.bar from { _id: 35111.0 } -> { _id: 35573.0 } 79000 m30001| Fri Feb 22 12:30:04.558 [cleanupOldData-5127647699334798f3e47d4c] moveChunk deleted 461 documents for test.bar from { _id: MinKey } -> { _id: 461.0 } m30001| Fri Feb 22 12:30:04.558 [cleanupOldData-5127648099334798f3e47d9c] moveChunk starting delete for: test.bar from { _id: 3695.0 } -> { _id: 4157.0 } m30999| Fri Feb 22 12:30:05.083 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:05.083 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:05.084 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:05 
2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764cdbd1f9944665936aa" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764ccbd1f9944665936a9" } } m30999| Fri Feb 22 12:30:05.084 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764cdbd1f9944665936aa m30999| Fri Feb 22 12:30:05.084 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:05.084 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:05.084 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:05.086 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:05.086 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:05.086 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:05.086 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:05.086 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:05.088 [Balancer] shard0001 has more chunks me:140 best: shard0000:77 m30999| Fri Feb 22 12:30:05.088 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:05.088 [Balancer] donor : shard0001 chunks on 140 m30999| Fri Feb 22 12:30:05.088 [Balancer] receiver : shard0000 chunks on 77 m30999| Fri Feb 22 12:30:05.088 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:05.089 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_35573.0", lastmod: Timestamp 78000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 35573.0 }, max: { _id: 36035.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:05.089 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 78|1||000000000000000000000000min: { _id: 35573.0 }max: { _id: 36035.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:05.089 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:05.089 [conn4] 
received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 35573.0 }, max: { _id: 36035.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_35573.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:05.090 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764cd99334798f3e47fce
m30001| Fri Feb 22 12:30:05.090 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:05-512764cd99334798f3e47fcf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536205090), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 35573.0 }, max: { _id: 36035.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:05.091 [conn4] moveChunk request accepted at version 78|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:05.092 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:05.092 [migrateThread] starting receiving-end of migration of chunk { _id: 35573.0 } -> { _id: 36035.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:05.103 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 35573.0 }, max: { _id: 36035.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 118, clonedBytes: 123074, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:05.113 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 35573.0 }, max: { _id: 36035.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 296, clonedBytes: 308728, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:05.123 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:05.123 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 35573.0 } -> { _id: 36035.0 }
m30001| Fri Feb 22 12:30:05.123 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 35573.0 }, max: { _id: 36035.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:05.125 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 35573.0 } -> { _id: 36035.0 }
m30001| Fri Feb 22 12:30:05.133 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 35573.0 }, max: { _id: 36035.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:05.133 [conn4] moveChunk setting version to: 79|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:05.133 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:05.136 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 35573.0 } -> { _id: 36035.0 }
m30000| Fri Feb 22 12:30:05.136 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 35573.0 } -> { _id: 36035.0 }
m30000| Fri Feb 22 12:30:05.136 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:05-512764cdc49297cf54df5656", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536205136), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 35573.0 }, max: { _id: 36035.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:05.144 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 35573.0 }, max: { _id: 36035.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:05.144 [conn4] moveChunk updating self version to: 79|1||51276475bd1f99446659365c through { _id: 36035.0 } -> { _id: 36497.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:05.144 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:05-512764cd99334798f3e47fd0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536205144), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 35573.0 }, max: { _id: 36035.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:05.144 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:05.145 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:05.145 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:05.145 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:05.145 [cleanupOldData-512764cd99334798f3e47fd1] (start) waiting to cleanup test.bar from { _id: 35573.0 } -> { _id: 36035.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:05.145 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:05.145 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:05.145 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:05-512764cd99334798f3e47fd2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536205145), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 35573.0 }, max: { _id: 36035.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:05.145 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:05.148 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 514 version: 79|1||51276475bd1f99446659365c based on: 78|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:05.148 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:05.148 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:30:05.148 [cleanupOldData-5127648099334798f3e47d9c] moveChunk deleted 462 documents for test.bar from { _id: 3695.0 } -> { _id: 4157.0 }
m30001| Fri Feb 22 12:30:05.148 [cleanupOldData-5127648999334798f3e47dec] moveChunk starting delete for: test.bar from { _id: 7391.0 } -> { _id: 7853.0 }
m30999| Fri Feb 22 12:30:05.150 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 79000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 514
m30999| Fri Feb 22 12:30:05.151 [conn1] setShardVersion success: { oldVersion: Timestamp 78000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:05.165 [cleanupOldData-512764cd99334798f3e47fd1] waiting to remove documents for test.bar from { _id: 35573.0 } -> { _id: 36035.0 }
80000
m30001| Fri Feb 22 12:30:05.797 [cleanupOldData-5127648999334798f3e47dec] moveChunk deleted 462 documents for test.bar from { _id: 7391.0 } -> { _id: 7853.0 }
m30001| Fri Feb 22 12:30:05.797 [cleanupOldData-512764cd99334798f3e47fd1] moveChunk starting delete for: test.bar from { _id: 35573.0 } -> { _id: 36035.0 }
m30001| Fri Feb 22 12:30:06.136 [cleanupOldData-512764cd99334798f3e47fd1] moveChunk deleted 462 documents for test.bar from { _id: 35573.0 } -> { _id: 36035.0 }
m30001| Fri Feb 22 12:30:06.136 [cleanupOldData-5127649299334798f3e47e3c] moveChunk starting delete for: test.bar from { _id: 11087.0 } -> { _id: 11549.0 }
m30999| Fri Feb 22 12:30:06.149 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:06.149 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:06.150 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:06 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764cebd1f9944665936ab" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764cdbd1f9944665936aa" } }
m30999| Fri Feb 22 12:30:06.150 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764cebd1f9944665936ab
m30999| Fri Feb 22 12:30:06.150 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:06.150 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:06.150 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:06.153 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:06.153 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:06.153 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:06.153 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:06.153 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:06.155 [Balancer] shard0001 has more chunks me:139 best: shard0000:78
m30999| Fri Feb 22 12:30:06.155 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:06.155 [Balancer] donor : shard0001 chunks on 139
m30999| Fri Feb 22 12:30:06.155 [Balancer] receiver : shard0000 chunks on 78
m30999| Fri Feb 22 12:30:06.155 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:06.155 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_36035.0", lastmod: Timestamp 79000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 36035.0 }, max: { _id: 36497.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:06.155 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 79|1||000000000000000000000000min: { _id: 36035.0 }max: { _id: 36497.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:06.156 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:06.156 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 36035.0 }, max: { _id: 36497.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_36035.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:06.157 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ce99334798f3e47fd3
m30001| Fri Feb 22 12:30:06.157 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:06-512764ce99334798f3e47fd4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536206157), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 36035.0 }, max: { _id: 36497.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:06.159 [conn4] moveChunk request accepted at version 79|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:06.161 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:06.161 [migrateThread] starting receiving-end of migration of chunk { _id: 36035.0 } -> { _id: 36497.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:06.171 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36035.0 }, max: { _id: 36497.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 124, clonedBytes: 129332, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:06.181 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36035.0 }, max: { _id: 36497.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 300, clonedBytes: 312900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:06.191 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:06.191 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 36035.0 } -> { _id: 36497.0 }
m30001| Fri Feb 22 12:30:06.191 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36035.0 }, max: { _id: 36497.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:06.194 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 36035.0 } -> { _id: 36497.0 }
m30001| Fri Feb 22 12:30:06.202 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36035.0 }, max: { _id: 36497.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:06.202 [conn4] moveChunk setting version to: 80|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:06.202 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:06.204 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 36035.0 } -> { _id: 36497.0 }
m30000| Fri Feb 22 12:30:06.204 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 36035.0 } -> { _id: 36497.0 }
m30000| Fri Feb 22 12:30:06.204 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:06-512764cec49297cf54df5657", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536206204), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 36035.0 }, max: { _id: 36497.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:30:06.205 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 515 version: 79|1||51276475bd1f99446659365c based on: 79|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:06.205 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 79|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:06.206 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 79000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 515
m30001| Fri Feb 22 12:30:06.206 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:30:06.212 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 36035.0 }, max: { _id: 36497.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:06.212 [conn4] moveChunk updating self version to: 80|1||51276475bd1f99446659365c through { _id: 36497.0 } -> { _id: 36959.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:06.213 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:06-512764ce99334798f3e47fd5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536206213), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 36035.0 }, max: { _id: 36497.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:06.213 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:06.213 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:06.213 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:30:06.213 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 79000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 79000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 80000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:06.213 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:06.214 [cleanupOldData-512764ce99334798f3e47fd6] (start) waiting to cleanup test.bar from { _id: 36035.0 } -> { _id: 36497.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:06.214 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:06.214 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:06.214 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:06-512764ce99334798f3e47fd7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536206214), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 36035.0 }, max: { _id: 36497.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:06.214 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:06.215 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 516 version: 80|1||51276475bd1f99446659365c based on: 79|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:06.217 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 517 version: 80|1||51276475bd1f99446659365c based on: 79|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:06.219 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:06.219 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:06.220 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 518 version: 80|1||51276475bd1f99446659365c based on: 80|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:06.220 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 80|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:06.221 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 80000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 518
m30999| Fri Feb 22 12:30:06.221 [conn1] setShardVersion success: { oldVersion: Timestamp 79000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:06.234 [cleanupOldData-512764ce99334798f3e47fd6] waiting to remove documents for test.bar from { _id: 36035.0 } -> { _id: 36497.0 }
81000
m30001| Fri Feb 22 12:30:06.651 [cleanupOldData-5127649299334798f3e47e3c] moveChunk deleted 462 documents for test.bar from { _id: 11087.0 } -> { _id: 11549.0 }
m30001| Fri Feb 22 12:30:06.651 [cleanupOldData-512764ce99334798f3e47fd6] moveChunk starting delete for: test.bar from { _id: 36035.0 } -> { _id: 36497.0 }
m30001| Fri Feb 22 12:30:06.888 [cleanupOldData-512764ce99334798f3e47fd6] moveChunk deleted 462 documents for test.bar from { _id: 36035.0 } -> { _id: 36497.0 }
m30001| Fri Feb 22 12:30:06.888 [cleanupOldData-5127649b99334798f3e47e8c] moveChunk starting delete for: test.bar from { _id: 14783.0 } -> { _id: 15245.0 }
m30999| Fri Feb 22 12:30:07.220 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:07.220 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:07.220 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:07 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764cfbd1f9944665936ac" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764cebd1f9944665936ab" } }
m30999| Fri Feb 22 12:30:07.221 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764cfbd1f9944665936ac
m30999| Fri Feb 22 12:30:07.221 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:07.221 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:07.221 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:07.223 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:07.223 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:07.223 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:07.223 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:07.223 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:07.225 [Balancer] shard0001 has more chunks me:138 best: shard0000:79
m30999| Fri Feb 22 12:30:07.225 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:07.225 [Balancer] donor : shard0001 chunks on 138
m30999| Fri Feb 22 12:30:07.225 [Balancer] receiver : shard0000 chunks on 79
m30999| Fri Feb 22 12:30:07.225 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:07.225 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_36497.0", lastmod: Timestamp 80000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:07.225 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 80|1||000000000000000000000000min: { _id: 36497.0 }max: { _id: 36959.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:07.226 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:07.226 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 36497.0 }, max: { _id: 36959.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_36497.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:07.227 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764cf99334798f3e47fd8
m30001| Fri Feb 22 12:30:07.227 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:07-512764cf99334798f3e47fd9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536207227), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 36497.0 }, max: { _id: 36959.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:07.228 [conn4] moveChunk request accepted at version 80|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:07.229 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:07.230 [migrateThread] starting receiving-end of migration of chunk { _id: 36497.0 } -> { _id: 36959.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:07.240 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 71, clonedBytes: 74053, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:07.250 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 191, clonedBytes: 199213, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:07.260 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 308, clonedBytes: 321244, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:07.270 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 430, clonedBytes: 448490, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:07.273 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:07.273 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 36497.0 } -> { _id: 36959.0 }
m30000| Fri Feb 22 12:30:07.276 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 36497.0 } -> { _id: 36959.0 }
m30001| Fri Feb 22 12:30:07.286 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:07.286 [conn4] moveChunk setting version to: 81|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:07.286 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:30:07.289 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 519 version: 80|1||51276475bd1f99446659365c based on: 80|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:07.289 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 80|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:07.289 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 80000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 519
m30001| Fri Feb 22 12:30:07.289 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:30:07.296 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 36497.0 } -> { _id: 36959.0 }
m30000| Fri Feb 22 12:30:07.296 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:07.296 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 36497.0 } -> { _id: 36959.0 }
m30000| Fri Feb 22 12:30:07.297 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:07-512764cfc49297cf54df5658", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536207297), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 36497.0 }, max: { _id: 36959.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 42, step4 of 5: 0, step5 of 5: 23 } }
m30001| Fri Feb 22 12:30:07.307 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 36497.0 }, max: { _id: 36959.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:07.307 [conn4] moveChunk updating self version to: 81|1||51276475bd1f99446659365c through { _id: 36959.0 } -> { _id: 37421.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:07.308 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:07-512764cf99334798f3e47fda", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536207308), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 36497.0 }, max: { _id: 36959.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:07.308 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:07.308 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:07.308 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:07.308 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:30:07.308 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:30:07.308 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 80000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 80000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 81000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:07.308 [cleanupOldData-512764cf99334798f3e47fdb] (start) waiting to cleanup test.bar from { _id: 36497.0 } -> { _id: 36959.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:07.308 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:07.308 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:07-512764cf99334798f3e47fdc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536207308), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 36497.0 }, max: { _id: 36959.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 56, step5 of 6: 21, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:07.308 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:07.310 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 520 version: 81|1||51276475bd1f99446659365c based on: 80|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:07.311 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 521 version: 81|1||51276475bd1f99446659365c based on: 80|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:07.312 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:07.312 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:07.313 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 522 version: 81|1||51276475bd1f99446659365c based on: 81|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:07.314 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 81|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:07.314 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 81000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 522
m30999| Fri Feb 22 12:30:07.314 [conn1] setShardVersion success: { oldVersion: Timestamp 80000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:07.328 [cleanupOldData-512764cf99334798f3e47fdb] waiting to remove documents for test.bar from { _id: 36497.0 } -> { _id: 36959.0 }
m30001| Fri Feb 22 12:30:07.359 [cleanupOldData-5127649b99334798f3e47e8c] moveChunk deleted 462 documents for test.bar from { _id: 14783.0 } -> { _id: 15245.0 }
m30001| Fri Feb 22 12:30:07.359 [cleanupOldData-512764cf99334798f3e47fdb] moveChunk starting delete for: test.bar from { _id: 36497.0 } -> { _id: 36959.0 }
m30001| Fri Feb 22 12:30:07.683 [cleanupOldData-512764cf99334798f3e47fdb] moveChunk deleted 462 documents for test.bar from { _id: 36497.0 } -> { _id: 36959.0 }
m30001| Fri Feb 22 12:30:07.683 [cleanupOldData-512764a499334798f3e47edc] moveChunk starting delete for: test.bar from { _id: 18479.0 } -> { _id: 18941.0 }
82000
m30999| Fri Feb 22 12:30:08.313 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:08.313 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:08.313 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:08 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764d0bd1f9944665936ad" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764cfbd1f9944665936ac" } }
m30999| Fri Feb 22 12:30:08.314 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d0bd1f9944665936ad
m30999| Fri Feb 22 12:30:08.314 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:08.314 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:08.314 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:08.316 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:08.316 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:08.316 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:08.316 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:08.316 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:08.317 [Balancer] shard0001 has more chunks me:137 best: shard0000:80
m30999| Fri Feb 22 12:30:08.317 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:08.317 [Balancer] donor : shard0001 chunks on 137
m30999| Fri Feb 22 12:30:08.317 [Balancer] receiver : shard0000 chunks on 80
m30999| Fri Feb 22 12:30:08.317 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:08.317 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_36959.0", lastmod: Timestamp 81000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 36959.0 }, max: { _id: 37421.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:08.317 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 81|1||000000000000000000000000min: { _id: 36959.0 }max: { _id: 37421.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:08.317 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:08.318 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 36959.0 }, max: { _id: 37421.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_36959.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:08.318 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d099334798f3e47fdd
m30001| Fri Feb 22 12:30:08.318 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:08-512764d099334798f3e47fde", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536208318), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 36959.0 }, max: { _id: 37421.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:08.319 [conn4] moveChunk request accepted at version 81|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:08.321 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:08.321 [migrateThread] starting receiving-end of migration of chunk { _id: 36959.0 } -> { _id: 37421.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:08.331 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36959.0 }, max: { _id: 37421.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 105, clonedBytes: 109515, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:08.341 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36959.0 }, max: { _id: 37421.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 268, clonedBytes: 279524, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:08.351 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36959.0 }, max: { _id: 37421.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 431, clonedBytes: 449533, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:08.353 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:08.353 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 36959.0 } -> { _id: 37421.0 }
m30000| Fri Feb 22 12:30:08.355 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 36959.0 } -> { _id: 37421.0 }
m30001| Fri Feb 22 12:30:08.361 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 36959.0 }, max: { _id: 37421.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:08.361 [conn4] moveChunk setting version to: 82|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:08.361 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:30:08.363 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 523 version: 81|1||51276475bd1f99446659365c based on: 81|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:08.363 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 81|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:08.364 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 81000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 523
m30001| Fri Feb 22 12:30:08.364 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:30:08.365 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 36959.0 } -> { _id: 37421.0 }
m30000| Fri Feb 22 12:30:08.365 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 36959.0 } -> { _id: 37421.0 }
m30000| Fri Feb 22 12:30:08.365 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:08-512764d0c49297cf54df5659", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536208365), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 36959.0 }, max: { _id: 37421.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:08.372 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 36959.0 }, max: { _id: 37421.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:08.372 [conn4] moveChunk updating self version to: 82|1||51276475bd1f99446659365c through { _id: 37421.0 } -> { _id: 37883.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:08.372 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:08-512764d099334798f3e47fdf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536208372), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 36959.0 }, max: { _id: 37421.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:08.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:08.372 [conn4]
MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:08.372 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:30:08.372 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 81000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 81000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 82000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:08.372 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:08.372 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:08.372 [cleanupOldData-512764d099334798f3e47fe0] (start) waiting to cleanup test.bar from { _id: 36959.0 } -> { _id: 37421.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:08.373 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:08.373 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:08-512764d099334798f3e47fe1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536208373), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 36959.0 }, max: { _id: 37421.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:08.373 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:08.374 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 524 version: 82|1||51276475bd1f99446659365c based on: 81|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:08.375 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 525 version: 82|1||51276475bd1f99446659365c based on: 81|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:08.376 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:08.376 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:08.377 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 526 version: 82|1||51276475bd1f99446659365c based on: 82|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:08.377 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 82|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:08.377 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 82000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 526
m30999| Fri Feb 22 12:30:08.378 [conn1] setShardVersion success: { oldVersion: Timestamp 81000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:08.392 [cleanupOldData-512764d099334798f3e47fe0] waiting to remove documents for test.bar from { _id: 36959.0 } -> { _id: 37421.0 }
m30001| Fri Feb 22 12:30:08.452 [cleanupOldData-512764a499334798f3e47edc] moveChunk deleted 462 documents for test.bar from { _id: 18479.0 } -> { _id: 18941.0 }
m30001| Fri Feb 22 12:30:08.452 [cleanupOldData-512764d099334798f3e47fe0] moveChunk starting delete for: test.bar from { _id: 36959.0 } -> { _id: 37421.0 }
83000
m30001| Fri Feb 22 12:30:08.646 [cleanupOldData-512764d099334798f3e47fe0] moveChunk deleted 462 documents for test.bar from { _id: 36959.0 } -> { _id: 37421.0 }
m30001| Fri Feb 22 12:30:08.646 [cleanupOldData-512764ad99334798f3e47f2c] moveChunk starting delete for: test.bar from { _id: 22175.0 } -> { _id: 22637.0 }
m30999| Fri Feb 22 12:30:08.791 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:30:08 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
m30001| Fri Feb 22 12:30:09.004 [cleanupOldData-512764ad99334798f3e47f2c] moveChunk deleted 462 documents for test.bar from { _id: 22175.0 } -> { _id: 22637.0 }
m30001| Fri Feb 22 12:30:09.004 [cleanupOldData-512764bb99334798f3e47f7c] moveChunk starting delete for: test.bar from { _id: 27719.0 } -> { _id: 28181.0 }
m30001| Fri Feb 22 12:30:09.207 [cleanupOldData-512764bb99334798f3e47f7c] moveChunk deleted 462 documents for test.bar from { _id: 27719.0 } -> { _id: 28181.0 }
m30001| Fri Feb 22 12:30:09.207 [cleanupOldData-512764cc99334798f3e47fcc] moveChunk starting delete for: test.bar from { _id: 35111.0 } -> { _id: 35573.0 }
m30999| Fri Feb 22 12:30:09.377 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:09.378 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:09.378 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:09 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764d1bd1f9944665936ae" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764d0bd1f9944665936ad" } }
m30999| Fri Feb 22 12:30:09.379 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d1bd1f9944665936ae
m30999| Fri Feb 22 12:30:09.379 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:09.379 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:09.379 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:09.382 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:09.382 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:09.382 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:09.382 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:09.382 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:09.384 [Balancer] shard0001 has more chunks me:136 best: shard0000:81
m30999| Fri Feb 22 12:30:09.384 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:09.384 [Balancer] donor : shard0001 chunks on 136
m30999| Fri Feb 22 12:30:09.384 [Balancer] receiver : shard0000 chunks on 81
m30999| Fri Feb 22 12:30:09.384 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:09.384 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_37421.0", lastmod: Timestamp 82000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 37421.0 }, max: { _id: 37883.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:09.385 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 82|1||000000000000000000000000min: { _id: 37421.0 }max: { _id: 37883.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:09.385 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:09.385 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 37421.0 }, max: { _id: 37883.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_37421.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:09.386 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d199334798f3e47fe2
m30001| Fri Feb 22 12:30:09.386 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:09-512764d199334798f3e47fe3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536209386), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 37421.0 }, max: { _id: 37883.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:09.388 [conn4] moveChunk request accepted at version 82|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:09.389 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:09.390 [migrateThread] starting receiving-end of migration of chunk { _id: 37421.0 } -> { _id: 37883.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:09.400 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37421.0 }, max: { _id: 37883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 118, clonedBytes: 123074, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:30:09.410 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37421.0 }, max: { _id: 37883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 236, clonedBytes: 246148, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:30:09.420 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37421.0 }, max: { _id: 37883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 404, clonedBytes: 421372, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:30:09.424 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:09.424 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 37421.0 } -> { _id: 37883.0 }
m30000| Fri Feb 22 12:30:09.427 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 37421.0 } -> { _id: 37883.0 }
m30001| Fri Feb 22 12:30:09.431 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37421.0 }, max: { _id: 37883.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:30:09.431 [conn4] moveChunk setting version to: 83|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:09.431 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:30:09.434 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 527 version: 82|1||51276475bd1f99446659365c based on: 82|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:09.435 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 82|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:09.435 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 82000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 527
m30001| Fri Feb 22 12:30:09.435 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:30:09.437 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 37421.0 } -> { _id: 37883.0 }
m30000| Fri Feb 22 12:30:09.437 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 37421.0 } -> { _id: 37883.0 }
m30000| Fri Feb 22 12:30:09.437 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:09-512764d1c49297cf54df565a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536209437), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 37421.0 }, max: { _id: 37883.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 33, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:09.441 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 37421.0 }, max: { _id: 37883.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:09.441 [conn4] moveChunk updating self version to: 83|1||51276475bd1f99446659365c through { _id: 37883.0 } -> { _id: 38345.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:09.442 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:09-512764d199334798f3e47fe4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536209442), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 37421.0 }, max: { _id: 37883.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:09.442 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:30:09.442 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:30:09.442 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 82000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 82000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 83000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:09.442 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:09.442 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:09.442 [cleanupOldData-512764d199334798f3e47fe5] (start) waiting to cleanup test.bar from { _id: 37421.0 } -> { _id: 37883.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:09.442 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:09.443 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:09.443 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:09-512764d199334798f3e47fe6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536209443), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 37421.0 }, max: { _id: 37883.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:09.443 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:09.444 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 528 version: 83|1||51276475bd1f99446659365c based on: 82|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:09.446 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 529 version: 83|1||51276475bd1f99446659365c based on: 82|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:09.447 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:09.448 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:09.449 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 530 version: 83|1||51276475bd1f99446659365c based on: 83|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:09.450 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 83|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:09.450 [cleanupOldData-512764cc99334798f3e47fcc] moveChunk deleted 462 documents for test.bar from { _id: 35111.0 } -> { _id: 35573.0 }
m30999| Fri Feb 22 12:30:09.450 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 83000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 530
m30999| Fri Feb 22 12:30:09.450 [conn1] setShardVersion success: { oldVersion: Timestamp 82000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:09.462 [cleanupOldData-512764d199334798f3e47fe5] waiting to remove documents for test.bar from { _id: 37421.0 } -> { _id: 37883.0 }
m30001| Fri Feb 22 12:30:09.463 [cleanupOldData-512764d199334798f3e47fe5] moveChunk starting delete for: test.bar from { _id: 37421.0 } -> { _id: 37883.0 }
84000
m30001| Fri Feb 22 12:30:10.162 [cleanupOldData-512764d199334798f3e47fe5] moveChunk deleted 462 documents for test.bar from { _id: 37421.0 } -> { _id: 37883.0 }
m30999| Fri Feb 22 12:30:10.449 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:10.449 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:10.449 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:10 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764d2bd1f9944665936af" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764d1bd1f9944665936ae" } }
m30999| Fri Feb 22 12:30:10.450 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d2bd1f9944665936af
m30999| Fri Feb 22 12:30:10.450 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:10.450 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:10.450 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:10.452 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:10.452 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:10.452 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:10.452 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:10.452 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:10.455 [Balancer] shard0001 has more chunks me:135 best: shard0000:82
m30999| Fri Feb 22 12:30:10.455 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:10.455 [Balancer] donor : shard0001 chunks on 135
m30999| Fri Feb 22 12:30:10.455 [Balancer] receiver : shard0000 chunks on 82
m30999| Fri Feb 22 12:30:10.455 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:10.455 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_37883.0", lastmod: Timestamp 83000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 37883.0 }, max: { _id: 38345.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:10.455 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 83|1||000000000000000000000000min: { _id: 37883.0 }max: { _id: 38345.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:10.455 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:10.455 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 37883.0 }, max: { _id: 38345.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_37883.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:10.456 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d299334798f3e47fe7
m30001| Fri Feb 22 12:30:10.456 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:10-512764d299334798f3e47fe8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536210456), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 37883.0 }, max: { _id: 38345.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:10.458 [conn4] moveChunk request accepted at version 83|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:10.459 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:10.460 [migrateThread] starting receiving-end of migration of chunk { _id: 37883.0 } -> { _id: 38345.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:10.470 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37883.0 }, max: { _id: 38345.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 123, clonedBytes: 128289, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:30:10.480 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37883.0 }, max: { _id: 38345.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 303, clonedBytes: 316029, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:30:10.489 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:10.489 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 37883.0 } -> { _id: 38345.0 }
m30001| Fri Feb 22 12:30:10.490 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37883.0 }, max: { _id: 38345.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30000| Fri Feb 22 12:30:10.492 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 37883.0 } -> { _id: 38345.0 }
m30001| Fri Feb 22 12:30:10.500 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 37883.0 }, max: { _id: 38345.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0
m30001| Fri Feb 22 12:30:10.500 [conn4] moveChunk setting version to: 84|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:10.501 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:10.502 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 37883.0 } -> { _id: 38345.0 }
m30000| Fri Feb 22 12:30:10.502 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 37883.0 } -> { _id: 38345.0 }
m30000| Fri Feb 22 12:30:10.502 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:10-512764d2c49297cf54df565b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536210502), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 37883.0 }, max: { _id: 38345.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:30:10.503 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 531 version: 83|1||51276475bd1f99446659365c based on: 83|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:10.503 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 83|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:10.503 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 83000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 531
m30001| Fri Feb 22 12:30:10.504 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:30:10.511 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 37883.0 }, max: { _id: 38345.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:10.511 [conn4] moveChunk updating self version to: 84|1||51276475bd1f99446659365c through { _id: 38345.0 } -> { _id: 38807.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:10.512 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:10-512764d299334798f3e47fe9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536210512), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 37883.0 }, max: { _id: 38345.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:10.512 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:30:10.512 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:30:10.512 [conn4] MigrateFromStatus::done Global lock acquired
m30999| { oldVersion: Timestamp 83000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 83000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 84000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:10.512 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:10.512 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:10.512 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:10.512 [cleanupOldData-512764d299334798f3e47fea] (start) waiting to cleanup test.bar from { _id: 37883.0 } -> { _id: 38345.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:10.512 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:10.512 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:10-512764d299334798f3e47feb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536210512), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 37883.0 }, max: { _id: 38345.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:10.512 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:10.514 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 532 version: 84|1||51276475bd1f99446659365c based on: 83|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:10.516 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 533 version: 84|1||51276475bd1f99446659365c based on: 83|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:10.517 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:10.517 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:10.518 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 534 version: 84|1||51276475bd1f99446659365c based on: 84|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:10.519 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 84|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:10.519 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 84000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 534
m30999| Fri Feb 22 12:30:10.519 [conn1] setShardVersion success: { oldVersion: Timestamp 83000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:10.532 [cleanupOldData-512764d299334798f3e47fea] waiting to remove documents for test.bar from { _id: 37883.0 } -> { _id: 38345.0 }
m30001| Fri Feb 22 12:30:10.532 [cleanupOldData-512764d299334798f3e47fea] moveChunk starting delete for: test.bar from { _id: 37883.0 } -> { _id: 38345.0 }
85000
m30001| Fri Feb 22 12:30:11.038 [cleanupOldData-512764d299334798f3e47fea] moveChunk deleted 462 documents for test.bar from { _id: 37883.0 } -> { _id: 38345.0 }
m30999| Fri Feb 22 12:30:11.518 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:11.518 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:11.519 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:11 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764d3bd1f9944665936b0" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764d2bd1f9944665936af" } }
m30999| Fri Feb 22 12:30:11.519 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d3bd1f9944665936b0
m30999| Fri Feb 22 12:30:11.519 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:11.519 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:11.519 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:11.521 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:11.521 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:11.521 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:11.522 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:11.522 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:11.524 [Balancer] shard0001 has more chunks me:134 best: shard0000:83
m30999| Fri Feb 22 12:30:11.524 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:11.524 [Balancer] donor : shard0001 chunks on 134
m30999| Fri Feb 22 12:30:11.524 [Balancer] receiver : shard0000 chunks on 83
m30999| Fri Feb 22 12:30:11.524 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:11.524 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_38345.0", lastmod: Timestamp 84000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 38345.0 }, max: { _id: 38807.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:11.524 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 84|1||000000000000000000000000min: { _id: 38345.0 }max: { _id: 38807.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:11.524 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:11.525 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 38345.0 }, max: { _id: 38807.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_38345.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:11.526 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d399334798f3e47fec
m30001| Fri Feb 22 12:30:11.526 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:11-512764d399334798f3e47fed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536211526), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 38345.0 }, max: { _id: 38807.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:11.527 [conn4] moveChunk request accepted at version 84|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:11.528 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:11.529 [migrateThread] starting receiving-end of migration of chunk { _id: 38345.0 } -> { _id: 38807.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:11.539 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38345.0 }, max: { _id: 38807.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 139, clonedBytes: 144977, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:11.549 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38345.0 }, max: { _id: 38807.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 328, clonedBytes: 342104, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:11.557 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:11.557 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 38345.0 } -> { _id: 38807.0 }
m30000| Fri Feb 22 12:30:11.559 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 38345.0 } -> { _id: 38807.0 }
m30001| Fri Feb 22 12:30:11.559 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38345.0 }, max: { _id: 38807.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:11.560 [conn4] moveChunk setting version to: 85|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:11.560 [conn11] Waiting for commit to finish
m30001| Fri Feb 22 12:30:11.560 [conn3] assertion 13388 [test.bar] shard version not ok in Client::Context: version mismatch detected for test.bar, stored major version 85 does not match received 84 ( ns : test.bar, received : 84|1||51276475bd1f99446659365c, wanted : 85|0||51276475bd1f99446659365c, send ) ( ns : test.bar, received : 84|1||51276475bd1f99446659365c, wanted : 85|0||51276475bd1f99446659365c, send ) ns:test.bar query:{ query: { _id: 85634.0 }, $explain: true }
m30001| Fri Feb 22 12:30:11.560 [conn3] stale version detected during query over test.bar : { $err: "[test.bar] shard version not ok in Client::Context: version mismatch detected for test.bar, stored major version 85 does not match received 84 ( ns : ...", code: 13388, ns: "test.bar", vReceived: Timestamp 84000|1, vReceivedEpoch: ObjectId('51276475bd1f99446659365c'), vWanted: Timestamp 85000|0, vWantedEpoch: ObjectId('51276475bd1f99446659365c') }
m30999| Fri Feb 22 12:30:11.562 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 535 version: 84|1||51276475bd1f99446659365c based on: 84|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:11.562 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 84|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:11.562 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 84000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 535
m30001| Fri Feb 22 12:30:11.562 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:30:11.569 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 38345.0 } -> { _id: 38807.0 }
m30000| Fri Feb 22 12:30:11.569 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 38345.0 } -> { _id: 38807.0 }
m30000| Fri Feb 22 12:30:11.569 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:11-512764d3c49297cf54df565c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536211569), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 38345.0 }, max: { _id: 38807.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 27, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:11.570 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 38345.0 }, max: { _id: 38807.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:11.570 [conn4] moveChunk updating self version to: 85|1||51276475bd1f99446659365c through { _id: 38807.0 } -> { _id: 39269.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:11.571 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:11-512764d399334798f3e47fee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536211570), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 38345.0 }, max: { _id: 38807.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:11.571 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:30:11.571 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 84000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 84000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 85000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:11.571 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:11.571 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:11.571 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:11.571 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:11.571 [cleanupOldData-512764d399334798f3e47fef] (start) waiting to cleanup test.bar from { _id: 38345.0 } -> { _id: 38807.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:11.571 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:11.571 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:11-512764d399334798f3e47ff0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536211571), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 38345.0 }, max: { _id: 38807.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:11.571 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:11.573 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 536 version: 85|1||51276475bd1f99446659365c based on: 84|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:11.575 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 537 version: 85|1||51276475bd1f99446659365c based on: 84|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:11.576 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:11.576 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:11.577 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 538 version: 85|1||51276475bd1f99446659365c based on: 85|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:11.577 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 85|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:11.578 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 85000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 538
m30999| Fri Feb 22 12:30:11.578 [conn1] setShardVersion success: { oldVersion: Timestamp 84000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:11.591 [cleanupOldData-512764d399334798f3e47fef] waiting to remove documents for test.bar from { _id: 38345.0 } -> { _id: 38807.0 }
m30001| Fri Feb 22 12:30:11.591 [cleanupOldData-512764d399334798f3e47fef] moveChunk starting delete for: test.bar from { _id: 38345.0 } -> { _id: 38807.0 }
86000
m30001| Fri Feb 22 12:30:12.198 [cleanupOldData-512764d399334798f3e47fef] moveChunk deleted 462 documents for test.bar from { _id: 38345.0 } -> { _id: 38807.0 }
m30999| Fri Feb 22 12:30:12.577 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:12.577 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:12.577 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:12 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764d4bd1f9944665936b1" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764d3bd1f9944665936b0" } }
m30999| Fri Feb 22 12:30:12.578 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d4bd1f9944665936b1
m30999| Fri Feb 22 12:30:12.578 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:12.578 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:12.578 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:12.580 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:12.580 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:12.580 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:12.580 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:12.580 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:12.582 [Balancer] shard0001 has more chunks me:133 best: shard0000:84
m30999| Fri Feb 22 12:30:12.582 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:12.582 [Balancer] donor : shard0001 chunks on 133
m30999| Fri Feb 22 12:30:12.582 [Balancer] receiver : shard0000 chunks on 84
m30999| Fri Feb 22 12:30:12.582 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:12.582 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_38807.0", lastmod: Timestamp 85000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 38807.0 }, max: { _id: 39269.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:12.582 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 85|1||000000000000000000000000min: { _id: 38807.0 }max: { _id: 39269.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:12.583 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:12.583 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 38807.0 }, max: { _id: 39269.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_38807.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:12.584 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d499334798f3e47ff1
m30001| Fri Feb 22 12:30:12.584 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:12-512764d499334798f3e47ff2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536212584), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 38807.0 }, max: { _id: 39269.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:12.585 [conn4] moveChunk request accepted at version 85|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:12.586 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:12.586 [migrateThread] starting receiving-end of migration of chunk { _id: 38807.0 } -> { _id: 39269.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:12.597 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38807.0 }, max: { _id: 39269.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 126, clonedBytes: 131418, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:12.607 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38807.0 }, max: { _id: 39269.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 289, clonedBytes: 301427, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:12.617 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:12.617 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 38807.0 } -> { _id: 39269.0 }
m30001| Fri Feb 22 12:30:12.617 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38807.0 }, max: { _id: 39269.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:12.618 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 38807.0 } -> { _id: 39269.0 }
m30001| Fri Feb 22 12:30:12.627 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 38807.0 }, max: { _id: 39269.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:12.627 [conn4] moveChunk setting version to: 86|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:12.627 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:12.629 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 38807.0 } -> { _id: 39269.0 }
m30000| Fri Feb 22 12:30:12.629 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 38807.0 } -> { _id: 39269.0 }
m30000| Fri Feb 22 12:30:12.629 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:12-512764d4c49297cf54df565d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536212629), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 38807.0 }, max: { _id: 39269.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 11 } }
m30999| Fri Feb 22 12:30:12.630 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 539 version: 85|1||51276475bd1f99446659365c based on: 85|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:12.630 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 85|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:12.630 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 85000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 539
m30001| Fri Feb 22 12:30:12.630 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:30:12.638 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 38807.0 }, max: { _id: 39269.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:12.638 [conn4] moveChunk updating self version to: 86|1||51276475bd1f99446659365c through { _id: 39269.0 } -> { _id: 39731.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:12.639 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:12-512764d499334798f3e47ff3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536212639), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 38807.0 }, max: { _id: 39269.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:12.639 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:12.639 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:12.639 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:30:12.639 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 85000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 85000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 86000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:12.639 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:12.639 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:12.639 [cleanupOldData-512764d499334798f3e47ff4] (start) waiting to cleanup test.bar from { _id: 38807.0 } -> { _id: 39269.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:12.639 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:12.639 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:12-512764d499334798f3e47ff5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536212639), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 38807.0 }, max: { _id: 39269.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:12.639 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:12.641 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 540 version: 86|1||51276475bd1f99446659365c based on: 85|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:12.642 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 541 version: 86|1||51276475bd1f99446659365c based on: 85|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:12.643 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:12.643 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:12.644 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 542 version: 86|1||51276475bd1f99446659365c based on: 86|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:12.644 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 86|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:12.645 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 86000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 542
m30999| Fri Feb 22 12:30:12.645 [conn1] setShardVersion success: { oldVersion: Timestamp 85000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:12.659 [cleanupOldData-512764d499334798f3e47ff4] waiting to remove documents for test.bar from { _id: 38807.0 } -> { _id: 39269.0 }
m30001| Fri Feb 22 12:30:12.659 [cleanupOldData-512764d499334798f3e47ff4] moveChunk starting delete for: test.bar from { _id: 38807.0 } -> { _id: 39269.0 }
87000
m30001| Fri Feb 22 12:30:13.033 [cleanupOldData-512764d499334798f3e47ff4] moveChunk deleted 462 documents for test.bar from { _id: 38807.0 } -> { _id: 39269.0 }
m30999| Fri Feb 22 12:30:13.644 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:13.644 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:13.644 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:13 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764d5bd1f9944665936b2" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764d4bd1f9944665936b1" } }
m30999| Fri Feb 22 12:30:13.645 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d5bd1f9944665936b2
m30999| Fri Feb 22 12:30:13.645 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:13.645 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:13.645 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:13.647 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:13.647 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:13.647 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:13.647 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:13.647 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:13.649 [Balancer] shard0001 has more chunks me:132 best: shard0000:85
m30999| Fri Feb 22 12:30:13.649 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:13.649 [Balancer] donor : shard0001 chunks on 132
m30999| Fri Feb 22 12:30:13.649 [Balancer] receiver : shard0000 chunks on 85
m30999| Fri Feb 22 12:30:13.649 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:13.649 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_39269.0", lastmod: Timestamp 86000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 39269.0 }, max: { _id: 39731.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:13.649 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 86|1||000000000000000000000000min: { _id: 39269.0 }max: { _id: 39731.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:13.649 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:13.649 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 39269.0 }, max: { _id: 39731.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_39269.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:13.650 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d599334798f3e47ff6
m30001| Fri Feb 22 12:30:13.650 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:13-512764d599334798f3e47ff7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536213650), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 39269.0 }, max: { _id: 39731.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:13.651 [conn4] moveChunk request accepted at version 86|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:13.652 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:13.652 [migrateThread] starting receiving-end of migration of chunk { _id: 39269.0 } -> { _id: 39731.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:13.663 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 39269.0 }, max: { _id: 39731.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 117, clonedBytes: 122031, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:13.673 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 39269.0 }, max: { _id: 39731.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 286, clonedBytes: 298298, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:13.683 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 39269.0 }, max: { _id: 39731.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 454, clonedBytes: 473522, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:13.684 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:13.684 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 39269.0 } -> { _id: 39731.0 }
m30000| Fri Feb 22 12:30:13.686 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 39269.0 } -> { _id: 39731.0 }
m30001| Fri Feb 22 12:30:13.693 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 39269.0 }, max: { _id: 39731.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:13.693 [conn4] moveChunk setting version to: 87|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:13.694 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:30:13.697 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 543 version: 86|1||51276475bd1f99446659365c based on: 86|1||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:13.697 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 39269.0 } -> { _id: 39731.0 }
m30000| Fri Feb 22 12:30:13.697 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 39269.0 } -> { _id: 39731.0 }
m30000| Fri Feb 22 12:30:13.697 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:13-512764d5c49297cf54df565e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536213697), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 39269.0 }, max: { _id: 39731.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 30, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:30:13.697 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 86|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:13.697 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 86000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 543
m30001| Fri Feb 22 12:30:13.697 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:30:13.704 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 39269.0 }, max: { _id: 39731.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:13.704 [conn4] moveChunk updating self version to: 87|1||51276475bd1f99446659365c through { _id: 39731.0 } -> { _id: 40193.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:13.704 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:13-512764d599334798f3e47ff8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536213704), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 39269.0 }, max: { _id: 39731.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:13.704 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:13.704 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:13.704 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:13.704 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:30:13.704 [conn1] setShardVersion failed! m30001| Fri Feb 22 12:30:13.704 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 86000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 86000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 87000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:13.705 [cleanupOldData-512764d599334798f3e47ff9] (start) waiting to cleanup test.bar from { _id: 39269.0 } -> { _id: 39731.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:13.705 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:13.705 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:13-512764d599334798f3e47ffa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536213705), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 39269.0 }, max: { _id: 39731.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:13.705 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:13.706 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 544 version: 87|1||51276475bd1f99446659365c based on: 86|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:13.708 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 545 version: 87|1||51276475bd1f99446659365c based on: 86|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:13.709 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:13.709 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
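Each balancing round above prints donor/receiver chunk counts and `threshold : 2`, then moves a chunk only when the counts differ by at least that threshold (note `test.foo` at 54 vs 53 is left alone, while `test.bar` at 131 vs 86 triggers a move). A minimal sketch of that decision rule, with hypothetical names and a simplified two-shard model rather than MongoDB's actual balancer source:

```python
# Simplified model of the balancer decision visible in the log:
# pick the shard with the most chunks as donor, the one with the
# fewest as receiver, and only migrate when the gap >= threshold.
def pick_migration(chunk_counts, threshold=2):
    """Return (donor, receiver) or None if within the threshold."""
    donor = max(chunk_counts, key=chunk_counts.get)
    receiver = min(chunk_counts, key=chunk_counts.get)
    if chunk_counts[donor] - chunk_counts[receiver] < threshold:
        return None  # balanced enough; no migration this round
    return donor, receiver

# Mirrors the round logged above for test.bar (131 vs 86 chunks):
print(pick_migration({"shard0001": 131, "shard0000": 86}))
# -> ('shard0001', 'shard0000')
```

For `test.foo` (54 vs 53) the same function returns `None`, matching the rounds above where only `test.bar` chunks are moved.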
m30999| Fri Feb 22 12:30:13.710 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 546 version: 87|1||51276475bd1f99446659365c based on: 87|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:13.711 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 87|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:13.711 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 87000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 546 m30999| Fri Feb 22 12:30:13.711 [conn1] setShardVersion success: { oldVersion: Timestamp 86000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:13.725 [cleanupOldData-512764d599334798f3e47ff9] waiting to remove documents for test.bar from { _id: 39269.0 } -> { _id: 39731.0 } m30001| Fri Feb 22 12:30:13.725 [cleanupOldData-512764d599334798f3e47ff9] moveChunk starting delete for: test.bar from { _id: 39269.0 } -> { _id: 39731.0 } 88000 m30001| Fri Feb 22 12:30:14.139 [cleanupOldData-512764d599334798f3e47ff9] moveChunk deleted 462 documents for test.bar from { _id: 39269.0 } -> { _id: 39731.0 } m30999| Fri Feb 22 12:30:14.710 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:14.710 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:14.711 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:30:14 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764d6bd1f9944665936b3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764d5bd1f9944665936b2" } } m30999| Fri Feb 22 12:30:14.712 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d6bd1f9944665936b3 m30999| Fri Feb 22 12:30:14.712 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:14.712 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:14.712 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:14.714 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:14.714 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:14.714 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:14.714 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:14.714 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:14.716 [Balancer] shard0001 has more chunks me:131 best: shard0000:86 m30999| Fri Feb 22 12:30:14.716 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:14.716 [Balancer] donor : shard0001 chunks on 131 m30999| Fri Feb 22 12:30:14.716 [Balancer] receiver : shard0000 chunks on 86 m30999| Fri Feb 22 12:30:14.716 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:14.716 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_39731.0", lastmod: Timestamp 87000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 39731.0 }, max: { _id: 40193.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:14.716 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 87|1||000000000000000000000000min: { _id: 39731.0 }max: { _id: 40193.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:14.716 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:30:14.717 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 39731.0 }, max: { _id: 40193.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_39731.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:14.717 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d699334798f3e47ffb m30001| Fri Feb 22 12:30:14.718 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:14-512764d699334798f3e47ffc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536214718), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 39731.0 }, max: { _id: 40193.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:14.719 [conn4] moveChunk request accepted at version 87|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:14.720 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:14.721 [migrateThread] starting receiving-end of migration of chunk { _id: 39731.0 } -> { _id: 40193.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:14.731 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 39731.0 }, max: { _id: 40193.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 218, clonedBytes: 227374, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:14.740 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:14.741 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 39731.0 } -> { _id: 40193.0 } m30001| Fri Feb 22 12:30:14.741 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", 
min: { _id: 39731.0 }, max: { _id: 40193.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:14.742 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 39731.0 } -> { _id: 40193.0 } m30001| Fri Feb 22 12:30:14.751 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 39731.0 }, max: { _id: 40193.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:14.751 [conn4] moveChunk setting version to: 88|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:14.751 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:14.752 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 39731.0 } -> { _id: 40193.0 } m30000| Fri Feb 22 12:30:14.752 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 39731.0 } -> { _id: 40193.0 } m30000| Fri Feb 22 12:30:14.752 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:14-512764d6c49297cf54df565f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536214752), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 39731.0 }, max: { _id: 40193.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 19, step4 of 5: 0, step5 of 5: 11 } } m30999| Fri Feb 22 12:30:14.754 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 547 version: 87|1||51276475bd1f99446659365c based on: 87|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:14.754 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 87|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:14.754 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", 
configdb: "localhost:30000", version: Timestamp 87000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 547 m30001| Fri Feb 22 12:30:14.754 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:14.762 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 39731.0 }, max: { _id: 40193.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:14.762 [conn4] moveChunk updating self version to: 88|1||51276475bd1f99446659365c through { _id: 40193.0 } -> { _id: 40655.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:14.762 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:14-512764d699334798f3e47ffd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536214762), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 39731.0 }, max: { _id: 40193.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:14.763 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:30:14.763 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 87000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 87000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 88000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:14.763 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:14.763 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:14.763 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:14.763 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:14.763 [cleanupOldData-512764d699334798f3e47ffe] (start) waiting to cleanup test.bar from { _id: 39731.0 } -> { _id: 40193.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:14.763 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
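The `setShardVersion failed!` replies above all carry `reloadConfig: true` because the shard's global version advanced past the mongos's cached version during the migration; the mongos then forces a chunk manager reload and the retry succeeds. A hedged sketch of that retry pattern (function names and the integer version model are hypothetical, not MongoDB's implementation):

```python
# Model of the stale-version handshake seen repeatedly in the log:
# the shard rejects a version behind its global one and asks the
# router to reload; the router refreshes and retries.
def set_shard_version(shard_global, requested):
    """Shard side: reject requests behind the shard's global version."""
    if requested < shard_global:
        return {"ok": 0.0, "reloadConfig": True,
                "errmsg": "shard global version for collection is higher"}
    return {"ok": 1.0}

def send_with_reload(cached_version, shard_global):
    """Router side: on reloadConfig, refresh the cache and retry once."""
    reply = set_shard_version(shard_global, cached_version)
    if not reply["ok"] and reply.get("reloadConfig"):
        cached_version = shard_global  # forced chunk manager reload
        reply = set_shard_version(shard_global, cached_version)
    return cached_version, reply

# Mirrors the log: mongos caches 87|1 (modeled as 87) while the shard
# is already at 88|0 (modeled as 88); one reload brings it up to date.
version, reply = send_with_reload(87, 88)
```

After the modeled reload, `version` is 88 and the retry returns `ok: 1.0`, matching the `setShardVersion success` entries that follow each failure above.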
m30001| Fri Feb 22 12:30:14.763 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:14-512764d699334798f3e47fff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536214763), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 39731.0 }, max: { _id: 40193.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:14.763 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:14.765 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 548 version: 88|1||51276475bd1f99446659365c based on: 87|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:14.767 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 549 version: 88|1||51276475bd1f99446659365c based on: 87|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:14.768 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:14.768 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
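The `ChunkManager: time to load chunks` entries are the easiest way to follow the version progression (86|1 → 87|1 → 88|1 → ...) through a log like this one. A small parsing helper; the regex is an assumption about the line shape based only on the entries shown here:

```python
# Extract namespace, sequence number, and chunk version from a
# "ChunkManager: time to load chunks" log entry like those above.
import re

LINE = ("m30999| Fri Feb 22 12:30:14.765 [conn1] ChunkManager: time to load "
        "chunks for test.bar: 1ms sequenceNumber: 548 version: "
        "88|1||51276475bd1f99446659365c based on: 87|1||51276475bd1f99446659365c")

PATTERN = re.compile(
    r"ChunkManager: time to load chunks for (?P<ns>\S+): \d+ms "
    r"sequenceNumber: (?P<seq>\d+) version: (?P<major>\d+)\|(?P<minor>\d+)"
)

m = PATTERN.search(LINE)
print(m.group("ns"), m.group("seq"), m.group("major"), m.group("minor"))
# -> test.bar 548 88 1
```

Running this over every line of the log would recover the full sequenceNumber/version timeline for `test.bar` during the balancing rounds.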
m30999| Fri Feb 22 12:30:14.769 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 550 version: 88|1||51276475bd1f99446659365c based on: 88|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:14.769 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 88|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:14.770 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 88000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 550 m30999| Fri Feb 22 12:30:14.770 [conn1] setShardVersion success: { oldVersion: Timestamp 87000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:14.783 [cleanupOldData-512764d699334798f3e47ffe] waiting to remove documents for test.bar from { _id: 39731.0 } -> { _id: 40193.0 } m30001| Fri Feb 22 12:30:14.783 [cleanupOldData-512764d699334798f3e47ffe] moveChunk starting delete for: test.bar from { _id: 39731.0 } -> { _id: 40193.0 } 89000 m30001| Fri Feb 22 12:30:15.572 [cleanupOldData-512764d699334798f3e47ffe] moveChunk deleted 462 documents for test.bar from { _id: 39731.0 } -> { _id: 40193.0 } m30999| Fri Feb 22 12:30:15.769 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:15.769 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:15.770 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:30:15 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764d7bd1f9944665936b4" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764d6bd1f9944665936b3" } } m30999| Fri Feb 22 12:30:15.771 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d7bd1f9944665936b4 m30999| Fri Feb 22 12:30:15.771 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:15.771 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:15.771 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:15.773 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:15.773 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:15.773 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:15.773 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:15.773 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:15.776 [Balancer] shard0001 has more chunks me:130 best: shard0000:87 m30999| Fri Feb 22 12:30:15.776 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:15.776 [Balancer] donor : shard0001 chunks on 130 m30999| Fri Feb 22 12:30:15.776 [Balancer] receiver : shard0000 chunks on 87 m30999| Fri Feb 22 12:30:15.776 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:15.776 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_40193.0", lastmod: Timestamp 88000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 40193.0 }, max: { _id: 40655.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:15.776 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 88|1||000000000000000000000000min: { _id: 40193.0 }max: { _id: 40655.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:15.776 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:30:15.776 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 40193.0 }, max: { _id: 40655.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_40193.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:15.777 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d799334798f3e48000 m30001| Fri Feb 22 12:30:15.777 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:15-512764d799334798f3e48001", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536215777), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 40193.0 }, max: { _id: 40655.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:15.779 [conn4] moveChunk request accepted at version 88|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:15.781 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:15.781 [migrateThread] starting receiving-end of migration of chunk { _id: 40193.0 } -> { _id: 40655.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:15.791 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40193.0 }, max: { _id: 40655.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 116, clonedBytes: 120988, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:15.801 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40193.0 }, max: { _id: 40655.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 279, clonedBytes: 290997, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:15.812 [conn4] moveChunk data transfer progress: { 
active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40193.0 }, max: { _id: 40655.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 447, clonedBytes: 466221, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:15.813 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:15.813 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 40193.0 } -> { _id: 40655.0 } m30000| Fri Feb 22 12:30:15.816 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 40193.0 } -> { _id: 40655.0 } m30001| Fri Feb 22 12:30:15.822 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40193.0 }, max: { _id: 40655.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:15.822 [conn4] moveChunk setting version to: 89|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:15.822 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:30:15.826 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 551 version: 88|1||51276475bd1f99446659365c based on: 88|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:15.826 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 40193.0 } -> { _id: 40655.0 } m30000| Fri Feb 22 12:30:15.826 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 40193.0 } -> { _id: 40655.0 } m30999| Fri Feb 22 12:30:15.826 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 88|1||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:15.826 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:15-512764d7c49297cf54df5660", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361536215826), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 40193.0 }, max: { _id: 40655.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:30:15.826 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 88000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 551 m30001| Fri Feb 22 12:30:15.826 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:15.832 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 40193.0 }, max: { _id: 40655.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:15.833 [conn4] moveChunk updating self version to: 89|1||51276475bd1f99446659365c through { _id: 40655.0 } -> { _id: 41117.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:15.833 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:15-512764d799334798f3e48002", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536215833), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 40193.0 }, max: { _id: 40655.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:15.833 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:15.833 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:30:15.833 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:30:15.833 [conn4] forking for cleanup of chunk data m30999| { oldVersion: Timestamp 88000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 88000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 89000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:15.833 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:15.834 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:15.834 [cleanupOldData-512764d799334798f3e48003] (start) waiting to cleanup test.bar from { _id: 40193.0 } -> { _id: 40655.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:15.834 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:15.834 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:15-512764d799334798f3e48004", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536215834), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 40193.0 }, max: { _id: 40655.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 2, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:15.834 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:15.836 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 552 version: 89|1||51276475bd1f99446659365c based on: 88|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:15.838 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 553 version: 89|1||51276475bd1f99446659365c based on: 88|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:15.839 [Balancer] *** end of balancing round m30999| Fri Feb 22 
12:30:15.839 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:15.840 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 554 version: 89|1||51276475bd1f99446659365c based on: 89|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:15.840 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 89|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:15.841 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 89000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 554 m30999| Fri Feb 22 12:30:15.841 [conn1] setShardVersion success: { oldVersion: Timestamp 88000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:15.854 [cleanupOldData-512764d799334798f3e48003] waiting to remove documents for test.bar from { _id: 40193.0 } -> { _id: 40655.0 } m30001| Fri Feb 22 12:30:15.854 [cleanupOldData-512764d799334798f3e48003] moveChunk starting delete for: test.bar from { _id: 40193.0 } -> { _id: 40655.0 } 90000 m30001| Fri Feb 22 12:30:16.383 [cleanupOldData-512764d799334798f3e48003] moveChunk deleted 462 documents for test.bar from { _id: 40193.0 } -> { _id: 40655.0 } m30999| Fri Feb 22 12:30:16.840 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:16.840 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:16.840 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| 
"process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:16 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764d8bd1f9944665936b5" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764d7bd1f9944665936b4" } } m30999| Fri Feb 22 12:30:16.841 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d8bd1f9944665936b5 m30999| Fri Feb 22 12:30:16.841 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:16.841 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:16.841 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:16.843 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:16.843 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:16.843 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:16.843 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:16.843 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:16.846 [Balancer] shard0001 has more chunks me:129 best: shard0000:88 m30999| Fri Feb 22 12:30:16.846 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:16.846 [Balancer] donor : shard0001 chunks on 129 m30999| Fri Feb 22 12:30:16.846 [Balancer] receiver : shard0000 chunks on 88 m30999| Fri Feb 22 12:30:16.846 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:16.846 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_40655.0", lastmod: Timestamp 89000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 40655.0 }, max: { _id: 41117.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:16.846 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 89|1||000000000000000000000000min: { _id: 40655.0 }max: { _id: 41117.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 
22 12:30:16.846 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:16.846 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 40655.0 }, max: { _id: 41117.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_40655.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:16.848 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d899334798f3e48005 m30001| Fri Feb 22 12:30:16.848 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:16-512764d899334798f3e48006", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536216848), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 40655.0 }, max: { _id: 41117.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:16.849 [conn4] moveChunk request accepted at version 89|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:16.850 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:16.851 [migrateThread] starting receiving-end of migration of chunk { _id: 40655.0 } -> { _id: 41117.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:16.861 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40655.0 }, max: { _id: 41117.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 177, clonedBytes: 184611, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:16.871 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40655.0 }, max: { _id: 41117.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 421, clonedBytes: 439103, catchup: 0, steady: 0 }, ok: 1.0 
} my mem used: 0 m30000| Fri Feb 22 12:30:16.873 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:16.873 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 40655.0 } -> { _id: 41117.0 } m30000| Fri Feb 22 12:30:16.875 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 40655.0 } -> { _id: 41117.0 } m30001| Fri Feb 22 12:30:16.882 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 40655.0 }, max: { _id: 41117.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:16.882 [conn4] moveChunk setting version to: 90|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:16.882 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:30:16.884 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 555 version: 89|1||51276475bd1f99446659365c based on: 89|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:16.884 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 89|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:16.884 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 89000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 555 m30001| Fri Feb 22 12:30:16.884 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:30:16.885 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 40655.0 } -> { _id: 41117.0 } m30000| Fri Feb 22 12:30:16.885 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 40655.0 } -> { _id: 41117.0 } m30000| Fri Feb 22 12:30:16.886 
[migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:16-512764d8c49297cf54df5661", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536216885), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 40655.0 }, max: { _id: 41117.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:30:16.892 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 40655.0 }, max: { _id: 41117.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:16.892 [conn4] moveChunk updating self version to: 90|1||51276475bd1f99446659365c through { _id: 41117.0 } -> { _id: 41579.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:16.893 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:16-512764d899334798f3e48007", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536216893), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 40655.0 }, max: { _id: 41117.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:16.893 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:16.893 [conn4] MigrateFromStatus::done Global lock acquired m30999| Fri Feb 22 12:30:16.893 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 89000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 89000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 90000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:16.893 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:16.893 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:16.893 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:16.893 [cleanupOldData-512764d899334798f3e48008] (start) waiting to cleanup test.bar from { _id: 40655.0 } -> { _id: 41117.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:16.894 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:16.894 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:16-512764d899334798f3e48009", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536216894), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 40655.0 }, max: { _id: 41117.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 31, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:16.894 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:16.895 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 556 version: 90|1||51276475bd1f99446659365c based on: 89|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:16.897 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 557 version: 90|1||51276475bd1f99446659365c based on: 89|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:16.898 [Balancer] *** end of balancing round m30999| Fri Feb 22 
12:30:16.899 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:16.900 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 558 version: 90|1||51276475bd1f99446659365c based on: 90|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:16.900 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 90|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:16.900 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 90000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 558 m30999| Fri Feb 22 12:30:16.900 [conn1] setShardVersion success: { oldVersion: Timestamp 89000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:16.913 [cleanupOldData-512764d899334798f3e48008] waiting to remove documents for test.bar from { _id: 40655.0 } -> { _id: 41117.0 } m30001| Fri Feb 22 12:30:16.913 [cleanupOldData-512764d899334798f3e48008] moveChunk starting delete for: test.bar from { _id: 40655.0 } -> { _id: 41117.0 } m30001| Fri Feb 22 12:30:17.268 [cleanupOldData-512764d899334798f3e48008] moveChunk deleted 462 documents for test.bar from { _id: 40655.0 } -> { _id: 41117.0 } 91000 m30999| Fri Feb 22 12:30:17.899 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:17.900 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:17.900 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| 
"process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:17 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764d9bd1f9944665936b6" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764d8bd1f9944665936b5" } } m30999| Fri Feb 22 12:30:17.901 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764d9bd1f9944665936b6 m30999| Fri Feb 22 12:30:17.901 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:17.901 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:17.901 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:17.903 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:17.904 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:17.904 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:17.904 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:17.904 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:17.905 [Balancer] shard0001 has more chunks me:128 best: shard0000:89 m30999| Fri Feb 22 12:30:17.905 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:17.905 [Balancer] donor : shard0001 chunks on 128 m30999| Fri Feb 22 12:30:17.906 [Balancer] receiver : shard0000 chunks on 89 m30999| Fri Feb 22 12:30:17.906 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:17.906 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_41117.0", lastmod: Timestamp 90000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 41117.0 }, max: { _id: 41579.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:17.906 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 90|1||000000000000000000000000min: { _id: 41117.0 }max: { _id: 41579.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 
22 12:30:17.906 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:17.906 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 41117.0 }, max: { _id: 41579.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_41117.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:17.907 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764d999334798f3e4800a m30001| Fri Feb 22 12:30:17.907 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:17-512764d999334798f3e4800b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536217907), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 41117.0 }, max: { _id: 41579.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:17.908 [conn4] moveChunk request accepted at version 90|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:17.910 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:17.910 [migrateThread] starting receiving-end of migration of chunk { _id: 41117.0 } -> { _id: 41579.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:17.920 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41117.0 }, max: { _id: 41579.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 122, clonedBytes: 127246, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:17.931 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41117.0 }, max: { _id: 41579.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 297, clonedBytes: 309771, catchup: 0, steady: 0 }, ok: 1.0 
} my mem used: 0 m30000| Fri Feb 22 12:30:17.940 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:17.940 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 41117.0 } -> { _id: 41579.0 } m30001| Fri Feb 22 12:30:17.941 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41117.0 }, max: { _id: 41579.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:17.942 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 41117.0 } -> { _id: 41579.0 } m30001| Fri Feb 22 12:30:17.951 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41117.0 }, max: { _id: 41579.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:17.952 [conn4] moveChunk setting version to: 91|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:17.952 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:17.953 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 41117.0 } -> { _id: 41579.0 } m30000| Fri Feb 22 12:30:17.953 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 41117.0 } -> { _id: 41579.0 } m30000| Fri Feb 22 12:30:17.953 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:17-512764d9c49297cf54df5662", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536217953), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 41117.0 }, max: { _id: 41579.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:30:17.954 [conn1] ChunkManager: time 
to load chunks for test.bar: 1ms sequenceNumber: 559 version: 90|1||51276475bd1f99446659365c based on: 90|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:17.954 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 90|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:17.954 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 90000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 559 m30001| Fri Feb 22 12:30:17.955 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:17.962 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 41117.0 }, max: { _id: 41579.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:17.962 [conn4] moveChunk updating self version to: 91|1||51276475bd1f99446659365c through { _id: 41579.0 } -> { _id: 42041.0 } for collection 'test.bar' m30999| Fri Feb 22 12:30:17.963 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 90000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 90000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 91000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:17.963 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:17-512764d999334798f3e4800c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536217963), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 41117.0 }, max: { _id: 41579.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:17.963 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:17.963 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:17.963 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:17.963 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:17.963 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:17.963 [cleanupOldData-512764d999334798f3e4800d] (start) waiting to cleanup test.bar from { _id: 41117.0 } -> { _id: 41579.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:17.963 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:17.963 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:17-512764d999334798f3e4800e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536217963), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 41117.0 }, max: { _id: 41579.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:17.963 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:17.964 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 560 version: 91|1||51276475bd1f99446659365c based on: 90|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:17.966 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 561 version: 91|1||51276475bd1f99446659365c based on: 90|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:17.968 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:17.968 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:17.969 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 562 version: 91|1||51276475bd1f99446659365c based on: 91|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:17.969 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 91|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:17.969 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 91000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 562 m30999| Fri Feb 22 12:30:17.969 [conn1] setShardVersion success: { oldVersion: Timestamp 90000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:17.983 [cleanupOldData-512764d999334798f3e4800d] waiting to remove documents for test.bar from { _id: 41117.0 } -> { _id: 41579.0 } m30001| Fri Feb 22 12:30:17.983 [cleanupOldData-512764d999334798f3e4800d] moveChunk starting delete for: test.bar from { _id: 41117.0 } -> { _id: 41579.0 } m30001| Fri Feb 22 12:30:18.295 [cleanupOldData-512764d999334798f3e4800d] moveChunk deleted 462 documents for test.bar from { _id: 41117.0 } -> { _id: 41579.0 } 92000 m30999| Fri Feb 22 12:30:18.969 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:18.969 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:18.969 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 
12:30:18 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764dabd1f9944665936b7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764d9bd1f9944665936b6" } } m30999| Fri Feb 22 12:30:18.970 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764dabd1f9944665936b7 m30999| Fri Feb 22 12:30:18.970 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:18.970 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:18.970 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:18.972 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:18.972 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:18.972 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:18.972 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:18.972 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:18.974 [Balancer] shard0001 has more chunks me:127 best: shard0000:90 m30999| Fri Feb 22 12:30:18.974 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:18.974 [Balancer] donor : shard0001 chunks on 127 m30999| Fri Feb 22 12:30:18.974 [Balancer] receiver : shard0000 chunks on 90 m30999| Fri Feb 22 12:30:18.974 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:18.974 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_41579.0", lastmod: Timestamp 91000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 41579.0 }, max: { _id: 42041.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:18.974 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 91|1||000000000000000000000000min: { _id: 41579.0 }max: { _id: 42041.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:18.975 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 
12:30:18.975 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 41579.0 }, max: { _id: 42041.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_41579.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:18.976 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764da99334798f3e4800f m30001| Fri Feb 22 12:30:18.976 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:18-512764da99334798f3e48010", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536218976), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 41579.0 }, max: { _id: 42041.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:18.977 [conn4] moveChunk request accepted at version 91|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:18.978 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:18.978 [migrateThread] starting receiving-end of migration of chunk { _id: 41579.0 } -> { _id: 42041.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:18.989 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41579.0 }, max: { _id: 42041.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 133504, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:18.999 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41579.0 }, max: { _id: 42041.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 305, clonedBytes: 318115, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:19.008 [migrateThread] Waiting for replication to catch 
up before entering critical section m30000| Fri Feb 22 12:30:19.008 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 41579.0 } -> { _id: 42041.0 } m30001| Fri Feb 22 12:30:19.009 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41579.0 }, max: { _id: 42041.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:19.011 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 41579.0 } -> { _id: 42041.0 } m30001| Fri Feb 22 12:30:19.019 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 41579.0 }, max: { _id: 42041.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:19.020 [conn4] moveChunk setting version to: 92|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:19.020 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:19.021 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 41579.0 } -> { _id: 42041.0 } m30000| Fri Feb 22 12:30:19.021 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 41579.0 } -> { _id: 42041.0 } m30000| Fri Feb 22 12:30:19.021 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:19-512764dbc49297cf54df5663", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536219021), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 41579.0 }, max: { _id: 42041.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 13 } } m30999| Fri Feb 22 12:30:19.023 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 563 version: 91|1||51276475bd1f99446659365c based 
on: 91|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:19.023 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 91|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:19.023 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 91000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 563 m30001| Fri Feb 22 12:30:19.023 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:19.030 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 41579.0 }, max: { _id: 42041.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:19.030 [conn4] moveChunk updating self version to: 92|1||51276475bd1f99446659365c through { _id: 42041.0 } -> { _id: 42503.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:19.031 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:19-512764db99334798f3e48011", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536219031), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 41579.0 }, max: { _id: 42041.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:19.031 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:19.031 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:19.031 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:30:19.031 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:30:19.031 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 91000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 91000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 92000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:19.031 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:19.031 [cleanupOldData-512764db99334798f3e48012] (start) waiting to cleanup test.bar from { _id: 41579.0 } -> { _id: 42041.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:19.032 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:19.032 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:19-512764db99334798f3e48013", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536219032), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 41579.0 }, max: { _id: 42041.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 41, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:19.032 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:19.033 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 564 version: 92|1||51276475bd1f99446659365c based on: 91|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:19.035 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 565 version: 92|1||51276475bd1f99446659365c based on: 91|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:19.037 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:19.037 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:19.038 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 566 version: 92|1||51276475bd1f99446659365c based on: 92|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:19.038 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 92|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:19.039 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 92000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 566 m30999| Fri Feb 22 12:30:19.039 [conn1] setShardVersion success: { oldVersion: Timestamp 91000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:19.051 [cleanupOldData-512764db99334798f3e48012] waiting to remove documents for test.bar from { _id: 41579.0 } -> { _id: 42041.0 } m30001| Fri Feb 22 12:30:19.051 [cleanupOldData-512764db99334798f3e48012] moveChunk starting delete for: test.bar from { _id: 41579.0 } -> { _id: 42041.0 } 93000 m30001| Fri Feb 22 12:30:19.637 [cleanupOldData-512764db99334798f3e48012] moveChunk deleted 462 documents for test.bar from { _id: 41579.0 } -> { _id: 42041.0 } m30999| Fri Feb 22 12:30:20.038 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:20.038 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:20.038 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:20 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764dcbd1f9944665936b8" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764dabd1f9944665936b7" } } m30999| Fri Feb 22 12:30:20.039 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764dcbd1f9944665936b8 m30999| Fri Feb 22 12:30:20.039 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:20.039 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:20.039 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:20.041 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:20.041 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:20.041 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:20.041 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:20.041 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:20.043 [Balancer] shard0001 has more chunks me:126 best: shard0000:91 m30999| Fri Feb 22 12:30:20.043 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:20.043 [Balancer] donor : shard0001 chunks on 126 m30999| Fri Feb 22 12:30:20.043 [Balancer] receiver : shard0000 chunks on 91 m30999| Fri Feb 22 12:30:20.043 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:20.043 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_42041.0", lastmod: Timestamp 92000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 42041.0 }, max: { _id: 42503.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:20.043 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 92|1||000000000000000000000000min: { _id: 42041.0 }max: { _id: 42503.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 
12:30:20.043 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:20.043 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 42041.0 }, max: { _id: 42503.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_42041.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:20.044 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764dc99334798f3e48014 m30001| Fri Feb 22 12:30:20.044 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:20-512764dc99334798f3e48015", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536220044), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 42041.0 }, max: { _id: 42503.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:20.045 [conn4] moveChunk request accepted at version 92|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:20.046 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:20.046 [migrateThread] starting receiving-end of migration of chunk { _id: 42041.0 } -> { _id: 42503.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:20.056 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42041.0 }, max: { _id: 42503.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 191, clonedBytes: 199213, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:20.067 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42041.0 }, max: { _id: 42503.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 438, clonedBytes: 456834, catchup: 0, steady: 0 }, ok: 1.0 } 
my mem used: 0 m30000| Fri Feb 22 12:30:20.070 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:20.070 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 42041.0 } -> { _id: 42503.0 } m30000| Fri Feb 22 12:30:20.071 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 42041.0 } -> { _id: 42503.0 } m30001| Fri Feb 22 12:30:20.077 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42041.0 }, max: { _id: 42503.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:20.077 [conn4] moveChunk setting version to: 93|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:20.077 [conn11] Waiting for commit to finish m30999| Fri Feb 22 12:30:20.080 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 567 version: 92|1||51276475bd1f99446659365c based on: 92|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:20.081 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 92|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:20.081 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 92000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 567 m30001| Fri Feb 22 12:30:20.081 [conn3] waiting till out of critical section m30000| Fri Feb 22 12:30:20.081 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 42041.0 } -> { _id: 42503.0 } m30000| Fri Feb 22 12:30:20.081 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 42041.0 } -> { _id: 42503.0 } m30000| Fri Feb 22 12:30:20.081 
[migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:20-512764dcc49297cf54df5664", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536220081), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 42041.0 }, max: { _id: 42503.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 23, step4 of 5: 0, step5 of 5: 10 } } m30001| Fri Feb 22 12:30:20.088 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 42041.0 }, max: { _id: 42503.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:20.088 [conn4] moveChunk updating self version to: 93|1||51276475bd1f99446659365c through { _id: 42503.0 } -> { _id: 42965.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:20.088 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:20-512764dc99334798f3e48016", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536220088), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 42041.0 }, max: { _id: 42503.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:20.088 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:20.088 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:20.088 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:30:20.088 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:30:20.088 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| { oldVersion: Timestamp 92000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 92000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 93000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:20.088 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:20.089 [cleanupOldData-512764dc99334798f3e48017] (start) waiting to cleanup test.bar from { _id: 42041.0 } -> { _id: 42503.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:20.089 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:20.089 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:20-512764dc99334798f3e48018", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536220089), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 42041.0 }, max: { _id: 42503.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:20.089 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:20.090 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 568 version: 93|1||51276475bd1f99446659365c based on: 92|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:20.092 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 569 version: 93|1||51276475bd1f99446659365c based on: 92|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:20.093 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:20.093 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:20.094 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 570 version: 93|1||51276475bd1f99446659365c based on: 93|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:20.094 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 93|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:20.095 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 93000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 570 m30999| Fri Feb 22 12:30:20.095 [conn1] setShardVersion success: { oldVersion: Timestamp 92000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:20.109 [cleanupOldData-512764dc99334798f3e48017] waiting to remove documents for test.bar from { _id: 42041.0 } -> { _id: 42503.0 } m30001| Fri Feb 22 12:30:20.109 [cleanupOldData-512764dc99334798f3e48017] moveChunk starting delete for: test.bar from { _id: 42041.0 } -> { _id: 42503.0 } 94000 m30001| Fri Feb 22 12:30:20.565 [cleanupOldData-512764dc99334798f3e48017] moveChunk deleted 462 documents for test.bar from { _id: 42041.0 } -> { _id: 42503.0 } m30999| Fri Feb 22 12:30:21.094 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:21.094 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:21.094 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:21 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764ddbd1f9944665936b9" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764dcbd1f9944665936b8" } } m30999| Fri Feb 22 12:30:21.095 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764ddbd1f9944665936b9 m30999| Fri Feb 22 12:30:21.095 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:21.095 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:21.095 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:21.097 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:21.097 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:21.097 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:21.097 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:21.097 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:21.099 [Balancer] shard0001 has more chunks me:125 best: shard0000:92 m30999| Fri Feb 22 12:30:21.099 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:21.099 [Balancer] donor : shard0001 chunks on 125 m30999| Fri Feb 22 12:30:21.099 [Balancer] receiver : shard0000 chunks on 92 m30999| Fri Feb 22 12:30:21.099 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:21.099 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_42503.0", lastmod: Timestamp 93000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 42503.0 }, max: { _id: 42965.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:21.099 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 93|1||000000000000000000000000min: { _id: 42503.0 }max: { _id: 42965.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 
12:30:21.099 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:21.099 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 42503.0 }, max: { _id: 42965.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_42503.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:21.100 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764dd99334798f3e48019 m30001| Fri Feb 22 12:30:21.100 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:21-512764dd99334798f3e4801a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536221100), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 42503.0 }, max: { _id: 42965.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:21.102 [conn4] moveChunk request accepted at version 93|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:21.103 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:21.103 [migrateThread] starting receiving-end of migration of chunk { _id: 42503.0 } -> { _id: 42965.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:21.113 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42503.0 }, max: { _id: 42965.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 125, clonedBytes: 130375, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:21.123 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42503.0 }, max: { _id: 42965.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 304, clonedBytes: 317072, catchup: 0, steady: 0 }, ok: 1.0 } 
my mem used: 0 m30000| Fri Feb 22 12:30:21.132 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:21.132 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 42503.0 } -> { _id: 42965.0 } m30001| Fri Feb 22 12:30:21.133 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42503.0 }, max: { _id: 42965.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:21.135 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 42503.0 } -> { _id: 42965.0 } m30001| Fri Feb 22 12:30:21.144 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42503.0 }, max: { _id: 42965.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:21.144 [conn4] moveChunk setting version to: 94|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:21.144 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:21.145 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 42503.0 } -> { _id: 42965.0 } m30000| Fri Feb 22 12:30:21.145 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 42503.0 } -> { _id: 42965.0 } m30000| Fri Feb 22 12:30:21.145 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:21-512764ddc49297cf54df5665", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536221145), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 42503.0 }, max: { _id: 42965.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 28, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:30:21.146 [conn1] ChunkManager: time to 
load chunks for test.bar: 1ms sequenceNumber: 571 version: 93|1||51276475bd1f99446659365c based on: 93|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:21.146 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 93|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:21.146 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 93000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 571 m30001| Fri Feb 22 12:30:21.147 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:21.154 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 42503.0 }, max: { _id: 42965.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:21.154 [conn4] moveChunk updating self version to: 94|1||51276475bd1f99446659365c through { _id: 42965.0 } -> { _id: 43427.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:21.155 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:21-512764dd99334798f3e4801b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536221155), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 42503.0 }, max: { _id: 42965.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:21.155 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:21.155 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:21.155 [conn4] forking for cleanup of chunk data m30999| Fri Feb 22 12:30:21.155 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 93000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 93000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 94000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:21.155 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:21.155 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:21.155 [cleanupOldData-512764dd99334798f3e4801c] (start) waiting to cleanup test.bar from { _id: 42503.0 } -> { _id: 42965.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:21.155 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:21.155 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:21-512764dd99334798f3e4801d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536221155), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 42503.0 }, max: { _id: 42965.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:21.156 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:21.157 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 572 version: 94|1||51276475bd1f99446659365c based on: 93|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:21.158 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 573 version: 94|1||51276475bd1f99446659365c based on: 93|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:21.159 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:21.159 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:21.160 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 574 version: 94|1||51276475bd1f99446659365c based on: 94|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:21.161 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 94|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:21.161 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 94000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 574 m30999| Fri Feb 22 12:30:21.161 [conn1] setShardVersion success: { oldVersion: Timestamp 93000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:21.175 [cleanupOldData-512764dd99334798f3e4801c] waiting to remove documents for test.bar from { _id: 42503.0 } -> { _id: 42965.0 } m30001| Fri Feb 22 12:30:21.175 [cleanupOldData-512764dd99334798f3e4801c] moveChunk starting delete for: test.bar from { _id: 42503.0 } -> { _id: 42965.0 } 95000 m30001| Fri Feb 22 12:30:21.697 [cleanupOldData-512764dd99334798f3e4801c] moveChunk deleted 462 documents for test.bar from { _id: 42503.0 } -> { _id: 42965.0 } m30999| Fri Feb 22 12:30:22.160 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:22.161 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:22.161 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:22 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764debd1f9944665936ba" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764ddbd1f9944665936b9" } } m30999| Fri Feb 22 12:30:22.162 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764debd1f9944665936ba m30999| Fri Feb 22 12:30:22.162 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:22.162 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:22.162 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:22.164 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:22.164 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:22.164 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:22.164 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:22.164 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:22.166 [Balancer] shard0001 has more chunks me:124 best: shard0000:93 m30999| Fri Feb 22 12:30:22.166 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:22.166 [Balancer] donor : shard0001 chunks on 124 m30999| Fri Feb 22 12:30:22.166 [Balancer] receiver : shard0000 chunks on 93 m30999| Fri Feb 22 12:30:22.166 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:22.166 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_42965.0", lastmod: Timestamp 94000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 42965.0 }, max: { _id: 43427.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:22.167 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 94|1||000000000000000000000000min: { _id: 42965.0 }max: { _id: 43427.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 
12:30:22.167 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:22.167 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 42965.0 }, max: { _id: 43427.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_42965.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:22.168 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764de99334798f3e4801e m30001| Fri Feb 22 12:30:22.168 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:22-512764de99334798f3e4801f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536222168), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 42965.0 }, max: { _id: 43427.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:22.170 [conn4] moveChunk request accepted at version 94|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:22.171 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:22.171 [migrateThread] starting receiving-end of migration of chunk { _id: 42965.0 } -> { _id: 43427.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:22.182 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42965.0 }, max: { _id: 43427.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 120, clonedBytes: 125160, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:22.192 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42965.0 }, max: { _id: 43427.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 298, clonedBytes: 310814, catchup: 0, steady: 0 }, ok: 1.0 } 
my mem used: 0 m30000| Fri Feb 22 12:30:22.201 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:22.201 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 42965.0 } -> { _id: 43427.0 } m30001| Fri Feb 22 12:30:22.202 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42965.0 }, max: { _id: 43427.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:22.204 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 42965.0 } -> { _id: 43427.0 } m30001| Fri Feb 22 12:30:22.212 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 42965.0 }, max: { _id: 43427.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:22.212 [conn4] moveChunk setting version to: 95|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:22.213 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:22.214 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 42965.0 } -> { _id: 43427.0 } m30000| Fri Feb 22 12:30:22.214 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 42965.0 } -> { _id: 43427.0 } m30000| Fri Feb 22 12:30:22.214 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:22-512764dec49297cf54df5666", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536222214), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 42965.0 }, max: { _id: 43427.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:30:22.216 [conn1] ChunkManager: time to 
load chunks for test.bar: 1ms sequenceNumber: 575 version: 94|1||51276475bd1f99446659365c based on: 94|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:22.216 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 94|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:22.216 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 94000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 575 m30001| Fri Feb 22 12:30:22.216 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:22.223 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 42965.0 }, max: { _id: 43427.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:22.223 [conn4] moveChunk updating self version to: 95|1||51276475bd1f99446659365c through { _id: 43427.0 } -> { _id: 43889.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:22.223 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:22-512764de99334798f3e48020", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536222223), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 42965.0 }, max: { _id: 43427.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:22.223 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:30:22.223 [conn1] setShardVersion failed! 
m30001| Fri Feb 22 12:30:22.224 [conn4] MigrateFromStatus::done Global lock acquired m30999| { oldVersion: Timestamp 94000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 94000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 95000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:22.224 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:22.224 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:22.224 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:22.224 [cleanupOldData-512764de99334798f3e48021] (start) waiting to cleanup test.bar from { _id: 42965.0 } -> { _id: 43427.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:22.224 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:22.224 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:22-512764de99334798f3e48022", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536222224), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 42965.0 }, max: { _id: 43427.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:22.224 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:22.226 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 576 version: 95|1||51276475bd1f99446659365c based on: 94|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:22.228 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 577 version: 95|1||51276475bd1f99446659365c based on: 94|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:22.229 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:22.229 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:22.230 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 578 version: 95|1||51276475bd1f99446659365c based on: 95|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:22.230 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 95|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:22.231 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 95000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 578
m30999| Fri Feb 22 12:30:22.231 [conn1] setShardVersion success: { oldVersion: Timestamp 94000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:22.244 [cleanupOldData-512764de99334798f3e48021] waiting to remove documents for test.bar from { _id: 42965.0 } -> { _id: 43427.0 }
m30001| Fri Feb 22 12:30:22.244 [cleanupOldData-512764de99334798f3e48021] moveChunk starting delete for: test.bar from { _id: 42965.0 } -> { _id: 43427.0 }
m30001| Fri Feb 22 12:30:22.434 [cleanupOldData-512764de99334798f3e48021] moveChunk deleted 462 documents for test.bar from { _id: 42965.0 } -> { _id: 43427.0 }
96000
m30999| Fri Feb 22 12:30:23.230 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:23.230 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:23.230 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:23 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764dfbd1f9944665936bb" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764debd1f9944665936ba" } }
m30999| Fri Feb 22 12:30:23.231 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764dfbd1f9944665936bb
m30999| Fri Feb 22 12:30:23.231 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:23.231 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:23.231 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:23.234 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:23.234 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:23.234 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:23.234 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:23.234 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:23.236 [Balancer] shard0001 has more chunks me:123 best: shard0000:94
m30999| Fri Feb 22 12:30:23.236 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:23.236 [Balancer] donor : shard0001 chunks on 123
m30999| Fri Feb 22 12:30:23.236 [Balancer] receiver : shard0000 chunks on 94
m30999| Fri Feb 22 12:30:23.236 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:23.236 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_43427.0", lastmod: Timestamp 95000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 43427.0 }, max: { _id: 43889.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:23.237 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 95|1||000000000000000000000000min: { _id: 43427.0 }max: { _id: 43889.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:23.237 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:23.237 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 43427.0 }, max: { _id: 43889.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_43427.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:23.238 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764df99334798f3e48023
m30001| Fri Feb 22 12:30:23.238 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:23-512764df99334798f3e48024", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536223238), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 43427.0 }, max: { _id: 43889.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:23.239 [conn4] moveChunk request accepted at version 95|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:23.241 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:23.241 [migrateThread] starting receiving-end of migration of chunk { _id: 43427.0 } -> { _id: 43889.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:23.251 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43427.0 }, max: { _id: 43889.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 122, clonedBytes: 127246, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:23.262 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43427.0 }, max: { _id: 43889.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 301, clonedBytes: 313943, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:23.271 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:23.271 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 43427.0 } -> { _id: 43889.0 }
m30001| Fri Feb 22 12:30:23.272 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43427.0 }, max: { _id: 43889.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:23.274 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 43427.0 } -> { _id: 43889.0 }
m30001| Fri Feb 22 12:30:23.282 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43427.0 }, max: { _id: 43889.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:23.282 [conn4] moveChunk setting version to: 96|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:23.282 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:23.284 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 43427.0 } -> { _id: 43889.0 }
m30000| Fri Feb 22 12:30:23.284 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 43427.0 } -> { _id: 43889.0 }
m30000| Fri Feb 22 12:30:23.284 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:23-512764dfc49297cf54df5667", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536223284), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 43427.0 }, max: { _id: 43889.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } }
m30999| Fri Feb 22 12:30:23.285 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 579 version: 95|1||51276475bd1f99446659365c based on: 95|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:23.285 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 95|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:23.285 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 95000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 579
m30001| Fri Feb 22 12:30:23.285 [conn3] waiting till out of critical section
m30001| Fri Feb 22 12:30:23.293 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 43427.0 }, max: { _id: 43889.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:23.293 [conn4] moveChunk updating self version to: 96|1||51276475bd1f99446659365c through { _id: 43889.0 } -> { _id: 44351.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:23.293 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:23-512764df99334798f3e48025", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536223293), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 43427.0 }, max: { _id: 43889.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:23.294 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| Fri Feb 22 12:30:23.294 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 95000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 95000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 96000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:23.294 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:23.294 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:30:23.296 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 580 version: 96|1||51276475bd1f99446659365c based on: 95|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:23.297 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 581 version: 96|1||51276475bd1f99446659365c based on: 96|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:23.298 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 96|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:23.298 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 96000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 581
m30999| Fri Feb 22 12:30:23.298 [conn1] setShardVersion success: { oldVersion: Timestamp 95000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:23.339 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:23.339 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:23.339 [cleanupOldData-512764df99334798f3e48026] (start) waiting to cleanup test.bar from { _id: 43427.0 } -> { _id: 43889.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:23.340 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:23.340 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:23-512764df99334798f3e48027", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536223340), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 43427.0 }, max: { _id: 43889.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 45 } }
m30001| Fri Feb 22 12:30:23.340 [conn4] command admin.$cmd command: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 43427.0 }, max: { _id: 43889.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_43427.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:24 r:1644 w:84 reslen:37 103ms
m30999| Fri Feb 22 12:30:23.340 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:23.341 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:23.341 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30001| Fri Feb 22 12:30:23.359 [cleanupOldData-512764df99334798f3e48026] waiting to remove documents for test.bar from { _id: 43427.0 } -> { _id: 43889.0 }
m30001| Fri Feb 22 12:30:23.359 [cleanupOldData-512764df99334798f3e48026] moveChunk starting delete for: test.bar from { _id: 43427.0 } -> { _id: 43889.0 }
97000
m30001| Fri Feb 22 12:30:24.193 [cleanupOldData-512764df99334798f3e48026] moveChunk deleted 462 documents for test.bar from { _id: 43427.0 } -> { _id: 43889.0 }
m30999| Fri Feb 22 12:30:24.342 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:24.342 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:24.343 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:24 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e0bd1f9944665936bc" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764dfbd1f9944665936bb" } }
m30999| Fri Feb 22 12:30:24.343 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e0bd1f9944665936bc
m30999| Fri Feb 22 12:30:24.343 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:24.343 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:24.343 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:24.345 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:24.346 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:24.346 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:24.346 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:24.346 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:24.348 [Balancer] shard0001 has more chunks me:122 best: shard0000:95
m30999| Fri Feb 22 12:30:24.348 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:24.348 [Balancer] donor : shard0001 chunks on 122
m30999| Fri Feb 22 12:30:24.348 [Balancer] receiver : shard0000 chunks on 95
m30999| Fri Feb 22 12:30:24.348 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:24.348 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_43889.0", lastmod: Timestamp 96000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 43889.0 }, max: { _id: 44351.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:24.348 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 96|1||000000000000000000000000min: { _id: 43889.0 }max: { _id: 44351.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:24.348 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:24.348 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 43889.0 }, max: { _id: 44351.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_43889.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:24.349 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e099334798f3e48028
m30001| Fri Feb 22 12:30:24.349 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:24-512764e099334798f3e48029", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536224349), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 43889.0 }, max: { _id: 44351.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:24.350 [conn4] moveChunk request accepted at version 96|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:24.352 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:24.352 [migrateThread] starting receiving-end of migration of chunk { _id: 43889.0 } -> { _id: 44351.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:24.362 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43889.0 }, max: { _id: 44351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 133504, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:24.373 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43889.0 }, max: { _id: 44351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 267, clonedBytes: 278481, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:24.383 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43889.0 }, max: { _id: 44351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 433, clonedBytes: 451619, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:24.385 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:24.385 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 43889.0 } -> { _id: 44351.0 }
m30000| Fri Feb 22 12:30:24.386 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 43889.0 } -> { _id: 44351.0 }
m30001| Fri Feb 22 12:30:24.393 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 43889.0 }, max: { _id: 44351.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:24.393 [conn4] moveChunk setting version to: 97|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:24.393 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:30:24.396 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 582 version: 96|1||51276475bd1f99446659365c based on: 96|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:24.396 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 96|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:24.396 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 96000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 582
m30001| Fri Feb 22 12:30:24.396 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:30:24.397 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 43889.0 } -> { _id: 44351.0 }
m30000| Fri Feb 22 12:30:24.397 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 43889.0 } -> { _id: 44351.0 }
m30000| Fri Feb 22 12:30:24.397 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:24-512764e0c49297cf54df5668", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536224397), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 43889.0 }, max: { _id: 44351.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 31, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:24.403 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 43889.0 }, max: { _id: 44351.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:24.403 [conn4] moveChunk updating self version to: 97|1||51276475bd1f99446659365c through { _id: 44351.0 } -> { _id: 44813.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:24.404 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:24-512764e099334798f3e4802a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536224404), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 43889.0 }, max: { _id: 44351.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:24.404 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:24.404 [conn4] MigrateFromStatus::done Global lock acquired
m30999| Fri Feb 22 12:30:24.404 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:30:24.404 [conn4] forking for cleanup of chunk data
m30999| { oldVersion: Timestamp 96000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 96000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 97000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:24.404 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:24.404 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:24.404 [cleanupOldData-512764e099334798f3e4802b] (start) waiting to cleanup test.bar from { _id: 43889.0 } -> { _id: 44351.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:24.405 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:24.405 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:24-512764e099334798f3e4802c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536224405), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 43889.0 }, max: { _id: 44351.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:24.405 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:24.406 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 583 version: 97|1||51276475bd1f99446659365c based on: 96|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:24.408 [Balancer] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 584 version: 97|1||51276475bd1f99446659365c based on: 96|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:24.409 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:24.410 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:24.410 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 585 version: 97|1||51276475bd1f99446659365c based on: 97|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:24.411 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 97|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:24.411 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 97000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 585
m30999| Fri Feb 22 12:30:24.411 [conn1] setShardVersion success: { oldVersion: Timestamp 96000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:24.424 [cleanupOldData-512764e099334798f3e4802b] waiting to remove documents for test.bar from { _id: 43889.0 } -> { _id: 44351.0 }
m30001| Fri Feb 22 12:30:24.424 [cleanupOldData-512764e099334798f3e4802b] moveChunk starting delete for: test.bar from { _id: 43889.0 } -> { _id: 44351.0 }
98000
m30001| Fri Feb 22 12:30:24.876 [cleanupOldData-512764e099334798f3e4802b] moveChunk deleted 462 documents for test.bar from { _id: 43889.0 } -> { _id: 44351.0 }
m30999| Fri Feb 22 12:30:25.410 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:25.410 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:25.411 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:25 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e1bd1f9944665936bd" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764e0bd1f9944665936bc" } }
m30999| Fri Feb 22 12:30:25.411 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e1bd1f9944665936bd
m30999| Fri Feb 22 12:30:25.411 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:25.411 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:25.411 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:25.413 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:25.413 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:25.413 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:25.413 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:25.413 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:25.416 [Balancer] shard0001 has more chunks me:121 best: shard0000:96
m30999| Fri Feb 22 12:30:25.416 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:25.416 [Balancer] donor : shard0001 chunks on 121
m30999| Fri Feb 22 12:30:25.416 [Balancer] receiver : shard0000 chunks on 96
m30999| Fri Feb 22 12:30:25.416 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:25.416 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_44351.0", lastmod: Timestamp 97000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 44351.0 }, max: { _id: 44813.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:25.416 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 97|1||000000000000000000000000min: { _id: 44351.0 }max: { _id: 44813.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:25.416 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:25.416 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 44351.0 }, max: { _id: 44813.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_44351.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:25.417 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e199334798f3e4802d
m30001| Fri Feb 22 12:30:25.417 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:25-512764e199334798f3e4802e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536225417), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 44351.0 }, max: { _id: 44813.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:25.418 [conn4] moveChunk request accepted at version 97|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:25.419 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:25.420 [migrateThread] starting receiving-end of migration of chunk { _id: 44351.0 } -> { _id: 44813.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:25.430 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44351.0 }, max: { _id: 44813.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 166, clonedBytes: 173138, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:25.440 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44351.0 }, max: { _id: 44813.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 419, clonedBytes: 437017, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:25.442 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:25.442 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 44351.0 } -> { _id: 44813.0 }
m30000| Fri Feb 22 12:30:25.443 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 44351.0 } -> { _id: 44813.0 }
m30001| Fri Feb 22 12:30:25.450 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44351.0 }, max: { _id: 44813.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:25.450 [conn4] moveChunk setting version to: 98|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:25.450 [conn11] Waiting for commit to finish
m30999| Fri Feb 22 12:30:25.453 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 586 version: 97|1||51276475bd1f99446659365c based on: 97|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:25.453 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 97|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:25.453 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 97000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 586
m30001| Fri Feb 22 12:30:25.453 [conn3] waiting till out of critical section
m30000| Fri Feb 22 12:30:25.454 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 44351.0 } -> { _id: 44813.0 }
m30000| Fri Feb 22 12:30:25.454 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 44351.0 } -> { _id: 44813.0 }
m30000| Fri Feb 22 12:30:25.454 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:25-512764e1c49297cf54df5669", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536225454), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 44351.0 }, max: { _id: 44813.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 21, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:25.460 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 44351.0 }, max: { _id: 44813.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:25.460 [conn4] moveChunk updating self version to: 98|1||51276475bd1f99446659365c through { _id: 44813.0 } -> { _id: 45275.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:25.461 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:25-512764e199334798f3e4802f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536225461), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 44351.0 }, max: { _id: 44813.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:25.461 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:25.461 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:25.461 [conn4] forking for cleanup of chunk data
m30999| Fri Feb 22 12:30:25.461 [conn1] setShardVersion failed!
m30001| Fri Feb 22 12:30:25.461 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30999| { oldVersion: Timestamp 97000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 97000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 98000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" }
m30001| Fri Feb 22 12:30:25.461 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:25.461 [cleanupOldData-512764e199334798f3e48030] (start) waiting to cleanup test.bar from { _id: 44351.0 } -> { _id: 44813.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:25.462 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:25.462 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:25-512764e199334798f3e48031", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536225462), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 44351.0 }, max: { _id: 44813.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 1, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:25.462 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:25.464 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 587 version: 98|1||51276475bd1f99446659365c based on: 97|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:25.465 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 588 version: 98|1||51276475bd1f99446659365c based on: 97|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:25.466 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:25.466 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:25.467 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 589 version: 98|1||51276475bd1f99446659365c based on: 98|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:25.468 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 98|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:25.468 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 98000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 589
m30999| Fri Feb 22 12:30:25.468 [conn1] setShardVersion success: { oldVersion: Timestamp 97000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:25.481 [cleanupOldData-512764e199334798f3e48030] waiting to remove documents for test.bar from { _id: 44351.0 } -> { _id: 44813.0 }
m30001| Fri Feb 22 12:30:25.481 [cleanupOldData-512764e199334798f3e48030] moveChunk starting delete for: test.bar from { _id: 44351.0 } -> { _id: 44813.0 }
99000
m30001| Fri Feb 22 12:30:25.894 [cleanupOldData-512764e199334798f3e48030] moveChunk deleted 462 documents for test.bar from { _id: 44351.0 } -> { _id: 44813.0 }
m30999| Fri Feb 22 12:30:26.467 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:26.467 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:26.468 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:26 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e2bd1f9944665936be" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764e1bd1f9944665936bd" } }
m30999| Fri Feb 22 12:30:26.468 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e2bd1f9944665936be
m30999| Fri Feb 22 12:30:26.468 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:26.468 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:26.468 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:26.471 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:26.471 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:26.471 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:26.471 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:26.471 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:26.473 [Balancer] shard0001 has more chunks me:120 best: shard0000:97
m30999| Fri Feb 22 12:30:26.473 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:26.473 [Balancer] donor : shard0001 chunks on 120
m30999| Fri Feb 22 12:30:26.473 [Balancer] receiver : shard0000 chunks on 97
m30999| Fri Feb 22 12:30:26.473 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:26.473 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_44813.0", lastmod: Timestamp 98000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 44813.0 }, max: { _id: 45275.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:26.473 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 98|1||000000000000000000000000min: { _id: 44813.0 }max: { _id: 45275.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:26.474 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:26.474 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 44813.0 }, max: { _id: 45275.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_44813.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:26.475 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e299334798f3e48032
m30001| Fri Feb 22 12:30:26.475 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:26-512764e299334798f3e48033", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536226475), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 44813.0 }, max: { _id: 45275.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:26.476 [conn4] moveChunk request accepted at version 98|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:26.478 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:26.478 [migrateThread] starting receiving-end of migration of chunk { _id: 44813.0 } -> { _id: 45275.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:26.488 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44813.0 }, max: { _id: 45275.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 128, clonedBytes: 133504, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:26.498 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44813.0 }, max: { _id: 45275.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 300, clonedBytes: 312900, catchup: 0, steady: 0 }, ok: 1.0 }
my mem used: 0 m30000| Fri Feb 22 12:30:26.508 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:26.508 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 44813.0 } -> { _id: 45275.0 } m30001| Fri Feb 22 12:30:26.508 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44813.0 }, max: { _id: 45275.0 }, shardKeyPattern: { _id: 1.0 }, state: "catchup", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:26.510 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 44813.0 } -> { _id: 45275.0 } m30001| Fri Feb 22 12:30:26.519 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 44813.0 }, max: { _id: 45275.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:26.519 [conn4] moveChunk setting version to: 99|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:26.519 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:26.520 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 44813.0 } -> { _id: 45275.0 } m30000| Fri Feb 22 12:30:26.521 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 44813.0 } -> { _id: 45275.0 } m30000| Fri Feb 22 12:30:26.521 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:26-512764e2c49297cf54df566a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536226521), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 44813.0 }, max: { _id: 45275.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 29, step4 of 5: 0, step5 of 5: 12 } } m30999| Fri Feb 22 12:30:26.521 [conn1] ChunkManager: time to 
load chunks for test.bar: 1ms sequenceNumber: 590 version: 98|1||51276475bd1f99446659365c based on: 98|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:26.522 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 98|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:26.522 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 98000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 590 m30001| Fri Feb 22 12:30:26.522 [conn3] waiting till out of critical section m30001| Fri Feb 22 12:30:26.529 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 44813.0 }, max: { _id: 45275.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:26.529 [conn4] moveChunk updating self version to: 99|1||51276475bd1f99446659365c through { _id: 45275.0 } -> { _id: 45737.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:26.530 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:26-512764e299334798f3e48034", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536226530), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 44813.0 }, max: { _id: 45275.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:26.530 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30999| Fri Feb 22 12:30:26.530 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 98000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", version: Timestamp 98000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), globalVersion: Timestamp 99000|0, globalVersionEpoch: ObjectId('51276475bd1f99446659365c'), reloadConfig: true, ok: 0.0, errmsg: "shard global version for collection is higher than trying to set to 'test.bar'" } m30001| Fri Feb 22 12:30:26.530 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:26.530 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:26.530 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:26.530 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:26.530 [cleanupOldData-512764e299334798f3e48035] (start) waiting to cleanup test.bar from { _id: 44813.0 } -> { _id: 45275.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:26.531 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:26.531 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:26-512764e299334798f3e48036", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536226531), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 44813.0 }, max: { _id: 45275.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 40, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:26.531 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:26.532 [conn1] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 591 version: 99|1||51276475bd1f99446659365c based on: 98|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:26.534 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 592 version: 99|1||51276475bd1f99446659365c based on: 98|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:26.535 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:26.536 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:26.537 [conn1] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 593 version: 99|1||51276475bd1f99446659365c based on: 99|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:26.537 [conn1] warning: chunk manager reload forced for collection 'test.bar', config version is 99|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:26.537 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 99000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 593 m30999| Fri Feb 22 12:30:26.537 [conn1] setShardVersion success: { oldVersion: Timestamp 98000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:26.550 [cleanupOldData-512764e299334798f3e48035] waiting to remove documents for test.bar from { _id: 44813.0 } -> { _id: 45275.0 } m30001| Fri Feb 22 12:30:26.550 [cleanupOldData-512764e299334798f3e48035] moveChunk starting delete for: test.bar from { _id: 44813.0 } -> { _id: 45275.0 } m30001| Fri Feb 22 12:30:26.886 [cleanupOldData-512764e299334798f3e48035] moveChunk deleted 462 documents for test.bar from { _id: 44813.0 } -> { _id: 45275.0 } --- Sharding Status --- sharding version: { "_id" : 1, "version" : 3, "minCompatibleVersion" : 3, "currentVersion" : 4, "clusterId" : ObjectId("512763febd1f994466593644") } shards: { "_id" : "shard0000", "host" : "localhost:30000" } { "_id" : "shard0001", "host" : "localhost:30001" } databases: { "_id" : "admin", "partitioned" : false, "primary" : "config" } { "_id" : "test", "partitioned" : true, "primary" : "shard0001" } test.bar shard key: { "_id" : 1 } chunks: shard0000 98 shard0001 119 too many chunks to print, use verbose if you want to force print test.foo shard key: { "_id" : 1 } chunks: shard0000 53 shard0001 54 too many chunks to print, use verbose if 
you want to force print ShardingTest input: { "shard0000" : 53, "shard0001" : 54 } min: 53 max: 54 ShardingTest input: { "shard0000" : 98, "shard0001" : 119 } min: 98 max: 119 m30999| Fri Feb 22 12:30:26.971 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 54000|0, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 413 m30999| Fri Feb 22 12:30:26.971 [conn1] setShardVersion failed! m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" } m30999| Fri Feb 22 12:30:26.972 [conn1] setShardVersion shard0000 localhost:30000 test.foo { setShardVersion: "test.foo", configdb: "localhost:30000", version: Timestamp 54000|0, versionEpoch: ObjectId('51276475bd1f99446659365b'), serverID: ObjectId('512763febd1f994466593646'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 413 m30000| Fri Feb 22 12:30:26.972 [conn6] no current chunk manager found for this shard, will initialize m30999| Fri Feb 22 12:30:26.974 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } m30999| Fri Feb 22 12:30:26.976 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 99000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 593 m30999| Fri Feb 22 12:30:26.976 [conn1] setShardVersion failed! 
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.bar", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.bar'" } m30999| Fri Feb 22 12:30:26.976 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 99000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), authoritative: true, shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 593 m30000| Fri Feb 22 12:30:26.976 [conn6] no current chunk manager found for this shard, will initialize m30999| Fri Feb 22 12:30:26.980 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 } 0 m30999| Fri Feb 22 12:30:27.536 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:27.537 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:27.537 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:27 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764e3bd1f9944665936bf" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764e2bd1f9944665936be" } } m30999| Fri Feb 22 12:30:27.537 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e3bd1f9944665936bf m30999| Fri Feb 22 12:30:27.537 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:27.538 [Balancer] waitForDelete: 0 
m30999| Fri Feb 22 12:30:27.538 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:27.539 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:27.540 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:27.540 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:27.540 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:27.540 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:27.542 [Balancer] shard0001 has more chunks me:119 best: shard0000:98 m30999| Fri Feb 22 12:30:27.542 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:27.542 [Balancer] donor : shard0001 chunks on 119 m30999| Fri Feb 22 12:30:27.542 [Balancer] receiver : shard0000 chunks on 98 m30999| Fri Feb 22 12:30:27.542 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:27.542 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_45275.0", lastmod: Timestamp 99000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:27.542 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 99|1||000000000000000000000000min: { _id: 45275.0 }max: { _id: 45737.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:27.542 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:27.542 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 45275.0 }, max: { _id: 45737.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_45275.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:27.543 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e399334798f3e48037 m30001| Fri 
Feb 22 12:30:27.543 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:27-512764e399334798f3e48038", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536227543), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 45275.0 }, max: { _id: 45737.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:27.545 [conn4] moveChunk request accepted at version 99|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:27.545 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:27.546 [migrateThread] starting receiving-end of migration of chunk { _id: 45275.0 } -> { _id: 45737.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:27.556 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 77, clonedBytes: 80311, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:27.566 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 200, clonedBytes: 208600, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:27.576 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 327, clonedBytes: 341061, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:27.586 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 450, clonedBytes: 469350, catchup: 0, steady: 
0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:27.588 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:27.588 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 45275.0 } -> { _id: 45737.0 } m30000| Fri Feb 22 12:30:27.590 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 45275.0 } -> { _id: 45737.0 } m30001| Fri Feb 22 12:30:27.602 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:27.603 [conn4] moveChunk setting version to: 100|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:27.603 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:27.610 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 45275.0 } -> { _id: 45737.0 } m30000| Fri Feb 22 12:30:27.610 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 45275.0 } -> { _id: 45737.0 } m30000| Fri Feb 22 12:30:27.611 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:27-512764e3c49297cf54df566b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536227611), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 45275.0 }, max: { _id: 45737.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 41, step4 of 5: 0, step5 of 5: 22 } } m30001| Fri Feb 22 12:30:27.613 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 45275.0 }, max: { _id: 45737.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:27.613 [conn4] moveChunk 
updating self version to: 100|1||51276475bd1f99446659365c through { _id: 45737.0 } -> { _id: 46199.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:27.614 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:27-512764e399334798f3e48039", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536227614), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 45275.0 }, max: { _id: 45737.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:27.614 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:27.614 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:27.614 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:27.614 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:27.614 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:27.614 [cleanupOldData-512764e399334798f3e4803a] (start) waiting to cleanup test.bar from { _id: 45275.0 } -> { _id: 45737.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:27.614 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:27.614 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:27-512764e399334798f3e4803b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536227614), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 45275.0 }, max: { _id: 45737.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:27.614 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:27.616 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 594 version: 100|1||51276475bd1f99446659365c based on: 99|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:27.616 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:27.617 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:27.618 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 100000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 594 m30999| Fri Feb 22 12:30:27.618 [conn1] setShardVersion success: { oldVersion: Timestamp 99000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:27.634 [cleanupOldData-512764e399334798f3e4803a] waiting to remove documents for test.bar from { _id: 45275.0 } -> { _id: 45737.0 } m30001| Fri Feb 22 12:30:27.634 [cleanupOldData-512764e399334798f3e4803a] moveChunk starting delete for: test.bar from { _id: 45275.0 } -> { _id: 45737.0 } m30001| Fri Feb 22 12:30:27.652 [cleanupOldData-512764e399334798f3e4803a] moveChunk deleted 462 documents for test.bar from { _id: 45275.0 } -> { _id: 45737.0 } 1000 m30999| Fri Feb 22 12:30:28.617 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 
12:30:28.618 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:28.618 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:28 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764e4bd1f9944665936c0" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764e3bd1f9944665936bf" } } m30999| Fri Feb 22 12:30:28.618 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e4bd1f9944665936c0 m30999| Fri Feb 22 12:30:28.618 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:28.618 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:28.618 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:28.620 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:28.620 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:28.620 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:28.620 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:28.620 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:28.622 [Balancer] shard0001 has more chunks me:118 best: shard0000:99 m30999| Fri Feb 22 12:30:28.622 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:28.622 [Balancer] donor : shard0001 chunks on 118 m30999| Fri Feb 22 12:30:28.622 [Balancer] receiver : shard0000 chunks on 99 m30999| Fri Feb 22 12:30:28.622 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:28.623 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_45737.0", lastmod: 
Timestamp 100000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 45737.0 }, max: { _id: 46199.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:28.623 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 100|1||000000000000000000000000min: { _id: 45737.0 }max: { _id: 46199.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:28.623 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:28.623 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 45737.0 }, max: { _id: 46199.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_45737.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:28.624 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e499334798f3e4803c m30001| Fri Feb 22 12:30:28.624 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:28-512764e499334798f3e4803d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536228624), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 45737.0 }, max: { _id: 46199.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:28.625 [conn4] moveChunk request accepted at version 100|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:28.626 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:28.626 [migrateThread] starting receiving-end of migration of chunk { _id: 45737.0 } -> { _id: 46199.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:28.636 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 
45737.0 }, max: { _id: 46199.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 141, clonedBytes: 147063, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:28.646 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45737.0 }, max: { _id: 46199.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 332, clonedBytes: 346276, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:28.654 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:28.654 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 45737.0 } -> { _id: 46199.0 }
m30000| Fri Feb 22 12:30:28.656 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 45737.0 } -> { _id: 46199.0 }
m30001| Fri Feb 22 12:30:28.656 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 45737.0 }, max: { _id: 46199.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:28.657 [conn4] moveChunk setting version to: 101|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:28.657 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:28.666 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 45737.0 } -> { _id: 46199.0 }
m30000| Fri Feb 22 12:30:28.666 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 45737.0 } -> { _id: 46199.0 }
m30000| Fri Feb 22 12:30:28.666 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:28-512764e4c49297cf54df566c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536228666), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 45737.0 }, max: { _id: 46199.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 26, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:28.667 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 45737.0 }, max: { _id: 46199.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:28.667 [conn4] moveChunk updating self version to: 101|1||51276475bd1f99446659365c through { _id: 46199.0 } -> { _id: 46661.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:28.668 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:28-512764e499334798f3e4803e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536228668), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 45737.0 }, max: { _id: 46199.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:28.668 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:28.668 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:28.668 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:28.668 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:28.668 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:28.668 [cleanupOldData-512764e499334798f3e4803f] (start) waiting to cleanup test.bar from { _id: 45737.0 } -> { _id: 46199.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:28.668 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:28.668 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:28-512764e499334798f3e48040", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536228668), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 45737.0 }, max: { _id: 46199.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 30, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:28.668 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:28.670 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 595 version: 101|1||51276475bd1f99446659365c based on: 100|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:28.671 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:28.671 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 101000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 595
m30999| Fri Feb 22 12:30:28.671 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:28.672 [conn1] setShardVersion success: { oldVersion: Timestamp 100000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:28.688 [cleanupOldData-512764e499334798f3e4803f] waiting to remove documents for test.bar from { _id: 45737.0 } -> { _id: 46199.0 }
m30001| Fri Feb 22 12:30:28.688 [cleanupOldData-512764e499334798f3e4803f] moveChunk starting delete for: test.bar from { _id: 45737.0 } -> { _id: 46199.0 }
m30001| Fri Feb 22 12:30:28.716 [cleanupOldData-512764e499334798f3e4803f] moveChunk deleted 462 documents for test.bar from { _id: 45737.0 } -> { _id: 46199.0 }
2000
m30999| Fri Feb 22 12:30:29.672 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:29.672 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:29.672 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:29 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e5bd1f9944665936c1" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764e4bd1f9944665936c0" } }
m30999| Fri Feb 22 12:30:29.673 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e5bd1f9944665936c1
m30999| Fri Feb 22 12:30:29.673 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:29.673 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:29.673 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:29.675 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:29.675 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:29.675 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:29.675 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:29.675 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:29.677 [Balancer] shard0001 has more chunks me:117 best: shard0000:100
m30999| Fri Feb 22 12:30:29.677 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:29.677 [Balancer] donor : shard0001 chunks on 117
m30999| Fri Feb 22 12:30:29.677 [Balancer] receiver : shard0000 chunks on 100
m30999| Fri Feb 22 12:30:29.677 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:29.677 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_46199.0", lastmod: Timestamp 101000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 46199.0 }, max: { _id: 46661.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:29.678 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 101|1||000000000000000000000000min: { _id: 46199.0 }max: { _id: 46661.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:29.678 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:29.678 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 46199.0 }, max: { _id: 46661.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_46199.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:29.679 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e599334798f3e48041
m30001| Fri Feb 22 12:30:29.679 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:29-512764e599334798f3e48042", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536229679), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 46199.0 }, max: { _id: 46661.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:29.680 [conn4] moveChunk request accepted at version 101|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:29.680 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:29.681 [migrateThread] starting receiving-end of migration of chunk { _id: 46199.0 } -> { _id: 46661.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:29.691 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46199.0 }, max: { _id: 46661.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 150, clonedBytes: 156450, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:29.701 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46199.0 }, max: { _id: 46661.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 342, clonedBytes: 356706, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:29.708 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:29.708 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 46199.0 } -> { _id: 46661.0 }
m30000| Fri Feb 22 12:30:29.709 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 46199.0 } -> { _id: 46661.0 }
m30001| Fri Feb 22 12:30:29.711 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46199.0 }, max: { _id: 46661.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:29.711 [conn4] moveChunk setting version to: 102|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:29.711 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:29.720 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 46199.0 } -> { _id: 46661.0 }
m30000| Fri Feb 22 12:30:29.720 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 46199.0 } -> { _id: 46661.0 }
m30000| Fri Feb 22 12:30:29.720 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:29-512764e5c49297cf54df566d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536229720), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 46199.0 }, max: { _id: 46661.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 26, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:29.721 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 46199.0 }, max: { _id: 46661.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:29.722 [conn4] moveChunk updating self version to: 102|1||51276475bd1f99446659365c through { _id: 46661.0 } -> { _id: 47123.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:29.722 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:29-512764e599334798f3e48043", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536229722), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 46199.0 }, max: { _id: 46661.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:29.722 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:29.722 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:29.722 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:29.722 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:29.722 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:29.722 [cleanupOldData-512764e599334798f3e48044] (start) waiting to cleanup test.bar from { _id: 46199.0 } -> { _id: 46661.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:29.723 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:29.723 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:29-512764e599334798f3e48045", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536229723), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 46199.0 }, max: { _id: 46661.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 30, step5 of 6: 10, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:29.723 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:29.725 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 596 version: 102|1||51276475bd1f99446659365c based on: 101|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:29.725 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:29.726 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
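Each balancing round logged above applies the same chunk-count heuristic per collection: mongos compares the shard holding the most chunks (the logged "donor") with the shard holding the fewest (the "receiver") and schedules one migration only when the gap reaches the logged "threshold" (here, test.foo at 54 vs 53 stays put, while test.bar at 117 vs 100 moves a chunk). A minimal sketch of that decision in plain JavaScript; `pickMigration` is an illustrative name, not the server's internal API, and the "gap must be at least threshold" rule is inferred from these log lines:

```javascript
// chunkCounts maps shard name -> chunk count for one collection,
// mirroring the balancer log lines:
//   "donor : shard0001 chunks on 117"
//   "receiver : shard0000 chunks on 100"
//   "threshold : 2"
function pickMigration(chunkCounts, threshold) {
  const shards = Object.entries(chunkCounts);
  // Donor: shard with the most chunks; receiver: shard with the fewest.
  const donor = shards.reduce((a, b) => (b[1] > a[1] ? b : a));
  const receiver = shards.reduce((a, b) => (b[1] < a[1] ? b : a));
  if (donor[1] - receiver[1] < threshold) return null; // balanced enough
  return { from: donor[0], to: receiver[0] };
}

// test.bar as logged: 117 vs 100, threshold 2 -> move from shard0001 to shard0000
console.log(pickMigration({ shard0001: 117, shard0000: 100 }, 2));
// test.foo as logged: 54 vs 53, threshold 2 -> no migration
console.log(pickMigration({ shard0001: 54, shard0000: 53 }, 2));
```

This matches why every round in this run moves exactly one test.bar chunk and leaves test.foo alone until the counts converge.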
m30999| Fri Feb 22 12:30:29.726 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 102000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 596
m30999| Fri Feb 22 12:30:29.726 [conn1] setShardVersion success: { oldVersion: Timestamp 101000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:29.742 [cleanupOldData-512764e599334798f3e48044] waiting to remove documents for test.bar from { _id: 46199.0 } -> { _id: 46661.0 }
m30001| Fri Feb 22 12:30:29.742 [cleanupOldData-512764e599334798f3e48044] moveChunk starting delete for: test.bar from { _id: 46199.0 } -> { _id: 46661.0 }
m30001| Fri Feb 22 12:30:29.771 [cleanupOldData-512764e599334798f3e48044] moveChunk deleted 462 documents for test.bar from { _id: 46199.0 } -> { _id: 46661.0 }
3000
m30999| Fri Feb 22 12:30:30.726 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:30.727 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:30.727 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:30 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e6bd1f9944665936c2" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764e5bd1f9944665936c1" } }
m30999| Fri Feb 22 12:30:30.728 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e6bd1f9944665936c2
m30999| Fri Feb 22 12:30:30.728 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:30.728 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:30.728 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:30.731 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:30.731 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:30.731 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:30.731 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:30.731 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:30.734 [Balancer] shard0001 has more chunks me:116 best: shard0000:101
m30999| Fri Feb 22 12:30:30.734 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:30.734 [Balancer] donor : shard0001 chunks on 116
m30999| Fri Feb 22 12:30:30.734 [Balancer] receiver : shard0000 chunks on 101
m30999| Fri Feb 22 12:30:30.734 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:30.734 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_46661.0", lastmod: Timestamp 102000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:30.735 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 102|1||000000000000000000000000min: { _id: 46661.0 }max: { _id: 47123.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:30.735 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:30.735 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 46661.0 }, max: { _id: 47123.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_46661.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:30.736 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e699334798f3e48046
m30001| Fri Feb 22 12:30:30.736 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:30-512764e699334798f3e48047", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536230736), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 46661.0 }, max: { _id: 47123.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:30.738 [conn4] moveChunk request accepted at version 102|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:30.740 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:30.740 [migrateThread] starting receiving-end of migration of chunk { _id: 46661.0 } -> { _id: 47123.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:30.750 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 68, clonedBytes: 70924, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:30.761 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 178, clonedBytes: 185654, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:30.771 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 296, clonedBytes: 308728, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:30.781 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 409, clonedBytes: 426587, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:30.788 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:30.788 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 46661.0 } -> { _id: 47123.0 }
m30000| Fri Feb 22 12:30:30.790 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 46661.0 } -> { _id: 47123.0 }
m30001| Fri Feb 22 12:30:30.798 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:30.798 [conn4] moveChunk setting version to: 103|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:30.798 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:30.801 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 46661.0 } -> { _id: 47123.0 }
m30000| Fri Feb 22 12:30:30.801 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 46661.0 } -> { _id: 47123.0 }
m30000| Fri Feb 22 12:30:30.801 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:30-512764e6c49297cf54df566e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536230801), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 46661.0 }, max: { _id: 47123.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 46, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:30.808 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 46661.0 }, max: { _id: 47123.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:30.808 [conn4] moveChunk updating self version to: 103|1||51276475bd1f99446659365c through { _id: 47123.0 } -> { _id: 47585.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:30.809 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:30-512764e699334798f3e48048", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536230809), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 46661.0 }, max: { _id: 47123.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:30.809 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:30.809 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:30.809 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:30.809 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:30.809 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:30.809 [cleanupOldData-512764e699334798f3e48049] (start) waiting to cleanup test.bar from { _id: 46661.0 } -> { _id: 47123.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:30.810 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:30.810 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:30-512764e699334798f3e4804a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536230810), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 46661.0 }, max: { _id: 47123.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 1, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:30.810 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:30.812 [Balancer] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 597 version: 103|1||51276475bd1f99446659365c based on: 102|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:30.813 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:30.814 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:30.814 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 103000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 597
m30999| Fri Feb 22 12:30:30.815 [conn1] setShardVersion success: { oldVersion: Timestamp 102000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:30.829 [cleanupOldData-512764e699334798f3e48049] waiting to remove documents for test.bar from { _id: 46661.0 } -> { _id: 47123.0 }
m30001| Fri Feb 22 12:30:30.829 [cleanupOldData-512764e699334798f3e48049] moveChunk starting delete for: test.bar from { _id: 46661.0 } -> { _id: 47123.0 }
m30001| Fri Feb 22 12:30:30.858 [cleanupOldData-512764e699334798f3e48049] moveChunk deleted 462 documents for test.bar from { _id: 46661.0 } -> { _id: 47123.0 }
4000
m30999| Fri Feb 22 12:30:31.814 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:31.815 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:31.815 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:31 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e7bd1f9944665936c3" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764e6bd1f9944665936c2" } }
m30999| Fri Feb 22 12:30:31.816 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e7bd1f9944665936c3
m30999| Fri Feb 22 12:30:31.816 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:31.816 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:31.816 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:31.818 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:31.818 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:31.818 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:31.818 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:31.818 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:31.820 [Balancer] shard0001 has more chunks me:115 best: shard0000:102
m30999| Fri Feb 22 12:30:31.820 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:31.820 [Balancer] donor : shard0001 chunks on 115
m30999| Fri Feb 22 12:30:31.820 [Balancer] receiver : shard0000 chunks on 102
m30999| Fri Feb 22 12:30:31.820 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:31.820 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_47123.0", lastmod: Timestamp 103000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:31.820 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 103|1||000000000000000000000000min: { _id: 47123.0 }max: { _id: 47585.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:31.820 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:31.821 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 47123.0 }, max: { _id: 47585.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_47123.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:31.821 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e799334798f3e4804b
m30001| Fri Feb 22 12:30:31.821 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:31-512764e799334798f3e4804c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536231821), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 47123.0 }, max: { _id: 47585.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:31.823 [conn4] moveChunk request accepted at version 103|1||51276475bd1f99446659365c
m30001| Fri Feb 22 12:30:31.823 [conn4] moveChunk number of documents: 462
m30000| Fri Feb 22 12:30:31.823 [migrateThread] starting receiving-end of migration of chunk { _id: 47123.0 } -> { _id: 47585.0 } for collection test.bar from localhost:30001 (0 slaves detected)
m30001| Fri Feb 22 12:30:31.834 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 73, clonedBytes: 76139, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:31.844 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 190, clonedBytes: 198170, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:31.854 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 299, clonedBytes: 311857, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:31.864 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 417, clonedBytes: 434931, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30000| Fri Feb 22 12:30:31.868 [migrateThread] Waiting for replication to catch up before entering critical section
m30000| Fri Feb 22 12:30:31.868 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 47123.0 } -> { _id: 47585.0 }
m30000| Fri Feb 22 12:30:31.871 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 47123.0 } -> { _id: 47585.0 }
m30001| Fri Feb 22 12:30:31.881 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30001| Fri Feb 22 12:30:31.881 [conn4] moveChunk setting version to: 104|0||51276475bd1f99446659365c
m30000| Fri Feb 22 12:30:31.881 [conn11] Waiting for commit to finish
m30000| Fri Feb 22 12:30:31.881 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 47123.0 } -> { _id: 47585.0 }
m30000| Fri Feb 22 12:30:31.881 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 47123.0 } -> { _id: 47585.0 }
m30000| Fri Feb 22 12:30:31.881 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:31-512764e7c49297cf54df566f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536231881), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 47123.0 }, max: { _id: 47585.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 44, step4 of 5: 0, step5 of 5: 12 } }
m30001| Fri Feb 22 12:30:31.891 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 47123.0 }, max: { _id: 47585.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 }
m30001| Fri Feb 22 12:30:31.891 [conn4] moveChunk updating self version to: 104|1||51276475bd1f99446659365c through { _id: 47585.0 } -> { _id: 48047.0 } for collection 'test.bar'
m30001| Fri Feb 22 12:30:31.892 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:31-512764e799334798f3e4804d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536231892), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 47123.0 }, max: { _id: 47585.0 }, from: "shard0001", to: "shard0000" } }
m30001| Fri Feb 22 12:30:31.892 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:31.892 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:31.892 [conn4] forking for cleanup of chunk data
m30001| Fri Feb 22 12:30:31.892 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section
m30001| Fri Feb 22 12:30:31.892 [conn4] MigrateFromStatus::done Global lock acquired
m30001| Fri Feb 22 12:30:31.892 [cleanupOldData-512764e799334798f3e4804e] (start) waiting to cleanup test.bar from { _id: 47123.0 } -> { _id: 47585.0 }, # cursors remaining: 0
m30001| Fri Feb 22 12:30:31.892 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked.
m30001| Fri Feb 22 12:30:31.892 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:31-512764e799334798f3e4804f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536231892), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 47123.0 }, max: { _id: 47585.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } }
m30999| Fri Feb 22 12:30:31.892 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:30:31.895 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 598 version: 104|1||51276475bd1f99446659365c based on: 103|1||51276475bd1f99446659365c
m30999| Fri Feb 22 12:30:31.895 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:30:31.896 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked.
m30999| Fri Feb 22 12:30:31.896 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 104000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 598
m30999| Fri Feb 22 12:30:31.896 [conn1] setShardVersion success: { oldVersion: Timestamp 103000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 }
m30001| Fri Feb 22 12:30:31.912 [cleanupOldData-512764e799334798f3e4804e] waiting to remove documents for test.bar from { _id: 47123.0 } -> { _id: 47585.0 }
m30001| Fri Feb 22 12:30:31.912 [cleanupOldData-512764e799334798f3e4804e] moveChunk starting delete for: test.bar from { _id: 47123.0 } -> { _id: 47585.0 }
m30001| Fri Feb 22 12:30:31.950 [cleanupOldData-512764e799334798f3e4804e] moveChunk deleted 462 documents for test.bar from { _id: 47123.0 } -> { _id: 47585.0 }
5000
m30999| Fri Feb 22 12:30:32.896 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:30:32.897 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 )
m30999| Fri Feb 22 12:30:32.897 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:30:32 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512764e8bd1f9944665936c4" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512764e7bd1f9944665936c3" } }
m30999| Fri Feb 22 12:30:32.897 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e8bd1f9944665936c4
m30999| Fri Feb 22 12:30:32.897 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:30:32.897 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:30:32.897 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:30:32.899 [Balancer] shard0001 has more chunks me:54 best: shard0000:53
m30999| Fri Feb 22 12:30:32.899 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:30:32.899 [Balancer] donor : shard0001 chunks on 54
m30999| Fri Feb 22 12:30:32.899 [Balancer] receiver : shard0000 chunks on 53
m30999| Fri Feb 22 12:30:32.899 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:32.902 [Balancer] shard0001 has more chunks me:114 best: shard0000:103
m30999| Fri Feb 22 12:30:32.902 [Balancer] collection : test.bar
m30999| Fri Feb 22 12:30:32.902 [Balancer] donor : shard0001 chunks on 114
m30999| Fri Feb 22 12:30:32.902 [Balancer] receiver : shard0000 chunks on 103
m30999| Fri Feb 22 12:30:32.902 [Balancer] threshold : 2
m30999| Fri Feb 22 12:30:32.902 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_47585.0", lastmod: Timestamp 104000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 47585.0 }, max: { _id: 48047.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag []
m30999| Fri Feb 22 12:30:32.902 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 104|1||000000000000000000000000min: { _id: 47585.0 }max: { _id: 48047.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000
m30001| Fri Feb 22 12:30:32.902 [conn4] warning: secondaryThrottle selected but no replication
m30001| Fri Feb 22 12:30:32.902 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 47585.0 }, max: { _id: 48047.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_47585.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false }
m30001| Fri Feb 22 12:30:32.903 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e899334798f3e48050 m30001| Fri Feb 22 12:30:32.903 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:32-512764e899334798f3e48051", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536232903), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 47585.0 }, max: { _id: 48047.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:32.904 [conn4] moveChunk request accepted at version 104|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:32.905 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:32.905 [migrateThread] starting receiving-end of migration of chunk { _id: 47585.0 } -> { _id: 48047.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:32.916 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47585.0 }, max: { _id: 48047.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 81, clonedBytes: 84483, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:32.926 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47585.0 }, max: { _id: 48047.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 207, clonedBytes: 215901, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:32.936 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47585.0 }, max: { _id: 48047.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 325, clonedBytes: 338975, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:32.946 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: 
"localhost:30001", min: { _id: 47585.0 }, max: { _id: 48047.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 437, clonedBytes: 455791, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:32.949 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:32.949 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 47585.0 } -> { _id: 48047.0 } m30000| Fri Feb 22 12:30:32.951 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 47585.0 } -> { _id: 48047.0 } m30001| Fri Feb 22 12:30:32.962 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 47585.0 }, max: { _id: 48047.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:32.963 [conn4] moveChunk setting version to: 105|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:32.963 [conn11] Waiting for commit to finish 6000 m30000| Fri Feb 22 12:30:32.972 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 47585.0 } -> { _id: 48047.0 } m30000| Fri Feb 22 12:30:32.972 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 47585.0 } -> { _id: 48047.0 } m30000| Fri Feb 22 12:30:32.972 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:32-512764e8c49297cf54df5670", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536232972), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 47585.0 }, max: { _id: 48047.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 42, step4 of 5: 0, step5 of 5: 23 } } m30001| Fri Feb 22 12:30:32.973 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 47585.0 }, max: { _id: 
48047.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:32.973 [conn4] moveChunk updating self version to: 105|1||51276475bd1f99446659365c through { _id: 48047.0 } -> { _id: 48509.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:32.974 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:32-512764e899334798f3e48052", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536232974), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 47585.0 }, max: { _id: 48047.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:32.974 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:32.974 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:32.974 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:32.974 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:32.974 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:32.974 [cleanupOldData-512764e899334798f3e48053] (start) waiting to cleanup test.bar from { _id: 47585.0 } -> { _id: 48047.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:32.974 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:32.974 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:32-512764e899334798f3e48054", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536232974), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 47585.0 }, max: { _id: 48047.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:32.974 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:32.977 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 599 version: 105|1||51276475bd1f99446659365c based on: 104|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:32.977 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:32.977 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:32.978 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 105000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 599 m30999| Fri Feb 22 12:30:32.979 [conn1] setShardVersion success: { oldVersion: Timestamp 104000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:32.994 [cleanupOldData-512764e899334798f3e48053] waiting to remove documents for test.bar from { _id: 47585.0 } -> { _id: 48047.0 } m30001| Fri Feb 22 12:30:32.994 [cleanupOldData-512764e899334798f3e48053] moveChunk starting delete for: test.bar from { _id: 47585.0 } -> { _id: 48047.0 } m30001| Fri Feb 22 12:30:33.028 [cleanupOldData-512764e899334798f3e48053] moveChunk deleted 462 documents for test.bar from { _id: 47585.0 } -> { _id: 48047.0 } 7000 m30999| Fri Feb 22 12:30:33.978 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 
12:30:33.978 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:33.979 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:33 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764e9bd1f9944665936c5" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764e8bd1f9944665936c4" } } m30999| Fri Feb 22 12:30:33.979 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764e9bd1f9944665936c5 m30999| Fri Feb 22 12:30:33.979 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:33.979 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:33.979 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:33.982 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:33.982 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:33.982 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:33.982 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:33.982 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:33.984 [Balancer] shard0001 has more chunks me:113 best: shard0000:104 m30999| Fri Feb 22 12:30:33.984 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:33.984 [Balancer] donor : shard0001 chunks on 113 m30999| Fri Feb 22 12:30:33.984 [Balancer] receiver : shard0000 chunks on 104 m30999| Fri Feb 22 12:30:33.984 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:33.984 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_48047.0", lastmod: 
Timestamp 105000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 48047.0 }, max: { _id: 48509.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:33.985 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 105|1||000000000000000000000000min: { _id: 48047.0 }max: { _id: 48509.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:33.985 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:33.985 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 48047.0 }, max: { _id: 48509.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_48047.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:33.986 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764e999334798f3e48055 m30001| Fri Feb 22 12:30:33.986 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:33-512764e999334798f3e48056", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536233986), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 48047.0 }, max: { _id: 48509.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:33.987 [conn4] moveChunk request accepted at version 105|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:33.988 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:33.989 [migrateThread] starting receiving-end of migration of chunk { _id: 48047.0 } -> { _id: 48509.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:33.999 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 
48047.0 }, max: { _id: 48509.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 92, clonedBytes: 95956, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:34.009 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48047.0 }, max: { _id: 48509.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 211, clonedBytes: 220073, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:34.019 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48047.0 }, max: { _id: 48509.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 329, clonedBytes: 343147, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:34.030 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48047.0 }, max: { _id: 48509.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 458, clonedBytes: 477694, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:34.030 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:34.030 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 48047.0 } -> { _id: 48509.0 } m30000| Fri Feb 22 12:30:34.033 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 48047.0 } -> { _id: 48509.0 } m30001| Fri Feb 22 12:30:34.046 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48047.0 }, max: { _id: 48509.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:34.046 [conn4] moveChunk setting version to: 106|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:34.046 [conn11] Waiting 
for commit to finish m30000| Fri Feb 22 12:30:34.053 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 48047.0 } -> { _id: 48509.0 } m30000| Fri Feb 22 12:30:34.053 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 48047.0 } -> { _id: 48509.0 } m30000| Fri Feb 22 12:30:34.053 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:34-512764eac49297cf54df5671", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536234053), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 48047.0 }, max: { _id: 48509.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 40, step4 of 5: 0, step5 of 5: 23 } } m30001| Fri Feb 22 12:30:34.056 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 48047.0 }, max: { _id: 48509.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:34.056 [conn4] moveChunk updating self version to: 106|1||51276475bd1f99446659365c through { _id: 48509.0 } -> { _id: 48971.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:34.058 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:34-512764ea99334798f3e48057", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536234058), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 48047.0 }, max: { _id: 48509.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:34.058 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:34.058 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:34.058 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:34.058 [conn4] MigrateFromStatus::done About to acquire global write 
lock to exit critical section m30001| Fri Feb 22 12:30:34.058 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:34.058 [cleanupOldData-512764ea99334798f3e48058] (start) waiting to cleanup test.bar from { _id: 48047.0 } -> { _id: 48509.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:34.058 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:34.058 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:34-512764ea99334798f3e48059", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536234058), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 48047.0 }, max: { _id: 48509.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:34.058 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:34.061 [Balancer] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 600 version: 106|1||51276475bd1f99446659365c based on: 105|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:34.062 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:34.062 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 106000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 600 m30999| Fri Feb 22 12:30:34.062 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:34.063 [conn1] setShardVersion success: { oldVersion: Timestamp 105000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:34.078 [cleanupOldData-512764ea99334798f3e48058] waiting to remove documents for test.bar from { _id: 48047.0 } -> { _id: 48509.0 } m30001| Fri Feb 22 12:30:34.078 [cleanupOldData-512764ea99334798f3e48058] moveChunk starting delete for: test.bar from { _id: 48047.0 } -> { _id: 48509.0 } m30001| Fri Feb 22 12:30:34.107 [cleanupOldData-512764ea99334798f3e48058] moveChunk deleted 462 documents for test.bar from { _id: 48047.0 } -> { _id: 48509.0 } m30999| Fri Feb 22 12:30:35.063 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:35.063 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:35.064 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:35 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764ebbd1f9944665936c6" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764e9bd1f9944665936c5" } } m30999| Fri Feb 22 12:30:35.064 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764ebbd1f9944665936c6 m30999| Fri Feb 22 12:30:35.064 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:35.064 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:35.064 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:35.067 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 
12:30:35.067 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:35.067 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:35.067 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:35.067 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:35.069 [Balancer] shard0001 has more chunks me:112 best: shard0000:105 m30999| Fri Feb 22 12:30:35.069 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:35.069 [Balancer] donor : shard0001 chunks on 112 m30999| Fri Feb 22 12:30:35.069 [Balancer] receiver : shard0000 chunks on 105 m30999| Fri Feb 22 12:30:35.069 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:35.069 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_48509.0", lastmod: Timestamp 106000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:35.070 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 106|1||000000000000000000000000min: { _id: 48509.0 }max: { _id: 48971.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:35.070 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:35.070 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 48509.0 }, max: { _id: 48971.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_48509.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:35.071 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764eb99334798f3e4805a m30001| Fri Feb 22 12:30:35.071 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:35-512764eb99334798f3e4805b", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536235071), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 48509.0 }, max: { _id: 48971.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:35.073 [conn4] moveChunk request accepted at version 106|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:35.074 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:35.074 [migrateThread] starting receiving-end of migration of chunk { _id: 48509.0 } -> { _id: 48971.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:35.084 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 83, clonedBytes: 86569, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:35.095 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 199, clonedBytes: 207557, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:35.105 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 326, clonedBytes: 340018, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 8000 m30001| Fri Feb 22 12:30:35.115 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 455, clonedBytes: 474565, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:35.116 [migrateThread] Waiting for replication to catch up before entering critical 
section m30000| Fri Feb 22 12:30:35.116 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 48509.0 } -> { _id: 48971.0 } m30000| Fri Feb 22 12:30:35.119 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 48509.0 } -> { _id: 48971.0 } m30001| Fri Feb 22 12:30:35.132 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:35.132 [conn4] moveChunk setting version to: 107|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:35.132 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:35.139 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 48509.0 } -> { _id: 48971.0 } m30000| Fri Feb 22 12:30:35.139 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 48509.0 } -> { _id: 48971.0 } m30000| Fri Feb 22 12:30:35.139 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:35-512764ebc49297cf54df5672", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536235139), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 48509.0 }, max: { _id: 48971.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 41, step4 of 5: 0, step5 of 5: 23 } } m30001| Fri Feb 22 12:30:35.142 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 48509.0 }, max: { _id: 48971.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:35.142 [conn4] moveChunk updating self version to: 107|1||51276475bd1f99446659365c through { _id: 48971.0 } -> { _id: 49433.0 } for collection 'test.bar' m30001| 
Fri Feb 22 12:30:35.143 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:35-512764eb99334798f3e4805c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536235143), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 48509.0 }, max: { _id: 48971.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:35.143 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:35.143 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:35.143 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:35.143 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:35.143 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:35.143 [cleanupOldData-512764eb99334798f3e4805d] (start) waiting to cleanup test.bar from { _id: 48509.0 } -> { _id: 48971.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:35.144 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:35.144 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:35-512764eb99334798f3e4805e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536235144), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 48509.0 }, max: { _id: 48971.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 1, step4 of 6: 57, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:35.144 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:35.147 [Balancer] ChunkManager: time to load chunks for test.bar: 2ms sequenceNumber: 601 version: 107|1||51276475bd1f99446659365c based on: 106|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:35.147 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:35.147 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 107000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 601 m30999| Fri Feb 22 12:30:35.148 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:35.148 [conn1] setShardVersion success: { oldVersion: Timestamp 106000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:35.163 [cleanupOldData-512764eb99334798f3e4805d] waiting to remove documents for test.bar from { _id: 48509.0 } -> { _id: 48971.0 } m30001| Fri Feb 22 12:30:35.163 [cleanupOldData-512764eb99334798f3e4805d] moveChunk starting delete for: test.bar from { _id: 48509.0 } -> { _id: 48971.0 } m30001| Fri Feb 22 12:30:35.192 [cleanupOldData-512764eb99334798f3e4805d] moveChunk deleted 462 documents for test.bar from { _id: 48509.0 } -> { _id: 48971.0 } m30999| Fri Feb 22 12:30:36.158 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:36.158 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:36.158 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:36 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764ecbd1f9944665936c7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764ebbd1f9944665936c6" } } m30999| Fri Feb 22 12:30:36.159 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764ecbd1f9944665936c7 m30999| Fri Feb 22 12:30:36.159 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:36.159 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:36.159 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:36.161 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 
12:30:36.161 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:36.161 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:36.161 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:36.161 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:36.163 [Balancer] shard0001 has more chunks me:111 best: shard0000:106 m30999| Fri Feb 22 12:30:36.163 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:36.163 [Balancer] donor : shard0001 chunks on 111 m30999| Fri Feb 22 12:30:36.163 [Balancer] receiver : shard0000 chunks on 106 m30999| Fri Feb 22 12:30:36.163 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:36.163 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_48971.0", lastmod: Timestamp 107000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:36.164 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 107|1||000000000000000000000000min: { _id: 48971.0 }max: { _id: 49433.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:36.164 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:36.164 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 48971.0 }, max: { _id: 49433.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_48971.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:36.165 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ec99334798f3e4805f m30001| Fri Feb 22 12:30:36.165 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:36-512764ec99334798f3e48060", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536236165), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 48971.0 }, max: { _id: 49433.0 }, from: "shard0001", to: "shard0000" } } 9000 m30001| Fri Feb 22 12:30:36.166 [conn4] moveChunk request accepted at version 107|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:36.166 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:36.167 [migrateThread] starting receiving-end of migration of chunk { _id: 48971.0 } -> { _id: 49433.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:36.177 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 87, clonedBytes: 90741, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:36.187 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 211, clonedBytes: 220073, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:36.197 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 338, clonedBytes: 352534, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:36.207 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 440, clonedBytes: 458920, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:36.209 [migrateThread] Waiting for replication to catch up before entering critical 
section m30000| Fri Feb 22 12:30:36.209 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 48971.0 } -> { _id: 49433.0 } m30000| Fri Feb 22 12:30:36.210 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 48971.0 } -> { _id: 49433.0 } m30001| Fri Feb 22 12:30:36.223 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:36.224 [conn4] moveChunk setting version to: 108|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:36.224 [conn11] Waiting for commit to finish m30000| Fri Feb 22 12:30:36.231 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 48971.0 } -> { _id: 49433.0 } m30000| Fri Feb 22 12:30:36.231 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 48971.0 } -> { _id: 49433.0 } m30000| Fri Feb 22 12:30:36.231 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:36-512764ecc49297cf54df5673", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536236231), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 48971.0 }, max: { _id: 49433.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 42, step4 of 5: 0, step5 of 5: 21 } } m30001| Fri Feb 22 12:30:36.234 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 48971.0 }, max: { _id: 49433.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:36.234 [conn4] moveChunk updating self version to: 108|1||51276475bd1f99446659365c through { _id: 49433.0 } -> { _id: 49895.0 } for collection 'test.bar' m30001| 
Fri Feb 22 12:30:36.234 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:36-512764ec99334798f3e48061", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536236234), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 48971.0 }, max: { _id: 49433.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:36.235 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:36.235 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:36.235 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:36.235 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:36.235 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:36.235 [cleanupOldData-512764ec99334798f3e48062] (start) waiting to cleanup test.bar from { _id: 48971.0 } -> { _id: 49433.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:36.235 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. 
m30001| Fri Feb 22 12:30:36.235 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:36-512764ec99334798f3e48063", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536236235), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 48971.0 }, max: { _id: 49433.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:36.235 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:36.237 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 602 version: 108|1||51276475bd1f99446659365c based on: 107|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:36.238 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:36.238 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:30:36.239 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 108000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 602 m30999| Fri Feb 22 12:30:36.239 [conn1] setShardVersion success: { oldVersion: Timestamp 107000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:36.255 [cleanupOldData-512764ec99334798f3e48062] waiting to remove documents for test.bar from { _id: 48971.0 } -> { _id: 49433.0 } m30001| Fri Feb 22 12:30:36.255 [cleanupOldData-512764ec99334798f3e48062] moveChunk starting delete for: test.bar from { _id: 48971.0 } -> { _id: 49433.0 } m30001| Fri Feb 22 12:30:36.283 [cleanupOldData-512764ec99334798f3e48062] moveChunk deleted 462 documents for test.bar from { _id: 48971.0 } -> { _id: 49433.0 } 10000 m30999| Fri Feb 22 12:30:37.238 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 
12:30:37.239 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:37.239 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:37 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764edbd1f9944665936c8" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764ecbd1f9944665936c7" } } m30999| Fri Feb 22 12:30:37.240 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764edbd1f9944665936c8 m30999| Fri Feb 22 12:30:37.240 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:37.240 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:37.240 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:37.242 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:37.242 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:37.242 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:37.242 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:37.242 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:37.244 [Balancer] shard0001 has more chunks me:110 best: shard0000:107 m30999| Fri Feb 22 12:30:37.244 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:37.244 [Balancer] donor : shard0001 chunks on 110 m30999| Fri Feb 22 12:30:37.244 [Balancer] receiver : shard0000 chunks on 107 m30999| Fri Feb 22 12:30:37.244 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:37.244 [Balancer] ns: test.bar going to move { _id: "test.bar-_id_49433.0", lastmod: 
Timestamp 108000|1, lastmodEpoch: ObjectId('51276475bd1f99446659365c'), ns: "test.bar", min: { _id: 49433.0 }, max: { _id: 49895.0 }, shard: "shard0001" } from: shard0001 to: shard0000 tag [] m30999| Fri Feb 22 12:30:37.244 [Balancer] moving chunk ns: test.bar moving ( ns:test.barshard: shard0001:localhost:30001lastmod: 108|1||000000000000000000000000min: { _id: 49433.0 }max: { _id: 49895.0 }) shard0001:localhost:30001 -> shard0000:localhost:30000 m30001| Fri Feb 22 12:30:37.244 [conn4] warning: secondaryThrottle selected but no replication m30001| Fri Feb 22 12:30:37.244 [conn4] received moveChunk request: { moveChunk: "test.bar", from: "localhost:30001", to: "localhost:30000", fromShard: "shard0001", toShard: "shard0000", min: { _id: 49433.0 }, max: { _id: 49895.0 }, maxChunkSizeBytes: 1048576, shardId: "test.bar-_id_49433.0", configdb: "localhost:30000", secondaryThrottle: true, waitForDelete: false } m30001| Fri Feb 22 12:30:37.245 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' acquired, ts : 512764ed99334798f3e48064 m30001| Fri Feb 22 12:30:37.245 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:37-512764ed99334798f3e48065", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536237245), what: "moveChunk.start", ns: "test.bar", details: { min: { _id: 49433.0 }, max: { _id: 49895.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:37.247 [conn4] moveChunk request accepted at version 108|1||51276475bd1f99446659365c m30001| Fri Feb 22 12:30:37.247 [conn4] moveChunk number of documents: 462 m30000| Fri Feb 22 12:30:37.248 [migrateThread] starting receiving-end of migration of chunk { _id: 49433.0 } -> { _id: 49895.0 } for collection test.bar from localhost:30001 (0 slaves detected) m30001| Fri Feb 22 12:30:37.258 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 
49433.0 }, max: { _id: 49895.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 90, clonedBytes: 93870, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:37.268 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 49433.0 }, max: { _id: 49895.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 212, clonedBytes: 221116, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:37.278 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 49433.0 }, max: { _id: 49895.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 317, clonedBytes: 330631, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:37.288 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 49433.0 }, max: { _id: 49895.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 410, clonedBytes: 427630, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30000| Fri Feb 22 12:30:37.294 [migrateThread] Waiting for replication to catch up before entering critical section m30000| Fri Feb 22 12:30:37.294 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 49433.0 } -> { _id: 49895.0 } m30000| Fri Feb 22 12:30:37.296 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 49433.0 } -> { _id: 49895.0 } m30001| Fri Feb 22 12:30:37.304 [conn4] moveChunk data transfer progress: { active: true, ns: "test.bar", from: "localhost:30001", min: { _id: 49433.0 }, max: { _id: 49895.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30001| Fri Feb 22 12:30:37.304 [conn4] moveChunk setting version to: 109|0||51276475bd1f99446659365c m30000| Fri Feb 22 12:30:37.305 [conn11] Waiting 
for commit to finish m30000| Fri Feb 22 12:30:37.307 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.bar' { _id: 49433.0 } -> { _id: 49895.0 } m30000| Fri Feb 22 12:30:37.307 [migrateThread] migrate commit flushed to journal for 'test.bar' { _id: 49433.0 } -> { _id: 49895.0 } m30000| Fri Feb 22 12:30:37.307 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:37-512764edc49297cf54df5674", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536237307), what: "moveChunk.to", ns: "test.bar", details: { min: { _id: 49433.0 }, max: { _id: 49895.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 45, step4 of 5: 0, step5 of 5: 12 } } m30001| Fri Feb 22 12:30:37.315 [conn4] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.bar", from: "localhost:30001", min: { _id: 49433.0 }, max: { _id: 49895.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 462, clonedBytes: 481866, catchup: 0, steady: 0 }, ok: 1.0 } m30001| Fri Feb 22 12:30:37.315 [conn4] moveChunk updating self version to: 109|1||51276475bd1f99446659365c through { _id: 49895.0 } -> { _id: 50357.0 } for collection 'test.bar' m30001| Fri Feb 22 12:30:37.316 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:37-512764ed99334798f3e48066", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536237315), what: "moveChunk.commit", ns: "test.bar", details: { min: { _id: 49433.0 }, max: { _id: 49895.0 }, from: "shard0001", to: "shard0000" } } m30001| Fri Feb 22 12:30:37.316 [conn4] MigrateFromStatus::done About to acquire global write lock to exit critical section m30001| Fri Feb 22 12:30:37.316 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:37.316 [conn4] forking for cleanup of chunk data m30001| Fri Feb 22 12:30:37.316 [conn4] MigrateFromStatus::done About to acquire global write 
lock to exit critical section m30001| Fri Feb 22 12:30:37.316 [conn4] MigrateFromStatus::done Global lock acquired m30001| Fri Feb 22 12:30:37.316 [cleanupOldData-512764ed99334798f3e48067] (start) waiting to cleanup test.bar from { _id: 49433.0 } -> { _id: 49895.0 }, # cursors remaining: 0 m30001| Fri Feb 22 12:30:37.316 [conn4] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:30001:1361536118:20113' unlocked. m30001| Fri Feb 22 12:30:37.316 [conn4] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:30:37-512764ed99334798f3e48068", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "127.0.0.1:57660", time: new Date(1361536237316), what: "moveChunk.from", ns: "test.bar", details: { min: { _id: 49433.0 }, max: { _id: 49895.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 56, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:30:37.316 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:30:37.318 [Balancer] ChunkManager: time to load chunks for test.bar: 1ms sequenceNumber: 603 version: 109|1||51276475bd1f99446659365c based on: 108|1||51276475bd1f99446659365c m30999| Fri Feb 22 12:30:37.319 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:37.319 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:37.319 [conn1] setShardVersion shard0000 localhost:30000 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 109000|0, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0000", shardHost: "localhost:30000" } 0x117f320 603 m30999| Fri Feb 22 12:30:37.320 [conn1] setShardVersion success: { oldVersion: Timestamp 108000|0, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } m30001| Fri Feb 22 12:30:37.336 [cleanupOldData-512764ed99334798f3e48067] waiting to remove documents for test.bar from { _id: 49433.0 } -> { _id: 49895.0 } m30001| Fri Feb 22 12:30:37.336 [cleanupOldData-512764ed99334798f3e48067] moveChunk starting delete for: test.bar from { _id: 49433.0 } -> { _id: 49895.0 } m30001| Fri Feb 22 12:30:37.364 [cleanupOldData-512764ed99334798f3e48067] moveChunk deleted 462 documents for test.bar from { _id: 49433.0 } -> { _id: 49895.0 } 11000 m30999| Fri Feb 22 12:30:38.320 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:38.320 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:38.320 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:38 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764eebd1f9944665936c9" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764edbd1f9944665936c8" } } m30999| Fri Feb 22 12:30:38.321 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' 
acquired, ts : 512764eebd1f9944665936c9 m30999| Fri Feb 22 12:30:38.321 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:38.321 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:38.321 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:38.324 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:38.324 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:38.324 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:38.324 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:38.324 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:38.326 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:30:38.326 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:38.326 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:30:38.326 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:30:38.326 [Balancer] threshold : 2 m30999| Fri Feb 22 12:30:38.326 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:30:38.326 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:38.327 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
m30999| Fri Feb 22 12:30:38.817 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:30:38 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms 12000 13000 14000 15000 16000 17000 m30999| Fri Feb 22 12:30:44.327 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:44.328 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:44.328 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:44 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764f4bd1f9944665936ca" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764eebd1f9944665936c9" } } m30999| Fri Feb 22 12:30:44.328 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764f4bd1f9944665936ca m30999| Fri Feb 22 12:30:44.328 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:44.328 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:44.328 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:44.331 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:44.331 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:44.331 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:44.331 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:44.331 [Balancer] threshold : 8 m30999| Fri Feb 22 12:30:44.333 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 
12:30:44.333 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:44.333 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:30:44.333 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:30:44.333 [Balancer] threshold : 8 m30999| Fri Feb 22 12:30:44.333 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:30:44.333 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:44.334 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 18000 19000 20000 21000 22000 23000 m30999| Fri Feb 22 12:30:50.335 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:50.335 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:50.335 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:50 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512764fabd1f9944665936cb" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764f4bd1f9944665936ca" } } m30999| Fri Feb 22 12:30:50.336 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 512764fabd1f9944665936cb m30999| Fri Feb 22 12:30:50.336 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:50.336 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:50.336 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:50.338 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:50.338 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:50.338 
[Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:50.338 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:50.338 [Balancer] threshold : 8 m30999| Fri Feb 22 12:30:50.340 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:30:50.340 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:50.340 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:30:50.340 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:30:50.340 [Balancer] threshold : 8 m30999| Fri Feb 22 12:30:50.340 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:30:50.340 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:50.341 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 24000 25000 26000 27000 28000 m30999| Fri Feb 22 12:30:56.341 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:30:56.342 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:30:56.342 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:30:56 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276500bd1f9944665936cc" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512764fabd1f9944665936cb" } } m30999| Fri Feb 22 12:30:56.342 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276500bd1f9944665936cc m30999| Fri Feb 22 12:30:56.343 [Balancer] *** start balancing round m30999| Fri Feb 22 12:30:56.343 
[Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:30:56.343 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:30:56.345 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:30:56.345 [Balancer] collection : test.foo m30999| Fri Feb 22 12:30:56.345 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:30:56.345 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:30:56.345 [Balancer] threshold : 8 m30999| Fri Feb 22 12:30:56.347 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:30:56.347 [Balancer] collection : test.bar m30999| Fri Feb 22 12:30:56.347 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:30:56.347 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:30:56.347 [Balancer] threshold : 8 m30999| Fri Feb 22 12:30:56.347 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:30:56.347 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:30:56.347 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
29000 30000 31000 32000 33000 34000 35000 m30999| Fri Feb 22 12:31:02.348 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:02.348 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:02.349 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:02 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276506bd1f9944665936cd" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276500bd1f9944665936cc" } } m30999| Fri Feb 22 12:31:02.349 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276506bd1f9944665936cd m30999| Fri Feb 22 12:31:02.349 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:02.349 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:02.349 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:02.352 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:02.352 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:02.352 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:02.352 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:02.352 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:02.354 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:02.354 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:02.354 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:02.354 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:02.354 
[Balancer] threshold : 8 m30999| Fri Feb 22 12:31:02.354 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:02.354 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:02.354 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 36000 37000 38000 39000 40000 m30999| Fri Feb 22 12:31:08.355 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:08.356 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:08.356 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:08 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127650cbd1f9944665936ce" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276506bd1f9944665936cd" } } m30999| Fri Feb 22 12:31:08.357 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127650cbd1f9944665936ce m30999| Fri Feb 22 12:31:08.357 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:08.357 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:08.357 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:08.360 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:08.360 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:08.360 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:08.360 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:08.360 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:08.362 [Balancer] shard0001 has more 
chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:08.362 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:08.362 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:08.362 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:08.362 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:08.362 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:08.362 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:08.363 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 41000 m30999| Fri Feb 22 12:31:08.818 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:31:08 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms 42000 43000 44000 45000 46000 m30999| Fri Feb 22 12:31:14.364 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:14.364 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:14.364 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:14 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276512bd1f9944665936cf" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127650cbd1f9944665936ce" } } m30999| Fri Feb 22 12:31:14.365 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276512bd1f9944665936cf m30999| Fri Feb 22 12:31:14.365 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:14.365 
[Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:14.365 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:14.367 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:14.367 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:14.367 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:14.367 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:14.367 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:14.370 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:14.370 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:14.370 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:14.370 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:14.370 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:14.370 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:14.370 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:14.370 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
47000 48000 49000 m30999| Fri Feb 22 12:31:17.520 [conn1] setShardVersion shard0001 localhost:30001 test.bar { setShardVersion: "test.bar", configdb: "localhost:30000", version: Timestamp 109000|1, versionEpoch: ObjectId('51276475bd1f99446659365c'), serverID: ObjectId('512763febd1f994466593646'), shard: "shard0001", shardHost: "localhost:30001" } 0x11800a0 603 m30999| Fri Feb 22 12:31:17.520 [conn1] setShardVersion success: { oldVersion: Timestamp 99000|1, oldVersionEpoch: ObjectId('51276475bd1f99446659365c'), ok: 1.0 } 50000 51000 52000 m30999| Fri Feb 22 12:31:20.371 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:20.372 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:20.372 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:20 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276518bd1f9944665936d0" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276512bd1f9944665936cf" } } m30999| Fri Feb 22 12:31:20.373 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276518bd1f9944665936d0 m30999| Fri Feb 22 12:31:20.373 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:20.373 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:20.373 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:20.375 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:20.375 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:20.375 [Balancer] donor : 
shard0001 chunks on 54 m30999| Fri Feb 22 12:31:20.375 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:20.375 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:20.377 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:20.377 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:20.377 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:20.377 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:20.377 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:20.377 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:20.377 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:20.377 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 53000 54000 55000 56000 57000 58000 m30999| Fri Feb 22 12:31:26.378 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:26.378 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:26.378 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:26 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127651ebd1f9944665936d1" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276518bd1f9944665936d0" } } m30999| Fri Feb 22 12:31:26.379 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127651ebd1f9944665936d1 m30999| Fri Feb 22 12:31:26.379 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:26.379 [Balancer] 
waitForDelete: 0 m30999| Fri Feb 22 12:31:26.379 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:26.381 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:26.381 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:26.381 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:26.381 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:26.381 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:26.383 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:26.383 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:26.383 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:26.383 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:26.383 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:26.383 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:26.383 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:26.383 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
59000 60000 61000 62000 63000 64000 m30999| Fri Feb 22 12:31:32.384 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:32.385 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:32.385 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:32 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276524bd1f9944665936d2" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127651ebd1f9944665936d1" } } m30999| Fri Feb 22 12:31:32.386 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276524bd1f9944665936d2 m30999| Fri Feb 22 12:31:32.386 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:32.386 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:32.386 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:32.388 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:32.388 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:32.388 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:32.388 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:32.388 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:32.390 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:32.390 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:32.390 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:32.390 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:32.390 [Balancer] 
threshold : 8 m30999| Fri Feb 22 12:31:32.391 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:32.391 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:32.391 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 65000 66000 67000 68000 69000 70000 m30999| Fri Feb 22 12:31:38.392 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:38.392 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:38.392 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:38 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127652abd1f9944665936d3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276524bd1f9944665936d2" } } m30999| Fri Feb 22 12:31:38.393 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127652abd1f9944665936d3 m30999| Fri Feb 22 12:31:38.393 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:38.393 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:38.393 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:38.395 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:38.395 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:38.395 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:38.395 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:38.395 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:38.397 [Balancer] shard0001 has more chunks 
me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:38.397 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:38.397 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:38.397 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:38.397 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:38.397 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:38.397 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:38.397 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. m30999| Fri Feb 22 12:31:38.820 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:31:38 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms m30000| Fri Feb 22 12:31:38.929 [conn4] command admin.$cmd command: { writebacklisten: ObjectId('512763febd1f994466593646') } ntoreturn:1 keyUpdates:0 reslen:44 300001ms m30999| Fri Feb 22 12:31:38.929 [WriteBackListener-localhost:30000] writebacklisten result: { noop: true, ok: 1.0 } m30001| Fri Feb 22 12:31:38.930 [conn2] command admin.$cmd command: { writebacklisten: ObjectId('512763febd1f994466593646') } ntoreturn:1 keyUpdates:0 reslen:44 300001ms m30999| Fri Feb 22 12:31:38.930 [WriteBackListener-localhost:30001] writebacklisten result: { noop: true, ok: 1.0 } 71000 72000 73000 74000 75000 m30999| Fri Feb 22 12:31:44.398 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:44.398 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:44.398 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:44 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276530bd1f9944665936d4" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127652abd1f9944665936d3" } } m30999| Fri Feb 22 12:31:44.399 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276530bd1f9944665936d4 m30999| Fri Feb 22 12:31:44.399 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:44.399 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:44.399 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:44.401 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:44.401 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:44.401 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:44.401 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:44.401 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:44.403 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:44.403 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:44.403 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:44.403 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:44.403 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:44.403 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:44.403 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:44.403 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
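The repeated lock handoffs in the rounds above follow one pattern: acquisition succeeds only when the stored `balancer` document is in `"state" : 0`, and the new holder writes `"state" : 1` together with a fresh `ts` ObjectId, which it later uses to unlock. A minimal in-memory sketch of that compare-and-swap pattern (the `DistLock` class and `ts-N` tokens are illustrative stand-ins, not MongoDB's actual distributed-lock code):

```python
import itertools
import threading

_counter = itertools.count(1)

class DistLock:
    """Toy model of the 'balancer' lock document seen in the log."""

    def __init__(self):
        self._mu = threading.Lock()
        self.doc = {"_id": "balancer", "state": 0, "ts": None}

    def try_acquire(self, who, why):
        # Compare-and-swap: succeed only if nobody holds the lock (state 0).
        with self._mu:
            if self.doc["state"] != 0:
                return None
            ts = f"ts-{next(_counter)}"  # stands in for the $oid ts in the log
            self.doc.update(state=1, ts=ts, who=who, why=why)
            return ts

    def release(self, ts):
        # Only the holder (matching ts) may unlock, mirroring "unlocked. ts : ..."
        with self._mu:
            if self.doc["state"] == 1 and self.doc["ts"] == ts:
                self.doc["state"] = 0

lock = DistLock()
ts = lock.try_acquire("bs-smartos:30999:Balancer", "doing balance round")
assert ts is not None                                  # acquired, state is now 1
assert lock.try_acquire("other-mongos", "balance") is None  # held -> fails
lock.release(ts)                                       # back to state 0
```

Each balancer round in the log is one acquire/release cycle of exactly this shape, with the previous round's `ts` visible as the `state: 0` document being replaced.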
76000 77000 78000 79000 80000 81000 m30999| Fri Feb 22 12:31:50.404 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:50.404 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:50.405 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:50 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276536bd1f9944665936d5" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276530bd1f9944665936d4" } } m30999| Fri Feb 22 12:31:50.405 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276536bd1f9944665936d5 m30999| Fri Feb 22 12:31:50.405 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:50.405 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:50.405 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:50.408 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:50.408 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:50.408 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:50.408 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:50.408 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:50.411 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:50.411 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:50.411 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:50.411 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:50.411 [Balancer] 
threshold : 8 m30999| Fri Feb 22 12:31:50.411 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:50.411 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:50.411 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 82000 83000 84000 85000 86000 87000 m30999| Fri Feb 22 12:31:56.412 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:31:56.412 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:31:56.413 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:31:56 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127653cbd1f9944665936d6" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276536bd1f9944665936d5" } } m30999| Fri Feb 22 12:31:56.413 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 5127653cbd1f9944665936d6 m30999| Fri Feb 22 12:31:56.413 [Balancer] *** start balancing round m30999| Fri Feb 22 12:31:56.413 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:31:56.413 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:31:56.415 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:31:56.415 [Balancer] collection : test.foo m30999| Fri Feb 22 12:31:56.415 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:31:56.415 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:31:56.415 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:56.417 [Balancer] shard0001 has more chunks 
me:109 best: shard0000:108 m30999| Fri Feb 22 12:31:56.417 [Balancer] collection : test.bar m30999| Fri Feb 22 12:31:56.417 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:31:56.417 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:31:56.417 [Balancer] threshold : 8 m30999| Fri Feb 22 12:31:56.417 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:31:56.417 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:31:56.418 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 88000 89000 90000 91000 92000 m30999| Fri Feb 22 12:32:02.418 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:32:02.419 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:32:02.419 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:32:02 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276542bd1f9944665936d7" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127653cbd1f9944665936d6" } } m30999| Fri Feb 22 12:32:02.420 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276542bd1f9944665936d7 m30999| Fri Feb 22 12:32:02.420 [Balancer] *** start balancing round m30999| Fri Feb 22 12:32:02.420 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:32:02.420 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:32:02.422 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:32:02.422 [Balancer] collection : 
test.foo m30999| Fri Feb 22 12:32:02.422 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:32:02.422 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:32:02.422 [Balancer] threshold : 8 m30999| Fri Feb 22 12:32:02.425 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:32:02.425 [Balancer] collection : test.bar m30999| Fri Feb 22 12:32:02.425 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:32:02.425 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:32:02.425 [Balancer] threshold : 8 m30999| Fri Feb 22 12:32:02.425 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:32:02.425 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:32:02.425 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 93000 94000 95000 96000 97000 98000 m30999| Fri Feb 22 12:32:08.426 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:32:08.426 [Balancer] trying to acquire new distributed lock for balancer on localhost:30000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838 ) m30999| Fri Feb 22 12:32:08.427 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:32:08 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "51276548bd1f9944665936d8" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276542bd1f9944665936d7" } } m30999| Fri Feb 22 12:32:08.427 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' acquired, ts : 51276548bd1f9944665936d8 m30999| Fri Feb 22 12:32:08.427 [Balancer] *** start 
balancing round m30999| Fri Feb 22 12:32:08.427 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:32:08.427 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:32:08.429 [Balancer] shard0001 has more chunks me:54 best: shard0000:53 m30999| Fri Feb 22 12:32:08.429 [Balancer] collection : test.foo m30999| Fri Feb 22 12:32:08.429 [Balancer] donor : shard0001 chunks on 54 m30999| Fri Feb 22 12:32:08.429 [Balancer] receiver : shard0000 chunks on 53 m30999| Fri Feb 22 12:32:08.429 [Balancer] threshold : 8 m30999| Fri Feb 22 12:32:08.431 [Balancer] shard0001 has more chunks me:109 best: shard0000:108 m30999| Fri Feb 22 12:32:08.431 [Balancer] collection : test.bar m30999| Fri Feb 22 12:32:08.431 [Balancer] donor : shard0001 chunks on 109 m30999| Fri Feb 22 12:32:08.431 [Balancer] receiver : shard0000 chunks on 108 m30999| Fri Feb 22 12:32:08.431 [Balancer] threshold : 8 m30999| Fri Feb 22 12:32:08.431 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:32:08.431 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:32:08.431 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838' unlocked. 
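Every round above ends in "no need to move any chunk" because the decision reduces to comparing the donor/receiver chunk counts against the printed threshold. A sketch of that rule using the exact numbers from the log (the function name is illustrative, not MongoDB's internal API):

```python
def should_move_chunk(donor_chunks, receiver_chunks, threshold):
    """Migrate only when the imbalance between the most- and least-loaded
    shards meets the threshold printed by the Balancer."""
    return donor_chunks - receiver_chunks >= threshold

# Values from the log: test.bar has donor shard0001 with 109 chunks and
# receiver shard0000 with 108, threshold 8 -> imbalance of 1, no migration.
print(should_move_chunk(109, 108, 8))  # False: "no need to move any chunk"
print(should_move_chunk(54, 53, 8))    # False for test.foo as well
```

With both collections only one chunk out of balance against a threshold of 8, the cluster is considered balanced, which is why the test's final counts settle at 108/109 and 53/54.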
m30999| Fri Feb 22 12:32:08.822 [LockPinger] cluster localhost:30000 pinged successfully at Fri Feb 22 12:32:08 2013 by distributed lock pinger 'localhost:30000/bs-smartos-x86-64-1.10gen.cc:30999:1361535998:16838', sleeping for 30000ms
 99000
--- Sharding Status ---
  sharding version: {
	"_id" : 1,
	"version" : 3,
	"minCompatibleVersion" : 3,
	"currentVersion" : 4,
	"clusterId" : ObjectId("512763febd1f994466593644")
}
  shards:
	{  "_id" : "shard0000",  "host" : "localhost:30000" }
	{  "_id" : "shard0001",  "host" : "localhost:30001" }
  databases:
	{  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
	{  "_id" : "test",  "partitioned" : true,  "primary" : "shard0001" }
		test.bar
			shard key: { "_id" : 1 }
			chunks:
				shard0000	108
				shard0001	109
			too many chunks to print, use verbose if you want to force print
		test.foo
			shard key: { "_id" : 1 }
			chunks:
				shard0000	53
				shard0001	54
			too many chunks to print, use verbose if you want to force print

ShardingTest input: { "shard0000" : 53, "shard0001" : 54 } min: 53 max: 54
ShardingTest input: { "shard0000" : 108, "shard0001" : 109 } min: 108 max: 109
m30999| Fri Feb 22 12:32:10.473 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m30000| Fri Feb 22 12:32:10.473 [conn3] end connection 127.0.0.1:41716 (12 connections now open)
m30000| Fri Feb 22 12:32:10.473 [conn5] end connection 127.0.0.1:42877 (12 connections now open)
m30000| Fri Feb 22 12:32:10.473 [conn6] end connection 127.0.0.1:46374 (12 connections now open)
m30001| Fri Feb 22 12:32:10.473 [conn3] end connection 127.0.0.1:61557 (4 connections now open)
m30001| Fri Feb 22 12:32:10.473 [conn4] end connection 127.0.0.1:57660 (4 connections now open)
m30000| Fri Feb 22 12:32:10.473 [conn7] end connection 127.0.0.1:47249 (10 connections now open)
Fri Feb 22 12:32:11.473 shell: stopped mongo program on port 30999
m30000| Fri Feb 22 12:32:11.473 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Fri Feb 22 12:32:11.473 [interruptThread] now exiting 
m30000| Fri Feb 22 12:32:11.473 dbexit: m30000| Fri Feb 22 12:32:11.473 [interruptThread] shutdown: going to close listening sockets... m30000| Fri Feb 22 12:32:11.473 [interruptThread] closing listening socket: 12 m30000| Fri Feb 22 12:32:11.473 [interruptThread] closing listening socket: 13 m30000| Fri Feb 22 12:32:11.473 [interruptThread] closing listening socket: 14 m30000| Fri Feb 22 12:32:11.474 [interruptThread] removing socket file: /tmp/mongodb-30000.sock m30000| Fri Feb 22 12:32:11.474 [interruptThread] shutdown: going to flush diaglog... m30000| Fri Feb 22 12:32:11.474 [interruptThread] shutdown: going to close sockets... m30000| Fri Feb 22 12:32:11.474 [interruptThread] shutdown: waiting for fs preallocator... m30000| Fri Feb 22 12:32:11.474 [interruptThread] shutdown: lock for final commit... m30000| Fri Feb 22 12:32:11.474 [interruptThread] shutdown: final commit... m30000| Fri Feb 22 12:32:11.474 [conn1] end connection 127.0.0.1:42078 (8 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn2] end connection 127.0.0.1:64252 (8 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn9] end connection 127.0.0.1:33730 (8 connections now open) m30001| Fri Feb 22 12:32:11.474 [conn5] end connection 127.0.0.1:59513 (2 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn8] end connection 127.0.0.1:42759 (8 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn10] end connection 127.0.0.1:41381 (8 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn13] end connection 127.0.0.1:47848 (8 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn11] end connection 127.0.0.1:34115 (8 connections now open) m30000| Fri Feb 22 12:32:11.474 [conn12] end connection 127.0.0.1:46366 (5 connections now open) m30000| Fri Feb 22 12:32:11.536 [interruptThread] shutdown: closing all files... m30000| Fri Feb 22 12:32:11.594 [interruptThread] closeAllFiles() finished m30000| Fri Feb 22 12:32:11.594 [interruptThread] journalCleanup... 
m30000| Fri Feb 22 12:32:11.594 [interruptThread] removeJournalFiles m30000| Fri Feb 22 12:32:11.595 dbexit: really exiting now Fri Feb 22 12:32:12.473 shell: stopped mongo program on port 30000 Fri Feb 22 12:32:12.651 [conn20] end connection 127.0.0.1:38771 (0 connections now open) m30001| Fri Feb 22 12:32:12.473 got signal 15 (Terminated), will terminate after current cmd ends m30001| Fri Feb 22 12:32:12.474 [interruptThread] now exiting m30001| Fri Feb 22 12:32:12.474 dbexit: m30001| Fri Feb 22 12:32:12.474 [interruptThread] shutdown: going to close listening sockets... m30001| Fri Feb 22 12:32:12.474 [interruptThread] closing listening socket: 15 m30001| Fri Feb 22 12:32:12.474 [interruptThread] closing listening socket: 16 m30001| Fri Feb 22 12:32:12.474 [interruptThread] closing listening socket: 17 m30001| Fri Feb 22 12:32:12.474 [interruptThread] removing socket file: /tmp/mongodb-30001.sock m30001| Fri Feb 22 12:32:12.474 [interruptThread] shutdown: going to flush diaglog... m30001| Fri Feb 22 12:32:12.474 [interruptThread] shutdown: going to close sockets... m30001| Fri Feb 22 12:32:12.474 [interruptThread] shutdown: waiting for fs preallocator... m30001| Fri Feb 22 12:32:12.474 [interruptThread] shutdown: lock for final commit... m30001| Fri Feb 22 12:32:12.474 [interruptThread] shutdown: final commit... m30001| Fri Feb 22 12:32:12.474 [conn1] end connection 127.0.0.1:51379 (1 connection now open) m30001| Fri Feb 22 12:32:12.569 [interruptThread] shutdown: closing all files... m30001| Fri Feb 22 12:32:12.649 [interruptThread] closeAllFiles() finished m30001| Fri Feb 22 12:32:12.649 [interruptThread] journalCleanup... 
m30001| Fri Feb 22 12:32:12.649 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:32:12.650 dbexit: really exiting now
Fri Feb 22 12:32:13.473 shell: stopped mongo program on port 30001
*** ShardingTest multcollections completed successfully in 335.254 seconds ***
                5.5916 minutes
Fri Feb 22 12:32:13.597 [initandlisten] connection accepted from 127.0.0.1:38021 #21 (1 connection now open)
Fri Feb 22 12:32:13.597 [conn21] end connection 127.0.0.1:38021 (0 connections now open)

 *******************************************
         Test : sharding_multiple_ns_rs.js ...
      Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_multiple_ns_rs.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_multiple_ns_rs.js";TestData.testFile = "sharding_multiple_ns_rs.js";TestData.testName = "sharding_multiple_ns_rs";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
         Date : Fri Feb 22 12:32:13 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:32:13.744 [initandlisten] connection accepted from 127.0.0.1:48378 #22 (1 connection now open)
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ]	31100 number
{
	"useHostName" : true,
	"oplogSize" : 40,
	"keyFile" : undefined,
	"port" : 31100,
	"noprealloc" : "",
	"smallfiles" : "",
	"rest" : "",
	"replSet" : "blah-rs0",
	"dbpath" : "$set-$node",
	"useHostname" : true,
	"noJournalPrealloc" : undefined,
	"pathOpts" : {
		"testName" : "blah",
		"shard" : 0,
		"node" : 0,
		"set" : "blah-rs0"
	},
	"restart" : undefined
}
ReplSetTest Starting.... 
Resetting db path '/data/db/blah-rs0-0' Fri Feb 22 12:32:13.763 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet blah-rs0 --dbpath /data/db/blah-rs0-0 --setParameter enableTestCommands=1 m31100| note: noprealloc may hurt performance in many applications m31100| Fri Feb 22 12:32:13.853 [initandlisten] MongoDB starting : pid=14449 port=31100 dbpath=/data/db/blah-rs0-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31100| Fri Feb 22 12:32:13.854 [initandlisten] m31100| Fri Feb 22 12:32:13.854 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31100| Fri Feb 22 12:32:13.854 [initandlisten] ** uses to detect impending page faults. m31100| Fri Feb 22 12:32:13.854 [initandlisten] ** This may result in slower performance for certain use cases m31100| Fri Feb 22 12:32:13.854 [initandlisten] m31100| Fri Feb 22 12:32:13.854 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31100| Fri Feb 22 12:32:13.854 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31100| Fri Feb 22 12:32:13.854 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31100| Fri Feb 22 12:32:13.854 [initandlisten] allocator: system m31100| Fri Feb 22 12:32:13.854 [initandlisten] options: { dbpath: "/data/db/blah-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "blah-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31100| Fri Feb 22 12:32:13.854 [initandlisten] journal dir=/data/db/blah-rs0-0/journal m31100| Fri Feb 22 12:32:13.854 [initandlisten] recover : no journal files present, no recovery needed m31100| Fri Feb 22 12:32:13.871 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/local.ns, filling with zeroes... 
m31100| Fri Feb 22 12:32:13.871 [FileAllocator] creating directory /data/db/blah-rs0-0/_tmp m31100| Fri Feb 22 12:32:13.871 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/local.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 12:32:13.871 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/local.0, filling with zeroes... m31100| Fri Feb 22 12:32:13.871 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/local.0, size: 16MB, took 0 secs m31100| Fri Feb 22 12:32:13.875 [initandlisten] waiting for connections on port 31100 m31100| Fri Feb 22 12:32:13.875 [websvr] admin web console waiting for connections on port 32100 m31100| Fri Feb 22 12:32:13.877 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31100| Fri Feb 22 12:32:13.877 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31100| Fri Feb 22 12:32:13.966 [initandlisten] connection accepted from 127.0.0.1:61107 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31101, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "blah-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "blah", "shard" : 0, "node" : 1, "set" : "blah-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/blah-rs0-1' Fri Feb 22 12:32:13.974 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet blah-rs0 --dbpath /data/db/blah-rs0-1 --setParameter enableTestCommands=1 m31101| note: noprealloc may hurt performance in many applications m31101| Fri Feb 22 12:32:14.067 [initandlisten] MongoDB starting : pid=14450 port=31101 dbpath=/data/db/blah-rs0-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31101| Fri Feb 22 12:32:14.067 [initandlisten] m31101| Fri Feb 22 12:32:14.067 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31101| Fri Feb 22 12:32:14.067 [initandlisten] ** uses to detect impending page faults. m31101| Fri Feb 22 12:32:14.067 [initandlisten] ** This may result in slower performance for certain use cases m31101| Fri Feb 22 12:32:14.067 [initandlisten] m31101| Fri Feb 22 12:32:14.067 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31101| Fri Feb 22 12:32:14.067 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31101| Fri Feb 22 12:32:14.067 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31101| Fri Feb 22 12:32:14.067 [initandlisten] allocator: system m31101| Fri Feb 22 12:32:14.068 [initandlisten] options: { dbpath: "/data/db/blah-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "blah-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31101| Fri Feb 22 12:32:14.068 [initandlisten] journal dir=/data/db/blah-rs0-1/journal m31101| Fri Feb 22 12:32:14.068 [initandlisten] recover : no journal files present, no recovery needed m31101| Fri Feb 22 12:32:14.083 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/local.ns, filling with zeroes... 
m31101| Fri Feb 22 12:32:14.083 [FileAllocator] creating directory /data/db/blah-rs0-1/_tmp m31101| Fri Feb 22 12:32:14.083 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/local.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 12:32:14.083 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/local.0, filling with zeroes... m31101| Fri Feb 22 12:32:14.084 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/local.0, size: 16MB, took 0 secs m31101| Fri Feb 22 12:32:14.087 [initandlisten] waiting for connections on port 31101 m31101| Fri Feb 22 12:32:14.087 [websvr] admin web console waiting for connections on port 32101 m31101| Fri Feb 22 12:32:14.090 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31101| Fri Feb 22 12:32:14.090 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31101| Fri Feb 22 12:32:14.176 [initandlisten] connection accepted from 127.0.0.1:52786 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101 ] ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31102, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "blah-rs0", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "blah", "shard" : 0, "node" : 2, "set" : "blah-rs0" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/blah-rs0-2' Fri Feb 22 12:32:14.181 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet blah-rs0 --dbpath /data/db/blah-rs0-2 --setParameter enableTestCommands=1 m31102| note: noprealloc may hurt performance in many applications m31102| Fri Feb 22 12:32:14.273 [initandlisten] MongoDB starting : pid=14454 port=31102 dbpath=/data/db/blah-rs0-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31102| Fri Feb 22 12:32:14.273 [initandlisten] m31102| Fri Feb 22 12:32:14.273 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31102| Fri Feb 22 12:32:14.273 [initandlisten] ** uses to detect impending page faults. m31102| Fri Feb 22 12:32:14.273 [initandlisten] ** This may result in slower performance for certain use cases m31102| Fri Feb 22 12:32:14.273 [initandlisten] m31102| Fri Feb 22 12:32:14.273 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31102| Fri Feb 22 12:32:14.273 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31102| Fri Feb 22 12:32:14.273 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31102| Fri Feb 22 12:32:14.273 [initandlisten] allocator: system m31102| Fri Feb 22 12:32:14.273 [initandlisten] options: { dbpath: "/data/db/blah-rs0-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "blah-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31102| Fri Feb 22 12:32:14.273 [initandlisten] journal dir=/data/db/blah-rs0-2/journal m31102| Fri Feb 22 12:32:14.274 [initandlisten] recover : no journal files present, no recovery needed m31102| Fri Feb 22 12:32:14.287 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/local.ns, filling with zeroes... 
m31102| Fri Feb 22 12:32:14.287 [FileAllocator] creating directory /data/db/blah-rs0-2/_tmp m31102| Fri Feb 22 12:32:14.288 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/local.ns, size: 16MB, took 0 secs m31102| Fri Feb 22 12:32:14.288 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/local.0, filling with zeroes... m31102| Fri Feb 22 12:32:14.288 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/local.0, size: 16MB, took 0 secs m31102| Fri Feb 22 12:32:14.291 [initandlisten] waiting for connections on port 31102 m31102| Fri Feb 22 12:32:14.291 [websvr] admin web console waiting for connections on port 32102 m31102| Fri Feb 22 12:32:14.294 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31102| Fri Feb 22 12:32:14.294 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31102| Fri Feb 22 12:32:14.382 [initandlisten] connection accepted from 127.0.0.1:37448 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101, connection to bs-smartos-x86-64-1.10gen.cc:31102 ] { "replSetInitiate" : { "_id" : "blah-rs0", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31100" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31101" }, { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31102" } ] } } m31100| Fri Feb 22 12:32:14.386 [conn1] replSet replSetInitiate admin command received from client m31100| Fri Feb 22 12:32:14.387 [conn1] replSet replSetInitiate config object parses ok, 3 members specified m31100| Fri Feb 22 12:32:14.387 [initandlisten] connection accepted from 165.225.128.186:48267 #2 (2 connections now open) m31101| Fri Feb 22 12:32:14.388 [initandlisten] connection accepted from 165.225.128.186:33987 #2 (2 connections now open) m31102| Fri Feb 22 12:32:14.390 [initandlisten] connection accepted from 165.225.128.186:62596 #2 
(2 connections now open) m31100| Fri Feb 22 12:32:14.391 [conn1] replSet replSetInitiate all members seem up m31100| Fri Feb 22 12:32:14.391 [conn1] ****** m31100| Fri Feb 22 12:32:14.391 [conn1] creating replication oplog of size: 40MB... m31100| Fri Feb 22 12:32:14.391 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/local.1, filling with zeroes... m31100| Fri Feb 22 12:32:14.392 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/local.1, size: 64MB, took 0 secs m31100| Fri Feb 22 12:32:14.409 [conn2] end connection 165.225.128.186:48267 (1 connection now open) m31100| Fri Feb 22 12:32:14.412 [conn1] ****** m31100| Fri Feb 22 12:32:14.412 [conn1] replSet info saving a newer config version to local.system.replset m31100| Fri Feb 22 12:32:14.438 [conn1] replSet saveConfigLocally done m31100| Fri Feb 22 12:32:14.438 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31100| Fri Feb 22 12:32:23.878 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:23.878 [rsStart] replSet STARTUP2 m31100| Fri Feb 22 12:32:23.878 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31100| Fri Feb 22 12:32:23.878 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31101| Fri Feb 22 12:32:24.090 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:24.091 [initandlisten] connection accepted from 165.225.128.186:36170 #3 (2 connections now open) m31101| Fri Feb 22 12:32:24.092 [initandlisten] connection accepted from 165.225.128.186:39795 #3 (3 connections now open) m31101| Fri Feb 22 12:32:24.092 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 12:32:24.092 [rsStart] replSet got config version 1 from a remote, saving locally m31101| Fri Feb 22 12:32:24.092 [rsStart] replSet info saving a newer config 
version to local.system.replset m31101| Fri Feb 22 12:32:24.095 [rsStart] replSet saveConfigLocally done m31101| Fri Feb 22 12:32:24.095 [rsStart] replSet STARTUP2 m31101| Fri Feb 22 12:32:24.096 [rsSync] ****** m31101| Fri Feb 22 12:32:24.096 [rsSync] creating replication oplog of size: 40MB... m31101| Fri Feb 22 12:32:24.096 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/local.1, filling with zeroes... m31101| Fri Feb 22 12:32:24.096 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/local.1, size: 64MB, took 0 secs m31101| Fri Feb 22 12:32:24.105 [conn3] end connection 165.225.128.186:39795 (2 connections now open) m31101| Fri Feb 22 12:32:24.109 [rsSync] ****** m31101| Fri Feb 22 12:32:24.109 [rsSync] replSet initial sync pending m31101| Fri Feb 22 12:32:24.109 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31102| Fri Feb 22 12:32:24.294 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:24.295 [initandlisten] connection accepted from 165.225.128.186:44594 #4 (3 connections now open) m31102| Fri Feb 22 12:32:24.296 [initandlisten] connection accepted from 165.225.128.186:51873 #3 (3 connections now open) m31102| Fri Feb 22 12:32:24.296 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 12:32:24.297 [rsStart] replSet got config version 1 from a remote, saving locally m31102| Fri Feb 22 12:32:24.297 [rsStart] replSet info saving a newer config version to local.system.replset m31102| Fri Feb 22 12:32:24.300 [rsStart] replSet saveConfigLocally done m31102| Fri Feb 22 12:32:24.300 [rsStart] replSet STARTUP2 m31102| Fri Feb 22 12:32:24.301 [rsSync] ****** m31102| Fri Feb 22 12:32:24.301 [rsSync] creating replication oplog of size: 40MB... m31102| Fri Feb 22 12:32:24.301 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/local.1, filling with zeroes... 
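The replSetInitiate document logged above has the standard replica set config shape: an `_id` naming the set and a `members` array of `{ _id, host }` entries. A minimal sketch of building that document programmatically (hostnames and ports are the ones this log used; `makeReplSetConfig` is a hypothetical helper, not part of the test harness):

```javascript
// Sketch (illustrative, not harness code): build the replSetInitiate
// config document seen in the log above.
function makeReplSetConfig(setName, hosts) {
  return {
    _id: setName,
    // Member _id values are just the array index here, matching the log.
    members: hosts.map(function (host, i) {
      return { _id: i, host: host };
    })
  };
}

const cfg = makeReplSetConfig("blah-rs0", [
  "bs-smartos-x86-64-1.10gen.cc:31100",
  "bs-smartos-x86-64-1.10gen.cc:31101",
  "bs-smartos-x86-64-1.10gen.cc:31102"
]);

// In the mongo shell this document would be submitted as:
//   db.adminCommand({ replSetInitiate: cfg })
// which produces the "replSetInitiate admin command received" lines above.
```

As the log shows, the node that receives the command parses the config ("config object parses ok, 3 members specified"), checks that all members are reachable, then saves it to local.system.replset; the other members fetch it remotely ("got config version 1 from a remote, saving locally").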
m31102| Fri Feb 22 12:32:24.301 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/local.1, size: 64MB, took 0 secs m31102| Fri Feb 22 12:32:24.310 [conn3] end connection 165.225.128.186:51873 (2 connections now open) m31102| Fri Feb 22 12:32:24.310 [rsSync] ****** m31102| Fri Feb 22 12:32:24.310 [rsSync] replSet initial sync pending m31102| Fri Feb 22 12:32:24.310 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31100| Fri Feb 22 12:32:24.879 [rsSync] replSet SECONDARY m31100| Fri Feb 22 12:32:25.878 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 thinks that we are down m31100| Fri Feb 22 12:32:25.878 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31100| Fri Feb 22 12:32:25.878 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down m31100| Fri Feb 22 12:32:25.878 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2 m31100| Fri Feb 22 12:32:25.879 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31100| Fri Feb 22 12:32:25.879 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable' m31101| Fri Feb 22 12:32:26.093 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31101| Fri Feb 22 12:32:26.093 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31102| Fri Feb 22 12:32:26.093 [initandlisten] connection accepted from 165.225.128.186:44978 #4 (3 connections now open) m31101| Fri Feb 22 12:32:26.093 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down m31101| Fri Feb 22 12:32:26.093 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up m31101| Fri Feb 22 12:32:26.093 [rsHealthPoll] replSet member 
bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2 m31102| Fri Feb 22 12:32:26.297 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up m31102| Fri Feb 22 12:32:26.297 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY m31101| Fri Feb 22 12:32:26.297 [initandlisten] connection accepted from 165.225.128.186:35245 #4 (3 connections now open) m31102| Fri Feb 22 12:32:26.297 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up m31102| Fri Feb 22 12:32:26.297 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2 m31100| Fri Feb 22 12:32:31.879 [rsMgr] replSet info electSelf 0 m31102| Fri Feb 22 12:32:31.880 [conn2] replSet RECOVERING m31101| Fri Feb 22 12:32:31.880 [conn2] replSet RECOVERING m31102| Fri Feb 22 12:32:31.880 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0) m31101| Fri Feb 22 12:32:31.880 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0) m31101| Fri Feb 22 12:32:32.094 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING m31102| Fri Feb 22 12:32:32.298 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING m31100| Fri Feb 22 12:32:32.879 [rsMgr] replSet PRIMARY m31100| Fri Feb 22 12:32:33.879 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING m31100| Fri Feb 22 12:32:33.879 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING m31101| Fri Feb 22 12:32:34.094 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31102| Fri Feb 22 12:32:34.298 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY m31101| Fri Feb 22 12:32:37.880 [conn2] end connection 165.225.128.186:33987 (2 connections now open) m31101| Fri Feb 22 12:32:37.880 [initandlisten] connection accepted from 
165.225.128.186:40340 #5 (3 connections now open) m31100| Fri Feb 22 12:32:40.095 [conn3] end connection 165.225.128.186:36170 (2 connections now open) m31100| Fri Feb 22 12:32:40.095 [initandlisten] connection accepted from 165.225.128.186:52909 #5 (3 connections now open) m31101| Fri Feb 22 12:32:40.109 [rsSync] replSet initial sync pending m31101| Fri Feb 22 12:32:40.109 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:40.110 [initandlisten] connection accepted from 165.225.128.186:65372 #6 (4 connections now open) m31101| Fri Feb 22 12:32:40.115 [rsSync] build index local.me { _id: 1 } m31101| Fri Feb 22 12:32:40.118 [rsSync] build index done. scanned 0 total records. 0.002 secs m31101| Fri Feb 22 12:32:40.119 [rsSync] build index local.replset.minvalid { _id: 1 } m31101| Fri Feb 22 12:32:40.120 [rsSync] build index done. scanned 0 total records. 0 secs m31101| Fri Feb 22 12:32:40.120 [rsSync] replSet initial sync drop all databases m31101| Fri Feb 22 12:32:40.120 [rsSync] dropAllDatabasesExceptLocal 1 m31101| Fri Feb 22 12:32:40.120 [rsSync] replSet initial sync clone all databases m31101| Fri Feb 22 12:32:40.120 [rsSync] replSet initial sync data copy, starting syncup m31101| Fri Feb 22 12:32:40.120 [rsSync] oplog sync 1 of 3 m31101| Fri Feb 22 12:32:40.121 [rsSync] oplog sync 2 of 3 m31101| Fri Feb 22 12:32:40.121 [rsSync] replSet initial sync building indexes m31101| Fri Feb 22 12:32:40.121 [rsSync] oplog sync 3 of 3 m31101| Fri Feb 22 12:32:40.121 [rsSync] replSet initial sync finishing up m31101| Fri Feb 22 12:32:40.129 [rsSync] replSet set minValid=5127654e:1 m31101| Fri Feb 22 12:32:40.134 [rsSync] replSet initial sync done m31100| Fri Feb 22 12:32:40.134 [conn6] end connection 165.225.128.186:65372 (3 connections now open) m31100| Fri Feb 22 12:32:40.299 [conn4] end connection 165.225.128.186:44594 (2 connections now open) m31100| Fri Feb 22 12:32:40.299 [initandlisten] connection accepted from 
165.225.128.186:61690 #7 (3 connections now open) m31102| Fri Feb 22 12:32:40.310 [rsSync] replSet initial sync pending m31102| Fri Feb 22 12:32:40.310 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:40.311 [initandlisten] connection accepted from 165.225.128.186:36623 #8 (4 connections now open) m31102| Fri Feb 22 12:32:40.319 [rsSync] build index local.me { _id: 1 } m31102| Fri Feb 22 12:32:40.323 [rsSync] build index done. scanned 0 total records. 0.004 secs m31102| Fri Feb 22 12:32:40.325 [rsSync] build index local.replset.minvalid { _id: 1 } m31102| Fri Feb 22 12:32:40.326 [rsSync] build index done. scanned 0 total records. 0.001 secs m31102| Fri Feb 22 12:32:40.326 [rsSync] replSet initial sync drop all databases m31102| Fri Feb 22 12:32:40.326 [rsSync] dropAllDatabasesExceptLocal 1 m31102| Fri Feb 22 12:32:40.326 [rsSync] replSet initial sync clone all databases m31102| Fri Feb 22 12:32:40.326 [rsSync] replSet initial sync data copy, starting syncup m31102| Fri Feb 22 12:32:40.326 [rsSync] oplog sync 1 of 3 m31102| Fri Feb 22 12:32:40.327 [rsSync] oplog sync 2 of 3 m31102| Fri Feb 22 12:32:40.327 [rsSync] replSet initial sync building indexes m31102| Fri Feb 22 12:32:40.327 [rsSync] oplog sync 3 of 3 m31102| Fri Feb 22 12:32:40.327 [rsSync] replSet initial sync finishing up m31102| Fri Feb 22 12:32:40.335 [rsSync] replSet set minValid=5127654e:1 m31102| Fri Feb 22 12:32:40.341 [rsSync] replSet initial sync done m31100| Fri Feb 22 12:32:40.342 [conn8] end connection 165.225.128.186:36623 (3 connections now open) m31101| Fri Feb 22 12:32:41.096 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:41.097 [initandlisten] connection accepted from 165.225.128.186:39564 #9 (4 connections now open) m31101| Fri Feb 22 12:32:41.135 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:41.135 [initandlisten] connection accepted 
from 165.225.128.186:42673 #10 (5 connections now open) m31102| Fri Feb 22 12:32:41.301 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:41.302 [initandlisten] connection accepted from 165.225.128.186:44721 #11 (6 connections now open) m31102| Fri Feb 22 12:32:41.342 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:32:41.342 [initandlisten] connection accepted from 165.225.128.186:56172 #12 (7 connections now open) m31101| Fri Feb 22 12:32:42.135 [rsSync] replSet SECONDARY m31100| Fri Feb 22 12:32:42.144 [slaveTracking] build index local.slaves { _id: 1 } m31100| Fri Feb 22 12:32:42.146 [slaveTracking] build index done. scanned 0 total records. 0.002 secs m31102| Fri Feb 22 12:32:42.299 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY m31102| Fri Feb 22 12:32:42.342 [rsSync] replSet SECONDARY m31100| Fri Feb 22 12:32:42.356 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/admin.ns, filling with zeroes... m31100| Fri Feb 22 12:32:42.356 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/admin.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 12:32:42.356 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/admin.0, filling with zeroes... m31100| Fri Feb 22 12:32:42.356 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/admin.0, size: 16MB, took 0 secs m31100| Fri Feb 22 12:32:42.360 [conn1] build index admin.foo { _id: 1 } m31100| Fri Feb 22 12:32:42.361 [conn1] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 12:32:42.363 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/admin.ns, filling with zeroes... m31102| Fri Feb 22 12:32:42.363 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/admin.ns, filling with zeroes... 
m31101| Fri Feb 22 12:32:42.363 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/admin.ns, size: 16MB, took 0 secs m31102| Fri Feb 22 12:32:42.363 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/admin.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 12:32:42.363 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/admin.0, filling with zeroes... m31102| Fri Feb 22 12:32:42.363 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/admin.0, filling with zeroes... m31101| Fri Feb 22 12:32:42.363 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/admin.0, size: 16MB, took 0 secs m31102| Fri Feb 22 12:32:42.363 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/admin.0, size: 16MB, took 0 secs ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361536362000, "i" : 1 } ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361536362000, "i" : 1 } ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101 m31102| Fri Feb 22 12:32:42.367 [repl writer worker 1] build index admin.foo { _id: 1 } m31101| Fri Feb 22 12:32:42.367 [repl writer worker 1] build index admin.foo { _id: 1 } m31102| Fri Feb 22 12:32:42.368 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 12:32:42.368 [repl writer worker 1] build index done. scanned 0 total records. 
0.001 secs ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31102 ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31102, is synced ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361536362000, "i" : 1 } Fri Feb 22 12:32:42.374 starting new replica set monitor for replica set blah-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 12:32:42.374 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set blah-rs0 m31100| Fri Feb 22 12:32:42.374 [initandlisten] connection accepted from 165.225.128.186:41644 #13 (8 connections now open) Fri Feb 22 12:32:42.375 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from blah-rs0/ Fri Feb 22 12:32:42.375 trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set blah-rs0 Fri Feb 22 12:32:42.375 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set blah-rs0 Fri Feb 22 12:32:42.375 trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set blah-rs0 m31100| Fri Feb 22 12:32:42.375 [initandlisten] connection accepted from 165.225.128.186:43441 #14 (9 connections now open) Fri Feb 22 12:32:42.375 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set blah-rs0 Fri Feb 22 12:32:42.375 trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set blah-rs0 m31101| Fri Feb 22 12:32:42.375 [initandlisten] connection accepted from 165.225.128.186:44468 #6 (4 connections now open) Fri Feb 22 12:32:42.376 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set blah-rs0 m31102| Fri Feb 22 12:32:42.376 [initandlisten] connection accepted 
from 165.225.128.186:36395 #5 (4 connections now open) m31100| Fri Feb 22 12:32:42.376 [initandlisten] connection accepted from 165.225.128.186:41835 #15 (10 connections now open) m31100| Fri Feb 22 12:32:42.376 [conn13] end connection 165.225.128.186:41644 (9 connections now open) Fri Feb 22 12:32:42.377 Primary for replica set blah-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:32:42.377 [initandlisten] connection accepted from 165.225.128.186:38623 #7 (5 connections now open) m31102| Fri Feb 22 12:32:42.378 [initandlisten] connection accepted from 165.225.128.186:34197 #6 (5 connections now open) Fri Feb 22 12:32:42.378 replica set monitor for replica set blah-rs0 started, address is blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 12:32:42.378 [ReplicaSetMonitorWatcher] starting Resetting db path '/data/db/blah-config0' Fri Feb 22 12:32:42.382 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/blah-config0 --configsvr --setParameter enableTestCommands=1 m29000| Fri Feb 22 12:32:42.472 [initandlisten] MongoDB starting : pid=14497 port=29000 dbpath=/data/db/blah-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc m29000| Fri Feb 22 12:32:42.472 [initandlisten] m29000| Fri Feb 22 12:32:42.473 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m29000| Fri Feb 22 12:32:42.473 [initandlisten] ** uses to detect impending page faults. 
m29000| Fri Feb 22 12:32:42.473 [initandlisten] ** This may result in slower performance for certain use cases m29000| Fri Feb 22 12:32:42.473 [initandlisten] m29000| Fri Feb 22 12:32:42.473 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m29000| Fri Feb 22 12:32:42.473 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m29000| Fri Feb 22 12:32:42.473 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m29000| Fri Feb 22 12:32:42.473 [initandlisten] allocator: system m29000| Fri Feb 22 12:32:42.473 [initandlisten] options: { configsvr: true, dbpath: "/data/db/blah-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] } m29000| Fri Feb 22 12:32:42.473 [initandlisten] journal dir=/data/db/blah-config0/journal m29000| Fri Feb 22 12:32:42.473 [initandlisten] recover : no journal files present, no recovery needed m29000| Fri Feb 22 12:32:42.488 [FileAllocator] allocating new datafile /data/db/blah-config0/local.ns, filling with zeroes... m29000| Fri Feb 22 12:32:42.488 [FileAllocator] creating directory /data/db/blah-config0/_tmp m29000| Fri Feb 22 12:32:42.488 [FileAllocator] done allocating datafile /data/db/blah-config0/local.ns, size: 16MB, took 0 secs m29000| Fri Feb 22 12:32:42.488 [FileAllocator] allocating new datafile /data/db/blah-config0/local.0, filling with zeroes... m29000| Fri Feb 22 12:32:42.488 [FileAllocator] done allocating datafile /data/db/blah-config0/local.0, size: 16MB, took 0 secs m29000| Fri Feb 22 12:32:42.491 [initandlisten] ****** m29000| Fri Feb 22 12:32:42.491 [initandlisten] creating replication oplog of size: 5MB... 
m29000| Fri Feb 22 12:32:42.495 [initandlisten] ****** m29000| Fri Feb 22 12:32:42.496 [initandlisten] waiting for connections on port 29000 m29000| Fri Feb 22 12:32:42.496 [websvr] admin web console waiting for connections on port 30000 m29000| Fri Feb 22 12:32:42.583 [initandlisten] connection accepted from 127.0.0.1:39481 #1 (1 connection now open) "bs-smartos-x86-64-1.10gen.cc:29000" m29000| Fri Feb 22 12:32:42.584 [initandlisten] connection accepted from 165.225.128.186:35779 #2 (2 connections now open) ShardingTest blah : { "config" : "bs-smartos-x86-64-1.10gen.cc:29000", "shards" : [ connection to blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 ] } Fri Feb 22 12:32:42.588 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb bs-smartos-x86-64-1.10gen.cc:29000 -v --chunkSize 1 --setParameter enableTestCommands=1 m30999| Fri Feb 22 12:32:42.606 warning: running with 1 config server should be done only for testing purposes and is not recommended for production m30999| Fri Feb 22 12:32:42.607 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=14498 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage) m30999| Fri Feb 22 12:32:42.607 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m30999| Fri Feb 22 12:32:42.607 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m30999| Fri Feb 22 12:32:42.607 [mongosMain] options: { chunkSize: 1, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true } m30999| Fri Feb 22 12:32:42.607 [mongosMain] config string : bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 12:32:42.607 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 12:32:42.608 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 
12:32:42.608 [mongosMain] connected connection! m29000| Fri Feb 22 12:32:42.608 [initandlisten] connection accepted from 165.225.128.186:35195 #3 (3 connections now open) m30999| Fri Feb 22 12:32:42.609 BackgroundJob starting: CheckConfigServers m30999| Fri Feb 22 12:32:42.609 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 12:32:42.609 BackgroundJob starting: ConnectBG m29000| Fri Feb 22 12:32:42.609 [initandlisten] connection accepted from 165.225.128.186:48930 #4 (4 connections now open) m30999| Fri Feb 22 12:32:42.609 [mongosMain] connected connection! m29000| Fri Feb 22 12:32:42.610 [conn4] CMD fsync: sync:1 lock:0 m30999| Fri Feb 22 12:32:42.615 [mongosMain] created new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 ) m30999| Fri Feb 22 12:32:42.616 [mongosMain] trying to acquire new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838 ) m30999| Fri Feb 22 12:32:42.616 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838 (sleeping for 30000ms) m30999| Fri Feb 22 12:32:42.616 [mongosMain] inserting initial doc in config.locks for lock configUpgrade m30999| Fri Feb 22 12:32:42.616 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:mongosMain:5758", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:32:42 2013" }, m30999| "why" : "upgrading config database to new format v4", m30999| "ts" : { "$oid" : "5127656a1cbfef1c1fa08d1e" } } m30999| { "_id" : "configUpgrade", m30999| "state" : 0 } 
m29000| Fri Feb 22 12:32:42.616 [FileAllocator] allocating new datafile /data/db/blah-config0/config.ns, filling with zeroes...
m29000| Fri Feb 22 12:32:42.616 [FileAllocator] done allocating datafile /data/db/blah-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 12:32:42.617 [FileAllocator] allocating new datafile /data/db/blah-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 12:32:42.617 [FileAllocator] done allocating datafile /data/db/blah-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 12:32:42.617 [FileAllocator] allocating new datafile /data/db/blah-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 12:32:42.617 [FileAllocator] done allocating datafile /data/db/blah-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 12:32:42.620 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 12:32:42.622 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 12:32:42.622 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 12:32:42.623 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:32:42.624 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 12:32:42 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838', sleeping for 30000ms
m29000| Fri Feb 22 12:32:42.624 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 12:32:42.624 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:32:42.625 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' acquired, ts : 5127656a1cbfef1c1fa08d1e
m30999| Fri Feb 22 12:32:42.627 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:32:42.627 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:32:42.627 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:32:42-5127656a1cbfef1c1fa08d1f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536362627), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 12:32:42.627 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 12:32:42.628 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:32:42.628 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 12:32:42.629 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 12:32:42.629 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:32:42.630 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:32:42-5127656a1cbfef1c1fa08d21", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536362630), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:32:42.630 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:32:42.631 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' unlocked.
m29000| Fri Feb 22 12:32:42.632 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:32:42.633 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:32:42.633 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:32:42.633 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:32:42.633 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:32:42.633 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:32:42.633 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 12:32:42.633 [websvr] admin web console waiting for connections on port 31999
m29000| Fri Feb 22 12:32:42.633 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:32:42.633 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 12:32:42.634 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 12:32:42.635 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 12:32:42.635 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 12:32:42.635 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 12:32:42.636 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:32:42.636 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 12:32:42.637 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:32:42.637 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 12:32:42.638 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:32:42.638 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 12:32:42.639 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:32:42.639 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 12:32:42.639 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 12:32:42.640 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:32:42.640 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:32:42.640 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:32:42
m30999| Fri Feb 22 12:32:42.640 [Balancer] created new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:32:42.640 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:32:42.641 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 12:32:42.641 [conn3] build index config.mongos { _id: 1 }
m29000| Fri Feb 22 12:32:42.641 [initandlisten] connection accepted from 165.225.128.186:49638 #5 (5 connections now open)
m30999| Fri Feb 22 12:32:42.641 [Balancer] connected connection!
m29000| Fri Feb 22 12:32:42.642 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:32:42.642 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:32:42.642 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838 )
m30999| Fri Feb 22 12:32:42.643 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:32:42.643 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:32:42 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "5127656a1cbfef1c1fa08d23" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 12:32:42.644 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' acquired, ts : 5127656a1cbfef1c1fa08d23
m30999| Fri Feb 22 12:32:42.644 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:32:42.644 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:32:42.644 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:32:42.644 [Balancer] no collections to balance
m30999| Fri Feb 22 12:32:42.644 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:32:42.644 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:32:42.644 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' unlocked.
m30999| Fri Feb 22 12:32:42.790 [mongosMain] connection accepted from 127.0.0.1:48722 #1 (1 connection now open)
ShardingTest undefined going to add shard : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.791 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 12:32:42.792 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 12:32:42.793 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:32:42.793 [conn1] put [admin] on: config:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:32:42.793 [conn1] starting new replica set monitor for replica set blah-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.793 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.794 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.794 [conn1] connected connection!
m30999| Fri Feb 22 12:32:42.794 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set blah-rs0
m31100| Fri Feb 22 12:32:42.794 [initandlisten] connection accepted from 165.225.128.186:57960 #16 (10 connections now open)
m30999| Fri Feb 22 12:32:42.794 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536362794), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.795 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from blah-rs0/
m30999| Fri Feb 22 12:32:42.795 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set blah-rs0
m30999| Fri Feb 22 12:32:42.795 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.795 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.795 [conn1] connected connection!
m31100| Fri Feb 22 12:32:42.795 [initandlisten] connection accepted from 165.225.128.186:54323 #17 (11 connections now open)
m30999| Fri Feb 22 12:32:42.795 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set blah-rs0
m30999| Fri Feb 22 12:32:42.795 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set blah-rs0
m30999| Fri Feb 22 12:32:42.795 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.795 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.795 [conn1] connected connection!
m30999| Fri Feb 22 12:32:42.795 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set blah-rs0
m30999| Fri Feb 22 12:32:42.795 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set blah-rs0
m30999| Fri Feb 22 12:32:42.795 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m31101| Fri Feb 22 12:32:42.795 [initandlisten] connection accepted from 165.225.128.186:46045 #8 (6 connections now open)
m30999| Fri Feb 22 12:32:42.795 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.796 [conn1] connected connection!
m30999| Fri Feb 22 12:32:42.796 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set blah-rs0
m31102| Fri Feb 22 12:32:42.796 [initandlisten] connection accepted from 165.225.128.186:40374 #7 (6 connections now open)
m30999| Fri Feb 22 12:32:42.796 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.796 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 12:32:42.796 [initandlisten] connection accepted from 165.225.128.186:40821 #18 (12 connections now open)
m30999| Fri Feb 22 12:32:42.796 [conn1] connected connection!
m30999| Fri Feb 22 12:32:42.796 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.796 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.796 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.796 [conn1] replicaSetChange: shard not found for set: blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.796 [conn1] _check : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31100| Fri Feb 22 12:32:42.796 [conn16] end connection 165.225.128.186:57960 (11 connections now open)
m30999| Fri Feb 22 12:32:42.797 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536362796), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.797 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.797 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.797 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.797 [conn1] Primary for replica set blah-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.797 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536362797), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.797 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.797 [conn1] dbclient_rs nodes[1].ok = false bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.797 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.797 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536362797), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.797 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.797 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.798 [conn1] connected connection!
m31101| Fri Feb 22 12:32:42.798 [initandlisten] connection accepted from 165.225.128.186:56833 #9 (7 connections now open)
m30999| Fri Feb 22 12:32:42.798 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.798 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.798 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.798 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536362798), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.798 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.798 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.798 [conn1] connected connection!
m31102| Fri Feb 22 12:32:42.798 [initandlisten] connection accepted from 165.225.128.186:60523 #8 (7 connections now open)
m30999| Fri Feb 22 12:32:42.799 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.799 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.799 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.799 [conn1] replica set monitor for replica set blah-rs0 started, address is blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.799 BackgroundJob starting: ReplicaSetMonitorWatcher
m30999| Fri Feb 22 12:32:42.799 [ReplicaSetMonitorWatcher] starting
m30999| Fri Feb 22 12:32:42.799 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.799 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 12:32:42.799 [initandlisten] connection accepted from 165.225.128.186:33331 #19 (12 connections now open)
m30999| Fri Feb 22 12:32:42.799 [conn1] connected connection!
m30999| Fri Feb 22 12:32:42.801 [conn1] going to add shard: { _id: "blah-rs0", host: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" }
{ "shardAdded" : "blah-rs0", "ok" : 1 }
m30999| Fri Feb 22 12:32:42.802 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:32:42.803 [conn1] best shard for new allocation is shard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 mapped: 128 writeLock: 0 version: 2.4.0-rc1-pre-
m30999| Fri Feb 22 12:32:42.803 [conn1] put [test] on: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.803 [conn1] enabling sharding on: test
m31100| Fri Feb 22 12:32:42.804 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/test.ns, filling with zeroes...
m31100| Fri Feb 22 12:32:42.805 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/test.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:32:42.805 [FileAllocator] allocating new datafile /data/db/blah-rs0-0/test.0, filling with zeroes...
m31100| Fri Feb 22 12:32:42.805 [FileAllocator] done allocating datafile /data/db/blah-rs0-0/test.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:32:42.809 [conn19] build index test.foo { _id: 1 }
m31100| Fri Feb 22 12:32:42.810 [conn19] build index done. scanned 0 total records. 0.001 secs
m31100| Fri Feb 22 12:32:42.811 [conn19] info: creating collection test.foo on add index
m30999| Fri Feb 22 12:32:42.811 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:32:42.811 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m30999| Fri Feb 22 12:32:42.811 [conn1] going to create 1 chunk(s) for: test.foo using new epoch 5127656a1cbfef1c1fa08d24
m31102| Fri Feb 22 12:32:42.811 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/test.ns, filling with zeroes...
m31101| Fri Feb 22 12:32:42.812 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/test.ns, filling with zeroes...
m31102| Fri Feb 22 12:32:42.812 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/test.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 12:32:42.812 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/test.ns, size: 16MB, took 0 secs
m31102| Fri Feb 22 12:32:42.812 [FileAllocator] allocating new datafile /data/db/blah-rs0-2/test.0, filling with zeroes...
m31101| Fri Feb 22 12:32:42.812 [FileAllocator] allocating new datafile /data/db/blah-rs0-1/test.0, filling with zeroes...
m31102| Fri Feb 22 12:32:42.812 [FileAllocator] done allocating datafile /data/db/blah-rs0-2/test.0, size: 16MB, took 0 secs
m31101| Fri Feb 22 12:32:42.812 [FileAllocator] done allocating datafile /data/db/blah-rs0-1/test.0, size: 16MB, took 0 secs
m30999| Fri Feb 22 12:32:42.813 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 2 version: 1|0||5127656a1cbfef1c1fa08d24 based on: (empty)
m29000| Fri Feb 22 12:32:42.813 [conn3] build index config.collections { _id: 1 }
m29000| Fri Feb 22 12:32:42.815 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:32:42.815 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31100 serverID: 5127656a1cbfef1c1fa08d22
m30999| Fri Feb 22 12:32:42.815 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31101 serverID: 5127656a1cbfef1c1fa08d22
m30999| Fri Feb 22 12:32:42.815 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.815 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.815 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31102 serverID: 5127656a1cbfef1c1fa08d22
m30999| Fri Feb 22 12:32:42.815 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.816 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:42.816 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.816 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.816 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:42.816 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.816 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.816 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:42.816 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] connected connection!
m31101| Fri Feb 22 12:32:42.816 [initandlisten] connection accepted from 165.225.128.186:53061 #10 (8 connections now open)
m31100| Fri Feb 22 12:32:42.816 [initandlisten] connection accepted from 165.225.128.186:57884 #20 (13 connections now open)
m30999| Fri Feb 22 12:32:42.816 [conn1] connected connection!
m31100| Fri Feb 22 12:32:42.816 [initandlisten] connection accepted from 165.225.128.186:61541 #21 (14 connections now open)
m30999| Fri Feb 22 12:32:42.816 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] connected connection!
m30999| Fri Feb 22 12:32:42.816 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.816 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 12:32:42.817 [initandlisten] connection accepted from 165.225.128.186:42094 #9 (8 connections now open)
m30999| Fri Feb 22 12:32:42.817 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] connected connection!
m31102| Fri Feb 22 12:32:42.817 [repl writer worker 1] build index test.foo { _id: 1 }
m31101| Fri Feb 22 12:32:42.817 [repl writer worker 1] build index test.foo { _id: 1 }
m30999| Fri Feb 22 12:32:42.817 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1188bb0 2
m30999| Fri Feb 22 12:32:42.817 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:32:42.817 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), authoritative: true, shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1188bb0 2
m31100| Fri Feb 22 12:32:42.818 [conn21] no current chunk manager found for this shard, will initialize
m29000| Fri Feb 22 12:32:42.818 [initandlisten] connection accepted from 165.225.128.186:34726 #6 (6 connections now open)
m31102| Fri Feb 22 12:32:42.818 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31102| Fri Feb 22 12:32:42.818 [repl writer worker 1] info: creating collection test.foo on add index
m31101| Fri Feb 22 12:32:42.818 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31101| Fri Feb 22 12:32:42.818 [repl writer worker 1] info: creating collection test.foo on add index
m30999| Fri Feb 22 12:32:42.819 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.820 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 156936 splitThreshold: 921
m30999| Fri Feb 22 12:32:42.820 [conn1] chunk not full enough to trigger auto-split no split entry
m31100| Fri Feb 22 12:32:42.821 [conn21] build index test.bar { _id: 1 }
m31100| Fri Feb 22 12:32:42.823 [conn21] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:32:42.824 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 203 splitThreshold: 921
m30999| Fri Feb 22 12:32:42.824 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:32:42.826 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 203 splitThreshold: 921
m30999| Fri Feb 22 12:32:42.826 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:32:42.827 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 203 splitThreshold: 921
m30999| Fri Feb 22 12:32:42.828 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:32:42.829 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 203 splitThreshold: 921
m30999| Fri Feb 22 12:32:42.829 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:32:42.830 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } dataWritten: 203 splitThreshold: 921
m31100| Fri Feb 22 12:32:42.831 [conn19] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m31100| Fri Feb 22 12:32:42.831 [conn19] max number of requested split points reached (2) before the end of chunk test.foo { : MinKey } -->> { : MaxKey }
m31100| Fri Feb 22 12:32:42.832 [conn19] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "blah-rs0", splitKeys: [ { _id: 0.0 } ], shardId: "test.foo-_id_MinKey", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m29000| Fri Feb 22 12:32:42.833 [initandlisten] connection accepted from 165.225.128.186:46718 #7 (7 connections now open)
m31100| Fri Feb 22 12:32:42.834 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:31100:1361536362:25124 (sleeping for 30000ms)
m31102| Fri Feb 22 12:32:42.836 [repl writer worker 1] build index test.bar { _id: 1 }
m31101| Fri Feb 22 12:32:42.836 [repl writer worker 1] build index test.bar { _id: 1 }
m31100| Fri Feb 22 12:32:42.836 [conn19] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536362:25124' acquired, ts : 5127656a8648a0a994792307
m31100| Fri Feb 22 12:32:42.837 [conn19] splitChunk accepted at version 1|0||5127656a1cbfef1c1fa08d24
m31102| Fri Feb 22 12:32:42.837 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31101| Fri Feb 22 12:32:42.838 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31100| Fri Feb 22 12:32:42.838 [conn19] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:32:42-5127656a8648a0a994792308", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:33331", time: new Date(1361536362838), what: "split", ns: "test.foo", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 0.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127656a1cbfef1c1fa08d24') }, right: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5127656a1cbfef1c1fa08d24') } } }
m31100| Fri Feb 22 12:32:42.838 [conn19] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536362:25124' unlocked.
m30999| Fri Feb 22 12:32:42.839 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 1|2||5127656a1cbfef1c1fa08d24 based on: 1|0||5127656a1cbfef1c1fa08d24
m30999| Fri Feb 22 12:32:42.839 [conn1] autosplitted test.foo shard: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey } on: { _id: 0.0 } (splitThreshold 921)
m30999| Fri Feb 22 12:32:42.839 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|2, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1188bb0 3
m30999| Fri Feb 22 12:32:42.840 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.840 [conn1] about to initiate autosplit: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey } dataWritten: 156936 splitThreshold: 471859
m30999| Fri Feb 22 12:32:42.840 [conn1] chunk not full enough to trigger auto-split no split entry
m30999| Fri Feb 22 12:32:42.856 [conn1] splitting: test.foo shard: ns:test.fooshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|2||000000000000000000000000min: { _id: 0.0 }max: { _id: MaxKey }
m31100| Fri Feb 22 12:32:42.856 [conn19] received splitChunk request: { splitChunk: "test.foo", keyPattern: { _id: 1.0 }, min: { _id: 0.0 }, max: { _id: MaxKey }, from: "blah-rs0", splitKeys: [ { _id: 50.0 } ], shardId: "test.foo-_id_0.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m31100| Fri Feb 22 12:32:42.857 [conn19] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536362:25124' acquired, ts : 5127656a8648a0a994792309
m31100| Fri Feb 22 12:32:42.859 [conn19] splitChunk accepted at version 1|2||5127656a1cbfef1c1fa08d24
m31100| Fri Feb 22 12:32:42.860 [conn19] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:32:42-5127656a8648a0a99479230a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:33331", time: new Date(1361536362860), what: "split", ns: "test.foo", details: { before: { min: { _id: 0.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: 0.0 }, max: { _id: 50.0 }, lastmod: Timestamp 1000|3, lastmodEpoch: ObjectId('5127656a1cbfef1c1fa08d24') }, right: { min: { _id: 50.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|4, lastmodEpoch: ObjectId('5127656a1cbfef1c1fa08d24') } } }
m31100| Fri Feb 22 12:32:42.860 [conn19] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536362:25124' unlocked.
m30999| Fri Feb 22 12:32:42.862 [conn1] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 4 version: 1|4||5127656a1cbfef1c1fa08d24 based on: 1|2||5127656a1cbfef1c1fa08d24
m30999| Fri Feb 22 12:32:42.862 [mongosMain] connection accepted from 165.225.128.186:65184 #2 (2 connections now open)
m30999| Fri Feb 22 12:32:42.863 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|4, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1188bb0 4
m30999| Fri Feb 22 12:32:42.863 [conn1] setShardVersion success: { oldVersion: Timestamp 1000|0, oldVersionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), ok: 1.0 }
m30999| Fri Feb 22 12:32:42.864 [conn2] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.864 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 12:32:42.864 [initandlisten] connection accepted from 165.225.128.186:33453 #22 (15 connections now open)
m30999| Fri Feb 22 12:32:42.864 [conn2] connected connection!
m30999| Fri Feb 22 12:32:42.864 [conn2] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.865 [conn2] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|4, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11835a0 4
m30999| Fri Feb 22 12:32:42.865 [conn2] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361536362000, "i" : 202 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361536362000, "i" : 202 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31102
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31102, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361536362000, "i" : 202 }
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
m31100| Fri Feb 22 12:32:42.876 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Fri Feb 22 12:32:42.876 [interruptThread] now exiting
m31100| Fri Feb 22 12:32:42.876 dbexit:
m31100| Fri Feb 22 12:32:42.876 [interruptThread] shutdown: going to close listening sockets...
m31100| Fri Feb 22 12:32:42.876 [interruptThread] closing listening socket: 12
m31100| Fri Feb 22 12:32:42.876 [interruptThread] closing listening socket: 13
m31100| Fri Feb 22 12:32:42.876 [interruptThread] closing listening socket: 14
m31100| Fri Feb 22 12:32:42.876 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Fri Feb 22 12:32:42.876 [interruptThread] shutdown: going to flush diaglog...
m31100| Fri Feb 22 12:32:42.876 [interruptThread] shutdown: going to close sockets...
m31100| Fri Feb 22 12:32:42.876 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Fri Feb 22 12:32:42.876 [interruptThread] shutdown: lock for final commit...
m31100| Fri Feb 22 12:32:42.876 [interruptThread] shutdown: final commit...
m31100| Fri Feb 22 12:32:42.876 [conn1] end connection 127.0.0.1:61107 (14 connections now open)
m31102| Fri Feb 22 12:32:42.876 [conn2] end connection 165.225.128.186:62596 (7 connections now open)
m31100| Fri Feb 22 12:32:42.876 [conn5] end connection 165.225.128.186:52909 (14 connections now open)
m31100| Fri Feb 22 12:32:42.876 [conn7] end connection 165.225.128.186:61690 (14 connections now open)
m31101| Fri Feb 22 12:32:42.876 [conn5] end connection 165.225.128.186:40340 (7 connections now open)
m31100| Fri Feb 22 12:32:42.877 [conn19] end connection 165.225.128.186:33331 (14 connections now open)
m31101| Fri Feb 22 12:32:42.877 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:32:42.877 [conn14] end connection 165.225.128.186:43441 (14 connections now open)
m31101| Fri Feb 22 12:32:42.877 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:32:42.877 [conn17] end connection 165.225.128.186:54323 (14 connections now open)
m31102| Fri Feb 22 12:32:42.877 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:32:42.877 [conn15] end connection 165.225.128.186:41835 (14 connections now open)
m31102| Fri Feb 22 12:32:42.877 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:32:42.877 [conn18] end connection 165.225.128.186:40821 (14 connections now open)
m31100| Fri Feb 22 12:32:42.877 [conn21] end connection 165.225.128.186:61541 (14 connections now open)
m31100| Fri Feb 22 12:32:42.881 [conn22] end connection 165.225.128.186:33453 (5 connections now open)
m30999| Fri Feb 22 12:32:42.890 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [0] server [165.225.128.186:31100]
m30999| Fri Feb 22 12:32:42.891 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] DBClientCursor::init call() failed
m30999| Fri Feb 22 12:32:42.891 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('5127656a1cbfef1c1fa08d22') }
m30999| Fri Feb 22 12:32:42.891 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] Detected bad connection created at 1361536362816661 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:42.891 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { writebacklisten: ObjectId('5127656a1cbfef1c1fa08d22') }
m29000| Fri Feb 22 12:32:42.895 [conn6] end connection 165.225.128.186:34726 (6 connections now open)
m29000| Fri Feb 22 12:32:42.895 [conn7] end connection 165.225.128.186:46718 (6 connections now open)
m31100| Fri Feb 22 12:32:42.901 [interruptThread] shutdown: closing all files...
m31100| Fri Feb 22 12:32:42.903 [interruptThread] closeAllFiles() finished
m31100| Fri Feb 22 12:32:42.903 [interruptThread] journalCleanup...
m31100| Fri Feb 22 12:32:42.903 [interruptThread] removeJournalFiles
m31100| Fri Feb 22 12:32:42.904 dbexit: really exiting now
Fri Feb 22 12:32:43.876 shell: stopped mongo program on port 31100
m30999| Fri Feb 22 12:32:43.891 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:43.891 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:43.891 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:32:44.096 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY
m31101| Fri Feb 22 12:32:44.096 [rsHealthPoll] DBClientCursor::init call() failed
m31101| Fri Feb 22 12:32:44.096 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:32:44.096 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31100 is down (or slow to respond):
m31101| Fri Feb 22 12:32:44.096 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state DOWN
m31101| Fri Feb 22 12:32:44.097 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'bs-smartos-x86-64-1.10gen.cc:31101 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31100 is already primary and more up-to-date'
m31102| Fri Feb 22 12:32:44.300 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Fri Feb 22 12:32:44.300 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:44.300 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31100 is down (or slow to respond):
m31102| Fri Feb 22 12:32:44.300 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state DOWN
m31102| Fri Feb 22 12:32:44.300 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31102 is electable'
m30999| Fri Feb 22 12:32:45.892 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:45.892 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:45.892 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:32:46.097 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:46.301 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:32:48.097 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:48.301 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m30999| Fri Feb 22 12:32:48.645 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:32:48.645 [Balancer] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
m30999| Fri Feb 22 12:32:48.645 [Balancer] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
m30999| Fri Feb 22 12:32:48.645 [Balancer] DBClientCursor::init call() failed
m30999| Fri Feb 22 12:32:48.645 [Balancer] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { features: 1 }
m30999| Fri Feb 22 12:32:48.645 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30999| Fri Feb 22 12:32:48.645 [Balancer] caught exception while doing balance: DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { features: 1 }
m30999| Fri Feb 22 12:32:48.645 [Balancer] *** End of balancing round
m29000| Fri Feb 22 12:32:48.646 [conn3] end connection 165.225.128.186:35195 (4 connections now open)
m30999| Fri Feb 22 12:32:48.893 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:48.893 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:48.893 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:32:50.098 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:32:50.211 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31102| Fri Feb 22 12:32:50.302 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:51.243 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m31101| Fri Feb 22 12:32:52.099 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:52.302 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
Fri Feb 22 12:32:52.379 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
Fri Feb 22 12:32:52.388 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
Fri Feb 22 12:32:52.388 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 12:32:52.388 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:32:52.389 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] checking replica set: blah-rs0
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] User Assertion: 10276:DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { ismaster: 1 }
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31100 ns: admin.$cmd query: { ismaster: 1 }
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] _check : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:52.799 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.800 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536372800), ok: 1.0 }
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536372800), ok: 1.0 }
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:52.800 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:52.893 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:52.894 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:52.894 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:32:53.389 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:32:53.390 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536373390), ok: 1.0 }
Fri Feb 22 12:32:53.390 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536373390), ok: 1.0 }
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536373801), ok: 1.0 }
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536373801), ok: 1.0 }
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:53.801 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m31102| Fri Feb 22 12:32:54.097 [conn4] end connection 165.225.128.186:44978 (6 connections now open)
m31102| Fri Feb 22 12:32:54.097 [initandlisten] connection accepted from 165.225.128.186:52784 #10 (7 connections now open)
m31101| Fri Feb 22 12:32:54.099 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:32:54.301 [conn4] end connection 165.225.128.186:35245 (6 connections now open)
m31101| Fri Feb 22 12:32:54.301 [initandlisten] connection accepted from 165.225.128.186:56418 #11 (7 connections now open)
m31102| Fri Feb 22 12:32:54.303 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
Fri Feb 22 12:32:54.390 [ReplicaSetMonitorWatcher] warning: No primary detected for set blah-rs0
m30999| Fri Feb 22 12:32:54.646 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:32:54.646 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 12:32:54.646 [initandlisten] connection accepted from 165.225.128.186:39011 #8 (5 connections now open)
m30999| Fri Feb 22 12:32:54.646 [Balancer] connected connection!
m30999| Fri Feb 22 12:32:54.646 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:32:54.646 [Balancer] _check : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:54.646 [Balancer] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:54.647 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:54.647 [Balancer] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:54.647 [Balancer] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:54.647 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536374647), ok: 1.0 }
m30999| Fri Feb 22 12:32:54.647 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:54.647 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:54.647 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:54.647 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536374647), ok: 1.0 }
m30999| Fri Feb 22 12:32:54.647 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:54.647 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:54.647 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:54.801 [ReplicaSetMonitorWatcher] warning: No primary detected for set blah-rs0
m30999| Fri Feb 22 12:32:55.648 [Balancer] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:55.648 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536375648), ok: 1.0 }
m30999| Fri Feb 22 12:32:55.648 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:55.648 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:55.648 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:32:55.648 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536375648), ok: 1.0 }
m30999| Fri Feb 22 12:32:55.648 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:55.648 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:32:55.648 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m31101| Fri Feb 22 12:32:56.100 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:56.303 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:56.424 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m30999| Fri Feb 22 12:32:56.649 [Balancer] warning: No primary detected for set blah-rs0
m30999| Fri Feb 22 12:32:56.649 [Balancer] User Assertion: 10009:ReplicaSetMonitor no master found for set: blah-rs0
m30999| Fri Feb 22 12:32:56.649 [Balancer] scoped connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30999| Fri Feb 22 12:32:56.649 [Balancer] caught exception while doing balance: ReplicaSetMonitor no master found for set: blah-rs0
m30999| Fri Feb 22 12:32:56.649 [Balancer] *** End of balancing round
m29000| Fri Feb 22 12:32:56.649 [conn5] end connection 165.225.128.186:49638 (4 connections now open)
m31101| Fri Feb 22 12:32:56.864 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m30999| Fri Feb 22 12:32:57.894 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:32:57.894 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:32:57.894 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:32:58.101 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:32:58.304 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:33:00.101 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:00.304 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:33:02.102 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:02.305 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:33:02.476 [rsMgr] replSet info electSelf 1
m31102| Fri Feb 22 12:33:02.476 [conn10] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31101 (1)
m31102| Fri Feb 22 12:33:02.579 [rsMgr] replSet not trying to elect self as responded yea to someone else recently
m30999| Fri Feb 22 12:33:02.649 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:33:02.649 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:33:02.649 [Balancer] connected connection!
m29000| Fri Feb 22 12:33:02.649 [initandlisten] connection accepted from 165.225.128.186:40180 #9 (5 connections now open)
m30999| Fri Feb 22 12:33:02.650 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:33:02.650 [Balancer] _check : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:33:02.650 [Balancer] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:02.650 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:33:02.650 [Balancer] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:02.650 [Balancer] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:02.651 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536382651), ok: 1.0 }
m30999| Fri Feb 22 12:33:02.651 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:02.651 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:02.651 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:33:02.651 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536382651), ok: 1.0 }
m30999| Fri Feb 22 12:33:02.651 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:02.651 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:02.651 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m31101| Fri Feb 22 12:33:02.878 [rsMgr] replSet PRIMARY
m30999| Fri Feb 22 12:33:03.651 [Balancer] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:03.652 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536383652), ok: 1.0 }
m30999| Fri Feb 22 12:33:03.652 [Balancer] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:03.652 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:03.652 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:33:03.652 [Balancer] Primary for replica set blah-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:03.652 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:03.652 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:33:03.652 [Balancer] connected connection!
m31101| Fri Feb 22 12:33:03.652 [initandlisten] connection accepted from 165.225.128.186:38383 #12 (8 connections now open)
m30999| Fri Feb 22 12:33:03.653 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838 )
m30999| Fri Feb 22 12:33:03.653 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:33:03 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127657f1cbfef1c1fa08d25" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127656a1cbfef1c1fa08d23" } }
m30999| Fri Feb 22 12:33:03.654 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' acquired, ts : 5127657f1cbfef1c1fa08d25
m30999| Fri Feb 22 12:33:03.654 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:33:03.654 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:33:03.654 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:33:03.654 [Balancer] can't balance without more active shards
m30999| Fri Feb 22 12:33:03.654 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:33:03.654 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:33:03.654 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' unlocked.
m30999| Fri Feb 22 12:33:03.895 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:03.895 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:33:03.895 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:04.103 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:04.303 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state PRIMARY
m31102| Fri Feb 22 12:33:04.306 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
Fri Feb 22 12:33:04.390 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:33:04.391 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:33:04.391 [ReplicaSetMonitorWatcher] Primary for replica set blah-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:04.801 [ReplicaSetMonitorWatcher] checking replica set: blah-rs0
m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101",
"bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536384802), ok: 1.0 } m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] _check : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.802 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536384802), ok: 1.0 } m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = 
false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:04.802 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536384803), ok: 1.0 } m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536384803), ok: 1.0 } m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:04.803 [ReplicaSetMonitorWatcher] 
dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m31102| Fri Feb 22 12:33:04.877 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 12:33:04.878 [initandlisten] connection accepted from 165.225.128.186:40586 #13 (9 connections now open) m31101| Fri Feb 22 12:33:06.103 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 12:33:06.306 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31101| Fri Feb 22 12:33:08.104 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 12:33:08.307 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m30999| Fri Feb 22 12:33:09.655 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:33:09.655 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838 ) m30999| Fri Feb 22 12:33:09.656 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:33:09 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "512765851cbfef1c1fa08d26" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127657f1cbfef1c1fa08d25" } } m30999| Fri Feb 22 12:33:09.657 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' acquired, ts : 512765851cbfef1c1fa08d26 m30999| Fri Feb 22 12:33:09.657 [Balancer] *** start balancing round m30999| Fri Feb 22 12:33:09.657 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 
12:33:09.657 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:33:09.657 [Balancer] can't balance without more active shards m30999| Fri Feb 22 12:33:09.657 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:33:09.657 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:33:09.657 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' unlocked. m31101| Fri Feb 22 12:33:10.105 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:10.105 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:10.105 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31101| Fri Feb 22 12:33:10.105 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:10.105 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:10.106 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:10.307 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:10.308 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:10.308 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 12:33:10.308 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:10.308 
[rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:10.308 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:10.896 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:10.896 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:33:10.896 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] WriteBackListener exception : socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:12.106 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:12.106 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:12.107 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31101| Fri Feb 22 12:33:12.107 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:12.107 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:12.107 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:12.309 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:12.309 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 
12:33:12.309 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 12:33:12.310 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:12.310 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:12.310 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:12.625 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 12:33:12 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838', sleeping for 30000ms m31101| Fri Feb 22 12:33:14.108 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:14.108 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:14.108 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31101| Fri Feb 22 12:33:14.109 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:14.109 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:33:14.109 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:14.311 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 
22 12:33:14.311 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:14.311 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying m31102| Fri Feb 22 12:33:14.311 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:14.311 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m31102| Fri Feb 22 12:33:14.312 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 12:33:14.392 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100 Fri Feb 22 12:33:14.392 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.803 [ReplicaSetMonitorWatcher] checking replica set: blah-rs0 m30999| Fri Feb 22 12:33:14.803 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536394803), ok: 1.0 } m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true 
bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] _check : blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.804 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [CONNECT_ERROR] for bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536394804), ok: 1.0 } m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:14.804 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "blah-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: 
"bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536394804), ok: 1.0 } m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "blah-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31101", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536394805), ok: 1.0 } m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = false bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:33:14.805 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:33:15.658 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:33:15.658 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838 ) m30999| Fri Feb 22 12:33:15.659 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838: m30999| { "state" : 1, m30999| "who" : 
"bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:33:15 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127658b1cbfef1c1fa08d27" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "512765851cbfef1c1fa08d26" } } m30999| Fri Feb 22 12:33:15.660 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' acquired, ts : 5127658b1cbfef1c1fa08d27 m30999| Fri Feb 22 12:33:15.660 [Balancer] *** start balancing round m30999| Fri Feb 22 12:33:15.660 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:33:15.660 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:33:15.660 [Balancer] can't balance without more active shards m30999| Fri Feb 22 12:33:15.660 [Balancer] no need to move any chunk m30999| Fri Feb 22 12:33:15.660 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:33:15.660 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536362:16838' unlocked. 
m31101| Fri Feb 22 12:33:16.110 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:16.110 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:16.110 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:33:16.110 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:16.111 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:16.111 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:16.312 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:16.312 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:16.313 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:16.313 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:16.313 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:16.313 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:18.111 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:18.111 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:18.112 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:33:18.112 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:18.112 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:33:18.112 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:18.314 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:18.314 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:18.315 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:18.315 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:18.315 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:18.315 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:33:18.877 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:18.877 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:33:18.877 [conn1] connected connection!
m31101| Fri Feb 22 12:33:18.877 [initandlisten] connection accepted from 165.225.128.186:58497 #14 (10 connections now open)
m30999| Fri Feb 22 12:33:18.877 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|4, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1187a70 4
m30999| Fri Feb 22 12:33:18.877 [conn1] setShardVersion failed!
m30999| { need_authoritative: true, ok: 0.0, errmsg: "first setShardVersion" }
m30999| Fri Feb 22 12:33:18.878 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|4, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), authoritative: true, shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1187a70 4
m31101| Fri Feb 22 12:33:18.879 [conn14] no current chunk manager found for this shard, will initialize
m29000| Fri Feb 22 12:33:18.879 [initandlisten] connection accepted from 165.225.128.186:61652 #10 (6 connections now open)
m30999| Fri Feb 22 12:33:18.880 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:33:18.881 [conn1] CMD: shardcollection: { shardcollection: "test.bar", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:33:18.882 [conn1] enable sharding on: test.bar with shard key: { _id: 1.0 }
m30999| Fri Feb 22 12:33:18.882 [conn1] going to create 1 chunk(s) for: test.bar using new epoch 5127658e1cbfef1c1fa08d28
m30999| Fri Feb 22 12:33:18.883 [conn1] ChunkManager: time to load chunks for test.bar: 0ms sequenceNumber: 5 version: 1|0||5127658e1cbfef1c1fa08d28 based on: (empty)
m30999| Fri Feb 22 12:33:18.883 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.bar { setShardVersion: "test.bar", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127658e1cbfef1c1fa08d28'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1187a70 5
m30999| Fri Feb 22 12:33:18.883 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.bar", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.bar'" }
m30999| Fri Feb 22 12:33:18.884 [conn1] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.bar { setShardVersion: "test.bar", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|0, versionEpoch: ObjectId('5127658e1cbfef1c1fa08d28'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), authoritative: true, shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1187a70 5
m31101| Fri Feb 22 12:33:18.884 [conn14] no current chunk manager found for this shard, will initialize
m30999| Fri Feb 22 12:33:18.884 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:33:18.885 [conn1] splitting: test.bar shard: ns:test.barshard: blah-rs0:blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: MaxKey }
m31101| Fri Feb 22 12:33:18.885 [conn12] received splitChunk request: { splitChunk: "test.bar", keyPattern: { _id: 1.0 }, min: { _id: MinKey }, max: { _id: MaxKey }, from: "blah-rs0", splitKeys: [ { _id: 50.0 } ], shardId: "test.bar-_id_MinKey", configdb: "bs-smartos-x86-64-1.10gen.cc:29000" }
m29000| Fri Feb 22 12:33:18.886 [initandlisten] connection accepted from 165.225.128.186:51900 #11 (7 connections now open)
m31101| Fri Feb 22 12:33:18.887 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:31101:1361536398:6339 (sleeping for 30000ms)
m31101| Fri Feb 22 12:33:18.888 [conn12] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:31101:1361536398:6339' acquired, ts : 5127658ea50da1e1fed38f87
m31101| Fri Feb 22 12:33:18.889 [conn12] splitChunk accepted at version 1|0||5127658e1cbfef1c1fa08d28
m31101| Fri Feb 22 12:33:18.889 [conn12] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:33:18-5127658ea50da1e1fed38f88", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:38383", time: new Date(1361536398889), what: "split", ns: "test.bar", details: { before: { min: { _id: MinKey }, max: { _id: MaxKey }, lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('000000000000000000000000') }, left: { min: { _id: MinKey }, max: { _id: 50.0 }, lastmod: Timestamp 1000|1, lastmodEpoch: ObjectId('5127658e1cbfef1c1fa08d28') }, right: { min: { _id: 50.0 }, max: { _id: MaxKey }, lastmod: Timestamp 1000|2, lastmodEpoch: ObjectId('5127658e1cbfef1c1fa08d28') } } }
m31101| Fri Feb 22 12:33:18.890 [conn12] distributed lock 'test.bar/bs-smartos-x86-64-1.10gen.cc:31101:1361536398:6339' unlocked.
m30999| Fri Feb 22 12:33:18.891 [conn1] ChunkManager: time to load chunks for test.bar: 0ms sequenceNumber: 6 version: 1|2||5127658e1cbfef1c1fa08d28 based on: 1|0||5127658e1cbfef1c1fa08d28
m30999| Fri Feb 22 12:33:18.891 [mongosMain] connection accepted from 165.225.128.186:65365 #3 (3 connections now open)
m30999| Fri Feb 22 12:33:18.892 [conn3] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:18.892 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:33:18.892 [conn3] connected connection!
m30999| Fri Feb 22 12:33:18.892 [conn3] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 12:33:18.892 [initandlisten] connection accepted from 165.225.128.186:33280 #15 (11 connections now open)
m30999| Fri Feb 22 12:33:18.892 [conn3] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.bar { setShardVersion: "test.bar", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|2, versionEpoch: ObjectId('5127658e1cbfef1c1fa08d28'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11820e0 6
m30999| Fri Feb 22 12:33:18.892 [conn3] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:33:18.893 [conn3] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|4, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x11820e0 4
m30999| Fri Feb 22 12:33:18.893 [conn3] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:33:18.893 [conn2] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:33:18.894 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 12:33:18.894 [initandlisten] connection accepted from 165.225.128.186:50291 #16 (12 connections now open)
m30999| Fri Feb 22 12:33:18.894 [conn2] connected connection!
m30999| Fri Feb 22 12:33:18.894 [conn2] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.bar { setShardVersion: "test.bar", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|2, versionEpoch: ObjectId('5127658e1cbfef1c1fa08d28'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1182670 6
m30999| Fri Feb 22 12:33:18.894 [conn2] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:33:18.894 [conn2] setShardVersion blah-rs0 bs-smartos-x86-64-1.10gen.cc:31101 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|4, versionEpoch: ObjectId('5127656a1cbfef1c1fa08d24'), serverID: ObjectId('5127656a1cbfef1c1fa08d22'), shard: "blah-rs0", shardHost: "blah-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x1182670 4
m30999| Fri Feb 22 12:33:18.894 [conn2] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
m30999| Fri Feb 22 12:33:18.895 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m29000| Fri Feb 22 12:33:18.895 [conn4] end connection 165.225.128.186:48930 (6 connections now open)
m29000| Fri Feb 22 12:33:18.895 [conn8] end connection 165.225.128.186:39011 (6 connections now open)
m29000| Fri Feb 22 12:33:18.895 [conn9] end connection 165.225.128.186:40180 (6 connections now open)
m31101| Fri Feb 22 12:33:18.895 [conn9] end connection 165.225.128.186:56833 (11 connections now open)
m31102| Fri Feb 22 12:33:18.895 [conn8] end connection 165.225.128.186:60523 (6 connections now open)
m31102| Fri Feb 22 12:33:18.895 [conn7] end connection 165.225.128.186:40374 (6 connections now open)
m31101| Fri Feb 22 12:33:18.896 [conn16] end connection 165.225.128.186:50291 (10 connections now open)
m31101| Fri Feb 22 12:33:18.896 [conn15] end connection 165.225.128.186:33280 (10 connections now open)
m31101| Fri Feb 22 12:33:18.896 [conn12] end connection 165.225.128.186:38383 (10 connections now open)
m31101| Fri Feb 22 12:33:18.896 [conn14] end connection 165.225.128.186:58497 (10 connections now open)
m31101| Fri Feb 22 12:33:18.896 [conn8] end connection 165.225.128.186:46045 (10 connections now open)
Fri Feb 22 12:33:19.895 shell: stopped mongo program on port 30999
Fri Feb 22 12:33:19.895 No db started on port: 30000
Fri Feb 22 12:33:19.895 shell: stopped mongo program on port 30000
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
ReplSetTest stop *** Shutting down mongod in port 31100 ***
Fri Feb 22 12:33:19.895 No db started on port: 31100
Fri Feb 22 12:33:19.895 shell: stopped mongo program on port 31100
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
ReplSetTest stop *** Shutting down mongod in port 31101 ***
m31101| Fri Feb 22 12:33:19.896 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Fri Feb 22 12:33:19.896 [interruptThread] now exiting
m31101| Fri Feb 22 12:33:19.896 dbexit:
m31101| Fri Feb 22 12:33:19.896 [interruptThread] shutdown: going to close listening sockets...
m31101| Fri Feb 22 12:33:19.896 [interruptThread] closing listening socket: 15
m31101| Fri Feb 22 12:33:19.896 [interruptThread] closing listening socket: 16
m31101| Fri Feb 22 12:33:19.896 [interruptThread] closing listening socket: 17
m31101| Fri Feb 22 12:33:19.896 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Fri Feb 22 12:33:19.896 [interruptThread] shutdown: going to flush diaglog...
m31101| Fri Feb 22 12:33:19.896 [interruptThread] shutdown: going to close sockets...
m31101| Fri Feb 22 12:33:19.896 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Fri Feb 22 12:33:19.896 [interruptThread] shutdown: lock for final commit...
m31101| Fri Feb 22 12:33:19.896 [interruptThread] shutdown: final commit...
m31101| Fri Feb 22 12:33:19.896 [conn1] end connection 127.0.0.1:52786 (5 connections now open)
m31102| Fri Feb 22 12:33:19.896 [conn10] end connection 165.225.128.186:52784 (4 connections now open)
m31101| Fri Feb 22 12:33:19.896 [conn11] end connection 165.225.128.186:56418 (5 connections now open)
m31101| Fri Feb 22 12:33:19.896 [conn6] end connection 165.225.128.186:44468 (5 connections now open)
m31101| Fri Feb 22 12:33:19.896 [conn7] end connection 165.225.128.186:38623 (5 connections now open)
m31102| Fri Feb 22 12:33:19.896 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31101
m29000| Fri Feb 22 12:33:19.896 [conn11] end connection 165.225.128.186:51900 (3 connections now open)
m29000| Fri Feb 22 12:33:19.896 [conn10] end connection 165.225.128.186:61652 (3 connections now open)
m31101| Fri Feb 22 12:33:19.924 [interruptThread] shutdown: closing all files...
m31101| Fri Feb 22 12:33:19.925 [interruptThread] closeAllFiles() finished
m31101| Fri Feb 22 12:33:19.925 [interruptThread] journalCleanup...
m31101| Fri Feb 22 12:33:19.925 [interruptThread] removeJournalFiles
m31101| Fri Feb 22 12:33:19.926 dbexit: really exiting now
m31102| Fri Feb 22 12:33:20.305 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Fri Feb 22 12:33:20.305 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:20.305 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31101 is down (or slow to respond):
m31102| Fri Feb 22 12:33:20.305 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state DOWN
m31102| Fri Feb 22 12:33:20.305 [rsMgr] replSet can't see a majority, will not try to elect self
m31102| Fri Feb 22 12:33:20.316 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:20.316 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:20.316 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:33:20.316 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:20.317 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:33:20.317 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:33:20.896 shell: stopped mongo program on port 31101
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
ReplSetTest stop *** Shutting down mongod in port 31102 ***
m31102| Fri Feb 22 12:33:20.896 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Fri Feb 22 12:33:20.896 [interruptThread] now exiting
m31102| Fri Feb 22 12:33:20.896 dbexit:
m31102| Fri Feb 22 12:33:20.896 [interruptThread] shutdown: going to close listening sockets...
m31102| Fri Feb 22 12:33:20.896 [interruptThread] closing listening socket: 18
m31102| Fri Feb 22 12:33:20.896 [interruptThread] closing listening socket: 19
m31102| Fri Feb 22 12:33:20.896 [interruptThread] closing listening socket: 20
m31102| Fri Feb 22 12:33:20.896 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Fri Feb 22 12:33:20.896 [interruptThread] shutdown: going to flush diaglog...
m31102| Fri Feb 22 12:33:20.896 [interruptThread] shutdown: going to close sockets...
m31102| Fri Feb 22 12:33:20.897 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Fri Feb 22 12:33:20.897 [interruptThread] shutdown: lock for final commit...
m31102| Fri Feb 22 12:33:20.897 [interruptThread] shutdown: final commit...
m31102| Fri Feb 22 12:33:20.897 [conn1] end connection 127.0.0.1:37448 (3 connections now open)
m31102| Fri Feb 22 12:33:20.897 [conn5] end connection 165.225.128.186:36395 (3 connections now open)
m31102| Fri Feb 22 12:33:20.897 [conn6] end connection 165.225.128.186:34197 (1 connection now open)
m31102| Fri Feb 22 12:33:20.915 [interruptThread] shutdown: closing all files...
m31102| Fri Feb 22 12:33:20.916 [interruptThread] closeAllFiles() finished
m31102| Fri Feb 22 12:33:20.916 [interruptThread] journalCleanup...
m31102| Fri Feb 22 12:33:20.916 [interruptThread] removeJournalFiles
m31102| Fri Feb 22 12:33:20.916 dbexit: really exiting now
Fri Feb 22 12:33:21.896 shell: stopped mongo program on port 31102
ReplSetTest stopSet deleting all dbpaths
ReplSetTest stopSet *** Shut down repl set - test worked ****
m29000| Fri Feb 22 12:33:21.909 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Fri Feb 22 12:33:21.909 [interruptThread] now exiting
m29000| Fri Feb 22 12:33:21.909 dbexit:
m29000| Fri Feb 22 12:33:21.909 [interruptThread] shutdown: going to close listening sockets...
m29000| Fri Feb 22 12:33:21.909 [interruptThread] closing listening socket: 27
m29000| Fri Feb 22 12:33:21.909 [interruptThread] closing listening socket: 28
m29000| Fri Feb 22 12:33:21.909 [interruptThread] closing listening socket: 29
m29000| Fri Feb 22 12:33:21.909 [interruptThread] removing socket file: /tmp/mongodb-29000.sock
m29000| Fri Feb 22 12:33:21.909 [interruptThread] shutdown: going to flush diaglog...
m29000| Fri Feb 22 12:33:21.909 [interruptThread] shutdown: going to close sockets...
m29000| Fri Feb 22 12:33:21.909 [interruptThread] shutdown: waiting for fs preallocator...
m29000| Fri Feb 22 12:33:21.909 [interruptThread] shutdown: lock for final commit...
m29000| Fri Feb 22 12:33:21.909 [interruptThread] shutdown: final commit...
m29000| Fri Feb 22 12:33:21.909 [conn1] end connection 127.0.0.1:39481 (1 connection now open)
m29000| Fri Feb 22 12:33:21.909 [conn2] end connection 165.225.128.186:35779 (1 connection now open)
m29000| Fri Feb 22 12:33:21.918 [interruptThread] shutdown: closing all files...
m29000| Fri Feb 22 12:33:21.918 [interruptThread] closeAllFiles() finished
m29000| Fri Feb 22 12:33:21.918 [interruptThread] journalCleanup...
m29000| Fri Feb 22 12:33:21.918 [interruptThread] removeJournalFiles
m29000| Fri Feb 22 12:33:21.918 dbexit: really exiting now
Fri Feb 22 12:33:22.909 shell: stopped mongo program on port 29000
*** ShardingTest blah completed successfully in 69.161 seconds ***
Fri Feb 22 12:33:22.923 [conn22] end connection 127.0.0.1:48378 (0 connections now open)
1.1557 minutes
Fri Feb 22 12:33:22.941 [initandlisten] connection accepted from 127.0.0.1:65405 #23 (1 connection now open)
Fri Feb 22 12:33:22.941 [conn23] end connection 127.0.0.1:65405 (0 connections now open)
*******************************************
Test : sharding_passthrough.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js";TestData.testFile = "sharding_passthrough.js";TestData.testName = "sharding_passthrough";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:33:22 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:33:23.103 [initandlisten] connection accepted from 127.0.0.1:52906 #24 (1 connection now open)
null
Resetting db path '/data/db/sharding_passthrough0'
Fri Feb 22 12:33:23.116 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30000 --dbpath /data/db/sharding_passthrough0 --setParameter enableTestCommands=1
m30000| Fri Feb 22 12:33:23.186 [initandlisten] MongoDB starting : pid=14608 port=30000 dbpath=/data/db/sharding_passthrough0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30000| Fri Feb 22 12:33:23.186 [initandlisten]
m30000| Fri Feb 22 12:33:23.186 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30000| Fri Feb 22 12:33:23.186 [initandlisten] **       uses to detect impending page faults.
m30000| Fri Feb 22 12:33:23.186 [initandlisten] **       This may result in slower performance for certain use cases
m30000| Fri Feb 22 12:33:23.186 [initandlisten]
m30000| Fri Feb 22 12:33:23.186 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30000| Fri Feb 22 12:33:23.186 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30000| Fri Feb 22 12:33:23.186 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30000| Fri Feb 22 12:33:23.186 [initandlisten] allocator: system
m30000| Fri Feb 22 12:33:23.186 [initandlisten] options: { dbpath: "/data/db/sharding_passthrough0", port: 30000, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 12:33:23.187 [initandlisten] journal dir=/data/db/sharding_passthrough0/journal
m30000| Fri Feb 22 12:33:23.187 [initandlisten] recover : no journal files present, no recovery needed
m30000| Fri Feb 22 12:33:23.199 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/local.ns, filling with zeroes...
m30000| Fri Feb 22 12:33:23.199 [FileAllocator] creating directory /data/db/sharding_passthrough0/_tmp
m30000| Fri Feb 22 12:33:23.200 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/local.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:33:23.200 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/local.0, filling with zeroes...
m30000| Fri Feb 22 12:33:23.200 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/local.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:33:23.203 [initandlisten] waiting for connections on port 30000
m30000| Fri Feb 22 12:33:23.203 [websvr] admin web console waiting for connections on port 31000
m30000| Fri Feb 22 12:33:23.318 [initandlisten] connection accepted from 127.0.0.1:54949 #1 (1 connection now open)
Resetting db path '/data/db/sharding_passthrough1'
Fri Feb 22 12:33:23.321 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 30001 --dbpath /data/db/sharding_passthrough1 --setParameter enableTestCommands=1
m30001| Fri Feb 22 12:33:23.409 [initandlisten] MongoDB starting : pid=14609 port=30001 dbpath=/data/db/sharding_passthrough1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m30001| Fri Feb 22 12:33:23.410 [initandlisten]
m30001| Fri Feb 22 12:33:23.410 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m30001| Fri Feb 22 12:33:23.410 [initandlisten] **       uses to detect impending page faults.
m30001| Fri Feb 22 12:33:23.410 [initandlisten] **       This may result in slower performance for certain use cases
m30001| Fri Feb 22 12:33:23.410 [initandlisten]
m30001| Fri Feb 22 12:33:23.410 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m30001| Fri Feb 22 12:33:23.410 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30001| Fri Feb 22 12:33:23.410 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30001| Fri Feb 22 12:33:23.410 [initandlisten] allocator: system
m30001| Fri Feb 22 12:33:23.410 [initandlisten] options: { dbpath: "/data/db/sharding_passthrough1", port: 30001, setParameter: [ "enableTestCommands=1" ] }
m30001| Fri Feb 22 12:33:23.410 [initandlisten] journal dir=/data/db/sharding_passthrough1/journal
m30001| Fri Feb 22 12:33:23.410 [initandlisten] recover : no journal files present, no recovery needed
m30001| Fri Feb 22 12:33:23.424 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/local.ns, filling with zeroes...
m30001| Fri Feb 22 12:33:23.424 [FileAllocator] creating directory /data/db/sharding_passthrough1/_tmp
m30001| Fri Feb 22 12:33:23.424 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/local.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:33:23.424 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/local.0, filling with zeroes...
m30001| Fri Feb 22 12:33:23.424 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/local.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:33:23.427 [initandlisten] waiting for connections on port 30001
m30001| Fri Feb 22 12:33:23.427 [websvr] admin web console waiting for connections on port 31001
m30001| Fri Feb 22 12:33:23.523 [initandlisten] connection accepted from 127.0.0.1:46171 #1 (1 connection now open)
"localhost:30000"
m30000| Fri Feb 22 12:33:23.523 [initandlisten] connection accepted from 127.0.0.1:55009 #2 (2 connections now open)
ShardingTest sharding_passthrough : { "config" : "localhost:30000", "shards" : [ connection to localhost:30000, connection to localhost:30001 ] }
Fri Feb 22 12:33:23.530 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb localhost:30000 --chunkSize 50 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:33:23.546 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:33:23.546 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=14610 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:33:23.546 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:33:23.546 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:33:23.546 [mongosMain] options: { chunkSize: 50, configdb: "localhost:30000", port: 30999, setParameter: [ "enableTestCommands=1" ] }
m30000| Fri Feb 22 12:33:23.547 [initandlisten] connection accepted from 127.0.0.1:34908 #3 (3 connections now open)
m30000| Fri Feb 22 12:33:23.548 [initandlisten] connection accepted from 127.0.0.1:55372 #4 (4 connections now open)
m30000| Fri Feb 22 12:33:23.549 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:33:23.561 [LockPinger] creating distributed lock ping thread for localhost:30000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838 (sleeping for 30000ms)
m30000| Fri Feb 22 12:33:23.561 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/config.ns, filling with zeroes...
m30000| Fri Feb 22 12:33:23.562 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/config.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:33:23.562 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/config.0, filling with zeroes...
m30000| Fri Feb 22 12:33:23.562 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/config.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:33:23.562 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/config.1, filling with zeroes...
m30000| Fri Feb 22 12:33:23.562 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/config.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:33:23.565 [conn4] build index config.locks { _id: 1 }
m30000| Fri Feb 22 12:33:23.566 [conn4] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:33:23.567 [conn3] build index config.lockpings { _id: 1 }
m30000| Fri Feb 22 12:33:23.569 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:33:23.570 [conn3] build index config.lockpings { ping: new Date(1) }
m30000| Fri Feb 22 12:33:23.571 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:23.571 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127659348539f319791602c
m30999| Fri Feb 22 12:33:23.574 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:33:23.574 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:33:23.574 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:33:23-5127659348539f319791602d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536403574), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30000| Fri Feb 22 12:33:23.574 [conn4] build index config.changelog { _id: 1 }
m30000| Fri Feb 22 12:33:23.575 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:23.575 [mongosMain] writing initial config version at v4
m30000| Fri Feb 22 12:33:23.575 [conn4] build index config.version { _id: 1 }
m30000| Fri Feb 22 12:33:23.576 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:23.576 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:33:23-5127659348539f319791602f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536403576), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:33:23.577 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:33:23.577 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30000| Fri Feb 22 12:33:23.578 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:33:23.579 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:33:23.579 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 12:33:23.579 [mongosMain] waiting for connections on port 30999
m30000| Fri Feb 22 12:33:23.580 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:33:23.581 [conn3] build index config.chunks { _id: 1 }
m30000| Fri Feb 22 12:33:23.582 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:33:23.582 [conn3] info: creating collection config.chunks on add index
m30000| Fri Feb 22 12:33:23.582 [conn3] build index config.chunks { ns: 1, min: 1 }
m30000| Fri Feb 22 12:33:23.583 [conn3] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:33:23.583 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m30000| Fri Feb 22 12:33:23.584 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:33:23.584 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m30000| Fri Feb 22 12:33:23.585 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:33:23.585 [conn3] build index config.shards { _id: 1 }
m30000| Fri Feb 22 12:33:23.586 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:33:23.586 [conn3] info: creating collection config.shards on add index
m30000| Fri Feb 22 12:33:23.586 [conn3] build index config.shards { host: 1 }
m30000| Fri Feb 22 12:33:23.588 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:33:23.588 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:33:23.588 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:33:23
m30000| Fri Feb 22 12:33:23.589 [initandlisten] connection accepted from 127.0.0.1:54456 #5 (5 connections now open)
m30000| Fri Feb 22 12:33:23.589 [conn3] build index config.mongos { _id: 1 }
m30000| Fri Feb 22 12:33:23.590 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:33:23.591 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127659348539f3197916031
m30999| Fri Feb 22 12:33:23.592 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30999| Fri Feb 22 12:33:23.731 [mongosMain] connection accepted from 127.0.0.1:43649 #1 (1 connection now open)
ShardingTest undefined going to add shard : localhost:30000
m30999| Fri Feb 22 12:33:23.733 [conn1] couldn't find database [admin] in config db
m30000| Fri Feb 22 12:33:23.734 [conn3] build index config.databases { _id: 1 }
m30000| Fri Feb 22 12:33:23.735 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:23.735 [conn1] put [admin] on: config:localhost:30000
m30999| Fri Feb 22 12:33:23.736 [conn1] going to add shard: { _id: "shard0000", host: "localhost:30000" }
{ "shardAdded" : "shard0000", "ok" : 1 }
ShardingTest undefined going to add shard : localhost:30001
m30001| Fri Feb 22 12:33:23.739 [initandlisten] connection accepted from 127.0.0.1:62328 #2 (2 connections now open)
m30999| Fri Feb 22 12:33:23.740 [conn1] going to add shard: { _id: "shard0001", host: "localhost:30001" }
{ "shardAdded" : "shard0001", "ok" : 1 }
m30999| Fri Feb 22 12:33:23.741 [conn1] couldn't find database [test] in config db
m30999| Fri Feb 22 12:33:23.742 [conn1] put [test] on: shard0001:localhost:30001
m30999| Fri Feb 22 12:33:23.742 [conn1] enabling sharding on: test
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/error3.js
*******************************************
Test : jstests/set_param1.js ...
m30999| Fri Feb 22 12:33:23.764 [conn1] Request::process end ns: admin.$cmd msg id: 5 op: 2004 attempt: 0 0ms
m30999| Fri Feb 22 12:33:23.764 [conn1] Request::process begin ns: admin.$cmd msg id: 6 op: 2004 attempt: 0
m30999| Fri Feb 22 12:33:23.764 [conn1] single query: admin.$cmd { setParameter: 1.0, logLevel: 0.0 } ntoreturn: -1 options : 0
8ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo1.js
*******************************************
Test : jstests/where3.js ...
m30999| Fri Feb 22 12:33:23.766 [conn1] DROP: test.where3
m30001| Fri Feb 22 12:33:23.767 [initandlisten] connection accepted from 127.0.0.1:34922 #3 (3 connections now open)
m30999| Fri Feb 22 12:33:23.767 [conn1] creating WriteBackListener for: localhost:30001 serverID: 5127659348539f3197916030
m30001| Fri Feb 22 12:33:23.768 [conn3] CMD: drop test.where3
m30999| Fri Feb 22 12:33:23.768 [conn1] creating WriteBackListener for: localhost:30000 serverID: 5127659348539f3197916030
m30000| Fri Feb 22 12:33:23.768 [initandlisten] connection accepted from 127.0.0.1:50119 #6 (6 connections now open)
m30001| Fri Feb 22 12:33:23.769 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.ns, filling with zeroes...
m30001| Fri Feb 22 12:33:23.769 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.ns, size: 16MB, took 0 secs
m30001| Fri Feb 22 12:33:23.769 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.0, filling with zeroes...
m30001| Fri Feb 22 12:33:23.770 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.0, size: 64MB, took 0 secs
m30001| Fri Feb 22 12:33:23.770 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.1, filling with zeroes...
m30001| Fri Feb 22 12:33:23.770 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.1, size: 128MB, took 0 secs
m30001| Fri Feb 22 12:33:23.774 [conn3] build index test.where3 { _id: 1 }
m30001| Fri Feb 22 12:33:23.775 [conn3] build index done. scanned 0 total records. 0.001 secs
49ms
*******************************************
Test : jstests/mr_optim.js ...
m30999| Fri Feb 22 12:33:23.822 [conn1] DROP: test.mr_optim
m30001| Fri Feb 22 12:33:23.823 [conn3] CMD: drop test.mr_optim
m30001| Fri Feb 22 12:33:23.824 [conn3] build index test.mr_optim { _id: 1 }
m30001| Fri Feb 22 12:33:23.825 [conn3] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:33:23.903 [conn3] CMD: drop test.tmp.mr.mr_optim_0
m30001| Fri Feb 22 12:33:23.904 [conn3] CMD: drop test.tmp.mr.mr_optim_0_inc
m30001| Fri Feb 22 12:33:23.904 [conn3] build index test.tmp.mr.mr_optim_0_inc { 0: 1 }
m30001| Fri Feb 22 12:33:23.905 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:23.905 [conn3] build index test.tmp.mr.mr_optim_0 { _id: 1 }
m30001| Fri Feb 22 12:33:23.906 [conn3] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:33:24.015 [conn3] CMD: drop test.mr_optim_out
m30001| Fri Feb 22 12:33:24.016 [conn3] CMD: drop test.tmp.mr.mr_optim_0
m30001| Fri Feb 22 12:33:24.017 [conn3] CMD: drop test.tmp.mr.mr_optim_0
m30001| Fri Feb 22 12:33:24.017 [conn3] CMD: drop test.tmp.mr.mr_optim_0_inc
m30001| Fri Feb 22 12:33:24.018 [conn3] CMD: drop test.tmp.mr.mr_optim_0
m30001| Fri Feb 22 12:33:24.018 [conn3] CMD: drop test.tmp.mr.mr_optim_0_inc
m30001| Fri Feb 22 12:33:24.019 [conn3] command test.$cmd command: { mapreduce: "mr_optim", map: function m(){
m30001| emit(this._id, 13);
m30001| }, reduce: function r( key , values ){
m30001| return "bad";
m30001| }, out: "mr_optim_out" } ntoreturn:1 keyUpdates:0 numYields: 1011 locks(micros) W:2600 r:85377 w:41141 reslen:136 141ms
{ "result" : "mr_optim_out", "timeMillis" : 139, "counts" : { "input" : 1000, "emit" : 1000, "reduce" : 0, "output" : 1000 }, "ok" : 1 }
m30001| Fri Feb 22 12:33:24.029 [initandlisten] connection accepted from 127.0.0.1:40959 #4 (4 connections now open)
m30999| Fri Feb 22 12:33:24.079 [conn1] DROP: test.mr_optim_out
m30001| Fri Feb 22 12:33:24.079 [conn3] CMD: drop test.mr_optim_out
m30999| Fri Feb 22 12:33:24.181 [conn1] DROP: test.mr_optim
m30001| Fri Feb 22 12:33:24.182 [conn3] CMD: drop test.mr_optim
369ms
*******************************************
Test : jstests/ref3.js ...
m30999| Fri Feb 22 12:33:24.192 [conn1] DROP: test.otherthings3 m30001| Fri Feb 22 12:33:24.192 [conn3] CMD: drop test.otherthings3 m30999| Fri Feb 22 12:33:24.193 [conn1] DROP: test.things3 m30001| Fri Feb 22 12:33:24.193 [conn3] CMD: drop test.things3 m30001| Fri Feb 22 12:33:24.193 [conn3] build index test.otherthings3 { _id: 1 } m30001| Fri Feb 22 12:33:24.194 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.194 [conn3] build index test.things3 { _id: 1 } m30001| Fri Feb 22 12:33:24.195 [conn3] build index done. scanned 0 total records. 0 secs 13ms ******************************************* Test : jstests/fm3.js ... m30999| Fri Feb 22 12:33:24.201 [conn1] DROP: test.fm3 m30001| Fri Feb 22 12:33:24.201 [conn3] CMD: drop test.fm3 m30001| Fri Feb 22 12:33:24.202 [conn3] build index test.fm3 { _id: 1 } m30001| Fri Feb 22 12:33:24.202 [conn3] build index done. scanned 0 total records. 0 secs 8ms ******************************************* Test : jstests/type2.js ... m30999| Fri Feb 22 12:33:24.211 [conn1] DROP: test.jstests_type2 m30001| Fri Feb 22 12:33:24.211 [conn3] CMD: drop test.jstests_type2 m30001| Fri Feb 22 12:33:24.212 [conn3] build index test.jstests_type2 { _id: 1 } m30001| Fri Feb 22 12:33:24.212 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.214 [conn3] build index test.jstests_type2 { a: 1.0 } m30001| Fri Feb 22 12:33:24.215 [conn3] build index done. scanned 3 total records. 0 secs 12ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped.js >>>>>>>>>>>>>>> skipping jstests/slowNightly ******************************************* Test : jstests/remove8.js ... m30999| Fri Feb 22 12:33:24.218 [conn1] DROP: test.remove8 m30001| Fri Feb 22 12:33:24.218 [conn3] CMD: drop test.remove8 m30001| Fri Feb 22 12:33:24.218 [conn3] build index test.remove8 { _id: 1 } m30001| Fri Feb 22 12:33:24.220 [conn3] build index done. 
scanned 0 total records. 0.001 secs 201ms ******************************************* Test : jstests/fts_partition1.js ... m30000| Fri Feb 22 12:33:24.425 [initandlisten] connection accepted from 127.0.0.1:41443 #7 (7 connections now open) m30001| Fri Feb 22 12:33:24.426 [initandlisten] connection accepted from 127.0.0.1:65413 #5 (5 connections now open) m30999| Fri Feb 22 12:33:24.427 [conn1] DROP: test.text_parition1 m30001| Fri Feb 22 12:33:24.427 [conn3] CMD: drop test.text_parition1 m30001| Fri Feb 22 12:33:24.427 [conn3] build index test.text_parition1 { _id: 1 } m30001| Fri Feb 22 12:33:24.428 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.428 [conn3] build index test.text_parition1 { x: 1.0, _fts: "text", _ftsx: 1 } m30001| Fri Feb 22 12:33:24.431 [conn3] build index done. scanned 4 total records. 0.002 secs 18ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_borders.js ******************************************* Test : jstests/sort8.js ... m30999| Fri Feb 22 12:33:24.437 [conn1] DROP: test.jstests_sort8 m30001| Fri Feb 22 12:33:24.437 [conn3] CMD: drop test.jstests_sort8 m30001| Fri Feb 22 12:33:24.438 [conn3] build index test.jstests_sort8 { _id: 1 } m30001| Fri Feb 22 12:33:24.439 [conn3] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:33:24.440 [conn3] build index test.jstests_sort8 { a: 1.0 } m30001| Fri Feb 22 12:33:24.441 [conn3] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:33:24.443 [conn4] CMD: dropIndexes test.jstests_sort8 m30001| Fri Feb 22 12:33:24.446 [conn3] build index test.jstests_sort8 { a: 1.0 } m30001| Fri Feb 22 12:33:24.446 [conn3] build index done. scanned 2 total records. 0 secs 12ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geonear_validate.js ******************************************* Test : jstests/getlog1.js ... 
7ms ******************************************* Test : jstests/find_and_modify_where.js ... m30999| Fri Feb 22 12:33:24.457 [conn1] DROP: test.find_and_modify_where m30001| Fri Feb 22 12:33:24.458 [conn3] CMD: drop test.find_and_modify_where m30001| Fri Feb 22 12:33:24.458 [conn3] build index test.find_and_modify_where { _id: 1 } m30001| Fri Feb 22 12:33:24.461 [conn3] build index done. scanned 0 total records. 0.002 secs 31ms ******************************************* Test : jstests/explain7.js ... m30999| Fri Feb 22 12:33:24.489 [conn1] DROP: test.jstests_explain7 m30001| Fri Feb 22 12:33:24.491 [conn3] CMD: drop test.jstests_explain7 m30001| Fri Feb 22 12:33:24.492 [conn3] build index test.jstests_explain7 { _id: 1 } m30001| Fri Feb 22 12:33:24.493 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.493 [conn3] info: creating collection test.jstests_explain7 on add index m30001| Fri Feb 22 12:33:24.493 [conn3] build index test.jstests_explain7 { loc: "2d" } m30001| Fri Feb 22 12:33:24.494 [conn3] build index done. scanned 0 total records. 0.001 secs 12ms ******************************************* Test : jstests/eval1.js ... m30999| Fri Feb 22 12:33:24.501 [conn1] DROP: test.eval1 m30001| Fri Feb 22 12:33:24.501 [conn3] CMD: drop test.eval1 m30001| Fri Feb 22 12:33:24.502 [conn3] build index test.eval1 { _id: 1 } m30001| Fri Feb 22 12:33:24.502 [conn3] build index done. scanned 0 total records. 0 secs 7ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2near.js ******************************************* Test : jstests/hashtest1.js ... 18ms ******************************************* Test : jstests/in7.js ... m30999| Fri Feb 22 12:33:24.525 [conn1] DROP: test.jstests_slow_in1 m30001| Fri Feb 22 12:33:24.525 [conn3] CMD: drop test.jstests_slow_in1 m30001| Fri Feb 22 12:33:24.526 [conn3] build index test.jstests_slow_in1 { _id: 1 } m30001| Fri Feb 22 12:33:24.527 [conn3] build index done. 
scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.527 [conn3] info: creating collection test.jstests_slow_in1 on add index m30001| Fri Feb 22 12:33:24.527 [conn3] build index test.jstests_slow_in1 { a: 1.0, b: 1.0, c: 1.0, d: 1.0, e: 1.0, f: 1.0, g: 1.0 } m30001| Fri Feb 22 12:33:24.528 [conn3] build index done. scanned 0 total records. 0 secs 7ms ******************************************* Test : jstests/coveredIndex2.js ... m30999| Fri Feb 22 12:33:24.534 [conn1] DROP: test.jstests_coveredIndex2 m30001| Fri Feb 22 12:33:24.534 [conn3] CMD: drop test.jstests_coveredIndex2 m30001| Fri Feb 22 12:33:24.535 [conn3] build index test.jstests_coveredIndex2 { _id: 1 } m30001| Fri Feb 22 12:33:24.535 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.536 [conn3] build index test.jstests_coveredIndex2 { a: 1.0 } m30001| Fri Feb 22 12:33:24.537 [conn3] build index done. scanned 2 total records. 0 secs 8ms ******************************************* Test : jstests/queryoptimizer2.js ... m30999| Fri Feb 22 12:33:24.540 [conn1] DROP: test.queryoptimizer2 m30001| Fri Feb 22 12:33:24.540 [conn3] CMD: drop test.queryoptimizer2 m30001| Fri Feb 22 12:33:24.541 [conn3] build index test.queryoptimizer2 { _id: 1 } m30001| Fri Feb 22 12:33:24.542 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.547 [conn3] build index test.queryoptimizer2 { a: 1.0 } m30001| Fri Feb 22 12:33:24.548 [conn3] build index done. scanned 120 total records. 0 secs m30001| Fri Feb 22 12:33:24.548 [conn3] build index test.queryoptimizer2 { b: 1.0 } m30001| Fri Feb 22 12:33:24.549 [conn3] build index done. scanned 120 total records. 0.001 secs m30001| Fri Feb 22 12:33:24.550 [conn3] build index test.queryoptimizer2 { z: 1.0 } m30001| Fri Feb 22 12:33:24.551 [conn3] build index done. scanned 120 total records. 
0.001 secs m30001| Fri Feb 22 12:33:24.552 [conn3] build index test.queryoptimizer2 { c: 1.0 } m30001| Fri Feb 22 12:33:24.553 [conn3] build index done. scanned 120 total records. 0.001 secs m30001| Fri Feb 22 12:33:24.560 [conn4] CMD: dropIndexes test.queryoptimizer2 m30999| Fri Feb 22 12:33:24.566 [conn1] DROP: test.queryoptimizer2 m30001| Fri Feb 22 12:33:24.566 [conn3] CMD: drop test.queryoptimizer2 m30001| Fri Feb 22 12:33:24.570 [conn3] build index test.queryoptimizer2 { _id: 1 } m30001| Fri Feb 22 12:33:24.571 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.575 [conn3] build index test.queryoptimizer2 { a: 1.0 } m30001| Fri Feb 22 12:33:24.576 [conn3] build index done. scanned 120 total records. 0 secs m30001| Fri Feb 22 12:33:24.577 [conn3] build index test.queryoptimizer2 { b: 1.0 } m30001| Fri Feb 22 12:33:24.577 [conn3] build index done. scanned 120 total records. 0 secs m30001| Fri Feb 22 12:33:24.578 [conn3] build index test.queryoptimizer2 { z: 1.0 } m30001| Fri Feb 22 12:33:24.578 [conn3] build index done. scanned 120 total records. 0 secs m30001| Fri Feb 22 12:33:24.579 [conn3] build index test.queryoptimizer2 { c: 1.0 } m30001| Fri Feb 22 12:33:24.580 [conn3] build index done. scanned 120 total records. 0 secs m30001| Fri Feb 22 12:33:24.586 [conn4] CMD: dropIndexes test.queryoptimizer2 51ms ******************************************* Test : jstests/covered_index_negative_1.js ... m30999| Fri Feb 22 12:33:24.598 [conn1] DROP: test.covered_negative_1 m30001| Fri Feb 22 12:33:24.598 [conn3] CMD: drop test.covered_negative_1 m30001| Fri Feb 22 12:33:24.599 [conn3] build index test.covered_negative_1 { _id: 1 } m30001| Fri Feb 22 12:33:24.599 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:24.606 [conn3] build index test.covered_negative_1 { a: 1.0, b: -1.0, c: 1.0 } m30001| Fri Feb 22 12:33:24.608 [conn3] build index done. scanned 100 total records. 
0.001 secs m30001| Fri Feb 22 12:33:24.608 [conn3] build index test.covered_negative_1 { e: 1.0 } m30001| Fri Feb 22 12:33:24.609 [conn3] build index done. scanned 100 total records. 0.001 secs m30001| Fri Feb 22 12:33:24.610 [conn3] build index test.covered_negative_1 { d: 1.0 } m30001| Fri Feb 22 12:33:24.611 [conn3] build index done. scanned 100 total records. 0 secs m30001| Fri Feb 22 12:33:24.611 [conn3] build index test.covered_negative_1 { f: "hashed" } m30001| Fri Feb 22 12:33:24.612 [conn3] build index done. scanned 100 total records. 0 secs all tests passed 25ms >>>>>>>>>>>>>>> skipping jstests/disk !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/profile2.js ******************************************* Test : jstests/group7.js ... m30999| Fri Feb 22 12:33:24.622 [conn1] DROP: test.jstests_group7 m30001| Fri Feb 22 12:33:24.622 [conn3] CMD: drop test.jstests_group7 m30999| Fri Feb 22 12:33:24.622 [conn1] DROP: test.jstests_group7 m30001| Fri Feb 22 12:33:24.623 [conn3] CMD: drop test.jstests_group7 m30001| Fri Feb 22 12:33:24.623 [conn3] build index test.jstests_group7 { _id: 1 } m30001| Fri Feb 22 12:33:24.624 [conn3] build index done. scanned 0 total records. 
0 secs Fri Feb 22 12:33:24.733 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( a = 0; a < 50; ++a ) { db.jstests_group7.update( {$atomic:true}, {$set:{a:a}}, false, true ); db.getLastError(); } localhost:30999/admin sh14619| MongoDB shell version: 2.4.0-rc1-pre- sh14619| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:33:24.798 [mongosMain] connection accepted from 127.0.0.1:58926 #2 (2 connections now open) m30000| Fri Feb 22 12:33:24.801 [initandlisten] connection accepted from 127.0.0.1:46525 #8 (8 connections now open) m30001| Fri Feb 22 12:33:24.801 [initandlisten] connection accepted from 127.0.0.1:49383 #6 (6 connections now open) [ { "a" : 0 } ] m30001| Fri Feb 22 12:33:25.100 [conn3] command test.$cmd command: { group: { key: { a: 1.0 }, initial: {}, ns: "jstests_group7", $reduce: function (){} } } ntoreturn:1 keyUpdates:0 numYields: 21 locks(micros) r:57928 reslen:505 279ms sh14619| null 770ms ******************************************* Test : jstests/basicb.js ... m30999| Fri Feb 22 12:33:25.388 [conn2] end connection 127.0.0.1:58926 (1 connection now open) m30999| Fri Feb 22 12:33:25.393 [conn1] DROP: test.basicb m30001| Fri Feb 22 12:33:25.393 [conn3] CMD: drop test.basicb 8ms ******************************************* Test : jstests/fts1.js ... 
m30000| Fri Feb 22 12:33:25.401 [initandlisten] connection accepted from 127.0.0.1:57652 #9 (9 connections now open)
m30001| Fri Feb 22 12:33:25.402 [initandlisten] connection accepted from 127.0.0.1:43594 #7 (7 connections now open)
m30999| Fri Feb 22 12:33:25.402 [conn1] DROP: test.text1
m30001| Fri Feb 22 12:33:25.402 [conn3] CMD: drop test.text1
m30001| Fri Feb 22 12:33:25.403 [conn3] build index test.text1 { _id: 1 }
m30001| Fri Feb 22 12:33:25.403 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:25.404 [conn3] build index test.text1 { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:33:25.404 [conn3] build index done. scanned 4 total records. 0 secs
13ms
******************************************* Test : jstests/all4.js ...
m30999| Fri Feb 22 12:33:25.411 [conn1] DROP: test.jstests_all4
m30001| Fri Feb 22 12:33:25.411 [conn3] CMD: drop test.jstests_all4
m30001| Fri Feb 22 12:33:25.416 [conn3] build index test.jstests_all4 { _id: 1 }
m30001| Fri Feb 22 12:33:25.416 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:25.437 [conn7] end connection 127.0.0.1:43594 (6 connections now open)
m30001| Fri Feb 22 12:33:25.437 [conn5] end connection 127.0.0.1:65413 (5 connections now open)
35ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle1.js
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped_server7543.js
>>>>>>>>>>>>>>> skipping jstests/tool
******************************************* Test : jstests/objid4.js ...
7ms
******************************************* Test : jstests/covered_index_simple_id.js ...
m30000| Fri Feb 22 12:33:25.452 [conn7] end connection 127.0.0.1:41443 (8 connections now open)
m30000| Fri Feb 22 12:33:25.452 [conn9] end connection 127.0.0.1:57652 (8 connections now open)
m30999| Fri Feb 22 12:33:25.456 [conn1] DROP: test.covered_simple_id
m30001| Fri Feb 22 12:33:25.456 [conn3] CMD: drop test.covered_simple_id
m30001| Fri Feb 22 12:33:25.457 [conn3] build index test.covered_simple_id { _id: 1 }
m30001| Fri Feb 22 12:33:25.457 [conn3] build index done. scanned 0 total records. 0 secs
all tests pass
11ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/cursor6.js
******************************************* Test : jstests/index_many2.js ...
m30999| Fri Feb 22 12:33:25.462 [conn1] DROP: test.index_many2
m30001| Fri Feb 22 12:33:25.462 [conn3] CMD: drop test.index_many2
m30001| Fri Feb 22 12:33:25.463 [conn3] build index test.index_many2 { _id: 1 }
m30001| Fri Feb 22 12:33:25.464 [conn3] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:33:25.465 [conn3] build index test.index_many2 { x1: 1.0 }
m30001| Fri Feb 22 12:33:25.465 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.466 [conn3] build index test.index_many2 { x2: 1.0 }
m30001| Fri Feb 22 12:33:25.467 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.467 [conn3] build index test.index_many2 { x3: 1.0 }
m30001| Fri Feb 22 12:33:25.467 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.468 [conn3] build index test.index_many2 { x4: 1.0 }
m30001| Fri Feb 22 12:33:25.468 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.469 [conn3] build index test.index_many2 { x5: 1.0 }
m30001| Fri Feb 22 12:33:25.469 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.469 [conn3] build index test.index_many2 { x6: 1.0 }
m30001| Fri Feb 22 12:33:25.470 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.470 [conn3] build index test.index_many2 { x7: 1.0 }
m30001| Fri Feb 22 12:33:25.471 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.471 [conn3] build index test.index_many2 { x8: 1.0 }
m30001| Fri Feb 22 12:33:25.471 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.472 [conn3] build index test.index_many2 { x9: 1.0 }
m30001| Fri Feb 22 12:33:25.472 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.472 [conn3] build index test.index_many2 { x10: 1.0 }
m30001| Fri Feb 22 12:33:25.473 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.473 [conn3] build index test.index_many2 { x11: 1.0 }
m30001| Fri Feb 22 12:33:25.474 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.474 [conn3] build index test.index_many2 { x12: 1.0 }
m30001| Fri Feb 22 12:33:25.475 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.475 [conn3] build index test.index_many2 { x13: 1.0 }
m30001| Fri Feb 22 12:33:25.475 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.476 [conn3] build index test.index_many2 { x14: 1.0 }
m30001| Fri Feb 22 12:33:25.476 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.476 [conn3] build index test.index_many2 { x15: 1.0 }
m30001| Fri Feb 22 12:33:25.477 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.477 [conn3] build index test.index_many2 { x16: 1.0 }
m30001| Fri Feb 22 12:33:25.477 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.478 [conn3] build index test.index_many2 { x17: 1.0 }
m30001| Fri Feb 22 12:33:25.478 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.479 [conn3] build index test.index_many2 { x18: 1.0 }
m30001| Fri Feb 22 12:33:25.479 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.480 [conn3] build index test.index_many2 { x19: 1.0 }
m30001| Fri Feb 22 12:33:25.480 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.480 [conn3] build index test.index_many2 { x20: 1.0 }
m30001| Fri Feb 22 12:33:25.481 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.481 [conn3] build index test.index_many2 { x21: 1.0 }
m30001| Fri Feb 22 12:33:25.482 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.483 [conn3] build index test.index_many2 { x22: 1.0 }
m30001| Fri Feb 22 12:33:25.483 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.484 [conn3] build index test.index_many2 { x23: 1.0 }
m30001| Fri Feb 22 12:33:25.484 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.485 [conn3] build index test.index_many2 { x24: 1.0 }
m30001| Fri Feb 22 12:33:25.485 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.486 [conn3] build index test.index_many2 { x25: 1.0 }
m30001| Fri Feb 22 12:33:25.486 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.486 [conn3] build index test.index_many2 { x26: 1.0 }
m30001| Fri Feb 22 12:33:25.487 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.487 [conn3] build index test.index_many2 { x27: 1.0 }
m30001| Fri Feb 22 12:33:25.488 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.488 [conn3] build index test.index_many2 { x28: 1.0 }
m30001| Fri Feb 22 12:33:25.489 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.489 [conn3] build index test.index_many2 { x29: 1.0 }
m30001| Fri Feb 22 12:33:25.489 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.490 [conn3] build index test.index_many2 { x30: 1.0 }
m30001| Fri Feb 22 12:33:25.490 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.491 [conn3] build index test.index_many2 { x31: 1.0 }
m30001| Fri Feb 22 12:33:25.491 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.492 [conn3] build index test.index_many2 { x32: 1.0 }
m30001| Fri Feb 22 12:33:25.492 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.492 [conn3] build index test.index_many2 { x33: 1.0 }
m30001| Fri Feb 22 12:33:25.493 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.493 [conn3] build index test.index_many2 { x34: 1.0 }
m30001| Fri Feb 22 12:33:25.494 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.494 [conn3] build index test.index_many2 { x35: 1.0 }
m30001| Fri Feb 22 12:33:25.495 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.495 [conn3] build index test.index_many2 { x36: 1.0 }
m30001| Fri Feb 22 12:33:25.495 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.496 [conn3] build index test.index_many2 { x37: 1.0 }
m30001| Fri Feb 22 12:33:25.496 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.497 [conn3] build index test.index_many2 { x38: 1.0 }
m30001| Fri Feb 22 12:33:25.497 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.497 [conn3] build index test.index_many2 { x39: 1.0 }
m30001| Fri Feb 22 12:33:25.498 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.498 [conn3] build index test.index_many2 { x40: 1.0 }
m30001| Fri Feb 22 12:33:25.499 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.499 [conn3] build index test.index_many2 { x41: 1.0 }
m30001| Fri Feb 22 12:33:25.499 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.500 [conn3] build index test.index_many2 { x42: 1.0 }
m30001| Fri Feb 22 12:33:25.500 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.501 [conn3] build index test.index_many2 { x43: 1.0 }
m30001| Fri Feb 22 12:33:25.501 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.502 [conn3] build index test.index_many2 { x44: 1.0 }
m30001| Fri Feb 22 12:33:25.502 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.502 [conn3] build index test.index_many2 { x45: 1.0 }
m30001| Fri Feb 22 12:33:25.503 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.503 [conn3] build index test.index_many2 { x46: 1.0 }
m30001| Fri Feb 22 12:33:25.503 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.504 [conn3] build index test.index_many2 { x47: 1.0 }
m30001| Fri Feb 22 12:33:25.504 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.505 [conn3] build index test.index_many2 { x48: 1.0 }
m30001| Fri Feb 22 12:33:25.505 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.505 [conn3] build index test.index_many2 { x49: 1.0 }
m30001| Fri Feb 22 12:33:25.506 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.506 [conn3] build index test.index_many2 { x50: 1.0 }
m30001| Fri Feb 22 12:33:25.507 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.507 [conn3] build index test.index_many2 { x51: 1.0 }
m30001| Fri Feb 22 12:33:25.507 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.508 [conn3] build index test.index_many2 { x52: 1.0 }
m30001| Fri Feb 22 12:33:25.508 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.508 [conn3] build index test.index_many2 { x53: 1.0 }
m30001| Fri Feb 22 12:33:25.509 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.510 [conn3] build index test.index_many2 { x54: 1.0 }
m30001| Fri Feb 22 12:33:25.510 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.511 [conn3] build index test.index_many2 { x55: 1.0 }
m30001| Fri Feb 22 12:33:25.511 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.512 [conn3] build index test.index_many2 { x56: 1.0 }
m30001| Fri Feb 22 12:33:25.512 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.513 [conn3] build index test.index_many2 { x57: 1.0 }
m30001| Fri Feb 22 12:33:25.513 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.514 [conn3] build index test.index_many2 { x58: 1.0 }
m30001| Fri Feb 22 12:33:25.514 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.515 [conn3] build index test.index_many2 { x59: 1.0 }
m30001| Fri Feb 22 12:33:25.515 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.515 [conn3] build index test.index_many2 { x60: 1.0 }
m30001| Fri Feb 22 12:33:25.516 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.516 [conn3] build index test.index_many2 { x61: 1.0 }
m30001| Fri Feb 22 12:33:25.517 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.517 [conn3] build index test.index_many2 { x62: 1.0 }
m30001| Fri Feb 22 12:33:25.518 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.518 [conn3] build index test.index_many2 { x63: 1.0 }
m30001| Fri Feb 22 12:33:25.519 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.519 [conn3] add index fails, too many indexes for test.index_many2 key:{ x64: 1.0 }
m30001| Fri Feb 22 12:33:25.520 [conn3] add index fails, too many indexes for test.index_many2 key:{ x65: 1.0 }
m30001| Fri Feb 22 12:33:25.520 [conn3] add index fails, too many indexes for test.index_many2 key:{ x66: 1.0 }
m30001| Fri Feb 22 12:33:25.521 [conn3] add index fails, too many indexes for test.index_many2 key:{ x67: 1.0 }
m30001| Fri Feb 22 12:33:25.521 [conn3] add index fails, too many indexes for test.index_many2 key:{ x68: 1.0 }
m30001| Fri Feb 22 12:33:25.521 [conn3] add index fails, too many indexes for test.index_many2 key:{ x69: 1.0 }
m30001| Fri Feb 22 12:33:25.522 [conn3] add index fails, too many indexes for test.index_many2 key:{ x70: 1.0 }
m30001| Fri Feb 22 12:33:25.522 [conn3] add index fails, too many indexes for test.index_many2 key:{ x71: 1.0 }
m30001| Fri Feb 22 12:33:25.523 [conn3] add index fails, too many indexes for test.index_many2 key:{ x72: 1.0 }
m30001| Fri Feb 22 12:33:25.523 [conn3] add index fails, too many indexes for test.index_many2 key:{ x73: 1.0 }
m30001| Fri Feb 22 12:33:25.523 [conn3] add index fails, too many indexes for test.index_many2 key:{ x74: 1.0 }
m30001| Fri Feb 22 12:33:25.524 [conn3] add index fails, too many indexes for test.index_many2 key:{ x75: 1.0 }
m30001| Fri Feb 22 12:33:25.524 [conn3] add index fails, too many indexes for test.index_many2 key:{ x76: 1.0 }
m30001| Fri Feb 22 12:33:25.525 [conn3] add index fails, too many indexes for test.index_many2 key:{ x77: 1.0 }
m30001| Fri Feb 22 12:33:25.525 [conn3] add index fails, too many indexes for test.index_many2 key:{ x78: 1.0 }
m30001| Fri Feb 22 12:33:25.525 [conn3] add index fails, too many indexes for test.index_many2 key:{ x79: 1.0 }
m30001| Fri Feb 22 12:33:25.526 [conn3] add index fails, too many indexes for test.index_many2 key:{ x80: 1.0 }
m30001| Fri Feb 22 12:33:25.526 [conn3] add index fails, too many indexes for test.index_many2 key:{ x81: 1.0 }
m30001| Fri Feb 22 12:33:25.526 [conn3] add index fails, too many indexes for test.index_many2 key:{ x82: 1.0 }
m30001| Fri Feb 22 12:33:25.529 [conn3] add index fails, too many indexes for test.index_many2 key:{ x83: 1.0 }
m30001| Fri Feb 22 12:33:25.529 [conn3] add index fails, too many indexes for test.index_many2 key:{ x84: 1.0 }
m30001| Fri Feb 22 12:33:25.529 [conn3] add index fails, too many indexes for test.index_many2 key:{ x85: 1.0 }
m30001| Fri Feb 22 12:33:25.530 [conn3] add index fails, too many indexes for test.index_many2 key:{ x86: 1.0 }
m30001| Fri Feb 22 12:33:25.530 [conn3] add index fails, too many indexes for test.index_many2 key:{ x87: 1.0 }
m30001| Fri Feb 22 12:33:25.531 [conn3] add index fails, too many indexes for test.index_many2 key:{ x88: 1.0 }
m30001| Fri Feb 22 12:33:25.531 [conn3] add index fails, too many indexes for test.index_many2 key:{ x89: 1.0 }
m30001| Fri Feb 22 12:33:25.531 [conn3] add index fails, too many indexes for test.index_many2 key:{ x90: 1.0 }
m30001| Fri Feb 22 12:33:25.532 [conn3] add index fails, too many indexes for test.index_many2 key:{ x91: 1.0 }
m30001| Fri Feb 22 12:33:25.532 [conn3] add index fails, too many indexes for test.index_many2 key:{ x92: 1.0 }
m30001| Fri Feb 22 12:33:25.533 [conn3] add index fails, too many indexes for test.index_many2 key:{ x93: 1.0 }
m30001| Fri Feb 22 12:33:25.533 [conn3] add index fails, too many indexes for test.index_many2 key:{ x94: 1.0 }
m30001| Fri Feb 22 12:33:25.533 [conn3] add index fails, too many indexes for test.index_many2 key:{ x95: 1.0 }
m30001| Fri Feb 22 12:33:25.534 [conn3] add index fails, too many indexes for test.index_many2 key:{ x96: 1.0 }
m30001| Fri Feb 22 12:33:25.534 [conn3] add index fails, too many indexes for test.index_many2 key:{ x97: 1.0 }
m30001| Fri Feb 22 12:33:25.534 [conn3] add index fails, too many indexes for test.index_many2 key:{ x98: 1.0 }
m30001| Fri Feb 22 12:33:25.535 [conn3] add index fails, too many indexes for test.index_many2 key:{ x99: 1.0 }
m30001| Fri Feb 22 12:33:25.535 [conn3] add index fails, too many indexes for test.index_many2 key:{ x100: 1.0 }
m30001| Fri Feb 22 12:33:25.536 [conn3] add index fails, too many indexes for test.index_many2 key:{ x101: 1.0 }
m30001| Fri Feb 22 12:33:25.536 [conn3] add index fails, too many indexes for test.index_many2 key:{ x102: 1.0 }
m30001| Fri Feb 22 12:33:25.537 [conn3] add index fails, too many indexes for test.index_many2 key:{ x103: 1.0 }
m30001| Fri Feb 22 12:33:25.537 [conn3] add index fails, too many indexes for test.index_many2 key:{ x104: 1.0 }
m30001| Fri Feb 22 12:33:25.537 [conn3] add index fails, too many indexes for test.index_many2 key:{ x105: 1.0 }
m30001| Fri Feb 22 12:33:25.538 [conn3] add index fails, too many indexes for test.index_many2 key:{ x106: 1.0 }
m30001| Fri Feb 22 12:33:25.538 [conn3] add index fails, too many indexes for test.index_many2 key:{ x107: 1.0 }
m30001| Fri Feb 22 12:33:25.539 [conn3] add index fails, too many indexes for test.index_many2 key:{ x108: 1.0 }
m30001| Fri Feb 22 12:33:25.539 [conn3] add index fails, too many indexes for test.index_many2 key:{ x109: 1.0 }
m30001| Fri Feb 22 12:33:25.539 [conn3] add index fails, too many indexes for test.index_many2 key:{ x110: 1.0 }
m30001| Fri Feb 22 12:33:25.540 [conn3] add index fails, too many indexes for test.index_many2 key:{ x111: 1.0 }
m30001| Fri Feb 22 12:33:25.540 [conn3] add index fails, too many indexes for test.index_many2 key:{ x112: 1.0 }
m30001| Fri Feb 22 12:33:25.541 [conn3] add index fails, too many indexes for test.index_many2 key:{ x113: 1.0 }
m30001| Fri Feb 22 12:33:25.542 [conn3] add index fails, too many indexes for test.index_many2 key:{ x114: 1.0 }
m30001| Fri Feb 22 12:33:25.542 [conn3] add index fails, too many indexes for test.index_many2 key:{ x115: 1.0 }
m30001| Fri Feb 22 12:33:25.542 [conn3] add index fails, too many indexes for test.index_many2 key:{ x116: 1.0 }
m30001| Fri Feb 22 12:33:25.543 [conn3] add index fails, too many indexes for test.index_many2 key:{ x117: 1.0 }
m30001| Fri Feb 22 12:33:25.543 [conn3] add index fails, too many indexes for test.index_many2 key:{ x118: 1.0 }
m30001| Fri Feb 22 12:33:25.544 [conn3] add index fails, too many indexes for test.index_many2 key:{ x119: 1.0 }
m30001| Fri Feb 22 12:33:25.544 [conn3] add index fails, too many indexes for test.index_many2 key:{ x120: 1.0 }
m30001| Fri Feb 22 12:33:25.545 [conn3] add index fails, too many indexes for test.index_many2 key:{ x121: 1.0 }
m30001| Fri Feb 22 12:33:25.545 [conn3] add index fails, too many indexes for test.index_many2 key:{ x122: 1.0 }
m30001| Fri Feb 22 12:33:25.545 [conn3] add index fails, too many indexes for test.index_many2 key:{ x123: 1.0 }
m30001| Fri Feb 22 12:33:25.546 [conn3] add index fails, too many indexes for test.index_many2 key:{ x124: 1.0 }
m30001| Fri Feb 22 12:33:25.546 [conn3] add index fails, too many indexes for test.index_many2 key:{ x125: 1.0 }
m30001| Fri Feb 22 12:33:25.546 [conn3] add index fails, too many indexes for test.index_many2 key:{ x126: 1.0 }
m30001| Fri Feb 22 12:33:25.547 [conn3] add index fails, too many indexes for test.index_many2 key:{ x127: 1.0 }
m30001| Fri Feb 22 12:33:25.548 [conn3] add index fails, too many indexes for test.index_many2 key:{ x128: 1.0 }
m30001| Fri Feb 22 12:33:25.548 [conn3] add index fails, too many indexes for test.index_many2 key:{ x129: 1.0 }
m30001| Fri Feb 22 12:33:25.548 [conn3] add index fails, too many indexes for test.index_many2 key:{ x130: 1.0 }
m30001| Fri Feb 22 12:33:25.549 [conn3] add index fails, too many indexes for test.index_many2 key:{ x131: 1.0 }
m30001| Fri Feb 22 12:33:25.549 [conn3] add index fails, too many indexes for test.index_many2 key:{ x132: 1.0 }
m30001| Fri Feb 22 12:33:25.550 [conn3] add index fails, too many indexes for test.index_many2 key:{ x133: 1.0 }
m30001| Fri Feb 22 12:33:25.550 [conn3] add index fails, too many indexes for test.index_many2 key:{ x134: 1.0 }
m30001| Fri Feb 22 12:33:25.550 [conn3] add index fails, too many indexes for test.index_many2 key:{ x135: 1.0 }
m30001| Fri Feb 22 12:33:25.551 [conn3] add index fails, too many indexes for test.index_many2 key:{ x136: 1.0 }
m30001| Fri Feb 22 12:33:25.551 [conn3] add index fails, too many indexes for test.index_many2 key:{ x137: 1.0 }
m30001| Fri Feb 22 12:33:25.551 [conn3] add index fails, too many indexes for test.index_many2 key:{ x138: 1.0 }
m30001| Fri Feb 22 12:33:25.552 [conn3] add index fails, too many indexes for test.index_many2 key:{ x139: 1.0 }
m30001| Fri Feb 22 12:33:25.552 [conn3] add index fails, too many indexes for test.index_many2 key:{ x140: 1.0 }
m30001| Fri Feb 22 12:33:25.553 [conn3] add index fails, too many indexes for test.index_many2 key:{ x141: 1.0 }
m30001| Fri Feb 22 12:33:25.553 [conn3] add index fails, too many indexes for test.index_many2 key:{ x142: 1.0 }
m30001| Fri Feb 22 12:33:25.554 [conn3] add index fails, too many indexes for test.index_many2 key:{ x143: 1.0 }
m30001| Fri Feb 22 12:33:25.554 [conn3] add index fails, too many indexes for test.index_many2 key:{ x144: 1.0 }
m30001| Fri Feb 22 12:33:25.555 [conn3] add index fails, too many indexes for test.index_many2 key:{ x145: 1.0 }
m30001| Fri Feb 22 12:33:25.555 [conn3] add index fails, too many indexes for test.index_many2 key:{ x146: 1.0 }
m30001| Fri Feb 22 12:33:25.556 [conn3] add index fails, too many indexes for test.index_many2 key:{ x147: 1.0 }
m30001| Fri Feb 22 12:33:25.556 [conn3] add index fails, too many indexes for test.index_many2 key:{ x148: 1.0 }
m30001| Fri Feb 22 12:33:25.556 [conn3] add index fails, too many indexes for test.index_many2 key:{ x149: 1.0 }
m30001| Fri Feb 22 12:33:25.557 [conn3] add index fails, too many indexes for test.index_many2 key:{ x150: 1.0 }
m30001| Fri Feb 22 12:33:25.557 [conn3] add index fails, too many indexes for test.index_many2 key:{ x151: 1.0 }
m30001| Fri Feb 22 12:33:25.557 [conn3] add index fails, too many indexes for test.index_many2 key:{ x152: 1.0 }
m30001| Fri Feb 22 12:33:25.558 [conn3] add index fails, too many indexes for test.index_many2 key:{ x153: 1.0 }
m30001| Fri Feb 22 12:33:25.562 [conn3] add index fails, too many indexes for test.index_many2 key:{ x154: 1.0 }
m30001| Fri Feb 22 12:33:25.563 [conn3] add index fails, too many indexes for test.index_many2 key:{ x155: 1.0 }
m30001| Fri Feb 22 12:33:25.563 [conn3] add index fails, too many indexes for test.index_many2 key:{ x156: 1.0 }
m30001| Fri Feb 22 12:33:25.564 [conn3] add index fails, too many indexes for test.index_many2 key:{ x157: 1.0 }
m30001| Fri Feb 22 12:33:25.564 [conn3] add index fails, too many indexes for test.index_many2 key:{ x158: 1.0 }
m30001| Fri Feb 22 12:33:25.564 [conn3] add index fails, too many indexes for test.index_many2 key:{ x159: 1.0 }
m30001| Fri Feb 22 12:33:25.565 [conn3] add index fails, too many indexes for test.index_many2 key:{ x160: 1.0 }
m30001| Fri Feb 22 12:33:25.565 [conn3] add index fails, too many indexes for test.index_many2 key:{ x161: 1.0 }
m30001| Fri Feb 22 12:33:25.566 [conn3] add index fails, too many indexes for test.index_many2 key:{ x162: 1.0 }
m30001| Fri Feb 22 12:33:25.566 [conn3] add index fails, too many indexes for test.index_many2 key:{ x163: 1.0 }
m30001| Fri Feb 22 12:33:25.566 [conn3] add index fails, too many indexes for test.index_many2 key:{ x164: 1.0 }
m30001| Fri Feb 22 12:33:25.567 [conn3] add index fails, too many indexes for
test.index_many2 key:{ x165: 1.0 } m30001| Fri Feb 22 12:33:25.567 [conn3] add index fails, too many indexes for test.index_many2 key:{ x166: 1.0 } m30001| Fri Feb 22 12:33:25.567 [conn3] add index fails, too many indexes for test.index_many2 key:{ x167: 1.0 } m30001| Fri Feb 22 12:33:25.568 [conn3] add index fails, too many indexes for test.index_many2 key:{ x168: 1.0 } m30001| Fri Feb 22 12:33:25.568 [conn3] add index fails, too many indexes for test.index_many2 key:{ x169: 1.0 } m30001| Fri Feb 22 12:33:25.569 [conn3] add index fails, too many indexes for test.index_many2 key:{ x170: 1.0 } m30001| Fri Feb 22 12:33:25.569 [conn3] add index fails, too many indexes for test.index_many2 key:{ x171: 1.0 } m30001| Fri Feb 22 12:33:25.569 [conn3] add index fails, too many indexes for test.index_many2 key:{ x172: 1.0 } m30001| Fri Feb 22 12:33:25.570 [conn3] add index fails, too many indexes for test.index_many2 key:{ x173: 1.0 } m30001| Fri Feb 22 12:33:25.570 [conn3] add index fails, too many indexes for test.index_many2 key:{ x174: 1.0 } m30001| Fri Feb 22 12:33:25.570 [conn3] add index fails, too many indexes for test.index_many2 key:{ x175: 1.0 } m30001| Fri Feb 22 12:33:25.571 [conn3] add index fails, too many indexes for test.index_many2 key:{ x176: 1.0 } m30001| Fri Feb 22 12:33:25.571 [conn3] add index fails, too many indexes for test.index_many2 key:{ x177: 1.0 } m30001| Fri Feb 22 12:33:25.571 [conn3] add index fails, too many indexes for test.index_many2 key:{ x178: 1.0 } m30001| Fri Feb 22 12:33:25.572 [conn3] add index fails, too many indexes for test.index_many2 key:{ x179: 1.0 } m30001| Fri Feb 22 12:33:25.572 [conn3] add index fails, too many indexes for test.index_many2 key:{ x180: 1.0 } m30001| Fri Feb 22 12:33:25.573 [conn3] add index fails, too many indexes for test.index_many2 key:{ x181: 1.0 } m30001| Fri Feb 22 12:33:25.573 [conn3] add index fails, too many indexes for test.index_many2 key:{ x182: 1.0 } m30001| Fri Feb 22 12:33:25.573 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x183: 1.0 } m30001| Fri Feb 22 12:33:25.574 [conn3] add index fails, too many indexes for test.index_many2 key:{ x184: 1.0 } m30001| Fri Feb 22 12:33:25.574 [conn3] add index fails, too many indexes for test.index_many2 key:{ x185: 1.0 } m30001| Fri Feb 22 12:33:25.575 [conn3] add index fails, too many indexes for test.index_many2 key:{ x186: 1.0 } m30001| Fri Feb 22 12:33:25.575 [conn3] add index fails, too many indexes for test.index_many2 key:{ x187: 1.0 } m30001| Fri Feb 22 12:33:25.575 [conn3] add index fails, too many indexes for test.index_many2 key:{ x188: 1.0 } m30001| Fri Feb 22 12:33:25.576 [conn3] add index fails, too many indexes for test.index_many2 key:{ x189: 1.0 } m30001| Fri Feb 22 12:33:25.576 [conn3] add index fails, too many indexes for test.index_many2 key:{ x190: 1.0 } m30001| Fri Feb 22 12:33:25.576 [conn3] add index fails, too many indexes for test.index_many2 key:{ x191: 1.0 } m30001| Fri Feb 22 12:33:25.577 [conn3] add index fails, too many indexes for test.index_many2 key:{ x192: 1.0 } m30001| Fri Feb 22 12:33:25.577 [conn3] add index fails, too many indexes for test.index_many2 key:{ x193: 1.0 } m30001| Fri Feb 22 12:33:25.578 [conn3] add index fails, too many indexes for test.index_many2 key:{ x194: 1.0 } m30001| Fri Feb 22 12:33:25.578 [conn3] add index fails, too many indexes for test.index_many2 key:{ x195: 1.0 } m30001| Fri Feb 22 12:33:25.578 [conn3] add index fails, too many indexes for test.index_many2 key:{ x196: 1.0 } m30001| Fri Feb 22 12:33:25.579 [conn3] add index fails, too many indexes for test.index_many2 key:{ x197: 1.0 } m30001| Fri Feb 22 12:33:25.579 [conn3] add index fails, too many indexes for test.index_many2 key:{ x198: 1.0 } m30001| Fri Feb 22 12:33:25.580 [conn3] add index fails, too many indexes for test.index_many2 key:{ x199: 1.0 } m30001| Fri Feb 22 12:33:25.580 [conn3] add index fails, too many indexes for test.index_many2 key:{ x200: 1.0 } m30001| 
Fri Feb 22 12:33:25.581 [conn3] add index fails, too many indexes for test.index_many2 key:{ x201: 1.0 } m30001| Fri Feb 22 12:33:25.581 [conn3] add index fails, too many indexes for test.index_many2 key:{ x202: 1.0 } m30001| Fri Feb 22 12:33:25.582 [conn3] add index fails, too many indexes for test.index_many2 key:{ x203: 1.0 } m30001| Fri Feb 22 12:33:25.582 [conn3] add index fails, too many indexes for test.index_many2 key:{ x204: 1.0 } m30001| Fri Feb 22 12:33:25.582 [conn3] add index fails, too many indexes for test.index_many2 key:{ x205: 1.0 } m30001| Fri Feb 22 12:33:25.583 [conn3] add index fails, too many indexes for test.index_many2 key:{ x206: 1.0 } m30001| Fri Feb 22 12:33:25.583 [conn3] add index fails, too many indexes for test.index_many2 key:{ x207: 1.0 } m30001| Fri Feb 22 12:33:25.584 [conn3] add index fails, too many indexes for test.index_many2 key:{ x208: 1.0 } m30001| Fri Feb 22 12:33:25.584 [conn3] add index fails, too many indexes for test.index_many2 key:{ x209: 1.0 } m30001| Fri Feb 22 12:33:25.584 [conn3] add index fails, too many indexes for test.index_many2 key:{ x210: 1.0 } m30001| Fri Feb 22 12:33:25.585 [conn3] add index fails, too many indexes for test.index_many2 key:{ x211: 1.0 } m30001| Fri Feb 22 12:33:25.585 [conn3] add index fails, too many indexes for test.index_many2 key:{ x212: 1.0 } m30001| Fri Feb 22 12:33:25.586 [conn3] add index fails, too many indexes for test.index_many2 key:{ x213: 1.0 } m30001| Fri Feb 22 12:33:25.586 [conn3] add index fails, too many indexes for test.index_many2 key:{ x214: 1.0 } m30001| Fri Feb 22 12:33:25.587 [conn3] add index fails, too many indexes for test.index_many2 key:{ x215: 1.0 } m30001| Fri Feb 22 12:33:25.588 [conn3] add index fails, too many indexes for test.index_many2 key:{ x216: 1.0 } m30001| Fri Feb 22 12:33:25.588 [conn3] add index fails, too many indexes for test.index_many2 key:{ x217: 1.0 } m30001| Fri Feb 22 12:33:25.589 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x218: 1.0 } m30001| Fri Feb 22 12:33:25.589 [conn3] add index fails, too many indexes for test.index_many2 key:{ x219: 1.0 } m30001| Fri Feb 22 12:33:25.589 [conn3] add index fails, too many indexes for test.index_many2 key:{ x220: 1.0 } m30001| Fri Feb 22 12:33:25.590 [conn3] add index fails, too many indexes for test.index_many2 key:{ x221: 1.0 } m30001| Fri Feb 22 12:33:25.590 [conn3] add index fails, too many indexes for test.index_many2 key:{ x222: 1.0 } m30001| Fri Feb 22 12:33:25.591 [conn3] add index fails, too many indexes for test.index_many2 key:{ x223: 1.0 } m30001| Fri Feb 22 12:33:25.591 [conn3] add index fails, too many indexes for test.index_many2 key:{ x224: 1.0 } m30001| Fri Feb 22 12:33:25.592 [conn3] add index fails, too many indexes for test.index_many2 key:{ x225: 1.0 } m30001| Fri Feb 22 12:33:25.592 [conn3] add index fails, too many indexes for test.index_many2 key:{ x226: 1.0 } m30001| Fri Feb 22 12:33:25.592 [conn3] add index fails, too many indexes for test.index_many2 key:{ x227: 1.0 } m30001| Fri Feb 22 12:33:25.593 [conn3] add index fails, too many indexes for test.index_many2 key:{ x228: 1.0 } m30001| Fri Feb 22 12:33:25.593 [conn3] add index fails, too many indexes for test.index_many2 key:{ x229: 1.0 } m30001| Fri Feb 22 12:33:25.594 [conn3] add index fails, too many indexes for test.index_many2 key:{ x230: 1.0 } m30001| Fri Feb 22 12:33:25.594 [conn3] add index fails, too many indexes for test.index_many2 key:{ x231: 1.0 } m30001| Fri Feb 22 12:33:25.594 [conn3] add index fails, too many indexes for test.index_many2 key:{ x232: 1.0 } m30001| Fri Feb 22 12:33:25.596 [conn3] add index fails, too many indexes for test.index_many2 key:{ x233: 1.0 } m30001| Fri Feb 22 12:33:25.596 [conn3] add index fails, too many indexes for test.index_many2 key:{ x234: 1.0 } m30001| Fri Feb 22 12:33:25.597 [conn3] add index fails, too many indexes for test.index_many2 key:{ x235: 1.0 } m30001| Fri Feb 22 12:33:25.597 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x236: 1.0 } m30001| Fri Feb 22 12:33:25.597 [conn3] add index fails, too many indexes for test.index_many2 key:{ x237: 1.0 } m30001| Fri Feb 22 12:33:25.598 [conn3] add index fails, too many indexes for test.index_many2 key:{ x238: 1.0 } m30001| Fri Feb 22 12:33:25.598 [conn3] add index fails, too many indexes for test.index_many2 key:{ x239: 1.0 } m30001| Fri Feb 22 12:33:25.598 [conn3] add index fails, too many indexes for test.index_many2 key:{ x240: 1.0 } m30001| Fri Feb 22 12:33:25.599 [conn3] add index fails, too many indexes for test.index_many2 key:{ x241: 1.0 } m30001| Fri Feb 22 12:33:25.599 [conn3] add index fails, too many indexes for test.index_many2 key:{ x242: 1.0 } m30001| Fri Feb 22 12:33:25.600 [conn3] add index fails, too many indexes for test.index_many2 key:{ x243: 1.0 } m30001| Fri Feb 22 12:33:25.600 [conn3] add index fails, too many indexes for test.index_many2 key:{ x244: 1.0 } m30001| Fri Feb 22 12:33:25.600 [conn3] add index fails, too many indexes for test.index_many2 key:{ x245: 1.0 } m30001| Fri Feb 22 12:33:25.601 [conn3] add index fails, too many indexes for test.index_many2 key:{ x246: 1.0 } m30001| Fri Feb 22 12:33:25.601 [conn3] add index fails, too many indexes for test.index_many2 key:{ x247: 1.0 } m30001| Fri Feb 22 12:33:25.601 [conn3] add index fails, too many indexes for test.index_many2 key:{ x248: 1.0 } m30001| Fri Feb 22 12:33:25.602 [conn3] add index fails, too many indexes for test.index_many2 key:{ x249: 1.0 } m30001| Fri Feb 22 12:33:25.602 [conn3] add index fails, too many indexes for test.index_many2 key:{ x250: 1.0 } m30001| Fri Feb 22 12:33:25.603 [conn3] add index fails, too many indexes for test.index_many2 key:{ x251: 1.0 } m30001| Fri Feb 22 12:33:25.603 [conn3] add index fails, too many indexes for test.index_many2 key:{ x252: 1.0 } m30001| Fri Feb 22 12:33:25.603 [conn3] add index fails, too many indexes for test.index_many2 key:{ x253: 1.0 } m30001| 
Fri Feb 22 12:33:25.604 [conn3] add index fails, too many indexes for test.index_many2 key:{ x254: 1.0 } m30001| Fri Feb 22 12:33:25.604 [conn3] add index fails, too many indexes for test.index_many2 key:{ x255: 1.0 } m30001| Fri Feb 22 12:33:25.604 [conn3] add index fails, too many indexes for test.index_many2 key:{ x256: 1.0 } m30001| Fri Feb 22 12:33:25.605 [conn3] add index fails, too many indexes for test.index_many2 key:{ x257: 1.0 } m30001| Fri Feb 22 12:33:25.605 [conn3] add index fails, too many indexes for test.index_many2 key:{ x258: 1.0 } m30001| Fri Feb 22 12:33:25.605 [conn3] add index fails, too many indexes for test.index_many2 key:{ x259: 1.0 } m30001| Fri Feb 22 12:33:25.606 [conn3] add index fails, too many indexes for test.index_many2 key:{ x260: 1.0 } m30001| Fri Feb 22 12:33:25.606 [conn3] add index fails, too many indexes for test.index_many2 key:{ x261: 1.0 } m30001| Fri Feb 22 12:33:25.607 [conn3] add index fails, too many indexes for test.index_many2 key:{ x262: 1.0 } m30001| Fri Feb 22 12:33:25.607 [conn3] add index fails, too many indexes for test.index_many2 key:{ x263: 1.0 } m30001| Fri Feb 22 12:33:25.607 [conn3] add index fails, too many indexes for test.index_many2 key:{ x264: 1.0 } m30001| Fri Feb 22 12:33:25.608 [conn3] add index fails, too many indexes for test.index_many2 key:{ x265: 1.0 } m30001| Fri Feb 22 12:33:25.608 [conn3] add index fails, too many indexes for test.index_many2 key:{ x266: 1.0 } m30001| Fri Feb 22 12:33:25.608 [conn3] add index fails, too many indexes for test.index_many2 key:{ x267: 1.0 } m30001| Fri Feb 22 12:33:25.609 [conn3] add index fails, too many indexes for test.index_many2 key:{ x268: 1.0 } m30001| Fri Feb 22 12:33:25.609 [conn3] add index fails, too many indexes for test.index_many2 key:{ x269: 1.0 } m30001| Fri Feb 22 12:33:25.609 [conn3] add index fails, too many indexes for test.index_many2 key:{ x270: 1.0 } m30001| Fri Feb 22 12:33:25.610 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x271: 1.0 } m30001| Fri Feb 22 12:33:25.610 [conn3] add index fails, too many indexes for test.index_many2 key:{ x272: 1.0 } m30001| Fri Feb 22 12:33:25.611 [conn3] add index fails, too many indexes for test.index_many2 key:{ x273: 1.0 } m30001| Fri Feb 22 12:33:25.611 [conn3] add index fails, too many indexes for test.index_many2 key:{ x274: 1.0 } m30001| Fri Feb 22 12:33:25.612 [conn3] add index fails, too many indexes for test.index_many2 key:{ x275: 1.0 } m30001| Fri Feb 22 12:33:25.612 [conn3] add index fails, too many indexes for test.index_many2 key:{ x276: 1.0 } m30001| Fri Feb 22 12:33:25.612 [conn3] add index fails, too many indexes for test.index_many2 key:{ x277: 1.0 } m30001| Fri Feb 22 12:33:25.613 [conn3] add index fails, too many indexes for test.index_many2 key:{ x278: 1.0 } m30001| Fri Feb 22 12:33:25.613 [conn3] add index fails, too many indexes for test.index_many2 key:{ x279: 1.0 } m30001| Fri Feb 22 12:33:25.614 [conn3] add index fails, too many indexes for test.index_many2 key:{ x280: 1.0 } m30001| Fri Feb 22 12:33:25.614 [conn3] add index fails, too many indexes for test.index_many2 key:{ x281: 1.0 } m30001| Fri Feb 22 12:33:25.614 [conn3] add index fails, too many indexes for test.index_many2 key:{ x282: 1.0 } m30001| Fri Feb 22 12:33:25.615 [conn3] add index fails, too many indexes for test.index_many2 key:{ x283: 1.0 } m30001| Fri Feb 22 12:33:25.615 [conn3] add index fails, too many indexes for test.index_many2 key:{ x284: 1.0 } m30001| Fri Feb 22 12:33:25.615 [conn3] add index fails, too many indexes for test.index_many2 key:{ x285: 1.0 } m30001| Fri Feb 22 12:33:25.616 [conn3] add index fails, too many indexes for test.index_many2 key:{ x286: 1.0 } m30001| Fri Feb 22 12:33:25.616 [conn3] add index fails, too many indexes for test.index_many2 key:{ x287: 1.0 } m30001| Fri Feb 22 12:33:25.617 [conn3] add index fails, too many indexes for test.index_many2 key:{ x288: 1.0 } m30001| Fri Feb 22 12:33:25.617 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x289: 1.0 } m30001| Fri Feb 22 12:33:25.617 [conn3] add index fails, too many indexes for test.index_many2 key:{ x290: 1.0 } m30001| Fri Feb 22 12:33:25.618 [conn3] add index fails, too many indexes for test.index_many2 key:{ x291: 1.0 } m30001| Fri Feb 22 12:33:25.618 [conn3] add index fails, too many indexes for test.index_many2 key:{ x292: 1.0 } m30001| Fri Feb 22 12:33:25.618 [conn3] add index fails, too many indexes for test.index_many2 key:{ x293: 1.0 } m30001| Fri Feb 22 12:33:25.619 [conn3] add index fails, too many indexes for test.index_many2 key:{ x294: 1.0 } m30001| Fri Feb 22 12:33:25.619 [conn3] add index fails, too many indexes for test.index_many2 key:{ x295: 1.0 } m30001| Fri Feb 22 12:33:25.620 [conn3] add index fails, too many indexes for test.index_many2 key:{ x296: 1.0 } m30001| Fri Feb 22 12:33:25.620 [conn3] add index fails, too many indexes for test.index_many2 key:{ x297: 1.0 } m30001| Fri Feb 22 12:33:25.620 [conn3] add index fails, too many indexes for test.index_many2 key:{ x298: 1.0 } m30001| Fri Feb 22 12:33:25.621 [conn3] add index fails, too many indexes for test.index_many2 key:{ x299: 1.0 } m30001| Fri Feb 22 12:33:25.621 [conn3] add index fails, too many indexes for test.index_many2 key:{ x300: 1.0 } m30001| Fri Feb 22 12:33:25.621 [conn3] add index fails, too many indexes for test.index_many2 key:{ x301: 1.0 } m30001| Fri Feb 22 12:33:25.622 [conn3] add index fails, too many indexes for test.index_many2 key:{ x302: 1.0 } m30001| Fri Feb 22 12:33:25.622 [conn3] add index fails, too many indexes for test.index_many2 key:{ x303: 1.0 } m30001| Fri Feb 22 12:33:25.623 [conn3] add index fails, too many indexes for test.index_many2 key:{ x304: 1.0 } m30001| Fri Feb 22 12:33:25.623 [conn3] add index fails, too many indexes for test.index_many2 key:{ x305: 1.0 } m30001| Fri Feb 22 12:33:25.623 [conn3] add index fails, too many indexes for test.index_many2 key:{ x306: 1.0 } m30001| 
Fri Feb 22 12:33:25.624 [conn3] add index fails, too many indexes for test.index_many2 key:{ x307: 1.0 } m30001| Fri Feb 22 12:33:25.624 [conn3] add index fails, too many indexes for test.index_many2 key:{ x308: 1.0 } m30001| Fri Feb 22 12:33:25.624 [conn3] add index fails, too many indexes for test.index_many2 key:{ x309: 1.0 } m30001| Fri Feb 22 12:33:25.625 [conn3] add index fails, too many indexes for test.index_many2 key:{ x310: 1.0 } m30001| Fri Feb 22 12:33:25.625 [conn3] add index fails, too many indexes for test.index_many2 key:{ x311: 1.0 } m30001| Fri Feb 22 12:33:25.625 [conn3] add index fails, too many indexes for test.index_many2 key:{ x312: 1.0 } m30001| Fri Feb 22 12:33:25.626 [conn3] add index fails, too many indexes for test.index_many2 key:{ x313: 1.0 } m30001| Fri Feb 22 12:33:25.626 [conn3] add index fails, too many indexes for test.index_many2 key:{ x314: 1.0 } m30001| Fri Feb 22 12:33:25.626 [conn3] add index fails, too many indexes for test.index_many2 key:{ x315: 1.0 } m30001| Fri Feb 22 12:33:25.627 [conn3] add index fails, too many indexes for test.index_many2 key:{ x316: 1.0 } m30001| Fri Feb 22 12:33:25.627 [conn3] add index fails, too many indexes for test.index_many2 key:{ x317: 1.0 } m30001| Fri Feb 22 12:33:25.627 [conn3] add index fails, too many indexes for test.index_many2 key:{ x318: 1.0 } m30001| Fri Feb 22 12:33:25.628 [conn3] add index fails, too many indexes for test.index_many2 key:{ x319: 1.0 } m30001| Fri Feb 22 12:33:25.628 [conn3] add index fails, too many indexes for test.index_many2 key:{ x320: 1.0 } m30001| Fri Feb 22 12:33:25.629 [conn3] add index fails, too many indexes for test.index_many2 key:{ x321: 1.0 } m30001| Fri Feb 22 12:33:25.629 [conn3] add index fails, too many indexes for test.index_many2 key:{ x322: 1.0 } m30001| Fri Feb 22 12:33:25.629 [conn3] add index fails, too many indexes for test.index_many2 key:{ x323: 1.0 } m30001| Fri Feb 22 12:33:25.630 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x324: 1.0 } m30001| Fri Feb 22 12:33:25.630 [conn3] add index fails, too many indexes for test.index_many2 key:{ x325: 1.0 } m30001| Fri Feb 22 12:33:25.631 [conn3] add index fails, too many indexes for test.index_many2 key:{ x326: 1.0 } m30001| Fri Feb 22 12:33:25.631 [conn3] add index fails, too many indexes for test.index_many2 key:{ x327: 1.0 } m30001| Fri Feb 22 12:33:25.631 [conn3] add index fails, too many indexes for test.index_many2 key:{ x328: 1.0 } m30001| Fri Feb 22 12:33:25.632 [conn3] add index fails, too many indexes for test.index_many2 key:{ x329: 1.0 } m30001| Fri Feb 22 12:33:25.632 [conn3] add index fails, too many indexes for test.index_many2 key:{ x330: 1.0 } m30001| Fri Feb 22 12:33:25.632 [conn3] add index fails, too many indexes for test.index_many2 key:{ x331: 1.0 } m30001| Fri Feb 22 12:33:25.633 [conn3] add index fails, too many indexes for test.index_many2 key:{ x332: 1.0 } m30001| Fri Feb 22 12:33:25.633 [conn3] add index fails, too many indexes for test.index_many2 key:{ x333: 1.0 } m30001| Fri Feb 22 12:33:25.633 [conn3] add index fails, too many indexes for test.index_many2 key:{ x334: 1.0 } m30001| Fri Feb 22 12:33:25.634 [conn3] add index fails, too many indexes for test.index_many2 key:{ x335: 1.0 } m30001| Fri Feb 22 12:33:25.634 [conn3] add index fails, too many indexes for test.index_many2 key:{ x336: 1.0 } m30001| Fri Feb 22 12:33:25.634 [conn3] add index fails, too many indexes for test.index_many2 key:{ x337: 1.0 } m30001| Fri Feb 22 12:33:25.635 [conn3] add index fails, too many indexes for test.index_many2 key:{ x338: 1.0 } m30001| Fri Feb 22 12:33:25.635 [conn3] add index fails, too many indexes for test.index_many2 key:{ x339: 1.0 } m30001| Fri Feb 22 12:33:25.636 [conn3] add index fails, too many indexes for test.index_many2 key:{ x340: 1.0 } m30001| Fri Feb 22 12:33:25.636 [conn3] add index fails, too many indexes for test.index_many2 key:{ x341: 1.0 } m30001| Fri Feb 22 12:33:25.636 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x342: 1.0 } m30001| Fri Feb 22 12:33:25.637 [conn3] add index fails, too many indexes for test.index_many2 key:{ x343: 1.0 } m30001| Fri Feb 22 12:33:25.637 [conn3] add index fails, too many indexes for test.index_many2 key:{ x344: 1.0 } m30001| Fri Feb 22 12:33:25.637 [conn3] add index fails, too many indexes for test.index_many2 key:{ x345: 1.0 } m30001| Fri Feb 22 12:33:25.638 [conn3] add index fails, too many indexes for test.index_many2 key:{ x346: 1.0 } m30001| Fri Feb 22 12:33:25.638 [conn3] add index fails, too many indexes for test.index_many2 key:{ x347: 1.0 } m30001| Fri Feb 22 12:33:25.638 [conn3] add index fails, too many indexes for test.index_many2 key:{ x348: 1.0 } m30001| Fri Feb 22 12:33:25.639 [conn3] add index fails, too many indexes for test.index_many2 key:{ x349: 1.0 } m30001| Fri Feb 22 12:33:25.639 [conn3] add index fails, too many indexes for test.index_many2 key:{ x350: 1.0 } m30001| Fri Feb 22 12:33:25.640 [conn3] add index fails, too many indexes for test.index_many2 key:{ x351: 1.0 } m30001| Fri Feb 22 12:33:25.640 [conn3] add index fails, too many indexes for test.index_many2 key:{ x352: 1.0 } m30001| Fri Feb 22 12:33:25.641 [conn3] add index fails, too many indexes for test.index_many2 key:{ x353: 1.0 } m30001| Fri Feb 22 12:33:25.641 [conn3] add index fails, too many indexes for test.index_many2 key:{ x354: 1.0 } m30001| Fri Feb 22 12:33:25.641 [conn3] add index fails, too many indexes for test.index_many2 key:{ x355: 1.0 } m30001| Fri Feb 22 12:33:25.642 [conn3] add index fails, too many indexes for test.index_many2 key:{ x356: 1.0 } m30001| Fri Feb 22 12:33:25.642 [conn3] add index fails, too many indexes for test.index_many2 key:{ x357: 1.0 } m30001| Fri Feb 22 12:33:25.642 [conn3] add index fails, too many indexes for test.index_many2 key:{ x358: 1.0 } m30001| Fri Feb 22 12:33:25.643 [conn3] add index fails, too many indexes for test.index_many2 key:{ x359: 1.0 } m30001| 
Fri Feb 22 12:33:25.643 [conn3] add index fails, too many indexes for test.index_many2 key:{ x360: 1.0 } m30001| Fri Feb 22 12:33:25.644 [conn3] add index fails, too many indexes for test.index_many2 key:{ x361: 1.0 } m30001| Fri Feb 22 12:33:25.644 [conn3] add index fails, too many indexes for test.index_many2 key:{ x362: 1.0 } m30001| Fri Feb 22 12:33:25.644 [conn3] add index fails, too many indexes for test.index_many2 key:{ x363: 1.0 } m30001| Fri Feb 22 12:33:25.645 [conn3] add index fails, too many indexes for test.index_many2 key:{ x364: 1.0 } m30001| Fri Feb 22 12:33:25.645 [conn3] add index fails, too many indexes for test.index_many2 key:{ x365: 1.0 } m30001| Fri Feb 22 12:33:25.645 [conn3] add index fails, too many indexes for test.index_many2 key:{ x366: 1.0 } m30001| Fri Feb 22 12:33:25.646 [conn3] add index fails, too many indexes for test.index_many2 key:{ x367: 1.0 } m30001| Fri Feb 22 12:33:25.646 [conn3] add index fails, too many indexes for test.index_many2 key:{ x368: 1.0 } m30001| Fri Feb 22 12:33:25.646 [conn3] add index fails, too many indexes for test.index_many2 key:{ x369: 1.0 } m30001| Fri Feb 22 12:33:25.647 [conn3] add index fails, too many indexes for test.index_many2 key:{ x370: 1.0 } m30001| Fri Feb 22 12:33:25.647 [conn3] add index fails, too many indexes for test.index_many2 key:{ x371: 1.0 } m30001| Fri Feb 22 12:33:25.648 [conn3] add index fails, too many indexes for test.index_many2 key:{ x372: 1.0 } m30001| Fri Feb 22 12:33:25.648 [conn3] add index fails, too many indexes for test.index_many2 key:{ x373: 1.0 } m30001| Fri Feb 22 12:33:25.651 [conn3] add index fails, too many indexes for test.index_many2 key:{ x374: 1.0 } m30001| Fri Feb 22 12:33:25.651 [conn3] add index fails, too many indexes for test.index_many2 key:{ x375: 1.0 } m30001| Fri Feb 22 12:33:25.651 [conn3] add index fails, too many indexes for test.index_many2 key:{ x376: 1.0 } m30001| Fri Feb 22 12:33:25.652 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x377: 1.0 } m30001| Fri Feb 22 12:33:25.652 [conn3] add index fails, too many indexes for test.index_many2 key:{ x378: 1.0 } m30001| Fri Feb 22 12:33:25.653 [conn3] add index fails, too many indexes for test.index_many2 key:{ x379: 1.0 } m30001| Fri Feb 22 12:33:25.653 [conn3] add index fails, too many indexes for test.index_many2 key:{ x380: 1.0 } m30001| Fri Feb 22 12:33:25.654 [conn3] add index fails, too many indexes for test.index_many2 key:{ x381: 1.0 } m30001| Fri Feb 22 12:33:25.654 [conn3] add index fails, too many indexes for test.index_many2 key:{ x382: 1.0 } m30001| Fri Feb 22 12:33:25.655 [conn3] add index fails, too many indexes for test.index_many2 key:{ x383: 1.0 } m30001| Fri Feb 22 12:33:25.655 [conn3] add index fails, too many indexes for test.index_many2 key:{ x384: 1.0 } m30001| Fri Feb 22 12:33:25.656 [conn3] add index fails, too many indexes for test.index_many2 key:{ x385: 1.0 } m30001| Fri Feb 22 12:33:25.656 [conn3] add index fails, too many indexes for test.index_many2 key:{ x386: 1.0 } m30001| Fri Feb 22 12:33:25.656 [conn3] add index fails, too many indexes for test.index_many2 key:{ x387: 1.0 } m30001| Fri Feb 22 12:33:25.657 [conn3] add index fails, too many indexes for test.index_many2 key:{ x388: 1.0 } m30001| Fri Feb 22 12:33:25.657 [conn3] add index fails, too many indexes for test.index_many2 key:{ x389: 1.0 } m30001| Fri Feb 22 12:33:25.657 [conn3] add index fails, too many indexes for test.index_many2 key:{ x390: 1.0 } m30001| Fri Feb 22 12:33:25.658 [conn3] add index fails, too many indexes for test.index_many2 key:{ x391: 1.0 } m30001| Fri Feb 22 12:33:25.658 [conn3] add index fails, too many indexes for test.index_many2 key:{ x392: 1.0 } m30001| Fri Feb 22 12:33:25.659 [conn3] add index fails, too many indexes for test.index_many2 key:{ x393: 1.0 } m30001| Fri Feb 22 12:33:25.659 [conn3] add index fails, too many indexes for test.index_many2 key:{ x394: 1.0 } m30001| Fri Feb 22 12:33:25.659 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x395: 1.0 } m30001| Fri Feb 22 12:33:25.660 [conn3] add index fails, too many indexes for test.index_many2 key:{ x396: 1.0 } m30001|
[... the same "add index fails, too many indexes for test.index_many2" message repeated once per key, x397 through x764, timestamps Fri Feb 22 12:33:25.660 through 12:33:25.804 ...]
m30001| Fri Feb 22 12:33:25.804 [conn3] add index fails, too many indexes for test.index_many2 key:{ x765: 1.0 } m30001| Fri Feb 22 12:33:25.804 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x766: 1.0 } m30001| Fri Feb 22 12:33:25.804 [conn3] add index fails, too many indexes for test.index_many2 key:{ x767: 1.0 } m30001| Fri Feb 22 12:33:25.805 [conn3] add index fails, too many indexes for test.index_many2 key:{ x768: 1.0 } m30001| Fri Feb 22 12:33:25.805 [conn3] add index fails, too many indexes for test.index_many2 key:{ x769: 1.0 } m30001| Fri Feb 22 12:33:25.805 [conn3] add index fails, too many indexes for test.index_many2 key:{ x770: 1.0 } m30001| Fri Feb 22 12:33:25.806 [conn3] add index fails, too many indexes for test.index_many2 key:{ x771: 1.0 } m30001| Fri Feb 22 12:33:25.806 [conn3] add index fails, too many indexes for test.index_many2 key:{ x772: 1.0 } m30001| Fri Feb 22 12:33:25.806 [conn3] add index fails, too many indexes for test.index_many2 key:{ x773: 1.0 } m30001| Fri Feb 22 12:33:25.807 [conn3] add index fails, too many indexes for test.index_many2 key:{ x774: 1.0 } m30001| Fri Feb 22 12:33:25.807 [conn3] add index fails, too many indexes for test.index_many2 key:{ x775: 1.0 } m30001| Fri Feb 22 12:33:25.807 [conn3] add index fails, too many indexes for test.index_many2 key:{ x776: 1.0 } m30001| Fri Feb 22 12:33:25.808 [conn3] add index fails, too many indexes for test.index_many2 key:{ x777: 1.0 } m30001| Fri Feb 22 12:33:25.808 [conn3] add index fails, too many indexes for test.index_many2 key:{ x778: 1.0 } m30001| Fri Feb 22 12:33:25.808 [conn3] add index fails, too many indexes for test.index_many2 key:{ x779: 1.0 } m30001| Fri Feb 22 12:33:25.809 [conn3] add index fails, too many indexes for test.index_many2 key:{ x780: 1.0 } m30001| Fri Feb 22 12:33:25.809 [conn3] add index fails, too many indexes for test.index_many2 key:{ x781: 1.0 } m30001| Fri Feb 22 12:33:25.809 [conn3] add index fails, too many indexes for test.index_many2 key:{ x782: 1.0 } m30001| Fri Feb 22 12:33:25.810 [conn3] add index fails, too many indexes for test.index_many2 key:{ x783: 1.0 } m30001| 
Fri Feb 22 12:33:25.810 [conn3] add index fails, too many indexes for test.index_many2 key:{ x784: 1.0 } m30001| Fri Feb 22 12:33:25.811 [conn3] add index fails, too many indexes for test.index_many2 key:{ x785: 1.0 } m30001| Fri Feb 22 12:33:25.811 [conn3] add index fails, too many indexes for test.index_many2 key:{ x786: 1.0 } m30001| Fri Feb 22 12:33:25.812 [conn3] add index fails, too many indexes for test.index_many2 key:{ x787: 1.0 } m30001| Fri Feb 22 12:33:25.812 [conn3] add index fails, too many indexes for test.index_many2 key:{ x788: 1.0 } m30001| Fri Feb 22 12:33:25.812 [conn3] add index fails, too many indexes for test.index_many2 key:{ x789: 1.0 } m30001| Fri Feb 22 12:33:25.813 [conn3] add index fails, too many indexes for test.index_many2 key:{ x790: 1.0 } m30001| Fri Feb 22 12:33:25.813 [conn3] add index fails, too many indexes for test.index_many2 key:{ x791: 1.0 } m30001| Fri Feb 22 12:33:25.813 [conn3] add index fails, too many indexes for test.index_many2 key:{ x792: 1.0 } m30001| Fri Feb 22 12:33:25.814 [conn3] add index fails, too many indexes for test.index_many2 key:{ x793: 1.0 } m30001| Fri Feb 22 12:33:25.814 [conn3] add index fails, too many indexes for test.index_many2 key:{ x794: 1.0 } m30001| Fri Feb 22 12:33:25.814 [conn3] add index fails, too many indexes for test.index_many2 key:{ x795: 1.0 } m30001| Fri Feb 22 12:33:25.815 [conn3] add index fails, too many indexes for test.index_many2 key:{ x796: 1.0 } m30001| Fri Feb 22 12:33:25.815 [conn3] add index fails, too many indexes for test.index_many2 key:{ x797: 1.0 } m30001| Fri Feb 22 12:33:25.816 [conn3] add index fails, too many indexes for test.index_many2 key:{ x798: 1.0 } m30001| Fri Feb 22 12:33:25.816 [conn3] add index fails, too many indexes for test.index_many2 key:{ x799: 1.0 } m30001| Fri Feb 22 12:33:25.817 [conn3] add index fails, too many indexes for test.index_many2 key:{ x800: 1.0 } m30001| Fri Feb 22 12:33:25.817 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x801: 1.0 } m30001| Fri Feb 22 12:33:25.817 [conn3] add index fails, too many indexes for test.index_many2 key:{ x802: 1.0 } m30001| Fri Feb 22 12:33:25.818 [conn3] add index fails, too many indexes for test.index_many2 key:{ x803: 1.0 } m30001| Fri Feb 22 12:33:25.818 [conn3] add index fails, too many indexes for test.index_many2 key:{ x804: 1.0 } m30001| Fri Feb 22 12:33:25.818 [conn3] add index fails, too many indexes for test.index_many2 key:{ x805: 1.0 } m30001| Fri Feb 22 12:33:25.819 [conn3] add index fails, too many indexes for test.index_many2 key:{ x806: 1.0 } m30001| Fri Feb 22 12:33:25.819 [conn3] add index fails, too many indexes for test.index_many2 key:{ x807: 1.0 } m30001| Fri Feb 22 12:33:25.819 [conn3] add index fails, too many indexes for test.index_many2 key:{ x808: 1.0 } m30001| Fri Feb 22 12:33:25.820 [conn3] add index fails, too many indexes for test.index_many2 key:{ x809: 1.0 } m30001| Fri Feb 22 12:33:25.820 [conn3] add index fails, too many indexes for test.index_many2 key:{ x810: 1.0 } m30001| Fri Feb 22 12:33:25.821 [conn3] add index fails, too many indexes for test.index_many2 key:{ x811: 1.0 } m30001| Fri Feb 22 12:33:25.821 [conn3] add index fails, too many indexes for test.index_many2 key:{ x812: 1.0 } m30001| Fri Feb 22 12:33:25.821 [conn3] add index fails, too many indexes for test.index_many2 key:{ x813: 1.0 } m30001| Fri Feb 22 12:33:25.822 [conn3] add index fails, too many indexes for test.index_many2 key:{ x814: 1.0 } m30001| Fri Feb 22 12:33:25.823 [conn3] add index fails, too many indexes for test.index_many2 key:{ x815: 1.0 } m30001| Fri Feb 22 12:33:25.823 [conn3] add index fails, too many indexes for test.index_many2 key:{ x816: 1.0 } m30001| Fri Feb 22 12:33:25.824 [conn3] add index fails, too many indexes for test.index_many2 key:{ x817: 1.0 } m30001| Fri Feb 22 12:33:25.824 [conn3] add index fails, too many indexes for test.index_many2 key:{ x818: 1.0 } m30001| Fri Feb 22 12:33:25.824 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x819: 1.0 } m30001| Fri Feb 22 12:33:25.825 [conn3] add index fails, too many indexes for test.index_many2 key:{ x820: 1.0 } m30001| Fri Feb 22 12:33:25.825 [conn3] add index fails, too many indexes for test.index_many2 key:{ x821: 1.0 } m30001| Fri Feb 22 12:33:25.826 [conn3] add index fails, too many indexes for test.index_many2 key:{ x822: 1.0 } m30001| Fri Feb 22 12:33:25.826 [conn3] add index fails, too many indexes for test.index_many2 key:{ x823: 1.0 } m30001| Fri Feb 22 12:33:25.827 [conn3] add index fails, too many indexes for test.index_many2 key:{ x824: 1.0 } m30001| Fri Feb 22 12:33:25.827 [conn3] add index fails, too many indexes for test.index_many2 key:{ x825: 1.0 } m30001| Fri Feb 22 12:33:25.827 [conn3] add index fails, too many indexes for test.index_many2 key:{ x826: 1.0 } m30001| Fri Feb 22 12:33:25.828 [conn3] add index fails, too many indexes for test.index_many2 key:{ x827: 1.0 } m30001| Fri Feb 22 12:33:25.828 [conn3] add index fails, too many indexes for test.index_many2 key:{ x828: 1.0 } m30001| Fri Feb 22 12:33:25.829 [conn3] add index fails, too many indexes for test.index_many2 key:{ x829: 1.0 } m30001| Fri Feb 22 12:33:25.829 [conn3] add index fails, too many indexes for test.index_many2 key:{ x830: 1.0 } m30001| Fri Feb 22 12:33:25.830 [conn3] add index fails, too many indexes for test.index_many2 key:{ x831: 1.0 } m30001| Fri Feb 22 12:33:25.830 [conn3] add index fails, too many indexes for test.index_many2 key:{ x832: 1.0 } m30001| Fri Feb 22 12:33:25.830 [conn3] add index fails, too many indexes for test.index_many2 key:{ x833: 1.0 } m30001| Fri Feb 22 12:33:25.831 [conn3] add index fails, too many indexes for test.index_many2 key:{ x834: 1.0 } m30001| Fri Feb 22 12:33:25.831 [conn3] add index fails, too many indexes for test.index_many2 key:{ x835: 1.0 } m30001| Fri Feb 22 12:33:25.832 [conn3] add index fails, too many indexes for test.index_many2 key:{ x836: 1.0 } m30001| 
Fri Feb 22 12:33:25.832 [conn3] add index fails, too many indexes for test.index_many2 key:{ x837: 1.0 } m30001| Fri Feb 22 12:33:25.832 [conn3] add index fails, too many indexes for test.index_many2 key:{ x838: 1.0 } m30001| Fri Feb 22 12:33:25.833 [conn3] add index fails, too many indexes for test.index_many2 key:{ x839: 1.0 } m30001| Fri Feb 22 12:33:25.833 [conn3] add index fails, too many indexes for test.index_many2 key:{ x840: 1.0 } m30001| Fri Feb 22 12:33:25.834 [conn3] add index fails, too many indexes for test.index_many2 key:{ x841: 1.0 } m30001| Fri Feb 22 12:33:25.834 [conn3] add index fails, too many indexes for test.index_many2 key:{ x842: 1.0 } m30001| Fri Feb 22 12:33:25.835 [conn3] add index fails, too many indexes for test.index_many2 key:{ x843: 1.0 } m30001| Fri Feb 22 12:33:25.835 [conn3] add index fails, too many indexes for test.index_many2 key:{ x844: 1.0 } m30001| Fri Feb 22 12:33:25.835 [conn3] add index fails, too many indexes for test.index_many2 key:{ x845: 1.0 } m30001| Fri Feb 22 12:33:25.836 [conn3] add index fails, too many indexes for test.index_many2 key:{ x846: 1.0 } m30001| Fri Feb 22 12:33:25.836 [conn3] add index fails, too many indexes for test.index_many2 key:{ x847: 1.0 } m30001| Fri Feb 22 12:33:25.836 [conn3] add index fails, too many indexes for test.index_many2 key:{ x848: 1.0 } m30001| Fri Feb 22 12:33:25.837 [conn3] add index fails, too many indexes for test.index_many2 key:{ x849: 1.0 } m30001| Fri Feb 22 12:33:25.837 [conn3] add index fails, too many indexes for test.index_many2 key:{ x850: 1.0 } m30001| Fri Feb 22 12:33:25.837 [conn3] add index fails, too many indexes for test.index_many2 key:{ x851: 1.0 } m30001| Fri Feb 22 12:33:25.838 [conn3] add index fails, too many indexes for test.index_many2 key:{ x852: 1.0 } m30001| Fri Feb 22 12:33:25.838 [conn3] add index fails, too many indexes for test.index_many2 key:{ x853: 1.0 } m30001| Fri Feb 22 12:33:25.839 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x854: 1.0 } m30001| Fri Feb 22 12:33:25.839 [conn3] add index fails, too many indexes for test.index_many2 key:{ x855: 1.0 } m30001| Fri Feb 22 12:33:25.839 [conn3] add index fails, too many indexes for test.index_many2 key:{ x856: 1.0 } m30001| Fri Feb 22 12:33:25.840 [conn3] add index fails, too many indexes for test.index_many2 key:{ x857: 1.0 } m30001| Fri Feb 22 12:33:25.840 [conn3] add index fails, too many indexes for test.index_many2 key:{ x858: 1.0 } m30001| Fri Feb 22 12:33:25.841 [conn3] add index fails, too many indexes for test.index_many2 key:{ x859: 1.0 } m30001| Fri Feb 22 12:33:25.841 [conn3] add index fails, too many indexes for test.index_many2 key:{ x860: 1.0 } m30001| Fri Feb 22 12:33:25.842 [conn3] add index fails, too many indexes for test.index_many2 key:{ x861: 1.0 } m30001| Fri Feb 22 12:33:25.842 [conn3] add index fails, too many indexes for test.index_many2 key:{ x862: 1.0 } m30001| Fri Feb 22 12:33:25.842 [conn3] add index fails, too many indexes for test.index_many2 key:{ x863: 1.0 } m30001| Fri Feb 22 12:33:25.843 [conn3] add index fails, too many indexes for test.index_many2 key:{ x864: 1.0 } m30001| Fri Feb 22 12:33:25.843 [conn3] add index fails, too many indexes for test.index_many2 key:{ x865: 1.0 } m30001| Fri Feb 22 12:33:25.843 [conn3] add index fails, too many indexes for test.index_many2 key:{ x866: 1.0 } m30001| Fri Feb 22 12:33:25.844 [conn3] add index fails, too many indexes for test.index_many2 key:{ x867: 1.0 } m30001| Fri Feb 22 12:33:25.844 [conn3] add index fails, too many indexes for test.index_many2 key:{ x868: 1.0 } m30001| Fri Feb 22 12:33:25.844 [conn3] add index fails, too many indexes for test.index_many2 key:{ x869: 1.0 } m30001| Fri Feb 22 12:33:25.845 [conn3] add index fails, too many indexes for test.index_many2 key:{ x870: 1.0 } m30001| Fri Feb 22 12:33:25.845 [conn3] add index fails, too many indexes for test.index_many2 key:{ x871: 1.0 } m30001| Fri Feb 22 12:33:25.846 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x872: 1.0 } m30001| Fri Feb 22 12:33:25.846 [conn3] add index fails, too many indexes for test.index_many2 key:{ x873: 1.0 } m30001| Fri Feb 22 12:33:25.847 [conn3] add index fails, too many indexes for test.index_many2 key:{ x874: 1.0 } m30001| Fri Feb 22 12:33:25.847 [conn3] add index fails, too many indexes for test.index_many2 key:{ x875: 1.0 } m30001| Fri Feb 22 12:33:25.847 [conn3] add index fails, too many indexes for test.index_many2 key:{ x876: 1.0 } m30001| Fri Feb 22 12:33:25.848 [conn3] add index fails, too many indexes for test.index_many2 key:{ x877: 1.0 } m30001| Fri Feb 22 12:33:25.848 [conn3] add index fails, too many indexes for test.index_many2 key:{ x878: 1.0 } m30001| Fri Feb 22 12:33:25.848 [conn3] add index fails, too many indexes for test.index_many2 key:{ x879: 1.0 } m30001| Fri Feb 22 12:33:25.849 [conn3] add index fails, too many indexes for test.index_many2 key:{ x880: 1.0 } m30001| Fri Feb 22 12:33:25.849 [conn3] add index fails, too many indexes for test.index_many2 key:{ x881: 1.0 } m30001| Fri Feb 22 12:33:25.850 [conn3] add index fails, too many indexes for test.index_many2 key:{ x882: 1.0 } m30001| Fri Feb 22 12:33:25.850 [conn3] add index fails, too many indexes for test.index_many2 key:{ x883: 1.0 } m30001| Fri Feb 22 12:33:25.850 [conn3] add index fails, too many indexes for test.index_many2 key:{ x884: 1.0 } m30001| Fri Feb 22 12:33:25.851 [conn3] add index fails, too many indexes for test.index_many2 key:{ x885: 1.0 } m30001| Fri Feb 22 12:33:25.851 [conn3] add index fails, too many indexes for test.index_many2 key:{ x886: 1.0 } m30001| Fri Feb 22 12:33:25.852 [conn3] add index fails, too many indexes for test.index_many2 key:{ x887: 1.0 } m30001| Fri Feb 22 12:33:25.852 [conn3] add index fails, too many indexes for test.index_many2 key:{ x888: 1.0 } m30001| Fri Feb 22 12:33:25.852 [conn3] add index fails, too many indexes for test.index_many2 key:{ x889: 1.0 } m30001| 
Fri Feb 22 12:33:25.853 [conn3] add index fails, too many indexes for test.index_many2 key:{ x890: 1.0 } m30001| Fri Feb 22 12:33:25.853 [conn3] add index fails, too many indexes for test.index_many2 key:{ x891: 1.0 } m30001| Fri Feb 22 12:33:25.853 [conn3] add index fails, too many indexes for test.index_many2 key:{ x892: 1.0 } m30001| Fri Feb 22 12:33:25.854 [conn3] add index fails, too many indexes for test.index_many2 key:{ x893: 1.0 } m30001| Fri Feb 22 12:33:25.854 [conn3] add index fails, too many indexes for test.index_many2 key:{ x894: 1.0 } m30001| Fri Feb 22 12:33:25.854 [conn3] add index fails, too many indexes for test.index_many2 key:{ x895: 1.0 } m30001| Fri Feb 22 12:33:25.856 [conn3] add index fails, too many indexes for test.index_many2 key:{ x896: 1.0 } m30001| Fri Feb 22 12:33:25.857 [conn3] add index fails, too many indexes for test.index_many2 key:{ x897: 1.0 } m30001| Fri Feb 22 12:33:25.858 [conn3] add index fails, too many indexes for test.index_many2 key:{ x898: 1.0 } m30001| Fri Feb 22 12:33:25.858 [conn3] add index fails, too many indexes for test.index_many2 key:{ x899: 1.0 } m30001| Fri Feb 22 12:33:25.858 [conn3] add index fails, too many indexes for test.index_many2 key:{ x900: 1.0 } m30001| Fri Feb 22 12:33:25.859 [conn3] add index fails, too many indexes for test.index_many2 key:{ x901: 1.0 } m30001| Fri Feb 22 12:33:25.859 [conn3] add index fails, too many indexes for test.index_many2 key:{ x902: 1.0 } m30001| Fri Feb 22 12:33:25.859 [conn3] add index fails, too many indexes for test.index_many2 key:{ x903: 1.0 } m30001| Fri Feb 22 12:33:25.860 [conn3] add index fails, too many indexes for test.index_many2 key:{ x904: 1.0 } m30001| Fri Feb 22 12:33:25.860 [conn3] add index fails, too many indexes for test.index_many2 key:{ x905: 1.0 } m30001| Fri Feb 22 12:33:25.861 [conn3] add index fails, too many indexes for test.index_many2 key:{ x906: 1.0 } m30001| Fri Feb 22 12:33:25.861 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x907: 1.0 } m30001| Fri Feb 22 12:33:25.861 [conn3] add index fails, too many indexes for test.index_many2 key:{ x908: 1.0 } m30001| Fri Feb 22 12:33:25.862 [conn3] add index fails, too many indexes for test.index_many2 key:{ x909: 1.0 } m30001| Fri Feb 22 12:33:25.862 [conn3] add index fails, too many indexes for test.index_many2 key:{ x910: 1.0 } m30001| Fri Feb 22 12:33:25.863 [conn3] add index fails, too many indexes for test.index_many2 key:{ x911: 1.0 } m30001| Fri Feb 22 12:33:25.863 [conn3] add index fails, too many indexes for test.index_many2 key:{ x912: 1.0 } m30001| Fri Feb 22 12:33:25.864 [conn3] add index fails, too many indexes for test.index_many2 key:{ x913: 1.0 } m30001| Fri Feb 22 12:33:25.864 [conn3] add index fails, too many indexes for test.index_many2 key:{ x914: 1.0 } m30001| Fri Feb 22 12:33:25.865 [conn3] add index fails, too many indexes for test.index_many2 key:{ x915: 1.0 } m30001| Fri Feb 22 12:33:25.865 [conn3] add index fails, too many indexes for test.index_many2 key:{ x916: 1.0 } m30001| Fri Feb 22 12:33:25.866 [conn3] add index fails, too many indexes for test.index_many2 key:{ x917: 1.0 } m30001| Fri Feb 22 12:33:25.866 [conn3] add index fails, too many indexes for test.index_many2 key:{ x918: 1.0 } m30001| Fri Feb 22 12:33:25.866 [conn3] add index fails, too many indexes for test.index_many2 key:{ x919: 1.0 } m30001| Fri Feb 22 12:33:25.867 [conn3] add index fails, too many indexes for test.index_many2 key:{ x920: 1.0 } m30001| Fri Feb 22 12:33:25.867 [conn3] add index fails, too many indexes for test.index_many2 key:{ x921: 1.0 } m30001| Fri Feb 22 12:33:25.868 [conn3] add index fails, too many indexes for test.index_many2 key:{ x922: 1.0 } m30001| Fri Feb 22 12:33:25.868 [conn3] add index fails, too many indexes for test.index_many2 key:{ x923: 1.0 } m30001| Fri Feb 22 12:33:25.869 [conn3] add index fails, too many indexes for test.index_many2 key:{ x924: 1.0 } m30001| Fri Feb 22 12:33:25.869 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x925: 1.0 } m30001| Fri Feb 22 12:33:25.870 [conn3] add index fails, too many indexes for test.index_many2 key:{ x926: 1.0 } m30001| Fri Feb 22 12:33:25.870 [conn3] add index fails, too many indexes for test.index_many2 key:{ x927: 1.0 } m30001| Fri Feb 22 12:33:25.871 [conn3] add index fails, too many indexes for test.index_many2 key:{ x928: 1.0 } m30001| Fri Feb 22 12:33:25.871 [conn3] add index fails, too many indexes for test.index_many2 key:{ x929: 1.0 } m30001| Fri Feb 22 12:33:25.872 [conn3] add index fails, too many indexes for test.index_many2 key:{ x930: 1.0 } m30001| Fri Feb 22 12:33:25.872 [conn3] add index fails, too many indexes for test.index_many2 key:{ x931: 1.0 } m30001| Fri Feb 22 12:33:25.873 [conn3] add index fails, too many indexes for test.index_many2 key:{ x932: 1.0 } m30001| Fri Feb 22 12:33:25.873 [conn3] add index fails, too many indexes for test.index_many2 key:{ x933: 1.0 } m30001| Fri Feb 22 12:33:25.873 [conn3] add index fails, too many indexes for test.index_many2 key:{ x934: 1.0 } m30001| Fri Feb 22 12:33:25.874 [conn3] add index fails, too many indexes for test.index_many2 key:{ x935: 1.0 } m30001| Fri Feb 22 12:33:25.874 [conn3] add index fails, too many indexes for test.index_many2 key:{ x936: 1.0 } m30001| Fri Feb 22 12:33:25.875 [conn3] add index fails, too many indexes for test.index_many2 key:{ x937: 1.0 } m30001| Fri Feb 22 12:33:25.875 [conn3] add index fails, too many indexes for test.index_many2 key:{ x938: 1.0 } m30001| Fri Feb 22 12:33:25.875 [conn3] add index fails, too many indexes for test.index_many2 key:{ x939: 1.0 } m30001| Fri Feb 22 12:33:25.876 [conn3] add index fails, too many indexes for test.index_many2 key:{ x940: 1.0 } m30001| Fri Feb 22 12:33:25.876 [conn3] add index fails, too many indexes for test.index_many2 key:{ x941: 1.0 } m30001| Fri Feb 22 12:33:25.877 [conn3] add index fails, too many indexes for test.index_many2 key:{ x942: 1.0 } m30001| 
Fri Feb 22 12:33:25.877 [conn3] add index fails, too many indexes for test.index_many2 key:{ x943: 1.0 } m30001| Fri Feb 22 12:33:25.878 [conn3] add index fails, too many indexes for test.index_many2 key:{ x944: 1.0 } m30001| Fri Feb 22 12:33:25.878 [conn3] add index fails, too many indexes for test.index_many2 key:{ x945: 1.0 } m30001| Fri Feb 22 12:33:25.878 [conn3] add index fails, too many indexes for test.index_many2 key:{ x946: 1.0 } m30001| Fri Feb 22 12:33:25.879 [conn3] add index fails, too many indexes for test.index_many2 key:{ x947: 1.0 } m30001| Fri Feb 22 12:33:25.879 [conn3] add index fails, too many indexes for test.index_many2 key:{ x948: 1.0 } m30001| Fri Feb 22 12:33:25.880 [conn3] add index fails, too many indexes for test.index_many2 key:{ x949: 1.0 } m30001| Fri Feb 22 12:33:25.880 [conn3] add index fails, too many indexes for test.index_many2 key:{ x950: 1.0 } m30001| Fri Feb 22 12:33:25.880 [conn3] add index fails, too many indexes for test.index_many2 key:{ x951: 1.0 } m30001| Fri Feb 22 12:33:25.881 [conn3] add index fails, too many indexes for test.index_many2 key:{ x952: 1.0 } m30001| Fri Feb 22 12:33:25.881 [conn3] add index fails, too many indexes for test.index_many2 key:{ x953: 1.0 } m30001| Fri Feb 22 12:33:25.882 [conn3] add index fails, too many indexes for test.index_many2 key:{ x954: 1.0 } m30001| Fri Feb 22 12:33:25.882 [conn3] add index fails, too many indexes for test.index_many2 key:{ x955: 1.0 } m30001| Fri Feb 22 12:33:25.883 [conn3] add index fails, too many indexes for test.index_many2 key:{ x956: 1.0 } m30001| Fri Feb 22 12:33:25.883 [conn3] add index fails, too many indexes for test.index_many2 key:{ x957: 1.0 } m30001| Fri Feb 22 12:33:25.883 [conn3] add index fails, too many indexes for test.index_many2 key:{ x958: 1.0 } m30001| Fri Feb 22 12:33:25.884 [conn3] add index fails, too many indexes for test.index_many2 key:{ x959: 1.0 } m30001| Fri Feb 22 12:33:25.884 [conn3] add index fails, too many indexes for 
test.index_many2 key:{ x960: 1.0 } m30001| Fri Feb 22 12:33:25.884 [conn3] add index fails, too many indexes for test.index_many2 key:{ x961: 1.0 } m30001| Fri Feb 22 12:33:25.885 [conn3] add index fails, too many indexes for test.index_many2 key:{ x962: 1.0 } m30001| Fri Feb 22 12:33:25.885 [conn3] add index fails, too many indexes for test.index_many2 key:{ x963: 1.0 } m30001| Fri Feb 22 12:33:25.886 [conn3] add index fails, too many indexes for test.index_many2 key:{ x964: 1.0 } m30001| Fri Feb 22 12:33:25.886 [conn3] add index fails, too many indexes for test.index_many2 key:{ x965: 1.0 } m30001| Fri Feb 22 12:33:25.886 [conn3] add index fails, too many indexes for test.index_many2 key:{ x966: 1.0 } m30001| Fri Feb 22 12:33:25.887 [conn3] add index fails, too many indexes for test.index_many2 key:{ x967: 1.0 } m30001| Fri Feb 22 12:33:25.887 [conn3] add index fails, too many indexes for test.index_many2 key:{ x968: 1.0 } m30001| Fri Feb 22 12:33:25.888 [conn3] add index fails, too many indexes for test.index_many2 key:{ x969: 1.0 } m30001| Fri Feb 22 12:33:25.888 [conn3] add index fails, too many indexes for test.index_many2 key:{ x970: 1.0 } m30001| Fri Feb 22 12:33:25.889 [conn3] add index fails, too many indexes for test.index_many2 key:{ x971: 1.0 } m30001| Fri Feb 22 12:33:25.889 [conn3] add index fails, too many indexes for test.index_many2 key:{ x972: 1.0 } m30001| Fri Feb 22 12:33:25.890 [conn3] add index fails, too many indexes for test.index_many2 key:{ x973: 1.0 } m30001| Fri Feb 22 12:33:25.890 [conn3] add index fails, too many indexes for test.index_many2 key:{ x974: 1.0 } m30001| Fri Feb 22 12:33:25.890 [conn3] add index fails, too many indexes for test.index_many2 key:{ x975: 1.0 } m30001| Fri Feb 22 12:33:25.891 [conn3] add index fails, too many indexes for test.index_many2 key:{ x976: 1.0 } m30001| Fri Feb 22 12:33:25.891 [conn3] add index fails, too many indexes for test.index_many2 key:{ x977: 1.0 } m30001| Fri Feb 22 12:33:25.892 [conn3] add 
index fails, too many indexes for test.index_many2 key:{ x978: 1.0 } m30001| Fri Feb 22 12:33:25.892 [conn3] add index fails, too many indexes for test.index_many2 key:{ x979: 1.0 } m30001| Fri Feb 22 12:33:25.893 [conn3] add index fails, too many indexes for test.index_many2 key:{ x980: 1.0 } m30001| Fri Feb 22 12:33:25.893 [conn3] add index fails, too many indexes for test.index_many2 key:{ x981: 1.0 } m30001| Fri Feb 22 12:33:25.894 [conn3] add index fails, too many indexes for test.index_many2 key:{ x982: 1.0 } m30001| Fri Feb 22 12:33:25.894 [conn3] add index fails, too many indexes for test.index_many2 key:{ x983: 1.0 } m30001| Fri Feb 22 12:33:25.894 [conn3] add index fails, too many indexes for test.index_many2 key:{ x984: 1.0 } m30001| Fri Feb 22 12:33:25.895 [conn3] add index fails, too many indexes for test.index_many2 key:{ x985: 1.0 } m30001| Fri Feb 22 12:33:25.896 [conn3] add index fails, too many indexes for test.index_many2 key:{ x986: 1.0 } m30001| Fri Feb 22 12:33:25.896 [conn3] add index fails, too many indexes for test.index_many2 key:{ x987: 1.0 } m30001| Fri Feb 22 12:33:25.896 [conn3] add index fails, too many indexes for test.index_many2 key:{ x988: 1.0 } m30001| Fri Feb 22 12:33:25.897 [conn3] add index fails, too many indexes for test.index_many2 key:{ x989: 1.0 } m30001| Fri Feb 22 12:33:25.897 [conn3] add index fails, too many indexes for test.index_many2 key:{ x990: 1.0 } m30001| Fri Feb 22 12:33:25.898 [conn3] add index fails, too many indexes for test.index_many2 key:{ x991: 1.0 } m30001| Fri Feb 22 12:33:25.898 [conn3] add index fails, too many indexes for test.index_many2 key:{ x992: 1.0 } m30001| Fri Feb 22 12:33:25.899 [conn3] add index fails, too many indexes for test.index_many2 key:{ x993: 1.0 } m30001| Fri Feb 22 12:33:25.899 [conn3] add index fails, too many indexes for test.index_many2 key:{ x994: 1.0 } m30001| Fri Feb 22 12:33:25.900 [conn3] add index fails, too many indexes for test.index_many2 key:{ x995: 1.0 } m30001| 
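The run of identical failures above is the expected outcome of the index_many2 test, which deliberately tries to build far more indexes than a single collection allows (64 per collection in this MongoDB version, including the implicit `_id` index). A minimal standalone sketch of that cap-checking behavior — hypothetical helper names, not MongoDB source:

```javascript
// Minimal sketch (not MongoDB source): a collection object that refuses
// new indexes once a fixed cap is reached, mirroring the repeated
// "add index fails, too many indexes" messages in the log above.
// MAX_INDEXES = 64 matches the per-collection limit of this era.
const MAX_INDEXES = 64;

function makeCollection(name) {
  // Every collection starts with the implicit _id index.
  return { name, indexes: [{ key: { _id: 1 } }] };
}

function ensureIndex(coll, key) {
  if (coll.indexes.length >= MAX_INDEXES) {
    return false; // "add index fails, too many indexes"
  }
  coll.indexes.push({ key });
  return true;
}

// Like the test: attempt 1000 single-field indexes x0..x999.
const coll = makeCollection("test.index_many2");
let built = 0, failed = 0;
for (let i = 0; i < 1000; i++) {
  if (ensureIndex(coll, { ["x" + i]: 1 })) built++; else failed++;
}
// built: 63 (the _id index occupies one of the 64 slots), failed: 937
```

Only 63 of the 1000 attempts succeed because the `_id` index already holds one of the 64 slots; every later attempt fails with the message seen in the log, which is why the failures run from x63-era keys all the way to x999.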
Fri Feb 22 12:33:25.900 [conn3] add index fails, too many indexes for test.index_many2 key:{ x996: 1.0 }
m30001| Fri Feb 22 12:33:25.900 [conn3] add index fails, too many indexes for test.index_many2 key:{ x997: 1.0 }
m30001| Fri Feb 22 12:33:25.901 [conn3] add index fails, too many indexes for test.index_many2 key:{ x998: 1.0 }
m30001| Fri Feb 22 12:33:25.901 [conn3] add index fails, too many indexes for test.index_many2 key:{ x999: 1.0 }
m30001| Fri Feb 22 12:33:25.905 [conn4] CMD: dropIndexes test.index_many2
m30001| Fri Feb 22 12:33:25.907 [conn3] build index test.index_many2 { z: 1.0 }
m30001| Fri Feb 22 12:33:25.907 [conn3] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:33:25.909 [conn4] CMD: dropIndexes test.index_many2
463ms
>>>>>>>>>>>>>>> skipping jstests/misc
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/indexh.js
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped1.js
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/apitest_db.js
******************************************* Test : jstests/remove.js ...
m30999| Fri Feb 22 12:33:25.925 [conn1] DROP: test.removetest
m30001| Fri Feb 22 12:33:25.925 [conn3] CMD: drop test.removetest
m30001| Fri Feb 22 12:33:25.926 [conn3] build index test.removetest { _id: 1 }
m30001| Fri Feb 22 12:33:25.926 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:25.927 [conn3] info: creating collection test.removetest on add index
m30001| Fri Feb 22 12:33:25.927 [conn3] build index test.removetest { x: 1.0 }
m30001| Fri Feb 22 12:33:25.927 [conn3] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:33:25.949 [conn4] CMD: validate test.removetest
m30001| Fri Feb 22 12:33:25.949 [conn4] validating index 0: test.removetest.$_id_
m30001| Fri Feb 22 12:33:25.949 [conn4] validating index 1: test.removetest.$x_1
m30001| Fri Feb 22 12:33:25.949 [conn3] build index test.removetest { x: -1.0 }
m30001| Fri Feb 22 12:33:25.950 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:25.992 [conn4] CMD: validate test.removetest
m30001| Fri Feb 22 12:33:25.992 [conn4] validating index 0: test.removetest.$_id_
m30001| Fri Feb 22 12:33:25.992 [conn4] validating index 1: test.removetest.$x_1
m30001| Fri Feb 22 12:33:25.992 [conn4] validating index 2: test.removetest.$x_-1
m30001| Fri Feb 22 12:33:25.992 [conn4] CMD: validate test.removetest
m30001| Fri Feb 22 12:33:25.992 [conn4] validating index 0: test.removetest.$_id_
m30001| Fri Feb 22 12:33:25.992 [conn4] validating index 1: test.removetest.$x_1
m30001| Fri Feb 22 12:33:25.992 [conn4] validating index 2: test.removetest.$x_-1
68ms
******************************************* Test : jstests/set1.js ...
m30999| Fri Feb 22 12:33:25.993 [conn1] DROP: test.set1
m30001| Fri Feb 22 12:33:25.993 [conn3] CMD: drop test.set1
m30001| Fri Feb 22 12:33:25.994 [conn3] build index test.set1 { _id: 1 }
m30001| Fri Feb 22 12:33:25.994 [conn3] build index done. scanned 0 total records. 0 secs
3ms
******************************************* Test : jstests/update5.js ...
m30999| Fri Feb 22 12:33:25.996 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:25.996 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:25.997 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:25.997 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:25.999 [conn3] build index test.update5 { a: 1.0 }
m30001| Fri Feb 22 12:33:25.999 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.000 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.000 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.001 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.001 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.003 [conn3] build index test.update5 { a: 1.0 }
m30001| Fri Feb 22 12:33:26.003 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.004 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.004 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.005 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.006 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.008 [conn3] build index test.update5 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:33:26.008 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.009 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.009 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.010 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.010 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.012 [conn3] build index test.update5 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:33:26.013 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.014 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.014 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.015 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.015 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.017 [conn3] build index test.update5 { referer: 1.0 }
m30001| Fri Feb 22 12:33:26.018 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.019 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.019 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.020 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.020 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.023 [conn3] build index test.update5 { referer: 1.0, lame: 1.0 }
m30001| Fri Feb 22 12:33:26.023 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.024 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.024 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.025 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.025 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.027 [conn3] build index test.update5 { referer: 1.0, name: 1.0 }
m30001| Fri Feb 22 12:33:26.027 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:33:26.028 [conn1] DROP: test.update5
m30001| Fri Feb 22 12:33:26.028 [conn3] CMD: drop test.update5
m30001| Fri Feb 22 12:33:26.029 [conn3] build index test.update5 { _id: 1 }
m30001| Fri Feb 22 12:33:26.030 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.032 [conn3] build index test.update5 { date: 1.0, referer: 1.0, name: 1.0 }
m30001| Fri Feb 22 12:33:26.032 [conn3] build index done. scanned 1 total records. 0 secs
38ms
******************************************* Test : jstests/drop_index.js ...
m30999| Fri Feb 22 12:33:26.040 [conn1] DROP: test.dropIndex
m30001| Fri Feb 22 12:33:26.040 [conn3] CMD: drop test.dropIndex
m30001| Fri Feb 22 12:33:26.040 [conn3] build index test.dropIndex { _id: 1 }
m30001| Fri Feb 22 12:33:26.041 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:26.041 [conn3] build index test.dropIndex { a: 1.0 }
m30001| Fri Feb 22 12:33:26.041 [conn3] build index done. scanned 1 total records.
0 secs m30001| Fri Feb 22 12:33:26.042 [conn3] build index test.dropIndex { b: 1.0 } m30001| Fri Feb 22 12:33:26.042 [conn3] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:33:26.043 [conn4] CMD: dropIndexes test.dropIndex m30001| Fri Feb 22 12:33:26.043 [conn4] CMD: dropIndexes test.dropIndex m30001| Fri Feb 22 12:33:26.044 [conn3] build index test.dropIndex { a: 1.0 } m30001| Fri Feb 22 12:33:26.044 [conn3] build index done. scanned 1 total records. 0 secs 12ms ******************************************* Test : jstests/push2.js ... m30999| Fri Feb 22 12:33:26.050 [conn1] DROP: test.push2 m30001| Fri Feb 22 12:33:26.050 [conn3] CMD: drop test.push2 m30001| Fri Feb 22 12:33:26.050 [conn3] build index test.push2 { _id: 1 } m30001| Fri Feb 22 12:33:26.051 [conn3] build index done. scanned 0 total records. 0 secs 0 pushes 1 pushes 2 pushes 3 pushes 4 pushes 5 pushes m30001| Fri Feb 22 12:33:26.151 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.2, filling with zeroes... m30001| Fri Feb 22 12:33:26.151 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.2, size: 256MB, took 0 secs 6 pushes 7 pushes 8 pushes 9 pushes 10 pushes 11 pushes 12 pushes 13 pushes 14 pushes m30001| Fri Feb 22 12:33:26.618 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.3, filling with zeroes... m30001| Fri Feb 22 12:33:26.618 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.3, size: 512MB, took 0 secs 15 pushes 16 pushes 17 pushes 18 pushes 19 pushes 20 pushes 21 pushes 22 pushes 23 pushes m30001| Fri Feb 22 12:33:27.537 [conn3] Assertion: 10334:BSONObj size: 16800187 (0xBB590001) is invalid. Size must be between 0 and 16793600(16MB) First element: 0: ",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,..." 
m30001| 0xdb9678 0xd7f91c 0xd7f9ac 0x9344d4 0xa1bb0e 0xba331e 0xba5828 0xba659c 0xba6645 0xb982ff 0xb9acf2 0xb38308 0xb3cee3 0x933241 0xda9b4b 0xe0adfe 0xfffffd7fff257024 0xfffffd7fff2572f0
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15printStackTraceERSo+0x28 [0xdb9678]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11msgassertedEiPKc+0x9c [0xd7f91c]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'0x97f9ac [0xd7f9ac]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZNK5mongo7BSONObj14_assertInvalidEv+0x264 [0x9344d4]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo14BSONObjBuilder4doneEv+0xde [0xa1bb0e]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZNK5mongo3Mod5applyERNS_15BSONBuilderBaseENS_11BSONElementERNS_8ModStateE+0x92e [0xba331e]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11ModSetState17createNewFromModsERKSsRNS_15BSONBuilderBaseERNS_18BSONIteratorSortedERKSt4pairIKSt17_Rb_tree_iteratorIS7_IS1_N5boost10shared_ptrINS_8ModStateEEEEESF_ERKNS_9LexNumCmpE+0x478 [0xba5828]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11ModSetState20createNewObjFromModsERKSsRNS_14BSONObjBuilderERKNS_7BSONObjE+0x6c [0xba659c]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11ModSetState17createNewFromModsEv+0x65 [0xba6645]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo14_updateObjectsEbPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEPNS_11RemoveSaverEbRKNS_24QueryPlanSelectionPolicyEb+0xb3f [0xb982ff]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo13updateObjectsEPKcRKNS_7BSONObjES4_bbbRNS_7OpDebugEbRKNS_24QueryPlanSelectionPolicyE+0xa2 [0xb9acf2]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo14receivedUpdateERNS_7MessageERNS_5CurOpE+0x3e8 [0xb38308]
m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0x1653 [0xb3cee3] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x91 [0x933241] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x43b [0xda9b4b] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'thread_proxy+0x7e [0xe0adfe] m30001| /lib/amd64/libc.so.1'_thrp_setup+0xbc [0xfffffd7fff257024] m30001| /lib/amd64/libc.so.1'_lwp_start+0x0 [0xfffffd7fff2572f0] m30001| Fri Feb 22 12:33:27.538 [conn3] update test.push2 update: { $push: { a: ",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,..." } } nscanned:1 keyUpdates:0 exception: BSONObj size: 16800187 (0xBB590001) is invalid. Size must be between 0 and 16793600(16MB) First element: 0: ",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,..." code:10334 locks(micros) w:8462 8ms m30999| Fri Feb 22 12:33:27.538 [conn1] DROP: test.push2 m30001| Fri Feb 22 12:33:27.538 [conn3] CMD: drop test.push2 1495ms ******************************************* Test : jstests/ord.js ... m30999| Fri Feb 22 12:33:27.550 [conn1] DROP: test.jstests_ord m30001| Fri Feb 22 12:33:27.551 [conn3] CMD: drop test.jstests_ord m30001| Fri Feb 22 12:33:27.552 [conn3] build index test.jstests_ord { _id: 1 } m30001| Fri Feb 22 12:33:27.553 [conn3] build index done. scanned 0 total records. 
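The push2.js failure above is the BSON document size limit in action: repeated `$push` operations grow the array until the resulting object (16800187 bytes) exceeds the ceiling this build reports, 16793600 bytes (the 16 MB data limit plus slack), so the update raises assertion 10334 and the document is left unchanged. A sketch of the size check as the error message states it (assumed logic, not the server's actual code):

```python
# Sketch of the BSONObj size validation behind assertion 10334 in the
# push2.js run above. The log's message is "Size must be between 0 and
# 16793600(16MB)"; the object produced by $push was 16800187 bytes,
# so the update is rejected.
BSON_OBJ_MAX = 16793600  # ceiling printed by this build's error message

def bson_size_valid(obj_size):
    """Mirror the check 'Size must be between 0 and 16793600(16MB)'."""
    return 0 <= obj_size <= BSON_OBJ_MAX

failing_size = 16800187  # size reported in the assertion above
```

Note that a plain 16 MB object (16777216 bytes) still fits under this ceiling; only the slightly larger object produced by the final push trips the check.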
0 secs m30001| Fri Feb 22 12:33:27.553 [conn3] info: creating collection test.jstests_ord on add index m30001| Fri Feb 22 12:33:27.553 [conn3] build index test.jstests_ord { a: 1.0 } m30001| Fri Feb 22 12:33:27.553 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:27.554 [conn3] build index test.jstests_ord { b: 1.0 } m30001| Fri Feb 22 12:33:27.554 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:27.565 [conn4] CMD: dropIndexes test.jstests_ord 27ms ******************************************* Test : jstests/index_check7.js ... m30999| Fri Feb 22 12:33:27.567 [conn1] DROP: test.index_check7 m30001| Fri Feb 22 12:33:27.608 [conn3] CMD: drop test.index_check7 m30001| Fri Feb 22 12:33:27.610 [conn3] build index test.index_check7 { _id: 1 } m30001| Fri Feb 22 12:33:27.610 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:27.614 [conn3] build index test.index_check7 { x: 1.0 } m30001| Fri Feb 22 12:33:27.615 [conn3] build index done. scanned 100 total records. 0 secs m30001| Fri Feb 22 12:33:27.616 [conn3] build index test.index_check7 { x: -1.0 } m30001| Fri Feb 22 12:33:27.617 [conn3] build index done. scanned 100 total records. 0 secs 51ms ******************************************* Test : jstests/ts1.js ... m30999| Fri Feb 22 12:33:27.619 [conn1] DROP: test.ts1 m30001| Fri Feb 22 12:33:27.619 [conn3] CMD: drop test.ts1 m30001| Fri Feb 22 12:33:27.620 [conn3] build index test.ts1 { _id: 1 } m30001| Fri Feb 22 12:33:27.620 [conn3] build index done. scanned 0 total records. 
0 secs m30000| Fri Feb 22 12:33:29.593 [initandlisten] connection accepted from 127.0.0.1:42520 #10 (8 connections now open) m30999| Fri Feb 22 12:33:29.594 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127659948539f3197916032 m30999| Fri Feb 22 12:33:29.594 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 2018ms ******************************************* Test : jstests/orr.js ... m30999| Fri Feb 22 12:33:29.644 [conn1] DROP: test.jstests_orr m30001| Fri Feb 22 12:33:29.644 [conn3] CMD: drop test.jstests_orr m30001| Fri Feb 22 12:33:29.645 [conn3] build index test.jstests_orr { _id: 1 } m30001| Fri Feb 22 12:33:29.646 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.646 [conn3] info: creating collection test.jstests_orr on add index m30001| Fri Feb 22 12:33:29.646 [conn3] build index test.jstests_orr { a: 1.0 } m30001| Fri Feb 22 12:33:29.647 [conn3] build index done. scanned 0 total records. 0 secs 14ms ******************************************* Test : jstests/useindexonobjgtlt.js ... m30999| Fri Feb 22 12:33:29.657 [conn1] DROP: test.factories m30001| Fri Feb 22 12:33:29.657 [conn3] CMD: drop test.factories m30001| Fri Feb 22 12:33:29.658 [conn3] build index test.factories { _id: 1 } m30001| Fri Feb 22 12:33:29.658 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.658 [conn3] build index test.factories { metro: 1.0 } m30001| Fri Feb 22 12:33:29.659 [conn3] build index done. scanned 1 total records. 0 secs 11ms ******************************************* Test : jstests/cursor7.js ... m30999| Fri Feb 22 12:33:29.666 [conn1] DROP: test.ed_db_cursor_mi m30001| Fri Feb 22 12:33:29.666 [conn3] CMD: drop test.ed_db_cursor_mi m30001| Fri Feb 22 12:33:29.667 [conn3] build index test.ed_db_cursor_mi { _id: 1 } m30001| Fri Feb 22 12:33:29.667 [conn3] build index done. 
scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.667 [conn3] build index test.ed_db_cursor_mi { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:33:29.668 [conn3] build index done. scanned 4 total records. 0 secs 17ms ******************************************* Test : jstests/find_and_modify_server6993.js ... m30999| Fri Feb 22 12:33:29.685 [conn1] DROP: test.find_and_modify_server6993 m30001| Fri Feb 22 12:33:29.685 [conn3] CMD: drop test.find_and_modify_server6993 m30001| Fri Feb 22 12:33:29.686 [conn3] build index test.find_and_modify_server6993 { _id: 1 } m30001| Fri Feb 22 12:33:29.686 [conn3] build index done. scanned 0 total records. 0 secs 9ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_array2.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/bench_test1.js ******************************************* Test : jstests/objid5.js ... m30999| Fri Feb 22 12:33:29.688 [conn1] DROP: test.objid5 m30001| Fri Feb 22 12:33:29.688 [conn3] CMD: drop test.objid5 m30001| Fri Feb 22 12:33:29.688 [conn3] build index test.objid5 { _id: 1 } m30001| Fri Feb 22 12:33:29.689 [conn3] build index done. scanned 0 total records. 0 secs 3ms ******************************************* Test : jstests/index_check6.js ... m30999| Fri Feb 22 12:33:29.695 [conn1] DROP: test.index_check6 m30001| Fri Feb 22 12:33:29.695 [conn3] CMD: drop test.index_check6 m30001| Fri Feb 22 12:33:29.695 [conn3] build index test.index_check6 { _id: 1 } m30001| Fri Feb 22 12:33:29.696 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.696 [conn3] info: creating collection test.index_check6 on add index m30001| Fri Feb 22 12:33:29.696 [conn3] build index test.index_check6 { age: 1.0, rating: 1.0 } m30001| Fri Feb 22 12:33:29.696 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:29.725 [conn1] DROP: test.index_check6 m30001| Fri Feb 22 12:33:29.725 [conn3] CMD: drop test.index_check6 m30001| Fri Feb 22 12:33:29.727 [conn3] build index test.index_check6 { _id: 1 } m30001| Fri Feb 22 12:33:29.727 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.762 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.774 [conn3] build index test.index_check6 { a: -1.0, b: -1.0, c: -1.0 } m30001| Fri Feb 22 12:33:29.783 [conn3] build index done. scanned 900 total records. 0.008 secs m30001| Fri Feb 22 12:33:29.798 [conn4] killcursors: found 0 of 1 m30001| Fri Feb 22 12:33:29.809 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.810 [conn3] build index test.index_check6 { a: -1.0, b: -1.0, c: 1.0 } m30001| Fri Feb 22 12:33:29.815 [conn3] build index done. scanned 900 total records. 0.004 secs m30001| Fri Feb 22 12:33:29.829 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.830 [conn3] build index test.index_check6 { a: -1.0, b: 1.0, c: -1.0 } m30001| Fri Feb 22 12:33:29.835 [conn3] build index done. scanned 900 total records. 0.004 secs m30001| Fri Feb 22 12:33:29.850 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.851 [conn3] build index test.index_check6 { a: -1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:33:29.856 [conn3] build index done. scanned 900 total records. 0.005 secs m30001| Fri Feb 22 12:33:29.872 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.873 [conn3] build index test.index_check6 { a: 1.0, b: -1.0, c: -1.0 } m30001| Fri Feb 22 12:33:29.878 [conn3] build index done. scanned 900 total records. 0.004 secs m30001| Fri Feb 22 12:33:29.892 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.893 [conn3] build index test.index_check6 { a: 1.0, b: -1.0, c: 1.0 } m30001| Fri Feb 22 12:33:29.898 [conn3] build index done. scanned 900 total records. 
0.005 secs m30001| Fri Feb 22 12:33:29.913 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.913 [conn3] build index test.index_check6 { a: 1.0, b: 1.0, c: -1.0 } m30001| Fri Feb 22 12:33:29.918 [conn3] build index done. scanned 900 total records. 0.004 secs m30001| Fri Feb 22 12:33:29.933 [conn4] CMD: dropIndexes test.index_check6 m30001| Fri Feb 22 12:33:29.934 [conn3] build index test.index_check6 { a: 1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:33:29.939 [conn3] build index done. scanned 900 total records. 0.004 secs 263ms ******************************************* Test : jstests/ore.js ... m30999| Fri Feb 22 12:33:29.959 [conn1] DROP: test.jstests_ore m30001| Fri Feb 22 12:33:29.959 [conn3] CMD: drop test.jstests_ore m30001| Fri Feb 22 12:33:29.959 [conn3] build index test.jstests_ore { _id: 1 } m30001| Fri Feb 22 12:33:29.960 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.960 [conn3] info: creating collection test.jstests_ore on add index m30001| Fri Feb 22 12:33:29.960 [conn3] build index test.jstests_ore { a: -1.0 } m30001| Fri Feb 22 12:33:29.961 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.961 [conn3] build index test.jstests_ore { b: 1.0 } m30001| Fri Feb 22 12:33:29.961 [conn3] build index done. scanned 0 total records. 0 secs 9ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/update4.js ******************************************* Test : jstests/index1.js ... m30001| Fri Feb 22 12:33:29.969 [conn3] build index test.embeddedIndexTest { _id: 1 } m30001| Fri Feb 22 12:33:29.969 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:29.970 [conn3] build index test.embeddedIndexTest { z.a: 1.0 } m30001| Fri Feb 22 12:33:29.970 [conn3] build index done. scanned 1 total records. 
0 secs m30001| Fri Feb 22 12:33:29.973 [conn4] CMD: validate test.embeddedIndexTest m30001| Fri Feb 22 12:33:29.973 [conn4] validating index 0: test.embeddedIndexTest.$_id_ m30001| Fri Feb 22 12:33:29.973 [conn4] validating index 1: test.embeddedIndexTest.$z.a_1 10ms ******************************************* Test : jstests/indexi.js ... m30999| Fri Feb 22 12:33:29.974 [conn1] DROP: test.jstests_indexi m30001| Fri Feb 22 12:33:29.974 [conn3] CMD: drop test.jstests_indexi m30001| Fri Feb 22 12:33:29.974 [conn3] build index test.jstests_indexi { _id: 1 } m30001| Fri Feb 22 12:33:29.975 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.025 [conn3] build index test.jstests_indexi { a: 1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:33:30.031 [conn3] build index done. scanned 1000 total records. 0.005 secs m30001| Fri Feb 22 12:33:30.032 [conn3] build index test.jstests_indexi { a: 1.0, c: 1.0 } m30001| Fri Feb 22 12:33:30.037 [conn3] build index done. scanned 1000 total records. 0.005 secs 68ms ******************************************* Test : jstests/updatel.js ... m30999| Fri Feb 22 12:33:30.042 [conn1] DROP: test.jstests_updatel m30001| Fri Feb 22 12:33:30.042 [conn3] CMD: drop test.jstests_updatel m30001| Fri Feb 22 12:33:30.044 [conn3] build index test.jstests_updatel { _id: 1 } m30001| Fri Feb 22 12:33:30.044 [conn3] build index done. scanned 0 total records. 0 secs 6ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2nonstring.js !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/profile3.js ******************************************* Test : jstests/queryoptimizer3.js ... 
m30999| Fri Feb 22 12:33:30.048 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.048 [conn3] CMD: drop test.jstests_queryoptimizer3 Fri Feb 22 12:33:30.054 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 0; i < 400; ++i ) { sleep( 50 ); db.jstests_queryoptimizer3.drop(); } localhost:30999/admin m30001| Fri Feb 22 12:33:30.055 [conn3] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:30.054 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.055 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.056 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.056 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.056 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.056 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.057 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.057 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.066 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.066 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.067 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.068 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.068 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.068 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.068 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.069 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.069 [conn3] build index done. scanned 0 total records. 0 secs sh14645| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:33:30.077 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.077 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.079 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.079 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.079 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.079 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.079 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.080 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.080 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.087 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.091 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.092 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.093 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.093 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.093 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.093 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.093 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.093 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.098 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.099 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.101 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.104 [conn3] build index done. scanned 0 total records. 0.002 secs m30001| Fri Feb 22 12:33:30.104 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.104 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.104 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.105 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.105 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.111 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.111 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.113 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.113 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.113 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.113 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.114 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.114 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.114 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.121 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.121 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.122 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.122 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.123 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.123 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.123 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.123 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.123 [conn3] build index done. scanned 0 total records. 0 secs sh14645| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:33:30.126 [mongosMain] connection accepted from 127.0.0.1:40591 #3 (2 connections now open) m30999| Fri Feb 22 12:33:30.131 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.131 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.132 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.132 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.133 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.133 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.133 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.133 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.133 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.138 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.142 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.144 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.144 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.144 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.144 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.145 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.145 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.145 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.150 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.150 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.152 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.152 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.152 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.152 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.152 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.153 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.153 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.159 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.159 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.161 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.161 [conn3] build index done. scanned 0 total records. 
0 secs
m30001| Fri Feb 22 12:33:30.161 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.161 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.161 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.162 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.162 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.169 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.169 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.170 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.170 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.170 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.170 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.171 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.171 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.171 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.179 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.179 [conn6] CMD: drop test.jstests_queryoptimizer3
m30999| Fri Feb 22 12:33:30.181 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.181 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.181 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.182 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.182 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.182 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.182 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.182 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.183 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.187 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.195 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.197 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.198 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.198 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.198 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.198 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.198 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.199 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.203 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.209 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.211 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.212 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.212 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.212 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.212 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.212 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.213 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.223 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.223 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.225 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.225 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.225 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.225 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.226 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.226 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.226 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.232 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.232 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.234 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.234 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.236 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.236 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.237 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.238 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.238 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.238 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.238 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.238 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.239 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.250 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.250 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.252 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.252 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.252 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.252 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.252 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.253 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.253 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.257 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.262 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.263 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.264 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.264 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.264 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.264 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.264 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.265 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.269 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.270 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.271 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.271 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.271 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.271 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.272 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.272 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.272 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.279 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.279 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.280 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.281 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.281 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.281 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.281 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.281 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.282 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.284 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.284 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.286 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.286 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.288 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.288 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.289 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.289 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.289 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.289 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.290 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.290 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.290 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.299 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.299 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.301 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.301 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.301 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.301 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.301 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.302 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.302 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.306 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.312 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.314 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.314 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.314 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.314 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.314 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.315 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.315 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.319 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.320 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.322 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.322 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.322 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.322 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.322 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.323 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.323 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.330 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.330 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.331 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.332 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.332 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.332 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.332 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.332 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.332 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.336 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.336 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.338 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.338 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.341 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.341 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.343 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.343 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.343 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.343 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.343 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.344 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.344 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.353 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.353 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.355 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.355 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.355 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.355 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.355 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.356 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.356 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.362 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.366 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.367 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.368 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.368 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.368 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.368 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.368 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.368 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.373 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.373 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.375 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.375 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.375 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.375 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.375 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.376 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.376 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.384 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.384 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.386 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.386 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.386 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.386 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.386 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.387 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.387 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.388 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.388 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.390 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.391 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.396 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.396 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.397 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.397 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.397 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.397 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.398 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.398 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.398 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.406 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.406 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.408 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.408 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.408 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.408 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.409 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.409 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.409 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.414 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.427 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.430 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.431 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.431 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.431 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.431 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.432 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.432 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.437 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.438 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.439 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.440 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.440 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.440 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.440 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.440 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30999| Fri Feb 22 12:33:30.441 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.441 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.441 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.443 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.443 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.447 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.447 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.448 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.448 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.448 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.448 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.449 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.449 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.449 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.456 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.456 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.457 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.458 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.458 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.458 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.458 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.458 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.459 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.466 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.466 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.467 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.468 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.468 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.468 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.468 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.468 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.469 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.473 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.478 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.480 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.480 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.480 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.480 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.480 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.480 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.481 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.485 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.486 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.487 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.488 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.488 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.488 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.488 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.489 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.489 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.493 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.493 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.495 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.495 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.496 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.496 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.497 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.497 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.497 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.497 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.498 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.498 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.498 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.505 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.505 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.506 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.507 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.507 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.507 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.507 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.507 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.508 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.515 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.515 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.516 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.517 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.517 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.517 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.517 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.517 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.517 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.522 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.526 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.527 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.528 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.528 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.528 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.528 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.528 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.529 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.533 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.536 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.537 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.537 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.537 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.537 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.538 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.538 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.538 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.544 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.544 [conn3] CMD: drop test.jstests_queryoptimizer3
m30999| Fri Feb 22 12:33:30.545 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.545 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.546 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.546 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.546 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.546 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.546 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.547 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.547 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.553 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.553 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.555 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.555 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.555 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.555 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.555 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.556 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.556 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.563 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.563 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.565 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.565 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.565 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.565 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.565 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.566 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.566 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.570 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.575 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.576 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.577 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.577 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.577 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.577 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.577 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.578 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.582 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.583 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.585 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.585 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.585 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.585 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.586 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.586 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.586 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.592 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.592 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.594 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.594 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.594 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.594 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.594 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.595 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.595 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.596 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.596 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.598 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.598 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.601 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.601 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.602 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.603 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.603 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.603 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.603 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.603 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.603 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.613 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.613 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.615 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.616 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.616 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.616 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.616 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.617 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.617 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.623 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.630 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.632 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.633 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.633 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.633 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.633 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.635 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.635 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.640 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.641 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.643 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.643 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.643 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.643 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.643 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.644 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 }
m30001| Fri Feb 22 12:33:30.644 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.648 [conn3] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.648 [conn6] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.650 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.650 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:30.652 [conn1] DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.652 [conn3] CMD: drop test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:30.653 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 }
m30001| Fri Feb 22 12:33:30.653 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:30.653 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index
m30001| Fri Feb 22 12:33:30.653 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 }
m30001| Fri Feb 22 12:33:30.654 [conn3] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:33:30.654 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.654 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.660 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.661 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.662 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.662 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.662 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.662 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.663 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.663 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.663 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.670 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.670 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.672 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.672 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.672 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.672 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.672 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.673 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.673 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.677 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.682 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.683 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.683 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.684 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.684 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.684 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.684 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.684 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.689 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.689 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.691 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.691 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.691 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.691 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.692 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.692 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.692 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.699 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.699 [conn3] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:30.700 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.700 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.700 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.701 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.701 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.701 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.701 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.701 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.702 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.708 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.708 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.709 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.709 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.709 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.709 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.710 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.710 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.710 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.717 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.717 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.719 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.719 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.719 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.719 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.719 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.720 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.720 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.724 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.729 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.730 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.731 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.731 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.731 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.731 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.731 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.731 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.736 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.736 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.739 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.740 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.740 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.740 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.740 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.740 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.741 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.747 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.747 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.748 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.749 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.749 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.749 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.749 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.749 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.750 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.750 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.751 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.752 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.752 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.756 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.756 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.757 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.757 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.757 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.757 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.757 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.757 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.758 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.765 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.765 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.766 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.766 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.766 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.766 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.767 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.767 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.767 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.771 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.776 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.777 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.778 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.778 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.778 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.778 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.778 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.778 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.783 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.783 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.785 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.785 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.785 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.785 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.785 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.786 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.786 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.792 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.792 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.793 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.794 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.794 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.794 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.794 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.794 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.795 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.801 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.801 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.802 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.802 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.802 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.802 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30999| Fri Feb 22 12:33:30.802 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.803 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.803 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.804 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.804 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.804 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.804 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.804 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.810 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.810 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.811 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.812 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.812 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.812 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.812 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.812 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.813 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.817 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.821 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.823 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.823 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.823 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.823 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.823 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.824 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.824 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.829 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.829 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.830 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.831 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.831 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.831 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.831 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.831 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.831 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.838 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.838 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.839 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.839 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.839 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.839 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.840 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.840 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.840 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.851 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.851 [conn3] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:30.854 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.854 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.855 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.855 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.855 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.855 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.855 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.857 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.857 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.857 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.857 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.857 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.864 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.864 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.865 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.865 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.865 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.865 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.866 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.866 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.866 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.870 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.875 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.876 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.877 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.877 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.877 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.877 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.877 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.877 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.882 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.883 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.884 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.884 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.884 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.884 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.885 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.885 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.885 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.891 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.891 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.893 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.893 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.893 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.893 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.893 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.894 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.894 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.900 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.900 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.901 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.902 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.902 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.902 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.902 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.902 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.903 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.907 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.907 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.909 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.909 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.910 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.910 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.911 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.911 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.911 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.911 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.911 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.912 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.912 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.916 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.920 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.922 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.922 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.922 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.922 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.922 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.923 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.923 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.927 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.928 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.929 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.930 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.930 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.930 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.930 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.930 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.930 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.937 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.937 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.938 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.938 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.938 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.938 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.939 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.939 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.939 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.945 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.945 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.947 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.947 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.947 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.947 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.947 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.948 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.948 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.957 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.957 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.959 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30999| Fri Feb 22 12:33:30.959 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.959 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.959 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.959 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.959 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.959 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.961 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.961 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:30.961 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.961 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.961 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.965 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.968 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.969 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.970 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.970 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.970 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.970 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.970 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.970 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.975 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.975 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.977 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.977 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.977 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.977 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.977 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.978 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.978 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:30.984 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.984 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.985 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.986 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.986 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.986 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.986 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.986 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.987 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:30.993 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.993 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:30.994 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:30.994 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.995 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:30.995 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:30.995 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:30.995 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:30.995 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.003 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.003 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.005 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.005 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:31.005 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.005 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.006 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.006 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.006 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.010 [conn1] DROP: test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.011 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.015 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.016 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.017 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.017 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.017 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.017 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.018 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.018 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.018 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.023 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.024 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.025 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.025 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:31.025 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.025 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.026 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.026 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.026 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.033 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.033 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.034 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.034 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.034 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.034 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.035 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.035 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.035 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.041 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.041 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.043 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.043 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.043 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.043 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.043 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:33:31.044 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.044 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.051 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.051 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.052 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.053 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.053 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.053 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.053 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.055 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.055 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.060 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.064 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.065 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.066 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.066 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.066 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.066 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.066 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.067 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:31.067 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.067 [conn6] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.069 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.069 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:33:31.071 [conn1] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.072 [conn3] CMD: drop test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.073 [conn3] build index test.jstests_queryoptimizer3 { _id: 1 } m30001| Fri Feb 22 12:33:31.074 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.074 [conn3] info: creating collection test.jstests_queryoptimizer3 on add index m30001| Fri Feb 22 12:33:31.074 [conn3] build index test.jstests_queryoptimizer3 { a: 1.0 } m30001| Fri Feb 22 12:33:31.074 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:33:31.074 [conn3] build index test.jstests_queryoptimizer3 { b: 1.0 } m30001| Fri Feb 22 12:33:31.074 [conn3] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:33:31.119 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.119 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.171 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.171 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.222 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.222 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.273 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.273 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.324 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.324 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.375 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.375 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.425 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.425 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.476 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.476 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.529 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.529 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.580 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.580 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.630 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.630 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.681 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.681 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.731 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.732 [conn6] CMD: drop 
test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.782 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.782 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.833 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.833 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.883 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.883 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.934 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.934 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:31.984 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:31.985 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.035 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.035 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.086 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.086 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.137 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.137 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.187 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.187 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.238 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.238 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.288 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.288 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.339 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.339 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.389 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.389 [conn6] CMD: 
drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.440 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.440 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.490 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.490 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.541 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.541 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.592 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.592 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.642 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.642 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.693 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.693 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.744 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.744 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.794 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.794 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.845 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.845 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.895 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.895 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.946 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.946 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:32.996 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:32.996 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.047 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.047 [conn6] 
CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.098 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.098 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.148 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.148 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.199 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.199 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.249 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.250 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.300 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.300 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.351 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.351 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.401 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.401 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.452 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.452 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.502 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.502 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.558 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.558 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.608 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.609 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.659 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.659 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.710 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.710 
[conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.761 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.761 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.811 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.811 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.862 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.862 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.912 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.912 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:33.963 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:33.963 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.014 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.014 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.064 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.065 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.115 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.115 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.166 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.166 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.216 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.216 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.267 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.267 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.317 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.317 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.368 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 
12:33:34.368 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.418 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.418 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.469 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.469 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.519 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.520 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.570 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.570 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.620 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.621 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.671 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.671 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.722 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.722 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.772 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.772 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.823 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.823 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.873 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.873 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.924 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.924 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:34.975 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:34.975 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.025 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 
22 12:33:35.026 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.076 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.076 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.127 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.127 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.177 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.177 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.228 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.228 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.278 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.279 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.329 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.329 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.380 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.380 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.430 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.430 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.481 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.481 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.531 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.531 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.582 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.582 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.596 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127659f881c8e7453916033 m30999| Fri Feb 22 12:33:35.596 [Balancer] distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30999| Fri Feb 22 12:33:35.632 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.633 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.683 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.683 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.734 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.734 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.784 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.785 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.835 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.835 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.886 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.886 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.936 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.936 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:35.987 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:35.987 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.037 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.038 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.088 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.088 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.139 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.139 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.189 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.189 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.240 [conn3] DROP: test.jstests_queryoptimizer3 
m30001| Fri Feb 22 12:33:36.240 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.290 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.290 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.341 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.341 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.391 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.392 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.442 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.442 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.493 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.493 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.543 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.543 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.594 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.594 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.645 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.645 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.696 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.696 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.746 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.746 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.797 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.797 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.848 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.848 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.904 [conn3] DROP: 
test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.904 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:36.955 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:36.955 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.006 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.006 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.056 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.057 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.107 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.107 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.158 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.158 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.209 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.209 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.260 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.260 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.311 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.311 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.362 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.362 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.413 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.413 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.463 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.464 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.514 [conn3] DROP: test.jstests_queryoptimizer3 m30001| Fri Feb 22 12:33:37.514 [conn6] CMD: drop test.jstests_queryoptimizer3 m30999| Fri Feb 22 12:33:37.565 [conn3] 
DROP: test.jstests_queryoptimizer3
m30001| Fri Feb 22 12:33:37.565 [conn6] CMD: drop test.jstests_queryoptimizer3
[... the mongos DROP / mongod "CMD: drop test.jstests_queryoptimizer3" pair repeats at ~50 ms intervals from 12:33:37.616 through 12:33:41.461 ...]
[... DROP / CMD: drop test.jstests_queryoptimizer3 pair repeats at ~50 ms intervals from 12:33:41.512 through 12:33:41.562 ...]
m30999| Fri Feb 22 12:33:41.598 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765a5881c8e7453916034
m30999| Fri Feb 22 12:33:41.598 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
[... DROP / CMD: drop test.jstests_queryoptimizer3 pair repeats at ~50 ms intervals from 12:33:41.613 through 12:33:47.585 ...]
m30999| Fri Feb 22 12:33:47.600 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765ab881c8e7453916035
m30999| Fri Feb 22 12:33:47.601 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
[... DROP / CMD: drop test.jstests_queryoptimizer3 pair repeats at ~50 ms intervals from 12:33:47.636 through 12:33:50.416 ...]
sh14645| false
m30999| Fri Feb 22 12:33:50.430 [conn3] end connection 127.0.0.1:40591 (1 connection now open)
                20389ms
*******************************************
         Test : jstests/fts_spanish.js ...
m30000| Fri Feb 22 12:33:50.438 [initandlisten] connection accepted from 127.0.0.1:40913 #11 (9 connections now open)
m30001| Fri Feb 22 12:33:50.439 [initandlisten] connection accepted from 127.0.0.1:52585 #8 (6 connections now open)
m30999| Fri Feb 22 12:33:50.439 [conn1] DROP: test.text_spanish
m30001| Fri Feb 22 12:33:50.440 [conn3] CMD: drop test.text_spanish
m30001| Fri Feb 22 12:33:50.441 [conn3] build index test.text_spanish { _id: 1 }
m30001| Fri Feb 22 12:33:50.442 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:50.442 [conn3] build index test.text_spanish { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:33:50.443 [conn3] build index done. scanned 4 total records. 0 secs
                10ms
*******************************************
         Test : jstests/coveredIndex3.js ... 4ms
*******************************************
         Test : jstests/in6.js ...
m30999| Fri Feb 22 12:33:50.451 [conn1] DROP: test.jstests_in6
m30001| Fri Feb 22 12:33:50.452 [conn3] CMD: drop test.jstests_in6
m30001| Fri Feb 22 12:33:50.452 [conn3] build index test.jstests_in6 { _id: 1 }
m30001| Fri Feb 22 12:33:50.453 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:50.454 [conn3] build index test.jstests_in6 { i: 1.0 }
m30001| Fri Feb 22 12:33:50.454 [conn3] build index done. scanned 1 total records. 0 secs
4ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/dbhash.js
******************************************* Test : jstests/all5.js ...
m30999| Fri Feb 22 12:33:50.456 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.457 [conn3] CMD: drop test.jstests_all5
m30999| Fri Feb 22 12:33:50.457 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.457 [conn3] CMD: drop test.jstests_all5
m30001| Fri Feb 22 12:33:50.457 [conn3] build index test.jstests_all5 { _id: 1 }
m30001| Fri Feb 22 12:33:50.458 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:50.458 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.459 [conn3] CMD: drop test.jstests_all5
m30001| Fri Feb 22 12:33:50.460 [conn3] build index test.jstests_all5 { _id: 1 }
m30001| Fri Feb 22 12:33:50.460 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:50.461 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.461 [conn3] CMD: drop test.jstests_all5
m30001| Fri Feb 22 12:33:50.461 [conn3] build index test.jstests_all5 { _id: 1 }
m30001| Fri Feb 22 12:33:50.462 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:50.462 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.462 [conn3] CMD: drop test.jstests_all5
m30001| Fri Feb 22 12:33:50.463 [conn3] build index test.jstests_all5 { _id: 1 }
m30001| Fri Feb 22 12:33:50.463 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:50.463 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.464 [conn3] CMD: drop test.jstests_all5
m30001| Fri Feb 22 12:33:50.464 [conn3] build index test.jstests_all5 { _id: 1 }
m30001| Fri Feb 22 12:33:50.465 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:50.465 [conn1] DROP: test.jstests_all5
m30001| Fri Feb 22 12:33:50.465 [conn3] CMD: drop test.jstests_all5
m30001| Fri Feb 22 12:33:50.466 [conn3] build index test.jstests_all5 { _id: 1 }
m30001| Fri Feb 22 12:33:50.466 [conn3] build index done. scanned 0 total records. 0 secs
11ms
******************************************* Test : jstests/index_maxkey.js ...
m30999| Fri Feb 22 12:33:50.467 [conn1] DROP: test.index_maxkey
m30001| Fri Feb 22 12:33:50.467 [conn3] CMD: drop test.index_maxkey
m30001| Fri Feb 22 12:33:50.468 [conn3] build index test.index_maxkey { _id: 1 }
m30001| Fri Feb 22 12:33:50.468 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:50.469 [conn3] info: creating collection test.index_maxkey on add index
m30001| Fri Feb 22 12:33:50.469 [conn3] build index test.index_maxkey { s: 1.0 }
m30001| Fri Feb 22 12:33:50.469 [conn3] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:33:50.549 [conn11] end connection 127.0.0.1:40913 (8 connections now open)
m30001| Fri Feb 22 12:33:50.549 [conn8] end connection 127.0.0.1:52585 (5 connections now open)
m30001| Fri Feb 22 12:33:50.712 [conn3] test.index_maxkey ERROR: key too large len:822 max:819 822 test.index_maxkey.$s_1 indexVersion: 0 max key is : 821
m30999| Fri Feb 22 12:33:50.715 [conn1] DROP: test.index_maxkey
m30001| Fri Feb 22 12:33:50.715 [conn3] CMD: drop test.index_maxkey
m30001| Fri Feb 22 12:33:50.716 [conn3] build index test.index_maxkey { _id: 1 }
m30001| Fri Feb 22 12:33:50.717 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:50.717 [conn3] info: creating collection test.index_maxkey on add index
m30001| Fri Feb 22 12:33:50.717 [conn3] build index test.index_maxkey { s: 1.0 }
m30001| Fri Feb 22 12:33:50.717 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:51.060 [conn3] test.index_maxkey ERROR: key too large len:1028 max:1024 1028 test.index_maxkey.$s_1 indexVersion: 1 max key is : 1026
597ms
******************************************* Test : jstests/group6.js ...
m30999| Fri Feb 22 12:33:51.064 [conn1] DROP: test.jstests_group6
m30001| Fri Feb 22 12:33:51.064 [conn3] CMD: drop test.jstests_group6
m30001| Fri Feb 22 12:33:51.065 [conn3] build index test.jstests_group6 { _id: 1 }
m30001| Fri Feb 22 12:33:51.066 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:51.074 [conn1] DROP: test.jstests_group6
m30001| Fri Feb 22 12:33:51.074 [conn3] CMD: drop test.jstests_group6
m30001| Fri Feb 22 12:33:51.075 [conn3] build index test.jstests_group6 { _id: 1 }
m30001| Fri Feb 22 12:33:51.076 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:33:51.085 [conn1] DROP: test.jstests_group6
m30001| Fri Feb 22 12:33:51.085 [conn3] CMD: drop test.jstests_group6
m30001| Fri Feb 22 12:33:51.086 [conn3] build index test.jstests_group6 { _id: 1 }
m30001| Fri Feb 22 12:33:51.086 [conn3] build index done. scanned 0 total records. 0 secs
40ms
******************************************* Test : jstests/basicc.js ...
m30999| Fri Feb 22 12:33:51.104 [conn1] DROP: test.jstests_basicc
m30001| Fri Feb 22 12:33:51.104 [conn3] CMD: drop test.jstests_basicc
m30999| Fri Feb 22 12:33:51.105 [conn1] couldn't find database [test_basicc] in config db
m30999| Fri Feb 22 12:33:51.106 [conn1] put [test_basicc] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:33:51.106 [conn1] DROP: test_basicc.jstests_basicc
m30000| Fri Feb 22 12:33:51.106 [conn6] CMD: drop test_basicc.jstests_basicc
Fri Feb 22 12:33:51.112 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval while( 1 ) { db.jstests.basicc1.save( {} ); } localhost:30999
m30000| Fri Feb 22 12:33:51.113 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/test_basicc.ns, filling with zeroes...
m30000| Fri Feb 22 12:33:51.113 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/test_basicc.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:33:51.113 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/test_basicc.0, filling with zeroes...
m30000| Fri Feb 22 12:33:51.114 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/test_basicc.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:33:51.114 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/test_basicc.1, filling with zeroes...
m30000| Fri Feb 22 12:33:51.114 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/test_basicc.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:33:51.117 [conn6] build index test_basicc.jstests_basicc { _id: 1 }
m30000| Fri Feb 22 12:33:51.118 [conn6] build index done. scanned 0 total records. 0.001 secs
sh14665| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 12:33:52.173 shell: stopped mongo program on pid 14665
m30999| Fri Feb 22 12:33:52.173 [conn1] DROP: test.jstests_basicc
m30001| Fri Feb 22 12:33:52.173 [conn3] CMD: drop test.jstests_basicc
m30999| Fri Feb 22 12:33:52.174 [conn1] DROP: test_basicc.jstests_basicc
m30000| Fri Feb 22 12:33:52.174 [conn6] CMD: drop test_basicc.jstests_basicc
m30999| Fri Feb 22 12:33:52.175 [conn1] DROP DATABASE: test_basicc
m30999| Fri Feb 22 12:33:52.175 [conn1] erased database test_basicc from local registry
m30999| Fri Feb 22 12:33:52.177 [conn1] DBConfig::dropDatabase: test_basicc
m30999| Fri Feb 22 12:33:52.177 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:33:52-512765b0881c8e7453916036", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536432177), what: "dropDatabase.start", ns: "test_basicc", details: {} }
m30999| Fri Feb 22 12:33:52.177 [conn1] DBConfig::dropDatabase: test_basicc dropped sharded collections: 0
m30000| Fri Feb 22 12:33:52.177 [conn3] dropDatabase test_basicc starting
m30000| Fri Feb 22 12:33:52.204 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:33:52.206 [conn3] dropDatabase test_basicc finished
m30999| Fri Feb 22 12:33:52.206 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:33:52-512765b0881c8e7453916037", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536432206), what: "dropDatabase", ns: "test_basicc", details: {} }
1103ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_update_btree.js
******************************************* Test : jstests/remove9.js ...
m30999| Fri Feb 22 12:33:52.207 [conn1] DROP: test.jstests_remove9
m30001| Fri Feb 22 12:33:52.207 [conn3] CMD: drop test.jstests_remove9
m30001| Fri Feb 22 12:33:52.208 [conn3] build index test.jstests_remove9 { _id: 1 }
m30001| Fri Feb 22 12:33:52.209 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:33:52.209 [conn3] info: creating collection test.jstests_remove9 on add index
m30001| Fri Feb 22 12:33:52.209 [conn3] build index test.jstests_remove9 { i: 1.0 }
m30001| Fri Feb 22 12:33:52.209 [conn3] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:33:52.257 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_remove9; for( j = 0; j < 5000; ++j ) { i = Random.randInt( 499 ) * 2; t.update( {i:i}, {$set:{i:2000}} ); t.remove( {i:2000} ); t.save( {i:i} ); } localhost:30999/admin
sh14666| MongoDB shell version: 2.4.0-rc1-pre-
sh14666| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:33:52.330 [mongosMain] connection accepted from 127.0.0.1:46997 #4 (2 connections now open)
m30001| Fri Feb 22 12:33:52.964 [conn6] test.jstests_remove9 warning: cursor loc 0:2cdaa4 does not match byLoc position 0:2cda70 !
m30999| Fri Feb 22 12:33:53.602 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765b1881c8e7453916038
m30999| Fri Feb 22 12:33:53.603 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:33:54.236 [conn6] test.jstests_remove9 warning: cursor loc 0:2cdaa4 does not match byLoc position 0:2cda70 !
m30999| Fri Feb 22 12:33:55.585 [conn4] end connection 127.0.0.1:46997 (1 connection now open)
m30001| Fri Feb 22 12:33:59.604 [initandlisten] connection accepted from 127.0.0.1:51687 #9 (6 connections now open)
m30999| Fri Feb 22 12:33:59.605 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765b7881c8e7453916039
m30999| Fri Feb 22 12:33:59.606 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
7749ms
>>>>>>>>>>>>>>> skipping jstests/libs
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/fsync2.js
******************************************* Test : jstests/dropdb.js ...
m30999| Fri Feb 22 12:33:59.963 [conn1] couldn't find database [jstests_dropdb] in config db
m30999| Fri Feb 22 12:33:59.964 [conn1] put [jstests_dropdb] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:33:59.965 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/jstests_dropdb.ns, filling with zeroes...
m30000| Fri Feb 22 12:33:59.965 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/jstests_dropdb.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:33:59.965 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/jstests_dropdb.0, filling with zeroes...
m30000| Fri Feb 22 12:33:59.966 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/jstests_dropdb.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:33:59.966 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/jstests_dropdb.1, filling with zeroes...
m30000| Fri Feb 22 12:33:59.966 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/jstests_dropdb.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:33:59.969 [conn6] build index jstests_dropdb.c { _id: 1 }
m30000| Fri Feb 22 12:33:59.970 [conn6] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:34:00.006 [conn1] DROP DATABASE: jstests_dropdb
m30999| Fri Feb 22 12:34:00.006 [conn1] erased database jstests_dropdb from local registry
m30999| Fri Feb 22 12:34:00.007 [conn1] DBConfig::dropDatabase: jstests_dropdb
m30999| Fri Feb 22 12:34:00.007 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:34:00-512765b8881c8e745391603a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536440007), what: "dropDatabase.start", ns: "jstests_dropdb", details: {} }
m30999| Fri Feb 22 12:34:00.008 [conn1] DBConfig::dropDatabase: jstests_dropdb dropped sharded collections: 0
m30000| Fri Feb 22 12:34:00.008 [conn3] dropDatabase jstests_dropdb starting
m30000| Fri Feb 22 12:34:00.036 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:34:00.038 [conn3] dropDatabase jstests_dropdb finished
m30999| Fri Feb 22 12:34:00.038 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:34:00-512765b8881c8e745391603b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536440038), what: "dropDatabase", ns: "jstests_dropdb", details: {} }
m30999| Fri Feb 22 12:34:00.054 [conn1] couldn't find database [jstests_dropdb] in config db
m30999| Fri Feb 22 12:34:00.056 [conn1] put [jstests_dropdb] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:34:00.056 [conn1] DROP DATABASE: jstests_dropdb
m30999| Fri Feb 22 12:34:00.056 [conn1] erased database jstests_dropdb from local registry
m30999| Fri Feb 22 12:34:00.056 [conn1] DBConfig::dropDatabase: jstests_dropdb
m30999| Fri Feb 22 12:34:00.056 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:34:00-512765b8881c8e745391603c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536440056), what: "dropDatabase.start", ns: "jstests_dropdb", details: {} }
m30999| Fri Feb 22 12:34:00.057 [conn1] DBConfig::dropDatabase: jstests_dropdb dropped sharded collections: 0
m30000| Fri Feb 22 12:34:00.057 [conn3] dropDatabase jstests_dropdb starting
m30000| Fri Feb 22 12:34:00.078 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:34:00.078 [conn3] dropDatabase jstests_dropdb finished
m30999| Fri Feb 22 12:34:00.079 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:34:00-512765b8881c8e745391603d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536440079), what: "dropDatabase", ns: "jstests_dropdb", details: {} }
126ms
******************************************* Test : jstests/type3.js ...
m30999| Fri Feb 22 12:34:00.084 [conn1] DROP: test.jstests_type3
m30001| Fri Feb 22 12:34:00.085 [conn3] CMD: drop test.jstests_type3
m30001| Fri Feb 22 12:34:00.086 [conn3] build index test.jstests_type3 { _id: 1 }
m30001| Fri Feb 22 12:34:00.087 [conn3] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:00.087 [conn3] info: creating collection test.jstests_type3 on add index
m30001| Fri Feb 22 12:34:00.087 [conn3] build index test.jstests_type3 { a: 1.0 }
m30001| Fri Feb 22 12:34:00.088 [conn3] build index done. scanned 0 total records. 0 secs
13ms
******************************************* Test : jstests/fm2.js ...
m30999| Fri Feb 22 12:34:00.105 [conn1] DROP: test.fm2
m30001| Fri Feb 22 12:34:00.105 [conn3] CMD: drop test.fm2
m30001| Fri Feb 22 12:34:00.106 [conn3] build index test.fm2 { _id: 1 }
m30001| Fri Feb 22 12:34:00.107 [conn3] build index done. scanned 0 total records. 0 secs
12ms
******************************************* Test : jstests/removea.js ...
setting random seed: 1361536440108
m30999| Fri Feb 22 12:34:00.108 [conn1] DROP: test.jstests_removea
m30001| Fri Feb 22 12:34:00.109 [conn3] CMD: drop test.jstests_removea
m30001| Fri Feb 22 12:34:00.109 [conn3] build index test.jstests_removea { _id: 1 }
m30001| Fri Feb 22 12:34:00.110 [conn3] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:34:00.110 [conn3] info: creating collection test.jstests_removea on add index m30001| Fri Feb 22 12:34:00.110 [conn3] build index test.jstests_removea { a: 1.0 } m30001| Fri Feb 22 12:34:00.110 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:01.105 [conn3] warning: log line attempted (77k) over max size(10k), printing beginning and end ... remove test.jstests_removea query: { a: { $in: [ 5565.0, 1500.0, 9583.0, 3742.0, 4414.0, 4423.0, 6525.0, 3093.0, 8521.0, 9328.0, 897.0, 365.0, 5500.0, 4865.0, 4866.0, 3320.0, 3762.0, 1916.0, 6909.0, 9064.0, 8723.0, 6074.0, 5702.0, 9254.0, 8684.0, 3940.0, 9156.0, 5209.0, 65.0, 9159.0, 5772.0, 6363.0, 6679.0, 9143.0, 509.0, 491.0, 2739.0, 8416.0, 2351.0, 9152.0, 4282.0, 9514.0, 5763.0, 6789.0, 8702.0, 7383.0, 3214.0, 6907.0, 7069.0, 8215.0, 972.0, 3792.0, 8935.0, 8645.0, 5340.0, 3183.0, 4306.0, 6266.0, 1533.0, 359.0, 2466.0, 7387.0, 2378.0, 9051.0, 2349.0, 2971.0, 9025.0, 9643.0, 4140.0, 8702.0, 859.0, 84.0, 6791.0, 99.0, 6793.0, 7952.0, 1146.0, 8642.0, 5150.0, 7107.0, 3358.0, 9813.0, 4284.0, 7372.0, 8287.0, 9025.0, 1173.0, 681.0, 7563.0, 6349.0, 132.0, 2127.0, 2008.0, 5990.0, 8861.0, 9352.0, 3100.0, 1448.0, 3998.0, 8707.0, 3166.0, 2299.0, 3372.0, 1639.0, 8994.0, 3534.0, 7150.0, 9722.0, 8552.0, 3395.0, 5046.0, 1778.0, 5498.0, 644.0, 1804.0, 1530.0, 1170.0, 7203.0, 8120.0, 9214.0, 8746.0, 7107.0, 9208.0, 5599.0, 9256.0, 7691.0, 8187.0, 1843.0, 4143.0, 3975.0, 6478.0, 3626.0, 4051.0, 2165.0, 8405.0, 3003.0, 8525.0, 3204.0, 2182.0, 6402.0, 6329.0, 2333.0, 4464.0, 2218.0, 1045.0, 5578.0, 679.0, 6431.0, 1664.0, 9105.0, 8289.0, 9690.0, 2831.0, 2394.0, 5205.0, 4298.0, 4460.0, 1148.0, 509.0, 4540.0, 5380.0, 893.0, 9705.0, 8784.0, 720.0, 7193.0, 1659.0, 6801.0, 7706.0, 3432.0, 9425.0, 734.0, 9570.0, 2085.0, 2150.0, 3160.0, 2243.0, 422.0, 241.0, 676.0, 3536.0, 6593.0, 975.0, 6451.0, 9486.0, 7004.0, 6751.0, 184.0, 7560.0, 961.0, 3168.0, 8895.0, 1397.0, 3477.0, 9107.0, 7000.0, 
6783.0, 2215.0, 6021.0, 9095.0, 3547.0, 9481.0, 8604.0, 3898.0, 3060.0, 5216.0, 4040.0, 2776.0, 6022.0, 1925.0, 2012.0, 1302.0, 239.0, 7467.0, 2658.0, 7278.0, 3063.0, 6666.0, 4666.0, 3655.0, 8182.0, 7662.0, 694.0, 9799.0, 1468.0, 3942.0, 2299.0, 9531.0, 5538.0, 8000.0, 1567.0, 9385.0, 7745.0, 7548.0, 1631.0, 5510.0, 1268.0, 231.0, 3256.0, 2674.0, 4002.0, 7767.0, 9700.0, 3787.0, 4918.0, 5464.0, 7008.0, 3389.0, 2602.0, 8048.0, 1954.0, 3546.0, 5641.0, 7725.0, 4803.0, 5578.0, 9558.0, 5441.0, 9083.0, 6073.0, 9972.0, 1255.0, 5561.0, 607.0, 6683.0, 4771.0, 2604.0, 3624.0, 8352.0, 1569.0, 108.0, 7901.0, 2488.0, 6569.0, 9201.0, 2044.0, 401.0, 262.0, 4883.0, 9441.0, 1831.0, 8754.0, 3809.0, 7131.0, 8937.0, 3558.0, 4398.0, 7999.0, 4314.0, 2062.0, 1448.0, 4757.0, 2758.0, 4623.0, 2453.0, 9136.0, 1433.0, 9222.0, 7810.0, 4732.0, 8158.0, 9610.0, 2095.0, 6941.0, 2475.0, 7890.0, 8599.0, 8423.0, 9205.0, 7471.0, 4326.0, 3526.0, 9167.0, 7328.0, 7611.0, 119.0, 6760.0, 3510.0, 5571.0, 2263.0, 9076.0, 8832.0, 3571.0, 2153.0, 6256.0, 6686.0, 3881.0, 4784.0, 7755.0, 9913.0, 5724.0, 3763.0, 4217.0, 669.0, 3900.0, 1715.0, 9643.0, 2225.0, 8165.0, 3713.0, 3534.0, 8643.0, 570.0, 7174.0, 7712.0, 3249.0, 2927.0, 6660.0, 7486.0, 1865.0, 1601.0, 7012.0, 1868.0, 1716.0, 1819.0, 3269.0, 2515.0, 1964.0, 8671.0, 801.0, 6096.0, 6793.0, 7007.0, 4084.0, 9751.0, 6999.0, 7396.0, 7582.0, 4361.0, 1008.0, 357.0, 7141.0, 5282.0, 1265.0, 6035.0, 5356.0, 5256.0, 6934.0, 1514.0, 9338.0, 9179.0, 8844.0, 2954.0, 4321.0, 5401.0, 7615.0, 4742.0, 1550.0, 3271.0, 912.0, 2836.0, 3873.0, 1589.0, 747.0, 4508.0, 7500.0, 7666.0, 6792.0, 1644.0, 3339.0, 1685.0, 6609.0, 9821.0, 6370.0, 6027.0, 8763.0, 326.0, 6475.0, 4279.0, 9739.0, 7313.0, 9176.0, 9591.0, 608.0, 7284.0, 9708.0, 7077.0, 4370.0, 3080.0, 8877.0, 9947.0, 9925.0, 9577.0, 4125.0, 4056.0, 3601.0, .......... 
, 4166.0, 9240.0, 774.0, 5580.0, 3340.0, 408.0, 1134.0, 6825.0, 1613.0, 4606.0, 8077.0, 9322.0, 240.0, 2116.0, 9099.0, 168.0, 3130.0, 7345.0, 9651.0, 3301.0, 6795.0, 5966.0, 1746.0, 5802.0, 3478.0, 9064.0, 6842.0, 438.0, 3400.0, 1486.0, 6440.0, 9114.0, 7073.0, 6575.0, 8839.0, 6937.0, 3599.0, 4480.0, 5476.0, 1566.0, 5912.0, 6228.0, 5296.0, 7551.0, 8887.0, 743.0, 581.0, 337.0, 1254.0, 7001.0, 2694.0, 9615.0, 7267.0, 8518.0, 1148.0, 6730.0, 9047.0, 2333.0, 4464.0, 5810.0, 5555.0, 9710.0, 6235.0, 3315.0, 9113.0, 710.0, 4850.0, 5693.0, 5433.0, 9847.0, 302.0, 9828.0, 8072.0, 5002.0, 6526.0, 5968.0, 3256.0, 6788.0, 2705.0, 7821.0, 6025.0, 548.0, 1967.0, 9509.0, 3605.0, 8185.0, 939.0, 4060.0, 6983.0, 1993.0, 8562.0, 3134.0, 6590.0, 7696.0, 7632.0, 2671.0, 3047.0, 8872.0, 7485.0, 4131.0, 8909.0, 9945.0, 1281.0, 6043.0, 2337.0, 9950.0, 6307.0, 5525.0, 8649.0, 4233.0, 749.0, 4473.0, 672.0, 6303.0, 5368.0, 108.0, 3844.0, 6995.0, 3509.0, 4565.0, 6999.0, 3156.0, 7894.0, 3210.0, 4775.0, 3018.0, 7630.0, 6379.0, 2574.0, 471.0, 3861.0, 4221.0, 29.0, 5397.0, 7523.0, 9330.0, 3892.0, 2279.0, 7613.0, 6865.0, 6303.0, 4248.0, 3434.0, 6834.0, 9363.0, 4771.0, 9233.0, 8228.0, 1955.0, 4166.0, 4685.0, 9709.0, 9747.0, 3126.0, 9654.0, 6523.0, 4316.0, 440.0, 6665.0, 8324.0, 2331.0, 7106.0, 9532.0, 7620.0, 735.0, 4013.0, 1574.0, 3494.0, 8856.0, 5387.0, 1860.0, 318.0, 3392.0, 5189.0, 2914.0, 6131.0, 7203.0, 8645.0, 8334.0, 368.0, 7850.0, 6101.0, 3733.0, 5642.0, 6817.0, 1392.0, 9474.0, 3245.0, 5357.0, 1913.0, 8613.0, 7078.0, 7890.0, 2565.0, 8490.0, 5674.0, 8905.0, 790.0, 9419.0, 7030.0, 4288.0, 4856.0, 8896.0, 5901.0, 3031.0, 2602.0, 4783.0, 8280.0, 235.0, 2806.0, 5881.0, 2719.0, 7374.0, 6390.0, 3355.0, 3201.0, 736.0, 9632.0, 1425.0, 3240.0, 3980.0, 6574.0, 786.0, 5899.0, 5764.0, 2820.0, 3165.0, 9416.0, 8355.0, 7912.0, 478.0, 3118.0, 8184.0, 8458.0, 7013.0, 3319.0, 4269.0, 9218.0, 8072.0, 3885.0, 5438.0, 4321.0, 3524.0, 1157.0, 2313.0, 5814.0, 9691.0, 7911.0, 4967.0, 1710.0, 8013.0, 6053.0, 2204.0, 
749.0, 3933.0, 2734.0, 3400.0, 6035.0, 4422.0, 9566.0, 8838.0, 4754.0, 7611.0, 9627.0, 9477.0, 6151.0, 2934.0, 1962.0, 2093.0, 4956.0, 5249.0, 8235.0, 729.0, 2447.0, 1347.0, 1245.0, 389.0, 2338.0, 2319.0, 9983.0, 3913.0, 3679.0, 184.0, 922.0, 7986.0, 7952.0, 2240.0, 219.0, 9186.0, 2557.0, 1566.0, 6530.0, 6046.0, 7874.0, 7113.0, 9114.0, 8036.0, 81.0, 1825.0, 912.0, 7430.0, 6923.0, 2218.0, 1310.0, 4965.0, 8625.0, 7873.0, 8236.0, 5260.0, 1117.0, 4255.0, 4319.0, 2914.0, 3778.0, 7320.0, 6770.0, 5219.0, 8883.0, 3334.0, 5674.0, 2445.0, 4924.0, 2339.0, 406.0, 1606.0, 1021.0, 8720.0, 8821.0, 4671.0, 4581.0, 7286.0, 8935.0, 7142.0, 9415.0, 8326.0, 7773.0, 7937.0, 6487.0, 63.0, 7067.0, 6526.0, 4491.0, 1950.0, 506.0, 8245.0, 2435.0, 7240.0, 2207.0, 3735.0, 1430.0, 8274.0, 1109.0, 3437.0, 7860.0, 2920.0, 975.0, 3085.0, 5129.0, 5602.0, 680.0, 1127.0, 6173.0, 1355.0, 4486.0, 4029.0, 3912.0, 6350.0, 6979.0, 375.0, 1849.0, 3314.0, 2366.0, 9719.0, 1989.0, 5296.0, 2818.0, 7121.0, 2264.0, 6203.0, 5531.0, 9303.0, 669.0, 1562.0, 3220.0, 3363.0, 8276.0, 7926.0, 7708.0, 3986.0, 8321.0, 4367.0, 1311.0, 87.0, 3565.0, 4509.0, 5707.0, 6830.0, 9084.0, 2510.0, 2967.0, 3947.0, 2719.0, 1463.0, 8447.0, 6313.0, 328.0, 7618.0, 218.0, 7695.0, 8067.0, 8902.0, 4730.0, 3630.0, 8864.0, 1134.0, 3393.0, 4358.0, 5271.0, 410.0, 9762.0, 3750.0, 5908.0, 5064.0 ] }, $atomic: true } ndeleted:6307 keyUpdates:0 locks(micros) w:271985 271ms m30999| Fri Feb 22 12:34:01.106 [conn1] DROP: test.jstests_removea m30001| Fri Feb 22 12:34:01.106 [conn3] CMD: drop test.jstests_removea m30001| Fri Feb 22 12:34:01.115 [conn3] build index test.jstests_removea { _id: 1 } m30001| Fri Feb 22 12:34:01.116 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:01.116 [conn3] info: creating collection test.jstests_removea on add index m30001| Fri Feb 22 12:34:01.116 [conn3] build index test.jstests_removea { a: 1.0 } m30001| Fri Feb 22 12:34:01.116 [conn3] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:02.005 [conn3] warning: log line attempted (77k) over max size(10k), printing beginning and end ... remove test.jstests_removea query: { a: { $in: [ 2018.0, 5172.0, 7652.0, 5661.0, 426.0, 219.0, 789.0, 6463.0, 4466.0, 1258.0, 9287.0, 6781.0, 9311.0, 3717.0, 3411.0, 5684.0, 6356.0, 9055.0, 6835.0, 8470.0, 7476.0, 4359.0, 5464.0, 8365.0, 7698.0, 9495.0, 132.0, 1865.0, 5982.0, 7755.0, 386.0, 1174.0, 8796.0, 939.0, 1520.0, 2163.0, 6163.0, 4809.0, 2070.0, 1300.0, 4892.0, 5850.0, 3935.0, 9980.0, 8646.0, 5278.0, 2330.0, 9695.0, 9569.0, 1228.0, 7745.0, 6477.0, 1129.0, 5073.0, 7109.0, 8066.0, 6895.0, 2787.0, 6334.0, 2220.0, 4902.0, 7318.0, 1440.0, 9620.0, 2325.0, 2885.0, 437.0, 9802.0, 121.0, 7151.0, 3159.0, 3552.0, 2347.0, 2203.0, 1230.0, 207.0, 9830.0, 280.0, 5363.0, 3291.0, 2882.0, 5179.0, 2161.0, 2930.0, 2739.0, 5289.0, 7623.0, 8487.0, 6431.0, 9181.0, 9537.0, 9841.0, 4742.0, 9691.0, 5944.0, 761.0, 7975.0, 7442.0, 1864.0, 2806.0, 9192.0, 9732.0, 1476.0, 3189.0, 9622.0, 5735.0, 7929.0, 5049.0, 5927.0, 3947.0, 7977.0, 3550.0, 4007.0, 2810.0, 3635.0, 6275.0, 9041.0, 466.0, 8020.0, 1316.0, 6439.0, 4565.0, 7591.0, 1533.0, 3408.0, 8066.0, 7957.0, 8393.0, 5801.0, 1981.0, 5446.0, 3836.0, 9955.0, 977.0, 1629.0, 3616.0, 4195.0, 2801.0, 2973.0, 527.0, 9687.0, 7437.0, 2826.0, 988.0, 5345.0, 6959.0, 7908.0, 3392.0, 6458.0, 1996.0, 6501.0, 1676.0, 1743.0, 5761.0, 533.0, 2537.0, 8497.0, 1571.0, 8727.0, 9750.0, 547.0, 4810.0, 3016.0, 3996.0, 8679.0, 5250.0, 5410.0, 1675.0, 8231.0, 693.0, 7493.0, 1102.0, 3550.0, 7849.0, 9751.0, 9556.0, 3983.0, 1405.0, 2906.0, 9603.0, 1102.0, 7212.0, 8453.0, 8130.0, 1850.0, 531.0, 586.0, 6533.0, 7291.0, 5272.0, 6689.0, 5502.0, 1645.0, 5175.0, 8596.0, 2823.0, 1317.0, 2851.0, 1803.0, 7648.0, 8584.0, 7641.0, 4808.0, 9672.0, 9643.0, 8215.0, 5780.0, 6646.0, 1695.0, 860.0, 8744.0, 8665.0, 8770.0, 9382.0, 6454.0, 2677.0, 959.0, 7573.0, 2163.0, 5637.0, 6760.0, 173.0, 7465.0, 9761.0, 3212.0, 3259.0, 8393.0, 8293.0, 3511.0, 17.0, 
6976.0, 5253.0, 6794.0, 2812.0, 421.0, 7573.0, 9778.0, 7510.0, 5131.0, 3084.0, 4944.0, 978.0, 7725.0, 2772.0, 4948.0, 7713.0, 264.0, 754.0, 7952.0, 6525.0, 1862.0, 404.0, 3563.0, 3213.0, 3866.0, 70.0, 4056.0, 182.0, 805.0, 6820.0, 592.0, 8924.0, 4287.0, 1646.0, 9089.0, 6313.0, 2027.0, 4578.0, 5459.0, 641.0, 9206.0, 3136.0, 4881.0, 8346.0, 4338.0, 2557.0, 4810.0, 7349.0, 6837.0, 1235.0, 6403.0, 7959.0, 8470.0, 9824.0, 4043.0, 7273.0, 5888.0, 7415.0, 7669.0, 8003.0, 6414.0, 3255.0, 7576.0, 7684.0, 4280.0, 7420.0, 6385.0, 9184.0, 9506.0, 8897.0, 7873.0, 8458.0, 6598.0, 4505.0, 7341.0, 493.0, 5261.0, 7089.0, 3217.0, 9372.0, 308.0, 6434.0, 4570.0, 3388.0, 2961.0, 1328.0, 4235.0, 1167.0, 718.0, 6217.0, 2234.0, 9717.0, 7992.0, 886.0, 9168.0, 349.0, 4687.0, 6605.0, 4284.0, 3406.0, 4925.0, 7178.0, 7897.0, 7743.0, 3085.0, 8330.0, 2852.0, 5883.0, 9358.0, 3876.0, 4717.0, 800.0, 2005.0, 2098.0, 5399.0, 5259.0, 8497.0, 782.0, 9858.0, 9649.0, 2728.0, 4335.0, 618.0, 9258.0, 496.0, 1440.0, 9760.0, 6905.0, 427.0, 6671.0, 3078.0, 1897.0, 2540.0, 4506.0, 6095.0, 1220.0, 1632.0, 9189.0, 6623.0, 3917.0, 7681.0, 8863.0, 3546.0, 4590.0, 441.0, 7416.0, 6526.0, 6958.0, 427.0, 2490.0, 6319.0, 3410.0, 3478.0, 3064.0, 378.0, 1495.0, 1074.0, 5080.0, 3431.0, 3278.0, 3608.0, 8522.0, 7743.0, 8512.0, 8794.0, 4404.0, 2720.0, 7548.0, 2395.0, 5098.0, 8557.0, 4934.0, 3471.0, 6431.0, 3626.0, 9637.0, 1315.0, 5012.0, 271.0, 2810.0, 6287.0, 9194.0, 8715.0, 9079.0, 1715.0, 7140.0, 3757.0, 2237.0, 9057.0, 408.0, 3948.0, 1331.0, 3523.0, 6502.0, 7946.0, 8042.0, 2316.0, .......... 
0, 8625.0, 3884.0, 3203.0, 1550.0, 1968.0, 7110.0, 2888.0, 8435.0, 1093.0, 2673.0, 4570.0, 3452.0, 3158.0, 9212.0, 789.0, 9369.0, 379.0, 3319.0, 1276.0, 4735.0, 4577.0, 9092.0, 5539.0, 4482.0, 8674.0, 3462.0, 3483.0, 5507.0, 1043.0, 4459.0, 7496.0, 1931.0, 3683.0, 4620.0, 4432.0, 9702.0, 129.0, 6720.0, 938.0, 5246.0, 4051.0, 4018.0, 558.0, 7853.0, 4471.0, 2195.0, 6053.0, 473.0, 8542.0, 5184.0, 5365.0, 8620.0, 7821.0, 2074.0, 4685.0, 2074.0, 8974.0, 7276.0, 7035.0, 9057.0, 4855.0, 1131.0, 9824.0, 7578.0, 6892.0, 3171.0, 9800.0, 3640.0, 4722.0, 5184.0, 8848.0, 978.0, 4004.0, 6304.0, 9058.0, 2039.0, 8356.0, 7303.0, 515.0, 4876.0, 6354.0, 6251.0, 238.0, 5643.0, 9523.0, 2776.0, 5708.0, 1389.0, 2259.0, 3063.0, 1422.0, 2829.0, 9631.0, 2817.0, 1921.0, 1788.0, 3982.0, 357.0, 9707.0, 2991.0, 6283.0, 7029.0, 1510.0, 4905.0, 3166.0, 7146.0, 1326.0, 9309.0, 5582.0, 4610.0, 3677.0, 4277.0, 13.0, 8431.0, 5513.0, 275.0, 8185.0, 4319.0, 216.0, 4044.0, 6212.0, 4534.0, 2120.0, 9967.0, 9109.0, 3187.0, 5006.0, 3601.0, 1619.0, 9181.0, 2988.0, 7069.0, 1282.0, 3973.0, 7251.0, 991.0, 4689.0, 6344.0, 7058.0, 2647.0, 2856.0, 1380.0, 8474.0, 5059.0, 6655.0, 5823.0, 9243.0, 6674.0, 8250.0, 7015.0, 3256.0, 6347.0, 9453.0, 7583.0, 3043.0, 3299.0, 2964.0, 3096.0, 3734.0, 6744.0, 7409.0, 4838.0, 6915.0, 4875.0, 2131.0, 4919.0, 7836.0, 7884.0, 2664.0, 5890.0, 4385.0, 4728.0, 2821.0, 2064.0, 1281.0, 8294.0, 4354.0, 4715.0, 7920.0, 8684.0, 94.0, 8359.0, 3312.0, 9291.0, 1542.0, 295.0, 6542.0, 5343.0, 6810.0, 589.0, 6198.0, 9635.0, 9895.0, 4707.0, 7199.0, 5098.0, 7177.0, 5958.0, 8468.0, 8923.0, 2115.0, 9960.0, 9878.0, 7912.0, 4060.0, 8300.0, 1910.0, 9741.0, 2121.0, 3579.0, 6473.0, 9840.0, 36.0, 3038.0, 2305.0, 8557.0, 2972.0, 2939.0, 3406.0, 8304.0, 2196.0, 899.0, 3930.0, 5375.0, 8555.0, 8357.0, 7988.0, 6115.0, 703.0, 370.0, 7788.0, 2133.0, 9009.0, 4920.0, 8543.0, 4247.0, 1091.0, 8913.0, 3611.0, 9847.0, 9905.0, 3078.0, 8269.0, 3360.0, 327.0, 2924.0, 5595.0, 4513.0, 5179.0, 7976.0, 2860.0, 4073.0, 
6611.0, 6444.0, 1896.0, 1756.0, 3809.0, 7542.0, 1868.0, 5990.0, 2932.0, 8620.0, 7872.0, 225.0, 245.0, 9320.0, 3826.0, 5470.0, 4371.0, 9917.0, 3070.0, 3746.0, 8591.0, 6060.0, 5113.0, 2153.0, 9159.0, 7077.0, 1977.0, 2216.0, 531.0, 50.0, 2568.0, 4230.0, 3479.0, 6832.0, 8505.0, 1255.0, 7010.0, 6206.0, 9861.0, 2474.0, 23.0, 8363.0, 4236.0, 600.0, 3665.0, 3902.0, 7610.0, 3003.0, 293.0, 4513.0, 413.0, 1261.0, 5.0, 7563.0, 7245.0, 8813.0, 660.0, 1745.0, 3686.0, 1410.0, 2337.0, 5996.0, 8631.0, 43.0, 2164.0, 8858.0, 5598.0, 1037.0, 451.0, 479.0, 1428.0, 3006.0, 4075.0, 4440.0, 5832.0, 5346.0, 8045.0, 1592.0, 213.0, 7958.0, 1191.0, 3088.0, 9201.0, 6900.0, 6907.0, 4810.0, 3346.0, 5623.0, 4896.0, 363.0, 6392.0, 6525.0, 4844.0, 4834.0, 5588.0, 2465.0, 7966.0, 6502.0, 8456.0, 2957.0, 7363.0, 8678.0, 4466.0, 4713.0, 3121.0, 8880.0, 5050.0, 1235.0, 3099.0, 7505.0, 393.0, 1134.0, 5331.0, 3219.0, 6434.0, 9860.0, 2995.0, 7214.0, 531.0, 4555.0, 7870.0, 3553.0, 9205.0, 6681.0, 3233.0, 7773.0, 6415.0, 2139.0, 2122.0, 935.0, 5798.0, 3553.0, 1912.0, 1310.0, 5312.0, 4408.0, 1134.0, 898.0, 7720.0, 9733.0, 9479.0, 8944.0, 5345.0, 3014.0, 6922.0, 9668.0, 2548.0, 4025.0, 2907.0, 6372.0, 4356.0, 7264.0, 1524.0, 9125.0, 4671.0, 5683.0, 3626.0, 8209.0, 2525.0, 9395.0, 1808.0, 513.0, 9779.0, 4056.0, 9204.0, 1067.0, 8738.0, 8594.0, 8023.0, 6898.0, 3104.0 ] }, $atomic: true } ndeleted:6309 keyUpdates:0 locks(micros) w:244076 244ms 1899ms ******************************************* Test : jstests/sorta.js ... m30999| Fri Feb 22 12:34:02.007 [conn1] DROP: test.jstests_sorta m30001| Fri Feb 22 12:34:02.007 [conn3] CMD: drop test.jstests_sorta m30001| Fri Feb 22 12:34:02.008 [conn3] build index test.jstests_sorta { _id: 1 } m30001| Fri Feb 22 12:34:02.018 [conn3] build index done. scanned 0 total records. 0.009 secs m30001| Fri Feb 22 12:34:02.019 [conn3] build index test.jstests_sorta { a: 1.0 } m30001| Fri Feb 22 12:34:02.020 [conn3] build index done. scanned 9 total records. 
0 secs 15ms ******************************************* Test : jstests/eval0.js ... m30001| Fri Feb 22 12:34:02.024 [conn3] build index test.system.js { _id: 1 } m30001| Fri Feb 22 12:34:02.025 [conn3] build index done. scanned 0 total records. 0 secs 4ms ******************************************* Test : jstests/explain6.js ... m30999| Fri Feb 22 12:34:02.026 [conn1] DROP: test.jstests_explain6 m30001| Fri Feb 22 12:34:02.027 [conn3] CMD: drop test.jstests_explain6 m30001| Fri Feb 22 12:34:02.027 [conn3] build index test.jstests_explain6 { _id: 1 } m30001| Fri Feb 22 12:34:02.028 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:02.028 [conn3] build index test.jstests_explain6 { a: 1.0 } m30001| Fri Feb 22 12:34:02.029 [conn3] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:34:02.030 [conn4] CMD: dropIndexes test.jstests_explain6 6ms ******************************************* Test : jstests/sort9.js ... m30999| Fri Feb 22 12:34:02.032 [conn1] DROP: test.jstests_sort9 m30001| Fri Feb 22 12:34:02.033 [conn3] CMD: drop test.jstests_sort9 m30001| Fri Feb 22 12:34:02.033 [conn3] build index test.jstests_sort9 { _id: 1 } m30001| Fri Feb 22 12:34:02.034 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:02.035 [conn1] DROP: test.jstests_sort9 m30001| Fri Feb 22 12:34:02.035 [conn3] CMD: drop test.jstests_sort9 m30001| Fri Feb 22 12:34:02.037 [conn3] build index test.jstests_sort9 { _id: 1 } m30001| Fri Feb 22 12:34:02.037 [conn3] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:02.039 [conn1] DROP: test.jstests_sort9 m30001| Fri Feb 22 12:34:02.039 [conn3] CMD: drop test.jstests_sort9 m30001| Fri Feb 22 12:34:02.040 [conn3] build index test.jstests_sort9 { _id: 1 } m30001| Fri Feb 22 12:34:02.040 [conn3] build index done. scanned 0 total records. 0 secs 11ms ******************************************* Test : jstests/error2.js ... 
m30999| Fri Feb 22 12:34:02.043 [conn1] DROP: test.jstests_error2 m30001| Fri Feb 22 12:34:02.043 [conn3] CMD: drop test.jstests_error2 m30001| Fri Feb 22 12:34:02.043 [conn3] build index test.jstests_error2 { _id: 1 } m30001| Fri Feb 22 12:34:02.044 [conn3] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:02.072 [conn3] JavaScript execution failed: ReferenceError: a is not defined near '){ return a() }' m30001| Fri Feb 22 12:34:02.073 [conn3] assertion 16722 JavaScript execution failed: ReferenceError: a is not defined near '){ return a() }' ns:test.jstests_error2 query:{ $where: function (){ return a() } } m30001| Fri Feb 22 12:34:02.073 [conn3] problem detected during query over test.jstests_error2 : { $err: "JavaScript execution failed: ReferenceError: a is not defined near '){ return a() }' ", code: 16722 } m30999| Fri Feb 22 12:34:02.074 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16722 JavaScript execution failed: ReferenceError: a is not defined near '){ return a() }' m30001| Fri Feb 22 12:34:02.074 [conn3] end connection 127.0.0.1:34922 (5 connections now open) m30001| Fri Feb 22 12:34:02.124 [conn6] JavaScript execution failed: ReferenceError: a is not defined near '{ return a(); }' 85ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_group.js ******************************************* Test : jstests/fts_phrase.js ... 
m30000| Fri Feb 22 12:34:02.130 [initandlisten] connection accepted from 127.0.0.1:64927 #12 (9 connections now open) m30001| Fri Feb 22 12:34:02.130 [initandlisten] connection accepted from 127.0.0.1:52483 #10 (6 connections now open) m30999| Fri Feb 22 12:34:02.132 [conn1] DROP: test.text_phrase m30001| Fri Feb 22 12:34:02.132 [conn6] CMD: drop test.text_phrase m30001| Fri Feb 22 12:34:02.134 [conn6] build index test.text_phrase { _id: 1 } m30001| Fri Feb 22 12:34:02.135 [conn6] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:34:02.135 [conn6] build index test.text_phrase { _fts: "text", _ftsx: 1 } m30001| Fri Feb 22 12:34:02.136 [conn6] build index done. scanned 3 total records. 0.001 secs 14ms ******************************************* Test : jstests/ref2.js ... m30999| Fri Feb 22 12:34:02.146 [conn1] DROP: test.ref2 m30001| Fri Feb 22 12:34:02.146 [conn6] CMD: drop test.ref2 m30001| Fri Feb 22 12:34:02.147 [conn6] build index test.ref2 { _id: 1 } m30001| Fri Feb 22 12:34:02.148 [conn6] build index done. scanned 0 total records. 0 secs 8ms ******************************************* Test : jstests/where2.js ... m30999| Fri Feb 22 12:34:02.150 [conn1] DROP: test.where2 m30001| Fri Feb 22 12:34:02.151 [conn6] CMD: drop test.where2 m30001| Fri Feb 22 12:34:02.152 [conn6] build index test.where2 { _id: 1 } m30001| Fri Feb 22 12:34:02.152 [conn6] build index done. scanned 0 total records. 0 secs 34ms ******************************************* Test : jstests/mr_outreduce2.js ... m30999| Fri Feb 22 12:34:02.185 [conn1] DROP: test.mr_outreduce2 m30001| Fri Feb 22 12:34:02.186 [conn6] CMD: drop test.mr_outreduce2 m30999| Fri Feb 22 12:34:02.186 [conn1] DROP: test.mr_outreduce2_out m30001| Fri Feb 22 12:34:02.186 [conn6] CMD: drop test.mr_outreduce2_out m30001| Fri Feb 22 12:34:02.187 [conn6] build index test.mr_outreduce2 { _id: 1 } m30001| Fri Feb 22 12:34:02.188 [conn6] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:02.219 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1 m30001| Fri Feb 22 12:34:02.219 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1_inc m30001| Fri Feb 22 12:34:02.219 [conn6] build index test.tmp.mr.mr_outreduce2_1_inc { 0: 1 } m30001| Fri Feb 22 12:34:02.220 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:02.220 [conn6] build index test.tmp.mr.mr_outreduce2_1 { _id: 1 } m30001| Fri Feb 22 12:34:02.221 [conn6] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:34:02.223 [conn6] CMD: drop test.mr_outreduce2_out m30001| Fri Feb 22 12:34:02.226 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1 m30001| Fri Feb 22 12:34:02.226 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1 m30001| Fri Feb 22 12:34:02.226 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1_inc m30001| Fri Feb 22 12:34:02.228 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1 m30001| Fri Feb 22 12:34:02.228 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_1_inc m30001| Fri Feb 22 12:34:02.230 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_2 m30001| Fri Feb 22 12:34:02.230 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_2_inc m30001| Fri Feb 22 12:34:02.231 [conn6] build index test.tmp.mr.mr_outreduce2_2_inc { 0: 1 } m30001| Fri Feb 22 12:34:02.231 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:02.231 [conn6] build index test.tmp.mr.mr_outreduce2_2 { _id: 1 } m30001| Fri Feb 22 12:34:02.232 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:02.234 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_2 m30001| Fri Feb 22 12:34:02.235 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_2_inc m30001| Fri Feb 22 12:34:02.236 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_2 m30001| Fri Feb 22 12:34:02.236 [conn6] CMD: drop test.tmp.mr.mr_outreduce2_2_inc 53ms ******************************************* Test : jstests/array4.js ... 
m30999| Fri Feb 22 12:34:02.242 [conn1] DROP: test.array4 m30001| Fri Feb 22 12:34:02.243 [conn6] CMD: drop test.array4 m30001| Fri Feb 22 12:34:02.244 [conn6] build index test.array4 { _id: 1 } m30001| Fri Feb 22 12:34:02.244 [conn6] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:02.246 [conn1] DROP: test.array4 m30001| Fri Feb 22 12:34:02.246 [conn6] CMD: drop test.array4 m30001| Fri Feb 22 12:34:02.247 [conn6] build index test.array4 { _id: 1 } m30001| Fri Feb 22 12:34:02.247 [conn6] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:02.248 [conn1] DROP: test.array4 m30001| Fri Feb 22 12:34:02.248 [conn6] CMD: drop test.array4 m30001| Fri Feb 22 12:34:02.250 [conn6] build index test.array4 { _id: 1 } m30001| Fri Feb 22 12:34:02.250 [conn6] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:02.251 [conn1] DROP: test.array4 m30001| Fri Feb 22 12:34:02.251 [conn6] CMD: drop test.array4 m30001| Fri Feb 22 12:34:02.253 [conn6] build index test.array4 { _id: 1 } m30001| Fri Feb 22 12:34:02.253 [conn6] build index done. scanned 0 total records. 0 secs 15ms ******************************************* Test : jstests/max_message_size.js ... m30999| Fri Feb 22 12:34:02.260 [conn1] DROP: test.max_message_size m30001| Fri Feb 22 12:34:02.260 [conn6] CMD: drop test.max_message_size m30001| Fri Feb 22 12:34:02.927 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.4, filling with zeroes... m30001| Fri Feb 22 12:34:02.927 [conn6] build index test.max_message_size { _id: 1 } m30001| Fri Feb 22 12:34:02.927 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.4, size: 1024MB, took 0 secs m30001| Fri Feb 22 12:34:02.928 [conn6] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:03.139 [conn6] insert test.max_message_size ninserted:1 keyUpdates:0 locks(micros) w:130642 130ms m30999| Fri Feb 22 12:34:03.267 [conn1] DROP: test.max_message_size m30001| Fri Feb 22 12:34:03.267 [conn6] CMD: drop test.max_message_size m30001| Fri Feb 22 12:34:03.534 [conn6] build index test.max_message_size { _id: 1 } m30001| Fri Feb 22 12:34:03.534 [conn6] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:03.611 [conn1] DROP: test.max_message_size m30001| Fri Feb 22 12:34:03.611 [conn6] CMD: drop test.max_message_size m30001| Fri Feb 22 12:34:03.862 [conn6] build index test.max_message_size { _id: 1 } m30001| Fri Feb 22 12:34:03.863 [conn6] build index done. scanned 0 total records. 0 secs 1811ms ******************************************* Test : jstests/getlog2.js ... m30999| Fri Feb 22 12:34:04.066 [conn1] DROP: test.getLogTest2 m30001| Fri Feb 22 12:34:04.067 [conn6] CMD: drop test.getLogTest2 3ms ******************************************* Test : jstests/push.js ... m30999| Fri Feb 22 12:34:04.068 [conn1] DROP: test.push m30001| Fri Feb 22 12:34:04.069 [conn6] CMD: drop test.push m30001| Fri Feb 22 12:34:04.070 [conn6] build index test.push { _id: 1 } m30001| Fri Feb 22 12:34:04.072 [conn6] build index done. scanned 0 total records. 0.001 secs 11ms ******************************************* Test : jstests/explain4.js ... m30999| Fri Feb 22 12:34:04.080 [conn1] DROP: test.jstests_explain4 m30001| Fri Feb 22 12:34:04.081 [conn6] CMD: drop test.jstests_explain4 m30001| Fri Feb 22 12:34:04.083 [conn6] build index test.jstests_explain4 { _id: 1 } m30001| Fri Feb 22 12:34:04.085 [conn6] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:34:04.104 [conn10] end connection 127.0.0.1:52483 (5 connections now open) m30000| Fri Feb 22 12:34:04.104 [conn12] end connection 127.0.0.1:64927 (8 connections now open) 32ms ******************************************* Test : jstests/eval2.js ... 
m30999| Fri Feb 22 12:34:04.112 [conn1] DROP: test.test m30001| Fri Feb 22 12:34:04.112 [conn6] CMD: drop test.test m30001| Fri Feb 22 12:34:04.113 [conn6] build index test.test { _id: 1 } m30001| Fri Feb 22 12:34:04.113 [conn6] build index done. scanned 0 total records. 0 secs 57ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/shellkillop.js ******************************************* Test : jstests/mr_merge.js ... m30999| Fri Feb 22 12:34:04.173 [conn1] DROP: test.mr_merge m30001| Fri Feb 22 12:34:04.173 [conn6] CMD: drop test.mr_merge m30001| Fri Feb 22 12:34:04.173 [conn6] build index test.mr_merge { _id: 1 } m30999| Fri Feb 22 12:34:04.174 [conn1] DROP: test.mr_merge_out m30001| Fri Feb 22 12:34:04.175 [conn6] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:34:04.175 [conn6] CMD: drop test.mr_merge_out m30001| Fri Feb 22 12:34:04.176 [conn6] CMD: drop test.tmp.mr.mr_merge_3 m30001| Fri Feb 22 12:34:04.177 [conn6] CMD: drop test.tmp.mr.mr_merge_3_inc m30001| Fri Feb 22 12:34:04.177 [conn6] build index test.tmp.mr.mr_merge_3_inc { 0: 1 } m30001| Fri Feb 22 12:34:04.178 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.178 [conn6] build index test.tmp.mr.mr_merge_3 { _id: 1 } m30001| Fri Feb 22 12:34:04.179 [conn6] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:04.181 [conn6] CMD: drop test.mr_merge_out m30001| Fri Feb 22 12:34:04.183 [conn6] CMD: drop test.tmp.mr.mr_merge_3 m30001| Fri Feb 22 12:34:04.183 [conn6] CMD: drop test.tmp.mr.mr_merge_3 m30001| Fri Feb 22 12:34:04.183 [conn6] CMD: drop test.tmp.mr.mr_merge_3_inc m30001| Fri Feb 22 12:34:04.185 [conn6] CMD: drop test.tmp.mr.mr_merge_3 m30001| Fri Feb 22 12:34:04.185 [conn6] CMD: drop test.tmp.mr.mr_merge_3_inc m30001| Fri Feb 22 12:34:04.187 [conn6] CMD: drop test.tmp.mr.mr_merge_4 m30001| Fri Feb 22 12:34:04.187 [conn6] CMD: drop test.tmp.mr.mr_merge_4_inc m30001| Fri Feb 22 12:34:04.187 [conn6] build index test.tmp.mr.mr_merge_4_inc { 0: 1 } m30001| Fri Feb 22 12:34:04.188 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.188 [conn6] build index test.tmp.mr.mr_merge_4 { _id: 1 } m30001| Fri Feb 22 12:34:04.188 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.190 [conn6] CMD: drop test.mr_merge_out m30001| Fri Feb 22 12:34:04.194 [conn6] CMD: drop test.tmp.mr.mr_merge_4 m30001| Fri Feb 22 12:34:04.194 [conn6] CMD: drop test.tmp.mr.mr_merge_4 m30001| Fri Feb 22 12:34:04.194 [conn6] CMD: drop test.tmp.mr.mr_merge_4_inc m30001| Fri Feb 22 12:34:04.195 [conn6] CMD: drop test.tmp.mr.mr_merge_4 m30001| Fri Feb 22 12:34:04.195 [conn6] CMD: drop test.tmp.mr.mr_merge_4_inc m30001| Fri Feb 22 12:34:04.197 [conn6] CMD: drop test.tmp.mr.mr_merge_5 m30001| Fri Feb 22 12:34:04.197 [conn6] CMD: drop test.tmp.mr.mr_merge_5_inc m30001| Fri Feb 22 12:34:04.197 [conn6] build index test.tmp.mr.mr_merge_5_inc { 0: 1 } m30001| Fri Feb 22 12:34:04.197 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.198 [conn6] build index test.tmp.mr.mr_merge_5 { _id: 1 } m30001| Fri Feb 22 12:34:04.198 [conn6] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:04.200 [conn6] CMD: drop test.tmp.mr.mr_merge_5 m30001| Fri Feb 22 12:34:04.201 [conn6] CMD: drop test.tmp.mr.mr_merge_5 m30001| Fri Feb 22 12:34:04.201 [conn6] CMD: drop test.tmp.mr.mr_merge_5_inc m30001| Fri Feb 22 12:34:04.202 [conn6] CMD: drop test.tmp.mr.mr_merge_5 m30001| Fri Feb 22 12:34:04.202 [conn6] CMD: drop test.tmp.mr.mr_merge_5_inc m30001| Fri Feb 22 12:34:04.204 [conn6] CMD: drop test.tmp.mr.mr_merge_6 m30001| Fri Feb 22 12:34:04.204 [conn6] CMD: drop test.tmp.mr.mr_merge_6_inc m30001| Fri Feb 22 12:34:04.205 [conn6] build index test.tmp.mr.mr_merge_6_inc { 0: 1 } m30001| Fri Feb 22 12:34:04.205 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.205 [conn6] build index test.tmp.mr.mr_merge_6 { _id: 1 } m30001| Fri Feb 22 12:34:04.206 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.208 [conn6] CMD: drop test.tmp.mr.mr_merge_6 m30001| Fri Feb 22 12:34:04.209 [conn6] CMD: drop test.tmp.mr.mr_merge_6 m30001| Fri Feb 22 12:34:04.209 [conn6] CMD: drop test.tmp.mr.mr_merge_6_inc m30001| Fri Feb 22 12:34:04.210 [conn6] CMD: drop test.tmp.mr.mr_merge_6 m30001| Fri Feb 22 12:34:04.210 [conn6] CMD: drop test.tmp.mr.mr_merge_6_inc 43ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_uniqueDocs.js ******************************************* Test : jstests/pull2.js ... m30999| Fri Feb 22 12:34:04.212 [conn1] DROP: test.pull2 m30001| Fri Feb 22 12:34:04.212 [conn6] CMD: drop test.pull2 m30001| Fri Feb 22 12:34:04.212 [conn6] build index test.pull2 { _id: 1 } m30001| Fri Feb 22 12:34:04.213 [conn6] build index done. scanned 0 total records. 0 secs 5ms ******************************************* Test : jstests/distinct_speed1.js ... 
m30999| Fri Feb 22 12:34:04.217 [conn1] DROP: test.distinct_speed1 m30001| Fri Feb 22 12:34:04.217 [conn6] CMD: drop test.distinct_speed1 m30001| Fri Feb 22 12:34:04.217 [conn6] build index test.distinct_speed1 { _id: 1 } m30001| Fri Feb 22 12:34:04.218 [conn6] build index done. scanned 0 total records. 0 secs it: 9 di: 13 it: 9 di: 13 it: 8 di: 14 m30001| Fri Feb 22 12:34:04.919 [conn6] build index test.distinct_speed1 { x: 1.0 } m30001| Fri Feb 22 12:34:04.979 [conn6] build index done. scanned 10000 total records. 0.06 secs 767ms ******************************************* Test : jstests/sortc.js ... m30999| Fri Feb 22 12:34:04.984 [conn1] DROP: test.jstests_sortc m30001| Fri Feb 22 12:34:04.985 [conn6] CMD: drop test.jstests_sortc m30001| Fri Feb 22 12:34:04.985 [conn6] build index test.jstests_sortc { _id: 1 } m30001| Fri Feb 22 12:34:04.986 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:04.991 [conn6] build index test.jstests_sortc { a: 1.0 } m30001| Fri Feb 22 12:34:04.992 [conn6] build index done. scanned 2 total records. 0 secs 13ms ******************************************* Test : jstests/type1.js ... m30999| Fri Feb 22 12:34:04.997 [conn1] DROP: test.type1 m30001| Fri Feb 22 12:34:04.998 [conn6] CMD: drop test.type1 m30001| Fri Feb 22 12:34:04.998 [conn6] build index test.type1 { _id: 1 } m30001| Fri Feb 22 12:34:04.999 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:05.002 [conn6] build index test.type1 { x: 1.0 } m30001| Fri Feb 22 12:34:05.003 [conn6] build index done. scanned 4 total records. 0 secs 11ms ******************************************* Test : jstests/removec.js ... m30999| Fri Feb 22 12:34:05.007 [conn1] DROP: test.jstests_removec m30001| Fri Feb 22 12:34:05.008 [conn6] CMD: drop test.jstests_removec m30001| Fri Feb 22 12:34:05.008 [conn6] build index test.jstests_removec { _id: 1 } m30001| Fri Feb 22 12:34:05.010 [conn6] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:34:05.010 [conn6] info: creating collection test.jstests_removec on add index m30001| Fri Feb 22 12:34:05.010 [conn6] build index test.jstests_removec { a: 1.0 } m30001| Fri Feb 22 12:34:05.011 [conn6] build index done. scanned 0 total records. 0 secs Fri Feb 22 12:34:05.083 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');t = db.jstests_removec;for( j = 0; j < 1000; ++j ) { o = t.findOne( { a:Random.randInt( 1100 ) } ); t.remove( { _id:o._id } ); t.insert( o );} localhost:30999/admin sh14683| MongoDB shell version: 2.4.0-rc1-pre- sh14683| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:34:05.169 [mongosMain] connection accepted from 127.0.0.1:61378 #5 (2 connections now open) m30001| Fri Feb 22 12:34:05.173 [initandlisten] connection accepted from 127.0.0.1:39334 #11 (6 connections now open) m30001| Fri Feb 22 12:34:05.210 [conn6] key seems to have moved in the index, refinding. 3:1033b000 m30999| Fri Feb 22 12:34:05.608 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765bd881c8e745391603e m30999| Fri Feb 22 12:34:05.609 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30001| Fri Feb 22 12:34:06.231 [conn6] key seems to have moved in the index, refinding. 
3:10350000 m30999| Fri Feb 22 12:34:06.323 [conn5] end connection 127.0.0.1:61378 (1 connection now open) m30999| Fri Feb 22 12:34:06.502 [conn1] DROP: test.jstests_removec m30001| Fri Feb 22 12:34:06.502 [conn6] CMD: drop test.jstests_removec 1497ms ******************************************* Test : jstests/mr4.js ... m30999| Fri Feb 22 12:34:06.506 [conn1] DROP: test.mr4 m30001| Fri Feb 22 12:34:06.506 [conn6] CMD: drop test.mr4 m30001| Fri Feb 22 12:34:06.506 [conn6] build index test.mr4 { _id: 1 } m30001| Fri Feb 22 12:34:06.507 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:06.508 [conn6] CMD: drop test.tmp.mr.mr4_7 m30001| Fri Feb 22 12:34:06.508 [conn6] CMD: drop test.tmp.mr.mr4_7_inc m30001| Fri Feb 22 12:34:06.508 [conn6] build index test.tmp.mr.mr4_7_inc { 0: 1 } m30001| Fri Feb 22 12:34:06.509 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:06.509 [conn6] build index test.tmp.mr.mr4_7 { _id: 1 } m30001| Fri Feb 22 12:34:06.510 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:06.511 [conn6] CMD: drop test.mr4_out m30001| Fri Feb 22 12:34:06.513 [conn6] CMD: drop test.tmp.mr.mr4_7 m30001| Fri Feb 22 12:34:06.514 [conn6] CMD: drop test.tmp.mr.mr4_7 m30001| Fri Feb 22 12:34:06.514 [conn6] CMD: drop test.tmp.mr.mr4_7_inc m30001| Fri Feb 22 12:34:06.515 [conn6] CMD: drop test.tmp.mr.mr4_7 m30001| Fri Feb 22 12:34:06.515 [conn6] CMD: drop test.tmp.mr.mr4_7_inc m30999| Fri Feb 22 12:34:06.516 [conn1] DROP: test.mr4_out m30001| Fri Feb 22 12:34:06.516 [conn6] CMD: drop test.mr4_out m30001| Fri Feb 22 12:34:06.518 [conn6] CMD: drop test.tmp.mr.mr4_8 m30001| Fri Feb 22 12:34:06.518 [conn6] CMD: drop test.tmp.mr.mr4_8_inc m30001| Fri Feb 22 12:34:06.518 [conn6] build index test.tmp.mr.mr4_8_inc { 0: 1 } m30001| Fri Feb 22 12:34:06.519 [conn6] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:06.519 [conn6] build index test.tmp.mr.mr4_8 { _id: 1 } m30001| Fri Feb 22 12:34:06.519 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:06.521 [conn6] CMD: drop test.mr4_out m30001| Fri Feb 22 12:34:06.522 [conn6] CMD: drop test.tmp.mr.mr4_8 m30001| Fri Feb 22 12:34:06.522 [conn6] CMD: drop test.tmp.mr.mr4_8 m30001| Fri Feb 22 12:34:06.522 [conn6] CMD: drop test.tmp.mr.mr4_8_inc m30001| Fri Feb 22 12:34:06.523 [conn6] CMD: drop test.tmp.mr.mr4_8 m30001| Fri Feb 22 12:34:06.523 [conn6] CMD: drop test.tmp.mr.mr4_8_inc m30999| Fri Feb 22 12:34:06.524 [conn1] DROP: test.mr4_out m30001| Fri Feb 22 12:34:06.524 [conn6] CMD: drop test.mr4_out 21ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_allowedcomparisons.js ******************************************* Test : jstests/pull_remove1.js ... m30999| Fri Feb 22 12:34:06.525 [conn1] DROP: test.pull_remove1 m30001| Fri Feb 22 12:34:06.526 [conn6] CMD: drop test.pull_remove1 m30001| Fri Feb 22 12:34:06.526 [conn6] build index test.pull_remove1 { _id: 1 } m30001| Fri Feb 22 12:34:06.527 [conn6] build index done. scanned 0 total records. 0 secs 32ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo2.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2overlappingpolys.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_distinct.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_poly_line.js !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped2.js ******************************************* Test : jstests/indexk.js ... 
m30999| Fri Feb 22 12:34:06.559 [conn1] DROP: test.jstests_indexk m30001| Fri Feb 22 12:34:06.559 [conn6] CMD: drop test.jstests_indexk m30001| Fri Feb 22 12:34:06.560 [conn6] build index test.jstests_indexk { _id: 1 } m30001| Fri Feb 22 12:34:06.560 [conn6] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:06.563 [conn6] build index test.jstests_indexk { a: 1.0 } m30001| Fri Feb 22 12:34:06.564 [conn6] build index done. scanned 1 total records. 0 secs 11ms ******************************************* Test : jstests/set2.js ... m30999| Fri Feb 22 12:34:06.570 [conn1] DROP: test.set2 m30001| Fri Feb 22 12:34:06.571 [conn6] CMD: drop test.set2 m30001| Fri Feb 22 12:34:06.571 [conn6] build index test.set2 { _id: 1 } m30001| Fri Feb 22 12:34:06.572 [conn6] build index done. scanned 0 total records. 0 secs 4ms ******************************************* Test : jstests/padding.js ... m30999| Fri Feb 22 12:34:06.575 [conn1] DROP: test.padding m30001| Fri Feb 22 12:34:06.575 [conn6] CMD: drop test.padding m30001| Fri Feb 22 12:34:06.576 [conn6] build index test.padding { _id: 1 } m30001| Fri Feb 22 12:34:06.576 [conn6] build index done. scanned 0 total records. 0 secs 1 m30001| Fri Feb 22 12:34:06.660 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.0990000000000038 m30001| Fri Feb 22 12:34:06.686 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.2020000000000075 m30001| Fri Feb 22 12:34:06.711 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.738 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.3000000000000114 m30001| Fri Feb 22 12:34:06.753 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.766 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. 
ns:test.padding m30001| Fri Feb 22 12:34:06.780 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.3990000000000151 m30001| Fri Feb 22 12:34:06.804 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.818 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.831 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.5020000000000189 m30001| Fri Feb 22 12:34:06.844 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.858 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.870 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.883 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.6000000000000227 m30001| Fri Feb 22 12:34:06.896 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.908 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.920 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.6990000000000265 m30001| Fri Feb 22 12:34:06.933 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.947 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding m30001| Fri Feb 22 12:34:06.961 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding 1.8020000000000302 m30001| Fri Feb 22 12:34:06.974 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. 
ns:test.padding
m30001| Fri Feb 22 12:34:06.986 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:06.999 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.900000000000034
m30001| Fri Feb 22 12:34:07.012 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.024 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.037 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.9950000000000379
m30001| Fri Feb 22 12:34:07.047 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.049 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.9620000000000415
m30001| Fri Feb 22 12:34:07.052 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.9280000000000452
m30001| Fri Feb 22 12:34:07.054 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.056 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.8950000000000489
m30001| Fri Feb 22 12:34:07.059 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.061 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.8620000000000525
m30001| Fri Feb 22 12:34:07.064 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.8280000000000562
m30001| Fri Feb 22 12:34:07.066 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.068 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7950000000000599
m30001| Fri Feb 22 12:34:07.071 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7620000000000635
m30001| Fri Feb 22 12:34:07.073 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.075 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7280000000000673
m30001| Fri Feb 22 12:34:07.078 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.695000000000071
m30001| Fri Feb 22 12:34:07.080 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.083 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.6620000000000745
m30001| Fri Feb 22 12:34:07.089 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.100 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.6960000000000783
m30001| Fri Feb 22 12:34:07.110 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.119 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.127 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.727000000000082
m30001| Fri Feb 22 12:34:07.137 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.146 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7300000000000857
m30001| Fri Feb 22 12:34:07.155 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.165 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.171 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7280000000000895
m30001| Fri Feb 22 12:34:07.177 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.183 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.189 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.195 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7310000000000931
m30001| Fri Feb 22 12:34:07.201 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.207 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.213 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7300000000000968
m30001| Fri Feb 22 12:34:07.221 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.227 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.233 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.239 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7280000000001006
m30001| Fri Feb 22 12:34:07.245 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.251 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.258 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7310000000001042
m30001| Fri Feb 22 12:34:07.264 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.271 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.279 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.285 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.730000000000108
m30001| Fri Feb 22 12:34:07.292 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.298 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.304 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7200000000001117
m30001| Fri Feb 22 12:34:07.312 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.321 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.330 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7240000000001152
m30001| Fri Feb 22 12:34:07.340 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.350 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.722000000000119
m30001| Fri Feb 22 12:34:07.360 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.370 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7210000000001227
m30001| Fri Feb 22 12:34:07.380 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.389 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.399 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7240000000001263
m30001| Fri Feb 22 12:34:07.409 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.420 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.73400000000013
m30001| Fri Feb 22 12:34:07.434 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.7690000000001338
m30001| Fri Feb 22 12:34:07.449 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.466 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.8000000000001375
m30001| Fri Feb 22 12:34:07.481 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.495 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.8350000000001412
m30001| Fri Feb 22 12:34:07.509 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
1.869000000000145
m30001| Fri Feb 22 12:34:07.524 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30001| Fri Feb 22 12:34:07.538 [conn6] info DFM::findAll(): extent 3:10384000 was empty, skipping ahead. ns:test.padding
m30999| Fri Feb 22 12:34:07.542 [conn1] DROP: test.padding
m30001| Fri Feb 22 12:34:07.542 [conn6] CMD: drop test.padding
969ms
******************************************* Test : jstests/index3.js ...
m30999| Fri Feb 22 12:34:07.548 [conn1] DROP: test.index3
m30001| Fri Feb 22 12:34:07.548 [conn6] CMD: drop test.index3
m30001| Fri Feb 22 12:34:07.549 [conn6] build index test.index3 { _id: 1 }
m30001| Fri Feb 22 12:34:07.550 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.550 [conn6] info: creating collection test.index3 on add index
m30001| Fri Feb 22 12:34:07.550 [conn6] build index test.index3 { name: 1.0 }
m30001| Fri Feb 22 12:34:07.550 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.552 [conn4] CMD: validate test.index3
m30001| Fri Feb 22 12:34:07.553 [conn4] validating index 0: test.index3.$_id_
m30001| Fri Feb 22 12:34:07.553 [conn4] validating index 1: test.index3.$name_1
10ms
******************************************* Test : jstests/update6.js ...
m30999| Fri Feb 22 12:34:07.560 [conn1] DROP: test.update6
m30001| Fri Feb 22 12:34:07.560 [conn6] CMD: drop test.update6
m30001| Fri Feb 22 12:34:07.561 [conn6] build index test.update6 { _id: 1 }
m30001| Fri Feb 22 12:34:07.561 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:07.563 [conn1] DROP: test.update6
m30001| Fri Feb 22 12:34:07.563 [conn6] CMD: drop test.update6
m30001| Fri Feb 22 12:34:07.564 [conn6] build index test.update6 { _id: 1 }
m30001| Fri Feb 22 12:34:07.564 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:07.565 [conn1] DROP: test.update6
m30001| Fri Feb 22 12:34:07.565 [conn6] CMD: drop test.update6
m30001| Fri Feb 22 12:34:07.566 [conn6] build index test.update6 { _id: 1 }
m30001| Fri Feb 22 12:34:07.567 [conn6] build index done. scanned 0 total records. 0 secs
{ "0720" : 5, "0721" : 12, "0722" : 11, "0723" : 3 }
{ "0719" : 1, "0720" : 5, "0721" : 12, "0722" : 11, "0723" : 3 }
16ms
******************************************* Test : jstests/org.js ...
m30999| Fri Feb 22 12:34:07.570 [conn1] DROP: test.jstests_org
m30001| Fri Feb 22 12:34:07.570 [conn6] CMD: drop test.jstests_org
m30001| Fri Feb 22 12:34:07.570 [conn6] build index test.jstests_org { _id: 1 }
m30001| Fri Feb 22 12:34:07.571 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.571 [conn6] info: creating collection test.jstests_org on add index
m30001| Fri Feb 22 12:34:07.571 [conn6] build index test.jstests_org { a: 1.0 }
m30001| Fri Feb 22 12:34:07.571 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.572 [conn6] build index test.jstests_org { b: 1.0 }
m30001| Fri Feb 22 12:34:07.572 [conn6] build index done. scanned 0 total records. 0 secs
6ms
******************************************* Test : jstests/exists2.js ...
m30999| Fri Feb 22 12:34:07.576 [conn1] DROP: test.exists2
m30001| Fri Feb 22 12:34:07.576 [conn6] CMD: drop test.exists2
m30001| Fri Feb 22 12:34:07.577 [conn6] build index test.exists2 { _id: 1 }
m30001| Fri Feb 22 12:34:07.577 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.579 [conn6] build index test.exists2 { a: 1.0, b: 1.0, c: 1.0 }
m30001| Fri Feb 22 12:34:07.579 [conn6] build index done. scanned 2 total records. 0 secs
5ms
******************************************* Test : jstests/orq.js ...
m30999| Fri Feb 22 12:34:07.581 [conn1] DROP: test.jstests_orq
m30001| Fri Feb 22 12:34:07.581 [conn6] CMD: drop test.jstests_orq
m30001| Fri Feb 22 12:34:07.581 [conn6] build index test.jstests_orq { _id: 1 }
m30001| Fri Feb 22 12:34:07.582 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.582 [conn6] info: creating collection test.jstests_orq on add index
m30001| Fri Feb 22 12:34:07.582 [conn6] build index test.jstests_orq { a: 1.0 }
m30001| Fri Feb 22 12:34:07.582 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.583 [conn6] build index test.jstests_orq { b: 1.0 }
m30001| Fri Feb 22 12:34:07.583 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.583 [conn6] build index test.jstests_orq { c: 1.0 }
m30001| Fri Feb 22 12:34:07.584 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:07.603 [conn1] DROP: test.jstests_orq
m30001| Fri Feb 22 12:34:07.603 [conn6] CMD: drop test.jstests_orq
m30001| Fri Feb 22 12:34:07.605 [conn6] build index test.jstests_orq { _id: 1 }
m30001| Fri Feb 22 12:34:07.605 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.605 [conn6] info: creating collection test.jstests_orq on add index
m30001| Fri Feb 22 12:34:07.605 [conn6] build index test.jstests_orq { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:07.606 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:07.625 [conn1] DROP: test.jstests_orq
m30001| Fri Feb 22 12:34:07.625 [conn6] CMD: drop test.jstests_orq
m30001| Fri Feb 22 12:34:07.626 [conn6] build index test.jstests_orq { _id: 1 }
m30001| Fri Feb 22 12:34:07.626 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.626 [conn6] info: creating collection test.jstests_orq on add index
m30001| Fri Feb 22 12:34:07.627 [conn6] build index test.jstests_orq { a: 1.0 }
m30001| Fri Feb 22 12:34:07.627 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.627 [conn6] build index test.jstests_orq { b: 1.0 }
m30001| Fri Feb 22 12:34:07.627 [conn6] build index done. scanned 0 total records. 0 secs
61ms
******************************************* Test : jstests/objid7.js ...
3ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/bench_test3.js
******************************************* Test : jstests/map1.js ...
6ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_array0.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_withinquery.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_uniqueDocs2.js
******************************************* Test : jstests/cursor5.js ...
m30999| Fri Feb 22 12:34:07.652 [conn1] DROP: test.ed_db_cursor5_bwsi
m30001| Fri Feb 22 12:34:07.652 [conn6] CMD: drop test.ed_db_cursor5_bwsi
m30001| Fri Feb 22 12:34:07.653 [conn6] build index test.ed_db_cursor5_bwsi { _id: 1 }
m30001| Fri Feb 22 12:34:07.653 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.654 [conn6] build index test.ed_db_cursor5_bwsi { a.d: 1.0, a: 1.0, e: -1.0 }
m30001| Fri Feb 22 12:34:07.654 [conn6] build index done. scanned 6 total records. 0 secs
9ms
******************************************* Test : jstests/covered_index_compound_1.js ...
m30999| Fri Feb 22 12:34:07.665 [conn1] DROP: test.covered_compound_1
m30001| Fri Feb 22 12:34:07.665 [conn6] CMD: drop test.covered_compound_1
m30001| Fri Feb 22 12:34:07.666 [conn6] build index test.covered_compound_1 { _id: 1 }
m30001| Fri Feb 22 12:34:07.666 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.674 [conn6] build index test.covered_compound_1 { a: 1.0, b: -1.0, c: 1.0 }
m30001| Fri Feb 22 12:34:07.675 [conn6] build index done. scanned 100 total records. 0 secs
all tests passed
18ms
******************************************* Test : jstests/basica.js ...
m30999| Fri Feb 22 12:34:07.680 [conn1] DROP: test.basica
m30001| Fri Feb 22 12:34:07.681 [conn6] CMD: drop test.basica
m30001| Fri Feb 22 12:34:07.681 [conn6] build index test.basica { _id: 1 }
m30001| Fri Feb 22 12:34:07.682 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:07.685 [conn1] DROP: test.basica
m30001| Fri Feb 22 12:34:07.685 [conn6] CMD: drop test.basica
m30001| Fri Feb 22 12:34:07.686 [conn6] build index test.basica { _id: 1 }
m30001| Fri Feb 22 12:34:07.686 [conn6] build index done. scanned 0 total records. 0 secs
9ms
******************************************* Test : jstests/group4.js ...
m30999| Fri Feb 22 12:34:07.689 [conn1] DROP: test.group4
m30001| Fri Feb 22 12:34:07.689 [conn6] CMD: drop test.group4
m30001| Fri Feb 22 12:34:07.689 [conn6] build index test.group4 { _id: 1 }
m30001| Fri Feb 22 12:34:07.690 [conn6] build index done. scanned 0 total records. 0 secs
19ms
******************************************* Test : jstests/fts2.js ...
m30000| Fri Feb 22 12:34:07.710 [initandlisten] connection accepted from 127.0.0.1:61593 #13 (9 connections now open)
m30001| Fri Feb 22 12:34:07.710 [initandlisten] connection accepted from 127.0.0.1:50784 #12 (7 connections now open)
m30999| Fri Feb 22 12:34:07.711 [conn1] DROP: test.text2
m30001| Fri Feb 22 12:34:07.711 [conn6] CMD: drop test.text2
m30001| Fri Feb 22 12:34:07.712 [conn6] build index test.text2 { _id: 1 }
m30001| Fri Feb 22 12:34:07.712 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.712 [conn6] build index test.text2 { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:34:07.713 [conn6] build index done. scanned 2 total records. 0 secs
10ms
******************************************* Test : jstests/finda.js ...
m30999| Fri Feb 22 12:34:07.725 [conn1] DROP: test.jstests_finda
m30001| Fri Feb 22 12:34:07.725 [conn6] CMD: drop test.jstests_finda
m30999| Fri Feb 22 12:34:07.725 [conn1] DROP: test.jstests_finda
m30001| Fri Feb 22 12:34:07.726 [conn6] CMD: drop test.jstests_finda
m30001| Fri Feb 22 12:34:07.726 [conn6] build index test.jstests_finda { _id: 1 }
m30001| Fri Feb 22 12:34:07.727 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.735 [conn6] build index test.jstests_finda { a: 1.0, _id: 1.0 }
m30001| Fri Feb 22 12:34:07.736 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.736 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.737 [conn6] build index done. scanned 200 total records. 0 secs
m30001| Fri Feb 22 12:34:07.738 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.745 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.746 [conn6] build index done. scanned 200 total records. 0 secs
m30001| Fri Feb 22 12:34:07.747 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.752 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.753 [conn6] build index done. scanned 200 total records. 0 secs
m30001| Fri Feb 22 12:34:07.753 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.768 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.769 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.770 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.777 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.778 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.778 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.786 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.787 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.787 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.808 [conn12] end connection 127.0.0.1:50784 (6 connections now open)
m30000| Fri Feb 22 12:34:07.808 [conn13] end connection 127.0.0.1:61593 (8 connections now open)
m30001| Fri Feb 22 12:34:07.826 [conn4] killcursors: found 0 of 1
m30001| Fri Feb 22 12:34:07.826 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.828 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.828 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.836 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.838 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.838 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.846 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.847 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.847 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.863 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.864 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.865 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.872 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.874 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.874 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.881 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.883 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.883 [conn4] CMD: dropIndexes test.jstests_finda
m30999| Fri Feb 22 12:34:07.884 [conn1] DROP: test.jstests_finda
m30001| Fri Feb 22 12:34:07.884 [conn6] CMD: drop test.jstests_finda
m30001| Fri Feb 22 12:34:07.887 [conn6] build index test.jstests_finda { _id: 1 }
m30001| Fri Feb 22 12:34:07.887 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:07.904 [conn6] build index test.jstests_finda { a: 1.0, _id: 1.0 }
m30001| Fri Feb 22 12:34:07.906 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.907 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.908 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.909 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.914 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.915 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.916 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.921 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.923 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.923 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.937 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.938 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.938 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.946 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.947 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.948 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.958 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.959 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.960 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.976 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.977 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.978 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.985 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.987 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.988 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:07.996 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:07.997 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:07.998 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:08.014 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:08.015 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:08.016 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:08.024 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:08.025 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:08.026 [conn4] CMD: dropIndexes test.jstests_finda
m30001| Fri Feb 22 12:34:08.034 [conn6] build index test.jstests_finda { c: 1.0 }
m30001| Fri Feb 22 12:34:08.035 [conn6] build index done. scanned 200 total records. 0.001 secs
m30001| Fri Feb 22 12:34:08.036 [conn4] CMD: dropIndexes test.jstests_finda
319ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle2.js
******************************************* Test : jstests/basic9.js ...
1ms
******************************************* Test : jstests/find9.js ...
m30001| Fri Feb 22 12:34:08.038 [conn6] build index test.foo_basic9 { _id: 1 }
m30001| Fri Feb 22 12:34:08.039 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:08.039 [conn1] DROP: test.jstests_find9
m30001| Fri Feb 22 12:34:08.039 [conn6] CMD: drop test.jstests_find9
m30001| Fri Feb 22 12:34:08.050 [conn6] build index test.jstests_find9 { _id: 1 }
m30001| Fri Feb 22 12:34:08.051 [conn6] build index done. scanned 0 total records. 0 secs
205ms
******************************************* Test : jstests/in4.js ...
m30999| Fri Feb 22 12:34:08.245 [conn1] DROP: test.jstests_in4
m30001| Fri Feb 22 12:34:08.252 [conn6] CMD: drop test.jstests_in4
m30001| Fri Feb 22 12:34:08.253 [conn6] build index test.jstests_in4 { _id: 1 }
m30001| Fri Feb 22 12:34:08.254 [conn6] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:08.254 [conn6] info: creating collection test.jstests_in4 on add index
m30001| Fri Feb 22 12:34:08.254 [conn6] build index test.jstests_in4 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:08.255 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:08.261 [conn1] DROP: test.jstests_in4
m30001| Fri Feb 22 12:34:08.261 [conn6] CMD: drop test.jstests_in4
m30001| Fri Feb 22 12:34:08.265 [conn6] build index test.jstests_in4 { _id: 1 }
m30001| Fri Feb 22 12:34:08.265 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:08.265 [conn6] info: creating collection test.jstests_in4 on add index
m30001| Fri Feb 22 12:34:08.265 [conn6] build index test.jstests_in4 { a: 1.0, b: 1.0, c: 1.0 }
m30001| Fri Feb 22 12:34:08.266 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:08.269 [conn1] DROP: test.jstests_in4
m30001| Fri Feb 22 12:34:08.269 [conn6] CMD: drop test.jstests_in4
m30001| Fri Feb 22 12:34:08.271 [conn6] build index test.jstests_in4 { _id: 1 }
m30001| Fri Feb 22 12:34:08.271 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:08.271 [conn6] info: creating collection test.jstests_in4 on add index
m30001| Fri Feb 22 12:34:08.272 [conn6] build index test.jstests_in4 { a: 1.0, b: -1.0 }
m30001| Fri Feb 22 12:34:08.272 [conn6] build index done. scanned 0 total records. 0 secs
31ms
******************************************* Test : jstests/coveredIndex1.js ...
m30999| Fri Feb 22 12:34:08.397 [conn1] DROP: test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.398 [conn6] CMD: drop test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.398 [conn6] build index test.jstests_coveredIndex1 { _id: 1 }
m30001| Fri Feb 22 12:34:08.399 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:08.400 [conn6] build index test.jstests_coveredIndex1 { ln: 1.0 }
m30001| Fri Feb 22 12:34:08.401 [conn6] build index done. scanned 6 total records. 0 secs
m30001| Fri Feb 22 12:34:08.404 [conn4] CMD: dropIndexes test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.405 [conn6] build index test.jstests_coveredIndex1 { ln: 1.0, fn: 1.0 }
m30001| Fri Feb 22 12:34:08.405 [conn6] build index done. scanned 6 total records. 0 secs
m30001| Fri Feb 22 12:34:08.409 [conn4] CMD: dropIndexes test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.410 [conn6] build index test.jstests_coveredIndex1 { _id: 1.0, ln: 1.0 }
m30001| Fri Feb 22 12:34:08.410 [conn6] build index done. scanned 6 total records. 0 secs
m30001| Fri Feb 22 12:34:08.413 [conn4] CMD: dropIndexes test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.414 [conn6] build index test.jstests_coveredIndex1 { obj: 1.0 }
m30001| Fri Feb 22 12:34:08.415 [conn6] build index done. scanned 6 total records. 0 secs
m30001| Fri Feb 22 12:34:08.417 [conn4] CMD: dropIndexes test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.418 [conn6] build index test.jstests_coveredIndex1 { obj.a: 1.0, obj.b: 1.0 }
m30001| Fri Feb 22 12:34:08.419 [conn6] build index done. scanned 6 total records. 0 secs
m30001| Fri Feb 22 12:34:08.420 [conn4] CMD: validate test.jstests_coveredIndex1
m30001| Fri Feb 22 12:34:08.420 [conn4] validating index 0: test.jstests_coveredIndex1.$_id_
m30001| Fri Feb 22 12:34:08.420 [conn4] validating index 1: test.jstests_coveredIndex1.$obj.a_1_obj.b_1
146ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_queryoptimizer.js
******************************************* Test : jstests/queryoptimizer1.js ...
m30999| Fri Feb 22 12:34:08.424 [conn1] DROP: test.queryoptimizer1
m30001| Fri Feb 22 12:34:08.425 [conn6] CMD: drop test.queryoptimizer1
m30001| Fri Feb 22 12:34:08.425 [conn6] build index test.queryoptimizer1 { _id: 1 }
m30001| Fri Feb 22 12:34:08.426 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:08.451 [conn4] killcursors: found 0 of 1
m30001| Fri Feb 22 12:34:08.451 [conn4] killcursors: found 0 of 1
m30001| Fri Feb 22 12:34:08.451 [conn4] killcursors: found 0 of 1
m30001| Fri Feb 22 12:34:09.737 [conn6] build index test.queryoptimizer1 { a: 1.0 }
m30001| Fri Feb 22 12:34:09.849 [conn6] build index done. scanned 20000 total records. 0.111 secs
m30001| Fri Feb 22 12:34:09.849 [conn6] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:111613 111ms
m30001| Fri Feb 22 12:34:09.850 [conn6] build index test.queryoptimizer1 { b: 1.0 }
m30001| Fri Feb 22 12:34:09.963 [conn6] build index done. scanned 20000 total records. 0.113 secs
m30001| Fri Feb 22 12:34:09.963 [conn6] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:113787 113ms
{ "cursor" : "BtreeCursor a_1", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 20, "nscanned" : 20, "nscannedObjectsAllPlans" : 60, "nscannedAllPlans" : 60, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "a" : [ [ 50, 50 ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:30001", "millis" : 0 }
m30999| Fri Feb 22 12:34:11.610 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765c3881c8e745391603f
m30999| Fri Feb 22 12:34:11.611 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
{ "cursor" : "BtreeCursor a_1", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 20, "nscanned" : 20, "nscannedObjectsAllPlans" : 60, "nscannedAllPlans" : 60, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "a" : [ [ 50, 50 ] ] }, "server" : "bs-smartos-x86-64-1.10gen.cc:30001", "millis" : 0 }
m30999| Fri Feb 22 12:34:16.060 [conn1] DROP: test.queryoptimizer1
m30001| Fri Feb 22 12:34:16.060 [conn6] CMD: drop test.queryoptimizer1
7646ms
******************************************* Test : jstests/remove_justone.js ...
m30999| Fri Feb 22 12:34:16.067 [conn1] DROP: test.remove_justone
m30001| Fri Feb 22 12:34:16.067 [conn6] CMD: drop test.remove_justone
m30001| Fri Feb 22 12:34:16.068 [conn6] build index test.remove_justone { _id: 1 }
m30001| Fri Feb 22 12:34:16.069 [conn6] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/find_and_modify.js ...
m30999| Fri Feb 22 12:34:16.071 [conn1] DROP: test.find_and_modify
m30001| Fri Feb 22 12:34:16.071 [conn6] CMD: drop test.find_and_modify
m30001| Fri Feb 22 12:34:16.073 [conn6] build index test.find_and_modify { _id: 1 }
m30001| Fri Feb 22 12:34:16.074 [conn6] build index done. scanned 0 total records. 0 secs
8ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/profile1.js
******************************************* Test : jstests/find8.js ...
m30999| Fri Feb 22 12:34:16.079 [conn1] DROP: test.jstests_find8
m30001| Fri Feb 22 12:34:16.080 [conn6] CMD: drop test.jstests_find8
m30001| Fri Feb 22 12:34:16.080 [conn6] build index test.jstests_find8 { _id: 1 }
m30001| Fri Feb 22 12:34:16.081 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.083 [conn6] build index test.jstests_find8 { b: 1.0 }
m30001| Fri Feb 22 12:34:16.084 [conn6] build index done. scanned 2 total records. 0 secs
7ms
>>>>>>>>>>>>>>> skipping jstests/aggregation
******************************************* Test : jstests/basic8.js ...
m30999| Fri Feb 22 12:34:16.085 [conn1] DROP: test.basic8
m30001| Fri Feb 22 12:34:16.086 [conn6] CMD: drop test.basic8
m30001| Fri Feb 22 12:34:16.086 [conn6] build index test.basic8 { _id: 1 }
m30001| Fri Feb 22 12:34:16.087 [conn6] build index done. scanned 0 total records. 0 secs
3ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle3.js
******************************************* Test : jstests/fts3.js ...
m30000| Fri Feb 22 12:34:16.090 [initandlisten] connection accepted from 127.0.0.1:60473 #14 (9 connections now open)
m30001| Fri Feb 22 12:34:16.091 [initandlisten] connection accepted from 127.0.0.1:39200 #13 (7 connections now open)
m30999| Fri Feb 22 12:34:16.091 [conn1] DROP: test.text3
m30001| Fri Feb 22 12:34:16.092 [conn6] CMD: drop test.text3
m30001| Fri Feb 22 12:34:16.092 [conn6] build index test.text3 { _id: 1 }
m30001| Fri Feb 22 12:34:16.093 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.094 [conn6] build index test.text3 { _fts: "text", _ftsx: 1, z: 1.0 }
m30001| Fri Feb 22 12:34:16.095 [conn6] build index done. scanned 2 total records. 0 secs
11ms
******************************************* Test : jstests/group5.js ...
m30999| Fri Feb 22 12:34:16.104 [conn1] DROP: test.group5
m30001| Fri Feb 22 12:34:16.104 [conn6] CMD: drop test.group5
m30001| Fri Feb 22 12:34:16.105 [conn6] build index test.group5 { _id: 1 }
m30001| Fri Feb 22 12:34:16.106 [conn6] build index done. scanned 0 total records. 0 secs
57ms
>>>>>>>>>>>>>>> skipping jstests/clone
******************************************* Test : jstests/gle_shell_server5441.js ...
m30999| Fri Feb 22 12:34:16.157 [conn1] DROP: test.server5441
m30001| Fri Feb 22 12:34:16.157 [conn6] CMD: drop test.server5441
m30001| Fri Feb 22 12:34:16.158 [conn6] build index test.server5441 { _id: 1 }
m30001| Fri Feb 22 12:34:16.159 [conn6] build index done. scanned 0 total records. 0 secs
41ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2oddshapes.js
******************************************* Test : jstests/in5.js ...
m30999| Fri Feb 22 12:34:16.198 [conn1] DROP: test.in5
m30001| Fri Feb 22 12:34:16.202 [conn6] CMD: drop test.in5
m30001| Fri Feb 22 12:34:16.203 [conn6] build index test.in5 { _id: 1 }
m30001| Fri Feb 22 12:34:16.205 [conn6] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:16.206 [conn6] build index test.in5 { x: 1.0 }
m30001| Fri Feb 22 12:34:16.207 [conn6] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:34:16.208 [conn4] CMD: dropIndexes test.in5
m30001| Fri Feb 22 12:34:16.210 [conn6] build index test.in5 { x.a: 1.0 }
m30001| Fri Feb 22 12:34:16.211 [conn6] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:34:16.213 [conn4] CMD: dropIndexes test.in5
m30999| Fri Feb 22 12:34:16.213 [conn1] DROP: test.in5
m30001| Fri Feb 22 12:34:16.214 [conn6] CMD: drop test.in5
m30001| Fri Feb 22 12:34:16.216 [conn6] build index test.in5 { _id: 1 }
m30001| Fri Feb 22 12:34:16.216 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.219 [conn4] CMD: dropIndexes test.in5
m30001| Fri Feb 22 12:34:16.220 [conn6] build index test.in5 { _id.a: 1.0 }
m30001| Fri Feb 22 12:34:16.220 [conn6] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:34:16.222 [conn4] CMD: dropIndexes test.in5
26ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2edgecases.js
******************************************* Test : jstests/index_check5.js ...
m30999| Fri Feb 22 12:34:16.224 [conn1] DROP: test.index_check5
m30001| Fri Feb 22 12:34:16.224 [conn6] CMD: drop test.index_check5
m30001| Fri Feb 22 12:34:16.224 [conn6] build index test.index_check5 { _id: 1 }
m30001| Fri Feb 22 12:34:16.225 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.226 [conn6] build index test.index_check5 { scores.level: 1.0, scores.score: 1.0 }
m30001| Fri Feb 22 12:34:16.226 [conn6] build index done. scanned 2 total records. 0 secs
4ms
******************************************* Test : jstests/orp.js ...
m30999| Fri Feb 22 12:34:16.228 [conn1] DROP: test.jstests_orp
m30001| Fri Feb 22 12:34:16.228 [conn6] CMD: drop test.jstests_orp
m30999| Fri Feb 22 12:34:16.228 [conn1] DROP: test.jstests_orp
m30001| Fri Feb 22 12:34:16.228 [conn6] CMD: drop test.jstests_orp
m30001| Fri Feb 22 12:34:16.229 [conn6] build index test.jstests_orp { _id: 1 }
m30001| Fri Feb 22 12:34:16.230 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.237 [conn6] build index test.jstests_orp { a: 1.0 }
m30001| Fri Feb 22 12:34:16.238 [conn6] build index done. scanned 120 total records. 0.001 secs
m30001| Fri Feb 22 12:34:16.239 [conn6] build index test.jstests_orp { c: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:16.240 [conn6] build index done. scanned 120 total records. 0.001 secs
m30999| Fri Feb 22 12:34:16.251 [conn1] DROP: test.jstests_orp
m30001| Fri Feb 22 12:34:16.251 [conn6] CMD: drop test.jstests_orp
m30001| Fri Feb 22 12:34:16.255 [conn6] build index test.jstests_orp { _id: 1 }
m30001| Fri Feb 22 12:34:16.256 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.263 [conn6] build index test.jstests_orp { a: 1.0 }
m30001| Fri Feb 22 12:34:16.265 [conn6] build index done. scanned 120 total records. 0.001 secs
m30001| Fri Feb 22 12:34:16.265 [conn6] build index test.jstests_orp { c: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:16.266 [conn6] build index done. scanned 120 total records. 0.001 secs
70ms

 *******************************************
         Test : jstests/exists3.js ...
m30999| Fri Feb 22 12:34:16.298 [conn1] DROP: test.jstests_exists3
m30001| Fri Feb 22 12:34:16.299 [conn6] CMD: drop test.jstests_exists3
m30001| Fri Feb 22 12:34:16.299 [conn6] build index test.jstests_exists3 { _id: 1 }
m30001| Fri Feb 22 12:34:16.300 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.301 [conn6] build index test.jstests_exists3 { c: -1.0 }
m30001| Fri Feb 22 12:34:16.302 [conn6] build index done.
scanned 1 total records. 0 secs
7ms

 *******************************************
         Test : jstests/orf.js ...
m30999| Fri Feb 22 12:34:16.305 [conn1] DROP: test.jstests_orf
m30001| Fri Feb 22 12:34:16.305 [conn6] CMD: drop test.jstests_orf
m30001| Fri Feb 22 12:34:16.306 [conn6] build index test.jstests_orf { _id: 1 }
m30001| Fri Feb 22 12:34:16.310 [conn6] build index done. scanned 0 total records. 0.003 secs
m30001| Fri Feb 22 12:34:16.568 [conn6] query test.jstests_orf query: { query: { $or: [ { _id: 0.0 }, { _id: 1.0 }, { _id: 2.0 }, { _id: 3.0 }, { _id: 4.0 }, { _id: 5.0 }, { _id: 6.0 }, { _id: 7.0 }, { _id: 8.0 }, { _id: 9.0 }, { _id: 10.0 }, { _id: 11.0 }, { _id: 12.0 }, { _id: 13.0 }, { _id: 14.0 }, { _id: 15.0 }, { _id: 16.0 }, { _id: 17.0 }, { _id: 18.0 }, { _id: 19.0 }, { _id: 20.0 }, { _id: 21.0 }, { _id: 22.0 }, { _id: 23.0 }, { _id: 24.0 }, { _id: 25.0 }, { _id: 26.0 }, { _id: 27.0 }, { _id: 28.0 }, { _id: 29.0 }, { _id: 30.0 }, { _id: 31.0 }, { _id: 32.0 }, { _id: 33.0 }, { _id: 34.0 }, { _id: 35.0 }, { _id: 36.0 }, { _id: 37.0 }, { _id: 38.0 }, { _id: 39.0 }, { _id: 40.0 }, { _id: 41.0 }, { _id: 42.0 }, { _id: 43.0 }, { _id: 44.0 }, { _id: 45.0 }, { _id: 46.0 }, { _id: 47.0 }, { _id: 48.0 }, { _id: 49.0 }, { _id: 50.0 }, { _id: 51.0 }, { _id: 52.0 }, { _id: 53.0 }, { _id: 54.0 }, { _id: 55.0 }, { _id: 56.0 }, { _id: 57.0 }, { _id: 58.0 }, { _id: 59.0 }, { _id: 60.0 }, { _id: 61.0 }, { _id: 62.0 }, { _id: 63.0 }, { _id: 64.0 }, { _id: 65.0 }, { _id: 66.0 }, { _id: 67.0 }, { _id: 68.0 }, { _id: 69.0 }, { _id: 70.0 }, { _id: 71.0 }, { _id: 72.0 }, { _id: 73.0 }, { _id: 74.0 }, { _id: 75.0 }, { _id: 76.0 }, { _id: 77.0 }, { _id: 78.0 }, { _id: 79.0 }, { _id: 80.0 }, { _id: 81.0 }, { _id: 82.0 }, { _id: 83.0 }, { _id: 84.0 }, { _id: 85.0 }, { _id: 86.0 }, { _id: 87.0 }, { _id: 88.0 }, { _id: 89.0 }, { _id: 90.0 }, { _id: 91.0 }, { _id: 92.0 }, { _id: 93.0 }, { _id: 94.0 }, { _id: 95.0 }, { _id: 96.0 }, { _id: 97.0 }, { _id: 98.0 }, { _id: 99.0 }, { _id:
100.0 }, { _id: 101.0 }, { _id: 102.0 }, { _id: 103.0 }, { _id: 104.0 }, { _id: 105.0 }, { _id: 106.0 }, { _id: 107.0 }, { _id: 108.0 }, { _id: 109.0 }, { _id: 110.0 }, { _id: 111.0 }, { _id: 112.0 }, { _id: 113.0 }, { _id: 114.0 }, { _id: 115.0 }, { _id: 116.0 }, { _id: 117.0 }, { _id: 118.0 }, { _id: 119.0 }, { _id: 120.0 }, { _id: 121.0 }, { _id: 122.0 }, { _id: 123.0 }, { _id: 124.0 }, { _id: 125.0 }, { _id: 126.0 }, { _id: 127.0 }, { _id: 128.0 }, { _id: 129.0 }, { _id: 130.0 }, { _id: 131.0 }, { _id: 132.0 }, { _id: 133.0 }, { _id: 134.0 }, { _id: 135.0 }, { _id: 136.0 }, { _id: 137.0 }, { _id: 138.0 }, { _id: 139.0 }, { _id: 140.0 }, { _id: 141.0 }, { _id: 142.0 }, { _id: 143.0 }, { _id: 144.0 }, { _id: 145.0 }, { _id: 146.0 }, { _id: 147.0 }, { _id: 148.0 }, { _id: 149.0 }, { _id: 150.0 }, { _id: 151.0 }, { _id: 152.0 }, { _id: 153.0 }, { _id: 154.0 }, { _id: 155.0 }, { _id: 156.0 }, { _id: 157.0 }, { _id: 158.0 }, { _id: 159.0 }, { _id: 160.0 }, { _id: 161.0 }, { _id: 162.0 }, { _id: 163.0 }, { _id: 164.0 }, { _id: 165.0 }, { _id: 166.0 }, { _id: 167.0 }, { _id: 168.0 }, { _id: 169.0 }, { _id: 170.0 }, { _id: 171.0 }, { _id: 172.0 }, { _id: 173.0 }, { _id: 174.0 }, { _id: 175.0 }, { _id: 176.0 }, { _id: 177.0 }, { _id: 178.0 }, { _id: 179.0 }, { _id: 180.0 }, { _id: 181.0 }, { _id: 182.0 }, { _id: 183.0 }, { _id: 184.0 }, { _id: 185.0 }, { _id: 186.0 }, { _id: 187.0 }, { _id: 188.0 }, { _id: 189.0 }, { _id: 190.0 }, { _id: 191.0 }, { _id: 192.0 }, { _id: 193.0 }, { _id: 194.0 }, { _id: 195.0 }, { _id: 196.0 }, { _id: 197.0 }, { _id: 198.0 }, { _id: 199.0 } ] }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:201 keyUpdates:0 locks(micros) r:245963 nreturned:1 reslen:84881 245ms m30001| Fri Feb 22 12:34:16.642 [conn13] end connection 127.0.0.1:39200 (6 connections now open) m30000| Fri Feb 22 12:34:16.642 [conn14] end connection 127.0.0.1:60473 (8 connections now open) { "clauses" : [ { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, 
"nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "_id" : [ [ 0, 0 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 0, 0 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 1, 1 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 1, 1 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 2, 2 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 2, 2 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "_id" : [ [ 3, 3 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 3, 3 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 4, 4 ] ] }, "allPlans" : [ { 
"cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 4, 4 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 5, 5 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 5, 5 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 6, 6 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 6, 6 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "_id" : [ [ 7, 7 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 7, 7 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 8, 8 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 8, 8 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, 
"nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 9, 9 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 9, 9 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 10, 10 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 10, 10 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 11, 11 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 11, 11 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 12, 12 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 12, 12 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 13, 13 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", 
"n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 13, 13 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 14, 14 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 14, 14 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "_id" : [ [ 15, 15 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 15, 15 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 16, 16 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 16, 16 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 17, 17 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 17, 17 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, 
"nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 18, 18 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 18, 18 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 19, 19 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 19, 19 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 20, 20 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 20, 20 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 21, 21 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 21, 21 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 22, 22 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" 
: 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 22, 22 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 23, 23 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 23, 23 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 24, 24 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 24, 24 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 25, 25 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 25, 25 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 26, 26 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 26, 26 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, 
"scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 27, 27 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 27, 27 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 28, 28 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 28, 28 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 29, 29 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 29, 29 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 30, 30 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 30, 30 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "_id" : [ [ 31, 31 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, 
"indexBounds" : { "_id" : [ [ 31, 31 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 32, 32 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 32, 32 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 33, 33 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 33, 33 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 34, 34 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 34, 34 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 35, 35 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 35, 35 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, 
"indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 36, 36 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 36, 36 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 37, 37 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 37, 37 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 38, 38 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 38, 38 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 39, 39 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 39, 39 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 40, 40 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : 
[ [ 40, 40 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 41, 41 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 41, 41 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 42, 42 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 42, 42 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 43, 43 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 43, 43 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 44, 44 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 44, 44 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, 
"nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 45, 45 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 45, 45 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 46, 46 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 46, 46 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 47, 47 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 47, 47 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 48, 48 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 48, 48 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 49, 49 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 49, 49 ] ] } } ] 
}, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 50, 50 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 50, 50 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 51, 51 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 51, 51 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 52, 52 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 52, 52 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 53, 53 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 53, 53 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" 
: 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 54, 54 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 54, 54 ] ] } } ] },
[... the same explain() document repeats for "_id" values 55 through 139: each reports "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0-2, point "indexBounds" of [ [ N, N ] ], and a single identical entry in "allPlans" ...]
{ "cursor" : "BtreeCursor _id_", "isMultiKey" : false,
"n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 140, 140 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 140, 140 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 141, 141 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 141, 141 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 142, 142 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 142, 142 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 143, 143 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 143, 143 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" 
: [ [ 144, 144 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 144, 144 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 145, 145 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 145, 145 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 146, 146 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 146, 146 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 147, 147 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 147, 147 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 148, 148 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 148, 148 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : 
false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 149, 149 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 149, 149 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 150, 150 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 150, 150 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 151, 151 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 151, 151 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 152, 152 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 152, 152 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : 
{ "_id" : [ [ 153, 153 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 153, 153 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 154, 154 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 154, 154 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 155, 155 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 155, 155 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 156, 156 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 156, 156 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 157, 157 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 157, 157 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", 
"isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 158, 158 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 158, 158 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 159, 159 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 159, 159 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 160, 160 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 160, 160 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 161, 161 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 161, 161 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, 
"indexBounds" : { "_id" : [ [ 162, 162 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 162, 162 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 163, 163 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 163, 163 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 164, 164 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 164, 164 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 165, 165 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 165, 165 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 166, 166 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 166, 166 ] ] } } ] }, { "cursor" : "BtreeCursor 
_id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 167, 167 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 167, 167 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 168, 168 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 168, 168 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 169, 169 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 169, 169 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 170, 170 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 170, 170 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" 
: 0, "indexBounds" : { "_id" : [ [ 171, 171 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 171, 171 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 172, 172 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 172, 172 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 173, 173 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 173, 173 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 174, 174 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 174, 174 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 175, 175 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 175, 175 ] ] } } ] }, { "cursor" : 
"BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 176, 176 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 176, 176 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 177, 177 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 177, 177 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 178, 178 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 178, 178 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 179, 179 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 179, 179 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" 
: 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 180, 180 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 180, 180 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 181, 181 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 181, 181 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 182, 182 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 182, 182 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 183, 183 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 183, 183 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 184, 184 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 184, 184 ] ] } } ] }, { 
"cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 185, 185 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 185, 185 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 186, 186 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 186, 186 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 187, 187 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 187, 187 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 188, 188 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 188, 188 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, 
"nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 189, 189 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 189, 189 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 190, 190 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 190, 190 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 191, 191 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 191, 191 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 192, 192 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 192, 192 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 193, 193 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 193, 193 ] ] } 
} ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 194, 194 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 194, 194 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 195, 195 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 195, 195 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 196, 196 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 196, 196 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 197, 197 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 197, 197 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" 
: 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { "_id" : [ [ 198, 198 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 198, 198 ] ] } } ] }, { "cursor" : "BtreeCursor _id_", "isMultiKey" : false, "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "nscannedObjectsAllPlans" : 1, "nscannedAllPlans" : 1, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "_id" : [ [ 199, 199 ] ] }, "allPlans" : [ { "cursor" : "BtreeCursor _id_", "n" : 1, "nscannedObjects" : 1, "nscanned" : 1, "indexBounds" : { "_id" : [ [ 199, 199 ] ] } } ] } ], "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "millis" : 236, "server" : "bs-smartos-x86-64-1.10gen.cc:30001", "millis" : 236 } m30001| Fri Feb 22 12:34:16.960 [conn6] command test.$cmd command: { count: "jstests_orf", query: { $or: [ { _id: 0.0 }, { _id: 1.0 }, { _id: 2.0 }, { _id: 3.0 }, { _id: 4.0 }, { _id: 5.0 }, { _id: 6.0 }, { _id: 7.0 }, { _id: 8.0 }, { _id: 9.0 }, { _id: 10.0 }, { _id: 11.0 }, { _id: 12.0 }, { _id: 13.0 }, { _id: 14.0 }, { _id: 15.0 }, { _id: 16.0 }, { _id: 17.0 }, { _id: 18.0 }, { _id: 19.0 }, { _id: 20.0 }, { _id: 21.0 }, { _id: 22.0 }, { _id: 23.0 }, { _id: 24.0 }, { _id: 25.0 }, { _id: 26.0 }, { _id: 27.0 }, { _id: 28.0 }, { _id: 29.0 }, { _id: 30.0 }, { _id: 31.0 }, { _id: 32.0 }, { _id: 33.0 }, { _id: 34.0 }, { _id: 35.0 }, { _id: 36.0 }, { _id: 37.0 }, { _id: 38.0 }, { _id: 39.0 }, { _id: 40.0 }, { _id: 41.0 }, { _id: 42.0 }, { _id: 43.0 }, { _id: 44.0 }, { _id: 45.0 }, { _id: 46.0 }, { _id: 47.0 }, { _id: 48.0 }, { _id: 49.0 }, { _id: 50.0 }, { _id: 51.0 }, { _id: 52.0 }, { _id: 53.0 }, { _id: 54.0 }, { _id: 55.0 }, { _id: 56.0 }, { _id: 57.0 }, { _id: 58.0 }, { _id: 59.0 }, { _id: 60.0 }, { _id: 61.0 }, { _id: 62.0 }, { _id: 63.0 }, { _id: 64.0 }, { _id: 65.0 }, { _id: 66.0 }, { _id: 67.0 }, { _id: 
68.0 }, { _id: 69.0 }, { _id: 70.0 }, { _id: 71.0 }, { _id: 72.0 }, { _id: 73.0 }, { _id: 74.0 }, { _id: 75.0 }, { _id: 76.0 }, { _id: 77.0 }, { _id: 78.0 }, { _id: 79.0 }, { _id: 80.0 }, { _id: 81.0 }, { _id: 82.0 }, { _id: 83.0 }, { _id: 84.0 }, { _id: 85.0 }, { _id: 86.0 }, { _id: 87.0 }, { _id: 88.0 }, { _id: 89.0 }, { _id: 90.0 }, { _id: 91.0 }, { _id: 92.0 }, { _id: 93.0 }, { _id: 94.0 }, { _id: 95.0 }, { _id: 96.0 }, { _id: 97.0 }, { _id: 98.0 }, { _id: 99.0 }, { _id: 100.0 }, { _id: 101.0 }, { _id: 102.0 }, { _id: 103.0 }, { _id: 104.0 }, { _id: 105.0 }, { _id: 106.0 }, { _id: 107.0 }, { _id: 108.0 }, { _id: 109.0 }, { _id: 110.0 }, { _id: 111.0 }, { _id: 112.0 }, { _id: 113.0 }, { _id: 114.0 }, { _id: 115.0 }, { _id: 116.0 }, { _id: 117.0 }, { _id: 118.0 }, { _id: 119.0 }, { _id: 120.0 }, { _id: 121.0 }, { _id: 122.0 }, { _id: 123.0 }, { _id: 124.0 }, { _id: 125.0 }, { _id: 126.0 }, { _id: 127.0 }, { _id: 128.0 }, { _id: 129.0 }, { _id: 130.0 }, { _id: 131.0 }, { _id: 132.0 }, { _id: 133.0 }, { _id: 134.0 }, { _id: 135.0 }, { _id: 136.0 }, { _id: 137.0 }, { _id: 138.0 }, { _id: 139.0 }, { _id: 140.0 }, { _id: 141.0 }, { _id: 142.0 }, { _id: 143.0 }, { _id: 144.0 }, { _id: 145.0 }, { _id: 146.0 }, { _id: 147.0 }, { _id: 148.0 }, { _id: 149.0 }, { _id: 150.0 }, { _id: 151.0 }, { _id: 152.0 }, { _id: 153.0 }, { _id: 154.0 }, { _id: 155.0 }, { _id: 156.0 }, { _id: 157.0 }, { _id: 158.0 }, { _id: 159.0 }, { _id: 160.0 }, { _id: 161.0 }, { _id: 162.0 }, { _id: 163.0 }, { _id: 164.0 }, { _id: 165.0 }, { _id: 166.0 }, { _id: 167.0 }, { _id: 168.0 }, { _id: 169.0 }, { _id: 170.0 }, { _id: 171.0 }, { _id: 172.0 }, { _id: 173.0 }, { _id: 174.0 }, { _id: 175.0 }, { _id: 176.0 }, { _id: 177.0 }, { _id: 178.0 }, { _id: 179.0 }, { _id: 180.0 }, { _id: 181.0 }, { _id: 182.0 }, { _id: 183.0 }, { _id: 184.0 }, { _id: 185.0 }, { _id: 186.0 }, { _id: 187.0 }, { _id: 188.0 }, { _id: 189.0 }, { _id: 190.0 }, { _id: 191.0 }, { _id: 192.0 }, { _id: 193.0 }, { _id: 194.0 }, { _id: 
195.0 }, { _id: 196.0 }, { _id: 197.0 }, { _id: 198.0 }, { _id: 199.0 } ] } } ntoreturn:1 keyUpdates:0 locks(micros) r:235424 reslen:48 235ms
656ms
******************************************* Test : jstests/update7.js ...
m30999| Fri Feb 22 12:34:16.962 [conn1] DROP: test.update7
m30001| Fri Feb 22 12:34:16.962 [conn6] CMD: drop test.update7
m30001| Fri Feb 22 12:34:16.963 [conn6] build index test.update7 { _id: 1 }
m30001| Fri Feb 22 12:34:16.963 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:16.966 [conn1] DROP: test.update7
m30001| Fri Feb 22 12:34:16.966 [conn6] CMD: drop test.update7
m30001| Fri Feb 22 12:34:16.968 [conn6] build index test.update7 { _id: 1 }
m30001| Fri Feb 22 12:34:16.969 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.970 [conn6] build index test.update7 { a: 1.0 }
m30001| Fri Feb 22 12:34:16.971 [conn6] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:16.972 [conn6] build index test.update7 { b: 1.0 }
m30001| Fri Feb 22 12:34:16.972 [conn6] build index done. scanned 2 total records. 0 secs
m30999| Fri Feb 22 12:34:16.974 [conn1] DROP: test.update7
m30001| Fri Feb 22 12:34:16.974 [conn6] CMD: drop test.update7
m30001| Fri Feb 22 12:34:16.977 [conn6] build index test.update7 { _id: 1 }
m30001| Fri Feb 22 12:34:16.978 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:16.980 [conn1] DROP: test.update7
m30001| Fri Feb 22 12:34:16.980 [conn6] CMD: drop test.update7
m30001| Fri Feb 22 12:34:16.982 [conn6] build index test.update7 { _id: 1 }
m30001| Fri Feb 22 12:34:16.983 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.983 [conn6] build index test.update7 { a: 1.0 }
m30001| Fri Feb 22 12:34:16.983 [conn6] build index done. scanned 2 total records. 0 secs
m30999| Fri Feb 22 12:34:16.989 [conn1] DROP: test.update7
m30001| Fri Feb 22 12:34:16.989 [conn6] CMD: drop test.update7
m30001| Fri Feb 22 12:34:16.991 [conn6] build index test.update7 { _id: 1 }
m30001| Fri Feb 22 12:34:16.992 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.992 [conn6] build index test.update7 { x: 1.0 }
m30001| Fri Feb 22 12:34:16.992 [conn6] build index done. scanned 3 total records. 0 secs
34ms
******************************************* Test : jstests/index2.js ...
m30999| Fri Feb 22 12:34:16.995 [conn1] DROP: test.embeddedIndexTest2
m30001| Fri Feb 22 12:34:16.995 [conn6] CMD: drop test.embeddedIndexTest2
m30001| Fri Feb 22 12:34:16.996 [conn6] build index test.embeddedIndexTest2 { _id: 1 }
m30001| Fri Feb 22 12:34:16.996 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:16.999 [conn6] build index test.embeddedIndexTest2 { z: 1.0 }
m30001| Fri Feb 22 12:34:16.999 [conn6] build index done. scanned 4 total records. 0 secs
m30001| Fri Feb 22 12:34:17.002 [conn4] CMD: validate test.embeddedIndexTest2
m30001| Fri Feb 22 12:34:17.002 [conn4] validating index 0: test.embeddedIndexTest2.$_id_
m30001| Fri Feb 22 12:34:17.002 [conn4] validating index 1: test.embeddedIndexTest2.$z_1
8ms
******************************************* Test : jstests/set3.js ...
m30999| Fri Feb 22 12:34:17.008 [conn1] DROP: test.set3
m30001| Fri Feb 22 12:34:17.008 [conn6] CMD: drop test.set3
m30001| Fri Feb 22 12:34:17.008 [conn6] build index test.set3 { _id: 1 }
m30001| Fri Feb 22 12:34:17.009 [conn6] build index done. scanned 0 total records. 0 secs
8ms
******************************************* Test : jstests/collmod.js ...
m30999| Fri Feb 22 12:34:17.017 [conn1] DROP: test.collModTest
m30001| Fri Feb 22 12:34:17.017 [conn6] CMD: drop test.collModTest
m30001| Fri Feb 22 12:34:17.018 [conn6] build index test.collModTest { _id: 1 }
m30001| Fri Feb 22 12:34:17.019 [conn6] build index done. scanned 0 total records. 0 secs
{ "raw" : { "localhost:30001" : { "usePowerOf2Sizes_old" : false, "usePowerOf2Sizes_new" : true, "ok" : 1 } }, "ok" : 1 }
{ "raw" : { "localhost:30001" : { "ok" : 0, "errmsg" : "unknown option to collMod: unrecognized" } }, "ok" : 0, "errmsg" : "{ localhost:30001: \"unknown option to collMod: unrecognized\" }" }
m30001| Fri Feb 22 12:34:17.022 [conn6] build index test.collModTest { a: 1.0 }
m30001| Fri Feb 22 12:34:17.023 [conn6] build index done. scanned 0 total records. 0 secs
{ "raw" : { "localhost:30001" : { "ok" : 0, "errmsg" : "no keyPattern specified" } }, "ok" : 0, "errmsg" : "{ localhost:30001: \"no keyPattern specified\" }" }
{ "raw" : { "localhost:30001" : { "ok" : 0, "errmsg" : "no expireAfterSeconds field" } }, "ok" : 0, "errmsg" : "{ localhost:30001: \"no expireAfterSeconds field\" }" }
{ "raw" : { "localhost:30001" : { "ok" : 0, "errmsg" : "expireAfterSeconds field must be a number" } }, "ok" : 0, "errmsg" : "{ localhost:30001: \"expireAfterSeconds field must be a number\" }" }
{ "raw" : { "localhost:30001" : { "expireAfterSeconds_old" : 50, "expireAfterSeconds_new" : 100, "ok" : 1 } }, "ok" : 1 }
m30001| Fri Feb 22 12:34:17.027 [conn4] CMD: dropIndexes test.collModTest
m30001| Fri Feb 22 12:34:17.029 [conn6] build index test.collModTest { a: 1.0 }
m30001| Fri Feb 22 12:34:17.030 [conn6] build index done. scanned 0 total records. 0 secs
{ "raw" : { "localhost:30001" : { "ok" : 0, "errmsg" : "existing expireAfterSeconds field is not a number" } }, "ok" : 0, "errmsg" : "{ localhost:30001: \"existing expireAfterSeconds field is not a number\" }" }
m30001| Fri Feb 22 12:34:17.031 [conn4] CMD: dropIndexes test.collModTest
m30001| Fri Feb 22 12:34:17.032 [conn6] build index test.collModTest { a: 1.0 }
m30001| Fri Feb 22 12:34:17.032 [conn6] build index done. scanned 0 total records. 0 secs
{ "raw" : { "localhost:30001" : { "usePowerOf2Sizes_old" : true, "usePowerOf2Sizes_new" : false, "expireAfterSeconds_old" : 50, "expireAfterSeconds_new" : 100, "ok" : 1 } }, "ok" : 1 }
25ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped3.js
******************************************* Test : jstests/indexj.js ...
m30999| Fri Feb 22 12:34:17.039 [conn1] DROP: test.jstests_indexj
m30001| Fri Feb 22 12:34:17.039 [conn6] CMD: drop test.jstests_indexj
m30001| Fri Feb 22 12:34:17.040 [conn6] build index test.jstests_indexj { _id: 1 }
m30001| Fri Feb 22 12:34:17.041 [conn6] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:17.041 [conn6] info: creating collection test.jstests_indexj on add index
m30001| Fri Feb 22 12:34:17.041 [conn6] build index test.jstests_indexj { a: 1.0 }
m30001| Fri Feb 22 12:34:17.042 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:17.043 [conn1] DROP: test.jstests_indexj
m30001| Fri Feb 22 12:34:17.043 [conn6] CMD: drop test.jstests_indexj
m30001| Fri Feb 22 12:34:17.047 [conn6] build index test.jstests_indexj { _id: 1 }
m30001| Fri Feb 22 12:34:17.047 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:17.047 [conn6] info: creating collection test.jstests_indexj on add index
m30001| Fri Feb 22 12:34:17.047 [conn6] build index test.jstests_indexj { a: 1.0 }
m30001| Fri Feb 22 12:34:17.048 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:17.051 [conn1] DROP: test.jstests_indexj
m30001| Fri Feb 22 12:34:17.051 [conn6] CMD: drop test.jstests_indexj
m30001| Fri Feb 22 12:34:17.054 [conn6] build index test.jstests_indexj { _id: 1 }
m30001| Fri Feb 22 12:34:17.054 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:17.054 [conn6] info: creating collection test.jstests_indexj on add index
m30001| Fri Feb 22 12:34:17.054 [conn6] build index test.jstests_indexj { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:17.055 [conn6] build index done. scanned 0 total records. 0 secs
25ms
******************************************* Test : jstests/cursor4.js ...
m30999| Fri Feb 22 12:34:17.061 [conn1] DROP: test.ed_db_cursor4_cfmfs
m30001| Fri Feb 22 12:34:17.061 [conn6] CMD: drop test.ed_db_cursor4_cfmfs
m30001| Fri Feb 22 12:34:17.062 [conn6] build index test.ed_db_cursor4_cfmfs { _id: 1 }
m30001| Fri Feb 22 12:34:17.063 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:17.063 [conn6] build index test.ed_db_cursor4_cfmfs { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:17.064 [conn6] build index done. scanned 5 total records. 0 secs
19ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_array1.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/bench_test2.js
******************************************* Test : jstests/objid6.js ...
25ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_update.js
******************************************* Test : jstests/where1.js ...
m30999| Fri Feb 22 12:34:17.105 [conn1] DROP: test.where1
m30001| Fri Feb 22 12:34:17.105 [conn6] CMD: drop test.where1
m30001| Fri Feb 22 12:34:17.106 [conn6] build index test.where1 { _id: 1 }
m30001| Fri Feb 22 12:34:17.107 [conn6] build index done. scanned 0 total records. 0 secs
36ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo3.js
******************************************* Test : jstests/numberlong4.js ...
m30999| Fri Feb 22 12:34:17.141 [conn1] DROP: test.jstests_numberlong4
m30001| Fri Feb 22 12:34:17.141 [conn6] CMD: drop test.jstests_numberlong4
1ms
******************************************* Test : jstests/proj_key1.js ...
m30999| Fri Feb 22 12:34:17.142 [conn1] DROP: test.proj_key1
m30001| Fri Feb 22 12:34:17.142 [conn6] CMD: drop test.proj_key1
m30001| Fri Feb 22 12:34:17.143 [conn6] build index test.proj_key1 { _id: 1 }
m30001| Fri Feb 22 12:34:17.144 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:17.145 [conn6] build index test.proj_key1 { a: 1.0 }
m30001| Fri Feb 22 12:34:17.146 [conn6] build index done. scanned 10 total records. 0 secs
9ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped_empty.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/error1.js
>>>>>>>>>>>>>>> skipping jstests/multiVersion
>>>>>>>>>>>>>>> skipping jstests/sharding
******************************************* Test : jstests/sortb.js ...
m30999| Fri Feb 22 12:34:17.151 [conn1] DROP: test.jstests_sortb
m30001| Fri Feb 22 12:34:17.151 [conn6] CMD: drop test.jstests_sortb
m30001| Fri Feb 22 12:34:17.152 [conn6] build index test.jstests_sortb { _id: 1 }
m30001| Fri Feb 22 12:34:17.152 [conn6] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:17.152 [conn6] info: creating collection test.jstests_sortb on add index
m30001| Fri Feb 22 12:34:17.152 [conn6] build index test.jstests_sortb { b: 1.0 }
m30001| Fri Feb 22 12:34:17.153 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:17.613 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765c9881c8e7453916040
m30999| Fri Feb 22 12:34:17.613 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:34:18.149 [conn6] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortb query:{ query: {}, orderby: { a: -1.0 }, $hint: { b: 1.0 } }
m30001| Fri Feb 22 12:34:18.149 [conn6] ntoskip:0 ntoreturn:100
m30001| Fri Feb 22 12:34:18.149 [conn6] problem detected during query over test.jstests_sortb : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:18.150 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:18.150 [conn6] end connection 127.0.0.1:49383 (5 connections now open)
m30001| Fri Feb 22 12:34:18.168 [conn11] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortb query:{ query: {}, orderby: { a: -1.0 }, $hint: { b: 1.0 }, $showDiskLoc: true }
m30001| Fri Feb 22 12:34:18.168 [conn11] ntoskip:0 ntoreturn:100
m30001| Fri Feb 22 12:34:18.168 [conn11] problem detected during query over test.jstests_sortb : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:18.168 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:18.169 [conn11] end connection 127.0.0.1:39334 (4 connections now open)
m30999| Fri Feb 22 12:34:18.169 [conn1] DROP: test.jstests_sortb
m30001| Fri Feb 22 12:34:18.170 [initandlisten] connection accepted from 127.0.0.1:35456 #14 (5 connections now open)
m30001| Fri Feb 22 12:34:18.170 [conn14] CMD: drop test.jstests_sortb
1022ms
******************************************* Test : jstests/eval3.js ...
m30999| Fri Feb 22 12:34:18.176 [conn1] DROP: test.eval3
m30001| Fri Feb 22 12:34:18.176 [conn14] CMD: drop test.eval3
m30001| Fri Feb 22 12:34:18.177 [conn14] build index test.eval3 { _id: 1 }
m30001| Fri Feb 22 12:34:18.177 [conn14] build index done. scanned 0 total records. 0 secs
56ms
******************************************* Test : jstests/explain5.js ...
m30999| Fri Feb 22 12:34:18.229 [conn1] DROP: test.jstests_explain5
m30001| Fri Feb 22 12:34:18.229 [conn14] CMD: drop test.jstests_explain5
m30001| Fri Feb 22 12:34:18.230 [conn14] build index test.jstests_explain5 { _id: 1 }
m30001| Fri Feb 22 12:34:18.238 [conn14] build index done. scanned 0 total records. 0.008 secs
m30001| Fri Feb 22 12:34:18.238 [conn14] info: creating collection test.jstests_explain5 on add index
m30001| Fri Feb 22 12:34:18.239 [conn14] build index test.jstests_explain5 { a: 1.0 }
m30001| Fri Feb 22 12:34:18.239 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.240 [conn14] build index test.jstests_explain5 { b: 1.0 }
m30001| Fri Feb 22 12:34:18.241 [conn14] build index done. scanned 0 total records. 0 secs
18ms
******************************************* Test : jstests/drop.js ...
m30999| Fri Feb 22 12:34:18.248 [conn1] DROP: test.jstests_drop
m30001| Fri Feb 22 12:34:18.248 [conn14] CMD: drop test.jstests_drop
m30001| Fri Feb 22 12:34:18.250 [conn14] build index test.jstests_drop { _id: 1 }
m30001| Fri Feb 22 12:34:18.250 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.251 [conn14] build index test.jstests_drop { a: 1.0 }
m30001| Fri Feb 22 12:34:18.252 [conn14] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:18.253 [conn1] DROP: test.jstests_drop
m30001| Fri Feb 22 12:34:18.253 [conn14] CMD: drop test.jstests_drop
m30001| Fri Feb 22 12:34:18.257 [conn14] build index test.jstests_drop { _id: 1 }
m30001| Fri Feb 22 12:34:18.258 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.258 [conn14] info: creating collection test.jstests_drop on add index
m30001| Fri Feb 22 12:34:18.258 [conn14] build index test.jstests_drop { a: 1.0 }
m30001| Fri Feb 22 12:34:18.258 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.260 [conn4] CMD: dropIndexes test.jstests_drop
16ms
******************************************* Test : jstests/mr5.js ...
m30999| Fri Feb 22 12:34:18.268 [conn1] DROP: test.mr5
m30001| Fri Feb 22 12:34:18.268 [conn14] CMD: drop test.mr5
m30001| Fri Feb 22 12:34:18.268 [conn14] build index test.mr5 { _id: 1 }
m30001| Fri Feb 22 12:34:18.270 [conn14] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:18.303 [conn14] CMD: drop test.tmp.mr.mr5_9
m30001| Fri Feb 22 12:34:18.303 [conn14] CMD: drop test.tmp.mr.mr5_9_inc
m30001| Fri Feb 22 12:34:18.303 [conn14] build index test.tmp.mr.mr5_9_inc { 0: 1 }
m30001| Fri Feb 22 12:34:18.304 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.304 [conn14] build index test.tmp.mr.mr5_9 { _id: 1 }
m30001| Fri Feb 22 12:34:18.305 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.308 [conn14] CMD: drop test.mr5_out
m30001| Fri Feb 22 12:34:18.310 [conn14] CMD: drop test.tmp.mr.mr5_9
m30001| Fri Feb 22 12:34:18.311 [conn14] CMD: drop test.tmp.mr.mr5_9
m30001| Fri Feb 22 12:34:18.311 [conn14] CMD: drop test.tmp.mr.mr5_9_inc
m30001| Fri Feb 22 12:34:18.312 [conn14] CMD: drop test.tmp.mr.mr5_9
m30001| Fri Feb 22 12:34:18.312 [conn14] CMD: drop test.tmp.mr.mr5_9_inc
m30999| Fri Feb 22 12:34:18.314 [conn1] DROP: test.mr5_out
m30001| Fri Feb 22 12:34:18.314 [conn14] CMD: drop test.mr5_out
m30001| Fri Feb 22 12:34:18.316 [conn14] CMD: drop test.tmp.mr.mr5_10
m30001| Fri Feb 22 12:34:18.316 [conn14] CMD: drop test.tmp.mr.mr5_10_inc
m30001| Fri Feb 22 12:34:18.317 [conn14] build index test.tmp.mr.mr5_10_inc { 0: 1 }
m30001| Fri Feb 22 12:34:18.317 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.317 [conn14] build index test.tmp.mr.mr5_10 { _id: 1 }
m30001| Fri Feb 22 12:34:18.318 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.320 [conn14] CMD: drop test.mr5_out
m30001| Fri Feb 22 12:34:18.322 [conn14] CMD: drop test.tmp.mr.mr5_10
m30001| Fri Feb 22 12:34:18.323 [conn14] CMD: drop test.tmp.mr.mr5_10
m30001| Fri Feb 22 12:34:18.323 [conn14] CMD: drop test.tmp.mr.mr5_10_inc
m30001| Fri Feb 22 12:34:18.324 [conn14] CMD: drop test.tmp.mr.mr5_10
m30001| Fri Feb 22 12:34:18.324 [conn14] CMD: drop test.tmp.mr.mr5_10_inc
m30999| Fri Feb 22 12:34:18.325 [conn1] DROP: test.mr5_out
m30001| Fri Feb 22 12:34:18.325 [conn14] CMD: drop test.mr5_out
65ms
******************************************* Test : jstests/fm1.js ...
m30999| Fri Feb 22 12:34:18.328 [conn1] DROP: test.fm1
m30001| Fri Feb 22 12:34:18.328 [conn14] CMD: drop test.fm1
m30001| Fri Feb 22 12:34:18.328 [conn14] build index test.fm1 { _id: 1 }
m30001| Fri Feb 22 12:34:18.329 [conn14] build index done. scanned 0 total records.
0 secs
3ms
******************************************* Test : jstests/removeb.js ...
m30999| Fri Feb 22 12:34:18.331 [conn1] DROP: test.jstests_removeb
m30001| Fri Feb 22 12:34:18.331 [conn14] CMD: drop test.jstests_removeb
m30001| Fri Feb 22 12:34:18.331 [conn14] build index test.jstests_removeb { _id: 1 }
m30001| Fri Feb 22 12:34:18.332 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:18.332 [conn14] info: creating collection test.jstests_removeb on add index
m30001| Fri Feb 22 12:34:18.332 [conn14] build index test.jstests_removeb { a: 1.0 }
m30001| Fri Feb 22 12:34:18.332 [conn14] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:34:19.780 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');while( db.jstests_removeb.count() == 20000 );for( i = 20000; i < 40000; ++i ) { db.jstests_removeb.insert( { a:i } ); db.getLastError(); if (i % 1000 == 0) { print( i-20000 + " of 20000 documents inserted" ); }} localhost:30999/admin
sh14696| MongoDB shell version: 2.4.0-rc1-pre-
sh14696| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:34:19.852 [mongosMain] connection accepted from 127.0.0.1:39677 #6 (2 connections now open)
m30001| Fri Feb 22 12:34:19.854 [initandlisten] connection accepted from 127.0.0.1:52532 #15 (6 connections now open)
sh14696| 0 of 20000 documents inserted
sh14696| 1000 of 20000 documents inserted
sh14696| 2000 of 20000 documents inserted
sh14696| 3000 of 20000 documents inserted
sh14696| 4000 of 20000 documents inserted
m30001| Fri Feb 22 12:34:22.548 [conn14] remove test.jstests_removeb query: { a: { $gte: 0.0 } } ndeleted:24943 keyUpdates:0 numYields: 180 locks(micros) w:1929913 2766ms
sh14696| 5000 of 20000 documents inserted
sh14696| 6000 of 20000 documents inserted
sh14696| 7000 of 20000 documents inserted
m30001| Fri Feb 22 12:34:23.728 [conn15] insert test.jstests_removeb ninserted:1 keyUpdates:0 locks(micros) w:54 151ms
m30000| Fri Feb 22 12:34:23.728 [conn10] update config.locks query: { _id: "balancer", state: 0, ts: ObjectId('512765c9881c8e7453916040') } update: { $set: { state: 1, who: "bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838:Balancer:10113", process: "bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838", when: new Date(1361536463615), why: "doing balance round", ts: ObjectId('512765cf881c8e7453916041') } } nscanned:1 nupdated:1 fastmod:1 keyUpdates:0 locks(micros) w:281 113ms
m30000| Fri Feb 22 12:34:23.728 [conn5] update config.mongos query: { _id: "bs-smartos-x86-64-1.10gen.cc:30999" } update: { $set: { ping: new Date(1361536463613), up: 60, waiting: false, mongoVersion: "2.4.0-rc1-pre-" } } idhack:1 nupdated:1 fastmod:1 keyUpdates:0 locks(micros) w:149 114ms
m30999| Fri Feb 22 12:34:23.729 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765cf881c8e7453916041
m30999| Fri Feb 22 12:34:23.729 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
sh14696| 8000 of 20000 documents inserted
sh14696| 9000 of 20000 documents inserted
sh14696| 10000 of 20000 documents inserted
sh14696| 11000 of 20000 documents inserted
sh14696| 12000 of 20000 documents inserted
sh14696| 13000 of 20000 documents inserted
sh14696| 14000 of 20000 documents inserted
sh14696| 15000 of 20000 documents inserted
sh14696| 16000 of 20000 documents inserted
sh14696| 17000 of 20000 documents inserted
sh14696| 18000 of 20000 documents inserted
sh14696| 19000 of 20000 documents inserted
sh14696| null
m30999| Fri Feb 22 12:34:27.796 [conn6] end connection 127.0.0.1:39677 (1 connection now open)
m30999| Fri Feb 22 12:34:27.803 [conn1] DROP: test.jstests_removeb
m30001| Fri Feb 22 12:34:27.803 [conn14] CMD: drop test.jstests_removeb
9477ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_exactfetch.js
******************************************* Test : jstests/objid2.js ...
m30999| Fri Feb 22 12:34:27.808 [conn1] DROP: test.objid2
m30001| Fri Feb 22 12:34:27.809 [conn14] CMD: drop test.objid2
m30001| Fri Feb 22 12:34:27.809 [conn14] build index test.objid2 { _id: 1 }
m30001| Fri Feb 22 12:34:27.810 [conn14] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/ne1.js ...
m30999| Fri Feb 22 12:34:27.812 [conn1] DROP: test.ne1
m30001| Fri Feb 22 12:34:27.812 [conn14] CMD: drop test.ne1
m30001| Fri Feb 22 12:34:27.813 [conn14] build index test.ne1 { _id: 1 }
m30001| Fri Feb 22 12:34:27.813 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:27.814 [conn14] build index test.ne1 { x: 1.0 }
m30001| Fri Feb 22 12:34:27.815 [conn14] build index done. scanned 3 total records. 0 secs
17ms
******************************************* Test : jstests/andor.js ...
m30999| Fri Feb 22 12:34:27.832 [conn1] DROP: test.jstests_andor
m30001| Fri Feb 22 12:34:27.833 [conn14] CMD: drop test.jstests_andor
m30001| Fri Feb 22 12:34:27.833 [conn14] build index test.jstests_andor { _id: 1 }
m30001| Fri Feb 22 12:34:27.834 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:27.840 [conn14] build index test.jstests_andor { a: 1.0 }
m30001| Fri Feb 22 12:34:27.840 [conn14] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:27.847 [conn1] DROP: test.jstests_andor
m30001| Fri Feb 22 12:34:27.847 [conn14] CMD: drop test.jstests_andor
m30001| Fri Feb 22 12:34:27.849 [conn14] build index test.jstests_andor { _id: 1 }
m30001| Fri Feb 22 12:34:27.850 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:27.855 [conn14] build index test.jstests_andor { a: 1.0 }
m30001| Fri Feb 22 12:34:27.856 [conn14] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:27.862 [conn1] DROP: test.jstests_andor
m30001| Fri Feb 22 12:34:27.862 [conn14] CMD: drop test.jstests_andor
m30001| Fri Feb 22 12:34:27.864 [conn14] build index test.jstests_andor { _id: 1 }
m30001| Fri Feb 22 12:34:27.864 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:27.864 [conn14] info: creating collection test.jstests_andor on add index
m30001| Fri Feb 22 12:34:27.864 [conn14] build index test.jstests_andor { a: 1.0 }
m30001| Fri Feb 22 12:34:27.864 [conn14] build index done. scanned 0 total records. 0 secs
38ms
******************************************* Test : jstests/distinct3.js ...
m30999| Fri Feb 22 12:34:27.867 [conn1] DROP: test.jstests_distinct3
m30001| Fri Feb 22 12:34:27.867 [conn14] CMD: drop test.jstests_distinct3
m30001| Fri Feb 22 12:34:27.868 [conn14] build index test.jstests_distinct3 { _id: 1 }
m30001| Fri Feb 22 12:34:27.868 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:27.868 [conn14] info: creating collection test.jstests_distinct3 on add index
m30001| Fri Feb 22 12:34:27.868 [conn14] build index test.jstests_distinct3 { a: 1.0 }
m30001| Fri Feb 22 12:34:27.869 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:27.869 [conn14] build index test.jstests_distinct3 { b: 1.0 }
m30001| Fri Feb 22 12:34:27.870 [conn14] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:34:27.995 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 0; i < 2500; ++i ) { db.jstests_distinct3.remove( { a:49 } ); for( j = 0; j < 20; ++j ) { db.jstests_distinct3.save( { a:49, c:49, d:j } ); } } // Wait for the above writes to complete. db.getLastError(); localhost:30999/admin
sh14709| MongoDB shell version: 2.4.0-rc1-pre-
sh14709| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:34:28.058 [mongosMain] connection accepted from 127.0.0.1:52930 #7 (2 connections now open)
m30999| Fri Feb 22 12:34:29.731 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765d5881c8e7453916042
m30999| Fri Feb 22 12:34:29.732 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
6494ms
******************************************* Test : jstests/indexbindata.js ...
1ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/apitest_dbcollection.js
******************************************* Test : jstests/find_and_modify_server6226.js ...
m30999| Fri Feb 22 12:34:34.362 [conn1] DROP: test.find_and_modify_server6226
m30001| Fri Feb 22 12:34:34.362 [conn14] CMD: drop test.find_and_modify_server6226
m30001| Fri Feb 22 12:34:34.364 [conn14] build index test.find_and_modify_server6226 { _id: 1 }
m30001| Fri Feb 22 12:34:34.365 [conn14] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/indexn.js ...
m30999| Fri Feb 22 12:34:34.366 [conn1] DROP: test.jstests_indexn
m30001| Fri Feb 22 12:34:34.367 [conn14] CMD: drop test.jstests_indexn
m30001| Fri Feb 22 12:34:34.368 [conn14] build index test.jstests_indexn { _id: 1 }
m30001| Fri Feb 22 12:34:34.369 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:34.369 [conn14] build index test.jstests_indexn { a: 1.0 }
m30001| Fri Feb 22 12:34:34.370 [conn14] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:34:34.370 [conn14] build index test.jstests_indexn { b: 1.0 }
m30001| Fri Feb 22 12:34:34.371 [conn14] build index done. scanned 1 total records. 0 secs
15ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped7.js
******************************************* Test : jstests/updatek.js ...
m30999| Fri Feb 22 12:34:34.381 [conn1] DROP: test.jstests_updatek
m30001| Fri Feb 22 12:34:34.381 [conn14] CMD: drop test.jstests_updatek
m30001| Fri Feb 22 12:34:34.382 [conn14] build index test.jstests_updatek { _id: 1 }
m30001| Fri Feb 22 12:34:34.382 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.383 [conn1] DROP: test.jstests_updatek
m30001| Fri Feb 22 12:34:34.384 [conn14] CMD: drop test.jstests_updatek
m30001| Fri Feb 22 12:34:34.386 [conn14] build index test.jstests_updatek { _id: 1 }
m30001| Fri Feb 22 12:34:34.386 [conn14] build index done. scanned 0 total records. 0 secs
7ms
******************************************* Test : jstests/indexx.js ...
0ms
******************************************* Test : jstests/set7.js ...
m30999| Fri Feb 22 12:34:34.388 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.388 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.389 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.390 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.392 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.392 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.394 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.394 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.396 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.396 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.397 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.397 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.398 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.404 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.407 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.407 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.408 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.408 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.409 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.410 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.411 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.411 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.412 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.413 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.414 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.414 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.416 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.416 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.417 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.417 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.419 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.419 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:34.420 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:34.420 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:34.421 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:34.422 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.211 [conn14] update test.jstests_set7 update: { $set: { a.1500000: 1.0 } } nscanned:1 nmoved:1 nupdated:1 keyUpdates:0 locks(micros) w:789574 789ms
m30999| Fri Feb 22 12:34:35.212 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:35.326 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:35.329 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:35.329 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:35.330 [conn1] DROP: test.jstests_set7
m30001| Fri Feb 22 12:34:35.330 [conn14] CMD: drop test.jstests_set7
m30001| Fri Feb 22 12:34:35.332 [conn14] build index test.jstests_set7 { _id: 1 }
m30001| Fri Feb 22 12:34:35.332 [conn14] build index done. scanned 0 total records. 0 secs
945ms
******************************************* Test : jstests/datasize2.js ...
m30999| Fri Feb 22 12:34:35.333 [conn1] DROP: test.datasize2
m30001| Fri Feb 22 12:34:35.334 [conn14] CMD: drop test.datasize2
m30001| Fri Feb 22 12:34:35.335 [conn14] build index test.datasize2 { _id: 1 }
m30001| Fri Feb 22 12:34:35.335 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:35.454 [conn7] end connection 127.0.0.1:52930 (1 connection now open)
159ms
******************************************* Test : jstests/regex8.js ...
m30999| Fri Feb 22 12:34:35.498 [conn1] DROP: test.regex8
m30001| Fri Feb 22 12:34:35.499 [conn14] CMD: drop test.regex8
m30001| Fri Feb 22 12:34:35.499 [conn14] build index test.regex8 { _id: 1 }
m30001| Fri Feb 22 12:34:35.500 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.502 [conn14] build index test.regex8 { a: 1.0 }
m30001| Fri Feb 22 12:34:35.502 [conn14] build index done. scanned 3 total records. 0 secs
12ms
******************************************* Test : jstests/update3.js ...
m30999| Fri Feb 22 12:34:35.510 [conn1] DROP: test.jstests_update3
m30001| Fri Feb 22 12:34:35.510 [conn14] CMD: drop test.jstests_update3
m30001| Fri Feb 22 12:34:35.511 [conn14] build index test.jstests_update3 { _id: 1 }
m30001| Fri Feb 22 12:34:35.512 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:35.512 [conn1] DROP: test.jstests_update3
m30001| Fri Feb 22 12:34:35.512 [conn14] CMD: drop test.jstests_update3
m30001| Fri Feb 22 12:34:35.514 [conn14] build index test.jstests_update3 { _id: 1 }
m30001| Fri Feb 22 12:34:35.515 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:35.515 [conn1] DROP: test.jstests_update3
m30001| Fri Feb 22 12:34:35.515 [conn14] CMD: drop test.jstests_update3
m30001| Fri Feb 22 12:34:35.517 [conn14] build index test.jstests_update3 { _id: 1 }
m30001| Fri Feb 22 12:34:35.517 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:35.517 [conn1] DROP: test.jstests_update3
m30001| Fri Feb 22 12:34:35.517 [conn14] CMD: drop test.jstests_update3
m30001| Fri Feb 22 12:34:35.519 [conn14] build index test.jstests_update3 { _id: 1 }
m30001| Fri Feb 22 12:34:35.519 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:35.520 [conn1] DROP: test.jstests_update3
m30001| Fri Feb 22 12:34:35.520 [conn14] CMD: drop test.jstests_update3
m30001| Fri Feb 22 12:34:35.522 [conn14] build index test.jstests_update3 { _id: 1 }
m30001| Fri Feb 22 12:34:35.522 [conn14] build index done. scanned 0 total records. 0 secs
18ms
******************************************* Test : jstests/index6.js ...
m30999| Fri Feb 22 12:34:35.526 [conn1] DROP: test.ed.db.index6
m30001| Fri Feb 22 12:34:35.527 [conn14] CMD: drop test.ed.db.index6
m30001| Fri Feb 22 12:34:35.527 [conn14] build index test.ed.db.index6 { _id: 1 }
m30001| Fri Feb 22 12:34:35.528 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.528 [conn14] build index test.ed.db.index6 { comments.name: 1.0 }
m30001| Fri Feb 22 12:34:35.528 [conn14] build index done. scanned 1 total records. 0 secs
7ms
******************************************* Test : jstests/index_check1.js ...
m30999| Fri Feb 22 12:34:35.530 [conn1] DROP: test.somecollection
m30001| Fri Feb 22 12:34:35.530 [conn14] CMD: drop test.somecollection
m30001| Fri Feb 22 12:34:35.531 [conn14] build index test.somecollection { _id: 1 }
m30001| Fri Feb 22 12:34:35.532 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.533 [conn14] build index test.somecollection { a: 1.0 }
m30001| Fri Feb 22 12:34:35.535 [conn14] build index done. scanned 1 total records. 0.001 secs
m30999| Fri Feb 22 12:34:35.536 [conn1] DROP: test.somecollection
m30001| Fri Feb 22 12:34:35.536 [conn14] CMD: drop test.somecollection
m30001| Fri Feb 22 12:34:35.539 [conn14] build index test.somecollection { _id: 1 }
m30001| Fri Feb 22 12:34:35.540 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.541 [conn14] build index test.somecollection { a: 1.0 }
m30001| Fri Feb 22 12:34:35.541 [conn14] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:34:35.542 [conn9] CMD: validate test.somecollection
m30001| Fri Feb 22 12:34:35.542 [conn9] validating index 0: test.somecollection.$_id_
m30001| Fri Feb 22 12:34:35.542 [conn9] validating index 1: test.somecollection.$a_1
14ms
******************************************* Test : jstests/orb.js ...
m30999| Fri Feb 22 12:34:35.548 [conn1] DROP: test.jstests_orb
m30001| Fri Feb 22 12:34:35.548 [conn14] CMD: drop test.jstests_orb
m30001| Fri Feb 22 12:34:35.548 [conn14] build index test.jstests_orb { _id: 1 }
m30001| Fri Feb 22 12:34:35.549 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.549 [conn14] build index test.jstests_orb { a: -1.0 }
m30001| Fri Feb 22 12:34:35.550 [conn14] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:35.551 [conn1] DROP: test.jstests_orb
m30001| Fri Feb 22 12:34:35.551 [conn14] CMD: drop test.jstests_orb
m30001| Fri Feb 22 12:34:35.554 [conn14] build index test.jstests_orb { _id: 1 }
m30001| Fri Feb 22 12:34:35.555 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:35.555 [conn14] build index test.jstests_orb { a: 1.0, b: -1.0 }
m30001| Fri Feb 22 12:34:35.556 [conn14] build index done. scanned 1 total records. 0 secs
14ms
******************************************* Test : jstests/exists7.js ...
m30999| Fri Feb 22 12:34:35.558 [conn1] DROP: test.jstests_explain7 m30001| Fri Feb 22 12:34:35.558 [conn14] CMD: drop test.jstests_explain7 m30001| Fri Feb 22 12:34:35.561 [conn14] build index test.jstests_explain7 { _id: 1 } m30001| Fri Feb 22 12:34:35.562 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.563 [conn14] build index test.jstests_explain7 { b: 1.0 } m30001| Fri Feb 22 12:34:35.563 [conn14] build index done. scanned 5 total records. 0 secs 8ms ******************************************* Test : jstests/coveredIndex4.js ... m30999| Fri Feb 22 12:34:35.565 [conn1] DROP: test.jstests_coveredIndex4 m30001| Fri Feb 22 12:34:35.566 [conn14] CMD: drop test.jstests_coveredIndex4 m30001| Fri Feb 22 12:34:35.566 [conn14] build index test.jstests_coveredIndex4 { _id: 1 } m30001| Fri Feb 22 12:34:35.567 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.567 [conn14] info: creating collection test.jstests_coveredIndex4 on add index m30001| Fri Feb 22 12:34:35.567 [conn14] build index test.jstests_coveredIndex4 { a: 1.0 } m30001| Fri Feb 22 12:34:35.567 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.567 [conn14] build index test.jstests_coveredIndex4 { b: 1.0 } m30001| Fri Feb 22 12:34:35.568 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:35.687 [conn14] query test.jstests_coveredIndex4 query: { $or: [ { a: 0.0 }, { b: 1.0 }, { a: 2.0 }, { b: 3.0 }, { a: 4.0 }, { b: 5.0 }, { a: 6.0 }, { b: 7.0 }, { a: 8.0 }, { b: 9.0 }, { a: 10.0 }, { b: 11.0 }, { a: 12.0 }, { b: 13.0 }, { a: 14.0 }, { b: 15.0 }, { a: 16.0 }, { b: 17.0 }, { a: 18.0 }, { b: 19.0 }, { a: 20.0 }, { b: 21.0 }, { a: 22.0 }, { b: 23.0 }, { a: 24.0 }, { b: 25.0 }, { a: 26.0 }, { b: 27.0 }, { a: 28.0 }, { b: 29.0 }, { a: 30.0 }, { b: 31.0 }, { a: 32.0 }, { b: 33.0 }, { a: 34.0 }, { b: 35.0 }, { a: 36.0 }, { b: 37.0 }, { a: 38.0 }, { b: 39.0 }, { a: 40.0 }, { b: 41.0 }, { a: 42.0 }, { b: 43.0 }, { a: 44.0 }, { b: 45.0 }, { a: 46.0 }, { b: 47.0 }, { a: 48.0 }, { b: 49.0 }, { a: 50.0 }, { b: 51.0 }, { a: 52.0 }, { b: 53.0 }, { a: 54.0 }, { b: 55.0 }, { a: 56.0 }, { b: 57.0 }, { a: 58.0 }, { b: 59.0 }, { a: 60.0 }, { b: 61.0 }, { a: 62.0 }, { b: 63.0 }, { a: 64.0 }, { b: 65.0 }, { a: 66.0 }, { b: 67.0 }, { a: 68.0 }, { b: 69.0 }, { a: 70.0 }, { b: 71.0 }, { a: 72.0 }, { b: 73.0 }, { a: 74.0 }, { b: 75.0 }, { a: 76.0 }, { b: 77.0 }, { a: 78.0 }, { b: 79.0 }, { a: 80.0 }, { b: 81.0 }, { a: 82.0 }, { b: 83.0 }, { a: 84.0 }, { b: 85.0 }, { a: 86.0 }, { b: 87.0 }, { a: 88.0 }, { b: 89.0 }, { a: 90.0 }, { b: 91.0 }, { a: 92.0 }, { b: 93.0 }, { a: 94.0 }, { b: 95.0 }, { a: 96.0 }, { b: 97.0 }, { a: 98.0 }, { b: 99.0 }, { a: 100.0 }, { b: 101.0 }, { a: 102.0 }, { b: 103.0 }, { a: 104.0 }, { b: 105.0 }, { a: 106.0 }, { b: 107.0 }, { a: 108.0 }, { b: 109.0 }, { a: 110.0 }, { b: 111.0 }, { a: 112.0 }, { b: 113.0 }, { a: 114.0 }, { b: 115.0 }, { a: 116.0 }, { b: 117.0 }, { a: 118.0 }, { b: 119.0 }, { a: 120.0 }, { b: 121.0 }, { a: 122.0 }, { b: 123.0 }, { a: 124.0 }, { b: 125.0 }, { a: 126.0 }, { b: 127.0 }, { a: 128.0 }, { b: 129.0 }, { a: 130.0 }, { b: 131.0 }, { a: 132.0 }, { b: 133.0 }, { a: 134.0 }, { b: 135.0 }, { a: 136.0 }, { b: 137.0 }, { a: 138.0 }, { b: 139.0 }, { a: 140.0 }, { b: 141.0 }, { a: 142.0 }, { b: 143.0 
}, { a: 144.0 }, { b: 145.0 }, { a: 146.0 }, { b: 147.0 }, { a: 148.0 }, { b: 149.0 }, { a: 150.0 }, { b: 151.0 }, { a: 152.0 }, { b: 153.0 }, { a: 154.0 }, { b: 155.0 }, { a: 156.0 }, { b: 157.0 }, { a: 158.0 }, { b: 159.0 }, { a: 160.0 }, { b: 161.0 }, { a: 162.0 }, { b: 163.0 }, { a: 164.0 }, { b: 165.0 }, { a: 166.0 }, { b: 167.0 }, { a: 168.0 }, { b: 169.0 }, { a: 170.0 }, { b: 171.0 }, { a: 172.0 }, { b: 173.0 }, { a: 174.0 }, { b: 175.0 }, { a: 176.0 }, { b: 177.0 }, { a: 178.0 }, { b: 179.0 }, { a: 180.0 }, { b: 181.0 }, { a: 182.0 }, { b: 183.0 }, { a: 184.0 }, { b: 185.0 }, { a: 186.0 }, { b: 187.0 }, { a: 188.0 }, { b: 189.0 }, { a: 190.0 }, { b: 191.0 }, { a: 192.0 }, { b: 193.0 }, { a: 194.0 }, { b: 195.0 }, { a: 196.0 }, { b: 197.0 }, { a: 198.0 }, { b: 199.0 } ] } cursorid:309475396383425 ntoreturn:0 ntoskip:0 nscanned:102 keyUpdates:0 locks(micros) r:105109 nreturned:101 reslen:1086 105ms m30999| Fri Feb 22 12:34:35.734 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765db881c8e7453916043 m30999| Fri Feb 22 12:34:35.734 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
m30001| Fri Feb 22 12:34:35.878 [conn14] query test.jstests_coveredIndex4 query: { $or: [ { a: 0.0 }, { b: 1.0 }, { a: 2.0 }, { b: 3.0 }, { a: 4.0 }, { b: 5.0 }, { a: 6.0 }, { b: 7.0 }, { a: 8.0 }, { b: 9.0 }, { a: 10.0 }, { b: 11.0 }, { a: 12.0 }, { b: 13.0 }, { a: 14.0 }, { b: 15.0 }, { a: 16.0 }, { b: 17.0 }, { a: 18.0 }, { b: 19.0 }, { a: 20.0 }, { b: 21.0 }, { a: 22.0 }, { b: 23.0 }, { a: 24.0 }, { b: 25.0 }, { a: 26.0 }, { b: 27.0 }, { a: 28.0 }, { b: 29.0 }, { a: 30.0 }, { b: 31.0 }, { a: 32.0 }, { b: 33.0 }, { a: 34.0 }, { b: 35.0 }, { a: 36.0 }, { b: 37.0 }, { a: 38.0 }, { b: 39.0 }, { a: 40.0 }, { b: 41.0 }, { a: 42.0 }, { b: 43.0 }, { a: 44.0 }, { b: 45.0 }, { a: 46.0 }, { b: 47.0 }, { a: 48.0 }, { b: 49.0 }, { a: 50.0 }, { b: 51.0 }, { a: 52.0 }, { b: 53.0 }, { a: 54.0 }, { b: 55.0 }, { a: 56.0 }, { b: 57.0 }, { a: 58.0 }, { b: 59.0 }, { a: 60.0 }, { b: 61.0 }, { a: 62.0 }, { b: 63.0 }, { a: 64.0 }, { b: 65.0 }, { a: 66.0 }, { b: 67.0 }, { a: 68.0 }, { b: 69.0 }, { a: 70.0 }, { b: 71.0 }, { a: 72.0 }, { b: 73.0 }, { a: 74.0 }, { b: 75.0 }, { a: 76.0 }, { b: 77.0 }, { a: 78.0 }, { b: 79.0 }, { a: 80.0 }, { b: 81.0 }, { a: 82.0 }, { b: 83.0 }, { a: 84.0 }, { b: 85.0 }, { a: 86.0 }, { b: 87.0 }, { a: 88.0 }, { b: 89.0 }, { a: 90.0 }, { b: 91.0 }, { a: 92.0 }, { b: 93.0 }, { a: 94.0 }, { b: 95.0 }, { a: 96.0 }, { b: 97.0 }, { a: 98.0 }, { b: 99.0 }, { a: 100.0 }, { b: 101.0 }, { a: 102.0 }, { b: 103.0 }, { a: 104.0 }, { b: 105.0 }, { a: 106.0 }, { b: 107.0 }, { a: 108.0 }, { b: 109.0 }, { a: 110.0 }, { b: 111.0 }, { a: 112.0 }, { b: 113.0 }, { a: 114.0 }, { b: 115.0 }, { a: 116.0 }, { b: 117.0 }, { a: 118.0 }, { b: 119.0 }, { a: 120.0 }, { b: 121.0 }, { a: 122.0 }, { b: 123.0 }, { a: 124.0 }, { b: 125.0 }, { a: 126.0 }, { b: 127.0 }, { a: 128.0 }, { b: 129.0 }, { a: 130.0 }, { b: 131.0 }, { a: 132.0 }, { b: 133.0 }, { a: 134.0 }, { b: 135.0 }, { a: 136.0 }, { b: 137.0 }, { a: 138.0 }, { b: 139.0 }, { a: 140.0 }, { b: 141.0 }, { a: 142.0 }, { b: 143.0 }, { 
a: 144.0 }, { b: 145.0 }, { a: 146.0 }, { b: 147.0 }, { a: 148.0 }, { b: 149.0 }, { a: 150.0 }, { b: 151.0 }, { a: 152.0 }, { b: 153.0 }, { a: 154.0 }, { b: 155.0 }, { a: 156.0 }, { b: 157.0 }, { a: 158.0 }, { b: 159.0 }, { a: 160.0 }, { b: 161.0 }, { a: 162.0 }, { b: 163.0 }, { a: 164.0 }, { b: 165.0 }, { a: 166.0 }, { b: 167.0 }, { a: 168.0 }, { b: 169.0 }, { a: 170.0 }, { b: 171.0 }, { a: 172.0 }, { b: 173.0 }, { a: 174.0 }, { b: 175.0 }, { a: 176.0 }, { b: 177.0 }, { a: 178.0 }, { b: 179.0 }, { a: 180.0 }, { b: 181.0 }, { a: 182.0 }, { b: 183.0 }, { a: 184.0 }, { b: 185.0 }, { a: 186.0 }, { b: 187.0 }, { a: 188.0 }, { b: 189.0 }, { a: 190.0 }, { b: 191.0 }, { a: 192.0 }, { b: 193.0 }, { a: 194.0 }, { b: 195.0 }, { a: 196.0 }, { b: 197.0 }, { a: 198.0 }, { b: 199.0 } ] } cursorid:310291195845264 ntoreturn:0 ntoskip:0 nscanned:102 keyUpdates:0 locks(micros) r:113307 nreturned:101 reslen:1075 113ms 403ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_polygon1.js ******************************************* Test : jstests/queryoptimizer4.js ... m30999| Fri Feb 22 12:34:35.969 [conn1] DROP: test.jstests_queryoptimizer4 m30001| Fri Feb 22 12:34:35.970 [conn14] CMD: drop test.jstests_queryoptimizer4 m30001| Fri Feb 22 12:34:35.970 [conn14] build index test.jstests_queryoptimizer4 { _id: 1 } m30001| Fri Feb 22 12:34:35.971 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.971 [conn14] info: creating collection test.jstests_queryoptimizer4 on add index m30001| Fri Feb 22 12:34:35.971 [conn14] build index test.jstests_queryoptimizer4 { a: 1.0 } m30001| Fri Feb 22 12:34:35.972 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.972 [conn14] build index test.jstests_queryoptimizer4 { b: 1.0 } m30001| Fri Feb 22 12:34:35.973 [conn14] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:34:35.981 [conn1] DROP: test.jstests_queryoptimizer4 m30001| Fri Feb 22 12:34:35.981 [conn14] CMD: drop test.jstests_queryoptimizer4 m30001| Fri Feb 22 12:34:35.984 [conn14] build index test.jstests_queryoptimizer4 { _id: 1 } m30001| Fri Feb 22 12:34:35.985 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.985 [conn14] info: creating collection test.jstests_queryoptimizer4 on add index m30001| Fri Feb 22 12:34:35.985 [conn14] build index test.jstests_queryoptimizer4 { a: 1.0 } m30001| Fri Feb 22 12:34:35.985 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.986 [conn14] build index test.jstests_queryoptimizer4 { b: 1.0 } m30001| Fri Feb 22 12:34:35.986 [conn14] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:35.991 [conn1] DROP: test.jstests_queryoptimizer4 m30001| Fri Feb 22 12:34:35.991 [conn14] CMD: drop test.jstests_queryoptimizer4 m30001| Fri Feb 22 12:34:35.994 [conn14] build index test.jstests_queryoptimizer4 { _id: 1 } m30001| Fri Feb 22 12:34:35.995 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.995 [conn14] info: creating collection test.jstests_queryoptimizer4 on add index m30001| Fri Feb 22 12:34:35.995 [conn14] build index test.jstests_queryoptimizer4 { a: 1.0 } m30001| Fri Feb 22 12:34:35.995 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:35.995 [conn14] build index test.jstests_queryoptimizer4 { b: 1.0 } m30001| Fri Feb 22 12:34:35.996 [conn14] build index done. scanned 0 total records. 0 secs 32ms ******************************************* Test : jstests/skip1.js ... 1ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/profile4.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/fsync.js ******************************************* Test : jstests/group1.js ... 
m30999| Fri Feb 22 12:34:36.004 [conn1] DROP: test.group1 m30001| Fri Feb 22 12:34:36.004 [conn14] CMD: drop test.group1 m30001| Fri Feb 22 12:34:36.005 [conn14] build index test.group1 { _id: 1 } m30001| Fri Feb 22 12:34:36.005 [conn14] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:36.079 [conn1] DROP: test.group1 m30001| Fri Feb 22 12:34:36.079 [conn14] CMD: drop test.group1 m30001| Fri Feb 22 12:34:36.081 [conn14] build index test.group1 { _id: 1 } m30001| Fri Feb 22 12:34:36.082 [conn14] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:36.103 [conn1] DROP: test.group1 m30001| Fri Feb 22 12:34:36.103 [conn14] CMD: drop test.group1 m30001| Fri Feb 22 12:34:36.105 [conn14] build index test.group1 { _id: 1 } m30001| Fri Feb 22 12:34:36.105 [conn14] build index done. scanned 0 total records. 0 secs 115ms ******************************************* Test : jstests/all2.js ... m30999| Fri Feb 22 12:34:36.121 [conn1] DROP: test.all2 m30001| Fri Feb 22 12:34:36.121 [conn14] CMD: drop test.all2 m30001| Fri Feb 22 12:34:36.122 [conn14] build index test.all2 { _id: 1 } m30001| Fri Feb 22 12:34:36.123 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.129 [conn14] build index test.all2 { a.x: 1.0 } m30001| Fri Feb 22 12:34:36.130 [conn14] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:34:36.137 [conn1] DROP: test.all2 m30001| Fri Feb 22 12:34:36.137 [conn14] CMD: drop test.all2 m30001| Fri Feb 22 12:34:36.139 [conn14] build index test.all2 { _id: 1 } m30001| Fri Feb 22 12:34:36.140 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.146 [conn14] build index test.all2 { a: 1.0 } m30001| Fri Feb 22 12:34:36.146 [conn14] build index done. scanned 3 total records. 
0 secs m30999| Fri Feb 22 12:34:36.153 [conn1] DROP: test.all2 m30001| Fri Feb 22 12:34:36.153 [conn14] CMD: drop test.all2 m30001| Fri Feb 22 12:34:36.156 [conn14] build index test.all2 { _id: 1 } m30001| Fri Feb 22 12:34:36.156 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.157 [conn14] build index test.all2 { name: 1.0 } m30001| Fri Feb 22 12:34:36.157 [conn14] build index done. scanned 1 total records. 0 secs 42ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_center_sphere2.js ******************************************* Test : jstests/update_invalid1.js ... m30999| Fri Feb 22 12:34:36.164 [conn1] DROP: test.update_invalid1 m30001| Fri Feb 22 12:34:36.164 [conn14] CMD: drop test.update_invalid1 7ms ******************************************* Test : jstests/mr1.js ... m30999| Fri Feb 22 12:34:36.166 [conn1] DROP: test.mr1 m30001| Fri Feb 22 12:34:36.166 [conn14] CMD: drop test.mr1 m30001| Fri Feb 22 12:34:36.167 [conn14] build index test.mr1 { _id: 1 } m30001| Fri Feb 22 12:34:36.167 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.169 [conn14] CMD: drop test.tmp.mr.mr1_11 m30001| Fri Feb 22 12:34:36.169 [conn14] CMD: drop test.tmp.mr.mr1_11_inc m30001| Fri Feb 22 12:34:36.169 [conn14] build index test.tmp.mr.mr1_11_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.169 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.170 [conn14] build index test.tmp.mr.mr1_11 { _id: 1 } m30001| Fri Feb 22 12:34:36.170 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.173 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.176 [conn14] CMD: drop test.tmp.mr.mr1_11 m30001| Fri Feb 22 12:34:36.176 [conn14] CMD: drop test.tmp.mr.mr1_11 m30001| Fri Feb 22 12:34:36.176 [conn14] CMD: drop test.tmp.mr.mr1_11_inc m30001| Fri Feb 22 12:34:36.178 [conn14] CMD: drop test.tmp.mr.mr1_11 m30001| Fri Feb 22 12:34:36.178 [conn14] CMD: drop test.tmp.mr.mr1_11_inc { "result" : "mr1_out", "timeMillis" : 8, "counts" : { "input" : 4, "emit" : 8, "reduce" : 3, "output" : 3 }, "ok" : 1 } { "_id" : "a", "value" : { "count" : 2 } } { "_id" : "b", "value" : { "count" : 3 } } { "_id" : "c", "value" : { "count" : 3 } } { "a" : 2, "b" : 3, "c" : 3 } m30999| Fri Feb 22 12:34:36.181 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.181 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.183 [conn14] CMD: drop test.tmp.mr.mr1_12 m30001| Fri Feb 22 12:34:36.183 [conn14] CMD: drop test.tmp.mr.mr1_12_inc m30001| Fri Feb 22 12:34:36.184 [conn14] build index test.tmp.mr.mr1_12_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.184 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.184 [conn14] build index test.tmp.mr.mr1_12 { _id: 1 } m30001| Fri Feb 22 12:34:36.185 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.186 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.190 [conn14] CMD: drop test.tmp.mr.mr1_12 m30001| Fri Feb 22 12:34:36.190 [conn14] CMD: drop test.tmp.mr.mr1_12 m30001| Fri Feb 22 12:34:36.190 [conn14] CMD: drop test.tmp.mr.mr1_12_inc m30001| Fri Feb 22 12:34:36.191 [conn14] CMD: drop test.tmp.mr.mr1_12 m30001| Fri Feb 22 12:34:36.191 [conn14] CMD: drop test.tmp.mr.mr1_12_inc { "result" : "mr1_out", "timeMillis" : 7, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1 } m30999| Fri Feb 22 12:34:36.193 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.193 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.195 [conn14] CMD: drop test.tmp.mr.mr1_13 m30001| Fri Feb 22 12:34:36.196 [conn14] CMD: drop test.tmp.mr.mr1_13_inc m30001| Fri Feb 22 12:34:36.196 [conn14] build index test.tmp.mr.mr1_13_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.196 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.197 [conn14] build index test.tmp.mr.mr1_13 { _id: 1 } m30001| Fri Feb 22 12:34:36.197 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.199 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.202 [conn14] CMD: drop test.tmp.mr.mr1_13 m30001| Fri Feb 22 12:34:36.202 [conn14] CMD: drop test.tmp.mr.mr1_13 m30001| Fri Feb 22 12:34:36.202 [conn14] CMD: drop test.tmp.mr.mr1_13_inc m30001| Fri Feb 22 12:34:36.204 [conn14] CMD: drop test.tmp.mr.mr1_13 m30001| Fri Feb 22 12:34:36.204 [conn14] CMD: drop test.tmp.mr.mr1_13_inc { "result" : "mr1_out", "timeMillis" : 7, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1 } m30999| Fri Feb 22 12:34:36.205 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.205 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.208 [conn14] CMD: drop test.tmp.mr.mr1_14 m30001| Fri Feb 22 12:34:36.208 [conn14] CMD: drop test.tmp.mr.mr1_14_inc m30001| Fri Feb 22 12:34:36.208 [conn14] build index test.tmp.mr.mr1_14_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.208 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.209 [conn14] build index test.tmp.mr.mr1_14 { _id: 1 } m30001| Fri Feb 22 12:34:36.209 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.211 [conn14] CMD: drop test.mr1_foo m30001| Fri Feb 22 12:34:36.214 [conn14] CMD: drop test.tmp.mr.mr1_14 m30001| Fri Feb 22 12:34:36.214 [conn14] CMD: drop test.tmp.mr.mr1_14 m30001| Fri Feb 22 12:34:36.214 [conn14] CMD: drop test.tmp.mr.mr1_14_inc m30001| Fri Feb 22 12:34:36.216 [conn14] CMD: drop test.tmp.mr.mr1_14 m30001| Fri Feb 22 12:34:36.216 [conn14] CMD: drop test.tmp.mr.mr1_14_inc { "result" : "mr1_foo", "timeMillis" : 7, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1 } m30999| Fri Feb 22 12:34:36.217 [conn1] DROP: test.mr1_foo m30001| Fri Feb 22 12:34:36.217 [conn14] CMD: drop test.mr1_foo m30001| Fri Feb 22 12:34:36.295 [conn14] CMD: drop test.tmp.mr.mr1_15 m30001| Fri Feb 22 12:34:36.295 [conn14] CMD: drop test.tmp.mr.mr1_15_inc m30001| Fri Feb 22 12:34:36.295 [conn14] build index test.tmp.mr.mr1_15_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.295 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.296 [conn14] build index test.tmp.mr.mr1_15 { _id: 1 } m30001| Fri Feb 22 12:34:36.296 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.379 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.383 [conn14] CMD: drop test.tmp.mr.mr1_15 m30001| Fri Feb 22 12:34:36.383 [conn14] CMD: drop test.tmp.mr.mr1_15 m30001| Fri Feb 22 12:34:36.383 [conn14] CMD: drop test.tmp.mr.mr1_15_inc m30001| Fri Feb 22 12:34:36.386 [conn14] CMD: drop test.tmp.mr.mr1_15 m30001| Fri Feb 22 12:34:36.386 [conn14] CMD: drop test.tmp.mr.mr1_15_inc { "result" : "mr1_out", "timeMillis" : 89, "counts" : { "input" : 999, "emit" : 1998, "reduce" : 22, "output" : 4 }, "ok" : 1 } { "_id" : "a", "value" : { "count" : 2 } } { "_id" : "b", "value" : { "count" : 998 } } { "_id" : "c", "value" : { "count" : 3 } } { "_id" : "d", "value" : { "count" : 995 } } m30999| Fri Feb 22 12:34:36.390 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.391 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.394 [conn14] CMD: drop test.tmp.mr.mr1_16 m30001| Fri Feb 22 12:34:36.394 [conn14] CMD: drop test.tmp.mr.mr1_16_inc m30001| Fri Feb 22 12:34:36.395 [conn14] build index test.tmp.mr.mr1_16_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.396 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.396 [conn14] build index test.tmp.mr.mr1_16 { _id: 1 } m30001| Fri Feb 22 12:34:36.396 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.467 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.471 [conn14] CMD: drop test.tmp.mr.mr1_16 m30001| Fri Feb 22 12:34:36.471 [conn14] CMD: drop test.tmp.mr.mr1_16 m30001| Fri Feb 22 12:34:36.471 [conn14] CMD: drop test.tmp.mr.mr1_16_inc m30001| Fri Feb 22 12:34:36.473 [conn14] CMD: drop test.tmp.mr.mr1_16 m30001| Fri Feb 22 12:34:36.473 [conn14] CMD: drop test.tmp.mr.mr1_16_inc { "result" : "mr1_out", "timeMillis" : 77, "timing" : { "mapTime" : 56, "emitLoop" : 72, "reduceTime" : 9, "mode" : "mixed", "total" : 77 }, "counts" : { "input" : 999, "emit" : 1998, "reduce" : 22, "output" : 4 }, "ok" : 1 } m30001| Fri Feb 22 12:34:36.475 [conn14] CMD: drop test.tmp.mr.mr1_17 m30001| Fri Feb 22 12:34:36.476 [conn14] CMD: drop test.tmp.mr.mr1_17_inc m30001| Fri Feb 22 12:34:36.476 [conn14] build index test.tmp.mr.mr1_17_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.476 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.477 [conn14] build index test.tmp.mr.mr1_17 { _id: 1 } m30001| Fri Feb 22 12:34:36.477 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.545 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.551 [conn14] CMD: drop test.tmp.mr.mr1_17 m30001| Fri Feb 22 12:34:36.551 [conn14] CMD: drop test.tmp.mr.mr1_17 m30001| Fri Feb 22 12:34:36.551 [conn14] CMD: drop test.tmp.mr.mr1_17_inc m30001| Fri Feb 22 12:34:36.553 [conn14] CMD: drop test.tmp.mr.mr1_17 m30001| Fri Feb 22 12:34:36.553 [conn14] CMD: drop test.tmp.mr.mr1_17_inc m30999| Fri Feb 22 12:34:36.554 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.554 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.556 [conn14] CMD: drop test.tmp.mr.mr1_18 m30001| Fri Feb 22 12:34:36.556 [conn14] CMD: drop test.tmp.mr.mr1_18_inc m30001| Fri Feb 22 12:34:36.557 [conn14] build index test.tmp.mr.mr1_18_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.557 [conn14] build index done. 
scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.557 [conn14] build index test.tmp.mr.mr1_18 { _id: 1 } m30001| Fri Feb 22 12:34:36.558 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.628 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.631 [conn14] CMD: drop test.tmp.mr.mr1_18 m30001| Fri Feb 22 12:34:36.631 [conn14] CMD: drop test.tmp.mr.mr1_18 m30001| Fri Feb 22 12:34:36.631 [conn14] CMD: drop test.tmp.mr.mr1_18_inc m30001| Fri Feb 22 12:34:36.633 [conn14] CMD: drop test.tmp.mr.mr1_18 m30001| Fri Feb 22 12:34:36.633 [conn14] CMD: drop test.tmp.mr.mr1_18_inc m30999| Fri Feb 22 12:34:36.634 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.634 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.637 [conn14] CMD: drop test.tmp.mr.mr1_19 m30001| Fri Feb 22 12:34:36.637 [conn14] CMD: drop test.tmp.mr.mr1_19_inc m30001| Fri Feb 22 12:34:36.637 [conn14] build index test.tmp.mr.mr1_19_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.638 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.638 [conn14] build index test.tmp.mr.mr1_19 { _id: 1 } m30001| Fri Feb 22 12:34:36.638 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.708 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.712 [conn14] CMD: drop test.tmp.mr.mr1_19 m30001| Fri Feb 22 12:34:36.712 [conn14] CMD: drop test.tmp.mr.mr1_19 m30001| Fri Feb 22 12:34:36.712 [conn14] CMD: drop test.tmp.mr.mr1_19_inc m30001| Fri Feb 22 12:34:36.714 [conn14] CMD: drop test.tmp.mr.mr1_19 m30001| Fri Feb 22 12:34:36.714 [conn14] CMD: drop test.tmp.mr.mr1_19_inc m30999| Fri Feb 22 12:34:36.715 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.715 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.717 [conn14] CMD: drop test.tmp.mr.mr1_20 m30001| Fri Feb 22 12:34:36.718 [conn14] CMD: drop test.tmp.mr.mr1_20_inc m30001| Fri Feb 22 12:34:36.718 [conn14] build index test.tmp.mr.mr1_20_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.718 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.718 [conn14] build index test.tmp.mr.mr1_20 { _id: 1 } m30001| Fri Feb 22 12:34:36.719 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.795 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.799 [conn14] CMD: drop test.tmp.mr.mr1_20 m30001| Fri Feb 22 12:34:36.799 [conn14] CMD: drop test.tmp.mr.mr1_20 m30001| Fri Feb 22 12:34:36.799 [conn14] CMD: drop test.tmp.mr.mr1_20_inc m30001| Fri Feb 22 12:34:36.801 [conn14] CMD: drop test.tmp.mr.mr1_20 m30001| Fri Feb 22 12:34:36.801 [conn14] CMD: drop test.tmp.mr.mr1_20_inc m30999| Fri Feb 22 12:34:36.806 [conn1] DROP: test.mr1_out m30001| Fri Feb 22 12:34:36.806 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.837 [conn14] CMD: drop test.tmp.mr.mr1_21 m30001| Fri Feb 22 12:34:36.837 [conn14] CMD: drop test.tmp.mr.mr1_21_inc m30001| Fri Feb 22 12:34:36.838 [conn14] build index test.tmp.mr.mr1_21_inc { 0: 1 } m30001| Fri Feb 22 12:34:36.838 [conn14] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:36.838 [conn14] build index test.tmp.mr.mr1_21 { _id: 1 } m30001| Fri Feb 22 12:34:36.839 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:36.915 [conn14] CMD: drop test.mr1_out m30001| Fri Feb 22 12:34:36.918 [conn14] CMD: drop test.tmp.mr.mr1_21 m30001| Fri Feb 22 12:34:36.918 [conn14] CMD: drop test.tmp.mr.mr1_21 m30001| Fri Feb 22 12:34:36.918 [conn14] CMD: drop test.tmp.mr.mr1_21_inc m30001| Fri Feb 22 12:34:36.920 [conn14] CMD: drop test.tmp.mr.mr1_21 m30001| Fri Feb 22 12:34:36.920 [conn14] CMD: drop test.tmp.mr.mr1_21_inc m30001| Fri Feb 22 12:34:36.920 [conn14] command test.$cmd command: { mapreduce: "mr1", map: function (){ m30001| this.tags.forEach( m30001| function(z){ m30001| e..., reduce: function ( key , values ){ m30001| var total = 0; m30001| for ( var i=0; i< >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_multikey1.js ******************************************* Test : jstests/slice1.js ... m30999| Fri Feb 22 12:34:49.010 [conn1] DROP: test.slice1 m30001| Fri Feb 22 12:34:49.010 [conn14] CMD: drop test.slice1 m30001| Fri Feb 22 12:34:49.011 [conn14] build index test.slice1 { _id: 1 } m30001| Fri Feb 22 12:34:49.012 [conn14] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:49.014 [conn1] DROP: test.slice1 m30001| Fri Feb 22 12:34:49.014 [conn14] CMD: drop test.slice1 m30001| Fri Feb 22 12:34:49.017 [conn14] build index test.slice1 { _id: 1 } m30001| Fri Feb 22 12:34:49.017 [conn14] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:49.018 [conn1] DROP: test.slice1 m30001| Fri Feb 22 12:34:49.019 [conn14] CMD: drop test.slice1 m30001| Fri Feb 22 12:34:49.021 [conn14] build index test.slice1 { _id: 1 } m30001| Fri Feb 22 12:34:49.023 [conn14] build index done. scanned 0 total records. 0.001 secs 15ms ******************************************* Test : jstests/countb.js ... 
m30999| Fri Feb 22 12:34:49.025 [conn1] DROP: test.jstests_countb m30001| Fri Feb 22 12:34:49.025 [conn14] CMD: drop test.jstests_countb m30001| Fri Feb 22 12:34:49.025 [conn14] build index test.jstests_countb { _id: 1 } m30001| Fri Feb 22 12:34:49.026 [conn14] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:34:49.026 [conn14] info: creating collection test.jstests_countb on add index m30001| Fri Feb 22 12:34:49.027 [conn14] build index test.jstests_countb { a: 1.0 } m30001| Fri Feb 22 12:34:49.027 [conn14] build index done. scanned 0 total records. 0 secs 35ms ******************************************* Test : jstests/eval7.js ... m30001| Fri Feb 22 12:34:49.064 [conn14] JavaScript execution failed: SyntaxError: Unexpected token ; 5ms ******************************************* Test : jstests/explain1.js ... m30999| Fri Feb 22 12:34:49.066 [conn1] DROP: test.explain1 m30001| Fri Feb 22 12:34:49.066 [conn14] CMD: drop test.explain1 m30001| Fri Feb 22 12:34:49.067 [conn14] build index test.explain1 { _id: 1 } m30001| Fri Feb 22 12:34:49.067 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:49.075 [conn14] build index test.explain1 { x: 1.0 } m30001| Fri Feb 22 12:34:49.076 [conn14] build index done. scanned 100 total records. 0 secs 15ms ******************************************* Test : jstests/sortf.js ... m30999| Fri Feb 22 12:34:49.081 [conn1] DROP: test.jstests_sortf m30001| Fri Feb 22 12:34:49.081 [conn14] CMD: drop test.jstests_sortf m30001| Fri Feb 22 12:34:49.081 [conn14] build index test.jstests_sortf { _id: 1 } m30001| Fri Feb 22 12:34:49.082 [conn14] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:49.082 [conn14] info: creating collection test.jstests_sortf on add index m30001| Fri Feb 22 12:34:49.082 [conn14] build index test.jstests_sortf { a: 1.0 } m30001| Fri Feb 22 12:34:49.083 [conn14] build index done. scanned 0 total records. 
0 secs
m30001| Fri Feb 22 12:34:49.083 [conn14] build index test.jstests_sortf { b: 1.0 }
m30001| Fri Feb 22 12:34:49.084 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:49.631 [conn1] DROP: test.jstests_sortf
m30001| Fri Feb 22 12:34:49.631 [conn14] CMD: drop test.jstests_sortf
557ms
>>>>>>>>>>>>>>> skipping jstests/replsets
******************************************* Test : jstests/js9.js ...
m30999| Fri Feb 22 12:34:49.638 [conn1] DROP: test.jstests_js9
m30001| Fri Feb 22 12:34:49.639 [conn14] CMD: drop test.jstests_js9
m30001| Fri Feb 22 12:34:49.640 [conn14] build index test.jstests_js9 { _id: 1 }
m30001| Fri Feb 22 12:34:49.641 [conn14] build index done. scanned 0 total records. 0 secs
47ms
******************************************* Test : jstests/sort_numeric.js ...
m30999| Fri Feb 22 12:34:49.689 [conn1] DROP: test.sort_numeric
m30001| Fri Feb 22 12:34:49.690 [conn14] CMD: drop test.sort_numeric
m30001| Fri Feb 22 12:34:49.691 [conn14] build index test.sort_numeric { _id: 1 }
m30001| Fri Feb 22 12:34:49.692 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:49.694 [conn14] build index test.sort_numeric { a: 1.0 }
m30001| Fri Feb 22 12:34:49.694 [conn14] build index done. scanned 8 total records. 0 secs
m30001| Fri Feb 22 12:34:49.697 [conn9] CMD: validate test.sort_numeric
m30001| Fri Feb 22 12:34:49.697 [conn9] validating index 0: test.sort_numeric.$_id_
m30001| Fri Feb 22 12:34:49.697 [conn9] validating index 1: test.sort_numeric.$a_1
12ms
******************************************* Test : jstests/inc3.js ...
m30999| Fri Feb 22 12:34:49.700 [conn1] DROP: test.inc3
m30001| Fri Feb 22 12:34:49.700 [conn14] CMD: drop test.inc3
m30001| Fri Feb 22 12:34:49.701 [conn14] build index test.inc3 { _id: 1 }
m30001| Fri Feb 22 12:34:49.702 [conn14] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:34:49.703 [conn1] DROP: test.inc3
m30001| Fri Feb 22 12:34:49.703 [conn14] CMD: drop test.inc3
m30001| Fri Feb 22 12:34:49.705 [conn14] build index test.inc3 { _id: 1 }
m30001| Fri Feb 22 12:34:49.706 [conn14] build index done. scanned 0 total records. 0 secs
9ms
******************************************* Test : jstests/error5.js ...
m30999| Fri Feb 22 12:34:49.710 [conn1] DROP: test.error5
m30001| Fri Feb 22 12:34:49.710 [conn14] CMD: drop test.error5
m30001| Fri Feb 22 12:34:49.711 [conn14] build index test.error5 { _id: 1 }
m30001| Fri Feb 22 12:34:49.712 [conn14] build index done. scanned 0 total records. 0 secs
6ms
******************************************* Test : jstests/storageDetailsCommand.js ...
m30999| Fri Feb 22 12:34:49.713 [conn1] DROP: test.jstests_commands
m30001| Fri Feb 22 12:34:49.713 [conn14] CMD: drop test.jstests_commands
m30001| Fri Feb 22 12:34:49.714 [conn14] build index test.jstests_commands { _id: 1 }
m30001| Fri Feb 22 12:34:49.714 [conn14] build index done. scanned 0 total records. 0 secs
135ms
******************************************* Test : jstests/query1.js ...
m30999| Fri Feb 22 12:34:49.848 [conn1] DROP: test.query1
m30001| Fri Feb 22 12:34:49.887 [conn14] CMD: drop test.query1
m30001| Fri Feb 22 12:34:49.887 [conn14] build index test.query1 { _id: 1 }
m30001| Fri Feb 22 12:34:49.889 [conn14] build index done. scanned 0 total records. 0.001 secs
44ms
******************************************* Test : jstests/array3.js ...
6ms
******************************************* Test : jstests/fts_mix.js ...
m30000| Fri Feb 22 12:34:49.904 [initandlisten] connection accepted from 127.0.0.1:48565 #15 (9 connections now open)
m30001| Fri Feb 22 12:34:49.904 [initandlisten] connection accepted from 127.0.0.1:54459 #16 (7 connections now open)
m30999| Fri Feb 22 12:34:49.905 [conn1] DROP: test.text_mix
m30001| Fri Feb 22 12:34:49.905 [conn14] CMD: drop test.text_mix
m30001| Fri Feb 22 12:34:49.906 [conn14] build index test.text_mix { _id: 1 }
m30001| Fri Feb 22 12:34:49.907 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:49.908 [conn14] build index test.text_mix { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:34:49.909 [conn14] build index done. scanned 10 total records. 0.001 secs
m30001| Fri Feb 22 12:34:49.911 [conn9] CMD: dropIndexes test.text_mix
m30001| Fri Feb 22 12:34:49.913 [conn14] build index test.text_mix { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:34:49.928 [conn14] build index done. scanned 10 total records. 0.015 secs
m30001| Fri Feb 22 12:34:49.929 [conn9] CMD: dropIndexes test.text_mix
m30001| Fri Feb 22 12:34:49.932 [conn14] build index test.text_mix { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:34:49.947 [conn14] build index done. scanned 10 total records. 0.014 secs
m30001| Fri Feb 22 12:34:49.948 [conn9] CMD: dropIndexes test.text_mix
m30001| Fri Feb 22 12:34:49.950 [conn14] build index test.text_mix { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:34:49.964 [conn14] build index done. scanned 10 total records. 0.014 secs
m30001| Fri Feb 22 12:34:49.966 [conn9] CMD: dropIndexes test.text_mix
m30001| Fri Feb 22 12:34:49.967 [conn14] build index test.text_mix { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:34:49.981 [conn14] build index done. scanned 10 total records.
0.014 secs
m30001| Fri Feb 22 12:34:49.996 [conn9] CMD: validate test.text_mix
m30001| Fri Feb 22 12:34:49.996 [conn9] validating index 0: test.text_mix.$_id_
m30001| Fri Feb 22 12:34:49.996 [conn9] validating index 1: test.text_mix.$$**_text
99ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo7.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2polywithholes.js
>>>>>>>>>>>>>>> skipping jstests/multiClient
******************************************* Test : jstests/error4.js ...
m30999| Fri Feb 22 12:34:50.001 [conn1] DROP: test.error4
m30001| Fri Feb 22 12:34:50.001 [conn14] CMD: drop test.error4
m30001| Fri Feb 22 12:34:50.002 [conn14] build index test.error4 { _id: 1 }
m30001| Fri Feb 22 12:34:50.002 [conn14] build index done. scanned 0 total records. 0 secs
7ms
******************************************* Test : jstests/inc2.js ...
m30999| Fri Feb 22 12:34:50.007 [conn1] DROP: test.inc2
m30001| Fri Feb 22 12:34:50.008 [conn14] CMD: drop test.inc2
m30001| Fri Feb 22 12:34:50.008 [conn14] build index test.inc2 { _id: 1 }
m30001| Fri Feb 22 12:34:50.010 [conn14] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:50.011 [conn14] build index test.inc2 { x: 1.0 }
m30001| Fri Feb 22 12:34:50.012 [conn14] build index done. scanned 3 total records. 0 secs
11ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2cursorlimitskip.js
******************************************* Test : jstests/js8.js ...
m30999| Fri Feb 22 12:34:50.021 [conn1] DROP: test.jstests_js8
m30001| Fri Feb 22 12:34:50.021 [conn14] CMD: drop test.jstests_js8
m30001| Fri Feb 22 12:34:50.022 [conn14] build index test.jstests_js8 { _id: 1 }
m30001| Fri Feb 22 12:34:50.023 [conn14] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:34:50.070 [conn9] CMD: validate test.jstests_js8
m30001| Fri Feb 22 12:34:50.070 [conn9] validating index 0: test.jstests_js8.$_id_
56ms
******************************************* Test : jstests/not1.js ...
m30999| Fri Feb 22 12:34:50.071 [conn1] DROP: test.not1
m30001| Fri Feb 22 12:34:50.071 [conn14] CMD: drop test.not1
m30001| Fri Feb 22 12:34:50.072 [conn14] build index test.not1 { _id: 1 }
m30001| Fri Feb 22 12:34:50.073 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.076 [conn14] build index test.not1 { a: 1.0 }
m30001| Fri Feb 22 12:34:50.078 [conn14] build index done. scanned 3 total records. 0.001 secs
11ms
******************************************* Test : jstests/ref4.js ...
m30999| Fri Feb 22 12:34:50.082 [conn1] DROP: test.ref4a
m30001| Fri Feb 22 12:34:50.082 [conn14] CMD: drop test.ref4a
m30999| Fri Feb 22 12:34:50.083 [conn1] DROP: test.ref4b
m30001| Fri Feb 22 12:34:50.083 [conn14] CMD: drop test.ref4b
m30999| Fri Feb 22 12:34:50.083 [conn1] DROP: test.otherthings
m30001| Fri Feb 22 12:34:50.083 [conn14] CMD: drop test.otherthings
m30999| Fri Feb 22 12:34:50.084 [conn1] DROP: test.things
m30001| Fri Feb 22 12:34:50.084 [conn14] CMD: drop test.things
m30001| Fri Feb 22 12:34:50.085 [conn14] build index test.ref4b { _id: 1 }
m30001| Fri Feb 22 12:34:50.086 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.086 [conn14] build index test.ref4a { _id: 1 }
m30001| Fri Feb 22 12:34:50.087 [conn14] build index done. scanned 0 total records. 0 secs
9ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2index.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo6.js
******************************************* Test : jstests/index_diag.js ...
m30999| Fri Feb 22 12:34:50.091 [conn1] DROP: test.index_diag
m30001| Fri Feb 22 12:34:50.091 [conn14] CMD: drop test.index_diag
m30001| Fri Feb 22 12:34:50.092 [conn14] build index test.index_diag { _id: 1 }
m30001| Fri Feb 22 12:34:50.093 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.093 [conn14] info: creating collection test.index_diag on add index
m30001| Fri Feb 22 12:34:50.093 [conn14] build index test.index_diag { x: 1.0 }
m30001| Fri Feb 22 12:34:50.093 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.100 [conn14] build index test.index_diag { _id: 1.0, x: 1.0 }
m30001| Fri Feb 22 12:34:50.101 [conn14] build index done. scanned 3 total records. 0 secs
m30001| Fri Feb 22 12:34:50.103 [conn9] CMD: dropIndexes test.index_diag
16ms
******************************************* Test : jstests/where4.js ...
m30999| Fri Feb 22 12:34:50.107 [conn1] DROP: test.where4
m30001| Fri Feb 22 12:34:50.107 [conn14] CMD: drop test.where4
m30001| Fri Feb 22 12:34:50.108 [conn14] build index test.where4 { _id: 1 }
m30001| Fri Feb 22 12:34:50.109 [conn14] build index done. scanned 0 total records. 0 secs
34ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_multikey0.js
******************************************* Test : jstests/array_match2.js ...
m30999| Fri Feb 22 12:34:50.141 [conn1] DROP: test.jstests_array_match2
m30001| Fri Feb 22 12:34:50.141 [conn14] CMD: drop test.jstests_array_match2
m30001| Fri Feb 22 12:34:50.142 [conn14] build index test.jstests_array_match2 { _id: 1 }
m30001| Fri Feb 22 12:34:50.143 [conn14] build index done. scanned 0 total records. 0 secs
5ms
******************************************* Test : jstests/dbref1.js ...
m30999| Fri Feb 22 12:34:50.146 [conn1] DROP: test.dbref1a
m30001| Fri Feb 22 12:34:50.146 [conn14] CMD: drop test.dbref1a
m30999| Fri Feb 22 12:34:50.146 [conn1] DROP: test.dbref1b
m30001| Fri Feb 22 12:34:50.146 [conn14] CMD: drop test.dbref1b
m30001| Fri Feb 22 12:34:50.147 [conn14] build index test.dbref1a { _id: 1 }
m30001| Fri Feb 22 12:34:50.147 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.148 [conn14] build index test.dbref1b { _id: 1 }
m30001| Fri Feb 22 12:34:50.149 [conn14] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/rename_stayTemp.js ...
m30999| Fri Feb 22 12:34:50.150 [conn1] DROP: test.rename_stayTemp_orig
m30001| Fri Feb 22 12:34:50.150 [conn14] CMD: drop test.rename_stayTemp_orig
m30999| Fri Feb 22 12:34:50.151 [conn1] DROP: test.rename_stayTemp_dest
m30001| Fri Feb 22 12:34:50.151 [conn14] CMD: drop test.rename_stayTemp_dest
m30001| Fri Feb 22 12:34:50.151 [conn14] build index test.rename_stayTemp_orig { _id: 1 }
m30001| Fri Feb 22 12:34:50.152 [conn14] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:50.158 [conn1] DROP: test.rename_stayTemp_dest
m30001| Fri Feb 22 12:34:50.158 [conn14] CMD: drop test.rename_stayTemp_dest
m30001| Fri Feb 22 12:34:50.160 [conn14] build index test.rename_stayTemp_orig { _id: 1 }
m30001| Fri Feb 22 12:34:50.161 [conn14] build index done. scanned 0 total records. 0 secs
17ms
******************************************* Test : jstests/fm4.js ...
m30999| Fri Feb 22 12:34:50.168 [conn1] DROP: test.fm4
m30001| Fri Feb 22 12:34:50.168 [conn14] CMD: drop test.fm4
m30001| Fri Feb 22 12:34:50.169 [conn14] build index test.fm4 { _id: 1 }
m30001| Fri Feb 22 12:34:50.169 [conn14] build index done. scanned 0 total records. 0 secs
6ms
******************************************* Test : jstests/exists.js ...
m30999| Fri Feb 22 12:34:50.173 [conn1] DROP: test.jstests_exists
m30001| Fri Feb 22 12:34:50.173 [conn14] CMD: drop test.jstests_exists
m30001| Fri Feb 22 12:34:50.173 [conn14] build index test.jstests_exists { _id: 1 }
m30001| Fri Feb 22 12:34:50.174 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.184 [conn14] build index test.jstests_exists { a: 1.0 }
m30001| Fri Feb 22 12:34:50.184 [conn14] build index done. scanned 5 total records. 0 secs
m30001| Fri Feb 22 12:34:50.185 [conn14] build index test.jstests_exists { a.b: 1.0 }
m30001| Fri Feb 22 12:34:50.185 [conn14] build index done. scanned 5 total records. 0 secs
m30001| Fri Feb 22 12:34:50.186 [conn14] build index test.jstests_exists { a.b.c: 1.0 }
m30001| Fri Feb 22 12:34:50.186 [conn14] build index done. scanned 5 total records. 0 secs
m30001| Fri Feb 22 12:34:50.187 [conn14] build index test.jstests_exists { a.b.c.d: 1.0 }
m30001| Fri Feb 22 12:34:50.187 [conn14] build index done. scanned 5 total records. 0 secs
m30999| Fri Feb 22 12:34:50.198 [conn1] DROP: test.jstests_exists
m30001| Fri Feb 22 12:34:50.198 [conn14] CMD: drop test.jstests_exists
m30001| Fri Feb 22 12:34:50.203 [conn14] build index test.jstests_exists { _id: 1 }
m30001| Fri Feb 22 12:34:50.204 [conn14] build index done. scanned 0 total records. 0 secs
33ms
******************************************* Test : jstests/server5346.js ...
m30999| Fri Feb 22 12:34:50.206 [conn1] DROP: test.server5346
m30001| Fri Feb 22 12:34:50.206 [conn14] CMD: drop test.server5346
m30001| Fri Feb 22 12:34:50.207 [conn14] build index test.server5346 { _id: 1 }
m30001| Fri Feb 22 12:34:50.207 [conn14] build index done. scanned 0 total records. 0 secs
3ms
******************************************* Test : jstests/sortg.js ...
m30999| Fri Feb 22 12:34:50.209 [conn1] DROP: test.jstests_sortg
m30001| Fri Feb 22 12:34:50.209 [conn14] CMD: drop test.jstests_sortg
m30001| Fri Feb 22 12:34:50.225 [conn14] build index test.jstests_sortg { _id: 1 }
m30001| Fri Feb 22 12:34:50.225 [conn14] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:50.466 [conn14] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortg query:{ query: {}, orderby: { a: 1.0 } }
m30001| Fri Feb 22 12:34:50.466 [conn14] ntoskip:0 ntoreturn:1000
m30001| Fri Feb 22 12:34:50.466 [conn14] problem detected during query over test.jstests_sortg : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:50.466 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:50.467 [conn14] end connection 127.0.0.1:35456 (6 connections now open)
m30001| Fri Feb 22 12:34:50.501 [conn15] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortg query:{ query: {}, orderby: { a: 1.0 }, $explain: true }
m30001| Fri Feb 22 12:34:50.501 [conn15] ntoskip:0 ntoreturn:1000
m30001| Fri Feb 22 12:34:50.501 [conn15] problem detected during query over test.jstests_sortg : { $err: "too much data for sort() with no index.
add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:50.502 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:50.502 [conn15] end connection 127.0.0.1:52532 (5 connections now open)
m30001| Fri Feb 22 12:34:50.503 [initandlisten] connection accepted from 127.0.0.1:52885 #17 (6 connections now open)
m30001| Fri Feb 22 12:34:50.514 [conn17] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortg query:{ query: {}, orderby: { b: 1.0 } }
m30001| Fri Feb 22 12:34:50.514 [conn17] ntoskip:0 ntoreturn:1000
m30001| Fri Feb 22 12:34:50.514 [conn17] problem detected during query over test.jstests_sortg : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:50.514 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:50.514 [conn17] end connection 127.0.0.1:52885 (5 connections now open)
m30001| Fri Feb 22 12:34:50.516 [initandlisten] connection accepted from 127.0.0.1:65078 #18 (6 connections now open)
m30001| Fri Feb 22 12:34:50.527 [conn18] assertion 10128 too much data for sort() with no index.
add an index or specify a smaller limit ns:test.jstests_sortg query:{ query: {}, orderby: { b: 1.0 }, $explain: true }
m30001| Fri Feb 22 12:34:50.527 [conn18] ntoskip:0 ntoreturn:1000
m30001| Fri Feb 22 12:34:50.527 [conn18] problem detected during query over test.jstests_sortg : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:50.527 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:50.527 [conn18] end connection 127.0.0.1:65078 (5 connections now open)
m30001| Fri Feb 22 12:34:50.529 [initandlisten] connection accepted from 127.0.0.1:64144 #19 (6 connections now open)
m30001| Fri Feb 22 12:34:50.595 [conn16] end connection 127.0.0.1:54459 (5 connections now open)
m30000| Fri Feb 22 12:34:50.595 [conn15] end connection 127.0.0.1:48565 (8 connections now open)
m30001| Fri Feb 22 12:34:50.734 [conn19] build index test.jstests_sortg { a: 1.0 }
m30001| Fri Feb 22 12:34:50.782 [conn19] test.system.indexes Btree::insert: key too large to index, skipping test.jstests_sortg.$a_1 1000012 { : ",,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,..." }
[the same "Btree::insert: key too large to index, skipping test.jstests_sortg.$a_1" message repeats through 12:34:50.793 for each remaining oversized document; duplicate lines omitted]
m30001| Fri Feb 22 12:34:50.793 [conn19] warning: not all entries were added to the index, probably some keys were too large
m30001| Fri Feb 22 12:34:50.793 [conn19] build index done. scanned 140 total records. 0.059 secs
m30001| Fri Feb 22 12:34:50.801 [conn19] build index test.jstests_sortg { b: 1.0 }
m30001| Fri Feb 22 12:34:50.803 [conn19] build index done. scanned 140 total records. 0.001 secs
m30001| Fri Feb 22 12:34:50.803 [conn19] build index test.jstests_sortg { c: 1.0 }
m30001| Fri Feb 22 12:34:50.805 [conn19] build index done. scanned 140 total records. 0.001 secs
m30001| Fri Feb 22 12:34:50.918 [conn19] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortg query:{ query: { b: null, c: null }, orderby: { d: 1.0 } }
m30001| Fri Feb 22 12:34:50.919 [conn19] ntoskip:0 ntoreturn:1000
m30001| Fri Feb 22 12:34:50.919 [conn19] problem detected during query over test.jstests_sortg : { $err: "too much data for sort() with no index.
add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:50.919 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:50.919 [conn19] end connection 127.0.0.1:64144 (4 connections now open)
m30001| Fri Feb 22 12:34:50.921 [initandlisten] connection accepted from 127.0.0.1:36068 #20 (5 connections now open)
m30001| Fri Feb 22 12:34:50.935 [conn20] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortg query:{ query: { b: null, c: null }, orderby: { d: 1.0 }, $explain: true }
m30001| Fri Feb 22 12:34:50.935 [conn20] ntoskip:0 ntoreturn:1000
m30001| Fri Feb 22 12:34:50.935 [conn20] problem detected during query over test.jstests_sortg : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 }
m30999| Fri Feb 22 12:34:50.935 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit
m30001| Fri Feb 22 12:34:50.935 [conn20] end connection 127.0.0.1:36068 (4 connections now open)
m30001| Fri Feb 22 12:34:50.937 [initandlisten] connection accepted from 127.0.0.1:62916 #21 (5 connections now open)
m30999| Fri Feb 22 12:34:51.220 [conn1] DROP: test.jstests_sortg
m30001| Fri Feb 22 12:34:51.220 [conn21] CMD: drop test.jstests_sortg
1019ms
******************************************* Test : jstests/shellstartparallel.js ...
m30999| Fri Feb 22 12:34:51.246 [conn1] DROP: test.sps
m30001| Fri Feb 22 12:34:51.247 [conn21] CMD: drop test.sps
Fri Feb 22 12:34:51.289 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep(1000); db.sps.insert({x:1}); db.getLastError(); localhost:30999/admin
sh14746| MongoDB shell version: 2.4.0-rc1-pre-
sh14746| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:34:51.377 [mongosMain] connection accepted from 127.0.0.1:62346 #8 (2 connections now open)
m30001| Fri Feb 22 12:34:52.380 [initandlisten] connection accepted from 127.0.0.1:43590 #22 (6 connections now open)
m30001| Fri Feb 22 12:34:52.381 [conn22] build index test.sps { _id: 1 }
m30001| Fri Feb 22 12:34:52.382 [conn22] build index done. scanned 0 total records.
0 secs
sh14746| null
m30999| Fri Feb 22 12:34:52.390 [conn8] end connection 127.0.0.1:62346 (1 connection now open)
Fri Feb 22 12:34:52.422 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.sps.insert({x:1}); db.getLastError(); throw 'intentionally_uncaught'; localhost:30999/admin
sh14747| MongoDB shell version: 2.4.0-rc1-pre-
sh14747| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:34:52.487 [mongosMain] connection accepted from 127.0.0.1:33665 #9 (2 connections now open)
sh14747| Fri Feb 22 12:34:52.490 JavaScript execution failed: intentionally_uncaught
m30999| Fri Feb 22 12:34:52.497 [conn9] end connection 127.0.0.1:33665 (1 connection now open)
shellstartparallel.js SUCCESS
1275ms
******************************************* Test : jstests/eval6.js ...
m30999| Fri Feb 22 12:34:52.504 [conn1] DROP: test.eval6
m30001| Fri Feb 22 12:34:52.504 [conn21] CMD: drop test.eval6
m30001| Fri Feb 22 12:34:52.505 [conn21] build index test.eval6 { _id: 1 }
m30001| Fri Feb 22 12:34:52.506 [conn21] build index done. scanned 0 total records. 0.001 secs
49ms
******************************************* Test : jstests/countc.js ...
m30999| Fri Feb 22 12:34:52.553 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.554 [conn21] CMD: drop test.jstests_countc
m30999| Fri Feb 22 12:34:52.554 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.554 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.555 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.556 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.556 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.556 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.557 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.565 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.565 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.568 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.569 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.569 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.569 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.570 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.571 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.571 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.574 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.574 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.574 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.574 [conn21] build index test.jstests_countc { a.b: 1.0, a.c: 1.0 }
m30001| Fri Feb 22 12:34:52.575 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.576 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.576 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.579 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.579 [conn21] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:34:52.579 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.580 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.580 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.581 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.581 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.584 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.584 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.584 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.584 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.585 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.586 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.586 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.588 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.589 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.589 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.589 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.589 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.590 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.591 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.593 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.594 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.594 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.594 [conn21] build index test.jstests_countc { a: -1.0 }
m30001| Fri Feb 22 12:34:52.594 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.597 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.597 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.600 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.600 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.600 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.600 [conn21] build index test.jstests_countc { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:52.601 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.604 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.604 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.607 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.608 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.608 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.608 [conn21] build index test.jstests_countc { a: 1.0, b: -1.0 }
m30001| Fri Feb 22 12:34:52.608 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.611 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.611 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.614 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.615 [conn21] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:34:52.615 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.615 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.615 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.616 [conn1] DROP: test.jstests_countc
m30001| Fri Feb 22 12:34:52.616 [conn21] CMD: drop test.jstests_countc
m30001| Fri Feb 22 12:34:52.619 [conn21] build index test.jstests_countc { _id: 1 }
m30001| Fri Feb 22 12:34:52.619 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.619 [conn21] info: creating collection test.jstests_countc on add index
m30001| Fri Feb 22 12:34:52.619 [conn21] build index test.jstests_countc { a: 1.0 }
m30001| Fri Feb 22 12:34:52.620 [conn21] build index done. scanned 0 total records. 0 secs
98ms
******************************************* Test : jstests/multi.js ...
m30999| Fri Feb 22 12:34:52.652 [conn1] DROP: test.jstests_multi
m30001| Fri Feb 22 12:34:52.653 [conn21] CMD: drop test.jstests_multi
m30001| Fri Feb 22 12:34:52.653 [conn21] build index test.jstests_multi { _id: 1 }
m30001| Fri Feb 22 12:34:52.654 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:52.654 [conn21] info: creating collection test.jstests_multi on add index
m30001| Fri Feb 22 12:34:52.654 [conn21] build index test.jstests_multi { a: 1.0 }
m30001| Fri Feb 22 12:34:52.655 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.657 [conn1] DROP: test.jstests_multi
m30001| Fri Feb 22 12:34:52.657 [conn21] CMD: drop test.jstests_multi
m30001| Fri Feb 22 12:34:52.660 [conn21] build index test.jstests_multi { _id: 1 }
m30001| Fri Feb 22 12:34:52.661 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.662 [conn1] DROP: test.jstests_multi
m30001| Fri Feb 22 12:34:52.663 [conn21] CMD: drop test.jstests_multi
m30001| Fri Feb 22 12:34:52.665 [conn21] build index test.jstests_multi { _id: 1 }
m30001| Fri Feb 22 12:34:52.665 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:52.666 [conn1] DROP: test.jstests_multi
m30001| Fri Feb 22 12:34:52.666 [conn21] CMD: drop test.jstests_multi
m30001| Fri Feb 22 12:34:52.668 [conn21] build index test.jstests_multi { _id: 1 }
m30001| Fri Feb 22 12:34:52.668 [conn21] build index done. scanned 0 total records. 0 secs
19ms
******************************************* Test : jstests/testminmax.js ...
m30999| Fri Feb 22 12:34:52.669 [conn1] DROP: test.minmaxtest
m30001| Fri Feb 22 12:34:52.670 [conn21] CMD: drop test.minmaxtest
m30001| Fri Feb 22 12:34:52.670 [conn21] build index test.minmaxtest { _id: 1 }
m30001| Fri Feb 22 12:34:52.671 [conn21] build index done. scanned 0 total records. 0 secs
[ { "_id" : "IBM.N|00001264779918428889", "DESCRIPTION" : { "n" : "IBMSTK2", "o" : "IBM STK", "s" : "changed" } }, { "_id" : "IBM.N|00001264779918437075", "DESCRIPTION" : { "n" : "IBMSTK3", "o" : "IBM STK2", "s" : "changed" } } ]
4
18ms
******************************************* Test : jstests/queryoptimizer5.js ...
m30999| Fri Feb 22 12:34:52.695 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:52.695 [conn21] CMD: drop test.jstests_queryoptimizer5
Fri Feb 22 12:34:52.722 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 0; i < 30; ++i ) { sleep( 200 ); db.jstests_queryoptimizer5.drop(); } localhost:30999/admin
m30999| Fri Feb 22 12:34:52.723 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:52.723 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:52.724 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:52.724 [conn21] build index done. scanned 0 total records. 0 secs
sh14748| MongoDB shell version: 2.4.0-rc1-pre-
sh14748| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:34:52.808 [mongosMain] connection accepted from 127.0.0.1:32874 #10 (2 connections now open)
m30999| Fri Feb 22 12:34:53.011 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.012 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.015 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.016 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:53.038 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:53.041 [conn21] build index done. scanned 396 total records. 0.002 secs
m30001| Fri Feb 22 12:34:53.042 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:53.045 [conn21] build index done. scanned 396 total records. 0.003 secs
m30999| Fri Feb 22 12:34:53.061 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.061 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.066 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.067 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:53.216 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.216 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.219 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.219 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:53.346 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:53.354 [conn21] build index done. scanned 2422 total records. 0.008 secs
m30001| Fri Feb 22 12:34:53.355 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:53.363 [conn21] build index done. scanned 2422 total records. 0.008 secs
m30999| Fri Feb 22 12:34:53.419 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.419 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:53.428 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.429 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.429 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.429 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:53.623 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.623 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.626 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.627 [conn21] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:34:53.710 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:53.715 [conn21] build index done. scanned 1374 total records. 0.004 secs
m30001| Fri Feb 22 12:34:53.715 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:53.720 [conn21] build index done. scanned 1374 total records. 0.005 secs
m30999| Fri Feb 22 12:34:53.740 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765ed881c8e7453916046
m30999| Fri Feb 22 12:34:53.740 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30999| Fri Feb 22 12:34:53.758 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.758 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.762 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.762 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:53.826 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.826 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:53.828 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:53.828 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:54.029 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.029 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.030 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.031 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:54.082 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:54.086 [conn21] build index done. scanned 908 total records. 0.003 secs
m30001| Fri Feb 22 12:34:54.087 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:54.090 [conn21] build index done. scanned 908 total records. 0.003 secs
m30999| Fri Feb 22 12:34:54.118 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.118 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.122 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.123 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:54.231 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.231 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.232 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.233 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:54.407 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:54.422 [conn21] build index done. scanned 3063 total records. 0.015 secs
m30001| Fri Feb 22 12:34:54.423 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30999| Fri Feb 22 12:34:54.433 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.438 [conn21] build index done. scanned 3063 total records. 0.015 secs
m30001| Fri Feb 22 12:34:54.439 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:54.443 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.443 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.444 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.444 [conn21] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:34:54.643 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.643 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.645 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.645 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:54.733 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:54.741 [conn21] build index done. scanned 1488 total records. 0.008 secs
m30001| Fri Feb 22 12:34:54.742 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:54.750 [conn21] build index done. scanned 1488 total records. 0.008 secs
m30999| Fri Feb 22 12:34:54.799 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.800 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.805 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.806 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:54.845 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.845 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:54.847 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:54.847 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:55.048 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.048 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.050 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.051 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:55.100 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:55.105 [conn21] build index done. scanned 859 total records. 0.004 secs
m30001| Fri Feb 22 12:34:55.105 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:55.110 [conn21] build index done. scanned 859 total records. 0.004 secs
m30999| Fri Feb 22 12:34:55.139 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.139 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.143 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.144 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:55.251 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.251 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.255 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.256 [conn21] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:55.443 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30999| Fri Feb 22 12:34:55.455 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.460 [conn21] build index done. scanned 3238 total records. 0.016 secs
m30001| Fri Feb 22 12:34:55.460 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.466 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.466 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:55.466 [conn21] info: creating collection test.jstests_queryoptimizer5 on add index
m30001| Fri Feb 22 12:34:55.466 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:55.467 [conn21] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:34:55.468 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.468 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.472 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.472 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:55.666 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.666 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.668 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.669 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:55.766 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:55.775 [conn21] build index done. scanned 1711 total records. 0.008 secs
m30001| Fri Feb 22 12:34:55.776 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:55.784 [conn21] build index done. scanned 1711 total records. 0.008 secs
m30999| Fri Feb 22 12:34:55.827 [conn1] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.827 [conn21] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.831 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.832 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:55.869 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.869 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:55.873 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:55.874 [conn21] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:34:56.072 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:56.072 [conn22] CMD: drop test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:56.074 [conn21] build index test.jstests_queryoptimizer5 { _id: 1 }
m30001| Fri Feb 22 12:34:56.075 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:56.134 [conn21] build index test.jstests_queryoptimizer5 { a: 1.0 }
m30001| Fri Feb 22 12:34:56.140 [conn21] build index done. scanned 1006 total records. 0.005 secs
m30001| Fri Feb 22 12:34:56.140 [conn21] build index test.jstests_queryoptimizer5 { b: 1.0 }
m30001| Fri Feb 22 12:34:56.147 [conn21] build index done. scanned 1006 total records. 0.006 secs
m30999| Fri Feb 22 12:34:56.275 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:56.275 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:56.480 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:56.480 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:56.681 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:56.681 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:56.882 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:56.882 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:57.082 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:57.082 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:57.283 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:57.284 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:57.484 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:57.485 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:57.686 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:57.686 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:57.886 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:57.886 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:58.087 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:58.087 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:58.290 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:58.290 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:58.491 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:58.491 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:58.692 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:58.692 [conn22] CMD: drop test.jstests_queryoptimizer5
m30999| Fri Feb 22 12:34:58.892 [conn10] DROP: test.jstests_queryoptimizer5
m30001| Fri Feb 22 12:34:58.892 [conn22] CMD: drop test.jstests_queryoptimizer5
sh14748| false
m30999| Fri Feb 22 12:34:58.905 [conn10] end connection 127.0.0.1:32874 (1 connection now open)
6224ms
******************************************* Test : jstests/covered_index_geo_2.js ...
m30999| Fri Feb 22 12:34:58.918 [conn1] DROP: test.covered_geo_2
m30001| Fri Feb 22 12:34:58.919 [conn21] CMD: drop test.covered_geo_2
m30001| Fri Feb 22 12:34:58.920 [conn21] build index test.covered_geo_2 { _id: 1 }
m30001| Fri Feb 22 12:34:58.921 [conn21] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:34:58.921 [conn21] build index test.covered_geo_2 { loc1: "2dsphere", type1: 1.0 }
m30001| Fri Feb 22 12:34:58.922 [conn21] build index done. scanned 3 total records. 0 secs
m30001| Fri Feb 22 12:34:58.922 [conn21] build index test.covered_geo_2 { type2: 1.0, loc2: "2dsphere" }
m30001| Fri Feb 22 12:34:58.923 [conn21] build index done. scanned 3 total records. 0 secs
all tests passed
15ms
******************************************* Test : jstests/coveredIndex5.js ...
m30999| Fri Feb 22 12:34:58.927 [conn1] DROP: test.jstests_coveredIndex5 m30001| Fri Feb 22 12:34:58.928 [conn21] CMD: drop test.jstests_coveredIndex5 m30001| Fri Feb 22 12:34:58.928 [conn21] build index test.jstests_coveredIndex5 { _id: 1 } m30001| Fri Feb 22 12:34:58.929 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:58.929 [conn21] info: creating collection test.jstests_coveredIndex5 on add index m30001| Fri Feb 22 12:34:58.929 [conn21] build index test.jstests_coveredIndex5 { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:34:58.930 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:58.930 [conn21] build index test.jstests_coveredIndex5 { a: 1.0, c: 1.0 } m30001| Fri Feb 22 12:34:58.931 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:58.932 [conn21] build index test.jstests_coveredIndex5 { z: 1.0 } m30001| Fri Feb 22 12:34:58.933 [conn21] build index done. scanned 10 total records. 0 secs m30001| Fri Feb 22 12:34:58.934 [conn9] CMD: dropIndexes test.jstests_coveredIndex5 m30001| Fri Feb 22 12:34:58.937 [conn21] build index test.jstests_coveredIndex5 { z: 1.0 } m30001| Fri Feb 22 12:34:58.937 [conn21] build index done. scanned 10 total records. 0 secs m30001| Fri Feb 22 12:34:58.938 [conn9] CMD: dropIndexes test.jstests_coveredIndex5 m30001| Fri Feb 22 12:34:58.941 [conn21] build index test.jstests_coveredIndex5 { z: 1.0 } m30001| Fri Feb 22 12:34:58.941 [conn21] build index done. scanned 10 total records. 0 secs m30001| Fri Feb 22 12:34:58.941 [conn9] CMD: dropIndexes test.jstests_coveredIndex5 m30001| Fri Feb 22 12:34:58.945 [conn21] build index test.jstests_coveredIndex5 { z: 1.0 } m30001| Fri Feb 22 12:34:58.945 [conn21] build index done. scanned 10 total records. 
0 secs
m30001| Fri Feb 22 12:34:58.946 [conn9] CMD: dropIndexes test.jstests_coveredIndex5
m30001| Fri Feb 22 12:34:58.949 [conn21] build index test.jstests_coveredIndex5 { z: 1.0 }
m30001| Fri Feb 22 12:34:58.949 [conn21] build index done. scanned 12 total records. 0 secs
m30001| Fri Feb 22 12:34:58.950 [conn9] CMD: dropIndexes test.jstests_coveredIndex5
m30001| Fri Feb 22 12:34:58.953 [conn21] build index test.jstests_coveredIndex5 { z: 1.0 }
m30001| Fri Feb 22 12:34:58.953 [conn21] build index done. scanned 14 total records. 0 secs
m30001| Fri Feb 22 12:34:58.953 [conn9] CMD: dropIndexes test.jstests_coveredIndex5
31ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_haystack1.js
>>>>>>>>>>>>>>> skipping jstests/auth
******************************************* Test : jstests/connection_status.js ...
m30999| Fri Feb 22 12:34:58.958 [conn1] couldn't find database [connection_status] in config db
m30999| Fri Feb 22 12:34:58.960 [conn1] put [connection_status] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:34:58.961 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/connection_status.ns, filling with zeroes...
{ "user" : "someone", "readOnly" : false, "pwd" : "d237e802a504ce5dc99720db8df8bc78", "_id" : ObjectId("512765f2000decca0874f071") }
m30000| Fri Feb 22 12:34:58.961 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/connection_status.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:34:58.961 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/connection_status.0, filling with zeroes...
m30000| Fri Feb 22 12:34:58.961 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/connection_status.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:34:58.961 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/connection_status.1, filling with zeroes...
m30000| Fri Feb 22 12:34:58.961 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/connection_status.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:34:58.964 [conn6] build index connection_status.system.users { _id: 1 }
m30000| Fri Feb 22 12:34:58.965 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:34:58.965 [conn6] build index connection_status.system.users { user: 1, userSource: 1 }
m30000| Fri Feb 22 12:34:58.966 [conn6] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:58.967 [conn1] authenticate db: connection_status { authenticate: 1, nonce: "574b5ee7cc883bc7", user: "someone", key: "584ead84a875d104785d921c4e7268b1" }
{ "user" : "someone else", "readOnly" : false, "pwd" : "ceaeff969ad3790f6a9f652690dfab7e", "_id" : ObjectId("512765f2000decca0874f072") }
m30999| Fri Feb 22 12:34:58.969 [conn1] authenticate db: connection_status { authenticate: 1, nonce: "39b7c337f3d4214f", user: "someone else", key: "2e1e045501032ec9c4878622ad6de98d" }
13ms
******************************************* Test : jstests/all3.js ...
1ms
******************************************* Test : jstests/nestedobj1.js ...
m30999| Fri Feb 22 12:34:58.972 [conn1] DROP: test.objNestTest
m30001| Fri Feb 22 12:34:58.972 [conn21] CMD: drop test.objNestTest
m30001| Fri Feb 22 12:34:58.973 [conn21] build index test.objNestTest { _id: 1 }
m30001| Fri Feb 22 12:34:58.973 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:58.973 [conn21] info: creating collection test.objNestTest on add index
m30001| Fri Feb 22 12:34:58.973 [conn21] build index test.objNestTest { a: 1.0 }
m30001| Fri Feb 22 12:34:58.974 [conn21] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:34:58.980 [conn21] test.objNestTest ERROR: key too large len:4016 max:1024 4016 test.objNestTest.$a_1
m30001| Fri Feb 22 12:34:58.984 [conn21] test.objNestTest ERROR: key too large len:4016 max:1024 4016 test.objNestTest.$a_1
m30001| Fri Feb 22 12:34:58.988 [conn21] test.objNestTest ERROR: key too large len:4016 max:1024 4016 test.objNestTest.$a_1
Test succeeded!
18ms
******************************************* Test : jstests/update_addToSet.js ...
m30999| Fri Feb 22 12:34:58.990 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:58.990 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:58.991 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:58.991 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:58.993 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:58.993 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:58.996 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:58.996 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:58.997 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:58.997 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:58.998 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:58.999 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.000 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:59.000 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:59.001 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:59.002 [conn21] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:34:59.003 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:59.003 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:59.004 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:59.005 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.005 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:59.005 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:59.007 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:59.007 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.008 [conn1] DROP: test.update_addToSet1
m30001| Fri Feb 22 12:34:59.008 [conn21] CMD: drop test.update_addToSet1
m30001| Fri Feb 22 12:34:59.010 [conn21] build index test.update_addToSet1 { _id: 1 }
m30001| Fri Feb 22 12:34:59.010 [conn21] build index done. scanned 0 total records. 0 secs
22ms
******************************************* Test : jstests/mr_merge2.js ...
m30999| Fri Feb 22 12:34:59.012 [conn1] DROP: test.mr_merge2
m30001| Fri Feb 22 12:34:59.013 [conn21] CMD: drop test.mr_merge2
m30001| Fri Feb 22 12:34:59.013 [conn21] build index test.mr_merge2 { _id: 1 }
m30999| Fri Feb 22 12:34:59.013 [conn1] DROP: test.mr_merge2_out
m30001| Fri Feb 22 12:34:59.014 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.014 [conn21] CMD: drop test.mr_merge2_out
m30001| Fri Feb 22 12:34:59.038 [conn21] CMD: drop test.tmp.mr.mr_merge2_31
m30001| Fri Feb 22 12:34:59.038 [conn21] CMD: drop test.tmp.mr.mr_merge2_31_inc
m30001| Fri Feb 22 12:34:59.038 [conn21] build index test.tmp.mr.mr_merge2_31_inc { 0: 1 }
m30001| Fri Feb 22 12:34:59.039 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.039 [conn21] build index test.tmp.mr.mr_merge2_31 { _id: 1 }
m30001| Fri Feb 22 12:34:59.040 [conn21] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:34:59.042 [conn21] CMD: drop test.mr_merge2_out
m30001| Fri Feb 22 12:34:59.045 [conn21] CMD: drop test.tmp.mr.mr_merge2_31
m30001| Fri Feb 22 12:34:59.045 [conn21] CMD: drop test.tmp.mr.mr_merge2_31
m30001| Fri Feb 22 12:34:59.045 [conn21] CMD: drop test.tmp.mr.mr_merge2_31_inc
m30001| Fri Feb 22 12:34:59.047 [conn21] CMD: drop test.tmp.mr.mr_merge2_31
m30001| Fri Feb 22 12:34:59.047 [conn21] CMD: drop test.tmp.mr.mr_merge2_31_inc
m30001| Fri Feb 22 12:34:59.050 [conn21] CMD: drop test.tmp.mr.mr_merge2_32
m30001| Fri Feb 22 12:34:59.050 [conn21] CMD: drop test.tmp.mr.mr_merge2_32_inc
m30001| Fri Feb 22 12:34:59.050 [conn21] build index test.tmp.mr.mr_merge2_32_inc { 0: 1 }
m30001| Fri Feb 22 12:34:59.050 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.050 [conn21] build index test.tmp.mr.mr_merge2_32 { _id: 1 }
m30001| Fri Feb 22 12:34:59.051 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.053 [conn21] CMD: drop test.tmp.mr.mr_merge2_32
m30001| Fri Feb 22 12:34:59.054 [conn21] CMD: drop test.tmp.mr.mr_merge2_32
m30001| Fri Feb 22 12:34:59.054 [conn21] CMD: drop test.tmp.mr.mr_merge2_32_inc
m30001| Fri Feb 22 12:34:59.056 [conn21] CMD: drop test.tmp.mr.mr_merge2_32
m30001| Fri Feb 22 12:34:59.056 [conn21] CMD: drop test.tmp.mr.mr_merge2_32_inc
47ms
******************************************* Test : jstests/cursor1.js ...
m30999| Fri Feb 22 12:34:59.062 [conn1] DROP: test.cursor1
m30001| Fri Feb 22 12:34:59.062 [conn21] CMD: drop test.cursor1
m30001| Fri Feb 22 12:34:59.063 [conn21] build index test.cursor1 { _id: 1 }
m30001| Fri Feb 22 12:34:59.064 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.189 [conn1] DROP: test.cursor1
m30001| Fri Feb 22 12:34:59.189 [conn21] CMD: drop test.cursor1
132ms
******************************************* Test : jstests/distinct2.js ...
m30999| Fri Feb 22 12:34:59.194 [conn1] DROP: test.distinct2
m30001| Fri Feb 22 12:34:59.194 [conn21] CMD: drop test.distinct2
m30001| Fri Feb 22 12:34:59.195 [conn21] build index test.distinct2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.195 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.196 [conn1] DROP: test.distinct2
m30001| Fri Feb 22 12:34:59.196 [conn21] CMD: drop test.distinct2
m30001| Fri Feb 22 12:34:59.197 [conn21] build index test.distinct2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.198 [conn21] build index done. scanned 0 total records. 0 secs
8ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_max.js
******************************************* Test : jstests/objid3.js ...
m30999| Fri Feb 22 12:34:59.199 [conn1] DROP: test.objid3
m30001| Fri Feb 22 12:34:59.199 [conn21] CMD: drop test.objid3
m30001| Fri Feb 22 12:34:59.200 [conn21] build index test.objid3 { _id: 1 }
m30001| Fri Feb 22 12:34:59.200 [conn21] build index done. scanned 0 total records. 0 secs
2ms
>>>>>>>>>>>>>>> skipping jstests/_tst.js
******************************************* Test : jstests/exists6.js ...
m30999| Fri Feb 22 12:34:59.202 [conn1] DROP: test.jstests_exists6
m30001| Fri Feb 22 12:34:59.202 [conn21] CMD: drop test.jstests_exists6
m30001| Fri Feb 22 12:34:59.202 [conn21] build index test.jstests_exists6 { _id: 1 }
m30001| Fri Feb 22 12:34:59.203 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.203 [conn21] info: creating collection test.jstests_exists6 on add index
m30001| Fri Feb 22 12:34:59.203 [conn21] build index test.jstests_exists6 { b: 1.0 }
m30001| Fri Feb 22 12:34:59.203 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.237 [conn9] killcursors: found 0 of 1
m30001| Fri Feb 22 12:34:59.242 [conn21] build index test.jstests_exists6 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:59.243 [conn21] build index done.
scanned 3 total records. 0 secs
49ms
******************************************* Test : jstests/orc.js ...
m30999| Fri Feb 22 12:34:59.251 [conn1] DROP: test.jstests_orc
m30001| Fri Feb 22 12:34:59.251 [conn21] CMD: drop test.jstests_orc
m30001| Fri Feb 22 12:34:59.252 [conn21] build index test.jstests_orc { _id: 1 }
m30001| Fri Feb 22 12:34:59.252 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.252 [conn21] info: creating collection test.jstests_orc on add index
m30001| Fri Feb 22 12:34:59.252 [conn21] build index test.jstests_orc { a: -1.0, b: 1.0, c: 1.0 }
m30001| Fri Feb 22 12:34:59.253 [conn21] build index done. scanned 0 total records. 0 secs
5ms
******************************************* Test : jstests/index7.js ...
m30999| Fri Feb 22 12:34:59.257 [conn1] DROP: test.ed_db_index7
m30001| Fri Feb 22 12:34:59.257 [conn21] CMD: drop test.ed_db_index7
m30001| Fri Feb 22 12:34:59.257 [conn21] build index test.ed_db_index7 { _id: 1 }
m30001| Fri Feb 22 12:34:59.258 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.258 [conn21] build index test.ed_db_index7 { a: 1.0 }
m30001| Fri Feb 22 12:34:59.258 [conn21] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:59.260 [conn1] DROP: test.ed_db_index7
m30001| Fri Feb 22 12:34:59.260 [conn21] CMD: drop test.ed_db_index7
m30001| Fri Feb 22 12:34:59.263 [conn21] build index test.ed_db_index7 { _id: 1 }
m30001| Fri Feb 22 12:34:59.263 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.263 [conn21] info: creating collection test.ed_db_index7 on add index
m30001| Fri Feb 22 12:34:59.263 [conn21] build index test.ed_db_index7 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:59.263 [conn21] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:34:59.276 [conn1] DROP: test.ed_db_index7
m30001| Fri Feb 22 12:34:59.276 [conn21] CMD: drop test.ed_db_index7
m30001| Fri Feb 22 12:34:59.280 [conn21] build index test.ed_db_index7 { _id: 1 }
m30001| Fri Feb 22 12:34:59.280 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.280 [conn21] info: creating collection test.ed_db_index7 on add index
m30001| Fri Feb 22 12:34:59.280 [conn21] build index test.ed_db_index7 { b: 1.0, a: 1.0 }
m30001| Fri Feb 22 12:34:59.280 [conn21] build index done. scanned 0 total records. 0 secs
26ms
******************************************* Test : jstests/update2.js ...
m30999| Fri Feb 22 12:34:59.283 [conn1] DROP: test.ed_db_update2
m30001| Fri Feb 22 12:34:59.283 [conn21] CMD: drop test.ed_db_update2
m30001| Fri Feb 22 12:34:59.283 [conn21] build index test.ed_db_update2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.284 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.284 [conn1] DROP: test.ed_db_update2
m30001| Fri Feb 22 12:34:59.285 [conn21] CMD: drop test.ed_db_update2
m30001| Fri Feb 22 12:34:59.287 [conn21] build index test.ed_db_update2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.287 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.287 [conn21] build index test.ed_db_update2 { a: 1.0 }
m30001| Fri Feb 22 12:34:59.288 [conn21] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:59.289 [conn1] DROP: test.ed_db_update2
m30001| Fri Feb 22 12:34:59.289 [conn21] CMD: drop test.ed_db_update2
m30001| Fri Feb 22 12:34:59.292 [conn21] build index test.ed_db_update2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.292 [conn21] build index done. scanned 0 total records. 0 secs
11ms
******************************************* Test : jstests/datasize3.js ...
m30999| Fri Feb 22 12:34:59.294 [conn1] DROP: test.datasize3
m30001| Fri Feb 22 12:34:59.294 [conn21] CMD: drop test.datasize3
m30001| Fri Feb 22 12:34:59.294 [conn21] build index test.datasize3 { _id: 1 }
m30001| Fri Feb 22 12:34:59.295 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.296 [conn21] build index test.datasize3 { x: 1.0 }
m30001| Fri Feb 22 12:34:59.296 [conn21] build index done. scanned 1 total records. 0 secs
11ms
******************************************* Test : jstests/regex9.js ...
m30999| Fri Feb 22 12:34:59.305 [conn1] DROP: test.regex9
m30001| Fri Feb 22 12:34:59.305 [conn21] CMD: drop test.regex9
m30001| Fri Feb 22 12:34:59.306 [conn21] build index test.regex9 { _id: 1 }
m30001| Fri Feb 22 12:34:59.306 [conn21] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/set6.js ...
m30999| Fri Feb 22 12:34:59.308 [conn1] DROP: test.set6
m30001| Fri Feb 22 12:34:59.309 [conn21] CMD: drop test.set6
m30001| Fri Feb 22 12:34:59.309 [conn21] build index test.set6 { _id: 1 }
m30001| Fri Feb 22 12:34:59.310 [conn21] build index done. scanned 0 total records. 0 secs
5ms
>>>>>>>>>>>>>>> skipping jstests/repl
******************************************* Test : jstests/indexy.js ...
m30999| Fri Feb 22 12:34:59.314 [conn1] DROP: test.jstests_indexy
m30001| Fri Feb 22 12:34:59.314 [conn21] CMD: drop test.jstests_indexy
m30001| Fri Feb 22 12:34:59.314 [conn21] build index test.jstests_indexy { _id: 1 }
m30001| Fri Feb 22 12:34:59.315 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.316 [conn21] build index test.jstests_indexy { a.c: 1.0 }
m30001| Fri Feb 22 12:34:59.317 [conn21] build index done. scanned 1 total records. 0 secs
7ms
******************************************* Test : jstests/updatej.js ...
m30999| Fri Feb 22 12:34:59.321 [conn1] DROP: test.jstests_updatej
m30001| Fri Feb 22 12:34:59.321 [conn21] CMD: drop test.jstests_updatej
m30001| Fri Feb 22 12:34:59.322 [conn21] build index test.jstests_updatej { _id: 1 }
m30001| Fri Feb 22 12:34:59.322 [conn21] build index done. scanned 0 total records. 0 secs
2ms
******************************************* Test : jstests/regexa.js ...
m30999| Fri Feb 22 12:34:59.324 [conn1] DROP: test.jstests_regexa
m30001| Fri Feb 22 12:34:59.324 [conn21] CMD: drop test.jstests_regexa
m30001| Fri Feb 22 12:34:59.325 [conn21] build index test.jstests_regexa { _id: 1 }
m30001| Fri Feb 22 12:34:59.325 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.328 [conn21] build index test.jstests_regexa { a: 1.0 }
m30001| Fri Feb 22 12:34:59.328 [conn21] build index done. scanned 1 total records. 0 secs
8ms
******************************************* Test : jstests/indexo.js ...
m30999| Fri Feb 22 12:34:59.331 [conn1] DROP: test.jstests_indexo
m30001| Fri Feb 22 12:34:59.332 [conn21] CMD: drop test.jstests_indexo
m30001| Fri Feb 22 12:34:59.332 [conn21] build index test.jstests_indexo { _id: 1 }
m30001| Fri Feb 22 12:34:59.333 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.333 [conn21] build index test.jstests_indexo { a: 1.0 }
m30001| Fri Feb 22 12:34:59.333 [conn21] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:34:59.334 [conn1] DROP: test.jstests_indexo
m30001| Fri Feb 22 12:34:59.334 [conn21] CMD: drop test.jstests_indexo
m30001| Fri Feb 22 12:34:59.338 [conn21] build index test.jstests_indexo { _id: 1 }
m30001| Fri Feb 22 12:34:59.338 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.339 [conn21] build index test.jstests_indexo { a: 1.0 }
m30001| Fri Feb 22 12:34:59.339 [conn21] build index done. scanned 1 total records.
0 secs
m30999| Fri Feb 22 12:34:59.340 [conn1] DROP: test.jstests_indexo
m30001| Fri Feb 22 12:34:59.340 [conn21] CMD: drop test.jstests_indexo
m30001| Fri Feb 22 12:34:59.343 [conn21] build index test.jstests_indexo { _id: 1 }
m30001| Fri Feb 22 12:34:59.344 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.344 [conn21] build index test.jstests_indexo { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:59.344 [conn21] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:34:59.345 [conn21] build index test.jstests_indexo { a: 1.0 }
m30001| Fri Feb 22 12:34:59.345 [conn21] build index done. scanned 1 total records. 0 secs
16ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped6.js
******************************************* Test : jstests/group2.js ...
m30999| Fri Feb 22 12:34:59.348 [conn1] DROP: test.group2
m30001| Fri Feb 22 12:34:59.348 [conn21] CMD: drop test.group2
m30001| Fri Feb 22 12:34:59.348 [conn21] build index test.group2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.349 [conn21] build index done. scanned 0 total records. 0 secs
65ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle4.js
******************************************* Test : jstests/fts4.js ...
m30000| Fri Feb 22 12:34:59.415 [initandlisten] connection accepted from 127.0.0.1:61425 #16 (9 connections now open)
m30001| Fri Feb 22 12:34:59.416 [initandlisten] connection accepted from 127.0.0.1:52552 #23 (7 connections now open)
m30999| Fri Feb 22 12:34:59.416 [conn1] DROP: test.text4
m30001| Fri Feb 22 12:34:59.416 [conn21] CMD: drop test.text4
m30001| Fri Feb 22 12:34:59.417 [conn21] build index test.text4 { _id: 1 }
m30001| Fri Feb 22 12:34:59.418 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.418 [conn21] build index test.text4 { _fts: "text", _ftsx: 1, z: 1.0 }
m30001| Fri Feb 22 12:34:59.419 [conn21] build index done.
scanned 2 total records. 0 secs
12ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_haystack3.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_center_sphere1.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2ordering.js
******************************************* Test : jstests/in2.js ...
m30999| Fri Feb 22 12:34:59.425 [conn1] DROP: test.in2
m30001| Fri Feb 22 12:34:59.425 [conn21] CMD: drop test.in2
m30001| Fri Feb 22 12:34:59.426 [conn21] build index test.in2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.426 [conn21] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:34:59.428 [conn1] DROP: test.in2
m30001| Fri Feb 22 12:34:59.428 [conn21] CMD: drop test.in2
m30001| Fri Feb 22 12:34:59.430 [conn21] build index test.in2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.431 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.431 [conn21] build index test.in2 { a: 1.0 }
m30001| Fri Feb 22 12:34:59.432 [conn21] build index done. scanned 9 total records. 0 secs
m30999| Fri Feb 22 12:34:59.434 [conn1] DROP: test.in2
m30001| Fri Feb 22 12:34:59.434 [conn21] CMD: drop test.in2
m30001| Fri Feb 22 12:34:59.437 [conn21] build index test.in2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.437 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.438 [conn21] build index test.in2 { b: 1.0 }
m30001| Fri Feb 22 12:34:59.438 [conn21] build index done. scanned 9 total records. 0 secs
m30999| Fri Feb 22 12:34:59.440 [conn1] DROP: test.in2
m30001| Fri Feb 22 12:34:59.440 [conn21] CMD: drop test.in2
m30001| Fri Feb 22 12:34:59.443 [conn21] build index test.in2 { _id: 1 }
m30001| Fri Feb 22 12:34:59.444 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.444 [conn21] build index test.in2 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:34:59.445 [conn21] build index done.
scanned 9 total records. 0 secs
22ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_polygon2.js
******************************************* Test : jstests/arrayfinda.js ...
m30999| Fri Feb 22 12:34:59.447 [conn1] DROP: test.jstests_arrayfinda
m30001| Fri Feb 22 12:34:59.447 [conn21] CMD: drop test.jstests_arrayfinda
m30001| Fri Feb 22 12:34:59.448 [conn21] build index test.jstests_arrayfinda { _id: 1 }
m30001| Fri Feb 22 12:34:59.448 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.450 [conn21] build index test.jstests_arrayfinda { a: 1.0 }
m30001| Fri Feb 22 12:34:59.450 [conn21] build index done. scanned 2 total records. 0 secs
6ms
******************************************* Test : jstests/index_many.js ...
m30999| Fri Feb 22 12:34:59.453 [conn1] DROP: test.many
m30001| Fri Feb 22 12:34:59.453 [conn21] CMD: drop test.many
m30999| Fri Feb 22 12:34:59.453 [conn1] DROP: test.many2
m30001| Fri Feb 22 12:34:59.453 [conn21] CMD: drop test.many2
m30001| Fri Feb 22 12:34:59.454 [conn21] build index test.many { _id: 1 }
m30001| Fri Feb 22 12:34:59.455 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:34:59.455 [conn21] build index test.many { 2: 1.0 }
m30001| Fri Feb 22 12:34:59.455 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.456 [conn21] build index test.many { 3: 1.0 }
m30001| Fri Feb 22 12:34:59.456 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.457 [conn21] build index test.many { 4: 1.0 }
m30001| Fri Feb 22 12:34:59.457 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.458 [conn21] build index test.many { 5: 1.0 }
m30001| Fri Feb 22 12:34:59.458 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.459 [conn21] build index test.many { 6: 1.0 }
m30001| Fri Feb 22 12:34:59.459 [conn21] build index done.
scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.460 [conn21] build index test.many { 7: 1.0 }
m30001| Fri Feb 22 12:34:59.460 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.461 [conn21] build index test.many { 8: 1.0 }
m30001| Fri Feb 22 12:34:59.461 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.462 [conn21] build index test.many { 9: 1.0 }
m30001| Fri Feb 22 12:34:59.462 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.463 [conn21] build index test.many { 10: 1.0 }
m30001| Fri Feb 22 12:34:59.463 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.464 [conn21] build index test.many { 11: 1.0 }
m30001| Fri Feb 22 12:34:59.464 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.465 [conn21] build index test.many { 12: 1.0 }
m30001| Fri Feb 22 12:34:59.465 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.466 [conn21] build index test.many { 13: 1.0 }
m30001| Fri Feb 22 12:34:59.466 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.467 [conn21] build index test.many { 14: 1.0 }
m30001| Fri Feb 22 12:34:59.467 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.468 [conn21] build index test.many { 15: 1.0 }
m30001| Fri Feb 22 12:34:59.469 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.469 [conn21] build index test.many { 16: 1.0 }
m30001| Fri Feb 22 12:34:59.470 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.470 [conn21] build index test.many { 17: 1.0 }
m30001| Fri Feb 22 12:34:59.471 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.471 [conn21] build index test.many { 18: 1.0 }
m30001| Fri Feb 22 12:34:59.472 [conn21] build index done.
scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.472 [conn21] build index test.many { 19: 1.0 }
m30001| Fri Feb 22 12:34:59.473 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.473 [conn21] build index test.many { x: 1.0 }
m30001| Fri Feb 22 12:34:59.474 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.475 [conn21] build index test.many { 21: 1.0 }
m30001| Fri Feb 22 12:34:59.475 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.476 [conn21] build index test.many { 22: 1.0 }
m30001| Fri Feb 22 12:34:59.476 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.477 [conn21] build index test.many { 23: 1.0 }
m30001| Fri Feb 22 12:34:59.477 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.478 [conn21] build index test.many { 24: 1.0 }
m30001| Fri Feb 22 12:34:59.478 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.479 [conn21] build index test.many { 25: 1.0 }
m30001| Fri Feb 22 12:34:59.479 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.480 [conn21] build index test.many { 26: 1.0 }
m30001| Fri Feb 22 12:34:59.480 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.497 [conn21] build index test.many { 27: 1.0 }
m30001| Fri Feb 22 12:34:59.499 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.500 [conn21] build index test.many { 28: 1.0 }
m30001| Fri Feb 22 12:34:59.501 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.502 [conn21] build index test.many { 29: 1.0 }
m30001| Fri Feb 22 12:34:59.503 [conn21] build index done. scanned 2 total records.
0.001 secs
m30001| Fri Feb 22 12:34:59.503 [conn21] build index test.many { 30: 1.0 }
m30001| Fri Feb 22 12:34:59.504 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.504 [conn21] build index test.many { 31: 1.0 }
m30001| Fri Feb 22 12:34:59.505 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.506 [conn21] build index test.many { 32: 1.0 }
m30001| Fri Feb 22 12:34:59.507 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.507 [conn21] build index test.many { 33: 1.0 }
m30001| Fri Feb 22 12:34:59.508 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.509 [conn21] build index test.many { 34: 1.0 }
m30001| Fri Feb 22 12:34:59.510 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.510 [conn21] build index test.many { 35: 1.0 }
m30001| Fri Feb 22 12:34:59.511 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.512 [conn21] build index test.many { 36: 1.0 }
m30001| Fri Feb 22 12:34:59.513 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.513 [conn21] build index test.many { 37: 1.0 }
m30001| Fri Feb 22 12:34:59.515 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.515 [conn21] build index test.many { 38: 1.0 }
m30001| Fri Feb 22 12:34:59.516 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.517 [conn21] build index test.many { 39: 1.0 }
m30001| Fri Feb 22 12:34:59.518 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.518 [conn21] build index test.many { 40: 1.0 }
m30001| Fri Feb 22 12:34:59.519 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.520 [conn21] build index test.many { 41: 1.0 }
m30001| Fri Feb 22 12:34:59.521 [conn21] build index done.
scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.522 [conn21] build index test.many { 42: 1.0 }
m30001| Fri Feb 22 12:34:59.523 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.523 [conn21] build index test.many { 43: 1.0 }
m30001| Fri Feb 22 12:34:59.524 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.525 [conn21] build index test.many { 44: 1.0 }
m30001| Fri Feb 22 12:34:59.526 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.526 [conn21] build index test.many { 45: 1.0 }
m30001| Fri Feb 22 12:34:59.528 [conn21] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:34:59.528 [conn21] build index test.many { 46: 1.0 }
m30001| Fri Feb 22 12:34:59.529 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.529 [conn21] build index test.many { 47: 1.0 }
m30001| Fri Feb 22 12:34:59.530 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.530 [conn21] build index test.many { 48: 1.0 }
m30001| Fri Feb 22 12:34:59.531 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.531 [conn21] build index test.many { 49: 1.0 }
m30001| Fri Feb 22 12:34:59.532 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.533 [conn21] build index test.many { 50: 1.0 }
m30001| Fri Feb 22 12:34:59.533 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.534 [conn21] build index test.many { 51: 1.0 }
m30001| Fri Feb 22 12:34:59.535 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.535 [conn21] build index test.many { 52: 1.0 }
m30001| Fri Feb 22 12:34:59.536 [conn21] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:34:59.536 [conn21] build index test.many { 53: 1.0 }
m30001| Fri Feb 22 12:34:59.537 [conn21] build index done.
scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.538 [conn21] build index test.many { 54: 1.0 } m30001| Fri Feb 22 12:34:59.538 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.539 [conn21] build index test.many { 55: 1.0 } m30001| Fri Feb 22 12:34:59.540 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.540 [conn21] build index test.many { 56: 1.0 } m30001| Fri Feb 22 12:34:59.541 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.542 [conn21] build index test.many { 57: 1.0 } m30001| Fri Feb 22 12:34:59.543 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.543 [conn21] build index test.many { 58: 1.0 } m30001| Fri Feb 22 12:34:59.544 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.544 [conn21] build index test.many { 59: 1.0 } m30001| Fri Feb 22 12:34:59.545 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.546 [conn21] build index test.many { 60: 1.0 } m30001| Fri Feb 22 12:34:59.546 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.547 [conn21] build index test.many { 61: 1.0 } m30001| Fri Feb 22 12:34:59.547 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.548 [conn21] build index test.many { 62: 1.0 } m30001| Fri Feb 22 12:34:59.549 [conn21] build index done. scanned 2 total records. 0.001 secs m30001| Fri Feb 22 12:34:59.550 [conn21] build index test.many { 63: 1.0 } m30001| Fri Feb 22 12:34:59.550 [conn21] build index done. scanned 2 total records. 0 secs m30001| Fri Feb 22 12:34:59.551 [conn21] build index test.many { y: 1.0 } m30001| Fri Feb 22 12:34:59.552 [conn21] build index done. scanned 2 total records. 
0 secs m30001| Fri Feb 22 12:34:59.552 [conn21] add index fails, too many indexes for test.many key:{ 65: 1.0 } m30001| Fri Feb 22 12:34:59.553 [conn21] add index fails, too many indexes for test.many key:{ 66: 1.0 } m30001| Fri Feb 22 12:34:59.553 [conn21] add index fails, too many indexes for test.many key:{ 67: 1.0 } m30001| Fri Feb 22 12:34:59.554 [conn21] add index fails, too many indexes for test.many key:{ 68: 1.0 } m30001| Fri Feb 22 12:34:59.554 [conn21] add index fails, too many indexes for test.many key:{ 69: 1.0 } m30001| Fri Feb 22 12:34:59.719 [conn21] command admin.$cmd command: { renameCollection: "test.many", to: "test.many2", dropTarget: undefined } ntoreturn:1 keyUpdates:0 locks(micros) W:157401 reslen:37 157ms 269ms ******************************************* Test : jstests/queryoptimizer7.js ... m30999| Fri Feb 22 12:34:59.724 [conn1] DROP: test.jstests_queryoptimizer7 m30001| Fri Feb 22 12:34:59.724 [conn21] CMD: drop test.jstests_queryoptimizer7 m30001| Fri Feb 22 12:34:59.725 [conn21] build index test.jstests_queryoptimizer7 { _id: 1 } m30001| Fri Feb 22 12:34:59.726 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:59.726 [conn21] info: creating collection test.jstests_queryoptimizer7 on add index m30001| Fri Feb 22 12:34:59.726 [conn21] build index test.jstests_queryoptimizer7 { a: 1.0 } m30001| Fri Feb 22 12:34:59.726 [conn21] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:59.729 [conn1] DROP: test.jstests_queryoptimizer7 m30001| Fri Feb 22 12:34:59.729 [conn21] CMD: drop test.jstests_queryoptimizer7 m30001| Fri Feb 22 12:34:59.733 [conn21] build index test.jstests_queryoptimizer7 { _id: 1 } m30001| Fri Feb 22 12:34:59.733 [conn21] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:34:59.733 [conn21] info: creating collection test.jstests_queryoptimizer7 on add index m30001| Fri Feb 22 12:34:59.733 [conn21] build index test.jstests_queryoptimizer7 { a: 1.0 } m30001| Fri Feb 22 12:34:59.734 [conn21] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:59.735 [conn1] DROP: test.jstests_queryoptimizer7 m30001| Fri Feb 22 12:34:59.735 [conn21] CMD: drop test.jstests_queryoptimizer7 m30001| Fri Feb 22 12:34:59.739 [conn21] build index test.jstests_queryoptimizer7 { _id: 1 } m30001| Fri Feb 22 12:34:59.739 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:59.739 [conn21] info: creating collection test.jstests_queryoptimizer7 on add index m30001| Fri Feb 22 12:34:59.739 [conn21] build index test.jstests_queryoptimizer7 { a: 1.0 } m30001| Fri Feb 22 12:34:59.740 [conn21] build index done. scanned 0 total records. 0 secs 20ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2exact.js ******************************************* Test : jstests/arrayfind9.js ... m30999| Fri Feb 22 12:34:59.742 [conn1] DROP: test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.742 [conn21] CMD: drop test.jstests_arrayfind9 m30999| Fri Feb 22 12:34:59.742 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765f3881c8e7453916047 m30001| Fri Feb 22 12:34:59.743 [conn21] build index test.jstests_arrayfind9 { _id: 1 } m30999| Fri Feb 22 12:34:59.743 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30001| Fri Feb 22 12:34:59.743 [conn21] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:34:59.744 [conn1] DROP: test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.744 [conn21] CMD: drop test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.747 [conn21] build index test.jstests_arrayfind9 { _id: 1 } m30001| Fri Feb 22 12:34:59.747 [conn21] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:59.747 [conn1] DROP: test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.748 [conn21] CMD: drop test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.750 [conn21] build index test.jstests_arrayfind9 { _id: 1 } m30001| Fri Feb 22 12:34:59.750 [conn21] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:59.751 [conn1] DROP: test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.751 [conn21] CMD: drop test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.754 [conn21] build index test.jstests_arrayfind9 { _id: 1 } m30001| Fri Feb 22 12:34:59.754 [conn21] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:59.755 [conn1] DROP: test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.755 [conn21] CMD: drop test.jstests_arrayfind9 m30001| Fri Feb 22 12:34:59.757 [conn21] build index test.jstests_arrayfind9 { _id: 1 } m30001| Fri Feb 22 12:34:59.758 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:59.758 [conn21] build index test.jstests_arrayfind9 { a: 1.0 } m30001| Fri Feb 22 12:34:59.758 [conn21] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:34:59.759 [conn21] build index test.jstests_arrayfind9 { a.b: 1.0 } m30001| Fri Feb 22 12:34:59.759 [conn21] build index done. scanned 1 total records. 0 secs 20ms ******************************************* Test : jstests/indexm.js ... 
m30999| Fri Feb 22 12:34:59.762 [conn1] DROP: test.jstests_indexm m30001| Fri Feb 22 12:34:59.762 [conn21] CMD: drop test.jstests_indexm m30001| Fri Feb 22 12:34:59.763 [conn21] build index test.jstests_indexm { _id: 1 } m30001| Fri Feb 22 12:34:59.763 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:59.764 [conn21] build index test.jstests_indexm { a: 1.0 } m30001| Fri Feb 22 12:34:59.764 [conn21] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:34:59.766 [conn9] CMD: dropIndexes test.jstests_indexm m30001| Fri Feb 22 12:34:59.768 [conn21] build index test.jstests_indexm { a.x: 1.0 } m30001| Fri Feb 22 12:34:59.769 [conn21] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:34:59.770 [conn9] CMD: dropIndexes test.jstests_indexm 11ms ******************************************* Test : jstests/updateh.js ... m30999| Fri Feb 22 12:34:59.773 [conn1] DROP: test.jstest_updateh m30001| Fri Feb 22 12:34:59.773 [conn21] CMD: drop test.jstest_updateh m30001| Fri Feb 22 12:34:59.774 [conn21] build index test.jstest_updateh { _id: 1 } m30001| Fri Feb 22 12:34:59.774 [conn21] build index done. scanned 0 total records. 0 secs 5ms ******************************************* Test : jstests/pullall.js ... m30999| Fri Feb 22 12:34:59.778 [conn1] DROP: test.jstests_pullall m30001| Fri Feb 22 12:34:59.778 [conn21] CMD: drop test.jstests_pullall m30001| Fri Feb 22 12:34:59.779 [conn21] build index test.jstests_pullall { _id: 1 } m30001| Fri Feb 22 12:34:59.779 [conn21] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:34:59.780 [conn1] DROP: test.jstests_pullall m30001| Fri Feb 22 12:34:59.780 [conn21] CMD: drop test.jstests_pullall m30001| Fri Feb 22 12:34:59.783 [conn21] build index test.jstests_pullall { _id: 1 } m30001| Fri Feb 22 12:34:59.783 [conn21] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:34:59.784 [conn1] DROP: test.jstests_pullall m30001| Fri Feb 22 12:34:59.784 [conn21] CMD: drop test.jstests_pullall m30001| Fri Feb 22 12:34:59.787 [conn21] build index test.jstests_pullall { _id: 1 } m30001| Fri Feb 22 12:34:59.787 [conn21] build index done. scanned 0 total records. 0 secs 11ms ******************************************* Test : jstests/set4.js ... m30999| Fri Feb 22 12:34:59.789 [conn1] DROP: test.set4 m30001| Fri Feb 22 12:34:59.789 [conn21] CMD: drop test.set4 m30001| Fri Feb 22 12:34:59.790 [conn21] build index test.set4 { _id: 1 } m30001| Fri Feb 22 12:34:59.790 [conn21] build index done. scanned 0 total records. 0 secs 4ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped_max.js ******************************************* Test : jstests/or9.js ... m30999| Fri Feb 22 12:34:59.793 [conn1] DROP: test.jstests_or9 m30001| Fri Feb 22 12:34:59.793 [conn21] CMD: drop test.jstests_or9 m30001| Fri Feb 22 12:34:59.794 [conn21] build index test.jstests_or9 { _id: 1 } m30001| Fri Feb 22 12:34:59.794 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:34:59.794 [conn21] info: creating collection test.jstests_or9 on add index m30001| Fri Feb 22 12:34:59.794 [conn21] build index test.jstests_or9 { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:34:59.795 [conn21] build index done. scanned 0 total records. 0 secs 37ms ******************************************* Test : jstests/index_big1.js ... m30999| Fri Feb 22 12:34:59.836 [conn1] DROP: test.index_big1 m30001| Fri Feb 22 12:34:59.836 [conn21] CMD: drop test.index_big1 m30001| Fri Feb 22 12:34:59.837 [conn21] build index test.index_big1 { _id: 1 } m30001| Fri Feb 22 12:34:59.837 [conn21] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:00.117 [conn21] build index test.index_big1 { a: 1.0, x: 1.0 } m30001| Fri Feb 22 12:35:00.158 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1025 { : 1002.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } [the same "key too large to index, skipping test.index_big1.$a_1_x_1" warning repeats for entries 1026 through 1107 (keys 1003.5 through 1084.5), each with an identical oversized "xxx..." value] m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1108 { : 1085.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1109 { : 1086.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1110 { : 1087.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1111 { : 1088.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1112 { : 1089.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1113 { : 1090.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1114 { : 1091.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1115 { : 1092.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1116 { : 1093.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1117 { : 1094.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1118 { : 1095.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1119 { : 1096.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1120 { : 1097.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1121 { : 1098.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1122 { : 1099.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1123 { : 1100.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1124 { : 1101.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1125 { : 1102.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1126 { : 1103.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1127 { : 1104.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1128 { : 1105.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1129 { : 1106.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1130 { : 1107.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1131 { : 1108.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1132 { : 1109.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1133 { : 1110.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1134 { : 1111.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1135 { : 1112.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1136 { : 1113.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1137 { : 1114.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1138 { : 1115.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1139 { : 1116.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1140 { : 1117.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.160 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1141 { : 1118.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1142 { : 1119.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1143 { : 1120.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1144 { : 1121.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1145 { : 1122.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1146 { : 1123.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1147 { : 1124.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1148 { : 1125.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1149 { : 1126.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1150 { : 1127.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1151 { : 1128.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1152 { : 1129.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1153 { : 1130.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1154 { : 1131.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1155 { : 1132.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1156 { : 1133.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1157 { : 1134.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1158 { : 1135.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1159 { : 1136.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1160 { : 1137.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1161 { : 1138.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1162 { : 1139.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1163 { : 1140.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1164 { : 1141.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1165 { : 1142.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1166 { : 1143.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1167 { : 1144.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1168 { : 1145.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1169 { : 1146.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1170 { : 1147.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1171 { : 1148.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1172 { : 1149.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1173 { : 1150.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1174 { : 1151.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1175 { : 1152.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1176 { : 1153.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1177 { : 1154.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1178 { : 1155.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1179 { : 1156.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1180 { : 1157.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1181 { : 1158.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1182 { : 1159.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1183 { : 1160.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1184 { : 1161.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1185 { : 1162.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1186 { : 1163.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1187 { : 1164.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1188 { : 1165.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1189 { : 1166.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.161 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1190 { : 1167.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.162 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1191 { : 1168.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.162 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1192 { : 1169.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.162 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1193 { : 1170.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
[... 130 further identical "Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1" warnings elided (keys 1194-1323, a: 1171.5-1300.5, all logged by conn21 between 12:35:00.162 and 12:35:00.164) ...]
m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1324 { : 1301.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...
} m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1325 { : 1302.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1326 { : 1303.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1327 { : 1304.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1328 { : 1305.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1329 { : 1306.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1330 { : 1307.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1331 { : 1308.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1332 { : 1309.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1333 { : 1310.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1334 { : 1311.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1335 { : 1312.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1336 { : 1313.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1337 { : 1314.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1338 { : 1315.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1339 { : 1316.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1340 { : 1317.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1341 { : 1318.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1342 { : 1319.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.164 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1343 { : 1320.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1344 { : 1321.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1345 { : 1322.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1346 { : 1323.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1347 { : 1324.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1348 { : 1325.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1349 { : 1326.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1350 { : 1327.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1351 { : 1328.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1352 { : 1329.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1353 { : 1330.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1354 { : 1331.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1355 { : 1332.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1356 { : 1333.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1357 { : 1334.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1358 { : 1335.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1359 { : 1336.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1360 { : 1337.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1361 { : 1338.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1362 { : 1339.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1363 { : 1340.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1364 { : 1341.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1365 { : 1342.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1366 { : 1343.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1367 { : 1344.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1368 { : 1345.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1369 { : 1346.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1370 { : 1347.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1371 { : 1348.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1372 { : 1349.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1373 { : 1350.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1374 { : 1351.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1375 { : 1352.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1376 { : 1353.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1377 { : 1354.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1378 { : 1355.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1379 { : 1356.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1380 { : 1357.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1381 { : 1358.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1382 { : 1359.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1383 { : 1360.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1384 { : 1361.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1385 { : 1362.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1386 { : 1363.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1387 { : 1364.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1388 { : 1365.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1389 { : 1366.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1390 { : 1367.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1391 { : 1368.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1392 { : 1369.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.165 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1393 { : 1370.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1394 { : 1371.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1395 { : 1372.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1396 { : 1373.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1397 { : 1374.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1398 { : 1375.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1399 { : 1376.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1400 { : 1377.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1401 { : 1378.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1402 { : 1379.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1403 { : 1380.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1404 { : 1381.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1405 { : 1382.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1406 { : 1383.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1407 { : 1384.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1408 { : 1385.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
}
m30001| Fri Feb 22 12:35:00.166 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1409 { : 1386.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
[identical "key too large to index, skipping test.index_big1.$a_1_x_1" warning repeated for entries 1410 through 1539, key values 1387.5 through 1516.5, between 12:35:00.166 and 12:35:00.168]
m30001| Fri Feb 22 12:35:00.168 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1540 { : 1517.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.168 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1541 { : 1518.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.168 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1542 { : 1519.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.168 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1543 { : 1520.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1544 { : 1521.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1545 { : 1522.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1546 { : 1523.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1547 { : 1524.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1548 { : 1525.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1549 { : 1526.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1550 { : 1527.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1551 { : 1528.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1552 { : 1529.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1553 { : 1530.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1554 { : 1531.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1555 { : 1532.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1556 { : 1533.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1557 { : 1534.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1558 { : 1535.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1559 { : 1536.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1560 { : 1537.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1561 { : 1538.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1562 { : 1539.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1563 { : 1540.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1564 { : 1541.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1565 { : 1542.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1566 { : 1543.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1567 { : 1544.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1568 { : 1545.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1569 { : 1546.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1570 { : 1547.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1571 { : 1548.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1572 { : 1549.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1573 { : 1550.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1574 { : 1551.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1575 { : 1552.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1576 { : 1553.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1577 { : 1554.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1578 { : 1555.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1579 { : 1556.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1580 { : 1557.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1581 { : 1558.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1582 { : 1559.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1583 { : 1560.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1584 { : 1561.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1585 { : 1562.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1586 { : 1563.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1587 { : 1564.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1588 { : 1565.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1589 { : 1566.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1590 { : 1567.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1591 { : 1568.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1592 { : 1569.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.169 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1593 { : 1570.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1594 { : 1571.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1595 { : 1572.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1596 { : 1573.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1597 { : 1574.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1598 { : 1575.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1599 { : 1576.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1600 { : 1577.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1601 { : 1578.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1602 { : 1579.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1603 { : 1580.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1604 { : 1581.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1605 { : 1582.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1606 { : 1583.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1607 { : 1584.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1608 { : 1585.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1609 { : 1586.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1610 { : 1587.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1611 { : 1588.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1612 { : 1589.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1613 { : 1590.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1614 { : 1591.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1615 { : 1592.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1616 { : 1593.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1617 { : 1594.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1618 { : 1595.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1619 { : 1596.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1620 { : 1597.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1621 { : 1598.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1622 { : 1599.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1623 { : 1600.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1624 { : 1601.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.170 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1625 { : 1602.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
[... the same "Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1" warning repeats on m30001/conn21 for counters 1626 through 1756 (key values 1603.5 through 1733.5), timestamps 12:35:00.170-12:35:00.173 ...]
} m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1757 { : 1734.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1758 { : 1735.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1759 { : 1736.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1760 { : 1737.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1761 { : 1738.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1762 { : 1739.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1763 { : 1740.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1764 { : 1741.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1765 { : 1742.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1766 { : 1743.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1767 { : 1744.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1768 { : 1745.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1769 { : 1746.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1770 { : 1747.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1771 { : 1748.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1772 { : 1749.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1773 { : 1750.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1774 { : 1751.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1775 { : 1752.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1776 { : 1753.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1777 { : 1754.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1778 { : 1755.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1779 { : 1756.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1780 { : 1757.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1781 { : 1758.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1782 { : 1759.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1783 { : 1760.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1784 { : 1761.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1785 { : 1762.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1786 { : 1763.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1787 { : 1764.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1788 { : 1765.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1789 { : 1766.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1790 { : 1767.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1791 { : 1768.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.173 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1792 { : 1769.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1793 { : 1770.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1794 { : 1771.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1795 { : 1772.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1796 { : 1773.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1797 { : 1774.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1798 { : 1775.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1799 { : 1776.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1800 { : 1777.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1801 { : 1778.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1802 { : 1779.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1803 { : 1780.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1804 { : 1781.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1805 { : 1782.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1806 { : 1783.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1807 { : 1784.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1808 { : 1785.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1809 { : 1786.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1810 { : 1787.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1811 { : 1788.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1812 { : 1789.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1813 { : 1790.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1814 { : 1791.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1815 { : 1792.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1816 { : 1793.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1817 { : 1794.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1818 { : 1795.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1819 { : 1796.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1820 { : 1797.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1821 { : 1798.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1822 { : 1799.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1823 { : 1800.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1824 { : 1801.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1825 { : 1802.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1826 { : 1803.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1827 { : 1804.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1828 { : 1805.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1829 { : 1806.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1830 { : 1807.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1831 { : 1808.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1832 { : 1809.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1833 { : 1810.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1834 { : 1811.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1835 { : 1812.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.174 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1836 { : 1813.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.175 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1837 { : 1814.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.175 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1838 { : 1815.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.175 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1839 { : 1816.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.175 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1840 { : 1817.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.175 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1841 { : 1818.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
[... identical "key too large to index, skipping test.index_big1.$a_1_x_1" warnings repeated for keys 1842 through 1971 (values 1819.5 through 1948.5), logged between 12:35:00.175 and 12:35:00.177, omitted ...]
m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1972 { : 1949.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..."
} m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1973 { : 1950.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1974 { : 1951.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1975 { : 1952.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1976 { : 1953.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1977 { : 1954.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1978 { : 1955.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1979 { : 1956.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1980 { : 1957.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1981 { : 1958.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1982 { : 1959.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1983 { : 1960.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1984 { : 1961.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1985 { : 1962.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1986 { : 1963.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1987 { : 1964.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1988 { : 1965.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1989 { : 1966.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.177 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1990 { : 1967.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1991 { : 1968.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1992 { : 1969.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1993 { : 1970.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1994 { : 1971.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1995 { : 1972.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1996 { : 1973.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1997 { : 1974.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1998 { : 1975.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 1999 { : 1976.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2000 { : 1977.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2001 { : 1978.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2002 { : 1979.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2003 { : 1980.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2004 { : 1981.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2005 { : 1982.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2006 { : 1983.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2007 { : 1984.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2008 { : 1985.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2009 { : 1986.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2010 { : 1987.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2011 { : 1988.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2012 { : 1989.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2013 { : 1990.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2014 { : 1991.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2015 { : 1992.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2016 { : 1993.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2017 { : 1994.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2018 { : 1995.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2019 { : 1996.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2020 { : 1997.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2021 { : 1998.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2022 { : 1999.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2023 { : 2000.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2024 { : 2001.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2025 { : 2002.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2026 { : 2003.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2027 { : 2004.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2028 { : 2005.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2029 { : 2006.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2030 { : 2007.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2031 { : 2008.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2032 { : 2009.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2033 { : 2010.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2034 { : 2011.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2035 { : 2012.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.178 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2036 { : 2013.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2037 { : 2014.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2038 { : 2015.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2039 { : 2016.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2040 { : 2017.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2041 { : 2018.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2042 { : 2019.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2043 { : 2020.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2044 { : 2021.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2045 { : 2022.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2046 { : 2023.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2047 { : 2024.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2048 { : 2025.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2049 { : 2026.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2050 { : 2027.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2051 { : 2028.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2052 { : 2029.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2053 { : 2030.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2054 { : 2031.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2055 { : 2032.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2056 { : 2033.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.179 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2057 { : 2034.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } 
[ ... 130 similar "key too large to index, skipping test.index_big1.$a_1_x_1" entries elided (entry numbers 2058 through 2187, key values 2035.5 through 2164.5, timestamps 12:35:00.179 through 12:35:00.182) ... ] 
m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2188 { : 2165.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2189 { : 2166.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2190 { : 2167.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2191 { : 2168.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2192 { : 2169.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2193 { : 2170.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2194 { : 2171.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2195 { : 2172.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2196 { : 2173.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2197 { : 2174.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2198 { : 2175.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2199 { : 2176.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2200 { : 2177.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2201 { : 2178.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2202 { : 2179.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2203 { : 2180.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2204 { : 2181.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2205 { : 2182.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2206 { : 2183.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2207 { : 2184.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2208 { : 2185.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2209 { : 2186.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2210 { : 2187.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2211 { : 2188.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2212 { : 2189.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2213 { : 2190.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2214 { : 2191.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2215 { : 2192.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2216 { : 2193.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2217 { : 2194.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2218 { : 2195.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2219 { : 2196.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2220 { : 2197.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2221 { : 2198.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2222 { : 2199.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2223 { : 2200.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2224 { : 2201.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.182 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2225 { : 2202.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2226 { : 2203.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2227 { : 2204.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2228 { : 2205.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2229 { : 2206.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2230 { : 2207.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2231 { : 2208.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2232 { : 2209.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2233 { : 2210.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2234 { : 2211.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2235 { : 2212.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2236 { : 2213.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2237 { : 2214.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2238 { : 2215.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2239 { : 2216.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2240 { : 2217.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2241 { : 2218.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2242 { : 2219.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2243 { : 2220.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2244 { : 2221.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2245 { : 2222.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2246 { : 2223.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2247 { : 2224.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2248 { : 2225.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2249 { : 2226.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2250 { : 2227.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2251 { : 2228.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2252 { : 2229.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2253 { : 2230.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2254 { : 2231.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2255 { : 2232.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2256 { : 2233.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2257 { : 2234.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2258 { : 2235.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2259 { : 2236.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2260 { : 2237.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2261 { : 2238.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2262 { : 2239.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2263 { : 2240.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2264 { : 2241.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2265 { : 2242.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2266 { : 2243.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2267 { : 2244.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2268 { : 2245.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2269 { : 2246.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2270 { : 2247.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2271 { : 2248.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2272 { : 2249.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.183 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2273 { : 2250.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
m30001| [ same "Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1" warning repeated for keys 2274 through 2404 (a: 2251.5 through 2381.5), timestamps 12:35:00.183 through 12:35:00.186 ]
} m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2405 { : 2382.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2406 { : 2383.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2407 { : 2384.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2408 { : 2385.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2409 { : 2386.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2410 { : 2387.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2411 { : 2388.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2412 { : 2389.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2413 { : 2390.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2414 { : 2391.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2415 { : 2392.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2416 { : 2393.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2417 { : 2394.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2418 { : 2395.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2419 { : 2396.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2420 { : 2397.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2421 { : 2398.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2422 { : 2399.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2423 { : 2400.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.186 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2424 { : 2401.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2425 { : 2402.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2426 { : 2403.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2427 { : 2404.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2428 { : 2405.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2429 { : 2406.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2430 { : 2407.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2431 { : 2408.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2432 { : 2409.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2433 { : 2410.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2434 { : 2411.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2435 { : 2412.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2436 { : 2413.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2437 { : 2414.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2438 { : 2415.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2439 { : 2416.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2440 { : 2417.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2441 { : 2418.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2442 { : 2419.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2443 { : 2420.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2444 { : 2421.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2445 { : 2422.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2446 { : 2423.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2447 { : 2424.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2448 { : 2425.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2449 { : 2426.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2450 { : 2427.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2451 { : 2428.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2452 { : 2429.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2453 { : 2430.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2454 { : 2431.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2455 { : 2432.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2456 { : 2433.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2457 { : 2434.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2458 { : 2435.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2459 { : 2436.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2460 { : 2437.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2461 { : 2438.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2462 { : 2439.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2463 { : 2440.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2464 { : 2441.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2465 { : 2442.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2466 { : 2443.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2467 { : 2444.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2468 { : 2445.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.187 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2469 { : 2446.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2470 { : 2447.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2471 { : 2448.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2472 { : 2449.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2473 { : 2450.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2474 { : 2451.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2475 { : 2452.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2476 { : 2453.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2477 { : 2454.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2478 { : 2455.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2479 { : 2456.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2480 { : 2457.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2481 { : 2458.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2482 { : 2459.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2483 { : 2460.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2484 { : 2461.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2485 { : 2462.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2486 { : 2463.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2487 { : 2464.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2488 { : 2465.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.188 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2489 { : 2466.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
m30001| [ last message repeated for keys 2490-2619 (a-values 2467.5-2596.5), Fri Feb 22 12:35:00.188-12:35:00.190 ]
m30001| Fri Feb 22 12:35:00.190 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2620 { : 2597.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2621 { : 2598.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2622 { : 2599.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2623 { : 2600.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2624 { : 2601.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2625 { : 2602.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2626 { : 2603.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2627 { : 2604.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2628 { : 2605.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2629 { : 2606.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2630 { : 2607.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2631 { : 2608.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2632 { : 2609.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2633 { : 2610.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2634 { : 2611.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2635 { : 2612.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2636 { : 2613.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2637 { : 2614.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2638 { : 2615.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2639 { : 2616.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2640 { : 2617.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2641 { : 2618.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2642 { : 2619.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2643 { : 2620.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2644 { : 2621.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2645 { : 2622.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2646 { : 2623.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2647 { : 2624.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2648 { : 2625.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2649 { : 2626.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2650 { : 2627.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2651 { : 2628.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2652 { : 2629.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2653 { : 2630.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2654 { : 2631.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2655 { : 2632.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2656 { : 2633.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2657 { : 2634.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2658 { : 2635.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2659 { : 2636.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2660 { : 2637.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2661 { : 2638.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2662 { : 2639.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2663 { : 2640.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2664 { : 2641.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2665 { : 2642.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.191 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2666 { : 2643.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2667 { : 2644.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2668 { : 2645.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2669 { : 2646.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2670 { : 2647.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2671 { : 2648.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2672 { : 2649.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2673 { : 2650.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2674 { : 2651.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2675 { : 2652.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2676 { : 2653.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2677 { : 2654.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2678 { : 2655.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2679 { : 2656.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2680 { : 2657.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2681 { : 2658.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2682 { : 2659.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2683 { : 2660.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2684 { : 2661.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2685 { : 2662.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2686 { : 2663.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2687 { : 2664.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2688 { : 2665.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2689 { : 2666.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2690 { : 2667.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2691 { : 2668.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2692 { : 2669.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2693 { : 2670.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2694 { : 2671.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2695 { : 2672.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2696 { : 2673.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2697 { : 2674.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2698 { : 2675.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2699 { : 2676.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2700 { : 2677.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2701 { : 2678.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2702 { : 2679.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2703 { : 2680.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2704 { : 2681.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.192 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2705 { : 2682.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } 
[ ... 130 further identical "Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1" warnings from conn21 (keys 2706 through 2835, values 2683.5 through 2812.5, timestamps 12:35:00.192-12:35:00.195) elided ... ]
m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2836 { : 2813.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2837 { : 2814.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2838 { : 2815.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2839 { : 2816.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2840 { : 2817.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2841 { : 2818.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2842 { : 2819.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2843 { : 2820.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2844 { : 2821.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2845 { : 2822.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2846 { : 2823.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2847 { : 2824.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2848 { : 2825.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2849 { : 2826.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2850 { : 2827.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2851 { : 2828.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2852 { : 2829.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2853 { : 2830.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2854 { : 2831.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2855 { : 2832.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.195 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2856 { : 2833.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2857 { : 2834.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2858 { : 2835.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2859 { : 2836.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2860 { : 2837.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2861 { : 2838.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2862 { : 2839.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2863 { : 2840.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2864 { : 2841.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2865 { : 2842.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2866 { : 2843.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2867 { : 2844.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2868 { : 2845.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2869 { : 2846.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2870 { : 2847.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2871 { : 2848.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2872 { : 2849.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2873 { : 2850.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2874 { : 2851.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2875 { : 2852.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2876 { : 2853.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2877 { : 2854.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2878 { : 2855.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2879 { : 2856.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2880 { : 2857.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2881 { : 2858.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2882 { : 2859.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2883 { : 2860.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2884 { : 2861.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2885 { : 2862.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2886 { : 2863.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2887 { : 2864.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2888 { : 2865.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2889 { : 2866.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2890 { : 2867.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2891 { : 2868.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2892 { : 2869.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2893 { : 2870.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.196 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2894 { : 2871.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2895 { : 2872.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2896 { : 2873.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2897 { : 2874.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2898 { : 2875.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2899 { : 2876.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2900 { : 2877.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2901 { : 2878.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2902 { : 2879.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2903 { : 2880.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2904 { : 2881.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2905 { : 2882.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2906 { : 2883.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2907 { : 2884.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2908 { : 2885.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2909 { : 2886.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2910 { : 2887.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2911 { : 2888.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2912 { : 2889.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2913 { : 2890.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2914 { : 2891.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2915 { : 2892.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2916 { : 2893.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2917 { : 2894.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2918 { : 2895.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2919 { : 2896.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2920 { : 2897.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2921 { : 2898.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2922 { : 2899.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2923 { : 2900.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2924 { : 2901.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2925 { : 2902.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.197 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 2926 { : 2903.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3053 { : 3030.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3054 { : 3031.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3055 { : 3032.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3056 { : 3033.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3057 { : 3034.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3058 { : 3035.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3059 { : 3036.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3060 { : 3037.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3061 { : 3038.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3062 { : 3039.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3063 { : 3040.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3064 { : 3041.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3065 { : 3042.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3066 { : 3043.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3067 { : 3044.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3068 { : 3045.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3069 { : 3046.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3070 { : 3047.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.200 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3071 { : 3048.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3072 { : 3049.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3073 { : 3050.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3074 { : 3051.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3075 { : 3052.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3076 { : 3053.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3077 { : 3054.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3078 { : 3055.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3079 { : 3056.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3080 { : 3057.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3081 { : 3058.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3082 { : 3059.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3083 { : 3060.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3084 { : 3061.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3085 { : 3062.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3086 { : 3063.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3087 { : 3064.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3088 { : 3065.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3089 { : 3066.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3090 { : 3067.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3091 { : 3068.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3092 { : 3069.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3093 { : 3070.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3094 { : 3071.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3095 { : 3072.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3096 { : 3073.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3097 { : 3074.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3098 { : 3075.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3099 { : 3076.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3100 { : 3077.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.201 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3101 { : 3078.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3102 { : 3079.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3103 { : 3080.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3104 { : 3081.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3105 { : 3082.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3106 { : 3083.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3107 { : 3084.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3108 { : 3085.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3109 { : 3086.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3110 { : 3087.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3111 { : 3088.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3112 { : 3089.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3113 { : 3090.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3114 { : 3091.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3115 { : 3092.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3116 { : 3093.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3117 { : 3094.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3118 { : 3095.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3119 { : 3096.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3120 { : 3097.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3121 { : 3098.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3122 { : 3099.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3123 { : 3100.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3124 { : 3101.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3125 { : 3102.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3126 { : 3103.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3127 { : 3104.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3128 { : 3105.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3129 { : 3106.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3130 { : 3107.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3131 { : 3108.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3132 { : 3109.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3133 { : 3110.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3134 { : 3111.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3135 { : 3112.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3136 { : 3113.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." 
} m30001| Fri Feb 22 12:35:00.202 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3137 { : 3114.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
[... the same "Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1" entry repeats for keys 3138 through 3221 ( : 3115.5 through : 3198.5 ), timestamps 12:35:00.202 to 12:35:00.204 ...]
m30001| Fri Feb 22 12:35:00.204 [conn21] test.system.indexes Btree::insert: key too large to index, skipping test.index_big1.$a_1_x_1 3222 { : 3199.5, : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." }
m30001| Fri Feb 22 12:35:00.205 [conn21] warning: not all entries were added to the index, probably some keys were too large
m30001| Fri Feb 22 12:35:00.206 [conn21] build index done. scanned 3200 total records. 0.088 secs
m30001| Fri Feb 22 12:35:00.460 [conn23] end connection 127.0.0.1:52552 (6 connections now open)
m30000| Fri Feb 22 12:35:00.460 [conn16] end connection 127.0.0.1:61425 (8 connections now open)
1751ms
*******************************************
Test : jstests/index5.js ...
m30999| Fri Feb 22 12:35:01.582 [conn1] DROP: test.index5
m30001| Fri Feb 22 12:35:01.582 [conn21] CMD: drop test.index5
m30001| Fri Feb 22 12:35:01.582 [conn21] build index test.index5 { _id: 1 }
m30001| Fri Feb 22 12:35:01.583 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:01.585 [conn21] build index test.index5 { a: -1.0 }
m30001| Fri Feb 22 12:35:01.585 [conn21] build index done. scanned 2 total records. 0 secs
6ms
*******************************************
Test : jstests/extent2.js ...
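The long run of "key too large to index, skipping" lines above shows the 2.4-era mongod silently dropping index entries whose BSON key exceeds the key-size limit, then finishing the build with a warning. A minimal sketch of that size check, in Python for illustration only: the 1024-byte `KEY_MAX_BYTES` limit and the BSON size formula are assumptions for this server era, not the actual mongod source.

```python
# Hypothetical sketch (not MongoDB source code): approximate the size of
# the compound index key { "": a, "": x } produced by an { a: 1, x: 1 }
# index, and skip it when it exceeds an assumed ~1 KB key limit.

KEY_MAX_BYTES = 1024  # assumed index-key limit for this mongod era

def bson_key_size(a: float, x: str) -> int:
    """Rough BSON size of { "": a, "": x }:
    4-byte document length
    + double element: type byte + empty-name NUL + 8-byte value
    + string element: type byte + empty-name NUL + 4-byte length
      prefix + UTF-8 bytes + trailing NUL
    + final document NUL."""
    double_elem = 1 + 1 + 8
    string_elem = 1 + 1 + 4 + len(x.encode("utf-8")) + 1
    return 4 + double_elem + string_elem + 1

def index_insert_ok(a: float, x: str) -> bool:
    """True if the key fits; False means 'key too large, skipping'."""
    return bson_key_size(a, x) <= KEY_MAX_BYTES

# Documents like index_big1.js inserts: x grows one character per doc,
# so every document past a threshold is skipped by the index build.
print(index_insert_ok(10.5, "x" * 100))     # small key: indexed
print(index_insert_ok(3114.5, "x" * 1500))  # oversized key: skipped
```

Under these assumptions, a fixed cutoff in the growing `x` field explains why the skipped keys form one contiguous run (3137 through 3222) at the end of the build.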
m30999| Fri Feb 22 12:35:01.588 [conn1] couldn't find database [test_extent2] in config db
m30999| Fri Feb 22 12:35:01.589 [conn1] put [test_extent2] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:01.589 [conn1] DROP DATABASE: test_extent2
m30999| Fri Feb 22 12:35:01.589 [conn1] erased database test_extent2 from local registry
m30999| Fri Feb 22 12:35:01.590 [conn1] DBConfig::dropDatabase: test_extent2
m30999| Fri Feb 22 12:35:01.590 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:01-512765f5881c8e7453916048", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536501590), what: "dropDatabase.start", ns: "test_extent2", details: {} }
m30999| Fri Feb 22 12:35:01.591 [conn1] DBConfig::dropDatabase: test_extent2 dropped sharded collections: 0
m30000| Fri Feb 22 12:35:01.591 [conn5] dropDatabase test_extent2 starting
m30000| Fri Feb 22 12:35:01.616 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:35:01.617 [conn5] dropDatabase test_extent2 finished
m30999| Fri Feb 22 12:35:01.617 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:01-512765f5881c8e7453916049", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536501617), what: "dropDatabase", ns: "test_extent2", details: {} }
m30999| Fri Feb 22 12:35:01.618 [conn1] couldn't find database [test_extent2] in config db
m30999| Fri Feb 22 12:35:01.619 [conn1] put [test_extent2] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:35:01.619 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/test_extent2.ns, filling with zeroes...
m30000| Fri Feb 22 12:35:01.619 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/test_extent2.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:35:01.619 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/test_extent2.0, filling with zeroes...
m30000| Fri Feb 22 12:35:01.620 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/test_extent2.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:35:01.620 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/test_extent2.1, filling with zeroes... m30000| Fri Feb 22 12:35:01.620 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/test_extent2.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:35:01.624 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.625 [conn6] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:35:01.625 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.626 [conn6] build index done. scanned 3 total records. 0.001 secs m30999| Fri Feb 22 12:35:01.627 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.627 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.631 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.631 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.632 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.632 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.632 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.632 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.633 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.633 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.634 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.634 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.634 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.634 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.635 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.635 [conn6] build index done. 
scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.636 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.636 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.636 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.636 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.637 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.637 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.638 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.638 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.638 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.638 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.639 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.639 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.640 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.640 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.640 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.640 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.641 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.641 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.642 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.642 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.642 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.642 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.643 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.644 [conn6] build index done. scanned 0 total records. 
0 secs m30000| Fri Feb 22 12:35:01.644 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.644 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.644 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.644 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.645 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.646 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.646 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.646 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.646 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.646 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.647 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.647 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.648 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.648 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.648 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.648 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.649 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.649 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.650 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.650 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.650 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.650 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.651 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.651 [conn6] build index done. scanned 0 total records. 
0 secs
m30999| Fri Feb 22 12:35:01.759 [conn1] DROP: test_extent2.foo
m30000| Fri Feb 22 12:35:01.759 [conn6] CMD: drop test_extent2.foo
m30000| Fri Feb 22 12:35:01.759 [conn6] info DFM::findAll(): extent 0:3000 was empty, skipping ahead. ns:test_extent2.system.indexes
m30000| Fri Feb 22 12:35:01.759 [conn6] build index test_extent2.foo { _id: 1 }
m30000| Fri Feb 22 12:35:01.760 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:35:01.760 [conn6] build index test_extent2.foo { x: 1.0 }
m30000| Fri Feb 22 12:35:01.760 [conn6] build index done. scanned 3 total records. 0 secs
0 secs m30000| Fri Feb 22 12:35:01.818 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.818 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.818 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.818 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.819 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.820 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.820 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.820 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.821 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.821 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.822 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.822 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.822 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.823 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.823 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.823 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.824 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.824 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.824 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.825 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.825 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.825 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.826 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.827 [conn6] build index done. scanned 0 total records. 
0 secs m30000| Fri Feb 22 12:35:01.827 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.827 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.827 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.828 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.828 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.829 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.829 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.829 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.830 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.830 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.830 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.831 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.831 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.831 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.832 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.832 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.833 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.833 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:35:01.833 [conn6] build index test_extent2.foo { x: 1.0 } m30000| Fri Feb 22 12:35:01.834 [conn6] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:35:01.834 [conn1] DROP: test_extent2.foo m30000| Fri Feb 22 12:35:01.834 [conn6] CMD: drop test_extent2.foo m30000| Fri Feb 22 12:35:01.835 [conn6] build index test_extent2.foo { _id: 1 } m30000| Fri Feb 22 12:35:01.835 [conn6] build index done. scanned 0 total records. 
0 secs
m30000| Fri Feb 22 12:35:01.836 [conn6] build index test_extent2.foo { x: 1.0 }
m30000| Fri Feb 22 12:35:01.836 [conn6] build index done. scanned 3 total records. 0 secs
m30999| Fri Feb 22 12:35:01.836 [conn1] DROP: test_extent2.foo
m30000| Fri Feb 22 12:35:01.836 [conn6] CMD: drop test_extent2.foo
m30000| Fri Feb 22 12:35:01.837 [conn6] build index test_extent2.foo { _id: 1 }
m30000| Fri Feb 22 12:35:01.838 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:35:01.838 [conn6] build index test_extent2.foo { x: 1.0 }
m30000| Fri Feb 22 12:35:01.839 [conn6] build index done. scanned 3 total records. 0 secs
m30999| Fri Feb 22 12:35:01.839 [conn1] DROP: test_extent2.foo
m30000| Fri Feb 22 12:35:01.839 [conn6] CMD: drop test_extent2.foo
{ "sharded" : false, "primary" : "shard0000", "ns" : "test_extent2.$freelist", "count" : 0, "size" : 0, "storageSize" : 86016, "numExtents" : 4, "nindexes" : 0, "lastExtentSize" : 8192, "paddingFactor" : 1, "systemFlags" : 0, "userFlags" : 0, "totalIndexSize" : 0, "indexSizes" : { }, "ok" : 1 }
{ "sharded" : false, "primary" : "shard0000", "ns" : "test_extent2.$freelist", "count" : 0, "size" : 0, "storageSize" : 86016, "numExtents" : 4, "nindexes" : 0, "lastExtentSize" : 8192, "paddingFactor" : 1, "systemFlags" : 0, "userFlags" : 0, "totalIndexSize" : 0, "indexSizes" : { }, "ok" : 1 }
254ms
******************************************* Test : jstests/index_check2.js ...
m30999| Fri Feb 22 12:35:01.846 [conn1] DROP: test.index_check2
m30001| Fri Feb 22 12:35:01.846 [conn21] CMD: drop test.index_check2
m30001| Fri Feb 22 12:35:01.847 [conn21] build index test.index_check2 { _id: 1 }
m30001| Fri Feb 22 12:35:01.848 [conn21] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:01.921 [conn21] build index test.index_check2 { tags: 1.0 }
m30001| Fri Feb 22 12:35:01.999 [conn21] build index done. scanned 1000 total records.
0.077 secs 172ms ******************************************* Test : jstests/mr_bigobject.js ... m30999| Fri Feb 22 12:35:02.019 [conn1] DROP: test.mr_bigobject m30001| Fri Feb 22 12:35:02.020 [conn21] CMD: drop test.mr_bigobject m30001| Fri Feb 22 12:35:02.110 [FileAllocator] allocating new datafile /data/db/sharding_passthrough1/test.5, filling with zeroes... m30001| Fri Feb 22 12:35:02.110 [conn21] build index test.mr_bigobject { _id: 1 } m30001| Fri Feb 22 12:35:02.110 [FileAllocator] done allocating datafile /data/db/sharding_passthrough1/test.5, size: 2047MB, took 0 secs m30001| Fri Feb 22 12:35:02.111 [conn21] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:02.282 [conn21] CMD: drop test.tmp.mr.mr_bigobject_33 m30001| Fri Feb 22 12:35:02.283 [conn21] CMD: drop test.tmp.mr.mr_bigobject_33_inc m30001| Fri Feb 22 12:35:02.283 [conn21] build index test.tmp.mr.mr_bigobject_33_inc { 0: 1 } m30001| Fri Feb 22 12:35:02.284 [conn21] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:02.285 [conn21] build index test.tmp.mr.mr_bigobject_33 { _id: 1 } m30001| Fri Feb 22 12:35:02.285 [conn21] build index done. scanned 0 total records. 
0 secs
m30001| Fri Feb 22 12:35:02.362 [conn21] JavaScript execution failed: Error: an emit can't be more than half max bson size near 'n ()' (line 2)
m30001| Fri Feb 22 12:35:02.363 [conn21] CMD: drop test.tmp.mr.mr_bigobject_33
m30001| Fri Feb 22 12:35:02.366 [conn21] CMD: drop test.tmp.mr.mr_bigobject_33_inc
m30001| Fri Feb 22 12:35:02.369 [conn21] mr failed, removing collection :: caused by :: 16722 JavaScript execution failed: Error: an emit can't be more than half max bson size near 'n ()' (line 2)
m30001| Fri Feb 22 12:35:02.369 [conn21] CMD: drop test.tmp.mr.mr_bigobject_33
m30001| Fri Feb 22 12:35:02.369 [conn21] CMD: drop test.tmp.mr.mr_bigobject_33_inc
m30001| Fri Feb 22 12:35:02.410 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34
m30001| Fri Feb 22 12:35:02.410 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34_inc
m30001| Fri Feb 22 12:35:02.411 [conn21] build index test.tmp.mr.mr_bigobject_34_inc { 0: 1 }
m30001| Fri Feb 22 12:35:02.411 [conn21] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:02.411 [conn21] build index test.tmp.mr.mr_bigobject_34 { _id: 1 }
m30001| Fri Feb 22 12:35:02.412 [conn21] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:35:02.828 [conn21] CMD: drop test.mr_bigobject_out m30001| Fri Feb 22 12:35:02.834 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34 m30001| Fri Feb 22 12:35:02.834 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34 m30001| Fri Feb 22 12:35:02.834 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34_inc m30001| Fri Feb 22 12:35:02.836 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34 m30001| Fri Feb 22 12:35:02.837 [conn21] CMD: drop test.tmp.mr.mr_bigobject_34_inc m30001| Fri Feb 22 12:35:02.837 [conn21] command test.$cmd command: { mapreduce: "mr_bigobject", map: function (){ m30001| emit( 1 , this.s ); m30001| }, reduce: function ( k , v ){ m30001| return 1; m30001| }, out: "mr_bigobject_out" } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) W:5467 r:208024 w:1337 reslen:140 458ms m30001| Fri Feb 22 12:35:02.840 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35 m30001| Fri Feb 22 12:35:02.840 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35_inc m30001| Fri Feb 22 12:35:02.841 [conn21] build index test.tmp.mr.mr_bigobject_35_inc { 0: 1 } m30001| Fri Feb 22 12:35:02.841 [conn21] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:02.841 [conn21] build index test.tmp.mr.mr_bigobject_35 { _id: 1 } m30001| Fri Feb 22 12:35:02.842 [conn21] build index done. scanned 0 total records. 
0 secs
m30001| Fri Feb 22 12:35:03.281 [conn21] CMD: drop test.mr_bigobject_out
m30001| Fri Feb 22 12:35:03.288 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35
m30001| Fri Feb 22 12:35:03.289 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35
m30001| Fri Feb 22 12:35:03.289 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35_inc
m30001| Fri Feb 22 12:35:03.291 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35
m30001| Fri Feb 22 12:35:03.291 [conn21] CMD: drop test.tmp.mr.mr_bigobject_35_inc
m30001| Fri Feb 22 12:35:03.292 [conn21] command test.$cmd command: { mapreduce: "mr_bigobject", map: function (){
m30001| emit( 1 , this.s );
m30001| }, reduce: function ( k , v ){
m30001| total = 0;
m30001| for ( var i=0; i
>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2holesameasshell.js
******************************************* Test : jstests/count.js ...
m30999| Fri Feb 22 12:35:03.527 [conn1] DROP: test.jstests_count
m30001| Fri Feb 22 12:35:03.527 [conn22] CMD: drop test.jstests_count
m30001| Fri Feb 22 12:35:03.527 [conn22] build index test.jstests_count { _id: 1 }
m30001| Fri Feb 22 12:35:03.528 [conn22] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:03.531 [conn1] DROP: test.jstests_count
m30001| Fri Feb 22 12:35:03.531 [conn22] CMD: drop test.jstests_count
m30001| Fri Feb 22 12:35:03.533 [conn22] build index test.jstests_count { _id: 1 }
m30001| Fri Feb 22 12:35:03.534 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:03.534 [conn22] build index test.jstests_count { b: 1.0, a: 1.0 }
m30001| Fri Feb 22 12:35:03.534 [conn22] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:03.536 [conn1] DROP: test.jstests_count
m30001| Fri Feb 22 12:35:03.536 [conn22] CMD: drop test.jstests_count
m30001| Fri Feb 22 12:35:03.540 [conn22] build index test.jstests_count { _id: 1 }
m30001| Fri Feb 22 12:35:03.540 [conn22] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:35:03.540 [conn22] build index test.jstests_count { b: 1.0, a: 1.0, c: 1.0 } m30001| Fri Feb 22 12:35:03.541 [conn22] build index done. scanned 1 total records. 0 secs 16ms ******************************************* Test : jstests/cursor3.js ... m30999| Fri Feb 22 12:35:03.543 [conn1] DROP: test.cursor3 m30001| Fri Feb 22 12:35:03.544 [conn22] CMD: drop test.cursor3 m30001| Fri Feb 22 12:35:03.544 [conn22] build index test.cursor3 { _id: 1 } m30001| Fri Feb 22 12:35:03.545 [conn22] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:03.545 [conn22] build index test.cursor3 { a: 1.0 } m30001| Fri Feb 22 12:35:03.546 [conn22] build index done. scanned 3 total records. 0 secs 24ms ******************************************* Test : jstests/regex.js ... m30999| Fri Feb 22 12:35:03.567 [conn1] DROP: test.jstests_regex m30001| Fri Feb 22 12:35:03.568 [conn22] CMD: drop test.jstests_regex m30001| Fri Feb 22 12:35:03.568 [conn22] build index test.jstests_regex { _id: 1 } m30001| Fri Feb 22 12:35:03.569 [conn22] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:03.572 [conn1] DROP: test.jstests_regex m30001| Fri Feb 22 12:35:03.573 [conn22] CMD: drop test.jstests_regex m30001| Fri Feb 22 12:35:03.575 [conn22] build index test.jstests_regex { _id: 1 } m30001| Fri Feb 22 12:35:03.576 [conn22] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:03.576 [conn1] DROP: test.jstests_regex m30001| Fri Feb 22 12:35:03.576 [conn22] CMD: drop test.jstests_regex m30001| Fri Feb 22 12:35:03.579 [conn22] build index test.jstests_regex { _id: 1 } m30001| Fri Feb 22 12:35:03.579 [conn22] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:03.580 [conn1] DROP: test.jstests_regex m30001| Fri Feb 22 12:35:03.580 [conn22] CMD: drop test.jstests_regex m30001| Fri Feb 22 12:35:03.583 [conn22] build index test.jstests_regex { _id: 1 } m30001| Fri Feb 22 12:35:03.583 [conn22] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:03.584 [conn1] DROP: test.jstests_regex m30001| Fri Feb 22 12:35:03.584 [conn22] CMD: drop test.jstests_regex m30001| Fri Feb 22 12:35:03.586 [conn22] build index test.jstests_regex { _id: 1 } m30001| Fri Feb 22 12:35:03.587 [conn22] build index done. scanned 0 total records. 0 secs 21ms ******************************************* Test : jstests/mr_killop.js ... m30999| Fri Feb 22 12:35:03.593 [conn1] DROP: test.jstests_mr_killop m30001| Fri Feb 22 12:35:03.593 [conn22] CMD: drop test.jstests_mr_killop m30999| Fri Feb 22 12:35:03.593 [conn1] DROP: test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:03.593 [conn22] CMD: drop test.jstests_mr_killop_out m30999| Fri Feb 22 12:35:03.594 [conn1] DROP: test.jstests_mr_killop m30001| Fri Feb 22 12:35:03.594 [conn22] CMD: drop test.jstests_mr_killop m30999| Fri Feb 22 12:35:03.594 [conn1] DROP: test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:03.594 [conn22] CMD: drop test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:03.595 [conn22] build index test.jstests_mr_killop { _id: 1 } m30001| Fri Feb 22 12:35:03.595 [conn22] build index done. scanned 0 total records. 
0 secs Fri Feb 22 12:35:03.639 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { while( 1 ) { ; } }, 'reduce' : function ( k, v ) { return v[ 0 ]; } } ) ); localhost:30999/admin sh14978| MongoDB shell version: 2.4.0-rc1-pre- sh14978| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:03.727 [mongosMain] connection accepted from 127.0.0.1:63437 #11 (2 connections now open) m30001| Fri Feb 22 12:35:03.730 [initandlisten] connection accepted from 127.0.0.1:38948 #24 (6 connections now open) m30001| Fri Feb 22 12:35:03.761 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_39 m30001| Fri Feb 22 12:35:03.761 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_39_inc m30001| Fri Feb 22 12:35:03.761 [conn24] build index test.tmp.mr.jstests_mr_killop_39_inc { 0: 1 } m30001| Fri Feb 22 12:35:03.762 [conn24] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:03.762 [conn24] build index test.tmp.mr.jstests_mr_killop_39 { _id: 1 } m30001| Fri Feb 22 12:35:03.762 [conn24] build index done. scanned 0 total records. 
0 secs
m30999| Fri Feb 22 12:35:03.843 [conn1] want to kill op: op: "shard0001:336614"
m30001| Fri Feb 22 12:35:03.843 [conn9] going to kill op: op: 336614
m30001| Fri Feb 22 12:35:03.843 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:03.843 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_39
m30001| Fri Feb 22 12:35:03.846 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_39_inc
m30001| Fri Feb 22 12:35:03.848 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:03.848 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_39
m30001| Fri Feb 22 12:35:03.848 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_39_inc
m30001| Fri Feb 22 12:35:03.849 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { while( 1 ) { ; } }, reduce: function ( k, v ) { return v[ 0 ]; } } ntoreturn:1 keyUpdates:0 locks(micros) r:80815 w:867 reslen:102 118ms
sh14978| assert: command failed: {
sh14978| "errmsg" : "exception: JavaScript execution terminated",
sh14978| "code" : 16712,
sh14978| "ok" : 0
sh14978| } : undefined
sh14978| Error: Printing Stack Trace
sh14978| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh14978| at doassert (src/mongo/shell/assert.js:6:1)
sh14978| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh14978| at (shell eval):10:39
sh14978| Fri Feb 22 12:35:03.855 JavaScript execution failed: command failed: {
sh14978| "errmsg" : "exception: JavaScript execution terminated",
sh14978| "code" : 16712,
sh14978| "ok" : 0
sh14978| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:03.861 [conn11] end connection 127.0.0.1:63437 (1 connection now open)
m30999| Fri Feb 22 12:35:03.869 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:03.869 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:03.872 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:03.872
[conn22] CMD: drop test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:03.872 [conn22] build index test.jstests_mr_killop { _id: 1 } m30001| Fri Feb 22 12:35:03.873 [conn22] build index done. scanned 0 total records. 0 secs Fri Feb 22 12:35:03.904 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { while( 1 ) { ; } }, 'reduce' : function ( k, v ) { return v[ 0 ]; } } ) ); localhost:30999/admin sh14994| MongoDB shell version: 2.4.0-rc1-pre- sh14994| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:03.990 [mongosMain] connection accepted from 127.0.0.1:58944 #12 (2 connections now open) m30001| Fri Feb 22 12:35:04.027 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_40 m30001| Fri Feb 22 12:35:04.027 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_40_inc m30001| Fri Feb 22 12:35:04.028 [conn24] build index test.tmp.mr.jstests_mr_killop_40_inc { 0: 1 } m30001| Fri Feb 22 12:35:04.028 [conn24] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:04.028 [conn24] build index test.tmp.mr.jstests_mr_killop_40 { _id: 1 } m30001| Fri Feb 22 12:35:04.029 [conn24] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:05.745 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765f9881c8e745391604a m30999| Fri Feb 22 12:35:05.745 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
m30999| Fri Feb 22 12:35:05.906 [conn1] want to kill op: op: "shard0001:336636" m30001| Fri Feb 22 12:35:05.906 [conn9] going to kill op: op: 336636 m30001| Fri Feb 22 12:35:05.906 [conn24] JavaScript execution terminated m30001| Fri Feb 22 12:35:05.915 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_40 m30001| Fri Feb 22 12:35:05.918 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_40_inc m30001| Fri Feb 22 12:35:05.920 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated m30001| Fri Feb 22 12:35:05.920 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_40 m30001| Fri Feb 22 12:35:05.921 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_40_inc m30001| Fri Feb 22 12:35:05.921 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { while( 1 ) { ; } }, reduce: function ( k, v ) { return v[ 0 ]; } } ntoreturn:1 keyUpdates:0 locks(micros) r:1877385 w:765 reslen:102 1928ms sh14994| assert: command failed: { sh14994| "errmsg" : "exception: JavaScript execution terminated", sh14994| "code" : 16712, sh14994| "ok" : 0 sh14994| } : undefined sh14994| Error: Printing Stack Trace sh14994| at printStackTrace (src/mongo/shell/utils.js:37:7) sh14994| at doassert (src/mongo/shell/assert.js:6:1) sh14994| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1) sh14994| at (shell eval):10:39 sh14994| Fri Feb 22 12:35:05.927 JavaScript execution failed: command failed: { sh14994| "errmsg" : "exception: JavaScript execution terminated", sh14994| "code" : 16712, sh14994| "ok" : 0 sh14994| } : undefined at src/mongo/shell/assert.js:L7 m30999| Fri Feb 22 12:35:05.933 [conn12] end connection 127.0.0.1:58944 (1 connection now open) m30999| Fri Feb 22 12:35:05.939 [conn1] DROP: test.jstests_mr_killop m30001| Fri Feb 22 12:35:05.939 [conn22] CMD: drop test.jstests_mr_killop m30999| Fri Feb 22 12:35:05.943 [conn1] DROP: test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:05.944 
[conn22] CMD: drop test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:05.944 [conn22] build index test.jstests_mr_killop { _id: 1 } m30001| Fri Feb 22 12:35:05.945 [conn22] build index done. scanned 0 total records. 0 secs Fri Feb 22 12:35:05.972 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { emit( this.a, 1 ); }, 'reduce' : function () { while( 1 ) { ; } } } ) ); localhost:30999/admin sh15116| MongoDB shell version: 2.4.0-rc1-pre- sh15116| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:06.038 [mongosMain] connection accepted from 127.0.0.1:41006 #13 (2 connections now open) m30001| Fri Feb 22 12:35:06.061 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_41 m30001| Fri Feb 22 12:35:06.061 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_41_inc m30001| Fri Feb 22 12:35:06.061 [conn24] build index test.tmp.mr.jstests_mr_killop_41_inc { 0: 1 } m30001| Fri Feb 22 12:35:06.062 [conn24] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:06.062 [conn24] build index test.tmp.mr.jstests_mr_killop_41 { _id: 1 } m30001| Fri Feb 22 12:35:06.062 [conn24] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:06.174 [conn1] want to kill op: op: "shard0001:336659" m30001| Fri Feb 22 12:35:06.174 [conn9] going to kill op: op: 336659 m30001| Fri Feb 22 12:35:06.174 [conn24] JavaScript execution terminated m30001| Fri Feb 22 12:35:06.174 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_41 m30001| Fri Feb 22 12:35:06.176 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_41_inc m30001| Fri Feb 22 12:35:06.178 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated m30001| Fri Feb 22 12:35:06.178 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_41 m30001| Fri Feb 22 12:35:06.178 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_41_inc m30001| Fri Feb 22 12:35:06.179 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { emit( this.a, 1 ); }, reduce: function () { while( 1 ) { ; } } } ntoreturn:1 keyUpdates:0 locks(micros) r:597 w:567 reslen:102 138ms sh15116| assert: command failed: { sh15116| "errmsg" : "exception: JavaScript execution terminated", sh15116| "code" : 16712, sh15116| "ok" : 0 sh15116| } : undefined sh15116| Error: Printing Stack Trace sh15116| at printStackTrace (src/mongo/shell/utils.js:37:7) sh15116| at doassert (src/mongo/shell/assert.js:6:1) sh15116| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1) sh15116| at (shell eval):10:39 sh15116| Fri Feb 22 12:35:06.183 JavaScript execution failed: command failed: { sh15116| "errmsg" : "exception: JavaScript execution terminated", sh15116| "code" : 16712, sh15116| "ok" : 0 sh15116| } : undefined at src/mongo/shell/assert.js:L7 m30999| Fri Feb 22 12:35:06.191 [conn13] end connection 127.0.0.1:41006 (1 connection now open) m30999| Fri Feb 22 12:35:06.197 [conn1] DROP: test.jstests_mr_killop m30001| Fri Feb 22 12:35:06.197 [conn22] CMD: drop test.jstests_mr_killop m30999| Fri Feb 22 12:35:06.200 [conn1] DROP: test.jstests_mr_killop_out m30001| Fri Feb 22 12:35:06.200 
[conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:06.200 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:06.201 [conn22] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:06.228 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { emit( this.a, 1 ); }, 'reduce' : function () { while( 1 ) { ; } } } ) ); localhost:30999/admin
sh15134| MongoDB shell version: 2.4.0-rc1-pre-
sh15134| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:06.314 [mongosMain] connection accepted from 127.0.0.1:39022 #14 (2 connections now open)
m30001| Fri Feb 22 12:35:06.337 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_42
m30001| Fri Feb 22 12:35:06.338 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_42_inc
m30001| Fri Feb 22 12:35:06.338 [conn24] build index test.tmp.mr.jstests_mr_killop_42_inc { 0: 1 }
m30001| Fri Feb 22 12:35:06.339 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:06.339 [conn24] build index test.tmp.mr.jstests_mr_killop_42 { _id: 1 }
m30001| Fri Feb 22 12:35:06.339 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:08.229 [conn1] want to kill op: op: "shard0001:336681"
m30001| Fri Feb 22 12:35:08.230 [conn9] going to kill op: op: 336681
m30001| Fri Feb 22 12:35:08.230 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:08.230 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_42
m30001| Fri Feb 22 12:35:08.233 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_42_inc
m30001| Fri Feb 22 12:35:08.235 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:08.235 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_42
m30001| Fri Feb 22 12:35:08.236 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_42_inc
m30001| Fri Feb 22 12:35:08.236 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { emit( this.a, 1 ); }, reduce: function () { while( 1 ) { ; } } } ntoreturn:1 keyUpdates:0 locks(micros) r:596 w:522 reslen:102 1919ms
sh15134| assert: command failed: {
sh15134| "errmsg" : "exception: JavaScript execution terminated",
sh15134| "code" : 16712,
sh15134| "ok" : 0
sh15134| } : undefined
sh15134| Error: Printing Stack Trace
sh15134| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15134| at doassert (src/mongo/shell/assert.js:6:1)
sh15134| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15134| at (shell eval):10:39
sh15134| Fri Feb 22 12:35:08.242 JavaScript execution failed: command failed: {
sh15134| "errmsg" : "exception: JavaScript execution terminated",
sh15134| "code" : 16712,
sh15134| "ok" : 0
sh15134| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:08.249 [conn14] end connection 127.0.0.1:39022 (1 connection now open)
m30999| Fri Feb 22 12:35:08.256 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:08.256 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:08.260 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:08.260 [conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:08.261 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:08.262 [conn22] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 12:35:08.289 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { loop(); }, 'reduce' : function ( k, v ) { return v[ 0 ] }, 'scope' : { 'loop' : function () { while( 1 ) { ; } } } } ) ); localhost:30999/admin
sh15263| MongoDB shell version: 2.4.0-rc1-pre-
sh15263| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:08.375 [mongosMain] connection accepted from 127.0.0.1:40054 #15 (2 connections now open)
m30001| Fri Feb 22 12:35:08.477 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_43
m30001| Fri Feb 22 12:35:08.478 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_43_inc
m30001| Fri Feb 22 12:35:08.478 [conn24] build index test.tmp.mr.jstests_mr_killop_43_inc { 0: 1 }
m30001| Fri Feb 22 12:35:08.478 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:08.478 [conn24] build index test.tmp.mr.jstests_mr_killop_43 { _id: 1 }
m30001| Fri Feb 22 12:35:08.479 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:08.490 [conn1] want to kill op: op: "shard0001:336703"
m30001| Fri Feb 22 12:35:08.491 [conn9] going to kill op: op: 336703
m30001| Fri Feb 22 12:35:08.491 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:08.491 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_43
m30001| Fri Feb 22 12:35:08.493 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_43_inc
m30001| Fri Feb 22 12:35:08.495 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:08.495 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_43
m30001| Fri Feb 22 12:35:08.495 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_43_inc
m30001| Fri Feb 22 12:35:08.496 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { loop(); }, reduce: function ( k, v ) { return v[ 0 ] }, scope: { loop: function () { while( 1 ) { ; } } } } ntoreturn:1 keyUpdates:0 locks(micros) r:12089 w:516 reslen:102 117ms
sh15263| assert: command failed: {
sh15263| "errmsg" : "exception: JavaScript execution terminated",
sh15263| "code" : 16712,
sh15263| "ok" : 0
sh15263| } : undefined
sh15263| Error: Printing Stack Trace
sh15263| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15263| at doassert (src/mongo/shell/assert.js:6:1)
sh15263| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15263| at (shell eval):10:39
sh15263| Fri Feb 22 12:35:08.501 JavaScript execution failed: command failed: {
sh15263| "errmsg" : "exception: JavaScript execution terminated",
sh15263| "code" : 16712,
sh15263| "ok" : 0
sh15263| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:08.507 [conn15] end connection 127.0.0.1:40054 (1 connection now open)
m30999| Fri Feb 22 12:35:08.513 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:08.513 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:08.516 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:08.516 [conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:08.516 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:08.517 [conn22] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:08.543 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { loop(); }, 'reduce' : function ( k, v ) { return v[ 0 ] }, 'scope' : { 'loop' : function () { while( 1 ) { ; } } } } ) ); localhost:30999/admin
sh15281| MongoDB shell version: 2.4.0-rc1-pre-
sh15281| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:08.628 [mongosMain] connection accepted from 127.0.0.1:56487 #16 (2 connections now open)
m30001| Fri Feb 22 12:35:08.651 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_44
m30001| Fri Feb 22 12:35:08.651 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_44_inc
m30001| Fri Feb 22 12:35:08.651 [conn24] build index test.tmp.mr.jstests_mr_killop_44_inc { 0: 1 }
m30001| Fri Feb 22 12:35:08.652 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:08.652 [conn24] build index test.tmp.mr.jstests_mr_killop_44 { _id: 1 }
m30001| Fri Feb 22 12:35:08.653 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:10.544 [conn1] want to kill op: op: "shard0001:336725"
m30001| Fri Feb 22 12:35:10.544 [conn9] going to kill op: op: 336725
m30001| Fri Feb 22 12:35:10.544 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:10.549 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_44
m30001| Fri Feb 22 12:35:10.552 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_44_inc
m30001| Fri Feb 22 12:35:10.554 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:10.554 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_44
m30001| Fri Feb 22 12:35:10.555 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_44_inc
m30001| Fri Feb 22 12:35:10.556 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { loop(); }, reduce: function ( k, v ) { return v[ 0 ] }, scope: { loop: function () { while( 1 ) { ; } } } } ntoreturn:1 keyUpdates:0 locks(micros) r:1891744 w:664 reslen:102 1925ms
sh15281| assert: command failed: {
sh15281| "errmsg" : "exception: JavaScript execution terminated",
sh15281| "code" : 16712,
sh15281| "ok" : 0
sh15281| } : undefined
sh15281| Error: Printing Stack Trace
sh15281| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15281| at doassert (src/mongo/shell/assert.js:6:1)
sh15281| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15281| at (shell eval):10:39
sh15281| Fri Feb 22 12:35:10.561 JavaScript execution failed: command failed: {
sh15281| "errmsg" : "exception: JavaScript execution terminated",
sh15281| "code" : 16712,
sh15281| "ok" : 0
sh15281| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:10.567 [conn16] end connection 127.0.0.1:56487 (1 connection now open)
m30999| Fri Feb 22 12:35:10.573 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:10.574 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:10.577 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:10.578 [conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:10.579 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:10.579 [conn22] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:10.606 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { emit( this.a, 1 ); }, 'reduce' : function ( k, v ) { return v[ 0 ] }, 'finalize' : function () { while( 1 ) { ; } } } ) ); localhost:30999/admin
sh15346| MongoDB shell version: 2.4.0-rc1-pre-
sh15346| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:10.693 [mongosMain] connection accepted from 127.0.0.1:37709 #17 (2 connections now open)
m30001| Fri Feb 22 12:35:10.716 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_45
m30001| Fri Feb 22 12:35:10.717 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_45_inc
m30001| Fri Feb 22 12:35:10.717 [conn24] build index test.tmp.mr.jstests_mr_killop_45_inc { 0: 1 }
m30001| Fri Feb 22 12:35:10.718 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:10.718 [conn24] build index test.tmp.mr.jstests_mr_killop_45 { _id: 1 }
m30001| Fri Feb 22 12:35:10.718 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:10.807 [conn1] want to kill op: op: "shard0001:336747"
m30001| Fri Feb 22 12:35:10.808 [conn9] going to kill op: op: 336747
m30001| Fri Feb 22 12:35:10.808 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:10.808 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_45
m30001| Fri Feb 22 12:35:10.810 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_45_inc
m30001| Fri Feb 22 12:35:10.812 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:10.812 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_45
m30001| Fri Feb 22 12:35:10.812 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_45_inc
m30001| Fri Feb 22 12:35:10.813 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { emit( this.a, 1 ); }, reduce: function ( k, v ) { return v[ 0 ] }, finalize: function () { while( 1 ) { ; } } } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) r:771 w:902 reslen:102 116ms
sh15346| assert: command failed: {
sh15346| "errmsg" : "exception: JavaScript execution terminated",
sh15346| "code" : 16712,
sh15346| "ok" : 0
sh15346| } : undefined
sh15346| Error: Printing Stack Trace
sh15346| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15346| at doassert (src/mongo/shell/assert.js:6:1)
sh15346| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15346| at (shell eval):10:39
sh15346| Fri Feb 22 12:35:10.818 JavaScript execution failed: command failed: {
sh15346| "errmsg" : "exception: JavaScript execution terminated",
sh15346| "code" : 16712,
sh15346| "ok" : 0
sh15346| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:10.824 [conn17] end connection 127.0.0.1:37709 (1 connection now open)
m30999| Fri Feb 22 12:35:10.830 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:10.830 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:10.833 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:10.833 [conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:10.834 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:10.834 [conn22] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:10.860 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { emit( this.a, 1 ); }, 'reduce' : function ( k, v ) { return v[ 0 ] }, 'finalize' : function () { while( 1 ) { ; } } } ) ); localhost:30999/admin
sh15354| MongoDB shell version: 2.4.0-rc1-pre-
sh15354| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:10.926 [mongosMain] connection accepted from 127.0.0.1:33803 #18 (2 connections now open)
m30001| Fri Feb 22 12:35:10.948 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_46
m30001| Fri Feb 22 12:35:10.949 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_46_inc
m30001| Fri Feb 22 12:35:10.949 [conn24] build index test.tmp.mr.jstests_mr_killop_46_inc { 0: 1 }
m30001| Fri Feb 22 12:35:10.949 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:10.950 [conn24] build index test.tmp.mr.jstests_mr_killop_46 { _id: 1 }
m30001| Fri Feb 22 12:35:10.950 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:11.747 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 512765ff881c8e745391604b
m30999| Fri Feb 22 12:35:11.748 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30999| Fri Feb 22 12:35:12.861 [conn1] want to kill op: op: "shard0001:336771"
m30001| Fri Feb 22 12:35:12.861 [conn9] going to kill op: op: 336771
m30001| Fri Feb 22 12:35:12.862 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:12.862 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_46
m30001| Fri Feb 22 12:35:12.865 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_46_inc
m30001| Fri Feb 22 12:35:12.867 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:12.867 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_46
m30001| Fri Feb 22 12:35:12.868 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_46_inc
m30001| Fri Feb 22 12:35:12.868 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { emit( this.a, 1 ); }, reduce: function ( k, v ) { return v[ 0 ] }, finalize: function () { while( 1 ) { ; } } } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) r:739 w:656 reslen:102 1940ms
sh15354| assert: command failed: {
sh15354| "errmsg" : "exception: JavaScript execution terminated",
sh15354| "code" : 16712,
sh15354| "ok" : 0
sh15354| } : undefined
sh15354| Error: Printing Stack Trace
sh15354| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15354| at doassert (src/mongo/shell/assert.js:6:1)
sh15354| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15354| at (shell eval):10:39
sh15354| Fri Feb 22 12:35:12.872 JavaScript execution failed: command failed: {
sh15354| "errmsg" : "exception: JavaScript execution terminated",
sh15354| "code" : 16712,
sh15354| "ok" : 0
sh15354| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:12.880 [conn18] end connection 127.0.0.1:33803 (1 connection now open)
m30999| Fri Feb 22 12:35:12.886 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:12.886 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:12.890 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:12.890 [conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:12.891 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:12.892 [conn22] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:12.919 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { emit( this.a, 1 ); }, 'reduce' : function ( k, v ) { return v[ 0 ] }, 'finalize' : function ( a, b ) { loop() }, 'scope' : { 'loop' : function () { while( 1 ) { ; } } } } ) ); localhost:30999/admin
sh15421| MongoDB shell version: 2.4.0-rc1-pre-
sh15421| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:12.985 [mongosMain] connection accepted from 127.0.0.1:64078 #19 (2 connections now open)
m30001| Fri Feb 22 12:35:13.008 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_47
m30001| Fri Feb 22 12:35:13.009 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_47_inc
m30001| Fri Feb 22 12:35:13.009 [conn24] build index test.tmp.mr.jstests_mr_killop_47_inc { 0: 1 }
m30001| Fri Feb 22 12:35:13.009 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:13.009 [conn24] build index test.tmp.mr.jstests_mr_killop_47 { _id: 1 }
m30001| Fri Feb 22 12:35:13.010 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:13.121 [conn1] want to kill op: op: "shard0001:336796"
m30001| Fri Feb 22 12:35:13.121 [conn9] going to kill op: op: 336796
m30001| Fri Feb 22 12:35:13.121 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:13.121 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_47
m30001| Fri Feb 22 12:35:13.124 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_47_inc
m30001| Fri Feb 22 12:35:13.126 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:13.126 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_47
m30001| Fri Feb 22 12:35:13.126 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_47_inc
m30001| Fri Feb 22 12:35:13.126 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { emit( this.a, 1 ); }, reduce: function ( k, v ) { return v[ 0 ] }, finalize: function ( a, b ) { loop() }, scope: { loop: function () { while( 1 ) { ; } } } } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) r:823 w:1029 reslen:102 139ms
sh15421| assert: command failed: {
sh15421| "errmsg" : "exception: JavaScript execution terminated",
sh15421| "code" : 16712,
sh15421| "ok" : 0
sh15421| } : undefined
sh15421| Error: Printing Stack Trace
sh15421| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15421| at doassert (src/mongo/shell/assert.js:6:1)
sh15421| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15421| at (shell eval):10:39
sh15421| Fri Feb 22 12:35:13.130 JavaScript execution failed: command failed: {
sh15421| "errmsg" : "exception: JavaScript execution terminated",
sh15421| "code" : 16712,
sh15421| "ok" : 0
sh15421| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:13.138 [conn19] end connection 127.0.0.1:64078 (1 connection now open)
m30999| Fri Feb 22 12:35:13.144 [conn1] DROP: test.jstests_mr_killop
m30001| Fri Feb 22 12:35:13.144 [conn22] CMD: drop test.jstests_mr_killop
m30999| Fri Feb 22 12:35:13.147 [conn1] DROP: test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:13.147 [conn22] CMD: drop test.jstests_mr_killop_out
m30001| Fri Feb 22 12:35:13.147 [conn22] build index test.jstests_mr_killop { _id: 1 }
m30001| Fri Feb 22 12:35:13.147 [conn22] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:13.174 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.commandWorked( db.runCommand( { 'mapreduce' : 'jstests_mr_killop', 'out' : 'jstests_mr_killop_out', 'map' : function () { emit( this.a, 1 ); }, 'reduce' : function ( k, v ) { return v[ 0 ] }, 'finalize' : function ( a, b ) { loop() }, 'scope' : { 'loop' : function () { while( 1 ) { ; } } } } ) ); localhost:30999/admin
sh15429| MongoDB shell version: 2.4.0-rc1-pre-
sh15429| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:13.270 [mongosMain] connection accepted from 127.0.0.1:55953 #20 (2 connections now open)
m30001| Fri Feb 22 12:35:13.296 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_48
m30001| Fri Feb 22 12:35:13.296 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_48_inc
m30001| Fri Feb 22 12:35:13.296 [conn24] build index test.tmp.mr.jstests_mr_killop_48_inc { 0: 1 }
m30001| Fri Feb 22 12:35:13.297 [conn24] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:13.297 [conn24] build index test.tmp.mr.jstests_mr_killop_48 { _id: 1 }
m30001| Fri Feb 22 12:35:13.297 [conn24] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.176 [conn1] want to kill op: op: "shard0001:336820"
m30001| Fri Feb 22 12:35:15.176 [conn9] going to kill op: op: 336820
m30001| Fri Feb 22 12:35:15.176 [conn24] JavaScript execution terminated
m30001| Fri Feb 22 12:35:15.176 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_48
m30001| Fri Feb 22 12:35:15.179 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_48_inc
m30001| Fri Feb 22 12:35:15.181 [conn24] mr failed, removing collection :: caused by :: 16712 JavaScript execution terminated
m30001| Fri Feb 22 12:35:15.181 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_48
m30001| Fri Feb 22 12:35:15.181 [conn24] CMD: drop test.tmp.mr.jstests_mr_killop_48_inc
m30001| Fri Feb 22 12:35:15.182 [conn24] command test.$cmd command: { mapreduce: "jstests_mr_killop", out: "jstests_mr_killop_out", map: function () { emit( this.a, 1 ); }, reduce: function ( k, v ) { return v[ 0 ] }, finalize: function ( a, b ) { loop() }, scope: { loop: function () { while( 1 ) { ; } } } } ntoreturn:1 keyUpdates:0 numYields: 2 locks(micros) r:831 w:667 reslen:102 1908ms
sh15429| assert: command failed: {
sh15429| "errmsg" : "exception: JavaScript execution terminated",
sh15429| "code" : 16712,
sh15429| "ok" : 0
sh15429| } : undefined
sh15429| Error: Printing Stack Trace
sh15429| at printStackTrace (src/mongo/shell/utils.js:37:7)
sh15429| at doassert (src/mongo/shell/assert.js:6:1)
sh15429| at Function.assert.commandWorked (src/mongo/shell/assert.js:155:1)
sh15429| at (shell eval):10:39
sh15429| Fri Feb 22 12:35:15.188 JavaScript execution failed: command failed: {
sh15429| "errmsg" : "exception: JavaScript execution terminated",
sh15429| "code" : 16712,
sh15429| "ok" : 0
sh15429| } : undefined at src/mongo/shell/assert.js:L7
m30999| Fri Feb 22 12:35:15.194 [conn20] end connection 127.0.0.1:55953 (1 connection now open)
11613ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo4.js
******************************************* Test : jstests/numberlong3.js ...
m30999| Fri Feb 22 12:35:15.208 [conn1] DROP: test.jstests_numberlong3
m30001| Fri Feb 22 12:35:15.209 [conn22] CMD: drop test.jstests_numberlong3
m30001| Fri Feb 22 12:35:15.210 [conn22] build index test.jstests_numberlong3 { _id: 1 }
m30001| Fri Feb 22 12:35:15.211 [conn22] build index done. scanned 0 total records. 0.001 secs
13ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/mr_replaceIntoDB.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_update_btree2.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2nongeoarray.js
******************************************* Test : jstests/datasize.js ...
m30999| Fri Feb 22 12:35:15.222 [conn1] DROP: test.jstests_datasize
m30001| Fri Feb 22 12:35:15.222 [conn22] CMD: drop test.jstests_datasize
m30001| Fri Feb 22 12:35:15.223 [conn22] build index test.jstests_datasize { _id: 1 }
m30001| Fri Feb 22 12:35:15.224 [conn22] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.224 [conn1] DROP: test.jstests_datasize
m30001| Fri Feb 22 12:35:15.224 [conn22] CMD: drop test.jstests_datasize
m30001| Fri Feb 22 12:35:15.228 [conn22] build index test.jstests_datasize { _id: 1 }
m30001| Fri Feb 22 12:35:15.228 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.228 [conn22] info: creating collection test.jstests_datasize on add index
m30001| Fri Feb 22 12:35:15.228 [conn22] build index test.jstests_datasize { qq: 1.0 }
m30001| Fri Feb 22 12:35:15.229 [conn22] build index done. scanned 0 total records. 0 secs
20ms
******************************************* Test : jstests/counta.js ...
m30999| Fri Feb 22 12:35:15.234 [conn1] DROP: test.jstests_counta
m30001| Fri Feb 22 12:35:15.234 [conn22] CMD: drop test.jstests_counta
m30001| Fri Feb 22 12:35:15.234 [conn22] build index test.jstests_counta { _id: 1 }
m30001| Fri Feb 22 12:35:15.235 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.269 [conn22] JavaScript execution failed: ReferenceError: f is not defined near '} else { f(); } }'
m30001| Fri Feb 22 12:35:15.269 [conn22] Count with ns: test.jstests_counta and query: { $where: function () { if ( this.a < 5 ) { return true; } else { f(); } } } failed with exception: 16722 JavaScript execution failed: ReferenceError: f is not defined near '} else { f(); } }' code: 16722
37ms
******************************************* Test : jstests/eval4.js ...
m30999| Fri Feb 22 12:35:15.271 [conn1] DROP: test.eval4
m30001| Fri Feb 22 12:35:15.271 [conn22] CMD: drop test.eval4
m30001| Fri Feb 22 12:35:15.272 [conn22] build index test.eval4 { _id: 1 }
m30001| Fri Feb 22 12:35:15.272 [conn22] build index done. scanned 0 total records. 0 secs
49ms
******************************************* Test : jstests/explain2.js ...
m30999| Fri Feb 22 12:35:15.320 [conn1] DROP: test.explain2
m30001| Fri Feb 22 12:35:15.320 [conn22] CMD: drop test.explain2
m30001| Fri Feb 22 12:35:15.321 [conn22] build index test.explain2 { _id: 1 }
m30001| Fri Feb 22 12:35:15.322 [conn22] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:15.322 [conn22] info: creating collection test.explain2 on add index
m30001| Fri Feb 22 12:35:15.322 [conn22] build index test.explain2 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:15.323 [conn22] build index done. scanned 0 total records. 0 secs
6ms
******************************************* Test : jstests/pushall.js ...
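The counta run above uses a `$where` predicate that returns `true` while `this.a < 5` and otherwise calls an undefined function `f()`; the server reports the resulting ReferenceError as error code 16722 once the count reaches a non-matching document. The predicate's behavior can be reproduced in plain JavaScript (the sample documents here are made up for illustration):

```javascript
// The $where body from the counta test, as a plain JS function.
function where(doc) {
  if (doc.a < 5) { return true; } else { f(); }  // f is deliberately undefined
}

// Hypothetical sample documents; any doc with a >= 5 triggers the error.
const docs = [{ a: 1 }, { a: 3 }, { a: 7 }];

let error = null;
try {
  docs.filter(where);
} catch (e) {
  error = e;  // ReferenceError: f is not defined, as in the server log
}
console.log(error.name);  // "ReferenceError"
```

The unique2 run later in this section hits the same server-side path (code 16722) with `$where: "aaa"`, where the predicate is an undefined bare identifier rather than an undefined function call.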
m30999| Fri Feb 22 12:35:15.327 [conn1] DROP: test.jstests_pushall
m30001| Fri Feb 22 12:35:15.327 [conn22] CMD: drop test.jstests_pushall
m30001| Fri Feb 22 12:35:15.327 [conn22] build index test.jstests_pushall { _id: 1 }
m30001| Fri Feb 22 12:35:15.328 [conn22] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.329 [conn1] DROP: test.jstests_pushall
m30001| Fri Feb 22 12:35:15.329 [conn22] CMD: drop test.jstests_pushall
m30001| Fri Feb 22 12:35:15.332 [conn22] build index test.jstests_pushall { _id: 1 }
m30001| Fri Feb 22 12:35:15.332 [conn22] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.335 [conn1] DROP: test.jstests_pushall
m30001| Fri Feb 22 12:35:15.335 [conn22] CMD: drop test.jstests_pushall
m30001| Fri Feb 22 12:35:15.337 [conn22] build index test.jstests_pushall { _id: 1 }
m30001| Fri Feb 22 12:35:15.338 [conn22] build index done. scanned 0 total records. 0 secs
12ms
******************************************* Test : jstests/sorte.js ...
1ms
******************************************* Test : jstests/count9.js ...
m30999| Fri Feb 22 12:35:15.341 [conn1] DROP: test.jstests_count9
m30001| Fri Feb 22 12:35:15.341 [conn22] CMD: drop test.jstests_count9
m30001| Fri Feb 22 12:35:15.342 [conn22] build index test.jstests_count9 { _id: 1 }
m30001| Fri Feb 22 12:35:15.342 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.342 [conn22] info: creating collection test.jstests_count9 on add index
m30001| Fri Feb 22 12:35:15.342 [conn22] build index test.jstests_count9 { a: 1.0 }
m30001| Fri Feb 22 12:35:15.343 [conn22] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.345 [conn1] DROP: test.jstests_count9
m30001| Fri Feb 22 12:35:15.345 [conn22] CMD: drop test.jstests_count9
m30001| Fri Feb 22 12:35:15.349 [conn22] build index test.jstests_count9 { _id: 1 }
m30001| Fri Feb 22 12:35:15.349 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.349 [conn22] info: creating collection test.jstests_count9 on add index
m30001| Fri Feb 22 12:35:15.350 [conn22] build index test.jstests_count9 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:15.350 [conn22] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.352 [conn1] DROP: test.jstests_count9
m30001| Fri Feb 22 12:35:15.352 [conn22] CMD: drop test.jstests_count9
m30001| Fri Feb 22 12:35:15.357 [conn22] build index test.jstests_count9 { _id: 1 }
m30001| Fri Feb 22 12:35:15.357 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.357 [conn22] info: creating collection test.jstests_count9 on add index
m30001| Fri Feb 22 12:35:15.357 [conn22] build index test.jstests_count9 { a.b: 1.0, a.c: 1.0 }
m30001| Fri Feb 22 12:35:15.358 [conn22] build index done. scanned 0 total records. 0 secs
19ms
******************************************* Test : jstests/unique2.js ...
m30999| Fri Feb 22 12:35:15.360 [conn1] DROP: test.jstests_unique2
m30001| Fri Feb 22 12:35:15.360 [conn22] CMD: drop test.jstests_unique2
m30001| Fri Feb 22 12:35:15.360 [conn22] build index test.jstests_unique2 { _id: 1 }
m30001| Fri Feb 22 12:35:15.361 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.362 [conn22] build index test.jstests_unique2 { k: 1.0 }
m30001| Fri Feb 22 12:35:15.362 [conn22] fastBuildIndex dupsToDrop:3
m30001| Fri Feb 22 12:35:15.362 [conn22] build index done. scanned 4 total records. 0 secs
m30999| Fri Feb 22 12:35:15.364 [conn1] DROP: test.jstests_unique2
m30001| Fri Feb 22 12:35:15.364 [conn22] CMD: drop test.jstests_unique2
m30001| Fri Feb 22 12:35:15.369 [conn22] build index test.jstests_unique2 { _id: 1 }
m30001| Fri Feb 22 12:35:15.369 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.370 [conn22] build index test.jstests_unique2 { k: 1.0 } background
m30001| Fri Feb 22 12:35:15.370 [conn22] backgroundIndexBuild dupsToDrop: 3
m30001| Fri Feb 22 12:35:15.370 [conn22] build index done. scanned 4 total records. 0 secs
m30999| Fri Feb 22 12:35:15.371 [conn1] DROP: test.jstests_unique2
m30001| Fri Feb 22 12:35:15.372 [conn22] CMD: drop test.jstests_unique2
m30001| Fri Feb 22 12:35:15.376 [conn22] build index test.jstests_unique2 { _id: 1 }
m30001| Fri Feb 22 12:35:15.376 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.376 [conn22] info: creating collection test.jstests_unique2 on add index
m30001| Fri Feb 22 12:35:15.376 [conn22] build index test.jstests_unique2 { k: 1.0 }
m30001| Fri Feb 22 12:35:15.376 [conn22] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.379 [conn9] CMD: dropIndexes test.jstests_unique2
m30001| Fri Feb 22 12:35:15.412 [conn22] JavaScript execution failed: ReferenceError: aaa is not defined
m30001| Fri Feb 22 12:35:15.413 [conn22] assertion 16722 JavaScript execution failed: ReferenceError: aaa is not defined ns:test.jstests_unique2 query:{ $where: "aaa" }
m30001| Fri Feb 22 12:35:15.413 [conn22] problem detected during query over test.jstests_unique2 : { $err: "JavaScript execution failed: ReferenceError: aaa is not defined", code: 16722 }
m30999| Fri Feb 22 12:35:15.413 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16722 JavaScript execution failed: ReferenceError: aaa is not defined
m30001| Fri Feb 22 12:35:15.414 [conn22] end connection 127.0.0.1:43590 (5 connections now open)
m30001| Fri Feb 22 12:35:15.416 [conn24] build index test.jstests_unique2 { k: 1.0 }
m30001| Fri Feb 22 12:35:15.417 [conn24] fastBuildIndex dupsToDrop:2
m30001| Fri Feb 22 12:35:15.417 [conn24] build index done. scanned 3 total records. 0 secs
m30999| Fri Feb 22 12:35:15.419 [conn1] DROP: test.jstests_unique2
m30001| Fri Feb 22 12:35:15.419 [conn24] CMD: drop test.jstests_unique2
m30001| Fri Feb 22 12:35:15.422 [conn24] build index test.jstests_unique2 { _id: 1 }
m30001| Fri Feb 22 12:35:15.423 [conn24] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:15.444 [conn24] JavaScript execution failed: ReferenceError: aaa is not defined
m30001| Fri Feb 22 12:35:15.446 [conn24] assertion 16722 JavaScript execution failed: ReferenceError: aaa is not defined ns:test.jstests_unique2 query:{ $where: "aaa" }
m30001| Fri Feb 22 12:35:15.446 [conn24] problem detected during query over test.jstests_unique2 : { $err: "JavaScript execution failed: ReferenceError: aaa is not defined", code: 16722 }
m30999| Fri Feb 22 12:35:15.446 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16722 JavaScript execution failed: ReferenceError: aaa is not defined
m30001| Fri Feb 22 12:35:15.447 [conn24] end connection 127.0.0.1:38948 (4 connections now open)
m30001| Fri Feb 22 12:35:15.448 [initandlisten] connection accepted from 127.0.0.1:55259 #25 (5 connections now open)
m30001| Fri Feb 22 12:35:15.448 [conn25] build index test.jstests_unique2 { k: 1.0 } background
m30001| Fri Feb 22 12:35:15.448 [conn25] backgroundIndexBuild dupsToDrop: 2
m30001| Fri Feb 22 12:35:15.449 [conn25] build index done. scanned 3 total records. 0 secs
92ms
******************************************* Test : jstests/find_and_modify_server6582.js ...
m30999| Fri Feb 22 12:35:15.451 [conn1] DROP: test.find_and_modify_server6582
m30001| Fri Feb 22 12:35:15.452 [conn25] CMD: drop test.find_and_modify_server6582
m30001| Fri Feb 22 12:35:15.453 [conn25] build index test.find_and_modify_server6582 { _id: 1 }
m30001| Fri Feb 22 12:35:15.453 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:15.454 [conn1] DROP: test.find_and_modify_server6582
m30001| Fri Feb 22 12:35:15.454 [conn25] CMD: drop test.find_and_modify_server6582
m30001| Fri Feb 22 12:35:15.457 [conn25] build index test.find_and_modify_server6582 { _id: 1 }
m30001| Fri Feb 22 12:35:15.457 [conn25] build index done. scanned 0 total records. 0 secs
6ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2within.js
******************************************* Test : jstests/mr2.js ...
m30999| Fri Feb 22 12:35:15.458 [conn1] DROP: test.mr2
m30001| Fri Feb 22 12:35:15.459 [conn25] CMD: drop test.mr2
m30001| Fri Feb 22 12:35:15.459 [conn25] build index test.mr2 { _id: 1 }
m30001| Fri Feb 22 12:35:15.460 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.490 [conn25] CMD: drop test.tmp.mr.mr2_49
m30001| Fri Feb 22 12:35:15.490 [conn25] CMD: drop test.tmp.mr.mr2_49_inc
m30001| Fri Feb 22 12:35:15.491 [conn25] build index test.tmp.mr.mr2_49_inc { 0: 1 }
m30001| Fri Feb 22 12:35:15.491 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.491 [conn25] build index test.tmp.mr.mr2_49 { _id: 1 }
m30001| Fri Feb 22 12:35:15.493 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.496 [conn25] CMD: drop test.mr2_out
m30001| Fri Feb 22 12:35:15.501 [conn25] CMD: drop test.tmp.mr.mr2_49
m30001| Fri Feb 22 12:35:15.502 [conn25] CMD: drop test.tmp.mr.mr2_49
m30001| Fri Feb 22 12:35:15.502 [conn25] CMD: drop test.tmp.mr.mr2_49_inc
m30001| Fri Feb 22 12:35:15.505 [conn25] CMD: drop test.tmp.mr.mr2_49
m30001| Fri Feb 22 12:35:15.505 [conn25] CMD: drop test.tmp.mr.mr2_49_inc
{ "result" : "mr2_out", "timeMillis" : 41, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1, }
m30999| Fri Feb 22 12:35:15.507 [conn1] DROP: test.mr2_out
m30001| Fri Feb 22 12:35:15.507 [conn25] CMD: drop test.mr2_out
{ "results" : [ { "_id" : "a", "value" : { "totalSize" : 9, "num" : 1, "avg" : 9 } }, { "_id" : "b", "value" : { "totalSize" : 32, "num" : 2, "avg" : 16 } }, { "_id" : "c", "value" : { "totalSize" : 18, "num" : 1, "avg" : 18 } } ], "timeMillis" : 2, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1, }
m30001| Fri Feb 22 12:35:15.515 [conn25] CMD: drop test.tmp.mr.mr2_50
m30001| Fri Feb 22 12:35:15.515 [conn25] CMD: drop test.tmp.mr.mr2_50_inc
m30001| Fri Feb 22 12:35:15.515 [conn25] build index test.tmp.mr.mr2_50_inc { 0: 1 }
m30001| Fri Feb 22 12:35:15.516 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.516 [conn25] build index test.tmp.mr.mr2_50 { _id: 1 }
m30001| Fri Feb 22 12:35:15.516 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.518 [conn25] CMD: drop test.mr2_out
m30001| Fri Feb 22 12:35:15.524 [conn25] CMD: drop test.tmp.mr.mr2_50
m30001| Fri Feb 22 12:35:15.524 [conn25] CMD: drop test.tmp.mr.mr2_50
m30001| Fri Feb 22 12:35:15.524 [conn25] CMD: drop test.tmp.mr.mr2_50_inc
m30001| Fri Feb 22 12:35:15.527 [conn25] CMD: drop test.tmp.mr.mr2_50
m30001| Fri Feb 22 12:35:15.527 [conn25] CMD: drop test.tmp.mr.mr2_50_inc
{ "result" : "mr2_out", "timeMillis" : 10, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1, }
m30999| Fri Feb 22 12:35:15.529 [conn1] DROP: test.mr2_out
m30001| Fri Feb 22 12:35:15.529 [conn25] CMD: drop test.mr2_out
{ "results" : [ { "_id" : "a", "value" : { "totalSize" : 9, "num" : 1, "avg" : 9 } }, { "_id" : "b", "value" : { "totalSize" : 32, "num" : 2, "avg" : 16 } }, { "_id" : "c", "value" : { "totalSize" : 18, "num" : 1, "avg" : 18 } } ], "timeMillis" : 1, "counts" : { "input" : 2, "emit" : 4, "reduce" : 1, "output" : 3 }, "ok" : 1, }
76ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2dedupnear.js
******************************************* Test : jstests/showdiskloc.js ...
m30999| Fri Feb 22 12:35:15.535 [conn1] DROP: test.jstests_showdiskloc
m30001| Fri Feb 22 12:35:15.536 [conn25] CMD: drop test.jstests_showdiskloc
m30001| Fri Feb 22 12:35:15.536 [conn25] build index test.jstests_showdiskloc { _id: 1 }
m30001| Fri Feb 22 12:35:15.537 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.538 [conn25] build index test.jstests_showdiskloc { a: 1.0 }
m30001| Fri Feb 22 12:35:15.538 [conn25] build index done. scanned 3 total records. 0 secs
5ms
******************************************* Test : jstests/sortd.js ...
m30999| Fri Feb 22 12:35:15.540 [conn1] DROP: test.jstests_sortd
m30001| Fri Feb 22 12:35:15.540 [conn25] CMD: drop test.jstests_sortd
m30001| Fri Feb 22 12:35:15.540 [conn25] build index test.jstests_sortd { _id: 1 }
m30001| Fri Feb 22 12:35:15.541 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.541 [conn25] build index test.jstests_sortd { a: 1.0 }
m30001| Fri Feb 22 12:35:15.542 [conn25] build index done. scanned 2 total records. 0 secs
m30999| Fri Feb 22 12:35:15.543 [conn1] DROP: test.jstests_sortd
m30001| Fri Feb 22 12:35:15.544 [conn25] CMD: drop test.jstests_sortd
m30001| Fri Feb 22 12:35:15.548 [conn25] build index test.jstests_sortd { _id: 1 }
m30001| Fri Feb 22 12:35:15.549 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.551 [conn25] build index test.jstests_sortd { a: 1.0 }
m30001| Fri Feb 22 12:35:15.551 [conn25] build index done. scanned 40 total records. 0 secs
m30999| Fri Feb 22 12:35:15.553 [conn1] DROP: test.jstests_sortd
m30001| Fri Feb 22 12:35:15.553 [conn25] CMD: drop test.jstests_sortd
m30001| Fri Feb 22 12:35:15.558 [conn25] build index test.jstests_sortd { _id: 1 }
m30001| Fri Feb 22 12:35:15.558 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.572 [conn25] build index test.jstests_sortd { a: 1.0 }
m30001| Fri Feb 22 12:35:15.574 [conn25] build index done. scanned 230 total records. 0.001 secs
m30999| Fri Feb 22 12:35:15.589 [conn1] DROP: test.jstests_sortd
m30001| Fri Feb 22 12:35:15.589 [conn25] CMD: drop test.jstests_sortd
m30001| Fri Feb 22 12:35:15.594 [conn25] build index test.jstests_sortd { _id: 1 }
m30001| Fri Feb 22 12:35:15.594 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.620 [conn25] build index test.jstests_sortd { a: 1.0 }
m30001| Fri Feb 22 12:35:15.623 [conn25] build index done. scanned 400 total records. 0.002 secs
96ms
******************************************* Test : jstests/explain3.js ...
m30999| Fri Feb 22 12:35:15.636 [conn1] DROP: test.jstests_explain3
m30001| Fri Feb 22 12:35:15.636 [conn25] CMD: drop test.jstests_explain3
m30001| Fri Feb 22 12:35:15.637 [conn25] build index test.jstests_explain3 { _id: 1 }
m30001| Fri Feb 22 12:35:15.637 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:15.637 [conn25] info: creating collection test.jstests_explain3 on add index
m30001| Fri Feb 22 12:35:15.637 [conn25] build index test.jstests_explain3 { i: 1.0 }
m30001| Fri Feb 22 12:35:15.638 [conn25] build index done. scanned 0 total records. 0 secs
Fri Feb 22 12:35:16.405 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep( 20 ); db.jstests_explain3.dropIndex( {i:1} ); localhost:30999/admin
sh15534| MongoDB shell version: 2.4.0-rc1-pre-
sh15534| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:35:16.491 [mongosMain] connection accepted from 127.0.0.1:40023 #21 (2 connections now open)
m30001| Fri Feb 22 12:35:16.515 [conn9] CMD: dropIndexes test.jstests_explain3
sh15534| [object Object]
m30999| Fri Feb 22 12:35:16.525 [conn21] end connection 127.0.0.1:40023 (1 connection now open)
896ms
******************************************* Test : jstests/dbcase.js ...
m30999| Fri Feb 22 12:35:16.542 [conn1] couldn't find database [dbcasetest_dbnamea] in config db
m30999| Fri Feb 22 12:35:16.544 [conn1] put [dbcasetest_dbnamea] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:16.544 [conn1] DROP DATABASE: dbcasetest_dbnamea
m30999| Fri Feb 22 12:35:16.544 [conn1] erased database dbcasetest_dbnamea from local registry
m30999| Fri Feb 22 12:35:16.545 [conn1] DBConfig::dropDatabase: dbcasetest_dbnamea
m30999| Fri Feb 22 12:35:16.545 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e745391604c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516545), what: "dropDatabase.start", ns: "dbcasetest_dbnamea", details: {} }
m30999| Fri Feb 22 12:35:16.545 [conn1] DBConfig::dropDatabase: dbcasetest_dbnamea dropped sharded collections: 0
m30000| Fri Feb 22 12:35:16.545 [conn5] dropDatabase dbcasetest_dbnamea starting
m30000| Fri Feb 22 12:35:16.580 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:35:16.580 [conn5] dropDatabase dbcasetest_dbnamea finished
m30999| Fri Feb 22 12:35:16.580 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e745391604d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516580), what: "dropDatabase", ns: "dbcasetest_dbnamea", details: {} }
m30999| Fri Feb 22 12:35:16.581 [conn1] couldn't find database [dbcasetest_dbnameA] in config db
m30999| Fri Feb 22 12:35:16.582 [conn1] put [dbcasetest_dbnameA] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:16.582 [conn1] DROP DATABASE: dbcasetest_dbnameA
m30999| Fri Feb 22 12:35:16.582 [conn1] erased database dbcasetest_dbnameA from local registry
m30999| Fri Feb 22 12:35:16.583 [conn1] DBConfig::dropDatabase: dbcasetest_dbnameA
m30999| Fri Feb 22 12:35:16.583 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e745391604e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516583), what: "dropDatabase.start", ns: "dbcasetest_dbnameA", details: {} }
m30999| Fri Feb 22 12:35:16.584 [conn1] DBConfig::dropDatabase: dbcasetest_dbnameA dropped sharded collections: 0
m30000| Fri Feb 22 12:35:16.584 [conn5] dropDatabase dbcasetest_dbnameA starting
m30000| Fri Feb 22 12:35:16.618 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:35:16.619 [conn5] dropDatabase dbcasetest_dbnameA finished
m30999| Fri Feb 22 12:35:16.619 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e745391604f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516619), what: "dropDatabase", ns: "dbcasetest_dbnameA", details: {} }
m30999| Fri Feb 22 12:35:16.620 [conn1] couldn't find database [dbcasetest_dbnamea] in config db
m30999| Fri Feb 22 12:35:16.621 [conn1] put [dbcasetest_dbnamea] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:35:16.621 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/dbcasetest_dbnamea.ns, filling with zeroes...
m30000| Fri Feb 22 12:35:16.622 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/dbcasetest_dbnamea.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:35:16.622 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/dbcasetest_dbnamea.0, filling with zeroes...
m30000| Fri Feb 22 12:35:16.622 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/dbcasetest_dbnamea.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:35:16.622 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/dbcasetest_dbnamea.1, filling with zeroes...
m30000| Fri Feb 22 12:35:16.622 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/dbcasetest_dbnamea.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:35:16.626 [conn6] build index dbcasetest_dbnamea.foo { _id: 1 }
m30000| Fri Feb 22 12:35:16.627 [conn6] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:35:16.628 [conn1] couldn't find database [dbcasetest_dbnameA] in config db
m30999| Fri Feb 22 12:35:16.628 [conn1] warning: error creating initial database config information :: caused by :: can't have 2 databases that just differ on case have: dbcasetest_dbnamea want to add: dbcasetest_dbnameA
[ { "name" : "connection_status", "sizeOnDisk" : 218103808, "empty" : false, "shards" : { "shard0000" : 218103808 } }, { "name" : "dbcasetest_dbnamea", "sizeOnDisk" : 218103808, "empty" : false, "shards" : { "shard0000" : 218103808 } }, { "name" : "test", "sizeOnDisk" : 4243587072, "empty" : false, "shards" : { "shard0001" : 4243587072 } }, { "name" : "test_extent2", "sizeOnDisk" : 218103808, "empty" : false, "shards" : { "shard0000" : 218103808 } }, { "name" : "config", "empty" : false, "sizeOnDisk" : 201326592 }, { "name" : "admin", "empty" : false, "sizeOnDisk" : 0 } ]
m30999| Fri Feb 22 12:35:16.651 [conn1] DROP DATABASE: dbcasetest_dbnamea
m30999| Fri Feb 22 12:35:16.651 [conn1] erased database dbcasetest_dbnamea from local registry
m30999| Fri Feb 22 12:35:16.652 [conn1] DBConfig::dropDatabase: dbcasetest_dbnamea
m30999| Fri Feb 22 12:35:16.652 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e7453916050", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516652), what: "dropDatabase.start", ns: "dbcasetest_dbnamea", details: {} }
m30999| Fri Feb 22 12:35:16.653 [conn1] DBConfig::dropDatabase: dbcasetest_dbnamea dropped sharded collections: 0
m30000| Fri Feb 22 12:35:16.653 [conn5] dropDatabase dbcasetest_dbnamea starting
m30000| Fri Feb 22 12:35:16.699 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:35:16.704 [conn5] dropDatabase dbcasetest_dbnamea finished
m30999| Fri Feb 22 12:35:16.704 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e7453916051", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516704), what: "dropDatabase", ns: "dbcasetest_dbnamea", details: {} }
m30999| Fri Feb 22 12:35:16.705 [conn1] couldn't find database [dbcasetest_dbnameA] in config db
m30999| Fri Feb 22 12:35:16.707 [conn1] put [dbcasetest_dbnameA] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:16.707 [conn1] DROP DATABASE: dbcasetest_dbnameA
m30999| Fri Feb 22 12:35:16.707 [conn1] erased database dbcasetest_dbnameA from local registry
m30999| Fri Feb 22 12:35:16.708 [conn1] DBConfig::dropDatabase: dbcasetest_dbnameA
m30999| Fri Feb 22 12:35:16.708 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e7453916052", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516708), what: "dropDatabase.start", ns: "dbcasetest_dbnameA", details: {} }
m30999| Fri Feb 22 12:35:16.708 [conn1] DBConfig::dropDatabase: dbcasetest_dbnameA dropped sharded collections: 0
m30000| Fri Feb 22 12:35:16.708 [conn5] dropDatabase dbcasetest_dbnameA starting
m30000| Fri Feb 22 12:35:16.744 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:35:16.744 [conn5] dropDatabase dbcasetest_dbnameA finished
m30999| Fri Feb 22 12:35:16.744 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:16-51276604881c8e7453916053", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536516744), what: "dropDatabase", ns: "dbcasetest_dbnameA", details: {} }
[ { "name" : "connection_status", "sizeOnDisk" : 218103808, "empty" : false, "shards" : { "shard0000" : 218103808 } }, { "name" : "test", "sizeOnDisk" : 4243587072, "empty" : false, "shards" : { "shard0001" : 4243587072 } }, { "name" : "test_extent2", "sizeOnDisk" : 218103808, "empty" : false, "shards" : { "shard0000" : 218103808 } }, { "name" : "config", "empty" : false, "sizeOnDisk" : 201326592 }, { "name" : "admin", "empty" : false, "sizeOnDisk" : 0 } ]
237ms
******************************************* Test : jstests/eval5.js ...
m30999| Fri Feb 22 12:35:16.776 [conn1] DROP: test.eval5
m30001| Fri Feb 22 12:35:16.776 [conn25] CMD: drop test.eval5
m30001| Fri Feb 22 12:35:16.777 [conn25] build index test.eval5 { _id: 1 }
m30001| Fri Feb 22 12:35:16.779 [conn25] build index done. scanned 0 total records. 0.001 secs
55ms
******************************************* Test : jstests/array_match1.js ...
m30999| Fri Feb 22 12:35:16.825 [conn1] DROP: test.array_match1
m30001| Fri Feb 22 12:35:16.825 [conn25] CMD: drop test.array_match1
m30001| Fri Feb 22 12:35:16.826 [conn25] build index test.array_match1 { _id: 1 }
m30001| Fri Feb 22 12:35:16.827 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.828 [conn25] build index test.array_match1 { a: 1.0 }
m30001| Fri Feb 22 12:35:16.829 [conn25] build index done. scanned 3 total records. 0.001 secs
m30999| Fri Feb 22 12:35:16.830 [conn1] DROP: test.array_match1
m30001| Fri Feb 22 12:35:16.830 [conn25] CMD: drop test.array_match1
m30001| Fri Feb 22 12:35:16.836 [conn25] build index test.array_match1 { _id: 1 }
m30001| Fri Feb 22 12:35:16.836 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.837 [conn25] build index test.array_match1 { a: 1.0 }
m30001| Fri Feb 22 12:35:16.838 [conn25] build index done. scanned 3 total records. 0 secs
14ms
******************************************* Test : jstests/dbref2.js ...
m30999| Fri Feb 22 12:35:16.839 [conn1] DROP: test.dbref2a
m30001| Fri Feb 22 12:35:16.840 [conn25] CMD: drop test.dbref2a
m30999| Fri Feb 22 12:35:16.840 [conn1] DROP: test.dbref2b
m30001| Fri Feb 22 12:35:16.840 [conn25] CMD: drop test.dbref2b
m30001| Fri Feb 22 12:35:16.841 [conn25] build index test.dbref2a { _id: 1 }
m30001| Fri Feb 22 12:35:16.841 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.842 [conn25] build index test.dbref2b { _id: 1 }
m30001| Fri Feb 22 12:35:16.843 [conn25] build index done. scanned 0 total records. 0 secs
5ms
******************************************* Test : jstests/mr3.js ...
m30999| Fri Feb 22 12:35:16.846 [conn1] DROP: test.mr3
m30001| Fri Feb 22 12:35:16.846 [conn25] CMD: drop test.mr3
m30001| Fri Feb 22 12:35:16.847 [conn25] build index test.mr3 { _id: 1 }
m30001| Fri Feb 22 12:35:16.847 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.849 [conn25] CMD: drop test.tmp.mr.mr3_51
m30001| Fri Feb 22 12:35:16.849 [conn25] CMD: drop test.tmp.mr.mr3_51_inc
m30001| Fri Feb 22 12:35:16.849 [conn25] build index test.tmp.mr.mr3_51_inc { 0: 1 }
m30001| Fri Feb 22 12:35:16.850 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.850 [conn25] build index test.tmp.mr.mr3_51 { _id: 1 }
m30001| Fri Feb 22 12:35:16.851 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.854 [conn25] CMD: drop test.mr3_out
m30001| Fri Feb 22 12:35:16.860 [conn25] CMD: drop test.tmp.mr.mr3_51
m30001| Fri Feb 22 12:35:16.860 [conn25] CMD: drop test.tmp.mr.mr3_51
m30001| Fri Feb 22 12:35:16.860 [conn25] CMD: drop test.tmp.mr.mr3_51_inc
m30001| Fri Feb 22 12:35:16.863 [conn25] CMD: drop test.tmp.mr.mr3_51
m30001| Fri Feb 22 12:35:16.863 [conn25] CMD: drop test.tmp.mr.mr3_51_inc
m30999| Fri Feb 22 12:35:16.864 [conn1] DROP: test.mr3_out
m30001| Fri Feb 22 12:35:16.864 [conn25] CMD: drop test.mr3_out
m30001| Fri Feb 22 12:35:16.868 [conn25] CMD: drop test.tmp.mr.mr3_52
m30001| Fri Feb 22 12:35:16.868 [conn25] CMD: drop test.tmp.mr.mr3_52_inc
m30001| Fri Feb 22 12:35:16.869 [conn25] build index test.tmp.mr.mr3_52_inc { 0: 1 }
m30001| Fri Feb 22 12:35:16.869 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.870 [conn25] build index test.tmp.mr.mr3_52 { _id: 1 }
m30001| Fri Feb 22 12:35:16.870 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.872 [conn25] CMD: drop test.mr3_out
m30001| Fri Feb 22 12:35:16.878 [conn25] CMD: drop test.tmp.mr.mr3_52
m30001| Fri Feb 22 12:35:16.878 [conn25] CMD: drop test.tmp.mr.mr3_52
m30001| Fri Feb 22 12:35:16.878 [conn25] CMD: drop test.tmp.mr.mr3_52_inc
m30001| Fri Feb 22 12:35:16.881 [conn25] CMD: drop test.tmp.mr.mr3_52
m30001| Fri Feb 22 12:35:16.881 [conn25] CMD: drop test.tmp.mr.mr3_52_inc
m30999| Fri Feb 22 12:35:16.882 [conn1] DROP: test.mr3_out
m30001| Fri Feb 22 12:35:16.882 [conn25] CMD: drop test.mr3_out
m30001| Fri Feb 22 12:35:16.886 [conn25] CMD: drop test.tmp.mr.mr3_53
m30001| Fri Feb 22 12:35:16.887 [conn25] CMD: drop test.tmp.mr.mr3_53_inc
m30001| Fri Feb 22 12:35:16.887 [conn25] build index test.tmp.mr.mr3_53_inc { 0: 1 }
m30001| Fri Feb 22 12:35:16.887 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.888 [conn25] build index test.tmp.mr.mr3_53 { _id: 1 }
m30001| Fri Feb 22 12:35:16.888 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.890 [conn25] CMD: drop test.mr3_out
m30001| Fri Feb 22 12:35:16.896 [conn25] CMD: drop test.tmp.mr.mr3_53
m30001| Fri Feb 22 12:35:16.896 [conn25] CMD: drop test.tmp.mr.mr3_53
m30001| Fri Feb 22 12:35:16.896 [conn25] CMD: drop test.tmp.mr.mr3_53_inc
m30001| Fri Feb 22 12:35:16.899 [conn25] CMD: drop test.tmp.mr.mr3_53
m30001| Fri Feb 22 12:35:16.899 [conn25] CMD: drop test.tmp.mr.mr3_53_inc
m30999| Fri Feb 22 12:35:16.900 [conn1] DROP: test.mr3_out
m30001| Fri Feb 22 12:35:16.900 [conn25] CMD: drop test.mr3_out
m30001| Fri Feb 22 12:35:16.916 [conn25] CMD: drop test.tmp.mr.mr3_54
m30001| Fri Feb 22 12:35:16.916 [conn25] CMD: drop test.tmp.mr.mr3_54_inc
m30001| Fri Feb 22 12:35:16.917 [conn25] build index test.tmp.mr.mr3_54_inc { 0: 1 }
m30001| Fri Feb 22 12:35:16.918 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.918 [conn25] build index test.tmp.mr.mr3_54 { _id: 1 }
m30001| Fri Feb 22 12:35:16.918 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:16.922 [conn25] JavaScript execution failed: TypeError: Cannot read property 'a' of undefined near 'this.xzz.a )' (line 2)
m30001| Fri Feb 22 12:35:16.922 [conn25] CMD: drop test.tmp.mr.mr3_54
m30001| Fri Feb 22 12:35:16.932 [conn25] CMD: drop test.tmp.mr.mr3_54_inc
m30001| Fri Feb 22 12:35:16.935 [conn25] mr failed, removing collection :: caused by :: 16722 JavaScript execution failed: TypeError: Cannot read property 'a' of undefined near 'this.xzz.a )' (line 2)
m30001| Fri Feb 22 12:35:16.936 [conn25] CMD: drop test.tmp.mr.mr3_54
m30001| Fri Feb 22 12:35:16.936 [conn25] CMD: drop test.tmp.mr.mr3_54_inc
m30001| Fri Feb 22 12:35:16.991 [conn25] CMD: drop test.tmp.mr.mr3_55
m30001| Fri Feb 22 12:35:16.992 [conn25] CMD: drop test.tmp.mr.mr3_55_inc
m30001| Fri Feb 22 12:35:16.992 [conn25] build index test.tmp.mr.mr3_55_inc { 0: 1 }
m30001| Fri Feb 22 12:35:16.994 [conn25] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:16.994 [conn25] build index test.tmp.mr.mr3_55 { _id: 1 }
m30001| Fri Feb 22 12:35:16.995 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.000 [conn25] JavaScript execution failed: TypeError: Cannot read property 'x' of undefined near 'return v.x.x.x' (line 2)
m30001| Fri Feb 22 12:35:17.000 [conn25] CMD: drop test.tmp.mr.mr3_55
m30001| Fri Feb 22 12:35:17.003 [conn25] CMD: drop test.tmp.mr.mr3_55_inc
m30001| Fri Feb 22 12:35:17.006 [conn25] mr failed, removing collection :: caused by :: 16722 JavaScript execution failed: TypeError: Cannot read property 'x' of undefined near 'return v.x.x.x' (line 2)
m30001| Fri Feb 22 12:35:17.007 [conn25] CMD: drop test.tmp.mr.mr3_55
m30001| Fri Feb 22 12:35:17.007 [conn25] CMD: drop test.tmp.mr.mr3_55_inc
m30001| Fri Feb 22 12:35:17.108 [conn25] command test.$cmd command: { mapreduce: "mr3", map: function ( n , x ){
m30001| x = x || 1;
m30001| this.tags.forEach(
m30001| fun..., reduce: function ( k , v ){
m30001| return v.x.x.x;
m30001| }, out: "mr3_out" } ntoreturn:1 keyUpdates:0 locks(micros) r:1617 w:1321 reslen:180 146ms
275ms
******************************************* Test : jstests/numberlong2.js ...
m30999| Fri Feb 22 12:35:17.121 [conn1] DROP: test.jstests_numberlong2
m30001| Fri Feb 22 12:35:17.121 [conn25] CMD: drop test.jstests_numberlong2
m30001| Fri Feb 22 12:35:17.122 [conn25] build index test.jstests_numberlong2 { _id: 1 }
m30001| Fri Feb 22 12:35:17.122 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.122 [conn25] info: creating collection test.jstests_numberlong2 on add index
m30001| Fri Feb 22 12:35:17.122 [conn25] build index test.jstests_numberlong2 { x: 1.0 }
m30001| Fri Feb 22 12:35:17.123 [conn25] build index done. scanned 0 total records. 0 secs
43ms
******************************************* Test : jstests/array1.js ...
m30999| Fri Feb 22 12:35:17.167 [conn1] DROP: test.array1 m30001| Fri Feb 22 12:35:17.167 [conn25] CMD: drop test.array1 m30001| Fri Feb 22 12:35:17.168 [conn25] build index test.array1 { _id: 1 } m30001| Fri Feb 22 12:35:17.169 [conn25] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:17.170 [conn25] build index test.array1 { a: 1.0 } m30001| Fri Feb 22 12:35:17.171 [conn25] build index done. scanned 2 total records. 0.001 secs 9ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo5.js ******************************************* Test : jstests/inc1.js ... m30999| Fri Feb 22 12:35:17.176 [conn1] DROP: test.inc1 m30001| Fri Feb 22 12:35:17.176 [conn25] CMD: drop test.inc1 m30001| Fri Feb 22 12:35:17.177 [conn25] build index test.inc1 { _id: 1 } m30001| Fri Feb 22 12:35:17.178 [conn25] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:17.182 [conn25] build index test.inc1 { x: 1.0 } m30001| Fri Feb 22 12:35:17.183 [conn25] build index done. scanned 1 total records. 0.001 secs 12ms ******************************************* Test : jstests/not2.js ... m30999| Fri Feb 22 12:35:17.186 [conn1] DROP: test.jstests_not2 m30001| Fri Feb 22 12:35:17.186 [conn25] CMD: drop test.jstests_not2 m30001| Fri Feb 22 12:35:17.187 [conn25] build index test.jstests_not2 { _id: 1 } m30001| Fri Feb 22 12:35:17.188 [conn25] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:17.223 [conn25] build index test.jstests_not2 { i: 1.0 } m30001| Fri Feb 22 12:35:17.224 [conn25] build index done. scanned 2 total records. 0.001 secs m30999| Fri Feb 22 12:35:17.259 [conn1] DROP: test.jstests_not2 m30001| Fri Feb 22 12:35:17.259 [conn25] CMD: drop test.jstests_not2 m30001| Fri Feb 22 12:35:17.263 [conn25] build index test.jstests_not2 { _id: 1 } m30001| Fri Feb 22 12:35:17.264 [conn25] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:17.264 [conn25] build index test.jstests_not2 { i: 1.0 } m30001| Fri Feb 22 12:35:17.265 [conn25] build index done. scanned 2 total records. 0 secs m30999| Fri Feb 22 12:35:17.275 [conn1] DROP: test.jstests_not2 m30001| Fri Feb 22 12:35:17.275 [conn25] CMD: drop test.jstests_not2 m30001| Fri Feb 22 12:35:17.279 [conn25] build index test.jstests_not2 { _id: 1 } m30001| Fri Feb 22 12:35:17.280 [conn25] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:17.280 [conn25] info: creating collection test.jstests_not2 on add index m30001| Fri Feb 22 12:35:17.280 [conn25] build index test.jstests_not2 { i.j: 1.0 } m30001| Fri Feb 22 12:35:17.280 [conn25] build index done. scanned 0 total records. 0 secs 97ms ******************************************* Test : jstests/extent.js ... m30999| Fri Feb 22 12:35:17.286 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.286 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.287 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.287 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.288 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.288 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.293 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.294 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.294 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.295 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.297 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.298 [conn25] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:17.298 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.299 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.301 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.302 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.302 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.303 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.305 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.306 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.306 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.307 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.309 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.310 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.310 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.311 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.313 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.314 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.315 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.315 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.318 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.318 [conn25] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:17.319 [conn1] DROP: test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.319 [conn25] CMD: drop test.reclaimExtentsTest m30001| Fri Feb 22 12:35:17.322 [conn25] build index test.reclaimExtentsTest { _id: 1 } m30001| Fri Feb 22 12:35:17.322 [conn25] build index done. scanned 0 total records. 
0 secs
m30999| Fri Feb 22 12:35:17.323 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.323 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.326 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.326 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.327 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.327 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.330 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.330 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.331 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.331 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.334 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.334 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.335 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.335 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.338 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.338 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.339 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.339 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.342 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.342 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.343 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.343 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.346 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.346 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.347 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.347 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.350 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.350 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.351 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.351 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.354 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.354 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.355 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.355 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.358 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.358 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.359 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.359 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.362 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.362 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.363 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.363 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.366 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.366 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.367 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.367 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.370 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.370 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.371 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.371 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.374 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.374 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.375 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.375 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.378 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.378 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.379 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.379 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.382 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.382 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.383 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.383 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.386 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.386 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.387 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.387 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.390 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.390 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.391 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.391 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.393 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.394 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.395 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.395 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.397 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.398 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.398 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.398 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.401 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.402 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.402 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.402 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.405 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.406 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.406 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.406 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.409 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.410 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.410 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.410 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.413 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.414 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.414 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.414 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.417 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.418 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.418 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.419 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.421 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.422 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.422 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.422 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.425 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.426 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.426 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.427 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.429 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.430 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.430 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.430 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.433 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.434 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.434 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.434 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.437 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.438 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.438 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.438 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.441 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.442 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.442 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.442 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.445 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.446 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.446 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.446 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.449 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.450 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.450 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.450 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.453 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.454 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.454 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.454 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.457 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.458 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.458 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.458 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.461 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.462 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.462 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.462 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.465 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.466 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.466 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.466 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.469 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.470 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.470 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.470 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.473 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.474 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.474 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.474 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.477 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.478 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.478 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.478 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.481 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.482 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.482 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.483 [conn25] CMD: drop test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.486 [conn25] build index test.reclaimExtentsTest { _id: 1 }
m30001| Fri Feb 22 12:35:17.486 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.487 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.487 [conn25] CMD: drop test.reclaimExtentsTest
m30999| Fri Feb 22 12:35:17.489 [conn1] DROP: test.reclaimExtentsTest
m30001| Fri Feb 22 12:35:17.490 [conn25] CMD: drop test.reclaimExtentsTest
208ms
******************************************* Test : jstests/update_arraymatch8.js ...
m30999| Fri Feb 22 12:35:17.495 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.495 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.496 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.496 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.496 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.496 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.497 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.499 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.499 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.503 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.504 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.504 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.504 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.504 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.506 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.506 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.510 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.510 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.510 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.510 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.511 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.512 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.512 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.517 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.517 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.517 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.517 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.517 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.519 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.519 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.523 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.524 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.524 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.524 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.524 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.526 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.526 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.530 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.531 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.531 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.531 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.531 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.533 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.533 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.538 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.538 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.538 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.538 [conn25] build index test.jstests_update_arraymatch8 { array.123a.name: 1.0 }
m30001| Fri Feb 22 12:35:17.538 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.540 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.540 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.545 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.545 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.545 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.545 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.545 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.547 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.547 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.551 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.551 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.551 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.551 [conn25] build index test.jstests_update_arraymatch8 { array.123a.name: 1.0 }
m30001| Fri Feb 22 12:35:17.552 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.553 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.553 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.558 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.558 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.558 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.558 [conn25] build index test.jstests_update_arraymatch8 { array.name: 1.0 }
m30001| Fri Feb 22 12:35:17.558 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.560 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.560 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.564 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.564 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.565 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.565 [conn25] build index test.jstests_update_arraymatch8 { a.0.b: 1.0 }
m30001| Fri Feb 22 12:35:17.565 [conn25] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:35:17.567 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.567 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.571 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.571 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.572 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.572 [conn25] build index test.jstests_update_arraymatch8 { a.0.b.c: 1.0 }
m30001| Fri Feb 22 12:35:17.572 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.574 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.574 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.579 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.579 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.579 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.579 [conn25] build index test.jstests_update_arraymatch8 { a.b.$ref: 1.0 }
m30001| Fri Feb 22 12:35:17.580 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.582 [conn1] DROP: test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.582 [conn25] CMD: drop test.jstests_update_arraymatch8
m30001| Fri Feb 22 12:35:17.586 [conn25] build index test.jstests_update_arraymatch8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.586 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.586 [conn25] info: creating collection test.jstests_update_arraymatch8 on add index
m30001| Fri Feb 22 12:35:17.586 [conn25] build index test.jstests_update_arraymatch8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:17.587 [conn25] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:35:17.587 [conn25] build index test.jstests_update_arraymatch8 { a-b: 1.0 }
m30001| Fri Feb 22 12:35:17.588 [conn25] build index done. scanned 0 total records. 0 secs
99ms
******************************************* Test : jstests/shelltypes.js ...
ObjectId("51276605000decca08752e38")
DBRef("test", "theid")
DBPointer("test", ObjectId("51276605000decca08752e39"))
Timestamp(10, 20)
BinData(3,"VQ6EAOKbQdSnFkRmVUQAAA==")
BinData(3,"VQ6EAOKbQdSnFkRmVUQAAA==")
BinData(5,"VQ6EAOKbQdSnFkRmVUQAAA==")
BinData(4,"VQ6EAOKbQdSnFkRmVUQAAA==")
NumberLong(100)
NumberInt(100)
2ms
******************************************* Test : jstests/exists5.js ...
m30999| Fri Feb 22 12:35:17.592 [conn1] DROP: test.jstests_exists5
m30001| Fri Feb 22 12:35:17.592 [conn25] CMD: drop test.jstests_exists5
m30001| Fri Feb 22 12:35:17.592 [conn25] build index test.jstests_exists5 { _id: 1 }
m30001| Fri Feb 22 12:35:17.593 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.596 [conn1] DROP: test.jstests_exists5
m30001| Fri Feb 22 12:35:17.596 [conn25] CMD: drop test.jstests_exists5
m30001| Fri Feb 22 12:35:17.599 [conn25] build index test.jstests_exists5 { _id: 1 }
m30001| Fri Feb 22 12:35:17.599 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.601 [conn1] DROP: test.jstests_exists5
m30001| Fri Feb 22 12:35:17.601 [conn25] CMD: drop test.jstests_exists5
m30001| Fri Feb 22 12:35:17.604 [conn25] build index test.jstests_exists5 { _id: 1 }
m30001| Fri Feb 22 12:35:17.604 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.606 [conn1] DROP: test.jstests_exists5
m30001| Fri Feb 22 12:35:17.606 [conn25] CMD: drop test.jstests_exists5
m30001| Fri Feb 22 12:35:17.609 [conn25] build index test.jstests_exists5 { _id: 1 }
m30001| Fri Feb 22 12:35:17.609 [conn25] build index done. scanned 0 total records.
0 secs
19ms
******************************************* Test : jstests/index_check3.js ...
m30999| Fri Feb 22 12:35:17.612 [conn1] DROP: test.index_check3
m30001| Fri Feb 22 12:35:17.612 [conn25] CMD: drop test.index_check3
m30001| Fri Feb 22 12:35:17.612 [conn25] build index test.index_check3 { _id: 1 }
m30001| Fri Feb 22 12:35:17.613 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.614 [conn25] build index test.index_check3 { a: 1.0 }
m30001| Fri Feb 22 12:35:17.614 [conn25] build index done. scanned 4 total records. 0 secs
m30999| Fri Feb 22 12:35:17.615 [conn1] DROP: test.index_check3
m30001| Fri Feb 22 12:35:17.616 [conn25] CMD: drop test.index_check3
m30001| Fri Feb 22 12:35:17.620 [conn25] build index test.index_check3 { _id: 1 }
m30001| Fri Feb 22 12:35:17.620 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.627 [conn25] build index test.index_check3 { foo: 1.0 }
m30001| Fri Feb 22 12:35:17.628 [conn25] build index done. scanned 100 total records. 0 secs
m30999| Fri Feb 22 12:35:17.629 [conn1] DROP: test.index_check3
m30001| Fri Feb 22 12:35:17.630 [conn25] CMD: drop test.index_check3
m30001| Fri Feb 22 12:35:17.634 [conn25] build index test.index_check3 { _id: 1 }
m30001| Fri Feb 22 12:35:17.634 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.635 [conn25] build index test.index_check3 { i: 1.0 }
m30001| Fri Feb 22 12:35:17.635 [conn25] build index done. scanned 11 total records. 0 secs
29ms
******************************************* Test : jstests/mr_index.js ...
m30999| Fri Feb 22 12:35:17.641 [conn1] DROP: test.mr_index
m30001| Fri Feb 22 12:35:17.641 [conn25] CMD: drop test.mr_index
m30999| Fri Feb 22 12:35:17.642 [conn1] DROP: test.mr_index_out
m30001| Fri Feb 22 12:35:17.642 [conn25] CMD: drop test.mr_index_out
m30001| Fri Feb 22 12:35:17.642 [conn25] build index test.mr_index { _id: 1 }
m30001| Fri Feb 22 12:35:17.643 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.673 [conn25] CMD: drop test.tmp.mr.mr_index_56
m30001| Fri Feb 22 12:35:17.673 [conn25] CMD: drop test.tmp.mr.mr_index_56_inc
m30001| Fri Feb 22 12:35:17.673 [conn25] build index test.tmp.mr.mr_index_56_inc { 0: 1 }
m30001| Fri Feb 22 12:35:17.674 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.674 [conn25] build index test.tmp.mr.mr_index_56 { _id: 1 }
m30001| Fri Feb 22 12:35:17.675 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.678 [conn25] CMD: drop test.mr_index_out
m30001| Fri Feb 22 12:35:17.683 [conn25] CMD: drop test.tmp.mr.mr_index_56
m30001| Fri Feb 22 12:35:17.683 [conn25] CMD: drop test.tmp.mr.mr_index_56
m30001| Fri Feb 22 12:35:17.683 [conn25] CMD: drop test.tmp.mr.mr_index_56_inc
m30001| Fri Feb 22 12:35:17.686 [conn25] CMD: drop test.tmp.mr.mr_index_56
m30001| Fri Feb 22 12:35:17.686 [conn25] CMD: drop test.tmp.mr.mr_index_56_inc
m30001| Fri Feb 22 12:35:17.688 [conn25] build index test.mr_index_out { value: 1.0 }
m30001| Fri Feb 22 12:35:17.688 [conn25] build index done. scanned 3 total records. 0 secs
m30001| Fri Feb 22 12:35:17.691 [conn25] CMD: drop test.tmp.mr.mr_index_57
m30001| Fri Feb 22 12:35:17.691 [conn25] CMD: drop test.tmp.mr.mr_index_57_inc
m30001| Fri Feb 22 12:35:17.691 [conn25] build index test.tmp.mr.mr_index_57_inc { 0: 1 }
m30001| Fri Feb 22 12:35:17.691 [conn25] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:35:17.692 [conn25] build index test.tmp.mr.mr_index_57 { _id: 1 }
m30001| Fri Feb 22 12:35:17.692 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.693 [conn25] build index test.tmp.mr.mr_index_57 { value: 1.0 }
m30001| Fri Feb 22 12:35:17.693 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.695 [conn25] CMD: drop test.mr_index_out
m30001| Fri Feb 22 12:35:17.707 [conn25] CMD: drop test.tmp.mr.mr_index_57
m30001| Fri Feb 22 12:35:17.708 [conn25] CMD: drop test.tmp.mr.mr_index_57
m30001| Fri Feb 22 12:35:17.708 [conn25] CMD: drop test.tmp.mr.mr_index_57_inc
m30001| Fri Feb 22 12:35:17.710 [conn25] CMD: drop test.tmp.mr.mr_index_57
m30001| Fri Feb 22 12:35:17.710 [conn25] CMD: drop test.tmp.mr.mr_index_57_inc
m30999| Fri Feb 22 12:35:17.712 [conn1] DROP: test.mr_index_out
m30001| Fri Feb 22 12:35:17.712 [conn25] CMD: drop test.mr_index_out
76ms
******************************************* Test : jstests/index4.js ...
m30999| Fri Feb 22 12:35:17.717 [conn1] DROP: test.index4
m30001| Fri Feb 22 12:35:17.717 [conn25] CMD: drop test.index4
m30001| Fri Feb 22 12:35:17.718 [conn25] build index test.index4 { _id: 1 }
m30001| Fri Feb 22 12:35:17.718 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.720 [conn25] build index test.index4 { instances.pool: 1.0 }
m30001| Fri Feb 22 12:35:17.720 [conn25] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:35:17.731 [conn9] CMD: validate test.index4
m30001| Fri Feb 22 12:35:17.731 [conn9] validating index 0: test.index4.$_id_
m30001| Fri Feb 22 12:35:17.731 [conn9] validating index 1: test.index4.$instances.pool_1
16ms
>>>>>>>>>>>>>>> skipping jstests/quota
******************************************* Test : jstests/or8.js ...
m30999| Fri Feb 22 12:35:17.732 [conn1] DROP: test.jstests_or8
m30001| Fri Feb 22 12:35:17.732 [conn25] CMD: drop test.jstests_or8
m30001| Fri Feb 22 12:35:17.733 [conn25] build index test.jstests_or8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.734 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.734 [conn25] build index test.jstests_or8 { a: 1.0 }
m30001| Fri Feb 22 12:35:17.734 [conn25] build index done. scanned 2 total records. 0 secs
m30999| Fri Feb 22 12:35:17.737 [conn1] DROP: test.jstests_or8
m30001| Fri Feb 22 12:35:17.737 [conn25] CMD: drop test.jstests_or8
m30001| Fri Feb 22 12:35:17.742 [conn25] build index test.jstests_or8 { _id: 1 }
m30001| Fri Feb 22 12:35:17.742 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.742 [conn25] build index test.jstests_or8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:17.743 [conn25] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:17.743 [conn25] build index test.jstests_or8 { a.c: 1.0 }
m30001| Fri Feb 22 12:35:17.743 [conn25] build index done. scanned 1 total records. 0 secs
13ms
******************************************* Test : jstests/set5.js ...
m30999| Fri Feb 22 12:35:17.746 [conn1] DROP: test.set5
m30001| Fri Feb 22 12:35:17.746 [conn25] CMD: drop test.set5
m30001| Fri Feb 22 12:35:17.746 [conn25] build index test.set5 { _id: 1 }
m30001| Fri Feb 22 12:35:17.747 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.747 [conn1] DROP: test.set5
m30001| Fri Feb 22 12:35:17.748 [conn25] CMD: drop test.set5
m30999| Fri Feb 22 12:35:17.749 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276605881c8e7453916054
m30999| Fri Feb 22 12:35:17.750 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:35:17.751 [conn25] build index test.set5 { _id: 1 }
m30001| Fri Feb 22 12:35:17.751 [conn25] build index done. scanned 0 total records. 0 secs
6ms
*******************************************
Test : jstests/regexb.js ...
m30999| Fri Feb 22 12:35:17.752 [conn1] DROP: test.jstests_regexb
m30001| Fri Feb 22 12:35:17.752 [conn25] CMD: drop test.jstests_regexb
m30001| Fri Feb 22 12:35:17.753 [conn25] build index test.jstests_regexb { _id: 1 }
m30001| Fri Feb 22 12:35:17.753 [conn25] build index done. scanned 0 total records. 0 secs
3ms
*******************************************
Test : jstests/null.js ...
m30999| Fri Feb 22 12:35:17.755 [conn1] DROP: test.null1
m30001| Fri Feb 22 12:35:17.755 [conn25] CMD: drop test.null1
m30001| Fri Feb 22 12:35:17.756 [conn25] build index test.null1 { _id: 1 }
m30001| Fri Feb 22 12:35:17.756 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.757 [conn25] build index test.null1 { x: 1.0 }
m30001| Fri Feb 22 12:35:17.758 [conn25] build index done. scanned 2 total records. 0 secs
5ms
*******************************************
Test : jstests/indexz.js ...
m30999| Fri Feb 22 12:35:17.763 [conn1] DROP: test.jstests_indexz
m30001| Fri Feb 22 12:35:17.763 [conn25] CMD: drop test.jstests_indexz
m30001| Fri Feb 22 12:35:17.764 [conn25] build index test.jstests_indexz { _id: 1 }
m30001| Fri Feb 22 12:35:17.764 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.764 [conn25] info: creating collection test.jstests_indexz on add index
m30001| Fri Feb 22 12:35:17.764 [conn25] build index test.jstests_indexz { a: 1.0 }
m30001| Fri Feb 22 12:35:17.765 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.765 [conn25] build index test.jstests_indexz { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:17.766 [conn25] build index done. scanned 0 total records. 0 secs
8ms
*******************************************
Test : jstests/updatei.js ...
m30999| Fri Feb 22 12:35:17.767 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.768 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.768 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.769 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.771 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.771 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.774 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.774 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.776 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.776 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.779 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.779 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.780 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.780 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.783 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.784 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.786 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.786 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.790 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.790 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.791 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.791 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.795 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.795 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.797 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.797 [conn25] CMD: drop test.updatei
m30001| Fri Feb 22 12:35:17.801 [conn25] build index test.updatei { _id: 1 }
m30001| Fri Feb 22 12:35:17.801 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.802 [conn1] DROP: test.updatei
m30001| Fri Feb 22 12:35:17.803 [conn25] CMD: drop test.updatei
38ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped5.js
*******************************************
Test : jstests/indexl.js ...
m30999| Fri Feb 22 12:35:17.806 [conn1] DROP: test.jstests_indexl
m30001| Fri Feb 22 12:35:17.806 [conn25] CMD: drop test.jstests_indexl
m30001| Fri Feb 22 12:35:17.807 [conn25] build index test.jstests_indexl { _id: 1 }
m30001| Fri Feb 22 12:35:17.808 [conn25] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:17.813 [conn1] DROP: test.jstests_indexl
m30001| Fri Feb 22 12:35:17.813 [conn25] CMD: drop test.jstests_indexl
m30001| Fri Feb 22 12:35:17.817 [conn25] build index test.jstests_indexl { _id: 1 }
m30001| Fri Feb 22 12:35:17.817 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:17.817 [conn25] info: creating collection test.jstests_indexl on add index
m30001| Fri Feb 22 12:35:17.817 [conn25] build index test.jstests_indexl { a: 1.0 }
m30001| Fri Feb 22 12:35:17.818 [conn25] build index done. scanned 0 total records. 0 secs
19ms
*******************************************
Test : jstests/update.js ...
m30999| Fri Feb 22 12:35:17.825 [conn1] DROP: test.asdf
m30001| Fri Feb 22 12:35:17.825 [conn25] CMD: drop test.asdf
m30001| Fri Feb 22 12:35:17.825 [conn25] build index test.asdf { _id: 1 }
m30001| Fri Feb 22 12:35:17.826 [conn25] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.437 [conn9] CMD: validate test.asdf
m30001| Fri Feb 22 12:35:20.438 [conn9] validating index 0: test.asdf.$_id_
update.js padding factor: 1.998000000000371
m30999| Fri Feb 22 12:35:20.439 [conn1] DROP: test.asdf
m30001| Fri Feb 22 12:35:20.439 [conn25] CMD: drop test.asdf
2620ms
*******************************************
Test : jstests/storefunc.js ...
13ms
*******************************************
Test : jstests/fts_blog.js ...
m30000| Fri Feb 22 12:35:20.460 [initandlisten] connection accepted from 127.0.0.1:34989 #17 (9 connections now open)
m30001| Fri Feb 22 12:35:20.460 [initandlisten] connection accepted from 127.0.0.1:34564 #26 (6 connections now open)
m30999| Fri Feb 22 12:35:20.461 [conn1] DROP: test.text_blog
m30001| Fri Feb 22 12:35:20.461 [conn25] CMD: drop test.text_blog
m30001| Fri Feb 22 12:35:20.462 [conn25] build index test.text_blog { _id: 1 }
m30001| Fri Feb 22 12:35:20.463 [conn25] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:20.463 [conn25] build index test.text_blog { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:35:20.465 [conn25] build index done. scanned 3 total records. 0.001 secs
11ms
>>>>>>>>>>>>>>> skipping jstests/slowWeekly
*******************************************
Test : jstests/cursor2.js ...
m30999| Fri Feb 22 12:35:20.469 [conn1] DROP: test.ed_db_cursor2_ccvsal
m30001| Fri Feb 22 12:35:20.469 [conn25] CMD: drop test.ed_db_cursor2_ccvsal
m30001| Fri Feb 22 12:35:20.470 [conn25] build index test.ed_db_cursor2_ccvsal { _id: 1 }
m30001| Fri Feb 22 12:35:20.470 [conn25] build index done. scanned 0 total records. 0 secs
3ms
*******************************************
Test : jstests/distinct1.js ...
m30999| Fri Feb 22 12:35:20.472 [conn1] DROP: test.distinct1
m30001| Fri Feb 22 12:35:20.473 [conn25] CMD: drop test.distinct1
m30001| Fri Feb 22 12:35:20.474 [conn25] build index test.distinct1 { _id: 1 }
m30001| Fri Feb 22 12:35:20.475 [conn25] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:35:20.476 [conn1] DROP: test.distinct1
m30001| Fri Feb 22 12:35:20.477 [conn25] CMD: drop test.distinct1
m30001| Fri Feb 22 12:35:20.480 [conn25] build index test.distinct1 { _id: 1 }
m30001| Fri Feb 22 12:35:20.481 [conn25] build index done. scanned 0 total records. 0 secs
11ms
*******************************************
Test : jstests/ne3.js ...
m30999| Fri Feb 22 12:35:20.483 [conn1] DROP: test.jstests_ne3
m30001| Fri Feb 22 12:35:20.483 [conn25] CMD: drop test.jstests_ne3
m30001| Fri Feb 22 12:35:20.484 [conn25] assertion 13454 invalid regular expression operator ns:test.jstests_ne3 query:{ t: { $ne: /a/ } }
m30001| Fri Feb 22 12:35:20.484 [conn25] ntoskip:0 ntoreturn:-1
m30001| Fri Feb 22 12:35:20.484 [conn25] problem detected during query over test.jstests_ne3 : { $err: "invalid regular expression operator", code: 13454 }
m30999| Fri Feb 22 12:35:20.484 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13454 invalid regular expression operator
m30001| Fri Feb 22 12:35:20.484 [conn25] end connection 127.0.0.1:55259 (5 connections now open)
m30001| Fri Feb 22 12:35:20.485 [initandlisten] connection accepted from 127.0.0.1:49283 #27 (7 connections now open)
m30001| Fri Feb 22 12:35:20.486 [conn27] assertion 13454 invalid regular expression operator ns:test.jstests_ne3 query:{ t: { $gt: /a/ } }
m30001| Fri Feb 22 12:35:20.486 [conn27] ntoskip:0 ntoreturn:-1
m30001| Fri Feb 22 12:35:20.486 [conn27] problem detected during query over test.jstests_ne3 : { $err: "invalid regular expression operator", code: 13454 }
m30999| Fri Feb 22 12:35:20.486 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13454 invalid regular expression operator
m30001| Fri Feb 22 12:35:20.486 [conn27] end connection 127.0.0.1:49283 (6 connections now open)
m30001| Fri Feb 22 12:35:20.487 [initandlisten] connection accepted from 127.0.0.1:60087 #28 (6 connections now open)
m30001| Fri Feb 22 12:35:20.487 [conn28] assertion 13454 invalid regular expression operator ns:test.jstests_ne3 query:{ t: { $gte: /a/ } }
m30001| Fri Feb 22 12:35:20.487 [conn28] ntoskip:0 ntoreturn:-1
m30001| Fri Feb 22 12:35:20.487 [conn28] problem detected during query over test.jstests_ne3 : { $err: "invalid regular expression operator", code: 13454 }
m30999| Fri Feb 22 12:35:20.487 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13454 invalid regular expression operator
m30001| Fri Feb 22 12:35:20.487 [conn28] end connection 127.0.0.1:60087 (5 connections now open)
m30001| Fri Feb 22 12:35:20.488 [initandlisten] connection accepted from 127.0.0.1:50455 #29 (6 connections now open)
m30001| Fri Feb 22 12:35:20.488 [conn29] assertion 13454 invalid regular expression operator ns:test.jstests_ne3 query:{ t: { $lt: /a/ } }
m30001| Fri Feb 22 12:35:20.488 [conn29] ntoskip:0 ntoreturn:-1
m30001| Fri Feb 22 12:35:20.488 [conn29] problem detected during query over test.jstests_ne3 : { $err: "invalid regular expression operator", code: 13454 }
m30999| Fri Feb 22 12:35:20.489 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13454 invalid regular expression operator
m30001| Fri Feb 22 12:35:20.489 [conn29] end connection 127.0.0.1:50455 (5 connections now open)
m30001| Fri Feb 22 12:35:20.489 [initandlisten] connection accepted from 127.0.0.1:52699 #30 (6 connections now open)
m30001| Fri Feb 22 12:35:20.490 [conn30] assertion 13454 invalid regular expression operator ns:test.jstests_ne3 query:{ t: { $lte: /a/ } }
m30001| Fri Feb 22 12:35:20.490 [conn30] ntoskip:0 ntoreturn:-1
m30001| Fri Feb 22 12:35:20.490 [conn30] problem detected during query over test.jstests_ne3 : { $err: "invalid regular expression operator", code: 13454 }
m30999| Fri Feb 22 12:35:20.490 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13454 invalid regular expression operator
m30001| Fri Feb 22 12:35:20.490 [conn30] end connection 127.0.0.1:52699 (5 connections now open)
m30001| Fri Feb 22 12:35:20.491 [initandlisten] connection accepted from 127.0.0.1:34134 #31 (6 connections now open)
10ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_box1_noindex.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_oob_sphere.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_haystack2.js
*******************************************
Test : jstests/fts5.js ...
m30000| Fri Feb 22 12:35:20.495 [initandlisten] connection accepted from 127.0.0.1:62902 #18 (10 connections now open)
m30001| Fri Feb 22 12:35:20.495 [initandlisten] connection accepted from 127.0.0.1:43873 #32 (7 connections now open)
m30999| Fri Feb 22 12:35:20.496 [conn1] DROP: test.text5
m30001| Fri Feb 22 12:35:20.496 [conn31] CMD: drop test.text5
m30001| Fri Feb 22 12:35:20.497 [conn31] build index test.text5 { _id: 1 }
m30001| Fri Feb 22 12:35:20.498 [conn31] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:20.498 [conn31] build index test.text5 { _fts: "text", _ftsx: 1, z: 1.0 }
m30001| Fri Feb 22 12:35:20.500 [conn31] build index done. scanned 2 total records. 0.001 secs
13ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle5.js
>>>>>>>>>>>>>>> skipping jstests/_runner_leak_nojni.js
*******************************************
Test : jstests/group3.js ...
m30999| Fri Feb 22 12:35:20.509 [conn1] DROP: test.group3
m30001| Fri Feb 22 12:35:20.509 [conn31] CMD: drop test.group3
m30001| Fri Feb 22 12:35:20.510 [conn31] build index test.group3 { _id: 1 }
m30001| Fri Feb 22 12:35:20.511 [conn31] build index done. scanned 0 total records. 0 secs
59ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/dbadmin.js
*******************************************
Test : jstests/sort10.js ...
m30999| Fri Feb 22 12:35:20.566 [conn1] DROP: test.sort10
m30001| Fri Feb 22 12:35:20.567 [conn31] CMD: drop test.sort10
m30001| Fri Feb 22 12:35:20.567 [conn31] build index test.sort10 { _id: 1 }
m30001| Fri Feb 22 12:35:20.568 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.569 [conn31] build index test.sort10 { x: 1.0 }
m30001| Fri Feb 22 12:35:20.570 [conn31] build index done. scanned 2 total records. 0 secs
m30999| Fri Feb 22 12:35:20.571 [conn1] DROP: test.sort10
m30001| Fri Feb 22 12:35:20.571 [conn31] CMD: drop test.sort10
m30001| Fri Feb 22 12:35:20.578 [conn31] build index test.sort10 { _id: 1 }
m30001| Fri Feb 22 12:35:20.578 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.579 [conn31] build index test.sort10 { x: 1.0 } background
m30001| Fri Feb 22 12:35:20.579 [conn31] build index done. scanned 2 total records. 0 secs
m30999| Fri Feb 22 12:35:20.581 [conn1] DROP: test.sort10
m30001| Fri Feb 22 12:35:20.581 [conn31] CMD: drop test.sort10
m30001| Fri Feb 22 12:35:20.586 [conn31] build index test.sort10 { _id: 1 }
m30001| Fri Feb 22 12:35:20.586 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.589 [conn31] build index test.sort10 { x: 1.0 }
m30001| Fri Feb 22 12:35:20.589 [conn31] build index done. scanned 5 total records. 0 secs
m30001| Fri Feb 22 12:35:20.591 [conn9] CMD: dropIndexes test.sort10
m30001| Fri Feb 22 12:35:20.594 [conn31] build index test.sort10 { x: -1.0 }
m30001| Fri Feb 22 12:35:20.594 [conn31] build index done. scanned 5 total records. 0 secs
31ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_fiddly_box2.js
*******************************************
Test : jstests/arrayfind8.js ...
m30999| Fri Feb 22 12:35:20.598 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.598 [conn31] CMD: drop test.jstests_arrayfind8
m30999| Fri Feb 22 12:35:20.599 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.599 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.600 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.601 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.603 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.603 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.606 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.607 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.607 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.608 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.615 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.615 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.620 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.620 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.623 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.623 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.626 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.626 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.626 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.627 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.634 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.635 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.639 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.639 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.642 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.642 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.645 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.645 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.645 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.646 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.653 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.653 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.658 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.658 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.660 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.660 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.663 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.664 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.664 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.664 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.672 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.672 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.676 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.677 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.679 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.679 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.682 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.683 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.683 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.683 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.690 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.691 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.695 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.695 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.698 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.698 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.701 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.702 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.702 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.702 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.710 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.710 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.714 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.715 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.717 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.717 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.720 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.720 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.720 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.721 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.729 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.729 [conn31] CMD: drop test.jstests_arrayfind8
m30999| Fri Feb 22 12:35:20.735 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.735 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.736 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.736 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.736 [conn31] info: creating collection test.jstests_arrayfind8 on add index
m30001| Fri Feb 22 12:35:20.736 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.737 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.744 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.744 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.748 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.749 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.751 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.751 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.754 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.755 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.755 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.755 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.763 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.763 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.767 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.768 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.770 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.770 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.773 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.773 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.774 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.774 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.781 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.781 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.786 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.786 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.788 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.788 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.791 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.792 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.792 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.792 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.801 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.801 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.806 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.806 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.808 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.808 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.811 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.812 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.812 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.812 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.820 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.820 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.824 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.824 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.827 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.827 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.830 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.830 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.830 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.831 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.839 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.839 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.843 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.844 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.846 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.846 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.850 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.850 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.850 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.851 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.858 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.858 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.863 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.863 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.865 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.865 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.868 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.869 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.869 [conn31] build index test.jstests_arrayfind8 { a: 1.0 }
m30001| Fri Feb 22 12:35:20.869 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.878 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.878 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.882 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.883 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.885 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.885 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.888 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.888 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.889 [conn31] build index test.jstests_arrayfind8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:20.889 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.897 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.897 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.901 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.902 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.904 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.904 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.907 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.908 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.908 [conn31] build index test.jstests_arrayfind8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:20.908 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.916 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.916 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.921 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.921 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.923 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.923 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.926 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.927 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.927 [conn31] build index test.jstests_arrayfind8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:20.927 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.935 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.936 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.940 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.940 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.943 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.943 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.946 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.947 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.947 [conn31] build index test.jstests_arrayfind8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:20.947 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.955 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.955 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.960 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.960 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.962 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.962 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.965 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.966 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.966 [conn31] build index test.jstests_arrayfind8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:20.966 [conn31] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:35:20.976 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.977 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.981 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.981 [conn31] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:20.984 [conn1] DROP: test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.984 [conn31] CMD: drop test.jstests_arrayfind8
m30001| Fri Feb 22 12:35:20.987 [conn31] build index test.jstests_arrayfind8 { _id: 1 }
m30001| Fri Feb 22 12:35:20.988 [conn31] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:20.988 [conn31] build index test.jstests_arrayfind8 { a.b: 1.0 }
m30001| Fri Feb 22 12:35:20.988 [conn31] build index done. scanned 1 total records. 0 secs
400ms
*******************************************
Test : jstests/covered_index_geo_1.js ...
m30999| Fri Feb 22 12:35:21.004 [conn1] DROP: test.covered_geo_1 m30001| Fri Feb 22 12:35:21.005 [conn31] CMD: drop test.covered_geo_1 m30001| Fri Feb 22 12:35:21.005 [conn31] build index test.covered_geo_1 { _id: 1 } m30001| Fri Feb 22 12:35:21.006 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.007 [conn31] build index test.covered_geo_1 { loc: "2d", type: 1.0 } m30001| Fri Feb 22 12:35:21.008 [conn31] build index done. scanned 3 total records. 0.001 secs all tests passed 15ms ******************************************* Test : jstests/queryoptimizer6.js ... m30999| Fri Feb 22 12:35:21.013 [conn1] DROP: test.jstests_queryoptimizer6 m30001| Fri Feb 22 12:35:21.013 [conn31] CMD: drop test.jstests_queryoptimizer6 m30001| Fri Feb 22 12:35:21.014 [conn31] build index test.jstests_queryoptimizer6 { _id: 1 } m30001| Fri Feb 22 12:35:21.015 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.015 [conn31] build index test.jstests_queryoptimizer6 { b: 1.0 } m30001| Fri Feb 22 12:35:21.016 [conn31] build index done. scanned 1 total records. 0 secs 5ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo10.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_polygon3.js ******************************************* Test : jstests/ismaster.js ... 3ms ******************************************* Test : jstests/in3.js ... m30999| Fri Feb 22 12:35:21.020 [conn1] DROP: test.jstests_in3 m30001| Fri Feb 22 12:35:21.021 [conn31] CMD: drop test.jstests_in3 m30001| Fri Feb 22 12:35:21.022 [conn31] build index test.jstests_in3 { _id: 1 } m30001| Fri Feb 22 12:35:21.023 [conn31] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:35:21.023 [conn31] info: creating collection test.jstests_in3 on add index m30001| Fri Feb 22 12:35:21.023 [conn31] build index test.jstests_in3 { i: 1.0 } m30001| Fri Feb 22 12:35:21.023 [conn31] build index done. scanned 0 total records. 0 secs 8ms ******************************************* Test : jstests/find_and_modify_server6254.js ... m30999| Fri Feb 22 12:35:21.029 [conn1] DROP: test.find_and_modify_server6254 m30001| Fri Feb 22 12:35:21.029 [conn31] CMD: drop test.find_and_modify_server6254 m30001| Fri Feb 22 12:35:21.030 [conn31] build index test.find_and_modify_server6254 { _id: 1 } m30001| Fri Feb 22 12:35:21.031 [conn31] build index done. scanned 0 total records. 0.001 secs 4ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/remove5.js ******************************************* Test : jstests/distinct_index1.js ... m30999| Fri Feb 22 12:35:21.033 [conn1] DROP: test.distinct_index1 m30001| Fri Feb 22 12:35:21.034 [conn31] CMD: drop test.distinct_index1 m30001| Fri Feb 22 12:35:21.034 [conn31] build index test.distinct_index1 { _id: 1 } m30001| Fri Feb 22 12:35:21.036 [conn31] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:21.105 [conn31] build index test.distinct_index1 { a: 1.0 } m30001| Fri Feb 22 12:35:21.112 [conn31] build index done. scanned 1000 total records. 0.006 secs m30001| Fri Feb 22 12:35:21.119 [conn9] CMD: dropIndexes test.distinct_index1 m30001| Fri Feb 22 12:35:21.122 [conn31] build index test.distinct_index1 { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:21.129 [conn31] build index done. scanned 1000 total records. 0.007 secs 99ms ******************************************* Test : jstests/sortm.js ... 
m30999| Fri Feb 22 12:35:21.133 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.133 [conn31] CMD: drop test.jstests_sortm m30999| Fri Feb 22 12:35:21.134 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.134 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.135 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.136 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.136 [conn31] build index test.jstests_sortm { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:21.137 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.167 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.167 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.172 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.172 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.172 [conn31] build index test.jstests_sortm { a: 1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:35:21.173 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.203 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.203 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.207 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.208 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.208 [conn31] build index test.jstests_sortm { a: 1.0, b: 1.0, c: -1.0 } m30001| Fri Feb 22 12:35:21.209 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.243 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.243 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.248 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.248 [conn31] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:21.249 [conn31] build index test.jstests_sortm { a: 1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:35:21.249 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.282 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.282 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.287 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.287 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.288 [conn31] build index test.jstests_sortm { a: 1.0, b: 1.0, c: -1.0 } m30001| Fri Feb 22 12:35:21.288 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.323 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.323 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.327 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.328 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.328 [conn31] build index test.jstests_sortm { a: 1.0, b: -1.0 } m30001| Fri Feb 22 12:35:21.329 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.345 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.345 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.350 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.350 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.351 [conn31] build index test.jstests_sortm { a: 1.0, b: -1.0, c: 1.0 } m30001| Fri Feb 22 12:35:21.352 [conn31] build index done. scanned 6 total records. 0.001 secs m30999| Fri Feb 22 12:35:21.368 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.368 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.373 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.373 [conn31] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:21.374 [conn31] build index test.jstests_sortm { a: 1.0, b: -1.0, c: -1.0 } m30001| Fri Feb 22 12:35:21.374 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.390 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.390 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.394 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.395 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.395 [conn31] build index test.jstests_sortm { a: 1.0, b: -1.0, c: 1.0 } m30001| Fri Feb 22 12:35:21.396 [conn31] build index done. scanned 6 total records. 0 secs m30999| Fri Feb 22 12:35:21.411 [conn1] DROP: test.jstests_sortm m30001| Fri Feb 22 12:35:21.411 [conn31] CMD: drop test.jstests_sortm m30001| Fri Feb 22 12:35:21.416 [conn31] build index test.jstests_sortm { _id: 1 } m30001| Fri Feb 22 12:35:21.416 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.417 [conn31] build index test.jstests_sortm { a: 1.0, b: -1.0, c: -1.0 } m30001| Fri Feb 22 12:35:21.417 [conn31] build index done. scanned 6 total records. 0 secs 300ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/evald.js ******************************************* Test : jstests/explainb.js ... m30999| Fri Feb 22 12:35:21.437 [conn1] DROP: test.jstests_explainb m30001| Fri Feb 22 12:35:21.437 [conn31] CMD: drop test.jstests_explainb m30001| Fri Feb 22 12:35:21.438 [conn31] build index test.jstests_explainb { _id: 1 } m30001| Fri Feb 22 12:35:21.439 [conn31] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:21.439 [conn31] info: creating collection test.jstests_explainb on add index m30001| Fri Feb 22 12:35:21.439 [conn31] build index test.jstests_explainb { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:21.439 [conn31] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:21.440 [conn31] build index test.jstests_explainb { b: 1.0, a: 1.0 } m30001| Fri Feb 22 12:35:21.440 [conn31] build index done. scanned 0 total records. 0 secs 12ms ******************************************* Test : jstests/shellspawn.js ... m30999| Fri Feb 22 12:35:21.446 [conn1] DROP: test.jstests_shellspawn m30001| Fri Feb 22 12:35:21.446 [conn31] CMD: drop test.jstests_shellspawn Fri Feb 22 12:35:21.480 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo admin --port 30999 --eval sleep( 2000 ); db.getSiblingDB('test').getCollection( 'jstests_shellspawn' ).save( {a:1} ); sh15681| MongoDB shell version: 2.4.0-rc1-pre- sh15681| connecting to: 127.0.0.1:30999/admin m30999| Fri Feb 22 12:35:21.566 [mongosMain] connection accepted from 127.0.0.1:35013 #22 (2 connections now open) m30001| Fri Feb 22 12:35:23.569 [initandlisten] connection accepted from 127.0.0.1:54766 #33 (8 connections now open) m30001| Fri Feb 22 12:35:23.571 [conn33] build index test.jstests_shellspawn { _id: 1 } m30001| Fri Feb 22 12:35:23.574 [conn33] build index done. scanned 0 total records. 0.002 secs m30999| Fri Feb 22 12:35:23.579 [conn22] end connection 127.0.0.1:35013 (1 connection now open) Fri Feb 22 12:35:23.697 shell: stopped mongo program on pid 15681 Fri Feb 22 12:35:23.724 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 30999 --eval print( 'I am a shell' ); m30999| Fri Feb 22 12:35:23.751 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127660b881c8e7453916055 m30999| Fri Feb 22 12:35:23.752 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
Fri Feb 22 12:35:24.725 shell: stopped mongo program on pid 15694 Fri Feb 22 12:35:24.752 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 30999 Fri Feb 22 12:35:25.753 shell: stopped mongo program on pid 15699 Fri Feb 22 12:35:25.787 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 30999 Fri Feb 22 12:35:26.787 shell: stopped mongo program on pid 15700 5342ms ******************************************* Test : jstests/sort5.js ... m30999| Fri Feb 22 12:35:26.795 [conn1] DROP: test.sort5 m30001| Fri Feb 22 12:35:26.795 [conn31] CMD: drop test.sort5 m30001| Fri Feb 22 12:35:26.797 [conn31] build index test.sort5 { _id: 1 } m30001| Fri Feb 22 12:35:26.798 [conn31] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:26.799 [conn31] build index test.sort5 { y.b: 1.0, y.a: -1.0 } m30001| Fri Feb 22 12:35:26.800 [conn31] build index done. scanned 4 total records. 0.001 secs m30001| Fri Feb 22 12:35:26.801 [conn4] CMD: validate test.sort5 m30001| Fri Feb 22 12:35:26.801 [conn4] validating index 0: test.sort5.$_id_ m30001| Fri Feb 22 12:35:26.801 [conn4] validating index 1: test.sort5.$y.b_1_y.a_-1 m30001| Fri Feb 22 12:35:26.802 [conn31] build index test.sort5 { y.b: 1.0, _id: -1.0 } m30001| Fri Feb 22 12:35:26.803 [conn31] build index done. scanned 4 total records. 0.001 secs m30001| Fri Feb 22 12:35:26.804 [conn4] CMD: validate test.sort5 m30001| Fri Feb 22 12:35:26.804 [conn4] validating index 0: test.sort5.$_id_ m30001| Fri Feb 22 12:35:26.804 [conn4] validating index 1: test.sort5.$y.b_1_y.a_-1 m30001| Fri Feb 22 12:35:26.804 [conn4] validating index 2: test.sort5.$y.b_1__id_-1 16ms ******************************************* Test : jstests/js2.js ... m30001| Fri Feb 22 12:35:26.812 [conn31] build index test.jstests_js2 { _id: 1 } m30001| Fri Feb 22 12:35:26.813 [conn31] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:35:26.844 [conn31] JavaScript execution failed: ReferenceError: db is not defined near 'db.jstests_js2_2.save( { y : 1 ' (line 2) m30001| Fri Feb 22 12:35:26.845 [conn31] assertion 16722 JavaScript execution failed: ReferenceError: db is not defined near 'db.jstests_js2_2.save( { y : 1 ' (line 2) ns:test.jstests_js2 query:{ $where: function (){ m30001| db.jstests_js2_2.save( { y : ... } m30001| Fri Feb 22 12:35:26.845 [conn31] problem detected during query over test.jstests_js2 : { $err: "JavaScript execution failed: ReferenceError: db is not defined near 'db.jstests_js2_2.save( { y : 1 ' (line 2)", code: 16722 } m30999| Fri Feb 22 12:35:26.845 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16722 JavaScript execution failed: ReferenceError: db is not defined near 'db.jstests_js2_2.save( { y : 1 ' (line 2) m30001| Fri Feb 22 12:35:26.845 [conn31] end connection 127.0.0.1:34134 (7 connections now open) m30001| Fri Feb 22 12:35:26.846 [conn4] CMD: validate test.jstests_js2 m30001| Fri Feb 22 12:35:26.846 [conn4] validating index 0: test.jstests_js2.$_id_ 42ms ******************************************* Test : jstests/date1.js ... m30999| Fri Feb 22 12:35:26.849 [conn1] DROP: test.date1 m30001| Fri Feb 22 12:35:26.849 [conn33] CMD: drop test.date1 m30001| Fri Feb 22 12:35:26.849 [conn33] build index test.date1 { _id: 1 } m30001| Fri Feb 22 12:35:26.850 [conn33] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:26.851 [conn1] DROP: test.date1 m30001| Fri Feb 22 12:35:26.851 [conn33] CMD: drop test.date1 m30001| Fri Feb 22 12:35:26.855 [conn33] build index test.date1 { _id: 1 } m30001| Fri Feb 22 12:35:26.856 [conn33] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:26.856 [conn1] DROP: test.date1 m30001| Fri Feb 22 12:35:26.856 [conn33] CMD: drop test.date1 m30001| Fri Feb 22 12:35:26.859 [conn33] build index test.date1 { _id: 1 } m30001| Fri Feb 22 12:35:26.860 [conn33] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:26.860 [conn1] DROP: test.date1 m30001| Fri Feb 22 12:35:26.860 [conn33] CMD: drop test.date1 m30001| Fri Feb 22 12:35:26.864 [conn33] build index test.date1 { _id: 1 } m30001| Fri Feb 22 12:35:26.864 [conn33] build index done. scanned 0 total records. 0 secs 18ms ******************************************* Test : jstests/update_arraymatch1.js ... m30999| Fri Feb 22 12:35:26.873 [conn1] DROP: test.update_arraymatch1 m30001| Fri Feb 22 12:35:26.873 [conn33] CMD: drop test.update_arraymatch1 m30001| Fri Feb 22 12:35:26.874 [conn33] build index test.update_arraymatch1 { _id: 1 } m30001| Fri Feb 22 12:35:26.875 [conn33] build index done. scanned 0 total records. 0 secs 13ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geod.js ******************************************* Test : jstests/index_bigkeys.js ... 16384 pass:1 m30999| Fri Feb 22 12:35:26.881 [conn1] DROP: test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.882 [conn33] CMD: drop test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.882 [conn33] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.883 [conn33] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:26.883 [conn33] info: creating collection test.bigkeysidxtest on add index m30001| Fri Feb 22 12:35:26.883 [conn33] build index test.bigkeysidxtest { k: 1.0 } m30001| Fri Feb 22 12:35:26.884 [conn33] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:26.885 [conn33] test.bigkeysidxtest ERROR: key too large len:1037 max:1024 1037 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.885 [conn33] test.bigkeysidxtest ERROR: key too large len:2061 max:1024 2061 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.885 [conn4] CMD: validate test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.885 [conn4] validating index 0: test.bigkeysidxtest.$_id_ m30001| Fri Feb 22 12:35:26.885 [conn4] validating index 1: test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.885 [conn33] test.bigkeysidxtest ERROR: key too large len:4109 max:1024 4109 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.885 [conn33] test.bigkeysidxtest ERROR: key too large len:8205 max:1024 8205 test.bigkeysidxtest.$k_1 keycount:4 m30001| Fri Feb 22 12:35:26.886 [conn4] CMD: reIndex test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.889 [conn4] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.889 [conn4] build index done. scanned 9 total records. 0 secs m30001| Fri Feb 22 12:35:26.889 [conn4] build index test.bigkeysidxtest { k: 1.0 } m30001| Fri Feb 22 12:35:26.890 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 1037 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.890 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 2061 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.890 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 4109 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." 
} m30001| Fri Feb 22 12:35:26.890 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 8205 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.890 [conn4] warning: not all entries were added to the index, probably some keys were too large m30001| Fri Feb 22 12:35:26.890 [conn4] build index done. scanned 9 total records. 0 secs m30001| Fri Feb 22 12:35:26.890 [conn4] CMD: validate test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.890 [conn4] validating index 0: test.bigkeysidxtest.$_id_ m30001| Fri Feb 22 12:35:26.890 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4 m30999| Fri Feb 22 12:35:26.891 [conn1] DROP: test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.891 [conn33] CMD: drop test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.896 [conn33] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.897 [conn33] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:26.898 [conn33] build index test.bigkeysidxtest { k: 1.0 } m30001| Fri Feb 22 12:35:26.899 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 1037 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.899 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 2061 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." 
} m30001| Fri Feb 22 12:35:26.899 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 4109 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.899 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 8205 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.899 [conn33] warning: not all entries were added to the index, probably some keys were too large m30001| Fri Feb 22 12:35:26.899 [conn33] build index done. scanned 9 total records. 0 secs m30001| Fri Feb 22 12:35:26.900 [conn4] CMD: validate test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.900 [conn4] validating index 0: test.bigkeysidxtest.$_id_ m30001| Fri Feb 22 12:35:26.900 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4 m30999| Fri Feb 22 12:35:26.901 [conn1] DROP: test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.901 [conn33] CMD: drop test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.906 [conn33] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.906 [conn33] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:26.907 [conn33] build index test.bigkeysidxtest { k: 1.0 } background m30001| Fri Feb 22 12:35:26.907 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 1037 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." 
} m30001| Fri Feb 22 12:35:26.907 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 2061 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.907 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 4109 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.907 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 8205 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.907 [conn33] build index done. scanned 9 total records. 0 secs m30001| Fri Feb 22 12:35:26.907 [conn4] CMD: validate test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.908 [conn4] validating index 0: test.bigkeysidxtest.$_id_ m30001| Fri Feb 22 12:35:26.908 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4 pass:2 m30999| Fri Feb 22 12:35:26.908 [conn1] DROP: test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.909 [conn33] CMD: drop test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.914 [conn33] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.914 [conn33] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:26.914 [conn33] info: creating collection test.bigkeysidxtest on add index m30001| Fri Feb 22 12:35:26.914 [conn33] build index test.bigkeysidxtest { k: 1.0 } m30001| Fri Feb 22 12:35:26.914 [conn33] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:26.915 [conn33] test.bigkeysidxtest ERROR: key too large len:8205 max:1024 8205 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.915 [conn33] test.bigkeysidxtest ERROR: key too large len:4109 max:1024 4109 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.915 [conn33] test.bigkeysidxtest ERROR: key too large len:2061 max:1024 2061 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.915 [conn33] test.bigkeysidxtest ERROR: key too large len:1037 max:1024 1037 test.bigkeysidxtest.$k_1 m30001| Fri Feb 22 12:35:26.919 [conn4] CMD: validate test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.919 [conn4] validating index 0: test.bigkeysidxtest.$_id_ m30001| Fri Feb 22 12:35:26.919 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4 m30001| Fri Feb 22 12:35:26.920 [conn4] CMD: reIndex test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.923 [conn4] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.924 [conn4] build index done. scanned 10 total records. 0 secs m30001| Fri Feb 22 12:35:26.924 [conn4] build index test.bigkeysidxtest { k: 1.0 } m30001| Fri Feb 22 12:35:26.924 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 1037 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.924 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 2061 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.924 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 4109 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." 
} m30001| Fri Feb 22 12:35:26.924 [conn4] test Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 8205 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.924 [conn4] warning: not all entries were added to the index, probably some keys were too large m30001| Fri Feb 22 12:35:26.924 [conn4] build index done. scanned 10 total records. 0 secs m30001| Fri Feb 22 12:35:26.925 [conn4] CMD: validate test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.925 [conn4] validating index 0: test.bigkeysidxtest.$_id_ m30001| Fri Feb 22 12:35:26.925 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4 m30999| Fri Feb 22 12:35:26.926 [conn1] DROP: test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.926 [conn33] CMD: drop test.bigkeysidxtest m30001| Fri Feb 22 12:35:26.931 [conn33] build index test.bigkeysidxtest { _id: 1 } m30001| Fri Feb 22 12:35:26.931 [conn33] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:26.932 [conn33] build index test.bigkeysidxtest { k: 1.0 } m30001| Fri Feb 22 12:35:26.932 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 1037 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." } m30001| Fri Feb 22 12:35:26.932 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 2061 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." 
}
m30001| Fri Feb 22 12:35:26.932 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 4109 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." }
m30001| Fri Feb 22 12:35:26.932 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 8205 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." }
m30001| Fri Feb 22 12:35:26.932 [conn33] warning: not all entries were added to the index, probably some keys were too large
m30001| Fri Feb 22 12:35:26.932 [conn33] build index done. scanned 10 total records. 0 secs
m30001| Fri Feb 22 12:35:26.933 [conn4] CMD: validate test.bigkeysidxtest
m30001| Fri Feb 22 12:35:26.933 [conn4] validating index 0: test.bigkeysidxtest.$_id_
m30001| Fri Feb 22 12:35:26.933 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4
m30999| Fri Feb 22 12:35:26.934 [conn1] DROP: test.bigkeysidxtest
m30001| Fri Feb 22 12:35:26.934 [conn33] CMD: drop test.bigkeysidxtest
m30001| Fri Feb 22 12:35:26.939 [conn33] build index test.bigkeysidxtest { _id: 1 }
m30001| Fri Feb 22 12:35:26.939 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:26.940 [conn33] build index test.bigkeysidxtest { k: 1.0 } background
m30001| Fri Feb 22 12:35:26.940 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 2061 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." }
m30001| Fri Feb 22 12:35:26.940 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 1037 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." }
m30001| Fri Feb 22 12:35:26.940 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 8205 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." }
m30001| Fri Feb 22 12:35:26.940 [conn33] test.system.indexes Btree::insert: key too large to index, skipping test.bigkeysidxtest.$k_1 4109 { : "aaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeffffgggghhhhaaaabbbbccccddddeeeeff..." }
m30001| Fri Feb 22 12:35:26.940 [conn33] build index done. scanned 10 total records. 0 secs
m30001| Fri Feb 22 12:35:26.941 [conn4] CMD: validate test.bigkeysidxtest
m30001| Fri Feb 22 12:35:26.941 [conn4] validating index 0: test.bigkeysidxtest.$_id_
m30001| Fri Feb 22 12:35:26.941 [conn4] validating index 1: test.bigkeysidxtest.$k_1 keycount:4
65ms
*******************************************
Test : jstests/rename.js ...
m30999| Fri Feb 22 12:35:26.943 [conn1] DROP: test.jstests_rename_a
m30001| Fri Feb 22 12:35:26.944 [conn33] CMD: drop test.jstests_rename_a
m30999| Fri Feb 22 12:35:26.944 [conn1] DROP: test.jstests_rename_b
m30001| Fri Feb 22 12:35:26.944 [conn33] CMD: drop test.jstests_rename_b
m30999| Fri Feb 22 12:35:26.944 [conn1] DROP: test.jstests_rename_c
m30001| Fri Feb 22 12:35:26.944 [conn33] CMD: drop test.jstests_rename_c
m30001| Fri Feb 22 12:35:26.945 [conn33] build index test.jstests_rename_a { _id: 1 }
m30001| Fri Feb 22 12:35:26.945 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:26.946 [conn33] build index test.jstests_rename_a { a: 1.0 }
m30001| Fri Feb 22 12:35:26.946 [conn33] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:35:26.947 [conn33] build index test.jstests_rename_a { b: 1.0 }
m30001| Fri Feb 22 12:35:26.948 [conn33] build index done. scanned 2 total records. 0.001 secs
m30001| Fri Feb 22 12:35:26.949 [conn33] build index test.jstests_rename_c { _id: 1 }
m30001| Fri Feb 22 12:35:26.949 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:26.971 [conn1] DROP: test.jstests_rename_a
m30001| Fri Feb 22 12:35:26.972 [conn33] CMD: drop test.jstests_rename_a
m30999| Fri Feb 22 12:35:26.972 [conn1] DROP: test.jstests_rename_b
m30001| Fri Feb 22 12:35:26.972 [conn33] CMD: drop test.jstests_rename_b
m30999| Fri Feb 22 12:35:26.979 [conn1] DROP: test.jstests_rename_c
m30001| Fri Feb 22 12:35:26.979 [conn33] CMD: drop test.jstests_rename_c
m30001| Fri Feb 22 12:35:26.983 [conn33] build index test.jstests_rename_a { _id: 1 }
m30001| Fri Feb 22 12:35:26.983 [conn33] build index done. scanned 0 total records. 0 secs
82ms
*******************************************
Test : jstests/drop3.js ...
m30999| Fri Feb 22 12:35:27.026 [conn1] DROP: test.jstests_drop3
m30001| Fri Feb 22 12:35:27.027 [conn33] CMD: drop test.jstests_drop3
m30999| Fri Feb 22 12:35:27.027 [conn1] DROP: test.jstests_drop3.sub
m30001| Fri Feb 22 12:35:27.028 [conn33] CMD: drop test.jstests_drop3.sub
m30001| Fri Feb 22 12:35:27.029 [conn33] build index test.jstests_drop3 { _id: 1 }
m30001| Fri Feb 22 12:35:27.030 [conn33] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:27.031 [conn33] build index test.jstests_drop3.sub { _id: 1 }
m30001| Fri Feb 22 12:35:27.032 [conn33] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:35:27.034 [conn1] DROP: test.jstests_drop3
m30001| Fri Feb 22 12:35:27.034 [conn33] CMD: drop test.jstests_drop3
m30001| Fri Feb 22 12:35:27.039 [conn4] getMore: cursorid not found test.jstests_drop3 529369055068440
15ms
*******************************************
Test : jstests/insert2.js ...
m30999| Fri Feb 22 12:35:27.040 [conn1] DROP: test.insert2
m30001| Fri Feb 22 12:35:27.041 [conn33] CMD: drop test.insert2
3ms
*******************************************
Test : jstests/regex3.js ...
m30999| Fri Feb 22 12:35:27.042 [conn1] DROP: test.regex3
m30001| Fri Feb 22 12:35:27.043 [conn33] CMD: drop test.regex3
m30001| Fri Feb 22 12:35:27.043 [conn33] build index test.regex3 { _id: 1 }
m30001| Fri Feb 22 12:35:27.044 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.045 [conn33] build index test.regex3 { name: 1.0 }
m30001| Fri Feb 22 12:35:27.046 [conn33] build index done. scanned 4 total records. 0.001 secs
m30999| Fri Feb 22 12:35:27.047 [conn1] DROP: test.regex3
m30001| Fri Feb 22 12:35:27.047 [conn33] CMD: drop test.regex3
m30001| Fri Feb 22 12:35:27.053 [conn33] build index test.regex3 { _id: 1 }
m30001| Fri Feb 22 12:35:27.053 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.054 [conn33] build index test.regex3 { name: 1.0 }
m30001| Fri Feb 22 12:35:27.054 [conn33] build index done. scanned 4 total records. 0 secs
m30999| Fri Feb 22 12:35:27.057 [conn1] DROP: test.regex3
m30001| Fri Feb 22 12:35:27.057 [conn33] CMD: drop test.regex3
m30001| Fri Feb 22 12:35:27.061 [conn33] build index test.regex3 { _id: 1 }
m30001| Fri Feb 22 12:35:27.062 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.062 [conn33] build index test.regex3 { name: 1.0 }
m30001| Fri Feb 22 12:35:27.063 [conn33] build index done. scanned 1 total records. 0 secs
21ms
*******************************************
Test : jstests/update8.js ...
m30999| Fri Feb 22 12:35:27.064 [conn1] DROP: test.update8
m30001| Fri Feb 22 12:35:27.064 [conn33] CMD: drop test.update8
m30001| Fri Feb 22 12:35:27.065 [conn33] build index test.update8 { _id: 1 }
m30001| Fri Feb 22 12:35:27.066 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:27.067 [conn1] DROP: test.update8
m30001| Fri Feb 22 12:35:27.067 [conn33] CMD: drop test.update8
7ms
*******************************************
Test : jstests/ori.js ...
m30999| Fri Feb 22 12:35:27.078 [conn1] DROP: test.jstests_ori
m30001| Fri Feb 22 12:35:27.079 [conn33] CMD: drop test.jstests_ori
m30001| Fri Feb 22 12:35:27.079 [conn33] build index test.jstests_ori { _id: 1 }
m30001| Fri Feb 22 12:35:27.080 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.080 [conn33] info: creating collection test.jstests_ori on add index
m30001| Fri Feb 22 12:35:27.080 [conn33] build index test.jstests_ori { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:27.081 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.081 [conn33] build index test.jstests_ori { a: 1.0, c: 1.0 }
m30001| Fri Feb 22 12:35:27.082 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:27.085 [conn1] DROP: test.jstests_ori
m30001| Fri Feb 22 12:35:27.085 [conn33] CMD: drop test.jstests_ori
m30001| Fri Feb 22 12:35:27.093 [conn33] build index test.jstests_ori { _id: 1 }
m30001| Fri Feb 22 12:35:27.093 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.093 [conn33] info: creating collection test.jstests_ori on add index
m30001| Fri Feb 22 12:35:27.093 [conn33] build index test.jstests_ori { b: 1.0, a: 1.0 }
m30001| Fri Feb 22 12:35:27.094 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.094 [conn33] build index test.jstests_ori { a: 1.0, c: 1.0 }
m30001| Fri Feb 22 12:35:27.094 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:27.097 [conn1] DROP: test.jstests_ori
m30001| Fri Feb 22 12:35:27.097 [conn33] CMD: drop test.jstests_ori
m30001| Fri Feb 22 12:35:27.103 [conn33] build index test.jstests_ori { _id: 1 }
m30001| Fri Feb 22 12:35:27.103 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.103 [conn33] info: creating collection test.jstests_ori on add index
m30001| Fri Feb 22 12:35:27.103 [conn33] build index test.jstests_ori { a: 1.0 }
m30001| Fri Feb 22 12:35:27.104 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.105 [conn33] build index test.jstests_ori { b: 1.0 }
m30001| Fri Feb 22 12:35:27.105 [conn33] build index done. scanned 0 total records. 0 secs
37ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle1_noindex.js
*******************************************
Test : jstests/indexs.js ...
m30999| Fri Feb 22 12:35:27.109 [conn1] DROP: test.jstests_indexs
m30001| Fri Feb 22 12:35:27.110 [conn33] CMD: drop test.jstests_indexs
m30001| Fri Feb 22 12:35:27.110 [conn33] build index test.jstests_indexs { _id: 1 }
m30001| Fri Feb 22 12:35:27.111 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.111 [conn33] info: creating collection test.jstests_indexs on add index
m30001| Fri Feb 22 12:35:27.111 [conn33] build index test.jstests_indexs { a: 1.0 }
m30001| Fri Feb 22 12:35:27.112 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:27.114 [conn1] DROP: test.jstests_indexs
m30001| Fri Feb 22 12:35:27.114 [conn33] CMD: drop test.jstests_indexs
m30001| Fri Feb 22 12:35:27.119 [conn33] build index test.jstests_indexs { _id: 1 }
m30001| Fri Feb 22 12:35:27.119 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.119 [conn33] info: creating collection test.jstests_indexs on add index
m30001| Fri Feb 22 12:35:27.119 [conn33] build index test.jstests_indexs { a: 1.0, a.b: 1.0 }
m30001| Fri Feb 22 12:35:27.120 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:27.121 [conn1] DROP: test.jstests_indexs
m30001| Fri Feb 22 12:35:27.121 [conn33] CMD: drop test.jstests_indexs
m30001| Fri Feb 22 12:35:27.127 [conn33] build index test.jstests_indexs { _id: 1 }
m30001| Fri Feb 22 12:35:27.127 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:27.127 [conn33] info: creating collection test.jstests_indexs on add index
m30001| Fri Feb 22 12:35:27.127 [conn33] build index test.jstests_indexs { a: 1.0, a.b: 1.0 }
m30001| Fri Feb 22 12:35:27.128 [conn33] build index done. scanned 0 total records. 0 secs
21ms
*******************************************
Test : jstests/indexe.js ...
m30999| Fri Feb 22 12:35:27.131 [conn1] DROP: test.indexe
m30001| Fri Feb 22 12:35:27.131 [conn33] CMD: drop test.indexe
m30001| Fri Feb 22 12:35:27.132 [conn33] build index test.indexe { _id: 1 }
m30001| Fri Feb 22 12:35:27.132 [conn33] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:29.753 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276611881c8e7453916056
m30999| Fri Feb 22 12:35:29.754 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:35:32.958 [conn33] command test.$cmd command: { count: "indexe", query: { a: "b" } } ntoreturn:1 keyUpdates:0 locks(micros) r:111916 reslen:48 111ms
m30001| Fri Feb 22 12:35:33.088 [conn4] getmore test.indexe query: { a: "b" } cursorid:554794031839527 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:174549 nreturned:99899 reslen:3096889 128ms
m30001| Fri Feb 22 12:35:34.008 [conn32] end connection 127.0.0.1:43873 (6 connections now open)
m30000| Fri Feb 22 12:35:34.008 [conn18] end connection 127.0.0.1:62902 (9 connections now open)
m30001| Fri Feb 22 12:35:34.008 [conn26] end connection 127.0.0.1:34564 (5 connections now open)
m30000| Fri Feb 22 12:35:34.008 [conn17] end connection 127.0.0.1:34989 (8 connections now open)
m30001| Fri Feb 22 12:35:34.099 [conn33] build index test.indexe { a: 1.0 }
m30001| Fri Feb 22 12:35:34.824 [conn33] build index done. scanned 100000 total records. 0.724 secs
m30001| Fri Feb 22 12:35:34.824 [conn33] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:725157 725ms
m30001| Fri Feb 22 12:35:35.010 [conn4] getmore test.indexe query: { query: {}, orderby: { a: "b" } } cursorid:562783219647294 ntoreturn:0 keyUpdates:0 locks(micros) r:183963 nreturned:99899 reslen:3096889 183ms
m30999| Fri Feb 22 12:35:35.756 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276617881c8e7453916057
m30999| Fri Feb 22 12:35:35.756 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:35:36.072 [conn4] getmore test.indexe query: { a: "b" } cursorid:566949783469033 ntoreturn:0 keyUpdates:0 numYields: 1 locks(micros) r:509230 nreturned:99899 reslen:3096889 271ms
m30999| Fri Feb 22 12:35:36.816 [conn1] DROP: test.indexe
m30001| Fri Feb 22 12:35:36.816 [conn33] CMD: drop test.indexe
9693ms
*******************************************
Test : jstests/or1.js ...
m30999| Fri Feb 22 12:35:36.824 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.825 [conn33] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.826 [conn33] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.827 [conn33] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.827 [conn33] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: "a" }
m30001| Fri Feb 22 12:35:36.827 [conn33] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.827 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.828 [conn33] end connection 127.0.0.1:54766 (4 connections now open)
m30001| Fri Feb 22 12:35:36.828 [initandlisten] connection accepted from 127.0.0.1:40197 #34 (5 connections now open)
m30001| Fri Feb 22 12:35:36.829 [conn34] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: {} }
m30001| Fri Feb 22 12:35:36.829 [conn34] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.829 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.829 [conn34] end connection 127.0.0.1:40197 (4 connections now open)
m30001| Fri Feb 22 12:35:36.829 [initandlisten] connection accepted from 127.0.0.1:60185 #35 (5 connections now open)
m30001| Fri Feb 22 12:35:36.829 [conn35] assertion 14817 $and/$or elements must be objects ns:test.jstests_or1 query:{ $or: [ "a" ] }
m30001| Fri Feb 22 12:35:36.829 [conn35] problem detected during query over test.jstests_or1 : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:36.830 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:36.830 [conn35] end connection 127.0.0.1:60185 (4 connections now open)
m30001| Fri Feb 22 12:35:36.830 [initandlisten] connection accepted from 127.0.0.1:62251 #36 (5 connections now open)
m30999| Fri Feb 22 12:35:36.832 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.832 [conn36] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.836 [conn36] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.836 [conn36] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:36.838 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.838 [conn36] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.841 [conn36] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.841 [conn36] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.841 [conn36] info: creating collection test.jstests_or1 on add index
m30001| Fri Feb 22 12:35:36.841 [conn36] build index test.jstests_or1 { a: 1.0 }
m30001| Fri Feb 22 12:35:36.842 [conn36] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.843 [conn36] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: "a" }
m30001| Fri Feb 22 12:35:36.843 [conn36] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.843 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.844 [conn36] end connection 127.0.0.1:62251 (4 connections now open)
m30001| Fri Feb 22 12:35:36.844 [initandlisten] connection accepted from 127.0.0.1:62278 #37 (5 connections now open)
m30001| Fri Feb 22 12:35:36.844 [conn37] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: {} }
m30001| Fri Feb 22 12:35:36.844 [conn37] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.844 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.845 [conn37] end connection 127.0.0.1:62278 (4 connections now open)
m30001| Fri Feb 22 12:35:36.845 [initandlisten] connection accepted from 127.0.0.1:33409 #38 (5 connections now open)
m30001| Fri Feb 22 12:35:36.846 [conn38] assertion 14817 $and/$or elements must be objects ns:test.jstests_or1 query:{ $or: [ "a" ] }
m30001| Fri Feb 22 12:35:36.846 [conn38] problem detected during query over test.jstests_or1 : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:36.846 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:36.846 [conn38] end connection 127.0.0.1:33409 (4 connections now open)
m30001| Fri Feb 22 12:35:36.846 [initandlisten] connection accepted from 127.0.0.1:41223 #39 (5 connections now open)
m30999| Fri Feb 22 12:35:36.848 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.848 [conn39] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.853 [conn39] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.853 [conn39] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:36.854 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.854 [conn39] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.858 [conn39] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.858 [conn39] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.859 [conn39] info: creating collection test.jstests_or1 on add index
m30001| Fri Feb 22 12:35:36.859 [conn39] build index test.jstests_or1 { b: 1.0 }
m30001| Fri Feb 22 12:35:36.859 [conn39] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.860 [conn39] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: "a" }
m30001| Fri Feb 22 12:35:36.860 [conn39] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.860 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.860 [conn39] end connection 127.0.0.1:41223 (4 connections now open)
m30001| Fri Feb 22 12:35:36.861 [initandlisten] connection accepted from 127.0.0.1:55814 #40 (5 connections now open)
m30001| Fri Feb 22 12:35:36.861 [conn40] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: {} }
m30001| Fri Feb 22 12:35:36.861 [conn40] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.861 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.861 [conn40] end connection 127.0.0.1:55814 (4 connections now open)
m30001| Fri Feb 22 12:35:36.861 [initandlisten] connection accepted from 127.0.0.1:35941 #41 (5 connections now open)
m30001| Fri Feb 22 12:35:36.862 [conn41] assertion 14817 $and/$or elements must be objects ns:test.jstests_or1 query:{ $or: [ "a" ] }
m30001| Fri Feb 22 12:35:36.862 [conn41] problem detected during query over test.jstests_or1 : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:36.862 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:36.862 [conn41] end connection 127.0.0.1:35941 (4 connections now open)
m30001| Fri Feb 22 12:35:36.862 [initandlisten] connection accepted from 127.0.0.1:33034 #42 (5 connections now open)
m30999| Fri Feb 22 12:35:36.864 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.864 [conn42] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.869 [conn42] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.870 [conn42] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:36.871 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.871 [conn42] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.874 [conn42] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.875 [conn42] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.875 [conn42] info: creating collection test.jstests_or1 on add index
m30001| Fri Feb 22 12:35:36.875 [conn42] build index test.jstests_or1 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:36.875 [conn42] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.876 [conn42] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: "a" }
m30001| Fri Feb 22 12:35:36.876 [conn42] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.876 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.876 [conn42] end connection 127.0.0.1:33034 (4 connections now open)
m30001| Fri Feb 22 12:35:36.877 [initandlisten] connection accepted from 127.0.0.1:44197 #43 (5 connections now open)
m30001| Fri Feb 22 12:35:36.877 [conn43] assertion 13262 $or requires nonempty array ns:test.jstests_or1 query:{ $or: {} }
m30001| Fri Feb 22 12:35:36.877 [conn43] problem detected during query over test.jstests_or1 : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:36.877 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:36.877 [conn43] end connection 127.0.0.1:44197 (4 connections now open)
m30001| Fri Feb 22 12:35:36.878 [initandlisten] connection accepted from 127.0.0.1:45493 #44 (5 connections now open)
m30001| Fri Feb 22 12:35:36.878 [conn44] assertion 14817 $and/$or elements must be objects ns:test.jstests_or1 query:{ $or: [ "a" ] }
m30001| Fri Feb 22 12:35:36.878 [conn44] problem detected during query over test.jstests_or1 : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:36.878 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:36.878 [conn44] end connection 127.0.0.1:45493 (4 connections now open)
m30001| Fri Feb 22 12:35:36.879 [initandlisten] connection accepted from 127.0.0.1:33261 #45 (5 connections now open)
m30999| Fri Feb 22 12:35:36.880 [conn1] DROP: test.jstests_or1
m30001| Fri Feb 22 12:35:36.881 [conn45] CMD: drop test.jstests_or1
m30001| Fri Feb 22 12:35:36.886 [conn45] build index test.jstests_or1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.886 [conn45] build index done. scanned 0 total records. 0 secs
64ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_update3.js
*******************************************
Test : jstests/inb.js ...
m30999| Fri Feb 22 12:35:36.897 [conn1] DROP: test.jstests_inb
m30001| Fri Feb 22 12:35:36.897 [conn45] CMD: drop test.jstests_inb
m30001| Fri Feb 22 12:35:36.898 [conn45] build index test.jstests_inb { _id: 1 }
m30001| Fri Feb 22 12:35:36.899 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.899 [conn45] info: creating collection test.jstests_inb on add index
m30001| Fri Feb 22 12:35:36.899 [conn45] build index test.jstests_inb { x: 1.0 }
m30001| Fri Feb 22 12:35:36.900 [conn45] build index done. scanned 0 total records. 0 secs
17ms
>>>>>>>>>>>>>>> skipping jstests/_runner_sharding.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_near_random1.js
*******************************************
Test : jstests/arrayfind1.js ...
m30999| Fri Feb 22 12:35:36.905 [conn1] DROP: test.arrayfind1
m30001| Fri Feb 22 12:35:36.905 [conn45] CMD: drop test.arrayfind1
m30001| Fri Feb 22 12:35:36.906 [conn45] build index test.arrayfind1 { _id: 1 }
m30001| Fri Feb 22 12:35:36.907 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.912 [conn45] build index test.arrayfind1 { a.x: 1.0 }
m30001| Fri Feb 22 12:35:36.913 [conn45] build index done. scanned 6 total records. 0 secs
13ms
*******************************************
Test : jstests/basic7.js ...
m30999| Fri Feb 22 12:35:36.918 [conn1] DROP: test.basic7
m30001| Fri Feb 22 12:35:36.918 [conn45] CMD: drop test.basic7
m30001| Fri Feb 22 12:35:36.919 [conn45] build index test.basic7 { _id: 1 }
m30001| Fri Feb 22 12:35:36.920 [conn45] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:36.920 [conn45] build index test.basic7 { a: 1.0 }
m30001| Fri Feb 22 12:35:36.921 [conn45] build index done. scanned 1 total records. 0 secs
5ms
*******************************************
Test : jstests/fts_proj.js ...
m30000| Fri Feb 22 12:35:36.930 [initandlisten] connection accepted from 127.0.0.1:35789 #19 (9 connections now open)
m30001| Fri Feb 22 12:35:36.931 [initandlisten] connection accepted from 127.0.0.1:39219 #46 (6 connections now open)
m30999| Fri Feb 22 12:35:36.931 [conn1] DROP: test.text_proj
m30001| Fri Feb 22 12:35:36.931 [conn45] CMD: drop test.text_proj
m30001| Fri Feb 22 12:35:36.932 [conn45] build index test.text_proj { _id: 1 }
m30001| Fri Feb 22 12:35:36.933 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.933 [conn45] build index test.text_proj { _fts: "text", _ftsx: 1 }
m30001| Fri Feb 22 12:35:36.934 [conn45] build index done. scanned 3 total records. 0 secs
16ms
*******************************************
Test : jstests/find7.js ...
m30999| Fri Feb 22 12:35:36.939 [conn1] DROP: test.find7
m30001| Fri Feb 22 12:35:36.939 [conn45] CMD: drop test.find7
m30001| Fri Feb 22 12:35:36.940 [conn45] build index test.find7 { _id: 1 }
m30001| Fri Feb 22 12:35:36.940 [conn45] build index done. scanned 0 total records. 0 secs
4ms
*******************************************
Test : jstests/find_and_modify2.js ...
m30999| Fri Feb 22 12:35:36.942 [conn1] DROP: test.find_and_modify2
m30001| Fri Feb 22 12:35:36.942 [conn45] CMD: drop test.find_and_modify2
m30001| Fri Feb 22 12:35:36.943 [conn45] build index test.find_and_modify2 { _id: 1 }
m30001| Fri Feb 22 12:35:36.944 [conn45] build index done. scanned 0 total records. 0 secs
5ms
*******************************************
Test : jstests/index_sparse2.js ...
m30999| Fri Feb 22 12:35:36.948 [conn1] DROP: test.index_sparse2
m30001| Fri Feb 22 12:35:36.948 [conn45] CMD: drop test.index_sparse2
m30001| Fri Feb 22 12:35:36.949 [conn45] build index test.index_sparse2 { _id: 1 }
m30001| Fri Feb 22 12:35:36.949 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:36.950 [conn45] build index test.index_sparse2 { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:35:36.951 [conn45] build index done. scanned 3 total records. 0.001 secs
m30001| Fri Feb 22 12:35:36.952 [conn4] CMD: dropIndexes test.index_sparse2
m30001| Fri Feb 22 12:35:36.955 [conn45] build index test.index_sparse2 { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:35:36.955 [conn45] build index done. scanned 3 total records. 0 secs
m30001| Fri Feb 22 12:35:36.957 [conn4] CMD: dropIndexes test.index_sparse2
12ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_circle2a.js
*******************************************
Test : jstests/ref.js ...
m30999| Fri Feb 22 12:35:36.960 [conn1] DROP: test.otherthings
m30001| Fri Feb 22 12:35:36.960 [conn45] CMD: drop test.otherthings
m30999| Fri Feb 22 12:35:36.960 [conn1] DROP: test.things
m30001| Fri Feb 22 12:35:36.960 [conn45] CMD: drop test.things
m30001| Fri Feb 22 12:35:36.961 [conn45] build index test.otherthings { _id: 1 }
m30001| Fri Feb 22 12:35:36.962 [conn45] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:36.962 [conn45] build index test.things { _id: 1 }
m30001| Fri Feb 22 12:35:36.963 [conn45] build index done. scanned 0 total records. 0 secs
7ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_update2.js
>>>>>>>>>>>>>>> skipping jstests/_fail.js
*******************************************
Test : jstests/find_and_modify3.js ...
m30999| Fri Feb 22 12:35:36.966 [conn1] DROP: test.find_and_modify3
m30001| Fri Feb 22 12:35:36.967 [conn45] CMD: drop test.find_and_modify3
m30001| Fri Feb 22 12:35:36.967 [conn45] build index test.find_and_modify3 { _id: 1 }
m30001| Fri Feb 22 12:35:36.968 [conn45] build index done. scanned 0 total records. 0 secs
7ms
*******************************************
Test : jstests/find6.js ...
m30999| Fri Feb 22 12:35:36.974 [conn1] DROP: test.find6
m30001| Fri Feb 22 12:35:36.974 [conn45] CMD: drop test.find6
m30001| Fri Feb 22 12:35:36.974 [conn45] build index test.find6 { _id: 1 }
m30001| Fri Feb 22 12:35:36.975 [conn45] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:37.006 [conn1] DROP: test.find6a
m30001| Fri Feb 22 12:35:37.006 [conn45] CMD: drop test.find6a
m30001| Fri Feb 22 12:35:37.006 [conn45] build index test.find6a { _id: 1 }
m30001| Fri Feb 22 12:35:37.007 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:37.009 [conn45] build index test.find6a { a: 1.0 }
m30001| Fri Feb 22 12:35:37.010 [conn45] build index done. scanned 4 total records. 0 secs
m30999| Fri Feb 22 12:35:37.012 [conn1] DROP: test.multidim
m30001| Fri Feb 22 12:35:37.013 [conn45] CMD: drop test.multidim
m30001| Fri Feb 22 12:35:37.013 [conn45] build index test.multidim { _id: 1 }
m30001| Fri Feb 22 12:35:37.014 [conn45] build index done. scanned 0 total records. 0 secs
42ms
*******************************************
Test : jstests/basic6.js ...
1ms
*******************************************
Test : jstests/cursorb.js ...
m30999| Fri Feb 22 12:35:37.024 [conn1] DROP: test.jstests_cursorb
m30001| Fri Feb 22 12:35:37.024 [conn45] CMD: drop test.jstests_cursorb
m30001| Fri Feb 22 12:35:37.024 [conn45] build index test.jstests_cursorb { _id: 1 }
m30001| Fri Feb 22 12:35:37.025 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:37.047 [conn46] end connection 127.0.0.1:39219 (5 connections now open)
m30000| Fri Feb 22 12:35:37.047 [conn19] end connection 127.0.0.1:35789 (8 connections now open)
49ms
*******************************************
Test : jstests/idhack.js ...
m30999| Fri Feb 22 12:35:37.065 [conn1] DROP: test.idhack
m30001| Fri Feb 22 12:35:37.067 [conn45] CMD: drop test.idhack
m30001| Fri Feb 22 12:35:37.067 [conn45] build index test.idhack { _id: 1 }
m30001| Fri Feb 22 12:35:37.068 [conn45] build index done. scanned 0 total records. 0 secs
6ms
*******************************************
Test : jstests/regex_embed1.js ...
m30999| Fri Feb 22 12:35:37.072 [conn1] DROP: test.regex_embed1
m30001| Fri Feb 22 12:35:37.072 [conn45] CMD: drop test.regex_embed1
m30001| Fri Feb 22 12:35:37.072 [conn45] build index test.regex_embed1 { _id: 1 }
m30001| Fri Feb 22 12:35:37.073 [conn45] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:37.075 [conn45] build index test.regex_embed1 { a.x: 1.0 }
m30001| Fri Feb 22 12:35:37.076 [conn45] build index done. scanned 3 total records. 0 secs
7ms
*******************************************
Test : jstests/indexd.js ...
m30999| Fri Feb 22 12:35:37.079 [conn1] DROP: test.indexd m30001| Fri Feb 22 12:35:37.079 [conn45] CMD: drop test.indexd m30001| Fri Feb 22 12:35:37.079 [conn45] build index test.indexd { _id: 1 } m30001| Fri Feb 22 12:35:37.080 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.080 [conn45] build index test.indexd { a: 1.0 } m30001| Fri Feb 22 12:35:37.080 [conn45] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:35:37.081 [conn1] DROP: test.indexd.$_id_ m30001| Fri Feb 22 12:35:37.081 [conn45] CMD: drop test.indexd.$_id_ m30999| Fri Feb 22 12:35:37.081 [conn1] DROP: test.indexd m30001| Fri Feb 22 12:35:37.081 [conn45] CMD: drop test.indexd 9ms ******************************************* Test : jstests/indexr.js ... m30999| Fri Feb 22 12:35:37.097 [conn1] DROP: test.jstests_indexr m30001| Fri Feb 22 12:35:37.097 [conn45] CMD: drop test.jstests_indexr m30001| Fri Feb 22 12:35:37.098 [conn45] build index test.jstests_indexr { _id: 1 } m30001| Fri Feb 22 12:35:37.098 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.099 [conn45] build index test.jstests_indexr { a.b: 1.0, a.c: 1.0 } m30001| Fri Feb 22 12:35:37.100 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.100 [conn45] build index test.jstests_indexr { a: 1.0, a.c: 1.0 } m30001| Fri Feb 22 12:35:37.101 [conn45] build index done. scanned 0 total records. 0 secs 26ms ******************************************* Test : jstests/updatea.js ... m30999| Fri Feb 22 12:35:37.114 [conn1] DROP: test.updatea m30001| Fri Feb 22 12:35:37.114 [conn45] CMD: drop test.updatea m30001| Fri Feb 22 12:35:37.115 [conn45] build index test.updatea { _id: 1 } m30001| Fri Feb 22 12:35:37.115 [conn45] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:37.117 [conn1] DROP: test.updatea m30001| Fri Feb 22 12:35:37.117 [conn45] CMD: drop test.updatea m30001| Fri Feb 22 12:35:37.121 [conn45] build index test.updatea { _id: 1 } m30001| Fri Feb 22 12:35:37.122 [conn45] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:37.122 [conn1] DROP: test.updatea m30001| Fri Feb 22 12:35:37.123 [conn45] CMD: drop test.updatea m30001| Fri Feb 22 12:35:37.126 [conn45] build index test.updatea { _id: 1 } m30001| Fri Feb 22 12:35:37.127 [conn45] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:37.128 [conn1] DROP: test.updatea m30001| Fri Feb 22 12:35:37.128 [conn45] CMD: drop test.updatea m30001| Fri Feb 22 12:35:37.131 [conn45] build index test.updatea { _id: 1 } m30001| Fri Feb 22 12:35:37.132 [conn45] build index done. scanned 0 total records. 0 secs 20ms ******************************************* Test : jstests/filemd5.js ... m30999| Fri Feb 22 12:35:37.133 [conn1] DROP: test.fs.chunks m30001| Fri Feb 22 12:35:37.133 [conn45] CMD: drop test.fs.chunks m30001| Fri Feb 22 12:35:37.134 [conn45] build index test.fs.chunks { _id: 1 } m30001| Fri Feb 22 12:35:37.135 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.135 [conn45] warning: best guess query plan requested, but scan and order are required for all plans query: { files_id: 1.0, n: { $gte: 0 } } order: { files_id: 1, n: 1 } choices: { $natural: 1 } m30001| Fri Feb 22 12:35:37.135 [conn45] build index test.fs.chunks { files_id: 1.0, n: 1.0 } m30001| Fri Feb 22 12:35:37.136 [conn45] build index done. scanned 1 total records. 0 secs 4ms ******************************************* Test : jstests/remove_undefined.js ... m30001| Fri Feb 22 12:35:37.138 [conn45] build index test.drop_undefined.js { _id: 1 } m30001| Fri Feb 22 12:35:37.138 [conn45] build index done. scanned 0 total records. 
0 secs 4ms ******************************************* Test : jstests/orh.js ... m30999| Fri Feb 22 12:35:37.146 [conn1] DROP: test.jstests_orh m30001| Fri Feb 22 12:35:37.146 [conn45] CMD: drop test.jstests_orh m30001| Fri Feb 22 12:35:37.146 [conn45] build index test.jstests_orh { _id: 1 } m30001| Fri Feb 22 12:35:37.147 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.147 [conn45] info: creating collection test.jstests_orh on add index m30001| Fri Feb 22 12:35:37.147 [conn45] build index test.jstests_orh { a: 1.0 } m30001| Fri Feb 22 12:35:37.148 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.148 [conn45] build index test.jstests_orh { b: 1.0, a: 1.0 } m30001| Fri Feb 22 12:35:37.148 [conn45] build index done. scanned 0 total records. 0 secs 10ms ******************************************* Test : jstests/update9.js ... m30999| Fri Feb 22 12:35:37.152 [conn1] DROP: test.update9 m30001| Fri Feb 22 12:35:37.152 [conn45] CMD: drop test.update9 m30001| Fri Feb 22 12:35:37.153 [conn45] build index test.update9 { _id: 1 } m30001| Fri Feb 22 12:35:37.153 [conn45] build index done. scanned 0 total records. 0 secs 4ms ******************************************* Test : jstests/regex2.js ... m30999| Fri Feb 22 12:35:37.156 [conn1] DROP: test.regex2 m30001| Fri Feb 22 12:35:37.156 [conn45] CMD: drop test.regex2 m30001| Fri Feb 22 12:35:37.157 [conn45] build index test.regex2 { _id: 1 } m30001| Fri Feb 22 12:35:37.157 [conn45] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:37.160 [conn1] DROP: test.regex2 m30001| Fri Feb 22 12:35:37.160 [conn45] CMD: drop test.regex2 m30001| Fri Feb 22 12:35:37.164 [conn45] build index test.regex2 { _id: 1 } m30001| Fri Feb 22 12:35:37.164 [conn45] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:37.167 [conn1] DROP: test.regex2 m30001| Fri Feb 22 12:35:37.167 [conn45] CMD: drop test.regex2 m30001| Fri Feb 22 12:35:37.171 [conn45] build index test.regex2 { _id: 1 } m30001| Fri Feb 22 12:35:37.172 [conn45] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:35:37.175 [conn1] DROP: test.regex2 m30001| Fri Feb 22 12:35:37.175 [conn45] CMD: drop test.regex2 m30001| Fri Feb 22 12:35:37.179 [conn45] build index test.regex2 { _id: 1 } m30001| Fri Feb 22 12:35:37.179 [conn45] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:37.180 [conn1] DROP: test.regex2 m30001| Fri Feb 22 12:35:37.180 [conn45] CMD: drop test.regex2 m30001| Fri Feb 22 12:35:37.184 [conn45] build index test.regex2 { _id: 1 } m30001| Fri Feb 22 12:35:37.184 [conn45] build index done. scanned 0 total records. 0 secs 30ms ******************************************* Test : jstests/drop2.js ... m30999| Fri Feb 22 12:35:37.186 [conn1] DROP: test.jstests_drop2 m30001| Fri Feb 22 12:35:37.186 [conn45] CMD: drop test.jstests_drop2 m30001| Fri Feb 22 12:35:37.187 [conn45] build index test.jstests_drop2 { _id: 1 } m30001| Fri Feb 22 12:35:37.187 [conn45] build index done. scanned 0 total records. 
0 secs Fri Feb 22 12:35:37.220 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');print("Count thread started");db.jstests_drop2.count( { $where: function() {while( 1 ) { sleep( 1 ); } } } );print("Count thread terminating"); localhost:30999/admin [ ] sh15728| MongoDB shell version: 2.4.0-rc1-pre- sh15728| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:37.298 [mongosMain] connection accepted from 127.0.0.1:49074 #23 (2 connections now open) sh15728| Count thread started m30001| Fri Feb 22 12:35:37.301 [initandlisten] connection accepted from 127.0.0.1:47676 #47 (6 connections now open) [ { "opid" : "shard0001:462544", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_drop2", "query" : { "count" : "jstests_drop2", "query" : { "$where" : function () {while( 1 ) { sleep( 1 ); } } } }, "client_s" : "127.0.0.1:47676", "desc" : "conn47", "threadId" : "0x44", "connectionId" : 47, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(6), "w" : NumberLong(0) } } } ] Fri Feb 22 12:35:37.455 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');print("Drop thread 
started");db.jstests_drop2.drop();print("Drop thread terminating") localhost:30999/admin [ { "opid" : "shard0001:462544", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_drop2", "query" : { "count" : "jstests_drop2", "query" : { "$where" : function () {while( 1 ) { sleep( 1 ); } } } }, "client_s" : "127.0.0.1:47676", "desc" : "conn47", "threadId" : "0x44", "connectionId" : 47, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(6), "w" : NumberLong(0) } } } ] sh15729| MongoDB shell version: 2.4.0-rc1-pre- sh15729| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:37.541 [mongosMain] connection accepted from 127.0.0.1:43163 #24 (3 connections now open) sh15729| Drop thread started m30999| Fri Feb 22 12:35:37.544 [conn24] DROP: test.jstests_drop2 m30001| Fri Feb 22 12:35:37.544 [initandlisten] connection accepted from 127.0.0.1:38005 #48 (7 connections now open) [ { "opid" : "shard0001:462547", "active" : false, "op" : "query", "ns" : "", "query" : { "drop" : "jstests_drop2" }, "client_s" : "127.0.0.1:38005", "desc" : "conn48", "threadId" : "0x45", "connectionId" : 48, "locks" : { "^test" : "W" }, "waitingForLock" : true, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { } } }, { "opid" : "shard0001:462544", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_drop2", "query" : { "count" : "jstests_drop2", "query" : { "$where" : function () {while( 1 ) { sleep( 1 ); } } } }, "client_s" : "127.0.0.1:47676", "desc" : "conn47", "threadId" : "0x44", "connectionId" : 47, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(6), "w" : NumberLong(0) } } } ] m30999| Fri Feb 22 12:35:37.660 [conn1] want to kill op: op: "shard0001:462547" m30001| Fri Feb 22 
12:35:37.660 [conn4] going to kill op: op: 462547 m30999| Fri Feb 22 12:35:37.660 [conn1] want to kill op: op: "shard0001:462544" m30001| Fri Feb 22 12:35:37.660 [conn4] going to kill op: op: 462544 m30001| Fri Feb 22 12:35:37.664 [conn47] JavaScript execution terminated m30001| Fri Feb 22 12:35:37.664 [conn47] Count with ns: test.jstests_drop2 and query: { $where: function () {while( 1 ) { sleep( 1 ); } } } failed with exception: 16712 JavaScript execution terminated code: 16712 m30001| Fri Feb 22 12:35:37.665 [conn47] command test.$cmd command: { count: "jstests_drop2", query: { $where: function () {while( 1 ) { sleep( 1 ); } } } } ntoreturn:1 keyUpdates:0 locks(micros) r:362932 reslen:87 362ms m30001| Fri Feb 22 12:35:37.665 [conn48] CMD: drop test.jstests_drop2 sh15728| Fri Feb 22 12:35:37.668 JavaScript execution failed: count failed: { sh15728| "shards" : { sh15728| sh15728| }, sh15728| "cause" : { sh15728| "ok" : 0, sh15728| "errmsg" : "16712 JavaScript execution terminated" sh15728| }, sh15728| "ok" : 0, sh15728| "errmsg" : "failed on : shard0001" sh15728| } at src/mongo/shell/query.js:L180 sh15729| Drop thread terminating m30999| Fri Feb 22 12:35:37.677 [conn23] end connection 127.0.0.1:49074 (2 connections now open) m30999| Fri Feb 22 12:35:37.680 [conn24] end connection 127.0.0.1:43163 (1 connection now open) m30999| Fri Feb 22 12:35:37.686 [conn1] DROP: test.jstests_drop2 m30001| Fri Feb 22 12:35:37.686 [conn45] CMD: drop test.jstests_drop2 501ms ******************************************* Test : jstests/hint1.js ... m30999| Fri Feb 22 12:35:37.687 [conn1] DROP: test.jstests_hint1 m30001| Fri Feb 22 12:35:37.687 [conn45] CMD: drop test.jstests_hint1 m30001| Fri Feb 22 12:35:37.688 [conn45] build index test.jstests_hint1 { _id: 1 } m30001| Fri Feb 22 12:35:37.689 [conn45] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:37.689 [conn45] build index test.jstests_hint1 { ts: 1.0 } m30001| Fri Feb 22 12:35:37.689 [conn45] build index done. scanned 1 total records. 0 secs 5ms ******************************************* Test : jstests/in.js ... m30999| Fri Feb 22 12:35:37.692 [conn1] DROP: test.in1 m30001| Fri Feb 22 12:35:37.692 [conn45] CMD: drop test.in1 m30001| Fri Feb 22 12:35:37.693 [conn45] build index test.in1 { _id: 1 } m30001| Fri Feb 22 12:35:37.693 [conn45] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:37.695 [conn45] build index test.in1 { a: 1.0 } m30001| Fri Feb 22 12:35:37.695 [conn45] build index done. scanned 2 total records. 0 secs 6ms ******************************************* Test : jstests/js3.js ... m30999| Fri Feb 22 12:35:37.703 [conn1] DROP: test.jstests_js3 m30001| Fri Feb 22 12:35:37.703 [conn45] CMD: drop test.jstests_js3 m30001| Fri Feb 22 12:35:37.704 [conn45] build index test.jstests_js3 { _id: 1 } m30001| Fri Feb 22 12:35:37.705 [conn45] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:37.881 [conn45] JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2) m30001| Fri Feb 22 12:35:37.882 [conn45] assertion 16722 JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2) ns:test.jstests_js3 query:{ $where: function (){ m30001| asdf.asdf.f.s.s(); m30001| } } m30001| Fri Feb 22 12:35:37.882 [conn45] problem detected during query over test.jstests_js3 : { $err: "JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2)", code: 16722 } m30999| Fri Feb 22 12:35:37.882 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16722 JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2) m30001| Fri Feb 22 12:35:37.882 [conn45] end connection 127.0.0.1:33261 (6 connections now open) m30001| Fri Feb 22 12:35:37.883 [conn48] build index test.jstests_js3 { z: 1.0 } m30001| Fri Feb 22 12:35:37.892 [conn48] build index done. scanned 1001 total records. 0.008 secs m30001| Fri Feb 22 12:35:37.892 [conn48] build index test.jstests_js3 { q: 1.0 } m30001| Fri Feb 22 12:35:37.898 [conn48] build index done. scanned 1001 total records. 
0.005 secs m30001| Fri Feb 22 12:35:38.047 [conn4] CMD: validate test.jstests_js3 m30001| Fri Feb 22 12:35:38.047 [conn4] validating index 0: test.jstests_js3.$_id_ m30001| Fri Feb 22 12:35:38.047 [conn4] validating index 1: test.jstests_js3.$z_1 m30001| Fri Feb 22 12:35:38.047 [conn4] validating index 2: test.jstests_js3.$q_1 m30999| Fri Feb 22 12:35:38.048 [conn1] DROP: test.jstests_js3 m30001| Fri Feb 22 12:35:38.048 [conn48] CMD: drop test.jstests_js3 m30001| Fri Feb 22 12:35:38.054 [conn48] build index test.jstests_js3 { _id: 1 } m30001| Fri Feb 22 12:35:38.055 [conn48] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.055 [conn48] info: creating collection test.jstests_js3 on add index m30001| Fri Feb 22 12:35:38.055 [conn48] build index test.jstests_js3 { i: 1.0 } m30001| Fri Feb 22 12:35:38.056 [conn48] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.223 [conn48] JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2) m30001| Fri Feb 22 12:35:38.225 [conn48] assertion 16722 JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2) ns:test.jstests_js3 query:{ $where: function (){ m30001| asdf.asdf.f.s.s(); m30001| } } m30001| Fri Feb 22 12:35:38.225 [conn48] problem detected during query over test.jstests_js3 : { $err: "JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2)", code: 16722 } m30999| Fri Feb 22 12:35:38.226 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16722 JavaScript execution failed: ReferenceError: asdf is not defined near 'asdf.asdf.f' (line 2) m30001| Fri Feb 22 12:35:38.226 [conn48] end connection 127.0.0.1:38005 (5 connections 
now open) m30001| Fri Feb 22 12:35:38.228 [conn47] build index test.jstests_js3 { z: 1.0 } m30001| Fri Feb 22 12:35:38.237 [conn47] build index done. scanned 1001 total records. 0.008 secs m30001| Fri Feb 22 12:35:38.237 [conn47] build index test.jstests_js3 { q: 1.0 } m30001| Fri Feb 22 12:35:38.244 [conn47] build index done. scanned 1001 total records. 0.006 secs m30001| Fri Feb 22 12:35:38.412 [conn4] CMD: validate test.jstests_js3 m30001| Fri Feb 22 12:35:38.412 [conn4] validating index 0: test.jstests_js3.$_id_ m30001| Fri Feb 22 12:35:38.412 [conn4] validating index 1: test.jstests_js3.$i_1 m30001| Fri Feb 22 12:35:38.412 [conn4] validating index 2: test.jstests_js3.$z_1 m30001| Fri Feb 22 12:35:38.413 [conn4] validating index 3: test.jstests_js3.$q_1 m30999| Fri Feb 22 12:35:38.413 [conn1] DROP: test.jstests_js3 m30001| Fri Feb 22 12:35:38.413 [conn47] CMD: drop test.jstests_js3 725ms ******************************************* Test : jstests/mod1.js ... m30999| Fri Feb 22 12:35:38.425 [conn1] DROP: test.mod1 m30001| Fri Feb 22 12:35:38.425 [conn47] CMD: drop test.mod1 m30001| Fri Feb 22 12:35:38.426 [conn47] build index test.mod1 { _id: 1 } m30001| Fri Feb 22 12:35:38.427 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.458 [conn47] build index test.mod1 { a: 1.0 } m30001| Fri Feb 22 12:35:38.459 [conn47] build index done. scanned 6 total records. 0.001 secs 39ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_small_large.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/compact2.js ******************************************* Test : jstests/bulk_insert.js ... m30999| Fri Feb 22 12:35:38.463 [conn1] DROP: test.bulkInsertTest m30001| Fri Feb 22 12:35:38.464 [conn47] CMD: drop test.bulkInsertTest Inserting 10 bulks of 40 documents. 
m30001| Fri Feb 22 12:35:38.466 [conn47] build index test.bulkInsertTest { _id: 1 } m30001| Fri Feb 22 12:35:38.467 [conn47] build index done. scanned 0 total records. 0 secs 13ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geoe.js ******************************************* Test : jstests/numberint.js ... m30999| Fri Feb 22 12:35:38.480 [conn1] DROP: test.numberint m30001| Fri Feb 22 12:35:38.480 [conn47] CMD: drop test.numberint m30001| Fri Feb 22 12:35:38.481 [conn47] build index test.numberint { _id: 1 } m30001| Fri Feb 22 12:35:38.482 [conn47] build index done. scanned 0 total records. 0 secs 10ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2twofields.js !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/apply_ops1.js ******************************************* Test : jstests/maxscan.js ... m30999| Fri Feb 22 12:35:38.488 [conn1] DROP: test.maxscan m30001| Fri Feb 22 12:35:38.488 [conn47] CMD: drop test.maxscan m30001| Fri Feb 22 12:35:38.489 [conn47] build index test.maxscan { _id: 1 } m30001| Fri Feb 22 12:35:38.490 [conn47] build index done. scanned 0 total records. 0 secs 15ms ******************************************* Test : jstests/dbcase2.js ... m30999| Fri Feb 22 12:35:38.504 [conn1] couldn't find database [dbcase2test_dbnamea] in config db m30999| Fri Feb 22 12:35:38.505 [conn1] put [dbcase2test_dbnamea] on: shard0000:localhost:30000 m30999| Fri Feb 22 12:35:38.506 [conn1] couldn't find database [dbcase2test_dbnameA] in config db m30999| Fri Feb 22 12:35:38.507 [conn1] warning: error creating initial database config information :: caused by :: can't have 2 databases that just differ on case have: dbcase2test_dbnamea want to add: dbcase2test_dbnameA 8ms ******************************************* Test : jstests/shell1.js ... 
all2 array1 array4 array_match1 arrayfind1 basic7 basic8 basica bigkeysidxtest bulkInsertTest collModTest covered_compound_1 covered_geo_1 covered_geo_2 covered_negative_1 covered_simple_id cursor3 datasize2 datasize3 date1 dbref1a dbref1b dbref2a dbref2b distinct1 distinct2 distinct_index1 distinct_speed1 dropIndex drop_undefined.js ed.db.index6 ed_db_cursor2_ccvsal ed_db_cursor4_cfmfs ed_db_cursor5_bwsi ed_db_cursor_mi ed_db_index7 ed_db_update2 embeddedIndexTest embeddedIndexTest2 error4 error5 eval1 eval3 eval4 eval5 eval6 exists2 explain1 explain2 factories find6 find6a find7 find_and_modify find_and_modify2 find_and_modify3 find_and_modify_server6226 find_and_modify_server6254 find_and_modify_server6582 find_and_modify_server6993 find_and_modify_where fm1 fm2 fm3 fm4 foo_basic9 fs.chunks group1 group2 group3 group4 group5 idhack in1 in2 in5 inc1 inc2 inc3 index3 index4 index5 index_big1 index_check2 index_check3 index_check5 index_check6 index_check7 index_diag index_many2 index_maxkey index_sparse2 jstest_updateh jstests_all4 jstests_all5 jstests_andor jstests_array_match2 jstests_array_match3 jstests_arrayfind8 jstests_arrayfind9 jstests_arrayfinda jstests_commands jstests_count jstests_count9 jstests_counta jstests_countb jstests_countc jstests_coveredIndex1 jstests_coveredIndex2 jstests_coveredIndex4 jstests_coveredIndex5 jstests_cursorb jstests_datasize jstests_distinct3 jstests_drop jstests_drop3.sub jstests_error2 jstests_exists jstests_exists3 jstests_exists4 jstests_exists5 jstests_exists6 jstests_explain3 jstests_explain4 jstests_explain5 jstests_explain6 jstests_explain7 jstests_explainb jstests_find8 jstests_find9 jstests_finda jstests_group6 jstests_group7 jstests_hint1 jstests_in3 jstests_in4 jstests_in6 jstests_inb jstests_indexi jstests_indexj jstests_indexk jstests_indexl jstests_indexm jstests_indexn jstests_indexo jstests_indexr jstests_indexs jstests_indexy jstests_indexz jstests_js2 jstests_js8 jstests_js9 jstests_mr_killop jstests_multi jstests_ne2 jstests_not2 jstests_numberlong2 jstests_numberlong3 jstests_or1 jstests_or8 jstests_or9 jstests_ora jstests_orb jstests_orc jstests_ord jstests_ore jstests_orf jstests_org jstests_orh jstests_ori jstests_orp jstests_orq jstests_orr jstests_pullall jstests_pushall jstests_queryoptimizer4 jstests_queryoptimizer6 jstests_queryoptimizer7 jstests_regex jstests_regexa jstests_regexb jstests_remove9 jstests_removea jstests_rename_b jstests_set7 jstests_shellspawn jstests_showdiskloc jstests_slow_in1 jstests_sort8 jstests_sort9 jstests_sorta jstests_sortc jstests_sortd jstests_sortm jstests_type2 jstests_type3 jstests_unique2 jstests_update3 jstests_update_arraymatch8 jstests_updatej jstests_updatek jstests_updatel many2 max_message_size maxscan minmaxtest mod1 mr2 mr3 mr4 mr5 mr_bigobject_out mr_index mr_merge mr_merge2 mr_merge2_out mr_merge_out mr_outreduce mr_outreduce2 mr_outreduce2_out mr_outreduce_out multidim ne1 not1 null1 numberint objNestTest objid1 objid2 objid3 objid5 otherthings otherthings3 proj_key1 pull2 pull_remove1 push query1 queryoptimizer2 ref2 ref4a ref4b regex2 regex3 regex8 regex9 regex_embed1 remove8 remove_justone removetest rename_stayTemp_dest server5346 server5441 set1 set2 set3 set4 set5 set6 slice1 somecollection sort10 sort5 sort_numeric sps system.indexes system.js test text1 text2 text3 text4 text5 text_blog text_mix text_parition1 text_phrase text_proj text_spanish things things3 ts1 type1 update5 update6 update7 update9 update_addToSet1 update_arraymatch1 updatea where1 where2 where3 where4 61ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2nopoints.js ******************************************* Test : jstests/remove4.js ... m30999| Fri Feb 22 12:35:38.572 [conn1] DROP: test.remove4 m30001| Fri Feb 22 12:35:38.573 [conn47] CMD: drop test.remove4 m30001| Fri Feb 22 12:35:38.573 [conn47] build index test.remove4 { _id: 1 } m30001| Fri Feb 22 12:35:38.574 [conn47] build index done. scanned 0 total records. 0 secs 4ms >>>>>>>>>>>>>>> skipping jstests/_lodeRunner.js ******************************************* Test : jstests/sort4.js ... m30999| Fri Feb 22 12:35:38.576 [conn1] DROP: test.sort4 m30001| Fri Feb 22 12:35:38.576 [conn47] CMD: drop test.sort4 m30001| Fri Feb 22 12:35:38.576 [conn47] build index test.sort4 { _id: 1 } m30001| Fri Feb 22 12:35:38.577 [conn47] build index done. scanned 0 total records. 0 secs { "name" : 1 } AB,AC,BB,BD { "prename" : 1 } AB,BB,AC,BD { "name" : 1, "prename" : 1 } AB,AC,BB,BD { "name" : 1, "prename" : 1 } A,AB,AC,BB,BD { "name" : 1, "prename" : 1 } A,AB,AC,BB,BD,C m30001| Fri Feb 22 12:35:38.581 [conn47] build index test.sort4 { name: 1.0, prename: 1.0 } m30001| Fri Feb 22 12:35:38.582 [conn47] build index done. scanned 6 total records. 
0 secs { "name" : 1, "prename" : 1 } A,AB,AC,BB,BD,C m30001| Fri Feb 22 12:35:38.583 [conn4] CMD: dropIndexes test.sort4 m30001| Fri Feb 22 12:35:38.586 [conn47] build index test.sort4 { name: 1.0 } m30001| Fri Feb 22 12:35:38.587 [conn47] build index done. scanned 6 total records. 0 secs { "name" : 1, "prename" : 1 } A,AB,AC,BB,BD,C 13ms ******************************************* Test : jstests/explainc.js ... m30999| Fri Feb 22 12:35:38.595 [conn1] DROP: test.jstests_explainc m30001| Fri Feb 22 12:35:38.595 [conn47] CMD: drop test.jstests_explainc m30001| Fri Feb 22 12:35:38.596 [conn47] build index test.jstests_explainc { _id: 1 } m30001| Fri Feb 22 12:35:38.596 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.596 [conn47] build index test.jstests_explainc { a: 1.0 } m30001| Fri Feb 22 12:35:38.597 [conn47] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:35:38.602 [conn4] CMD: dropIndexes test.jstests_explainc m30001| Fri Feb 22 12:35:38.604 [conn47] build index test.jstests_explainc { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:38.605 [conn47] build index done. scanned 1 total records. 0.001 secs m30999| Fri Feb 22 12:35:38.607 [conn1] DROP: test.jstests_explainc m30001| Fri Feb 22 12:35:38.607 [conn47] CMD: drop test.jstests_explainc m30001| Fri Feb 22 12:35:38.613 [conn47] build index test.jstests_explainc { _id: 1 } m30001| Fri Feb 22 12:35:38.613 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.613 [conn47] info: creating collection test.jstests_explainc on add index m30001| Fri Feb 22 12:35:38.613 [conn47] build index test.jstests_explainc { a: 1.0 } m30001| Fri Feb 22 12:35:38.614 [conn47] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:35:38.616 [conn1] DROP: test.jstests_explainc m30001| Fri Feb 22 12:35:38.616 [conn47] CMD: drop test.jstests_explainc m30001| Fri Feb 22 12:35:38.621 [conn47] build index test.jstests_explainc { _id: 1 } m30001| Fri Feb 22 12:35:38.621 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.621 [conn47] info: creating collection test.jstests_explainc on add index m30001| Fri Feb 22 12:35:38.621 [conn47] build index test.jstests_explainc { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:38.622 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.622 [conn47] build index test.jstests_explainc { b: 1.0, a: 1.0 } m30001| Fri Feb 22 12:35:38.623 [conn47] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:38.625 [conn1] DROP: test.jstests_explainc m30001| Fri Feb 22 12:35:38.626 [conn47] CMD: drop test.jstests_explainc m30001| Fri Feb 22 12:35:38.633 [conn47] build index test.jstests_explainc { _id: 1 } m30001| Fri Feb 22 12:35:38.633 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.633 [conn47] info: creating collection test.jstests_explainc on add index m30001| Fri Feb 22 12:35:38.633 [conn47] build index test.jstests_explainc { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:38.634 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.634 [conn47] build index test.jstests_explainc { b: 1.0 } m30001| Fri Feb 22 12:35:38.635 [conn47] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:38.648 [conn47] build index test.jstests_explainc { a: 1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:35:38.650 [conn47] build index done. scanned 30 total records. 0.001 secs 77ms ******************************************* Test : jstests/evale.js ... 
m30999| Fri Feb 22 12:35:38.666 [conn1] DROP: test.jstests_evale
m30001| Fri Feb 22 12:35:38.667 [conn47] CMD: drop test.jstests_evale
82ms
******************************************* Test : jstests/sortl.js ...
m30999| Fri Feb 22 12:35:38.748 [conn1] DROP: test.jstests_sortl
m30001| Fri Feb 22 12:35:38.748 [conn47] CMD: drop test.jstests_sortl
m30001| Fri Feb 22 12:35:38.749 [conn47] build index test.jstests_sortl { _id: 1 }
m30001| Fri Feb 22 12:35:38.749 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:38.749 [conn47] info: creating collection test.jstests_sortl on add index
m30001| Fri Feb 22 12:35:38.749 [conn47] build index test.jstests_sortl { a: 1.0 }
m30001| Fri Feb 22 12:35:38.750 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:38.750 [conn47] build index test.jstests_sortl { b: 1.0 }
m30001| Fri Feb 22 12:35:38.751 [conn47] build index done. scanned 0 total records. 0 secs
238ms
******************************************* Test : jstests/validate_user_documents.js ...
m30999| Fri Feb 22 12:35:38.992 [conn1] couldn't find database [validate_user_documents] in config db
m30999| Fri Feb 22 12:35:38.994 [conn1] put [validate_user_documents] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:38.994 [conn1] DROP DATABASE: validate_user_documents
m30999| Fri Feb 22 12:35:38.994 [conn1] erased database validate_user_documents from local registry
m30999| Fri Feb 22 12:35:38.994 [conn1] DBConfig::dropDatabase: validate_user_documents
m30999| Fri Feb 22 12:35:38.994 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:38-5127661a881c8e7453916058", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536538994), what: "dropDatabase.start", ns: "validate_user_documents", details: {} }
m30999| Fri Feb 22 12:35:38.995 [conn1] DBConfig::dropDatabase: validate_user_documents dropped sharded collections: 0
m30000| Fri Feb 22 12:35:38.995 [conn3] dropDatabase validate_user_documents starting
m30000| Fri Feb 22 12:35:39.033 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:35:39.034 [conn3] dropDatabase validate_user_documents finished
m30999| Fri Feb 22 12:35:39.034 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:39-5127661b881c8e7453916059", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536539034), what: "dropDatabase", ns: "validate_user_documents", details: {} }
m30999| Fri Feb 22 12:35:39.035 [conn1] couldn't find database [validate_user_documents] in config db
m30999| Fri Feb 22 12:35:39.036 [conn1] put [validate_user_documents] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:35:39.037 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/validate_user_documents.ns, filling with zeroes...
m30000| Fri Feb 22 12:35:39.037 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/validate_user_documents.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:35:39.037 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/validate_user_documents.0, filling with zeroes...
m30000| Fri Feb 22 12:35:39.037 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/validate_user_documents.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:35:39.038 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/validate_user_documents.1, filling with zeroes...
m30000| Fri Feb 22 12:35:39.038 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/validate_user_documents.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:35:39.042 [conn6] build index validate_user_documents.system.users { _id: 1 }
m30000| Fri Feb 22 12:35:39.042 [conn6] build index done. scanned 0 total records. 0 secs
m30000| Fri Feb 22 12:35:39.043 [conn6] build index validate_user_documents.system.users { user: 1, userSource: 1 }
m30000| Fri Feb 22 12:35:39.044 [conn6] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:35:39.049 [conn1] DROP DATABASE: validate_user_documents
m30999| Fri Feb 22 12:35:39.049 [conn1] erased database validate_user_documents from local registry
m30999| Fri Feb 22 12:35:39.051 [conn1] DBConfig::dropDatabase: validate_user_documents
m30999| Fri Feb 22 12:35:39.051 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:39-5127661b881c8e745391605a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536539051), what: "dropDatabase.start", ns: "validate_user_documents", details: {} }
m30999| Fri Feb 22 12:35:39.051 [conn1] DBConfig::dropDatabase: validate_user_documents dropped sharded collections: 0
m30000| Fri Feb 22 12:35:39.051 [conn3] dropDatabase validate_user_documents starting
m30000| Fri Feb 22 12:35:39.092 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:35:39.095 [conn3] dropDatabase validate_user_documents finished
m30999| Fri Feb 22 12:35:39.095 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:39-5127661b881c8e745391605b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536539095), what: "dropDatabase", ns: "validate_user_documents", details: {} }
110ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geog.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/check_shard_index.js
******************************************* Test : jstests/js1.js ...
m30001| Fri Feb 22 12:35:39.103 [conn47] build index test.jstests_js1 { _id: 1 }
m30001| Fri Feb 22 12:35:39.104 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:39.134 [conn4] CMD: validate test.jstests_js1
m30001| Fri Feb 22 12:35:39.134 [conn4] validating index 0: test.jstests_js1.$_id_
38ms
******************************************* Test : jstests/rename3.js ...
m30999| Fri Feb 22 12:35:39.142 [conn1] DROP: test.rename3a
m30001| Fri Feb 22 12:35:39.142 [conn47] CMD: drop test.rename3a
m30999| Fri Feb 22 12:35:39.142 [conn1] DROP: test.rename3b
m30001| Fri Feb 22 12:35:39.142 [conn47] CMD: drop test.rename3b
m30001| Fri Feb 22 12:35:39.143 [conn47] build index test.rename3a { _id: 1 }
m30001| Fri Feb 22 12:35:39.143 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:39.144 [conn47] build index test.rename3b { _id: 1 }
m30001| Fri Feb 22 12:35:39.144 [conn47] build index done. scanned 0 total records. 0 secs
24ms
******************************************* Test : jstests/date2.js ...
m30999| Fri Feb 22 12:35:39.159 [conn1] DROP: test.jstests_date2
m30001| Fri Feb 22 12:35:39.159 [conn47] CMD: drop test.jstests_date2
m30001| Fri Feb 22 12:35:39.160 [conn47] build index test.jstests_date2 { _id: 1 }
m30001| Fri Feb 22 12:35:39.160 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:39.160 [conn47] info: creating collection test.jstests_date2 on add index
m30001| Fri Feb 22 12:35:39.160 [conn47] build index test.jstests_date2 { a: 1.0 }
m30001| Fri Feb 22 12:35:39.161 [conn47] build index done. scanned 0 total records. 0 secs
3ms
******************************************* Test : jstests/update_arraymatch2.js ...
m30999| Fri Feb 22 12:35:39.169 [conn1] DROP: test.update_arraymatch2
m30001| Fri Feb 22 12:35:39.169 [conn47] CMD: drop test.update_arraymatch2
m30001| Fri Feb 22 12:35:39.170 [conn47] build index test.update_arraymatch2 { _id: 1 }
m30001| Fri Feb 22 12:35:39.170 [conn47] build index done. scanned 0 total records. 0 secs
11ms
******************************************* Test : jstests/repair.js ...
m30999| Fri Feb 22 12:35:39.173 [conn1] couldn't find database [repair_test1] in config db
m30999| Fri Feb 22 12:35:39.175 [conn1] put [repair_test1] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:39.175 [conn1] DROP: repair_test1.jstests_repair
m30000| Fri Feb 22 12:35:39.175 [conn6] CMD: drop repair_test1.jstests_repair
m30000| Fri Feb 22 12:35:39.176 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/repair_test1.ns, filling with zeroes...
m30000| Fri Feb 22 12:35:39.176 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/repair_test1.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:35:39.176 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/repair_test1.0, filling with zeroes...
m30000| Fri Feb 22 12:35:39.176 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/repair_test1.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:35:39.176 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/repair_test1.1, filling with zeroes...
m30000| Fri Feb 22 12:35:39.177 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/repair_test1.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:35:39.180 [conn6] build index repair_test1.jstests_repair { _id: 1 }
m30000| Fri Feb 22 12:35:39.181 [conn6] build index done. scanned 0 total records. 0.001 secs
m30000| Fri Feb 22 12:35:39.181 [conn10] repairDatabase repair_test1
m30000| Fri Feb 22 12:35:39.181 [conn10] repair_test1 repairDatabase repair_test1
m30000| Fri Feb 22 12:35:39.242 [conn10] removeJournalFiles
m30000| Fri Feb 22 12:35:39.244 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/_tmp_repairDatabase_0/repair_test1.ns, filling with zeroes...
m30000| Fri Feb 22 12:35:39.244 [FileAllocator] creating directory /data/db/sharding_passthrough0/_tmp_repairDatabase_0/_tmp
m30000| Fri Feb 22 12:35:39.244 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/_tmp_repairDatabase_0/repair_test1.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:35:39.244 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/_tmp_repairDatabase_0/repair_test1.0, filling with zeroes...
m30000| Fri Feb 22 12:35:39.244 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/_tmp_repairDatabase_0/repair_test1.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:35:39.245 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/_tmp_repairDatabase_0/repair_test1.1, filling with zeroes...
m30000| Fri Feb 22 12:35:39.245 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/_tmp_repairDatabase_0/repair_test1.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:35:39.298 [conn10] build index repair_test1.jstests_repair { _id: 1 }
m30000| Fri Feb 22 12:35:39.299 [conn10] fastBuildIndex dupsToDrop:0
m30000| Fri Feb 22 12:35:39.299 [conn10] build index done. scanned 1 total records. 0.001 secs
m30000| Fri Feb 22 12:35:39.367 [conn10] removeJournalFiles
m30000| Fri Feb 22 12:35:39.431 [conn10] command repair_test1.$cmd command: { repairDatabase: 1.0 } ntoreturn:1 keyUpdates:0 locks(micros) W:249490 reslen:37 249ms
m30000| Fri Feb 22 12:35:39.433 [conn10] CMD: validate repair_test1.jstests_repair
m30000| Fri Feb 22 12:35:39.433 [conn10] validating index 0: repair_test1.jstests_repair.$_id_
262ms
******************************************* Test : jstests/count2.js ...
m30999| Fri Feb 22 12:35:39.441 [conn1] DROP: test.count2
m30001| Fri Feb 22 12:35:39.441 [conn47] CMD: drop test.count2
m30001| Fri Feb 22 12:35:39.443 [conn47] build index test.count2 { _id: 1 }
m30001| Fri Feb 22 12:35:39.444 [conn47] build index done. scanned 0 total records. 0.001 secs
116ms
******************************************* Test : jstests/explaina.js ...
m30999| Fri Feb 22 12:35:39.554 [conn1] DROP: test.jstests_explaina
m30001| Fri Feb 22 12:35:39.555 [conn47] CMD: drop test.jstests_explaina
m30001| Fri Feb 22 12:35:39.556 [conn47] build index test.jstests_explaina { _id: 1 }
m30001| Fri Feb 22 12:35:39.557 [conn47] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:39.557 [conn47] info: creating collection test.jstests_explaina on add index
m30001| Fri Feb 22 12:35:39.557 [conn47] build index test.jstests_explaina { a: 1.0 }
m30001| Fri Feb 22 12:35:39.558 [conn47] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:39.558 [conn47] build index test.jstests_explaina { b: 1.0 }
m30001| Fri Feb 22 12:35:39.559 [conn47] build index done. scanned 0 total records. 0 secs
196ms
******************************************* Test : jstests/nin.js ...
m30999| Fri Feb 22 12:35:39.753 [conn1] DROP: test.jstests_nin
m30001| Fri Feb 22 12:35:39.754 [conn47] CMD: drop test.jstests_nin
m30001| Fri Feb 22 12:35:39.754 [conn47] build index test.jstests_nin { _id: 1 }
m30001| Fri Feb 22 12:35:39.755 [conn47] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:39.771 [conn1] DROP: test.jstests_nin
m30001| Fri Feb 22 12:35:39.771 [conn47] CMD: drop test.jstests_nin
m30001| Fri Feb 22 12:35:39.776 [conn47] build index test.jstests_nin { _id: 1 }
m30001| Fri Feb 22 12:35:39.777 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:39.777 [conn47] info: creating collection test.jstests_nin on add index
m30001| Fri Feb 22 12:35:39.777 [conn47] build index test.jstests_nin { a: 1.0 }
m30001| Fri Feb 22 12:35:39.777 [conn47] build index done. scanned 0 total records. 0 secs
47ms
******************************************* Test : jstests/sort6.js ...
m30999| Fri Feb 22 12:35:39.794 [conn1] DROP: test.sort6
m30001| Fri Feb 22 12:35:39.795 [conn47] CMD: drop test.sort6
m30001| Fri Feb 22 12:35:39.796 [conn47] build index test.sort6 { _id: 1 }
m30001| Fri Feb 22 12:35:39.797 [conn47] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:39.798 [conn47] build index test.sort6 { c: 1.0 }
m30001| Fri Feb 22 12:35:39.799 [conn47] build index done. scanned 3 total records. 0 secs
m30999| Fri Feb 22 12:35:39.800 [conn1] DROP: test.sort6
m30001| Fri Feb 22 12:35:39.800 [conn47] CMD: drop test.sort6
m30001| Fri Feb 22 12:35:39.805 [conn47] build index test.sort6 { _id: 1 }
m30001| Fri Feb 22 12:35:39.806 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:39.807 [conn47] build index test.sort6 { c: 1.0 }
m30001| Fri Feb 22 12:35:39.807 [conn47] build index done. scanned 3 total records. 0 secs
14ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2intersection.js
******************************************* Test : jstests/explain9.js ...
m30999| Fri Feb 22 12:35:39.813 [conn1] DROP: test.jstests_explain9
m30001| Fri Feb 22 12:35:39.813 [conn47] CMD: drop test.jstests_explain9
m30001| Fri Feb 22 12:35:39.814 [conn47] build index test.jstests_explain9 { _id: 1 }
m30001| Fri Feb 22 12:35:39.815 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:39.815 [conn47] info: creating collection test.jstests_explain9 on add index
m30001| Fri Feb 22 12:35:39.815 [conn47] build index test.jstests_explain9 { a: 1.0 }
m30001| Fri Feb 22 12:35:39.815 [conn47] build index done. scanned 0 total records. 0 secs
9ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_sort1.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2nearwithin.js
******************************************* Test : jstests/remove6.js ...
m30999| Fri Feb 22 12:35:39.818 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:39.819 [conn47] CMD: drop test.remove6
m30999| Fri Feb 22 12:35:39.819 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:39.819 [conn47] CMD: drop test.remove6
m30001| Fri Feb 22 12:35:39.820 [conn47] build index test.remove6 { _id: 1 }
m30001| Fri Feb 22 12:35:39.821 [conn47] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:39.913 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:39.913 [conn47] CMD: drop test.remove6
m30001| Fri Feb 22 12:35:39.917 [conn47] build index test.remove6 { _id: 1 }
m30001| Fri Feb 22 12:35:39.919 [conn47] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:39.991 [conn47] build index test.remove6 { x: 1.0 }
m30001| Fri Feb 22 12:35:39.997 [conn47] build index done. scanned 1000 total records. 0.006 secs
m30999| Fri Feb 22 12:35:40.044 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:40.044 [conn47] CMD: drop test.remove6
m30001| Fri Feb 22 12:35:40.049 [conn47] build index test.remove6 { _id: 1 }
m30001| Fri Feb 22 12:35:40.050 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:40.124 [conn47] build index test.remove6 { tags: 1.0 }
m30001| Fri Feb 22 12:35:40.181 [conn47] build index done. scanned 1000 total records. 0.057 secs
m30999| Fri Feb 22 12:35:40.251 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:40.251 [conn47] CMD: drop test.remove6
m30001| Fri Feb 22 12:35:40.259 [conn47] build index test.remove6 { _id: 1 }
m30001| Fri Feb 22 12:35:40.260 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:40.766 [conn47] remove test.remove6 query: { tags: { $in: [ "a", "c" ] } } ndeleted:5000 keyUpdates:0 numYields: 1 locks(micros) w:173810 116ms
m30999| Fri Feb 22 12:35:40.768 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:40.768 [conn47] CMD: drop test.remove6
m30001| Fri Feb 22 12:35:40.772 [conn47] build index test.remove6 { _id: 1 }
m30001| Fri Feb 22 12:35:40.773 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:41.017 [conn47] build index test.remove6 { x: 1.0 }
m30001| Fri Feb 22 12:35:41.036 [conn47] build index done. scanned 5000 total records. 0.019 secs
m30001| Fri Feb 22 12:35:41.142 [conn47] remove test.remove6 query: { tags: { $in: [ "a", "c" ] } } ndeleted:5000 keyUpdates:0 locks(micros) w:103262 103ms
m30999| Fri Feb 22 12:35:41.143 [conn1] DROP: test.remove6
m30001| Fri Feb 22 12:35:41.143 [conn47] CMD: drop test.remove6
m30001| Fri Feb 22 12:35:41.148 [conn47] build index test.remove6 { _id: 1 }
m30001| Fri Feb 22 12:35:41.151 [conn47] build index done. scanned 0 total records. 0.003 secs
m30001| Fri Feb 22 12:35:41.503 [conn47] build index test.remove6 { tags: 1.0 }
m30999| Fri Feb 22 12:35:41.758 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127661d881c8e745391605c
m30999| Fri Feb 22 12:35:41.760 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:35:41.803 [conn47] build index done. scanned 5000 total records. 0.299 secs
m30001| Fri Feb 22 12:35:41.803 [conn47] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:299976 299ms
m30001| Fri Feb 22 12:35:42.155 [conn47] remove test.remove6 query: { tags: { $in: [ "a", "c" ] } } ndeleted:5000 keyUpdates:0 numYields: 3 locks(micros) w:640408 351ms
2339ms
******************************************* Test : jstests/distinct_index2.js ...
m30999| Fri Feb 22 12:35:42.160 [conn1] DROP: test.distinct_index2
m30001| Fri Feb 22 12:35:42.160 [conn47] CMD: drop test.distinct_index2
m30001| Fri Feb 22 12:35:42.161 [conn47] build index test.distinct_index2 { _id: 1 }
m30001| Fri Feb 22 12:35:42.162 [conn47] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:35:42.163 [conn47] info: creating collection test.distinct_index2 on add index
m30001| Fri Feb 22 12:35:42.163 [conn47] build index test.distinct_index2 { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:42.164 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:42.164 [conn47] build index test.distinct_index2 { c: 1.0 }
m30001| Fri Feb 22 12:35:42.165 [conn47] build index done. scanned 0 total records. 0 secs
202ms
******************************************* Test : jstests/basic4.js ...
m30999| Fri Feb 22 12:35:42.371 [conn1] DROP: test.basic4
m30001| Fri Feb 22 12:35:42.371 [conn47] CMD: drop test.basic4
m30001| Fri Feb 22 12:35:42.372 [conn47] build index test.basic4 { _id: 1 }
m30001| Fri Feb 22 12:35:42.373 [conn47] build index done. scanned 0 total records. 0.001 secs
17ms
******************************************* Test : jstests/find4.js ...
m30999| Fri Feb 22 12:35:42.376 [conn1] DROP: test.find4
m30001| Fri Feb 22 12:35:42.377 [conn47] CMD: drop test.find4
m30001| Fri Feb 22 12:35:42.377 [conn47] build index test.find4 { _id: 1 }
m30001| Fri Feb 22 12:35:42.378 [conn47] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:35:42.379 [conn1] DROP: test.find4
m30001| Fri Feb 22 12:35:42.379 [conn47] CMD: drop test.find4
m30001| Fri Feb 22 12:35:42.384 [conn47] build index test.find4 { _id: 1 }
m30001| Fri Feb 22 12:35:42.384 [conn47] build index done. scanned 0 total records. 0 secs
10ms
******************************************* Test : jstests/id1.js ...
m30999| Fri Feb 22 12:35:42.386 [conn1] DROP: test.id1
m30001| Fri Feb 22 12:35:42.386 [conn47] CMD: drop test.id1
m30001| Fri Feb 22 12:35:42.387 [conn47] build index test.id1 { _id: 1 }
m30001| Fri Feb 22 12:35:42.388 [conn47] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/ina.js ...
m30999| Fri Feb 22 12:35:42.390 [conn1] DROP: test.jstests_ina
m30001| Fri Feb 22 12:35:42.391 [conn47] CMD: drop test.jstests_ina
m30001| Fri Feb 22 12:35:42.391 [conn47] build index test.jstests_ina { _id: 1 }
m30001| Fri Feb 22 12:35:42.392 [conn47] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:42.392 [conn47] assertion 15881 $elemMatch not allowed within $in ns:test.jstests_ina query:{ a: { $in: [ { $elemMatch: { b: 1.0 } } ] } }
m30001| Fri Feb 22 12:35:42.392 [conn47] problem detected during query over test.jstests_ina : { $err: "$elemMatch not allowed within $in", code: 15881 }
m30999| Fri Feb 22 12:35:42.392 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 15881 $elemMatch not allowed within $in
m30001| Fri Feb 22 12:35:42.392 [conn47] end connection 127.0.0.1:47676 (4 connections now open)
m30001| Fri Feb 22 12:35:42.393 [initandlisten] connection accepted from 127.0.0.1:47888 #49 (6 connections now open)
m30001| Fri Feb 22 12:35:42.394 [conn49] assertion 15882 $elemMatch not allowed within $in ns:test.jstests_ina query:{ a: { $not: { $in: [ { $elemMatch: { b: 1.0 } } ] } } }
m30001| Fri Feb 22 12:35:42.394 [conn49] problem detected during query over test.jstests_ina : { $err: "$elemMatch not allowed within $in", code: 15882 }
m30999| Fri Feb 22 12:35:42.394 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 15882 $elemMatch not allowed within $in
m30001| Fri Feb 22 12:35:42.394 [conn49] end connection 127.0.0.1:47888 (5 connections now open)
m30001| Fri Feb 22 12:35:42.395 [initandlisten] connection accepted from 127.0.0.1:50114 #50 (6 connections now open)
m30001| Fri Feb 22 12:35:42.396 [conn50] assertion 15882 $elemMatch not allowed within $in ns:test.jstests_ina query:{ a: { $nin: [ { $elemMatch: { b: 1.0 } } ] } }
m30001| Fri Feb 22 12:35:42.396 [conn50] problem detected during query over test.jstests_ina : { $err: "$elemMatch not allowed within $in", code: 15882 }
m30999| Fri Feb 22 12:35:42.396 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 15882 $elemMatch not allowed within $in
m30001| Fri Feb 22 12:35:42.396 [conn50] end connection 127.0.0.1:50114 (5 connections now open)
m30001| Fri Feb 22 12:35:42.397 [initandlisten] connection accepted from 127.0.0.1:40254 #51 (6 connections now open)
m30001| Fri Feb 22 12:35:42.398 [conn51] assertion 15882 $elemMatch not allowed within $in ns:test.jstests_ina query:{ a: { $not: { $nin: [ { $elemMatch: { b: 1.0 } } ] } } }
m30001| Fri Feb 22 12:35:42.398 [conn51] problem detected during query over test.jstests_ina : { $err: "$elemMatch not allowed within $in", code: 15882 }
m30999| Fri Feb 22 12:35:42.398 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 15882 $elemMatch not allowed within $in
m30001| Fri Feb 22 12:35:42.398 [conn51] end connection 127.0.0.1:40254 (5 connections now open)
8ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_near_random2.js
******************************************* Test : jstests/copydb.js ...
m30999| Fri Feb 22 12:35:42.403 [conn1] couldn't find database [copydb-test-a] in config db
m30999| Fri Feb 22 12:35:42.404 [conn1] put [copydb-test-a] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:42.404 [conn1] DROP DATABASE: copydb-test-a
m30999| Fri Feb 22 12:35:42.404 [conn1] erased database copydb-test-a from local registry
m30999| Fri Feb 22 12:35:42.405 [conn1] DBConfig::dropDatabase: copydb-test-a
m30999| Fri Feb 22 12:35:42.405 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:42-5127661e881c8e745391605d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536542405), what: "dropDatabase.start", ns: "copydb-test-a", details: {} }
m30999| Fri Feb 22 12:35:42.406 [conn1] DBConfig::dropDatabase: copydb-test-a dropped sharded collections: 0
m30000| Fri Feb 22 12:35:42.406 [conn3] dropDatabase copydb-test-a starting
m30000| Fri Feb 22 12:35:42.450 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:35:42.451 [conn3] dropDatabase copydb-test-a finished
m30999| Fri Feb 22 12:35:42.451 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:42-5127661e881c8e745391605e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536542451), what: "dropDatabase", ns: "copydb-test-a", details: {} }
m30999| Fri Feb 22 12:35:42.452 [conn1] couldn't find database [copydb-test-b] in config db
m30999| Fri Feb 22 12:35:42.453 [conn1] put [copydb-test-b] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:35:42.453 [conn1] DROP DATABASE: copydb-test-b
m30999| Fri Feb 22 12:35:42.453 [conn1] erased database copydb-test-b from local registry
m30999| Fri Feb 22 12:35:42.455 [conn1] DBConfig::dropDatabase: copydb-test-b
m30999| Fri Feb 22 12:35:42.455 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:42-5127661e881c8e745391605f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536542455), what: "dropDatabase.start", ns: "copydb-test-b", details: {} }
m30999| Fri Feb 22 12:35:42.455 [conn1] DBConfig::dropDatabase: copydb-test-b dropped sharded collections: 0
m30000| Fri Feb 22 12:35:42.455 [conn3] dropDatabase copydb-test-b starting
m30000| Fri Feb 22 12:35:42.499 [conn3] removeJournalFiles
m30000| Fri Feb 22 12:35:42.500 [conn3] dropDatabase copydb-test-b finished
m30999| Fri Feb 22 12:35:42.500 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:35:42-5127661e881c8e7453916060", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536542500), what: "dropDatabase", ns: "copydb-test-b", details: {} }
m30999| Fri Feb 22 12:35:42.501 [conn1] couldn't find database [copydb-test-a] in config db
m30999| Fri Feb 22 12:35:42.502 [conn1] put [copydb-test-a] on: shard0000:localhost:30000
m30001| Fri Feb 22 12:35:42.502 [initandlisten] connection accepted from 127.0.0.1:41268 #52 (5 connections now open)
m30000| Fri Feb 22 12:35:42.503 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/copydb-test-a.ns, filling with zeroes...
m30000| Fri Feb 22 12:35:42.503 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/copydb-test-a.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:35:42.503 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/copydb-test-a.0, filling with zeroes...
m30000| Fri Feb 22 12:35:42.504 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/copydb-test-a.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:35:42.504 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/copydb-test-a.1, filling with zeroes... m30000| Fri Feb 22 12:35:42.504 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/copydb-test-a.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:35:42.507 [conn6] build index copydb-test-a.foo { _id: 1 } m30000| Fri Feb 22 12:35:42.508 [conn6] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:35:42.509 [conn1] couldn't find database [copydb-test-b] in config db m30999| Fri Feb 22 12:35:42.510 [conn1] put [copydb-test-b] on: shard0000:localhost:30000 m30000| Fri Feb 22 12:35:42.512 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/copydb-test-b.ns, filling with zeroes... m30000| Fri Feb 22 12:35:42.512 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/copydb-test-b.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:35:42.512 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/copydb-test-b.0, filling with zeroes... m30000| Fri Feb 22 12:35:42.512 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/copydb-test-b.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:35:42.512 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/copydb-test-b.1, filling with zeroes... m30000| Fri Feb 22 12:35:42.513 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/copydb-test-b.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:35:42.516 [conn6] build index copydb-test-b.foo { _id: 1 } m30000| Fri Feb 22 12:35:42.517 [conn6] fastBuildIndex dupsToDrop:0 m30000| Fri Feb 22 12:35:42.518 [conn6] build index done. scanned 1 total records. 0.001 secs 121ms ******************************************* Test : jstests/arrayfind2.js ... 
m30999| Fri Feb 22 12:35:42.520 [conn1] DROP: test.arrayfind2 m30001| Fri Feb 22 12:35:42.521 [conn52] CMD: drop test.arrayfind2 m30001| Fri Feb 22 12:35:42.521 [conn52] build index test.arrayfind2 { _id: 1 } m30001| Fri Feb 22 12:35:42.522 [conn52] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.525 [conn52] assertion 13020 with $all, can't mix $elemMatch and others ns:test.arrayfind2 query:{ a: { $all: [ 1.0, { $elemMatch: { x: 3.0 } } ] } } m30001| Fri Feb 22 12:35:42.525 [conn52] ntoskip:0 ntoreturn:-1 m30001| Fri Feb 22 12:35:42.525 [conn52] problem detected during query over test.arrayfind2 : { $err: "with $all, can't mix $elemMatch and others", code: 13020 } m30999| Fri Feb 22 12:35:42.525 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13020 with $all, can't mix $elemMatch and others m30001| Fri Feb 22 12:35:42.525 [conn52] end connection 127.0.0.1:41268 (4 connections now open) m30001| Fri Feb 22 12:35:42.526 [initandlisten] connection accepted from 127.0.0.1:35347 #53 (5 connections now open) m30001| Fri Feb 22 12:35:42.527 [conn53] assertion 13020 with $all, can't mix $elemMatch and others ns:test.arrayfind2 query:{ a: { $all: [ /a/, { $elemMatch: { x: 3.0 } } ] } } m30001| Fri Feb 22 12:35:42.527 [conn53] ntoskip:0 ntoreturn:-1 m30001| Fri Feb 22 12:35:42.527 [conn53] problem detected during query over test.arrayfind2 : { $err: "with $all, can't mix $elemMatch and others", code: 13020 } m30999| Fri Feb 22 12:35:42.527 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, 
errored: false } :: caused by :: 13020 with $all, can't mix $elemMatch and others m30001| Fri Feb 22 12:35:42.527 [conn53] end connection 127.0.0.1:35347 (4 connections now open) m30001| Fri Feb 22 12:35:42.528 [initandlisten] connection accepted from 127.0.0.1:37916 #54 (5 connections now open) m30001| Fri Feb 22 12:35:42.528 [conn54] build index test.arrayfind2 { a: 1.0 } m30001| Fri Feb 22 12:35:42.529 [conn54] build index done. scanned 3 total records. 0 secs m30001| Fri Feb 22 12:35:42.532 [conn54] assertion 13020 with $all, can't mix $elemMatch and others ns:test.arrayfind2 query:{ a: { $all: [ 1.0, { $elemMatch: { x: 3.0 } } ] } } m30001| Fri Feb 22 12:35:42.532 [conn54] ntoskip:0 ntoreturn:-1 m30001| Fri Feb 22 12:35:42.532 [conn54] problem detected during query over test.arrayfind2 : { $err: "with $all, can't mix $elemMatch and others", code: 13020 } m30999| Fri Feb 22 12:35:42.532 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13020 with $all, can't mix $elemMatch and others m30001| Fri Feb 22 12:35:42.532 [conn54] end connection 127.0.0.1:37916 (4 connections now open) m30001| Fri Feb 22 12:35:42.533 [initandlisten] connection accepted from 127.0.0.1:48125 #55 (5 connections now open) m30001| Fri Feb 22 12:35:42.533 [conn55] assertion 13020 with $all, can't mix $elemMatch and others ns:test.arrayfind2 query:{ a: { $all: [ /a/, { $elemMatch: { x: 3.0 } } ] } } m30001| Fri Feb 22 12:35:42.533 [conn55] ntoskip:0 ntoreturn:-1 m30001| Fri Feb 22 12:35:42.533 [conn55] problem detected during query over test.arrayfind2 : { $err: "with $all, can't mix $elemMatch and others", code: 13020 } m30999| Fri Feb 22 12:35:42.533 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state 
is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13020 with $all, can't mix $elemMatch and others m30001| Fri Feb 22 12:35:42.533 [conn55] end connection 127.0.0.1:48125 (4 connections now open) m30001| Fri Feb 22 12:35:42.534 [initandlisten] connection accepted from 127.0.0.1:63547 #56 (5 connections now open) m30001| Fri Feb 22 12:35:42.535 [conn56] build index test.arrayfind2 { a.x: 1.0 } m30001| Fri Feb 22 12:35:42.535 [conn56] build index done. scanned 3 total records. 0 secs 18ms ******************************************* Test : jstests/in9.js ... m30999| Fri Feb 22 12:35:42.538 [conn1] DROP: test.jstests_in9 m30001| Fri Feb 22 12:35:42.538 [conn56] CMD: drop test.jstests_in9 m30001| Fri Feb 22 12:35:42.539 [conn56] build index test.jstests_in9 { _id: 1 } m30001| Fri Feb 22 12:35:42.540 [conn56] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.543 [conn56] build index test.jstests_in9 { key: 1.0 } m30001| Fri Feb 22 12:35:42.544 [conn56] build index done. scanned 5 total records. 0.001 secs 10ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_mapreduce.js ******************************************* Test : jstests/sub1.js ... m30999| Fri Feb 22 12:35:42.548 [conn1] DROP: test.sub1 m30001| Fri Feb 22 12:35:42.548 [conn56] CMD: drop test.sub1 m30001| Fri Feb 22 12:35:42.549 [conn56] build index test.sub1 { _id: 1 } m30001| Fri Feb 22 12:35:42.549 [conn56] build index done. scanned 0 total records. 0 secs { "_id" : ObjectId("5127661e000decca08773c89"), "a" : 1, "b" : { "c" : { "d" : 2 } } } 3ms ******************************************* Test : jstests/pull.js ... 
m30999| Fri Feb 22 12:35:42.552 [conn1] DROP: test.jstests_pull m30001| Fri Feb 22 12:35:42.552 [conn56] CMD: drop test.jstests_pull m30001| Fri Feb 22 12:35:42.553 [conn56] build index test.jstests_pull { _id: 1 } m30001| Fri Feb 22 12:35:42.554 [conn56] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:35:42.555 [conn1] DROP: test.jstests_pull m30001| Fri Feb 22 12:35:42.555 [conn56] CMD: drop test.jstests_pull m30001| Fri Feb 22 12:35:42.559 [conn56] build index test.jstests_pull { _id: 1 } m30001| Fri Feb 22 12:35:42.559 [conn56] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.560 [conn1] DROP: test.jstests_pull m30001| Fri Feb 22 12:35:42.560 [conn56] CMD: drop test.jstests_pull m30001| Fri Feb 22 12:35:42.564 [conn56] build index test.jstests_pull { _id: 1 } m30001| Fri Feb 22 12:35:42.564 [conn56] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.565 [conn1] DROP: test.jstests_pull m30001| Fri Feb 22 12:35:42.565 [conn56] CMD: drop test.jstests_pull m30001| Fri Feb 22 12:35:42.569 [conn56] build index test.jstests_pull { _id: 1 } m30001| Fri Feb 22 12:35:42.569 [conn56] build index done. scanned 0 total records. 0 secs 20ms ******************************************* Test : jstests/insert1.js ... m30999| Fri Feb 22 12:35:42.571 [conn1] DROP: test.insert1 m30001| Fri Feb 22 12:35:42.572 [conn56] CMD: drop test.insert1 m30001| Fri Feb 22 12:35:42.572 [conn56] build index test.insert1 { _id: 1 } m30001| Fri Feb 22 12:35:42.573 [conn56] build index done. scanned 0 total records. 0 secs 6ms ******************************************* Test : jstests/orj.js ... m30999| Fri Feb 22 12:35:42.578 [conn1] DROP: test.jstests_orj m30001| Fri Feb 22 12:35:42.578 [conn56] CMD: drop test.jstests_orj m30001| Fri Feb 22 12:35:42.579 [conn56] build index test.jstests_orj { _id: 1 } m30001| Fri Feb 22 12:35:42.579 [conn56] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:42.579 [conn56] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:42.579 [conn56] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:42.580 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:42.580 [conn56] end connection 127.0.0.1:63547 (4 connections now open) m30001| Fri Feb 22 12:35:42.580 [initandlisten] connection accepted from 127.0.0.1:61653 #57 (5 connections now open) m30001| Fri Feb 22 12:35:42.668 [conn57] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:42.668 [conn57] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:42.669 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:42.669 [conn57] end connection 127.0.0.1:61653 (4 connections now open) m30001| Fri Feb 22 12:35:42.670 [initandlisten] connection accepted from 127.0.0.1:44719 #58 (5 connections now open) m30001| Fri Feb 22 12:35:42.670 [conn58] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:42.670 [conn58] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be 
objects", code: 14817 } m30999| Fri Feb 22 12:35:42.670 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:42.670 [conn58] end connection 127.0.0.1:44719 (4 connections now open) m30001| Fri Feb 22 12:35:42.671 [initandlisten] connection accepted from 127.0.0.1:40546 #59 (5 connections now open) m30001| Fri Feb 22 12:35:42.671 [conn59] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: "a" } ] } m30001| Fri Feb 22 12:35:42.671 [conn59] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.671 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.671 [conn59] end connection 127.0.0.1:40546 (4 connections now open) m30001| Fri Feb 22 12:35:42.672 [initandlisten] connection accepted from 127.0.0.1:43801 #60 (5 connections now open) m30001| Fri Feb 22 12:35:42.672 [conn60] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: {} } ] } m30001| Fri Feb 22 12:35:42.672 [conn60] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.673 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", 
vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.673 [conn60] end connection 127.0.0.1:43801 (4 connections now open) m30001| Fri Feb 22 12:35:42.673 [initandlisten] connection accepted from 127.0.0.1:42574 #61 (5 connections now open) m30001| Fri Feb 22 12:35:42.674 [conn61] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: [ "a" ] } ] } m30001| Fri Feb 22 12:35:42.674 [conn61] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 } m30999| Fri Feb 22 12:35:42.674 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:42.674 [conn61] end connection 127.0.0.1:42574 (4 connections now open) m30001| Fri Feb 22 12:35:42.674 [initandlisten] connection accepted from 127.0.0.1:60289 #62 (5 connections now open) m30001| Fri Feb 22 12:35:42.675 [conn62] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: "a" } m30001| Fri Feb 22 12:35:42.675 [conn62] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.675 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri 
Feb 22 12:35:42.675 [conn62] end connection 127.0.0.1:60289 (4 connections now open) m30001| Fri Feb 22 12:35:42.676 [initandlisten] connection accepted from 127.0.0.1:39871 #63 (5 connections now open) m30001| Fri Feb 22 12:35:42.676 [conn63] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: {} } m30001| Fri Feb 22 12:35:42.676 [conn63] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.676 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.676 [conn63] end connection 127.0.0.1:39871 (4 connections now open) m30001| Fri Feb 22 12:35:42.677 [initandlisten] connection accepted from 127.0.0.1:59986 #64 (5 connections now open) m30001| Fri Feb 22 12:35:42.677 [conn64] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ "a" ] } m30001| Fri Feb 22 12:35:42.677 [conn64] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 } m30999| Fri Feb 22 12:35:42.677 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object m30001| Fri Feb 22 12:35:42.677 [conn64] end connection 127.0.0.1:59986 (4 connections now open) m30001| Fri Feb 22 12:35:42.678 [initandlisten] connection accepted from 127.0.0.1:42763 #65 (5 connections 
now open) m30001| Fri Feb 22 12:35:42.678 [conn65] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: "a" } ] } m30001| Fri Feb 22 12:35:42.678 [conn65] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.678 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.679 [conn65] end connection 127.0.0.1:42763 (4 connections now open) m30001| Fri Feb 22 12:35:42.679 [initandlisten] connection accepted from 127.0.0.1:33831 #66 (5 connections now open) m30001| Fri Feb 22 12:35:42.680 [conn66] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: {} } ] } m30001| Fri Feb 22 12:35:42.680 [conn66] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.680 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.680 [conn66] end connection 127.0.0.1:33831 (4 connections now open) m30001| Fri Feb 22 12:35:42.680 [initandlisten] connection accepted from 127.0.0.1:40272 #67 (5 connections now open) m30001| Fri Feb 22 12:35:42.681 [conn67] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: [ "a" ] } ] } m30001| Fri 
Feb 22 12:35:42.681 [conn67] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 } m30999| Fri Feb 22 12:35:42.681 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object m30001| Fri Feb 22 12:35:42.681 [conn67] end connection 127.0.0.1:40272 (4 connections now open) m30001| Fri Feb 22 12:35:42.681 [initandlisten] connection accepted from 127.0.0.1:57337 #68 (5 connections now open) m30001| Fri Feb 22 12:35:42.691 [conn68] build index test.jstests_orj { a: 1.0 } m30001| Fri Feb 22 12:35:42.691 [conn68] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:35:42.692 [conn68] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:42.692 [conn68] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:42.692 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:42.692 [conn68] end connection 127.0.0.1:57337 (4 connections now open) m30001| Fri Feb 22 12:35:42.693 [initandlisten] connection accepted from 127.0.0.1:60559 #69 (5 connections now open) m30001| Fri Feb 22 12:35:42.693 [conn69] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:42.693 [conn69] problem detected during query over 
test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:42.693 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:42.693 [conn69] end connection 127.0.0.1:60559 (4 connections now open) m30001| Fri Feb 22 12:35:42.694 [initandlisten] connection accepted from 127.0.0.1:45142 #70 (5 connections now open) m30001| Fri Feb 22 12:35:42.694 [conn70] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:42.694 [conn70] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 } m30999| Fri Feb 22 12:35:42.694 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:42.694 [conn70] end connection 127.0.0.1:45142 (4 connections now open) m30001| Fri Feb 22 12:35:42.695 [initandlisten] connection accepted from 127.0.0.1:43675 #71 (5 connections now open) m30001| Fri Feb 22 12:35:42.695 [conn71] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: "a" } ] } m30001| Fri Feb 22 12:35:42.695 [conn71] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.696 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: 
"localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.696 [conn71] end connection 127.0.0.1:43675 (4 connections now open) m30001| Fri Feb 22 12:35:42.696 [initandlisten] connection accepted from 127.0.0.1:42616 #72 (5 connections now open) m30001| Fri Feb 22 12:35:42.697 [conn72] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: {} } ] } m30001| Fri Feb 22 12:35:42.697 [conn72] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.697 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.697 [conn72] end connection 127.0.0.1:42616 (4 connections now open) m30001| Fri Feb 22 12:35:42.697 [initandlisten] connection accepted from 127.0.0.1:60585 #73 (5 connections now open) m30001| Fri Feb 22 12:35:42.698 [conn73] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: [ "a" ] } ] } m30001| Fri Feb 22 12:35:42.698 [conn73] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 } m30999| Fri Feb 22 12:35:42.698 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements 
must be objects m30001| Fri Feb 22 12:35:42.698 [conn73] end connection 127.0.0.1:60585 (4 connections now open) m30001| Fri Feb 22 12:35:42.698 [initandlisten] connection accepted from 127.0.0.1:60779 #74 (5 connections now open) m30001| Fri Feb 22 12:35:42.699 [conn74] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: "a" } m30001| Fri Feb 22 12:35:42.699 [conn74] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.699 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.699 [conn74] end connection 127.0.0.1:60779 (4 connections now open) m30001| Fri Feb 22 12:35:42.699 [initandlisten] connection accepted from 127.0.0.1:41242 #75 (5 connections now open) m30001| Fri Feb 22 12:35:42.700 [conn75] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: {} } m30001| Fri Feb 22 12:35:42.700 [conn75] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.700 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.700 [conn75] end connection 127.0.0.1:41242 (4 connections now open) m30001| Fri Feb 22 12:35:42.701 [initandlisten] connection accepted from 127.0.0.1:57004 #76 (5 
connections now open) m30001| Fri Feb 22 12:35:42.701 [conn76] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ "a" ] } m30001| Fri Feb 22 12:35:42.701 [conn76] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 } m30999| Fri Feb 22 12:35:42.701 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object m30001| Fri Feb 22 12:35:42.701 [conn76] end connection 127.0.0.1:57004 (4 connections now open) m30001| Fri Feb 22 12:35:42.702 [initandlisten] connection accepted from 127.0.0.1:49745 #77 (5 connections now open) m30001| Fri Feb 22 12:35:42.702 [conn77] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: "a" } ] } m30001| Fri Feb 22 12:35:42.702 [conn77] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.702 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.702 [conn77] end connection 127.0.0.1:49745 (4 connections now open) m30001| Fri Feb 22 12:35:42.703 [initandlisten] connection accepted from 127.0.0.1:48787 #78 (5 connections now open) m30001| Fri Feb 22 12:35:42.703 [conn78] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: {} } ] } 
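The orj.js assertions above enumerate the argument checks mongod applies to $or and $nor before query planning: the clause must be a nonempty array, and each element must itself be a query object. A rough pure-Python model of those checks, with the error codes copied from the assertion lines in this log (the control flow is an illustrative reconstruction, not MongoDB's actual matcher code):

```python
# Rough model of the $or/$nor argument validation implied by the assertion
# lines in this log. Error codes are copied from the log; the branching
# logic is an illustrative reconstruction, not MongoDB's matcher code.

class QueryError(Exception):
    def __init__(self, code, msg):
        super().__init__(f"{code} {msg}")
        self.code = code

def _check_clause(op, value, top_level):
    # $or/$nor must be a nonempty array; top-level $or has its own code.
    if not isinstance(value, list) or not value:
        if op == "$or" and top_level:
            raise QueryError(13262, "$or requires nonempty array")
        raise QueryError(13086, "$and/$or/$nor must be a nonempty array")
    for elem in value:
        # Each element must itself be a query object.
        if not isinstance(elem, dict):
            if op == "$or":
                raise QueryError(14817, "$and/$or elements must be objects")
            raise QueryError(13087,
                             "$and/$or/$nor match element must be an object")
        # Recurse into nested $or/$nor clauses.
        for k, v in elem.items():
            if k in ("$or", "$nor"):
                _check_clause(k, v, top_level=False)

def validate(query):
    """Raise QueryError for malformed $or/$nor, as in the queries above."""
    for k, v in query.items():
        if k in ("$or", "$nor"):
            _check_clause(k, v, top_level=(k == "$or"))
```

Note from the log that array-shape errors on a top-level $or report 13262 while $nor and any nested clause report 13086, and non-object elements report 14817 under $or but 13087 under $nor.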
m30001| Fri Feb 22 12:35:42.703 [conn78] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.703 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.703 [conn78] end connection 127.0.0.1:48787 (4 connections now open)
m30001| Fri Feb 22 12:35:42.704 [initandlisten] connection accepted from 127.0.0.1:59064 #79 (5 connections now open)
m30001| Fri Feb 22 12:35:42.704 [conn79] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.704 [conn79] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.704 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.705 [conn79] end connection 127.0.0.1:59064 (4 connections now open)
m30001| Fri Feb 22 12:35:42.705 [initandlisten] connection accepted from 127.0.0.1:38410 #80 (5 connections now open)
m30001| Fri Feb 22 12:35:42.715 [conn4] CMD: dropIndexes test.jstests_orj
m30001| Fri Feb 22 12:35:42.718 [conn80] build index test.jstests_orj { b: 1.0 }
m30001| Fri Feb 22 12:35:42.719 [conn80] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.719 [conn80] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: "a" }
m30001| Fri Feb 22 12:35:42.719 [conn80] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.720 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.720 [conn80] end connection 127.0.0.1:38410 (4 connections now open)
m30001| Fri Feb 22 12:35:42.720 [initandlisten] connection accepted from 127.0.0.1:51235 #81 (5 connections now open)
m30001| Fri Feb 22 12:35:42.721 [conn81] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: {} }
m30001| Fri Feb 22 12:35:42.721 [conn81] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.721 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.721 [conn81] end connection 127.0.0.1:51235 (4 connections now open)
m30001| Fri Feb 22 12:35:42.721 [initandlisten] connection accepted from 127.0.0.1:51577 #82 (5 connections now open)
m30001| Fri Feb 22 12:35:42.722 [conn82] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ "a" ] }
m30001| Fri Feb 22 12:35:42.722 [conn82] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.722 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.722 [conn82] end connection 127.0.0.1:51577 (4 connections now open)
m30001| Fri Feb 22 12:35:42.722 [initandlisten] connection accepted from 127.0.0.1:44986 #83 (5 connections now open)
m30001| Fri Feb 22 12:35:42.723 [conn83] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: "a" } ] }
m30001| Fri Feb 22 12:35:42.723 [conn83] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.723 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.723 [conn83] end connection 127.0.0.1:44986 (4 connections now open)
m30001| Fri Feb 22 12:35:42.724 [initandlisten] connection accepted from 127.0.0.1:52785 #84 (5 connections now open)
m30001| Fri Feb 22 12:35:42.724 [conn84] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: {} } ] }
m30001| Fri Feb 22 12:35:42.724 [conn84] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.724 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.724 [conn84] end connection 127.0.0.1:52785 (4 connections now open)
m30001| Fri Feb 22 12:35:42.725 [initandlisten] connection accepted from 127.0.0.1:45745 #85 (5 connections now open)
m30001| Fri Feb 22 12:35:42.725 [conn85] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.725 [conn85] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.725 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.725 [conn85] end connection 127.0.0.1:45745 (4 connections now open)
m30001| Fri Feb 22 12:35:42.726 [initandlisten] connection accepted from 127.0.0.1:50072 #86 (5 connections now open)
m30001| Fri Feb 22 12:35:42.726 [conn86] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:42.726 [conn86] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.726 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.726 [conn86] end connection 127.0.0.1:50072 (4 connections now open)
m30001| Fri Feb 22 12:35:42.727 [initandlisten] connection accepted from 127.0.0.1:64509 #87 (5 connections now open)
m30001| Fri Feb 22 12:35:42.727 [conn87] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:42.727 [conn87] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.727 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.727 [conn87] end connection 127.0.0.1:64509 (4 connections now open)
m30001| Fri Feb 22 12:35:42.728 [initandlisten] connection accepted from 127.0.0.1:35175 #88 (5 connections now open)
m30001| Fri Feb 22 12:35:42.728 [conn88] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:42.728 [conn88] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.728 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.729 [conn88] end connection 127.0.0.1:35175 (4 connections now open)
m30001| Fri Feb 22 12:35:42.729 [initandlisten] connection accepted from 127.0.0.1:38452 #89 (5 connections now open)
m30001| Fri Feb 22 12:35:42.729 [conn89] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: "a" } ] }
m30001| Fri Feb 22 12:35:42.730 [conn89] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.730 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.730 [conn89] end connection 127.0.0.1:38452 (4 connections now open)
m30001| Fri Feb 22 12:35:42.730 [initandlisten] connection accepted from 127.0.0.1:47730 #90 (5 connections now open)
m30001| Fri Feb 22 12:35:42.731 [conn90] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: {} } ] }
m30001| Fri Feb 22 12:35:42.731 [conn90] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.731 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.731 [conn90] end connection 127.0.0.1:47730 (4 connections now open)
m30001| Fri Feb 22 12:35:42.731 [initandlisten] connection accepted from 127.0.0.1:40669 #91 (5 connections now open)
m30001| Fri Feb 22 12:35:42.732 [conn91] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.732 [conn91] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.732 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.732 [conn91] end connection 127.0.0.1:40669 (4 connections now open)
m30001| Fri Feb 22 12:35:42.733 [initandlisten] connection accepted from 127.0.0.1:33886 #92 (5 connections now open)
m30001| Fri Feb 22 12:35:42.742 [conn4] CMD: dropIndexes test.jstests_orj
m30001| Fri Feb 22 12:35:42.745 [conn92] build index test.jstests_orj { a: 1.0 }
m30001| Fri Feb 22 12:35:42.745 [conn92] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.746 [conn92] build index test.jstests_orj { b: 1.0 }
m30001| Fri Feb 22 12:35:42.746 [conn92] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.747 [conn92] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: "a" }
m30001| Fri Feb 22 12:35:42.747 [conn92] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.747 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.747 [conn92] end connection 127.0.0.1:33886 (4 connections now open)
m30001| Fri Feb 22 12:35:42.748 [initandlisten] connection accepted from 127.0.0.1:61645 #93 (5 connections now open)
m30001| Fri Feb 22 12:35:42.748 [conn93] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: {} }
m30001| Fri Feb 22 12:35:42.748 [conn93] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.748 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.748 [conn93] end connection 127.0.0.1:61645 (4 connections now open)
m30001| Fri Feb 22 12:35:42.749 [initandlisten] connection accepted from 127.0.0.1:62870 #94 (5 connections now open)
m30001| Fri Feb 22 12:35:42.749 [conn94] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ "a" ] }
m30001| Fri Feb 22 12:35:42.749 [conn94] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.749 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.749 [conn94] end connection 127.0.0.1:62870 (4 connections now open)
m30001| Fri Feb 22 12:35:42.750 [initandlisten] connection accepted from 127.0.0.1:43334 #95 (5 connections now open)
m30001| Fri Feb 22 12:35:42.750 [conn95] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: "a" } ] }
m30001| Fri Feb 22 12:35:42.750 [conn95] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.750 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.751 [conn95] end connection 127.0.0.1:43334 (4 connections now open)
m30001| Fri Feb 22 12:35:42.751 [initandlisten] connection accepted from 127.0.0.1:38217 #96 (5 connections now open)
m30001| Fri Feb 22 12:35:42.751 [conn96] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: {} } ] }
m30001| Fri Feb 22 12:35:42.752 [conn96] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.752 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.752 [conn96] end connection 127.0.0.1:38217 (4 connections now open)
m30001| Fri Feb 22 12:35:42.752 [initandlisten] connection accepted from 127.0.0.1:59694 #97 (5 connections now open)
m30001| Fri Feb 22 12:35:42.753 [conn97] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.753 [conn97] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.753 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.753 [conn97] end connection 127.0.0.1:59694 (4 connections now open)
m30001| Fri Feb 22 12:35:42.753 [initandlisten] connection accepted from 127.0.0.1:61745 #98 (5 connections now open)
m30001| Fri Feb 22 12:35:42.754 [conn98] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:42.754 [conn98] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.754 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.754 [conn98] end connection 127.0.0.1:61745 (4 connections now open)
m30001| Fri Feb 22 12:35:42.755 [initandlisten] connection accepted from 127.0.0.1:34253 #99 (5 connections now open)
m30001| Fri Feb 22 12:35:42.755 [conn99] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:42.755 [conn99] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.755 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.755 [conn99] end connection 127.0.0.1:34253 (4 connections now open)
m30001| Fri Feb 22 12:35:42.756 [initandlisten] connection accepted from 127.0.0.1:45755 #100 (5 connections now open)
m30001| Fri Feb 22 12:35:42.756 [conn100] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:42.756 [conn100] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.756 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.756 [conn100] end connection 127.0.0.1:45755 (4 connections now open)
m30001| Fri Feb 22 12:35:42.757 [initandlisten] connection accepted from 127.0.0.1:43536 #101 (5 connections now open)
m30001| Fri Feb 22 12:35:42.757 [conn101] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: "a" } ] }
m30001| Fri Feb 22 12:35:42.757 [conn101] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.757 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.757 [conn101] end connection 127.0.0.1:43536 (4 connections now open)
m30001| Fri Feb 22 12:35:42.758 [initandlisten] connection accepted from 127.0.0.1:65019 #102 (5 connections now open)
m30001| Fri Feb 22 12:35:42.758 [conn102] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: {} } ] }
m30001| Fri Feb 22 12:35:42.758 [conn102] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.758 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.758 [conn102] end connection 127.0.0.1:65019 (4 connections now open)
m30001| Fri Feb 22 12:35:42.759 [initandlisten] connection accepted from 127.0.0.1:44617 #103 (5 connections now open)
m30001| Fri Feb 22 12:35:42.759 [conn103] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.759 [conn103] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.759 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.760 [conn103] end connection 127.0.0.1:44617 (4 connections now open)
m30001| Fri Feb 22 12:35:42.760 [initandlisten] connection accepted from 127.0.0.1:62626 #104 (5 connections now open)
m30001| Fri Feb 22 12:35:42.770 [conn4] CMD: dropIndexes test.jstests_orj
m30001| Fri Feb 22 12:35:42.775 [conn104] build index test.jstests_orj { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:42.776 [conn104] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.776 [conn104] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: "a" }
m30001| Fri Feb 22 12:35:42.776 [conn104] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.776 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.776 [conn104] end connection 127.0.0.1:62626 (4 connections now open)
m30001| Fri Feb 22 12:35:42.777 [initandlisten] connection accepted from 127.0.0.1:44128 #105 (5 connections now open)
m30001| Fri Feb 22 12:35:42.777 [conn105] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: {} }
m30001| Fri Feb 22 12:35:42.777 [conn105] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.778 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.778 [conn105] end connection 127.0.0.1:44128 (4 connections now open)
m30001| Fri Feb 22 12:35:42.778 [initandlisten] connection accepted from 127.0.0.1:44391 #106 (5 connections now open)
m30001| Fri Feb 22 12:35:42.778 [conn106] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ "a" ] }
m30001| Fri Feb 22 12:35:42.778 [conn106] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.779 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.779 [conn106] end connection 127.0.0.1:44391 (4 connections now open)
m30001| Fri Feb 22 12:35:42.779 [initandlisten] connection accepted from 127.0.0.1:48557 #107 (5 connections now open)
m30001| Fri Feb 22 12:35:42.780 [conn107] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: "a" } ] }
m30001| Fri Feb 22 12:35:42.780 [conn107] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.780 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.780 [conn107] end connection 127.0.0.1:48557 (4 connections now open)
m30001| Fri Feb 22 12:35:42.780 [initandlisten] connection accepted from 127.0.0.1:50713 #108 (5 connections now open)
m30001| Fri Feb 22 12:35:42.781 [conn108] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: {} } ] }
m30001| Fri Feb 22 12:35:42.781 [conn108] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.781 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.781 [conn108] end connection 127.0.0.1:50713 (4 connections now open)
m30001| Fri Feb 22 12:35:42.781 [initandlisten] connection accepted from 127.0.0.1:46297 #109 (5 connections now open)
m30001| Fri Feb 22 12:35:42.782 [conn109] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.782 [conn109] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.782 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.782 [conn109] end connection 127.0.0.1:46297 (4 connections now open)
m30001| Fri Feb 22 12:35:42.782 [initandlisten] connection accepted from 127.0.0.1:62313 #110 (5 connections now open)
m30001| Fri Feb 22 12:35:42.783 [conn110] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:42.783 [conn110] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.783 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.783 [conn110] end connection 127.0.0.1:62313 (4 connections now open)
m30001| Fri Feb 22 12:35:42.784 [initandlisten] connection accepted from 127.0.0.1:37802 #111 (5 connections now open)
m30001| Fri Feb 22 12:35:42.784 [conn111] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:42.784 [conn111] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.784 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.784 [conn111] end connection 127.0.0.1:37802 (4 connections now open)
m30001| Fri Feb 22 12:35:42.785 [initandlisten] connection accepted from 127.0.0.1:54321 #112 (5 connections now open)
m30001| Fri Feb 22 12:35:42.785 [conn112] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:42.785 [conn112] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.785 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.785 [conn112] end connection 127.0.0.1:54321 (4 connections now open)
m30001| Fri Feb 22 12:35:42.786 [initandlisten] connection accepted from 127.0.0.1:49826 #113 (5 connections now open)
m30001| Fri Feb 22 12:35:42.786 [conn113] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: "a" } ] }
m30001| Fri Feb 22 12:35:42.786 [conn113] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.786 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.786 [conn113] end connection 127.0.0.1:49826 (4 connections now open)
m30001| Fri Feb 22 12:35:42.787 [initandlisten] connection accepted from 127.0.0.1:64232 #114 (5 connections now open)
m30001| Fri Feb 22 12:35:42.787 [conn114] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: {} } ] }
m30001| Fri Feb 22 12:35:42.787 [conn114] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.787 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.787 [conn114] end connection 127.0.0.1:64232 (4 connections now open)
m30001| Fri Feb 22 12:35:42.788 [initandlisten] connection accepted from 127.0.0.1:60096 #115 (5 connections now open)
m30001| Fri Feb 22 12:35:42.788 [conn115] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.788 [conn115] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:42.788 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:42.789 [conn115] end connection 127.0.0.1:60096 (4 connections now open)
m30001| Fri Feb 22 12:35:42.789 [initandlisten] connection accepted from 127.0.0.1:34063 #116 (5 connections now open)
m30001| Fri Feb 22 12:35:42.799 [conn4] CMD: dropIndexes test.jstests_orj
m30001| Fri Feb 22 12:35:42.802 [conn116] build index test.jstests_orj { a: 1.0 }
m30001| Fri Feb 22 12:35:42.802 [conn116] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.803 [conn116] build index test.jstests_orj { b: 1.0 }
m30001| Fri Feb 22 12:35:42.803 [conn116] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.803 [conn116] build index test.jstests_orj { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:42.804 [conn116] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:35:42.805 [conn116] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: "a" }
m30001| Fri Feb 22 12:35:42.805 [conn116] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.805 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.805 [conn116] end connection 127.0.0.1:34063 (4 connections now open)
m30001| Fri Feb 22 12:35:42.805 [initandlisten] connection accepted from 127.0.0.1:47625 #117 (5 connections now open)
m30001| Fri Feb 22 12:35:42.806 [conn117] assertion 13262 $or requires nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: {} }
m30001| Fri Feb 22 12:35:42.806 [conn117] problem detected during query over test.jstests_orj : { $err: "$or requires nonempty array", code: 13262 }
m30999| Fri Feb 22 12:35:42.806 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array
m30001| Fri Feb 22 12:35:42.806 [conn117] end connection 127.0.0.1:47625 (4 connections now open)
m30001| Fri Feb 22 12:35:42.806 [initandlisten] connection accepted from 127.0.0.1:37247 #118 (5 connections now open)
m30001| Fri Feb 22 12:35:42.807 [conn118] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ "a" ] }
m30001| Fri Feb 22 12:35:42.807 [conn118] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.807 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.807 [conn118] end connection 127.0.0.1:37247 (4 connections now open)
m30001| Fri Feb 22 12:35:42.808 [initandlisten] connection accepted from 127.0.0.1:56108 #119 (5 connections now open)
m30001| Fri Feb 22 12:35:42.808 [conn119] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: "a" } ] }
m30001| Fri Feb 22 12:35:42.808 [conn119] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.808 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.808 [conn119] end connection 127.0.0.1:56108 (4 connections now open)
m30001| Fri Feb 22 12:35:42.809 [initandlisten] connection accepted from 127.0.0.1:37363 #120 (5 connections now open)
m30001| Fri Feb 22 12:35:42.809 [conn120] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: {} } ] }
m30001| Fri Feb 22 12:35:42.809 [conn120] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.809 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:42.809 [conn120] end connection 127.0.0.1:37363 (4 connections now open)
m30001| Fri Feb 22 12:35:42.810 [initandlisten] connection accepted from 127.0.0.1:59079 #121 (5 connections now open)
m30001| Fri Feb 22 12:35:42.810 [conn121] assertion 14817 $and/$or elements must be objects ns:test.jstests_orj query:{ x: 0.0, $or: [ { $or: [ "a" ] } ] }
m30001| Fri Feb 22 12:35:42.810 [conn121] problem detected during query over test.jstests_orj : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:35:42.810 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:35:42.810 [conn121] end connection 127.0.0.1:59079 (4 connections now open)
m30001| Fri Feb 22 12:35:42.811 [initandlisten] connection accepted from 127.0.0.1:60015 #122 (5 connections now open)
m30001| Fri Feb 22 12:35:42.811 [conn122] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:42.811 [conn122] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:42.811 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a
nonempty array m30001| Fri Feb 22 12:35:42.812 [conn122] end connection 127.0.0.1:60015 (4 connections now open) m30001| Fri Feb 22 12:35:42.812 [initandlisten] connection accepted from 127.0.0.1:56119 #123 (5 connections now open) m30001| Fri Feb 22 12:35:42.812 [conn123] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: {} } m30001| Fri Feb 22 12:35:42.812 [conn123] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.813 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.813 [conn123] end connection 127.0.0.1:56119 (4 connections now open) m30001| Fri Feb 22 12:35:42.813 [initandlisten] connection accepted from 127.0.0.1:44188 #124 (5 connections now open) m30001| Fri Feb 22 12:35:42.814 [conn124] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 0.0, $nor: [ "a" ] } m30001| Fri Feb 22 12:35:42.814 [conn124] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 } m30999| Fri Feb 22 12:35:42.814 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object m30001| Fri Feb 22 12:35:42.814 [conn124] end connection 127.0.0.1:44188 (4 connections now open) m30001| Fri Feb 22 12:35:42.814 [initandlisten] connection accepted from 
127.0.0.1:45579 #125 (5 connections now open) m30001| Fri Feb 22 12:35:42.815 [conn125] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: "a" } ] } m30001| Fri Feb 22 12:35:42.815 [conn125] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.815 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.815 [conn125] end connection 127.0.0.1:45579 (4 connections now open) m30001| Fri Feb 22 12:35:42.815 [initandlisten] connection accepted from 127.0.0.1:59913 #126 (5 connections now open) m30001| Fri Feb 22 12:35:42.816 [conn126] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_orj query:{ x: 0.0, $nor: [ { $nor: {} } ] } m30001| Fri Feb 22 12:35:42.816 [conn126] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 } m30999| Fri Feb 22 12:35:42.816 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array m30001| Fri Feb 22 12:35:42.816 [conn126] end connection 127.0.0.1:59913 (4 connections now open) m30001| Fri Feb 22 12:35:42.817 [initandlisten] connection accepted from 127.0.0.1:63598 #127 (5 connections now open) m30001| Fri Feb 22 12:35:42.817 [conn127] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_orj query:{ x: 
0.0, $nor: [ { $nor: [ "a" ] } ] } m30001| Fri Feb 22 12:35:42.817 [conn127] problem detected during query over test.jstests_orj : { $err: "$and/$or/$nor match element must be an object", code: 13087 } m30999| Fri Feb 22 12:35:42.817 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object m30001| Fri Feb 22 12:35:42.817 [conn127] end connection 127.0.0.1:63598 (4 connections now open) m30001| Fri Feb 22 12:35:42.818 [initandlisten] connection accepted from 127.0.0.1:63528 #128 (5 connections now open) 294ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_multinest0.js
*******************************************
Test : jstests/updatec.js ...
m30999| Fri Feb 22 12:35:42.871 [conn1] DROP: test.updatec m30001| Fri Feb 22 12:35:42.871 [conn128] CMD: drop test.updatec m30001| Fri Feb 22 12:35:42.872 [conn128] build index test.updatec { _id: 1 } m30001| Fri Feb 22 12:35:42.873 [conn128] build index done. scanned 0 total records. 0 secs 4ms
*******************************************
Test : jstests/indexp.js ...
m30999| Fri Feb 22 12:35:42.875 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.875 [conn128] CMD: drop test.jstests_indexp m30999| Fri Feb 22 12:35:42.875 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.876 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.876 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.877 [conn128] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:35:42.877 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.877 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.878 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.880 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.880 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.886 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.887 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.887 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.887 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.887 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.889 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.889 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.895 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.896 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.896 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.896 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.896 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.897 [conn128] build index test.jstests_indexp { x: 1.0 } m30001| Fri Feb 22 12:35:42.897 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.900 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.900 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.908 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.908 [conn128] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:42.909 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.909 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.909 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.911 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.911 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.917 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.917 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.917 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.917 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.918 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.919 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.919 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.925 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.926 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.926 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.926 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.927 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.927 [conn128] build index test.jstests_indexp { c: 1.0 } m30001| Fri Feb 22 12:35:42.928 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.930 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.930 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.938 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.938 [conn128] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:42.938 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.938 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.939 [conn128] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:42.941 [conn1] DROP: test.jstests_indexp m30001| Fri Feb 22 12:35:42.941 [conn128] CMD: drop test.jstests_indexp m30001| Fri Feb 22 12:35:42.947 [conn128] build index test.jstests_indexp { _id: 1 } m30001| Fri Feb 22 12:35:42.947 [conn128] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:42.947 [conn128] info: creating collection test.jstests_indexp on add index m30001| Fri Feb 22 12:35:42.948 [conn128] build index test.jstests_indexp { a: 1.0 } m30001| Fri Feb 22 12:35:42.948 [conn128] build index done. scanned 0 total records. 0 secs 75ms
*******************************************
Test : jstests/indexf.js ...
m30999| Fri Feb 22 12:35:43.069 [conn1] DROP: test.indexf m30001| Fri Feb 22 12:35:43.076 [conn128] CMD: drop test.indexf m30001| Fri Feb 22 12:35:43.077 [conn128] build index test.indexf { _id: 1 } m30001| Fri Feb 22 12:35:43.078 [conn128] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:43.078 [conn128] info: creating collection test.indexf on add index m30001| Fri Feb 22 12:35:43.078 [conn128] build index test.indexf { x: 1.0 } m30001| Fri Feb 22 12:35:43.079 [conn128] build index done. scanned 0 total records. 0.001 secs 132ms
*******************************************
Test : jstests/or2.js ...
m30999| Fri Feb 22 12:35:43.082 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.082 [conn128] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.083 [conn128] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.085 [conn128] build index done. scanned 0 total records.
0.001 secs m30001| Fri Feb 22 12:35:43.085 [conn128] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:43.085 [conn128] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.086 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.086 [conn128] end connection 127.0.0.1:63528 (4 connections now open) m30001| Fri Feb 22 12:35:43.086 [initandlisten] connection accepted from 127.0.0.1:47272 #129 (5 connections now open) m30001| Fri Feb 22 12:35:43.087 [conn129] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:43.087 [conn129] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.087 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.087 [conn129] end connection 127.0.0.1:47272 (4 connections now open) m30001| Fri Feb 22 12:35:43.088 [initandlisten] connection accepted from 127.0.0.1:63437 #130 (5 connections now open) m30001| Fri Feb 22 12:35:43.088 [conn130] assertion 14817 $and/$or elements must be objects ns:test.jstests_or2 query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:43.088 [conn130] problem detected during query over test.jstests_or2 : { $err: "$and/$or elements 
must be objects", code: 14817 } m30999| Fri Feb 22 12:35:43.088 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:43.088 [conn130] end connection 127.0.0.1:63437 (4 connections now open) m30001| Fri Feb 22 12:35:43.089 [initandlisten] connection accepted from 127.0.0.1:39948 #131 (5 connections now open) m30999| Fri Feb 22 12:35:43.091 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.091 [conn131] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.096 [conn131] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.096 [conn131] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.097 [conn131] build index test.jstests_or2 { x: 1.0 } m30001| Fri Feb 22 12:35:43.098 [conn131] build index done. scanned 1 total records. 
0 secs m30001| Fri Feb 22 12:35:43.099 [conn131] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:43.099 [conn131] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.099 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.099 [conn131] end connection 127.0.0.1:39948 (4 connections now open) m30001| Fri Feb 22 12:35:43.100 [initandlisten] connection accepted from 127.0.0.1:33697 #132 (5 connections now open) m30001| Fri Feb 22 12:35:43.100 [conn132] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:43.100 [conn132] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.101 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.101 [conn132] end connection 127.0.0.1:33697 (4 connections now open) m30001| Fri Feb 22 12:35:43.101 [initandlisten] connection accepted from 127.0.0.1:41658 #133 (5 connections now open) m30001| Fri Feb 22 12:35:43.102 [conn133] assertion 14817 $and/$or elements must be objects ns:test.jstests_or2 query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:43.102 [conn133] problem detected during query over test.jstests_or2 : { $err: "$and/$or elements must 
be objects", code: 14817 } m30999| Fri Feb 22 12:35:43.102 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:43.102 [conn133] end connection 127.0.0.1:41658 (4 connections now open) m30001| Fri Feb 22 12:35:43.102 [initandlisten] connection accepted from 127.0.0.1:47284 #134 (5 connections now open) m30999| Fri Feb 22 12:35:43.105 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.106 [conn134] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.112 [conn134] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.112 [conn134] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:43.113 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.113 [conn134] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.117 [conn134] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.118 [conn134] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.118 [conn134] info: creating collection test.jstests_or2 on add index m30001| Fri Feb 22 12:35:43.118 [conn134] build index test.jstests_or2 { x: 1.0, a: 1.0 } m30001| Fri Feb 22 12:35:43.118 [conn134] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:43.119 [conn134] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:43.119 [conn134] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.119 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.119 [conn134] end connection 127.0.0.1:47284 (4 connections now open) m30001| Fri Feb 22 12:35:43.120 [initandlisten] connection accepted from 127.0.0.1:42089 #135 (5 connections now open) m30001| Fri Feb 22 12:35:43.120 [conn135] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:43.120 [conn135] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.120 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.121 [conn135] end connection 127.0.0.1:42089 (4 connections now open) m30001| Fri Feb 22 12:35:43.121 [initandlisten] connection accepted from 127.0.0.1:63216 #136 (5 connections now open) m30001| Fri Feb 22 12:35:43.121 [conn136] assertion 14817 $and/$or elements must be objects ns:test.jstests_or2 query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:43.121 [conn136] problem detected during query over test.jstests_or2 : { $err: "$and/$or elements must 
be objects", code: 14817 } m30999| Fri Feb 22 12:35:43.122 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:43.122 [conn136] end connection 127.0.0.1:63216 (4 connections now open) m30001| Fri Feb 22 12:35:43.122 [initandlisten] connection accepted from 127.0.0.1:64775 #137 (5 connections now open) m30999| Fri Feb 22 12:35:43.125 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.126 [conn137] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.131 [conn137] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.132 [conn137] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:43.133 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.133 [conn137] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.137 [conn137] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.137 [conn137] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.137 [conn137] info: creating collection test.jstests_or2 on add index m30001| Fri Feb 22 12:35:43.137 [conn137] build index test.jstests_or2 { x: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:43.138 [conn137] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:43.139 [conn137] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:43.139 [conn137] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.139 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.139 [conn137] end connection 127.0.0.1:64775 (4 connections now open) m30001| Fri Feb 22 12:35:43.140 [initandlisten] connection accepted from 127.0.0.1:65488 #138 (5 connections now open) m30001| Fri Feb 22 12:35:43.140 [conn138] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:43.140 [conn138] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.140 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.140 [conn138] end connection 127.0.0.1:65488 (4 connections now open) m30001| Fri Feb 22 12:35:43.141 [initandlisten] connection accepted from 127.0.0.1:42642 #139 (5 connections now open) m30001| Fri Feb 22 12:35:43.141 [conn139] assertion 14817 $and/$or elements must be objects ns:test.jstests_or2 query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:43.141 [conn139] problem detected during query over test.jstests_or2 : { $err: "$and/$or elements must 
be objects", code: 14817 } m30999| Fri Feb 22 12:35:43.141 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:43.141 [conn139] end connection 127.0.0.1:42642 (4 connections now open) m30001| Fri Feb 22 12:35:43.142 [initandlisten] connection accepted from 127.0.0.1:64980 #140 (5 connections now open) m30999| Fri Feb 22 12:35:43.145 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.145 [conn140] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.151 [conn140] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.152 [conn140] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:43.153 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.153 [conn140] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.157 [conn140] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.157 [conn140] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.157 [conn140] info: creating collection test.jstests_or2 on add index m30001| Fri Feb 22 12:35:43.158 [conn140] build index test.jstests_or2 { x: 1.0, a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:35:43.159 [conn140] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:35:43.160 [conn140] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: "a" } m30001| Fri Feb 22 12:35:43.160 [conn140] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.160 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.160 [conn140] end connection 127.0.0.1:64980 (4 connections now open) m30001| Fri Feb 22 12:35:43.160 [initandlisten] connection accepted from 127.0.0.1:38241 #141 (5 connections now open) m30001| Fri Feb 22 12:35:43.161 [conn141] assertion 13262 $or requires nonempty array ns:test.jstests_or2 query:{ x: 0.0, $or: {} } m30001| Fri Feb 22 12:35:43.161 [conn141] problem detected during query over test.jstests_or2 : { $err: "$or requires nonempty array", code: 13262 } m30999| Fri Feb 22 12:35:43.161 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13262 $or requires nonempty array m30001| Fri Feb 22 12:35:43.161 [conn141] end connection 127.0.0.1:38241 (4 connections now open) m30001| Fri Feb 22 12:35:43.162 [initandlisten] connection accepted from 127.0.0.1:57029 #142 (5 connections now open) m30001| Fri Feb 22 12:35:43.162 [conn142] assertion 14817 $and/$or elements must be objects ns:test.jstests_or2 query:{ x: 0.0, $or: [ "a" ] } m30001| Fri Feb 22 12:35:43.162 [conn142] problem detected during query over test.jstests_or2 : { $err: "$and/$or elements 
must be objects", code: 14817 } m30999| Fri Feb 22 12:35:43.162 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects m30001| Fri Feb 22 12:35:43.162 [conn142] end connection 127.0.0.1:57029 (4 connections now open) m30001| Fri Feb 22 12:35:43.163 [initandlisten] connection accepted from 127.0.0.1:58732 #143 (5 connections now open) m30999| Fri Feb 22 12:35:43.166 [conn1] DROP: test.jstests_or2 m30001| Fri Feb 22 12:35:43.166 [conn143] CMD: drop test.jstests_or2 m30001| Fri Feb 22 12:35:43.173 [conn143] build index test.jstests_or2 { _id: 1 } m30001| Fri Feb 22 12:35:43.173 [conn143] build index done. scanned 0 total records. 0 secs 93ms
>>>>>>>>>>>>>>> skipping jstests/parallel
*******************************************
Test : jstests/mr_sort.js ...
m30999| Fri Feb 22 12:35:43.175 [conn1] DROP: test.mr_sort m30001| Fri Feb 22 12:35:43.176 [conn143] CMD: drop test.mr_sort m30001| Fri Feb 22 12:35:43.176 [conn143] build index test.mr_sort { _id: 1 } m30001| Fri Feb 22 12:35:43.177 [conn143] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.177 [conn143] info: creating collection test.mr_sort on add index m30001| Fri Feb 22 12:35:43.177 [conn143] build index test.mr_sort { x: 1.0 } m30001| Fri Feb 22 12:35:43.178 [conn143] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:35:43.212 [conn143] CMD: drop test.tmp.mr.mr_sort_58 m30001| Fri Feb 22 12:35:43.212 [conn143] CMD: drop test.tmp.mr.mr_sort_58_inc m30001| Fri Feb 22 12:35:43.212 [conn143] build index test.tmp.mr.mr_sort_58_inc { 0: 1 } m30001| Fri Feb 22 12:35:43.213 [conn143] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:35:43.213 [conn143] build index test.tmp.mr.mr_sort_58 { _id: 1 } m30001| Fri Feb 22 12:35:43.214 [conn143] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.216 [conn143] CMD: drop test.mr_sort_out m30001| Fri Feb 22 12:35:43.224 [conn143] CMD: drop test.tmp.mr.mr_sort_58 m30001| Fri Feb 22 12:35:43.224 [conn143] CMD: drop test.tmp.mr.mr_sort_58 m30001| Fri Feb 22 12:35:43.224 [conn143] CMD: drop test.tmp.mr.mr_sort_58_inc m30001| Fri Feb 22 12:35:43.227 [conn143] CMD: drop test.tmp.mr.mr_sort_58 m30001| Fri Feb 22 12:35:43.228 [conn143] CMD: drop test.tmp.mr.mr_sort_58_inc m30999| Fri Feb 22 12:35:43.229 [conn1] DROP: test.mr_sort_out m30001| Fri Feb 22 12:35:43.229 [conn143] CMD: drop test.mr_sort_out m30001| Fri Feb 22 12:35:43.235 [conn143] CMD: drop test.tmp.mr.mr_sort_59 m30001| Fri Feb 22 12:35:43.235 [conn143] CMD: drop test.tmp.mr.mr_sort_59_inc m30001| Fri Feb 22 12:35:43.236 [conn143] build index test.tmp.mr.mr_sort_59_inc { 0: 1 } m30001| Fri Feb 22 12:35:43.236 [conn143] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.236 [conn143] build index test.tmp.mr.mr_sort_59 { _id: 1 } m30001| Fri Feb 22 12:35:43.237 [conn143] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:35:43.239 [conn143] CMD: drop test.mr_sort_out m30001| Fri Feb 22 12:35:43.247 [conn143] CMD: drop test.tmp.mr.mr_sort_59 m30001| Fri Feb 22 12:35:43.247 [conn143] CMD: drop test.tmp.mr.mr_sort_59 m30001| Fri Feb 22 12:35:43.247 [conn143] CMD: drop test.tmp.mr.mr_sort_59_inc m30001| Fri Feb 22 12:35:43.252 [conn143] CMD: drop test.tmp.mr.mr_sort_59 m30001| Fri Feb 22 12:35:43.252 [conn143] CMD: drop test.tmp.mr.mr_sort_59_inc m30999| Fri Feb 22 12:35:43.253 [conn1] DROP: test.mr_sort_out m30001| Fri Feb 22 12:35:43.253 [conn143] CMD: drop test.mr_sort_out m30001| Fri Feb 22 12:35:43.262 [conn143] CMD: drop test.tmp.mr.mr_sort_60 m30001| Fri Feb 22 12:35:43.262 [conn143] CMD: drop test.tmp.mr.mr_sort_60_inc m30001| Fri Feb 22 12:35:43.263 [conn143] build index test.tmp.mr.mr_sort_60_inc { 0: 1 } m30001| Fri Feb 22 12:35:43.263 [conn143] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.263 [conn143] build index test.tmp.mr.mr_sort_60 { _id: 1 } m30001| Fri Feb 22 12:35:43.264 [conn143] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.266 [conn143] CMD: drop test.mr_sort_out m30001| Fri Feb 22 12:35:43.273 [conn143] CMD: drop test.tmp.mr.mr_sort_60 m30001| Fri Feb 22 12:35:43.274 [conn143] CMD: drop test.tmp.mr.mr_sort_60 m30001| Fri Feb 22 12:35:43.274 [conn143] CMD: drop test.tmp.mr.mr_sort_60_inc m30001| Fri Feb 22 12:35:43.277 [conn143] CMD: drop test.tmp.mr.mr_sort_60 m30001| Fri Feb 22 12:35:43.277 [conn143] CMD: drop test.tmp.mr.mr_sort_60_inc m30999| Fri Feb 22 12:35:43.279 [conn1] DROP: test.mr_sort_out m30001| Fri Feb 22 12:35:43.279 [conn143] CMD: drop test.mr_sort_out 108ms ******************************************* Test : jstests/or3.js ... 
m30999| Fri Feb 22 12:35:43.286 [conn1] DROP: test.jstests_or3
m30001| Fri Feb 22 12:35:43.286 [conn143] CMD: drop test.jstests_or3
m30001| Fri Feb 22 12:35:43.287 [conn143] build index test.jstests_or3 { _id: 1 }
m30001| Fri Feb 22 12:35:43.287 [conn143] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.288 [conn143] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:43.288 [conn143] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.288 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.288 [conn143] end connection 127.0.0.1:58732 (4 connections now open)
m30001| Fri Feb 22 12:35:43.289 [initandlisten] connection accepted from 127.0.0.1:54835 #144 (6 connections now open)
m30001| Fri Feb 22 12:35:43.290 [conn144] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:43.290 [conn144] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.290 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.290 [conn144] end connection 127.0.0.1:54835 (4 connections now open)
m30001| Fri Feb 22 12:35:43.290 [initandlisten] connection accepted from 127.0.0.1:38348 #145 (5 connections now open)
m30001| Fri Feb 22 12:35:43.291 [conn145] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_or3 query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:43.291 [conn145] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:43.291 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:43.291 [conn145] end connection 127.0.0.1:38348 (4 connections now open)
m30001| Fri Feb 22 12:35:43.292 [initandlisten] connection accepted from 127.0.0.1:39554 #146 (5 connections now open)
m30001| Fri Feb 22 12:35:43.297 [conn146] build index test.jstests_or3 { x: 1.0 }
m30001| Fri Feb 22 12:35:43.297 [conn146] build index done. scanned 8 total records. 0 secs
m30001| Fri Feb 22 12:35:43.298 [conn146] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:43.298 [conn146] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.299 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.299 [conn146] end connection 127.0.0.1:39554 (4 connections now open)
m30001| Fri Feb 22 12:35:43.299 [initandlisten] connection accepted from 127.0.0.1:64786 #147 (5 connections now open)
m30001| Fri Feb 22 12:35:43.300 [conn147] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:43.300 [conn147] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.300 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.300 [conn147] end connection 127.0.0.1:64786 (4 connections now open)
m30001| Fri Feb 22 12:35:43.301 [initandlisten] connection accepted from 127.0.0.1:48967 #148 (5 connections now open)
m30001| Fri Feb 22 12:35:43.301 [conn148] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_or3 query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:43.301 [conn148] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:43.301 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:43.302 [conn148] end connection 127.0.0.1:48967 (4 connections now open)
m30001| Fri Feb 22 12:35:43.302 [initandlisten] connection accepted from 127.0.0.1:34892 #149 (5 connections now open)
m30999| Fri Feb 22 12:35:43.307 [conn1] DROP: test.jstests_or3
m30001| Fri Feb 22 12:35:43.307 [conn149] CMD: drop test.jstests_or3
m30001| Fri Feb 22 12:35:43.313 [conn149] build index test.jstests_or3 { _id: 1 }
m30001| Fri Feb 22 12:35:43.314 [conn149] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.314 [conn149] info: creating collection test.jstests_or3 on add index
m30001| Fri Feb 22 12:35:43.314 [conn149] build index test.jstests_or3 { x: 1.0, a: 1.0 }
m30001| Fri Feb 22 12:35:43.314 [conn149] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.316 [conn149] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:43.316 [conn149] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.316 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.316 [conn149] end connection 127.0.0.1:34892 (4 connections now open)
m30001| Fri Feb 22 12:35:43.316 [initandlisten] connection accepted from 127.0.0.1:53010 #150 (5 connections now open)
m30001| Fri Feb 22 12:35:43.317 [conn150] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:43.317 [conn150] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.317 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.317 [conn150] end connection 127.0.0.1:53010 (4 connections now open)
m30001| Fri Feb 22 12:35:43.318 [initandlisten] connection accepted from 127.0.0.1:62167 #151 (5 connections now open)
m30001| Fri Feb 22 12:35:43.318 [conn151] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_or3 query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:43.318 [conn151] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:43.318 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:43.318 [conn151] end connection 127.0.0.1:62167 (4 connections now open)
m30001| Fri Feb 22 12:35:43.319 [initandlisten] connection accepted from 127.0.0.1:55654 #152 (5 connections now open)
m30999| Fri Feb 22 12:35:43.325 [conn1] DROP: test.jstests_or3
m30001| Fri Feb 22 12:35:43.325 [conn152] CMD: drop test.jstests_or3
m30001| Fri Feb 22 12:35:43.331 [conn152] build index test.jstests_or3 { _id: 1 }
m30001| Fri Feb 22 12:35:43.332 [conn152] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.332 [conn152] info: creating collection test.jstests_or3 on add index
m30001| Fri Feb 22 12:35:43.332 [conn152] build index test.jstests_or3 { x: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:43.332 [conn152] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.333 [conn152] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:43.333 [conn152] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.333 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.333 [conn152] end connection 127.0.0.1:55654 (4 connections now open)
m30001| Fri Feb 22 12:35:43.334 [initandlisten] connection accepted from 127.0.0.1:44008 #153 (5 connections now open)
m30001| Fri Feb 22 12:35:43.334 [conn153] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:43.335 [conn153] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.335 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.335 [conn153] end connection 127.0.0.1:44008 (4 connections now open)
m30001| Fri Feb 22 12:35:43.335 [initandlisten] connection accepted from 127.0.0.1:36718 #154 (5 connections now open)
m30001| Fri Feb 22 12:35:43.336 [conn154] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_or3 query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:43.336 [conn154] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:43.336 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:43.336 [conn154] end connection 127.0.0.1:36718 (4 connections now open)
m30001| Fri Feb 22 12:35:43.336 [initandlisten] connection accepted from 127.0.0.1:65351 #155 (5 connections now open)
m30999| Fri Feb 22 12:35:43.341 [conn1] DROP: test.jstests_or3
m30001| Fri Feb 22 12:35:43.342 [conn155] CMD: drop test.jstests_or3
m30001| Fri Feb 22 12:35:43.347 [conn155] build index test.jstests_or3 { _id: 1 }
m30001| Fri Feb 22 12:35:43.348 [conn155] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.348 [conn155] info: creating collection test.jstests_or3 on add index
m30001| Fri Feb 22 12:35:43.348 [conn155] build index test.jstests_or3 { x: 1.0, a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:35:43.348 [conn155] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:35:43.349 [conn155] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: "a" }
m30001| Fri Feb 22 12:35:43.349 [conn155] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.349 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.350 [conn155] end connection 127.0.0.1:65351 (4 connections now open)
m30001| Fri Feb 22 12:35:43.350 [initandlisten] connection accepted from 127.0.0.1:44041 #156 (5 connections now open)
m30001| Fri Feb 22 12:35:43.351 [conn156] assertion 13086 $and/$or/$nor must be a nonempty array ns:test.jstests_or3 query:{ x: 0.0, $nor: {} }
m30001| Fri Feb 22 12:35:43.351 [conn156] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor must be a nonempty array", code: 13086 }
m30999| Fri Feb 22 12:35:43.351 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13086 $and/$or/$nor must be a nonempty array
m30001| Fri Feb 22 12:35:43.351 [conn156] end connection 127.0.0.1:44041 (4 connections now open)
m30001| Fri Feb 22 12:35:43.351 [initandlisten] connection accepted from 127.0.0.1:42810 #157 (5 connections now open)
m30001| Fri Feb 22 12:35:43.352 [conn157] assertion 13087 $and/$or/$nor match element must be an object ns:test.jstests_or3 query:{ x: 0.0, $nor: [ "a" ] }
m30001| Fri Feb 22 12:35:43.352 [conn157] problem detected during query over test.jstests_or3 : { $err: "$and/$or/$nor match element must be an object", code: 13087 }
m30999| Fri Feb 22 12:35:43.352 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13087 $and/$or/$nor match element must be an object
m30001| Fri Feb 22 12:35:43.352 [conn157] end connection 127.0.0.1:42810 (4 connections now open)
m30001| Fri Feb 22 12:35:43.353 [initandlisten] connection accepted from 127.0.0.1:56157 #158 (5 connections now open)
75ms
******************************************* Test : jstests/killop.js ...
m30999| Fri Feb 22 12:35:43.361 [conn1] DROP: test.jstests_killop
m30001| Fri Feb 22 12:35:43.361 [conn158] CMD: drop test.jstests_killop
m30001| Fri Feb 22 12:35:43.362 [conn158] build index test.jstests_killop { _id: 1 }
m30001| Fri Feb 22 12:35:43.362 [conn158] build index done. scanned 0 total records.
0 secs Fri Feb 22 12:35:43.397 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.jstests_killop.count( { $where: function() { while( 1 ) { ; } } } ) localhost:30999/admin Fri Feb 22 12:35:43.430 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.jstests_killop.count( { $where: function() { while( 1 ) { ; } } } ) localhost:30999/admin sh15734| MongoDB shell version: 2.4.0-rc1-pre- sh15735| MongoDB shell version: 2.4.0-rc1-pre- sh15734| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:43.487 [mongosMain] connection accepted from 127.0.0.1:43206 #25 (2 connections now open) m30001| Fri Feb 22 12:35:43.491 [initandlisten] connection accepted from 127.0.0.1:43332 #159 (6 connections now open) sh15735| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:43.519 [mongosMain] connection accepted from 127.0.0.1:56955 #26 (3 connections now open) m30000| Fri Feb 22 12:35:43.523 [initandlisten] connection accepted from 127.0.0.1:51377 #20 (9 connections now open) m30001| Fri Feb 22 12:35:43.524 [initandlisten] connection accepted from 127.0.0.1:40264 #160 (7 connections now open) m30999| Fri Feb 22 12:35:43.634 [conn1] want to kill op: op: "shard0001:491351" m30001| Fri Feb 22 12:35:43.634 [conn4] going to 
kill op: op: 491351 m30001| Fri Feb 22 12:35:43.634 [conn159] JavaScript execution terminated m30001| Fri Feb 22 12:35:43.634 [conn159] Count with ns: test.jstests_killop and query: { $where: function () { while( 1 ) { ; } } } failed with exception: 16712 JavaScript execution terminated code: 16712 m30999| Fri Feb 22 12:35:43.634 [conn1] want to kill op: op: "shard0001:491354" m30001| Fri Feb 22 12:35:43.634 [conn4] going to kill op: op: 491354 m30001| Fri Feb 22 12:35:43.634 [conn160] JavaScript execution terminated m30001| Fri Feb 22 12:35:43.634 [conn160] Count with ns: test.jstests_killop and query: { $where: function () { while( 1 ) { ; } } } failed with exception: 16712 JavaScript execution terminated code: 16712 m30001| Fri Feb 22 12:35:43.635 [conn159] command test.$cmd command: { count: "jstests_killop", query: { $where: function () { while( 1 ) { ; } } } } ntoreturn:1 keyUpdates:0 locks(micros) r:144014 reslen:87 144ms m30001| Fri Feb 22 12:35:43.637 [conn160] command test.$cmd command: { count: "jstests_killop", query: { $where: function () { while( 1 ) { ; } } } } ntoreturn:1 keyUpdates:0 locks(micros) r:112675 reslen:87 112ms sh15734| Fri Feb 22 12:35:43.638 JavaScript execution failed: count failed: { sh15734| "shards" : { sh15734| sh15734| }, sh15734| "cause" : { sh15734| "ok" : 0, sh15734| "errmsg" : "16712 JavaScript execution terminated" sh15734| }, sh15734| "ok" : 0, sh15734| "errmsg" : "failed on : shard0001" sh15734| } at src/mongo/shell/query.js:L180 sh15735| Fri Feb 22 12:35:43.640 JavaScript execution failed: count failed: { sh15735| "shards" : { sh15735| sh15735| }, sh15735| "cause" : { sh15735| "ok" : 0, sh15735| "errmsg" : "16712 JavaScript execution terminated" sh15735| }, sh15735| "ok" : 0, sh15735| "errmsg" : "failed on : shard0001" sh15735| } at src/mongo/shell/query.js:L180 m30999| Fri Feb 22 12:35:43.647 [conn25] end connection 127.0.0.1:43206 (2 connections now open) m30999| Fri Feb 22 12:35:43.648 [conn26] end connection 
127.0.0.1:56955 (1 connection now open) 296ms ******************************************* Test : jstests/indexg.js ... m30999| Fri Feb 22 12:35:43.655 [conn1] DROP: test.jstests_indexg m30001| Fri Feb 22 12:35:43.655 [conn158] CMD: drop test.jstests_indexg m30001| Fri Feb 22 12:35:43.656 [conn158] build index test.jstests_indexg { _id: 1 } m30001| Fri Feb 22 12:35:43.656 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.658 [conn158] build index test.jstests_indexg { list: 1.0 } m30001| Fri Feb 22 12:35:43.658 [conn158] build index done. scanned 2 total records. 0 secs 6ms ******************************************* Test : jstests/indexq.js ... m30999| Fri Feb 22 12:35:43.661 [conn1] DROP: test.jstests_indexq m30001| Fri Feb 22 12:35:43.661 [conn158] CMD: drop test.jstests_indexq m30001| Fri Feb 22 12:35:43.661 [conn158] build index test.jstests_indexq { _id: 1 } m30001| Fri Feb 22 12:35:43.662 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.662 [conn158] info: creating collection test.jstests_indexq on add index m30001| Fri Feb 22 12:35:43.662 [conn158] build index test.jstests_indexq { a: 1.0 } m30001| Fri Feb 22 12:35:43.662 [conn158] build index done. scanned 0 total records. 0 secs 6ms ******************************************* Test : jstests/updateb.js ... m30999| Fri Feb 22 12:35:43.667 [conn1] DROP: test.updateb m30001| Fri Feb 22 12:35:43.667 [conn158] CMD: drop test.updateb m30001| Fri Feb 22 12:35:43.668 [conn158] build index test.updateb { _id: 1 } m30001| Fri Feb 22 12:35:43.668 [conn158] build index done. scanned 0 total records. 0 secs 3ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_multinest1.js ******************************************* Test : jstests/index_check8.js ... 
m30999| Fri Feb 22 12:35:43.670 [conn1] DROP: test.index_check8 m30001| Fri Feb 22 12:35:43.670 [conn158] CMD: drop test.index_check8 m30001| Fri Feb 22 12:35:43.670 [conn158] build index test.index_check8 { _id: 1 } m30001| Fri Feb 22 12:35:43.671 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.671 [conn158] build index test.index_check8 { a: 1.0, b: 1.0, c: 1.0 } m30001| Fri Feb 22 12:35:43.672 [conn158] build index done. scanned 1 total records. 0 secs m30001| Fri Feb 22 12:35:43.672 [conn158] build index test.index_check8 { a: 1.0, b: 1.0, d: 1.0, e: 1.0 } m30001| Fri Feb 22 12:35:43.673 [conn158] build index done. scanned 1 total records. 0 secs 7ms ******************************************* Test : jstests/ork.js ... m30999| Fri Feb 22 12:35:43.681 [conn1] DROP: test.jstests_ork m30001| Fri Feb 22 12:35:43.681 [conn158] CMD: drop test.jstests_ork m30001| Fri Feb 22 12:35:43.682 [conn158] build index test.jstests_ork { _id: 1 } m30001| Fri Feb 22 12:35:43.682 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:35:43.682 [conn158] info: creating collection test.jstests_ork on add index m30001| Fri Feb 22 12:35:43.682 [conn158] build index test.jstests_ork { a: 1.0 } m30001| Fri Feb 22 12:35:43.683 [conn158] build index done. scanned 0 total records. 0 secs 9ms ******************************************* Test : jstests/cursora.js ... m30999| Fri Feb 22 12:35:43.691 [conn1] DROP: test.cursora m30001| Fri Feb 22 12:35:43.691 [conn158] CMD: drop test.cursora m30001| Fri Feb 22 12:35:43.691 [conn158] build index test.cursora { _id: 1 } m30001| Fri Feb 22 12:35:43.692 [conn158] build index done. scanned 0 total records. 
0 secs cursora.js startParallelShell n:1500 atomic:undefined Fri Feb 22 12:35:43.801 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep(50); db.cursora.remove( {} ); db.getLastError(); localhost:30999/admin sh15736| MongoDB shell version: 2.4.0-rc1-pre- sh15736| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:43.866 [mongosMain] connection accepted from 127.0.0.1:51885 #27 (2 connections now open) sh15736| null m30999| Fri Feb 22 12:35:43.955 [conn27] end connection 127.0.0.1:51885 (1 connection now open) cursora.js warning: shouldn't have counted all n: 1500 num: 1500 m30999| Fri Feb 22 12:35:43.961 [conn1] DROP: test.cursora m30001| Fri Feb 22 12:35:43.961 [conn158] CMD: drop test.cursora m30001| Fri Feb 22 12:35:43.965 [conn158] build index test.cursora { _id: 1 } m30001| Fri Feb 22 12:35:43.965 [conn158] build index done. scanned 0 total records. 
0 secs cursora.js startParallelShell n:5000 atomic:undefined Fri Feb 22 12:35:44.290 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep(50); db.cursora.remove( {} ); db.getLastError(); localhost:30999/admin sh15739| MongoDB shell version: 2.4.0-rc1-pre- sh15739| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:44.376 [mongosMain] connection accepted from 127.0.0.1:63561 #28 (2 connections now open) m30001| Fri Feb 22 12:35:44.476 [conn158] query test.cursora query: { query: { $where: function () { num = 2; for (var x = 0; x < 1000; x++) num += 2; return... }, orderby: { _id: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:4617 keyUpdates:0 numYields: 3 locks(micros) r:344343 nreturned:1 reslen:558 184ms m30001| Fri Feb 22 12:35:44.569 [conn160] remove test.cursora ndeleted:5000 keyUpdates:0 numYields: 4 locks(micros) w:162728 138ms sh15739| null m30999| Fri Feb 22 12:35:44.581 [conn28] end connection 127.0.0.1:63561 (1 connection now open) m30999| Fri Feb 22 12:35:44.588 [conn1] DROP: test.cursora m30001| Fri Feb 22 12:35:44.588 [conn158] CMD: drop test.cursora m30001| Fri Feb 22 12:35:44.594 [conn158] build index test.cursora { _id: 1 } m30001| Fri Feb 22 12:35:44.595 [conn158] build index done. scanned 0 total records. 
0 secs cursora.js startParallelShell n:1500 atomic:true Fri Feb 22 12:35:44.702 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep(50); db.cursora.remove( {$atomic:true} ); db.getLastError(); localhost:30999/admin sh15740| MongoDB shell version: 2.4.0-rc1-pre- sh15740| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:44.767 [mongosMain] connection accepted from 127.0.0.1:37790 #29 (2 connections now open) sh15740| null m30999| Fri Feb 22 12:35:44.858 [conn29] end connection 127.0.0.1:37790 (1 connection now open) cursora.js warning: shouldn't have counted all n: 1500 num: 1500 m30999| Fri Feb 22 12:35:44.865 [conn1] DROP: test.cursora m30001| Fri Feb 22 12:35:44.865 [conn158] CMD: drop test.cursora m30001| Fri Feb 22 12:35:44.869 [conn158] build index test.cursora { _id: 1 } m30001| Fri Feb 22 12:35:44.870 [conn158] build index done. scanned 0 total records. 
0 secs cursora.js startParallelShell n:5000 atomic:true Fri Feb 22 12:35:45.175 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep(50); db.cursora.remove( {$atomic:true} ); db.getLastError(); localhost:30999/admin sh15741| MongoDB shell version: 2.4.0-rc1-pre- sh15741| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:45.243 [mongosMain] connection accepted from 127.0.0.1:39338 #30 (2 connections now open) m30001| Fri Feb 22 12:35:45.389 [conn158] query test.cursora query: { query: { $where: function () { num = 2; for (var x = 0; x < 1000; x++) num += 2; return... }, orderby: { _id: -1.0 }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:0 keyUpdates:0 numYields: 2 locks(micros) r:234015 nreturned:1 reslen:558 210ms sh15741| null m30999| Fri Feb 22 12:35:45.401 [conn30] end connection 127.0.0.1:39338 (1 connection now open) cursora.js SUCCESS 1724ms ******************************************* Test : jstests/big_object1.js ... m30999| Fri Feb 22 12:35:45.411 [conn1] DROP: test.big_object1 m30001| Fri Feb 22 12:35:45.411 [conn158] CMD: drop test.big_object1 m30001| Fri Feb 22 12:35:45.504 [conn158] build index test.big_object1 { _id: 1 } m30001| Fri Feb 22 12:35:45.506 [conn158] build index done. scanned 0 total records. 0.001 secs Fri Feb 22 12:35:46.106 Assertion: 10334:BSONObj size: 17408175 (0xAFA00901) is invalid. Size must be between 0 and 16793600(16MB) First element: 0: "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa..." 
0x7d76d8 0x7a5f8c 0x7a601c 0x6d7602 0x78c9af 0x78cf79 0x78c83c 0x79e700 0x78943a 0x2c3deddec439 0x2c3deddd809a 0x2c3dedd0c76e 0x2c3deedc95de 0x2c3dedd0cfa7 0x2c3dedd06116 0x8bef8e 0x8c09a3 0x862e0d 0x7929d6 0x782792
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo15printStackTraceERSo+0x28 [0x7d76d8]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo11msgassertedEiPKc+0x9c [0x7a5f8c]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'0x3a601c [0x7a601c]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZNK5mongo7BSONObj14_assertInvalidEv+0x272 [0x6d7602]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo7V8Scope9v8ToMongoEN2v86HandleINS1_6ObjectEEEi+0x57f [0x78c9af]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo7V8Scope16v8ToMongoElementERNS_14BSONObjBuilderERKSsN2v86HandleINS5_5ValueEEEiPNS_7BSONObjE+0x1b9 [0x78cf79]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo7V8Scope9v8ToMongoEN2v86HandleINS1_6ObjectEEEi+0x40c [0x78c83c]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo11mongoInsertEPNS_7V8ScopeERKN2v89ArgumentsE+0x390 [0x79e700]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo7V8Scope10v8CallbackERKN2v89ArgumentsE+0xaa [0x78943a]
[0x2c3deddec439]
[0x2c3deddd809a]
[0x2c3dedd0c76e]
[0x2c3deedc95de]
[0x2c3dedd0cfa7]
[0x2c3dedd06116]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN2v88internalL6InvokeEbNS0_6HandleINS0_10JSFunctionEEENS1_INS0_6ObjectEEEiPS5_Pb+0x11e [0x8bef8e]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN2v88internal9Execution4CallENS0_6HandleINS0_6ObjectEEES4_iPS4_Pbb+0x103 [0x8c09a3]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN2v86Script3RunEv+0x12d [0x862e0d]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo7V8Scope4execERKNS_10StringDataERKSsbbbi+0x116 [0x7929d6]
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo'_ZN5mongo5Scope8execFileERKSsbbi+0x142 [0x782792]
{ "sharded" : false, "primary" : "shard0001", "ns" : "test.big_object1", "count" : 6, "size" : 82, "avgObjSize" : 13.666666666666666, "storageSize" : 162, "numExtents" : 1, "nindexes" : 1, "lastExtentSize" : 162, "paddingFactor" : 1, "systemFlags" : 1, "userFlags" : 0, "totalIndexSize" : 0, "indexSizes" : { "_id_" : 0 }, "ok" : 1 } m30999| Fri Feb 22 12:35:46.966 [conn1] DROP: test.big_object1 m30001| Fri Feb 22 12:35:46.966 [conn158] CMD: drop test.big_object1 SUCCESS 1562ms ******************************************* Test : jstests/find5.js ... m30999| Fri Feb 22 12:35:46.973 [conn1] DROP: test.find5 m30001| Fri Feb 22 12:35:46.974 [conn158] CMD: drop test.find5 m30001| Fri Feb 22 12:35:46.975 [conn158] build index test.find5 { _id: 1 } m30001| Fri Feb 22 12:35:46.976 [conn158] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:35:46.982 [conn1] DROP: test.find5 m30001| Fri Feb 22 12:35:46.982 [conn158] CMD: drop test.find5 m30001| Fri Feb 22 12:35:46.987 [conn158] build index test.find5 { _id: 1 } m30001| Fri Feb 22 12:35:46.987 [conn158] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:35:46.988 [conn1] DROP: test.find5 m30001| Fri Feb 22 12:35:46.988 [conn158] CMD: drop test.find5 m30001| Fri Feb 22 12:35:46.992 [conn158] build index test.find5 { _id: 1 } m30001| Fri Feb 22 12:35:46.992 [conn158] build index done. scanned 0 total records. 0 secs 21ms ******************************************* Test : jstests/currentop.js ... BEGIN currentop.js m30999| Fri Feb 22 12:35:46.994 [conn1] DROP: test.jstests_currentop m30001| Fri Feb 22 12:35:46.994 [conn158] CMD: drop test.jstests_currentop m30001| Fri Feb 22 12:35:46.995 [conn158] build index test.jstests_currentop { _id: 1 } m30001| Fri Feb 22 12:35:46.996 [conn158] build index done. scanned 0 total records. 
0 secs count:100 start shell Fri Feb 22 12:35:47.056 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.jstests_currentop.count( { '$where': function() { sleep(1000); } } ) localhost:30999/admin sleep sh15743| MongoDB shell version: 2.4.0-rc1-pre- sh15743| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:47.147 [mongosMain] connection accepted from 127.0.0.1:48035 #31 (2 connections now open) m30999| Fri Feb 22 12:35:47.761 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276623881c8e7453916061 m30999| Fri Feb 22 12:35:47.762 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
inprog: [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] inprog: [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] wait have some ops [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { 
"timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] ok Fri Feb 22 12:35:48.102 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.jstests_currentop.update( { '$where': function() { sleep(150); } }, { 'num': 1 }, false, true ); db.getLastError() localhost:30999/admin go [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", 
"threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] [ { "opid" : "shard0001:504564", "active" : true, "secs_running" : 0, "op" : "query", "ns" : "test.jstests_currentop", "query" : { "count" : "jstests_currentop", "query" : { "$where" : function () { sleep(1000); } } }, "client_s" : "127.0.0.1:40264", "desc" : "conn160", "threadId" : "0xb9", "connectionId" : 160, "locks" : { "^" : "r", "^test" : "R" }, "waitingForLock" : false, "numYields" : 0, "lockStats" : { "timeLockedMicros" : { }, "timeAcquiringMicros" : { "r" : NumberLong(7), "w" : NumberLong(0) } } } ] readops: [ ] total: 1 w: 0 r:0 m30999| Fri Feb 22 12:35:48.108 [conn1] want to kill op: op: "shard0001:504564" m30001| Fri Feb 22 12:35:48.108 [conn4] going to kill op: op: 504564 sh15744| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:35:48.180 [conn160] JavaScript execution terminated m30001| Fri Feb 22 12:35:48.180 [conn160] Count with ns: test.jstests_currentop and query: { $where: function () { sleep(1000); } } failed with exception: 16712 JavaScript execution terminated code: 16712 m30001| Fri Feb 22 12:35:48.181 [conn160] command test.$cmd command: { count: "jstests_currentop", query: { $where: function () { sleep(1000); } } } ntoreturn:1 keyUpdates:0 locks(micros) r:1030185 reslen:87 1030ms sh15743| Fri Feb 22 12:35:48.184 JavaScript execution failed: count failed: { sh15743| "shards" : { sh15743| sh15743| }, sh15743| "cause" : { sh15743| "ok" : 0, sh15743| "errmsg" : "16712 JavaScript execution terminated" sh15743| }, sh15743| "ok" : 0, sh15743| "errmsg" : "failed on : shard0001" sh15743| } at src/mongo/shell/query.js:L180 sh15744| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:35:48.189 [mongosMain] connection accepted from 127.0.0.1:65331 #32 (3 connections now open) m30999| Fri Feb 22 
12:35:48.193 [conn31] end connection 127.0.0.1:48035 (2 connections now open) m30999| Fri Feb 22 12:35:53.764 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276629881c8e7453916062 m30999| Fri Feb 22 12:35:53.764 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30999| Fri Feb 22 12:35:59.766 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127662f881c8e7453916063 m30999| Fri Feb 22 12:35:59.767 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30001| Fri Feb 22 12:36:04.151 [conn159] update test.jstests_currentop query: { $where: function () { sleep(150); } } update: { num: 1.0 } nscanned:100 nupdated:0 keyUpdates:0 numYields: 98 locks(micros) w:29962711 15953ms sh15744| null m30999| Fri Feb 22 12:36:04.163 [conn32] end connection 127.0.0.1:65331 (1 connection now open) 17176ms ******************************************* Test : jstests/fts_blogwild.js ... m30000| Fri Feb 22 12:36:04.180 [initandlisten] connection accepted from 127.0.0.1:38969 #21 (10 connections now open) m30001| Fri Feb 22 12:36:04.181 [initandlisten] connection accepted from 127.0.0.1:61796 #161 (8 connections now open) m30999| Fri Feb 22 12:36:04.182 [conn1] DROP: test.text_blogwild m30001| Fri Feb 22 12:36:04.183 [conn158] CMD: drop test.text_blogwild m30001| Fri Feb 22 12:36:04.184 [conn158] build index test.text_blogwild { _id: 1 } m30001| Fri Feb 22 12:36:04.185 [conn158] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:04.186 [conn158] build index test.text_blogwild { _fts: "text", _ftsx: 1 } m30001| Fri Feb 22 12:36:04.187 [conn158] build index done. scanned 3 total records. 
0.001 secs m30001| Fri Feb 22 12:36:04.191 [conn4] CMD: dropIndexes test.text_blogwild m30001| Fri Feb 22 12:36:04.196 [conn158] build index test.text_blogwild { _fts: "text", _ftsx: 1 } m30001| Fri Feb 22 12:36:04.197 [conn158] build index done. scanned 3 total records. 0 secs 30ms ******************************************* Test : jstests/basic5.js ... m30999| Fri Feb 22 12:36:04.206 [conn1] DROP: test.basic5 m30001| Fri Feb 22 12:36:04.207 [conn158] CMD: drop test.basic5 m30001| Fri Feb 22 12:36:04.208 [conn158] build index test.basic5 { _id: 1 } m30001| Fri Feb 22 12:36:04.209 [conn158] build index done. scanned 0 total records. 0.001 secs 10ms ******************************************* Test : jstests/index_sparse1.js ... m30999| Fri Feb 22 12:36:04.210 [conn1] DROP: test.index_sparse1 m30001| Fri Feb 22 12:36:04.211 [conn158] CMD: drop test.index_sparse1 m30001| Fri Feb 22 12:36:04.212 [conn158] build index test.index_sparse1 { _id: 1 } m30001| Fri Feb 22 12:36:04.212 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.214 [conn158] build index test.index_sparse1 { x: 1.0 } m30001| Fri Feb 22 12:36:04.215 [conn158] build index done. scanned 5 total records. 0.001 secs m30001| Fri Feb 22 12:36:04.217 [conn4] CMD: dropIndexes test.index_sparse1 m30001| Fri Feb 22 12:36:04.221 [conn158] build index test.index_sparse1 { x: 1.0 } m30001| Fri Feb 22 12:36:04.222 [conn158] build index done. scanned 5 total records. 0 secs m30001| Fri Feb 22 12:36:04.224 [conn4] CMD: dropIndexes test.index_sparse1 m30001| Fri Feb 22 12:36:04.228 [conn158] build index test.index_sparse1 { x: 1.0 } m30001| Fri Feb 22 12:36:04.232 [conn158] build index test.index_sparse1 { x: 1.0 } m30001| Fri Feb 22 12:36:04.232 [conn158] build index done. scanned 4 total records. 
0 secs m30001| Fri Feb 22 12:36:04.234 [conn4] CMD: dropIndexes test.index_sparse1 m30001| Fri Feb 22 12:36:04.238 [conn158] build index test.index_sparse1 { x: 1.0 } 32ms ******************************************* Test : jstests/in8.js ... m30999| Fri Feb 22 12:36:04.249 [conn1] DROP: test.jstests_in8 m30001| Fri Feb 22 12:36:04.249 [conn158] CMD: drop test.jstests_in8 m30001| Fri Feb 22 12:36:04.250 [conn158] build index test.jstests_in8 { _id: 1 } m30001| Fri Feb 22 12:36:04.251 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.256 [conn158] build index test.jstests_in8 { key: 1.0 } m30001| Fri Feb 22 12:36:04.258 [conn158] build index done. scanned 3 total records. 0.001 secs 22ms ******************************************* Test : jstests/arrayfind3.js ... m30999| Fri Feb 22 12:36:04.266 [conn1] DROP: test.arrayfind3 m30001| Fri Feb 22 12:36:04.266 [conn158] CMD: drop test.arrayfind3 m30001| Fri Feb 22 12:36:04.267 [conn158] build index test.arrayfind3 { _id: 1 } m30001| Fri Feb 22 12:36:04.268 [conn158] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:04.269 [conn158] build index test.arrayfind3 { a: 1.0 } m30001| Fri Feb 22 12:36:04.270 [conn158] build index done. scanned 3 total records. 0 secs 8ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_update1.js ******************************************* Test : jstests/mr_errorhandling.js ... m30999| Fri Feb 22 12:36:04.278 [conn1] DROP: test.mr_errorhandling m30001| Fri Feb 22 12:36:04.278 [conn158] CMD: drop test.mr_errorhandling m30001| Fri Feb 22 12:36:04.279 [conn158] build index test.mr_errorhandling { _id: 1 } m30001| Fri Feb 22 12:36:04.280 [conn158] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:36:04.310 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61 m30001| Fri Feb 22 12:36:04.311 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61_inc m30001| Fri Feb 22 12:36:04.311 [conn158] build index test.tmp.mr.mr_errorhandling_61_inc { 0: 1 } m30001| Fri Feb 22 12:36:04.311 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.312 [conn158] build index test.tmp.mr.mr_errorhandling_61 { _id: 1 } m30001| Fri Feb 22 12:36:04.312 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.316 [conn158] CMD: drop test.mr_errorhandling_out m30001| Fri Feb 22 12:36:04.324 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61 m30001| Fri Feb 22 12:36:04.324 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61 m30001| Fri Feb 22 12:36:04.324 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61_inc m30001| Fri Feb 22 12:36:04.328 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61 m30001| Fri Feb 22 12:36:04.328 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_61_inc m30999| Fri Feb 22 12:36:04.330 [conn1] DROP: test.mr_errorhandling_out m30001| Fri Feb 22 12:36:04.330 [conn158] CMD: drop test.mr_errorhandling_out m30001| Fri Feb 22 12:36:04.335 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_62 m30001| Fri Feb 22 12:36:04.335 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_62_inc m30001| Fri Feb 22 12:36:04.336 [conn158] build index test.tmp.mr.mr_errorhandling_62_inc { 0: 1 } m30001| Fri Feb 22 12:36:04.336 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.336 [conn158] build index test.tmp.mr.mr_errorhandling_62 { _id: 1 } m30001| Fri Feb 22 12:36:04.337 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.339 [conn158] JavaScript execution failed: Error: fast_emit takes 2 args near '{ emit( this.' 
(line 3) m30001| Fri Feb 22 12:36:04.339 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_62 m30001| Fri Feb 22 12:36:04.343 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_62_inc m30001| Fri Feb 22 12:36:04.346 [conn158] mr failed, removing collection :: caused by :: 16722 JavaScript execution failed: Error: fast_emit takes 2 args near '{ emit( this.' (line 3) m30001| Fri Feb 22 12:36:04.346 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_62 m30001| Fri Feb 22 12:36:04.346 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_62_inc m30001| Fri Feb 22 12:36:04.377 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63 m30001| Fri Feb 22 12:36:04.377 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63_inc m30001| Fri Feb 22 12:36:04.378 [conn158] build index test.tmp.mr.mr_errorhandling_63_inc { 0: 1 } m30001| Fri Feb 22 12:36:04.378 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.378 [conn158] build index test.tmp.mr.mr_errorhandling_63 { _id: 1 } m30001| Fri Feb 22 12:36:04.379 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.382 [conn158] CMD: drop test.mr_errorhandling_out m30001| Fri Feb 22 12:36:04.390 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63 m30001| Fri Feb 22 12:36:04.390 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63 m30001| Fri Feb 22 12:36:04.390 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63_inc m30001| Fri Feb 22 12:36:04.394 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63 m30001| Fri Feb 22 12:36:04.394 [conn158] CMD: drop test.tmp.mr.mr_errorhandling_63_inc m30999| Fri Feb 22 12:36:04.396 [conn1] DROP: test.mr_errorhandling_out m30001| Fri Feb 22 12:36:04.396 [conn158] CMD: drop test.mr_errorhandling_out 130ms ******************************************* Test : jstests/explain8.js ... 
m30999| Fri Feb 22 12:36:04.406 [conn1] DROP: test.jstests_explain8 m30001| Fri Feb 22 12:36:04.407 [conn158] CMD: drop test.jstests_explain8 m30001| Fri Feb 22 12:36:04.407 [conn158] build index test.jstests_explain8 { _id: 1 } m30001| Fri Feb 22 12:36:04.408 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:04.408 [conn158] info: creating collection test.jstests_explain8 on add index m30001| Fri Feb 22 12:36:04.408 [conn158] build index test.jstests_explain8 { a: 1.0 } m30001| Fri Feb 22 12:36:04.408 [conn158] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:05.768 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276635881c8e7453916064 m30999| Fri Feb 22 12:36:05.769 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30001| Fri Feb 22 12:36:10.441 [conn158] query test.jstests_explain8 query: { query: { $or: [ { a: 1000.0, $where: function slow() { m30001| sleep( this.a ); m30001| return true; m30001| } }, { a: 2000.0, $where: function slow() { m30001| sleep( this.a ); m30001| return true; m30001| } }, { a: 3000.0, $where: function slow() { m30001| sleep( this.a ); m30001| return true; m30001| } } ] }, $explain: true } ntoreturn:0 ntoskip:0 nscanned:3 keyUpdates:0 numYields: 1 locks(micros) r:7063285 nreturned:1 reslen:1439 6032ms 6041ms ******************************************* Test : jstests/sort7.js ... m30999| Fri Feb 22 12:36:10.446 [conn1] DROP: test.jstests_sort7 m30001| Fri Feb 22 12:36:10.450 [conn158] CMD: drop test.jstests_sort7 m30001| Fri Feb 22 12:36:10.451 [conn158] build index test.jstests_sort7 { _id: 1 } m30001| Fri Feb 22 12:36:10.452 [conn158] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:10.453 [conn158] build index test.jstests_sort7 { a.x: 1.0 } m30001| Fri Feb 22 12:36:10.453 [conn158] build index done. scanned 3 total records. 
0 secs 16ms ******************************************* Test : jstests/find_and_modify_server6588.js ... m30999| Fri Feb 22 12:36:10.471 [conn1] DROP: test.find_and_modify_sever6588 m30001| Fri Feb 22 12:36:10.472 [conn158] CMD: drop test.find_and_modify_sever6588 m30001| Fri Feb 22 12:36:10.472 [conn158] build index test.find_and_modify_sever6588 { _id: 1 } m30001| Fri Feb 22 12:36:10.473 [conn158] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:10.474 [conn1] DROP: test.find_and_modify_sever6588 m30001| Fri Feb 22 12:36:10.474 [conn158] CMD: drop test.find_and_modify_sever6588 m30001| Fri Feb 22 12:36:10.479 [conn158] build index test.find_and_modify_sever6588 { _id: 1 } m30001| Fri Feb 22 12:36:10.480 [conn158] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:10.481 [conn1] DROP: test.find_and_modify_sever6588 m30001| Fri Feb 22 12:36:10.481 [conn158] CMD: drop test.find_and_modify_sever6588 m30001| Fri Feb 22 12:36:10.485 [conn158] build index test.find_and_modify_sever6588 { _id: 1 } m30001| Fri Feb 22 12:36:10.485 [conn158] build index done. scanned 0 total records. 0 secs 28ms ******************************************* Test : jstests/evalf.js ... m30999| Fri Feb 22 12:36:10.490 [conn1] DROP: test.jstests_evalf m30001| Fri Feb 22 12:36:10.491 [conn158] CMD: drop test.jstests_evalf m30001| Fri Feb 22 12:36:10.535 [conn158] build index test.jstests_evalf { _id: 1 } m30001| Fri Feb 22 12:36:10.536 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:10.568 [conn158] JavaScript execution failed: ReferenceError: db is not defined near 'var id = db.jstests_evalf.findOne().opid' m30001| Fri Feb 22 12:36:10.568 [conn158] Count with ns: test.jstests_evalf and query: { $where: function () { var id = db.jstests_evalf.findOne().opid; db.killOp( id ... 
} failed with exception: 16722 JavaScript execution failed: ReferenceError: db is not defined near 'var id = db.jstests_evalf.findOne().opid' code: 16722 m30001| Fri Feb 22 12:36:10.571 [conn158] JavaScript execution failed: count failed: { m30001| "ok" : 0, m30001| "errmsg" : "16722 JavaScript execution failed: ReferenceError: db is not defined near 'var id = db.jstests_evalf.findOne().opid' " m30001| } at src/mongo/shell/query.js:L180 130ms ******************************************* Test : jstests/nestedarr1.js ... m30999| Fri Feb 22 12:36:10.617 [conn1] DROP: test.arrNestTest m30001| Fri Feb 22 12:36:10.618 [conn158] CMD: drop test.arrNestTest m30001| Fri Feb 22 12:36:10.618 [conn158] build index test.arrNestTest { _id: 1 } m30001| Fri Feb 22 12:36:10.619 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:10.619 [conn158] info: creating collection test.arrNestTest on add index m30001| Fri Feb 22 12:36:10.619 [conn158] build index test.arrNestTest { a: 1.0 } m30001| Fri Feb 22 12:36:10.620 [conn158] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:10.623 [conn158] test.arrNestTest ERROR: key too large len:2416 max:1024 2416 test.arrNestTest.$a_1 m30001| Fri Feb 22 12:36:10.625 [conn158] test.arrNestTest ERROR: key too large len:2416 max:1024 2416 test.arrNestTest.$a_1 m30001| Fri Feb 22 12:36:10.626 [conn158] test.arrNestTest ERROR: key too large len:2416 max:1024 2416 test.arrNestTest.$a_1 Test succeeded! 12ms ******************************************* Test : jstests/count3.js ... m30999| Fri Feb 22 12:36:10.631 [conn1] DROP: test.count3 m30001| Fri Feb 22 12:36:10.632 [conn158] CMD: drop test.count3 m30001| Fri Feb 22 12:36:10.632 [conn158] build index test.count3 { _id: 1 } m30001| Fri Feb 22 12:36:10.633 [conn158] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:36:10.635 [conn1] DROP: test.count3 m30001| Fri Feb 22 12:36:10.635 [conn158] CMD: drop test.count3 m30001| Fri Feb 22 12:36:10.639 [conn158] build index test.count3 { _id: 1 } m30001| Fri Feb 22 12:36:10.639 [conn158] build index done. scanned 0 total records. 0 secs 12ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_nearwithin.js !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/apply_ops2.js ******************************************* Test : jstests/pullall2.js ... m30999| Fri Feb 22 12:36:10.641 [conn1] DROP: test.pullall2 m30001| Fri Feb 22 12:36:10.642 [conn158] CMD: drop test.pullall2 m30001| Fri Feb 22 12:36:10.643 [conn158] build index test.pullall2 { _id: 1 } m30001| Fri Feb 22 12:36:10.644 [conn158] build index done. scanned 0 total records. 0.001 secs 5ms ******************************************* Test : jstests/update_multi3.js ... m30999| Fri Feb 22 12:36:10.647 [conn1] DROP: test.update_multi3 m30001| Fri Feb 22 12:36:10.647 [conn158] CMD: drop test.update_multi3 m30001| Fri Feb 22 12:36:10.648 [conn158] build index test.update_multi3 { _id: 1 } m30001| Fri Feb 22 12:36:10.649 [conn158] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:10.651 [conn1] DROP: test.update_multi3 m30001| Fri Feb 22 12:36:10.651 [conn158] CMD: drop test.update_multi3 m30001| Fri Feb 22 12:36:10.655 [conn158] build index test.update_multi3 { _id: 1 } m30001| Fri Feb 22 12:36:10.656 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:10.656 [conn158] info: creating collection test.update_multi3 on add index m30001| Fri Feb 22 12:36:10.656 [conn158] build index test.update_multi3 { k: 1.0 } m30001| Fri Feb 22 12:36:10.656 [conn158] build index done. scanned 0 total records. 0 secs 13ms ******************************************* Test : jstests/remove7.js ... 
m30999| Fri Feb 22 12:36:10.660 [conn1] DROP: test.remove7 m30001| Fri Feb 22 12:36:10.660 [conn158] CMD: drop test.remove7 m30001| Fri Feb 22 12:36:10.661 [conn158] build index test.remove7 { _id: 1 } m30001| Fri Feb 22 12:36:10.662 [conn158] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:10.720 [conn158] build index test.remove7 { tags: 1.0 } m30001| Fri Feb 22 12:36:10.805 [conn158] build index done. scanned 1000 total records. 0.085 secs m30001| Fri Feb 22 12:36:10.881 [conn158] key seems to have moved in the index, refinding. 4:8059000 m30001| Fri Feb 22 12:36:10.900 [conn158] key seems to have moved in the index, refinding. 4:8046000 m30001| Fri Feb 22 12:36:11.038 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30001| Fri Feb 22 12:36:11.189 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30001| Fri Feb 22 12:36:11.360 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30001| Fri Feb 22 12:36:11.527 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30001| Fri Feb 22 12:36:11.702 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30999| Fri Feb 22 12:36:11.771 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127663b881c8e7453916065 m30999| Fri Feb 22 12:36:11.771 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30001| Fri Feb 22 12:36:11.914 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30001| Fri Feb 22 12:36:12.038 [conn158] key seems to have moved in the index, refinding. 4:8063000 m30001| Fri Feb 22 12:36:12.252 [conn158] key seems to have moved in the index, refinding. 4:8063000 1778ms ******************************************* Test : jstests/minmax.js ... 
m30999| Fri Feb 22 12:36:12.445 [conn1] DROP: test.jstests_minmax m30001| Fri Feb 22 12:36:12.445 [conn158] CMD: drop test.jstests_minmax m30001| Fri Feb 22 12:36:12.446 [conn158] build index test.jstests_minmax { _id: 1 } m30001| Fri Feb 22 12:36:12.447 [conn158] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:12.448 [conn158] info: creating collection test.jstests_minmax on add index m30001| Fri Feb 22 12:36:12.448 [conn158] build index test.jstests_minmax { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:36:12.449 [conn158] build index done. scanned 0 total records. 0.001 secs [ { "_id" : ObjectId("5127663c000decca0877491f"), "a" : 1, "b" : 2 } ] m30999| Fri Feb 22 12:36:12.453 [conn1] DROP: test.jstests_minmax m30001| Fri Feb 22 12:36:12.453 [conn158] CMD: drop test.jstests_minmax m30001| Fri Feb 22 12:36:12.461 [conn158] build index test.jstests_minmax { _id: 1 } m30001| Fri Feb 22 12:36:12.461 [conn158] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.462 [conn158] info: creating collection test.jstests_minmax on add index m30001| Fri Feb 22 12:36:12.462 [conn158] build index test.jstests_minmax { a: 1.0, b: -1.0 } m30001| Fri Feb 22 12:36:12.462 [conn158] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:36:12.467 [conn158] Assertion: 10365:requested keyPattern does not match specified keys m30001| 0xdb9678 0xd7f91c 0xd7f9ac 0xc12e4f 0xc13dac 0xc13f2e 0xc15190 0xc158ea 0xc1715e 0xc174cd 0xc1e0ca 0xc1e258 0xc1e513 0xb9206e 0xb93c12 0xb3c436 0x933241 0xda9b4b 0xe0adfe 0xfffffd7fff257024 m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15printStackTraceERSo+0x28 [0xdb9678] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11msgassertedEiPKc+0x9c [0xd7f91c] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'0x97f9ac [0xd7f9ac] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator21setHintedPlanForIndexERNS_12IndexDetailsE+0x16f [0xc12e4f] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator11addHintPlanEPNS_16NamespaceDetailsE+0x18c [0xc13dac] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator19addShortCircuitPlanEPNS_16NamespaceDetailsE+0xde [0xc13f2e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator15addInitialPlansEv+0x110 [0xc15190] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo12QueryPlanSet4makeEPKcSt8auto_ptrINS_17FieldRangeSetPairEES5_RKNS_7BSONObjES8_RKN5boost10shared_ptrIKNS_11ParsedQueryEEES8_NS_18QueryPlanGenerator18RecordedPlanPolicyES8_S8_b+0xca [0xc158ea] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MultiPlanScanner4initERKNS_7BSONObjES3_S3_+0xfe [0xc1715e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MultiPlanScanner4makeERKNS_10StringDataERKNS_7BSONObjES6_RKN5boost10shared_ptrIKNS_11ParsedQueryEEES6_NS_18QueryPlanGenerator18RecordedPlanPolicyES6_S6_+0x6d [0xc174cd] m30001| 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15CursorGenerator19setMultiPlanScannerEv+0xda [0xc1e0ca] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15CursorGenerator8generateEv+0x68 [0xc1e258] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo25NamespaceDetailsTransient9getCursorERKNS_10StringDataERKNS_7BSONObjES6_RKNS_24QueryPlanSelectionPolicyERKN5boost10shared_ptrIKNS_11ParsedQueryEEEbPNS_16QueryPlanSummaryE+0x33 [0xc1e513] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo23queryWithQueryOptimizerEiRKSsRKNS_7BSONObjERNS_5CurOpES4_S4_RKN5boost10shared_ptrINS_11ParsedQueryEEES4_RKNS_12ChunkVersionERNS7_10scoped_ptrINS_25PageFaultRetryableSectionEEERNSG_INS_19NoPageFaultsAllowedEEERNS_7MessageE+0x12e [0xb9206e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x972 [0xb93c12] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xba6 [0xb3c436] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x91 [0x933241] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x43b [0xda9b4b] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'thread_proxy+0x7e [0xe0adfe] m30001| /lib/amd64/libc.so.1'_thrp_setup+0xbc [0xfffffd7fff257024] m30001| Fri Feb 22 12:36:12.467 [conn158] assertion 10365 requested keyPattern does not match specified keys ns:test.jstests_minmax query:{ query: {}, $min: { a: 1.0 }, $hint: { a: 1.0, b: -1.0 } } m30001| Fri Feb 22 12:36:12.467 [conn158] problem detected during query over test.jstests_minmax : { $err: "requested keyPattern does not match specified keys", code: 10365 } 
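The 10365/10366 assertions raised in this test come from jstests/minmax.js deliberately sending invalid `$min`/`$max`/`$hint` combinations: the min/max key pattern must match the hinted index's key pattern, `$min` and `$max` must share the same pattern, and a `$natural` hint cannot be combined with `$min`/`$max`. The sketch below is not MongoDB's actual validation code; it is a hypothetical stand-alone helper that reproduces the observable contract behind the error strings logged here, using the failing queries from this log as inputs.

```javascript
// Hypothetical sketch of the $min/$max/$hint validation contract behind
// assertion codes 10365 and 10366 seen in this log. This is NOT mongod's
// implementation; the function name and structure are assumptions.

// Returns the error string mongod would report, or null if the combination
// is acceptable. min/max/hint are plain key-pattern objects (or null).
function checkMinMaxHint(min, max, hint) {
  const keys = (o) => (o ? Object.keys(o) : null);
  const same = (a, b) =>
    a.length === b.length && a.every((k, i) => k === b[i]);

  // A $natural hint bypasses indexes entirely, so $min/$max cannot apply.
  if (hint && Object.keys(hint).includes("$natural") && (min || max)) {
    return "natural order cannot be specified with $min/$max"; // code 10366
  }
  // $min and $max must name the same fields in the same order.
  if (min && max && !same(keys(min), keys(max))) {
    return "min and max keys do not share pattern"; // code 10365
  }
  // The $min/$max pattern must match the hinted index key pattern.
  const pattern = keys(min) || keys(max);
  if (pattern && hint && !same(pattern, keys(hint))) {
    return "requested keyPattern does not match specified keys"; // code 10365
  }
  return null;
}

// The three failing query shapes from this log section:
console.log(checkMinMaxHint({ a: 1 }, null, { a: 1, b: -1 }));
// -> "requested keyPattern does not match specified keys"
console.log(checkMinMaxHint({ a: 1, b: 1 }, { a: 1 }, { a: 1, b: -1 }));
// -> "min and max keys do not share pattern"
console.log(checkMinMaxHint({ a: 1 }, null, { $natural: 1 }));
// -> "natural order cannot be specified with $min/$max"
```

Against a live server the same shapes would be sent as `find().hint({a: 1, b: -1}).min({a: 1})` and so on; the point of the sketch is only that all three error strings in the surrounding log are the expected, intentional outcomes of the test, not server bugs.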
m30999| Fri Feb 22 12:36:12.467 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10365 requested keyPattern does not match specified keys m30001| Fri Feb 22 12:36:12.468 [conn158] end connection 127.0.0.1:56157 (7 connections now open) m30001| Fri Feb 22 12:36:12.468 [conn159] Assertion: 10365:min and max keys do not share pattern m30001| 0xdb9678 0xd7f91c 0xd7f9ac 0xc12e4f 0xc13dac 0xc13f2e 0xc15190 0xc158ea 0xc1715e 0xc174cd 0xc1e0ca 0xc1e258 0xc1e513 0xb9206e 0xb93c12 0xb3c436 0x933241 0xda9b4b 0xe0adfe 0xfffffd7fff257024 m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15printStackTraceERSo+0x28 [0xdb9678] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11msgassertedEiPKc+0x9c [0xd7f91c] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'0x97f9ac [0xd7f9ac] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator21setHintedPlanForIndexERNS_12IndexDetailsE+0x16f [0xc12e4f] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator11addHintPlanEPNS_16NamespaceDetailsE+0x18c [0xc13dac] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator19addShortCircuitPlanEPNS_16NamespaceDetailsE+0xde [0xc13f2e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator15addInitialPlansEv+0x110 [0xc15190] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo12QueryPlanSet4makeEPKcSt8auto_ptrINS_17FieldRangeSetPairEES5_RKNS_7BSONObjES8_RKN5boost10shared_ptrIKNS_11ParsedQueryEEES8_NS_18QueryPlanGenerator18RecordedPlanPolicyES8_S8_b+0xca [0xc158ea] m30001| 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MultiPlanScanner4initERKNS_7BSONObjES3_S3_+0xfe [0xc1715e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MultiPlanScanner4makeERKNS_10StringDataERKNS_7BSONObjES6_RKN5boost10shared_ptrIKNS_11ParsedQueryEEES6_NS_18QueryPlanGenerator18RecordedPlanPolicyES6_S6_+0x6d [0xc174cd] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15CursorGenerator19setMultiPlanScannerEv+0xda [0xc1e0ca] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15CursorGenerator8generateEv+0x68 [0xc1e258] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo25NamespaceDetailsTransient9getCursorERKNS_10StringDataERKNS_7BSONObjES6_RKNS_24QueryPlanSelectionPolicyERKN5boost10shared_ptrIKNS_11ParsedQueryEEEbPNS_16QueryPlanSummaryE+0x33 [0xc1e513] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo23queryWithQueryOptimizerEiRKSsRKNS_7BSONObjERNS_5CurOpES4_S4_RKN5boost10shared_ptrINS_11ParsedQueryEEES4_RKNS_12ChunkVersionERNS7_10scoped_ptrINS_25PageFaultRetryableSectionEEERNSG_INS_19NoPageFaultsAllowedEEERNS_7MessageE+0x12e [0xb9206e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x972 [0xb93c12] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xba6 [0xb3c436] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x91 [0x933241] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x43b [0xda9b4b] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'thread_proxy+0x7e [0xe0adfe] m30001| /lib/amd64/libc.so.1'_thrp_setup+0xbc 
[0xfffffd7fff257024] m30001| Fri Feb 22 12:36:12.473 [conn159] assertion 10365 min and max keys do not share pattern ns:test.jstests_minmax query:{ query: {}, $min: { a: 1.0, b: 1.0 }, $max: { a: 1.0 }, $hint: { a: 1.0, b: -1.0 } } m30001| Fri Feb 22 12:36:12.473 [conn159] problem detected during query over test.jstests_minmax : { $err: "min and max keys do not share pattern", code: 10365 } m30999| Fri Feb 22 12:36:12.473 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10365 min and max keys do not share pattern m30001| Fri Feb 22 12:36:12.473 [conn159] end connection 127.0.0.1:43332 (7 connections now open) m30001| Fri Feb 22 12:36:12.473 [conn160] Assertion: 10365:min and max keys do not share pattern m30001| 0xdb9678 0xd7f91c 0xd7f9ac 0xc12e4f 0xc13dac 0xc13f2e 0xc15190 0xc158ea 0xc1715e 0xc174cd 0xc1e0ca 0xc1e258 0xc1e513 0xb9206e 0xb93c12 0xb3c436 0x933241 0xda9b4b 0xe0adfe 0xfffffd7fff257024 m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15printStackTraceERSo+0x28 [0xdb9678] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo11msgassertedEiPKc+0x9c [0xd7f91c] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'0x97f9ac [0xd7f9ac] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator21setHintedPlanForIndexERNS_12IndexDetailsE+0x16f [0xc12e4f] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator11addHintPlanEPNS_16NamespaceDetailsE+0x18c [0xc13dac] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator19addShortCircuitPlanEPNS_16NamespaceDetailsE+0xde [0xc13f2e] m30001| 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo18QueryPlanGenerator15addInitialPlansEv+0x110 [0xc15190] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo12QueryPlanSet4makeEPKcSt8auto_ptrINS_17FieldRangeSetPairEES5_RKNS_7BSONObjES8_RKN5boost10shared_ptrIKNS_11ParsedQueryEEES8_NS_18QueryPlanGenerator18RecordedPlanPolicyES8_S8_b+0xca [0xc158ea] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MultiPlanScanner4initERKNS_7BSONObjES3_S3_+0xfe [0xc1715e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MultiPlanScanner4makeERKNS_10StringDataERKNS_7BSONObjES6_RKN5boost10shared_ptrIKNS_11ParsedQueryEEES6_NS_18QueryPlanGenerator18RecordedPlanPolicyES6_S6_+0x6d [0xc174cd] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15CursorGenerator19setMultiPlanScannerEv+0xda [0xc1e0ca] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo15CursorGenerator8generateEv+0x68 [0xc1e258] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo25NamespaceDetailsTransient9getCursorERKNS_10StringDataERKNS_7BSONObjES6_RKNS_24QueryPlanSelectionPolicyERKN5boost10shared_ptrIKNS_11ParsedQueryEEEbPNS_16QueryPlanSummaryE+0x33 [0xc1e513] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo23queryWithQueryOptimizerEiRKSsRKNS_7BSONObjERNS_5CurOpES4_S4_RKN5boost10shared_ptrINS_11ParsedQueryEEES4_RKNS_12ChunkVersionERNS7_10scoped_ptrINS_25PageFaultRetryableSectionEEERNSG_INS_19NoPageFaultsAllowedEEERNS_7MessageE+0x12e [0xb9206e] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo8runQueryERNS_7MessageERNS_12QueryMessageERNS_5CurOpES1_+0x972 [0xb93c12] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16assembleResponseERNS_7MessageERNS_10DbResponseERKNS_11HostAndPortE+0xba6 [0xb3c436] m30001| 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo16MyMessageHandler7processERNS_7MessageEPNS_21AbstractMessagingPortEPNS_9LastErrorE+0x91 [0x933241] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'_ZN5mongo17PortMessageServer17handleIncomingMsgEPv+0x43b [0xda9b4b] m30001| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod'thread_proxy+0x7e [0xe0adfe] m30001| /lib/amd64/libc.so.1'_thrp_setup+0xbc [0xfffffd7fff257024] m30001| Fri Feb 22 12:36:12.475 [conn160] assertion 10365 min and max keys do not share pattern ns:test.jstests_minmax query:{ query: {}, $min: { b: 1.0 }, $max: { a: 1.0, b: 2.0 }, $hint: { a: 1.0, b: -1.0 } } m30001| Fri Feb 22 12:36:12.475 [conn160] problem detected during query over test.jstests_minmax : { $err: "min and max keys do not share pattern", code: 10365 } m30999| Fri Feb 22 12:36:12.475 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10365 min and max keys do not share pattern m30001| Fri Feb 22 12:36:12.475 [conn160] end connection 127.0.0.1:40264 (5 connections now open) m30001| Fri Feb 22 12:36:12.476 [initandlisten] connection accepted from 127.0.0.1:40409 #162 (6 connections now open) m30001| Fri Feb 22 12:36:12.476 [conn162] assertion 10366 natural order cannot be specified with $min/$max ns:test.jstests_minmax query:{ query: {}, $min: { a: 1.0 }, $hint: { $natural: 1.0 } } m30001| Fri Feb 22 12:36:12.476 [conn162] problem detected during query over test.jstests_minmax : { $err: "natural order cannot be specified with $min/$max", code: 10366 } m30999| Fri Feb 22 12:36:12.476 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: 
"shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10366 natural order cannot be specified with $min/$max m30001| Fri Feb 22 12:36:12.477 [conn162] end connection 127.0.0.1:40409 (5 connections now open) m30001| Fri Feb 22 12:36:12.477 [initandlisten] connection accepted from 127.0.0.1:35468 #163 (6 connections now open) m30001| Fri Feb 22 12:36:12.478 [conn163] assertion 10366 natural order cannot be specified with $min/$max ns:test.jstests_minmax query:{ query: {}, $max: { a: 1.0 }, $hint: { $natural: 1.0 } } m30001| Fri Feb 22 12:36:12.478 [conn163] problem detected during query over test.jstests_minmax : { $err: "natural order cannot be specified with $min/$max", code: 10366 } m30999| Fri Feb 22 12:36:12.478 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10366 natural order cannot be specified with $min/$max m30001| Fri Feb 22 12:36:12.478 [conn163] end connection 127.0.0.1:35468 (5 connections now open) m30999| Fri Feb 22 12:36:12.479 [conn1] DROP: test.jstests_minmax m30001| Fri Feb 22 12:36:12.479 [initandlisten] connection accepted from 127.0.0.1:60836 #164 (6 connections now open) m30001| Fri Feb 22 12:36:12.482 [conn164] CMD: drop test.jstests_minmax m30001| Fri Feb 22 12:36:12.491 [conn164] build index test.jstests_minmax { _id: 1 } m30001| Fri Feb 22 12:36:12.492 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.492 [conn164] info: creating collection test.jstests_minmax on add index m30001| Fri Feb 22 12:36:12.492 [conn164] build index test.jstests_minmax { a: 1.0 } m30001| Fri Feb 22 12:36:12.492 [conn164] build index done. scanned 0 total records. 
0 secs 56ms ******************************************* Test : jstests/multi2.js ... m30999| Fri Feb 22 12:36:12.494 [conn1] DROP: test.multi2 m30001| Fri Feb 22 12:36:12.495 [conn164] CMD: drop test.multi2 m30001| Fri Feb 22 12:36:12.495 [conn164] build index test.multi2 { _id: 1 } m30001| Fri Feb 22 12:36:12.497 [conn164] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:12.497 [conn164] build index test.multi2 { x: 1.0 } m30001| Fri Feb 22 12:36:12.498 [conn164] build index done. scanned 4 total records. 0 secs m30001| Fri Feb 22 12:36:12.500 [conn4] CMD: dropIndexes test.multi2 m30001| Fri Feb 22 12:36:12.503 [conn164] build index test.multi2 { x: 1.0, a: 1.0 } m30001| Fri Feb 22 12:36:12.504 [conn164] build index done. scanned 4 total records. 0 secs 13ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geof.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/compact.js ******************************************* Test : jstests/update_arraymatch3.js ... m30999| Fri Feb 22 12:36:12.514 [conn1] DROP: test.update_arraymatch3 m30001| Fri Feb 22 12:36:12.514 [conn164] CMD: drop test.update_arraymatch3 m30001| Fri Feb 22 12:36:12.515 [conn164] build index test.update_arraymatch3 { _id: 1 } m30001| Fri Feb 22 12:36:12.516 [conn164] build index done. scanned 0 total records. 0 secs 12ms ******************************************* Test : jstests/all.js ... m30999| Fri Feb 22 12:36:12.527 [conn1] DROP: test.jstests_all m30001| Fri Feb 22 12:36:12.528 [conn164] CMD: drop test.jstests_all m30001| Fri Feb 22 12:36:12.529 [conn164] build index test.jstests_all { _id: 1 } m30001| Fri Feb 22 12:36:12.530 [conn164] build index done. scanned 0 total records. 
0.001 secs m30999| Fri Feb 22 12:36:12.540 [conn1] DROP: test.jstests_all m30001| Fri Feb 22 12:36:12.540 [conn164] CMD: drop test.jstests_all m30001| Fri Feb 22 12:36:12.544 [conn164] build index test.jstests_all { _id: 1 } m30001| Fri Feb 22 12:36:12.545 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.545 [conn164] info: creating collection test.jstests_all on add index m30001| Fri Feb 22 12:36:12.545 [conn164] build index test.jstests_all { a: 1.0 } m30001| Fri Feb 22 12:36:12.545 [conn164] build index done. scanned 0 total records. 0 secs 38ms ******************************************* Test : jstests/date3.js ... m30999| Fri Feb 22 12:36:12.586 [conn1] DROP: test.date3 m30001| Fri Feb 22 12:36:12.586 [conn164] CMD: drop test.date3 m30001| Fri Feb 22 12:36:12.587 [conn164] build index test.date3 { _id: 1 } m30001| Fri Feb 22 12:36:12.587 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.590 [conn164] build index test.date3 { d: 1.0 } m30001| Fri Feb 22 12:36:12.591 [conn164] build index done. scanned 3 total records. 0 secs 38ms ******************************************* Test : jstests/rename2.js ... m30999| Fri Feb 22 12:36:12.595 [conn1] DROP: test.rename2a m30001| Fri Feb 22 12:36:12.596 [conn164] CMD: drop test.rename2a m30999| Fri Feb 22 12:36:12.596 [conn1] DROP: test.rename2b m30001| Fri Feb 22 12:36:12.596 [conn164] CMD: drop test.rename2b m30001| Fri Feb 22 12:36:12.597 [conn164] build index test.rename2a { _id: 1 } m30001| Fri Feb 22 12:36:12.597 [conn164] build index done. scanned 0 total records. 0 secs 15ms >>>>>>>>>>>>>>> skipping jstests/_runner.js ******************************************* Test : jstests/arrayfind7.js ... 
m30999| Fri Feb 22 12:36:12.610 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.610 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.611 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.612 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.612 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.613 [conn164] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:36:12.614 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.614 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.621 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.621 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.622 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.622 [conn164] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:36:12.624 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.624 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.630 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.631 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.631 [conn164] info: creating collection test.jstests_arrayfind7 on add index m30001| Fri Feb 22 12:36:12.631 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.631 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:12.633 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.634 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.640 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.640 [conn164] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:36:12.641 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.641 [conn164] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:36:12.643 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.643 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.649 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.650 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.650 [conn164] info: creating collection test.jstests_arrayfind7 on add index m30001| Fri Feb 22 12:36:12.650 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.650 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:12.652 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.652 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.658 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.659 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.659 [conn164] build index test.jstests_arrayfind7 { a.d.e: 1.0, a.b.c: 1.0 } m30999| Fri Feb 22 12:36:12.663 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.663 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.667 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.668 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.668 [conn164] info: creating collection test.jstests_arrayfind7 on add index m30001| Fri Feb 22 12:36:12.668 [conn164] build index test.jstests_arrayfind7 { a.d.e: 1.0, a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.668 [conn164] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:36:12.671 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.671 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.678 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.678 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.679 [conn164] build index test.jstests_arrayfind7 { a.x: 1.0, a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.679 [conn164] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:36:12.682 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.682 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.689 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.689 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.689 [conn164] info: creating collection test.jstests_arrayfind7 on add index m30001| Fri Feb 22 12:36:12.689 [conn164] build index test.jstests_arrayfind7 { a.x: 1.0, a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.690 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:12.692 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.692 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.698 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.699 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.699 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.700 [conn164] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:36:12.701 [conn1] DROP: test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.702 [conn164] CMD: drop test.jstests_arrayfind7 m30001| Fri Feb 22 12:36:12.708 [conn164] build index test.jstests_arrayfind7 { _id: 1 } m30001| Fri Feb 22 12:36:12.708 [conn164] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:36:12.708 [conn164] info: creating collection test.jstests_arrayfind7 on add index m30001| Fri Feb 22 12:36:12.708 [conn164] build index test.jstests_arrayfind7 { a.b.c: 1.0 } m30001| Fri Feb 22 12:36:12.708 [conn164] build index done. scanned 0 total records. 0 secs 101ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/queryoptimizera.js ******************************************* Test : jstests/loadserverscripts.js ... ---- testing db.loadServerScripts() ---- m30999| Fri Feb 22 12:36:12.715 [conn1] couldn't find database [loadserverscripts] in config db m30999| Fri Feb 22 12:36:12.717 [conn1] put [loadserverscripts] on: shard0000:localhost:30000 m30000| Fri Feb 22 12:36:12.717 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/loadserverscripts.ns, filling with zeroes... m30000| Fri Feb 22 12:36:12.718 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/loadserverscripts.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:36:12.718 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/loadserverscripts.0, filling with zeroes... m30000| Fri Feb 22 12:36:12.718 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/loadserverscripts.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:36:12.718 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/loadserverscripts.1, filling with zeroes... m30000| Fri Feb 22 12:36:12.719 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/loadserverscripts.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:36:12.722 [conn6] build index loadserverscripts.system.js { _id: 1 } m30000| Fri Feb 22 12:36:12.723 [conn6] build index done. scanned 0 total records. 
0 secs Fri Feb 22 12:36:12.761 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');db.getSisterDB("loadserverscripts").system.js.insert ( {_id: "myfunc2", "value": function(){ return "myfunc2"; } } );db.getLastError(); localhost:30999/admin sh15769| MongoDB shell version: 2.4.0-rc1-pre- sh15769| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:12.848 [mongosMain] connection accepted from 127.0.0.1:65024 #33 (2 connections now open) m30001| Fri Feb 22 12:36:12.850 [initandlisten] connection accepted from 127.0.0.1:50552 #165 (7 connections now open) sh15769| null m30999| Fri Feb 22 12:36:12.858 [conn33] end connection 127.0.0.1:65024 (1 connection now open) ---- completed test of db.loadServerScripts() ---- 155ms ******************************************* Test : jstests/queryoptimizer9.js ... m30999| Fri Feb 22 12:36:12.869 [conn1] DROP: test.jstests_queryoptimizer9 m30001| Fri Feb 22 12:36:12.869 [conn164] CMD: drop test.jstests_queryoptimizer9 m30001| Fri Feb 22 12:36:12.870 [conn164] build index test.jstests_queryoptimizer9 { _id: 1 } m30001| Fri Feb 22 12:36:12.871 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.871 [conn164] info: creating collection test.jstests_queryoptimizer9 on add index m30001| Fri Feb 22 12:36:12.871 [conn164] build index test.jstests_queryoptimizer9 { a: 1.0 } m30001| Fri Feb 22 12:36:12.871 [conn164] build index done. scanned 0 total records. 
0 secs 9ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/notablescan.js ******************************************* Test : jstests/basic1.js ... m30999| Fri Feb 22 12:36:12.879 [conn1] DROP: test.basic1 m30001| Fri Feb 22 12:36:12.879 [conn164] CMD: drop test.basic1 m30001| Fri Feb 22 12:36:12.879 [conn164] build index test.basic1 { _id: 1 } m30001| Fri Feb 22 12:36:12.880 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.881 [conn4] CMD: validate test.basic1 m30001| Fri Feb 22 12:36:12.881 [conn4] validating index 0: test.basic1.$_id_ 6ms ******************************************* Test : jstests/update_addToSet3.js ... m30999| Fri Feb 22 12:36:12.885 [conn1] DROP: test.update_addToSet3 m30001| Fri Feb 22 12:36:12.885 [conn164] CMD: drop test.update_addToSet3 m30001| Fri Feb 22 12:36:12.885 [conn164] build index test.update_addToSet3 { _id: 1 } m30001| Fri Feb 22 12:36:12.886 [conn164] build index done. scanned 0 total records. 0 secs 6ms ******************************************* Test : jstests/find1.js ... m30999| Fri Feb 22 12:36:12.889 [conn1] DROP: test.find1 m30001| Fri Feb 22 12:36:12.889 [conn164] CMD: drop test.find1 lookAtDocumentMetrics: false m30001| Fri Feb 22 12:36:12.891 [conn164] build index test.find1 { _id: 1 } m30001| Fri Feb 22 12:36:12.891 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.897 [conn4] CMD: validate test.find1 m30001| Fri Feb 22 12:36:12.897 [conn4] validating index 0: test.find1.$_id_ 9ms ******************************************* Test : jstests/find_and_modify4.js ... m30999| Fri Feb 22 12:36:12.898 [conn1] DROP: test.find_and_modify4 m30001| Fri Feb 22 12:36:12.898 [conn164] CMD: drop test.find_and_modify4 m30001| Fri Feb 22 12:36:12.899 [conn164] build index test.find_and_modify4 { _id: 1 } m30001| Fri Feb 22 12:36:12.900 [conn164] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:36:12.903 [conn1] DROP: test.find_and_modify4 m30001| Fri Feb 22 12:36:12.903 [conn164] CMD: drop test.find_and_modify4 m30001| Fri Feb 22 12:36:12.909 [conn164] build index test.find_and_modify4 { _id: 1 } m30001| Fri Feb 22 12:36:12.909 [conn164] build index done. scanned 0 total records. 0 secs 16ms ******************************************* Test : jstests/server1470.js ... m30999| Fri Feb 22 12:36:12.914 [conn1] DROP: test.server1470 m30001| Fri Feb 22 12:36:12.914 [conn164] CMD: drop test.server1470 m30001| Fri Feb 22 12:36:12.915 [conn164] build index test.server1470 { _id: 1 } m30001| Fri Feb 22 12:36:12.916 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:12.916 [conn1] DROP: test.server1470 m30001| Fri Feb 22 12:36:12.916 [conn164] CMD: drop test.server1470 m30001| Fri Feb 22 12:36:12.921 [conn164] build index test.server1470 { _id: 1 } m30001| Fri Feb 22 12:36:12.921 [conn164] build index done. scanned 0 total records. 0 secs 9ms ******************************************* Test : jstests/index12.js ... 0ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2nearComplex.js ******************************************* Test : jstests/json1.js ... 8ms ******************************************* Test : jstests/regex5.js ... m30999| Fri Feb 22 12:36:12.932 [conn1] DROP: test.regex5 m30001| Fri Feb 22 12:36:12.932 [conn164] CMD: drop test.regex5 m30001| Fri Feb 22 12:36:12.932 [conn164] build index test.regex5 { _id: 1 } m30001| Fri Feb 22 12:36:12.933 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.941 [conn164] build index test.regex5 { x: 1.0 } m30001| Fri Feb 22 12:36:12.942 [conn164] build index done. scanned 2 total records. 0 secs now indexed 21ms !!!!!!!!!!!!!!! 
skipping test that has failed under sharding but might not anymore jstests/update_setOnInsert.js ******************************************* Test : jstests/oro.js ... m30999| Fri Feb 22 12:36:12.954 [conn1] DROP: test.jstests_oro m30001| Fri Feb 22 12:36:12.954 [conn164] CMD: drop test.jstests_oro m30001| Fri Feb 22 12:36:12.954 [conn164] build index test.jstests_oro { _id: 1 } m30001| Fri Feb 22 12:36:12.955 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.955 [conn164] info: creating collection test.jstests_oro on add index m30001| Fri Feb 22 12:36:12.955 [conn164] build index test.jstests_oro { a: 1.0 } m30001| Fri Feb 22 12:36:12.956 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:12.969 [conn164] build index test.jstests_oro { aa: 1.0 } m30001| Fri Feb 22 12:36:12.970 [conn164] build index done. scanned 200 total records. 0.001 secs m30001| Fri Feb 22 12:36:12.987 [conn164] build index test.jstests_oro { aaa: 1.0 } m30001| Fri Feb 22 12:36:12.989 [conn164] build index done. scanned 400 total records. 0.002 secs m30001| Fri Feb 22 12:36:13.010 [conn164] build index test.jstests_oro { aaaa: 1.0 } m30001| Fri Feb 22 12:36:13.014 [conn164] build index done. scanned 600 total records. 0.003 secs m30001| Fri Feb 22 12:36:13.035 [conn164] build index test.jstests_oro { aaaaa: 1.0 } m30001| Fri Feb 22 12:36:13.039 [conn164] build index done. scanned 800 total records. 0.004 secs m30001| Fri Feb 22 12:36:13.062 [conn164] build index test.jstests_oro { aaaaaa: 1.0 } m30001| Fri Feb 22 12:36:13.068 [conn164] build index done. scanned 1000 total records. 
0.006 secs
m30001| Fri Feb 22 12:36:13.100 [conn161] end connection 127.0.0.1:61796 (6 connections now open)
m30000| Fri Feb 22 12:36:13.100 [conn21] end connection 127.0.0.1:38969 (9 connections now open)
m30001| Fri Feb 22 12:36:13.113 [conn164] build index test.jstests_oro { aaaaaaa: 1.0 }
m30001| Fri Feb 22 12:36:13.119 [conn164] build index done. scanned 1200 total records. 0.006 secs
m30001| Fri Feb 22 12:36:13.146 [conn164] build index test.jstests_oro { aaaaaaaa: 1.0 }
m30001| Fri Feb 22 12:36:13.153 [conn164] build index done. scanned 1400 total records. 0.007 secs
m30001| Fri Feb 22 12:36:13.181 [conn164] build index test.jstests_oro { aaaaaaaaa: 1.0 }
m30001| Fri Feb 22 12:36:13.190 [conn164] build index done. scanned 1600 total records. 0.008 secs
m30001| Fri Feb 22 12:36:13.227 [conn164] build index test.jstests_oro { aaaaaaaaaa: 1.0 }
m30001| Fri Feb 22 12:36:13.237 [conn164] build index done. scanned 1800 total records. 0.009 secs
{
    "clauses" : [
        { "cursor" : "BtreeCursor a_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 1, "indexBounds" : { "a" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "aa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "aaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 2, "indexBounds" : { "aaaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 3, "indexBounds" : { "aaaaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 3, "indexBounds" : { "aaaaaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaaaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 4, "indexBounds" : { "aaaaaaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaaaaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 4, "indexBounds" : { "aaaaaaaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaaaaaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 5, "indexBounds" : { "aaaaaaaaa" : [ [ 1, 1 ] ] } },
        { "cursor" : "BtreeCursor aaaaaaaaaa_1", "isMultiKey" : false, "n" : 200, "nscannedObjects" : 200, "nscanned" : 200, "nscannedObjectsAllPlans" : 200, "nscannedAllPlans" : 200, "scanAndOrder" : false, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 6, "indexBounds" : { "aaaaaaaaaa" : [ [ 1, 1 ] ] } }
    ],
    "n" : 2000,
    "nscannedObjects" : 2000,
    "nscanned" : 2000,
    "nscannedObjectsAllPlans" : 2000,
    "nscannedAllPlans" : 2000,
    "millis" : 38,
    "server" : "bs-smartos-x86-64-1.10gen.cc:30001"
}
415ms
******************************************* Test : jstests/indexc.js ...
m30999| Fri Feb 22 12:36:13.369 [conn1] DROP: test.indexc
m30001| Fri Feb 22 12:36:13.370 [conn164] CMD: drop test.indexc
m30001| Fri Feb 22 12:36:13.370 [conn164] build index test.indexc { _id: 1 }
m30001| Fri Feb 22 12:36:13.371 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:13.378 [conn164] build index test.indexc { ts: 1.0, cats: 1.0 }
m30001| Fri Feb 22 12:36:13.385 [conn164] build index done. scanned 99 total records. 0.006 secs
m30001| Fri Feb 22 12:36:13.385 [conn164] build index test.indexc { cats: 1.0 }
m30001| Fri Feb 22 12:36:13.391 [conn164] build index done. scanned 99 total records. 0.005 secs
26ms
******************************************* Test : jstests/updatef.js ...
m30999| Fri Feb 22 12:36:13.395 [conn1] DROP: test.jstests_updatef_actual
m30001| Fri Feb 22 12:36:13.395 [conn164] build index test.jstests_updatef { _id: 1 }
m30001| Fri Feb 22 12:36:13.396 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:13.396 [conn164] CMD: drop test.jstests_updatef_actual
m30001| Fri Feb 22 12:36:13.396 [conn164] build index test.jstests_updatef_actual { _id: 1 }
m30001| Fri Feb 22 12:36:13.397 [conn164] build index done. scanned 0 total records.
0 secs
Fri Feb 22 12:36:13.505 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i=0; i < 100; ++i ) { db.jstests_updatef.renameCollection( 'jstests_updatef_' ); db.jstests_updatef_.renameCollection( 'jstests_updatef' ); } localhost:30999/admin
sh15771| MongoDB shell version: 2.4.0-rc1-pre-
sh15771| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:36:13.593 [mongosMain] connection accepted from 127.0.0.1:60512 #34 (2 connections now open)
sh15771| [object Object]
m30999| Fri Feb 22 12:36:14.758 [conn34] end connection 127.0.0.1:60512 (1 connection now open)
1370ms
******************************************* Test : jstests/indexu.js ...
m30999| Fri Feb 22 12:36:14.768 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.769 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.769 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.770 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:14.771 [conn164] build index test.jstests_indexu { a.0: 1.0 }
m30001| Fri Feb 22 12:36:14.774 [conn164] build index test.jstests_indexu { a.0: 1.0 }
m30001| Fri Feb 22 12:36:14.774 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.775 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.775 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.783 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.783 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.784 [conn164] build index test.jstests_indexu { a.1: 1.0 }
m30001| Fri Feb 22 12:36:14.786 [conn164] build index test.jstests_indexu { a.1: 1.0 }
m30001| Fri Feb 22 12:36:14.786 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.787 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.787 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.794 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.794 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.794 [conn164] info: creating collection test.jstests_indexu on add index
m30001| Fri Feb 22 12:36:14.794 [conn164] build index test.jstests_indexu { a.b: 1.0 }
m30001| Fri Feb 22 12:36:14.795 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.795 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.795 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.802 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.802 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.802 [conn164] info: creating collection test.jstests_indexu on add index
m30001| Fri Feb 22 12:36:14.802 [conn164] build index test.jstests_indexu { a.-1: 1.0 }
m30001| Fri Feb 22 12:36:14.803 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.803 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.803 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.810 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.810 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.810 [conn164] info: creating collection test.jstests_indexu on add index
m30001| Fri Feb 22 12:36:14.810 [conn164] build index test.jstests_indexu { a.00: 1.0 }
m30001| Fri Feb 22 12:36:14.811 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.812 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.812 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.818 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.819 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.819 [conn164] info: creating collection test.jstests_indexu on add index
m30001| Fri Feb 22 12:36:14.819 [conn164] build index test.jstests_indexu { a.0: 1.0, a.1: 1.0 }
m30001| Fri Feb 22 12:36:14.819 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.820 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.820 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.827 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.827 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.827 [conn164] build index test.jstests_indexu { a.0: 1.0 }
m30001| Fri Feb 22 12:36:14.828 [conn164] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:36:14.828 [conn164] build index test.jstests_indexu { a.1: 1.0 }
m30001| Fri Feb 22 12:36:14.829 [conn164] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:36:14.830 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.830 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.838 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.838 [conn164] build index done. scanned 0 total records.
0 secs
m30001| Fri Feb 22 12:36:14.839 [conn164] info: creating collection test.jstests_indexu on add index
m30001| Fri Feb 22 12:36:14.839 [conn164] build index test.jstests_indexu { a.0: 1.0 }
m30001| Fri Feb 22 12:36:14.839 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.839 [conn164] build index test.jstests_indexu { a.1: 1.0 }
m30001| Fri Feb 22 12:36:14.840 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:14.842 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.842 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.850 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.850 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.851 [conn164] build index test.jstests_indexu { a.0.0: 1.0 }
m30001| Fri Feb 22 12:36:14.851 [conn164] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:36:14.852 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.852 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.859 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.859 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.859 [conn164] build index test.jstests_indexu { a.0.0: 1.0 }
m30999| Fri Feb 22 12:36:14.861 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.861 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.866 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.866 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.866 [conn164] build index test.jstests_indexu { a.0.0: 1.0 }
m30999| Fri Feb 22 12:36:14.868 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.868 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.873 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.873 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.873 [conn164] build index test.jstests_indexu { a.0.0: 1.0 }
m30999| Fri Feb 22 12:36:14.875 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.875 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.879 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.880 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.880 [conn164] build index test.jstests_indexu { a.0.0.0.0.0.0: 1.0 }
m30001| Fri Feb 22 12:36:14.882 [conn164] build index test.jstests_indexu { a.0.0.0.0.0: 1.0 }
m30001| Fri Feb 22 12:36:14.884 [conn164] build index test.jstests_indexu { a.0.0.0.0: 1.0 }
m30001| Fri Feb 22 12:36:14.886 [conn164] build index test.jstests_indexu { a.0.0.0: 1.0 }
m30001| Fri Feb 22 12:36:14.889 [conn164] build index test.jstests_indexu { a.0.0: 1.0 }
m30001| Fri Feb 22 12:36:14.891 [conn164] build index test.jstests_indexu { a.0: 1.0 }
m30001| Fri Feb 22 12:36:14.892 [conn164] build index test.jstests_indexu { a: 1.0 }
m30001| Fri Feb 22 12:36:14.893 [conn164] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:36:14.893 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.894 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.900 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.901 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.901 [conn164] build index test.jstests_indexu { a.0.c: 1.0 }
m30999| Fri Feb 22 12:36:14.903 [conn1] DROP: test.jstests_indexu
m30001| Fri Feb 22 12:36:14.903 [conn164] CMD: drop test.jstests_indexu
m30001| Fri Feb 22 12:36:14.907 [conn164] build index test.jstests_indexu { _id: 1 }
m30001| Fri Feb 22 12:36:14.908 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.908 [conn164] build index test.jstests_indexu { a.0.b: 1.0 }
m30001| Fri Feb 22 12:36:14.909 [conn164] build index done. scanned 1 total records. 0 secs
145ms
******************************************* Test : jstests/or7.js ...
m30999| Fri Feb 22 12:36:14.917 [conn1] DROP: test.jstests_or7
m30001| Fri Feb 22 12:36:14.917 [conn164] CMD: drop test.jstests_or7
m30001| Fri Feb 22 12:36:14.917 [conn164] build index test.jstests_or7 { _id: 1 }
m30001| Fri Feb 22 12:36:14.918 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.918 [conn164] info: creating collection test.jstests_or7 on add index
m30001| Fri Feb 22 12:36:14.918 [conn164] build index test.jstests_or7 { a: 1.0 }
m30001| Fri Feb 22 12:36:14.918 [conn164] build index done. scanned 0 total records. 0 secs
16ms
******************************************* Test : jstests/covered_index_simple_1.js ...
m30999| Fri Feb 22 12:36:14.971 [conn1] DROP: test.covered_simple_1
m30001| Fri Feb 22 12:36:14.971 [conn164] CMD: drop test.covered_simple_1
m30001| Fri Feb 22 12:36:14.971 [conn164] build index test.covered_simple_1 { _id: 1 }
m30001| Fri Feb 22 12:36:14.972 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.974 [conn164] build index test.covered_simple_1 { foo: 1.0 }
m30001| Fri Feb 22 12:36:14.975 [conn164] build index done. scanned 28 total records. 0 secs
all tests pass
54ms
******************************************* Test : jstests/index_arr1.js ...
m30999| Fri Feb 22 12:36:14.980 [conn1] DROP: test.index_arr1
m30001| Fri Feb 22 12:36:14.980 [conn164] CMD: drop test.index_arr1
m30001| Fri Feb 22 12:36:14.981 [conn164] build index test.index_arr1 { _id: 1 }
m30001| Fri Feb 22 12:36:14.981 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.982 [conn164] build index test.index_arr1 { a: 1.0, b.x: 1.0 }
m30001| Fri Feb 22 12:36:14.983 [conn164] build index done. scanned 3 total records. 0 secs
m30001| Fri Feb 22 12:36:14.985 [conn164] build index test.index_arr1 { a: 1.0, b.a: 1.0, b.c: 1.0 }
m30001| Fri Feb 22 12:36:14.985 [conn164] build index done. scanned 4 total records. 0 secs
8ms
******************************************* Test : jstests/rename6.js ...
m30001| Fri Feb 22 12:36:14.988 [conn164] build index test.rename2c { _id: 1 }
m30001| Fri Feb 22 12:36:14.989 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:14.989 [conn164] info: creating collection test.rename2c on add index
m30001| Fri Feb 22 12:36:14.989 [conn164] build index test.rename2c { name: 1.0, date: 1.0, time: 1.0, renameCollection: 1.0, mongodb: 1.0, testing: 1.0, data: 1.0 }
m30001| Fri Feb 22 12:36:14.989 [conn164] build index done. scanned 0 total records. 0 secs
8ms
******************************************* Test : jstests/js4.js ...
m30999| Fri Feb 22 12:36:14.996 [conn1] DROP: test.jstests_js4
m30001| Fri Feb 22 12:36:14.996 [conn164] CMD: drop test.jstests_js4
m30001| Fri Feb 22 12:36:14.997 [conn164] build index test.jstests_js4 { _id: 1 }
m30001| Fri Feb 22 12:36:14.997 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:15.028 [conn1] DROP: test.jstests_js4
m30001| Fri Feb 22 12:36:15.030 [conn164] CMD: drop test.jstests_js4
m30001| Fri Feb 22 12:36:15.035 [conn164] build index test.jstests_js4 { _id: 1 }
m30001| Fri Feb 22 12:36:15.035 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.036 [conn4] CMD: validate test.jstests_js4
m30001| Fri Feb 22 12:36:15.037 [conn4] validating index 0: test.jstests_js4.$_id_
42ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2largewithin.js
******************************************* Test : jstests/upsert1.js ...
m30999| Fri Feb 22 12:36:15.038 [conn1] DROP: test.upsert1
m30001| Fri Feb 22 12:36:15.038 [conn164] CMD: drop test.upsert1
m30001| Fri Feb 22 12:36:15.039 [conn164] build index test.upsert1 { _id: 1 }
m30001| Fri Feb 22 12:36:15.039 [conn164] build index done. scanned 0 total records. 0 secs
4ms
******************************************* Test : jstests/update_arraymatch7.js ...
m30999| Fri Feb 22 12:36:15.042 [conn1] DROP: test.jstests_update_arraymatch7
m30001| Fri Feb 22 12:36:15.042 [conn164] CMD: drop test.jstests_update_arraymatch7
m30001| Fri Feb 22 12:36:15.043 [conn164] build index test.jstests_update_arraymatch7 { _id: 1 }
m30001| Fri Feb 22 12:36:15.043 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.044 [conn164] build index test.jstests_update_arraymatch7 { a.b: 1.0 }
m30001| Fri Feb 22 12:36:15.045 [conn164] build index done. scanned 1 total records. 0 secs
4ms
******************************************* Test : jstests/covered_index_sort_2.js ...
m30999| Fri Feb 22 12:36:15.046 [conn1] DROP: test.covered_sort_2
m30001| Fri Feb 22 12:36:15.046 [conn164] CMD: drop test.covered_sort_2
m30001| Fri Feb 22 12:36:15.047 [conn164] build index test.covered_sort_2 { _id: 1 }
m30001| Fri Feb 22 12:36:15.048 [conn164] build index done. scanned 0 total records. 0 secs
all tests pass
3ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geob.js
******************************************* Test : jstests/queryoptimizer10.js ...
m30999| Fri Feb 22 12:36:15.050 [conn1] DROP: test.jstests_queryoptimizer10
m30001| Fri Feb 22 12:36:15.050 [conn164] CMD: drop test.jstests_queryoptimizer10
m30001| Fri Feb 22 12:36:15.050 [conn164] build index test.jstests_queryoptimizer10 { _id: 1 }
m30001| Fri Feb 22 12:36:15.051 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.051 [conn164] info: creating collection test.jstests_queryoptimizer10 on add index
m30001| Fri Feb 22 12:36:15.051 [conn164] build index test.jstests_queryoptimizer10 { a: 1.0 }
m30001| Fri Feb 22 12:36:15.051 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.052 [conn164] build index test.jstests_queryoptimizer10 { zzz: 1.0 }
m30001| Fri Feb 22 12:36:15.052 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.053 [conn4] CMD: dropIndexes test.jstests_queryoptimizer10
m30001| Fri Feb 22 12:36:15.056 [conn164] build index test.jstests_queryoptimizer10 { zzz: 1.0 }
m30001| Fri Feb 22 12:36:15.056 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.057 [conn4] CMD: dropIndexes test.jstests_queryoptimizer10
m30999| Fri Feb 22 12:36:15.059 [conn1] DROP: test.jstests_queryoptimizer10
m30001| Fri Feb 22 12:36:15.059 [conn164] CMD: drop test.jstests_queryoptimizer10
m30001| Fri Feb 22 12:36:15.066 [conn164] build index test.jstests_queryoptimizer10 { _id: 1 }
m30001| Fri Feb 22 12:36:15.066 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.066 [conn164] info: creating collection test.jstests_queryoptimizer10 on add index
m30001| Fri Feb 22 12:36:15.066 [conn164] build index test.jstests_queryoptimizer10 { a: "2d" }
m30001| Fri Feb 22 12:36:15.067 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.067 [conn164] build index test.jstests_queryoptimizer10 { zzz: 1.0 }
m30001| Fri Feb 22 12:36:15.067 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.068 [conn4] CMD: dropIndexes test.jstests_queryoptimizer10
m30001| Fri Feb 22 12:36:15.070 [conn164] build index test.jstests_queryoptimizer10 { zzz: 1.0 }
m30001| Fri Feb 22 12:36:15.071 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:15.071 [conn4] CMD: dropIndexes test.jstests_queryoptimizer10
24ms
******************************************* Test : jstests/remove3.js ...
m30999| Fri Feb 22 12:36:15.074 [conn1] DROP: test.remove3
m30001| Fri Feb 22 12:36:15.074 [conn164] CMD: drop test.remove3
m30001| Fri Feb 22 12:36:15.075 [conn164] build index test.remove3 { _id: 1 }
m30001| Fri Feb 22 12:36:15.075 [conn164] build index done. scanned 0 total records. 0 secs
4ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/auth2.js
******************************************* Test : jstests/run_program1.js ...
Fri Feb 22 12:36:15.106 shell: started program true
Fri Feb 22 12:36:15.154 shell: started program false
Fri Feb 22 12:36:15.202 shell: started program this_program_doesnt_exit
sh15776| Unable to start program this_program_doesnt_exit errno:2 No such file or directory
Fri Feb 22 12:36:15.249 shell: started program echo Hello World. How are you?
sh15777| Hello World. How are you?
Fri Feb 22 12:36:15.304 shell: started program bash -c echo Hello World. "How are you?"
sh15778| Hello World. How are you?
Fri Feb 22 12:36:15.360 shell: started program sleep 0.5
Fri Feb 22 12:36:15.917 shell: started program sleep 0.5
1364ms
******************************************* Test : jstests/numberlong.js ...
4ms
******************************************* Test : jstests/sortk.js ...
m30999| Fri Feb 22 12:36:16.448 [conn1] DROP: test.jstests_sortk
m30001| Fri Feb 22 12:36:16.449 [conn164] CMD: drop test.jstests_sortk
m30999| Fri Feb 22 12:36:16.449 [conn1] DROP: test.jstests_sortk
m30001| Fri Feb 22 12:36:16.449 [conn164] CMD: drop test.jstests_sortk
m30001| Fri Feb 22 12:36:16.450 [conn164] build index test.jstests_sortk { _id: 1 }
m30001| Fri Feb 22 12:36:16.451 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:16.452 [conn164] build index test.jstests_sortk { a: 1.0, b: 1.0 }
m30001| Fri Feb 22 12:36:16.452 [conn164] build index done. scanned 6 total records. 0 secs
m30999| Fri Feb 22 12:36:16.502 [conn1] DROP: test.jstests_sortk
m30001| Fri Feb 22 12:36:16.502 [conn164] CMD: drop test.jstests_sortk
m30001| Fri Feb 22 12:36:16.509 [conn164] build index test.jstests_sortk { _id: 1 }
m30001| Fri Feb 22 12:36:16.510 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:16.510 [conn164] build index test.jstests_sortk { a: 1.0, b: -1.0 }
m30001| Fri Feb 22 12:36:16.511 [conn164] build index done. scanned 6 total records. 0 secs
m30999| Fri Feb 22 12:36:16.514 [conn1] DROP: test.jstests_sortk
m30001| Fri Feb 22 12:36:16.514 [conn164] CMD: drop test.jstests_sortk
m30001| Fri Feb 22 12:36:16.521 [conn164] build index test.jstests_sortk { _id: 1 }
m30001| Fri Feb 22 12:36:16.521 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:16.521 [conn164] info: creating collection test.jstests_sortk on add index
m30001| Fri Feb 22 12:36:16.521 [conn164] build index test.jstests_sortk { a: 1.0, b: 1.0, c: 1.0 }
m30001| Fri Feb 22 12:36:16.522 [conn164] build index done. scanned 0 total records. 0 secs
86ms
******************************************* Test : jstests/count7.js ...
m30999| Fri Feb 22 12:36:16.533 [conn1] DROP: test.jstests_count7
m30001| Fri Feb 22 12:36:16.534 [conn164] CMD: drop test.jstests_count7
m30001| Fri Feb 22 12:36:16.534 [conn164] build index test.jstests_count7 { _id: 1 }
m30001| Fri Feb 22 12:36:16.535 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:16.535 [conn164] info: creating collection test.jstests_count7 on add index
m30001| Fri Feb 22 12:36:16.535 [conn164] build index test.jstests_count7 { a: 1.0 }
m30001| Fri Feb 22 12:36:16.536 [conn164] build index done. scanned 0 total records. 0 secs
5ms
>>>>>>>>>>>>>>> skipping jstests/perf
******************************************* Test : jstests/and3.js ...
m30999| Fri Feb 22 12:36:16.540 [conn1] DROP: test.jstests_and3
m30001| Fri Feb 22 12:36:16.540 [conn164] CMD: drop test.jstests_and3
m30001| Fri Feb 22 12:36:16.541 [conn164] build index test.jstests_and3 { _id: 1 }
m30001| Fri Feb 22 12:36:16.541 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:16.542 [conn164] build index test.jstests_and3 { a: 1.0 }
m30001| Fri Feb 22 12:36:16.543 [conn164] build index done. scanned 2 total records. 0.001 secs
53ms
******************************************* Test : jstests/splitvector.js ...
m30999| Fri Feb 22 12:36:16.593 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:16.593 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:16.595 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:16.595 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:16.595 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:16.595 [conn164] build index test.jstests_splitvector { x: 1.0 }
m30001| Fri Feb 22 12:36:16.596 [conn164] build index done. scanned 0 total records.
0 secs
m30999| Fri Feb 22 12:36:16.597 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:16.597 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:16.604 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:16.604 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:16.604 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:16.604 [conn164] build index test.jstests_splitvector { x: 1.0 }
m30001| Fri Feb 22 12:36:16.605 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:17.773 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276641881c8e7453916066
m30999| Fri Feb 22 12:36:17.774 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30001| Fri Feb 22 12:36:18.213 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:36:18.295 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:18.295 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:18.304 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:18.305 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:18.305 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:18.305 [conn164] build index test.jstests_splitvector { x: 1.0 }
m30001| Fri Feb 22 12:36:18.306 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:19.054 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 12:36:19.056 [conn164] max number of requested split points reached (1) before the end of chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:36:19.056 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:19.056 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:19.065 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:19.066 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:19.066 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:19.066 [conn164] build index test.jstests_splitvector { x: 1.0 }
m30001| Fri Feb 22 12:36:19.067 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:19.820 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey }
m30001| Fri Feb 22 12:36:19.820 [conn164] limiting split vector to 500 (from 936) objects
m30999| Fri Feb 22 12:36:19.835 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:19.835 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:19.850 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:19.852 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:19.852 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:19.852 [conn164] build index test.jstests_splitvector { x: 1.0 }
m30001| Fri Feb 22 12:36:19.853 [conn164] build index done. scanned 0 total records.
0 secs m30001| Fri Feb 22 12:36:20.012 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey } m30001| Fri Feb 22 12:36:20.019 [conn164] warning: chunk is larger than 1048576 bytes because of key { x: 1.0 } m30999| Fri Feb 22 12:36:20.019 [conn1] DROP: test.jstests_splitvector m30001| Fri Feb 22 12:36:20.019 [conn164] CMD: drop test.jstests_splitvector m30001| Fri Feb 22 12:36:20.027 [conn164] build index test.jstests_splitvector { _id: 1 } m30001| Fri Feb 22 12:36:20.028 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:20.028 [conn164] info: creating collection test.jstests_splitvector on add index m30001| Fri Feb 22 12:36:20.028 [conn164] build index test.jstests_splitvector { x: 1.0 } m30001| Fri Feb 22 12:36:20.028 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:20.180 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey } m30001| Fri Feb 22 12:36:20.184 [conn164] warning: chunk is larger than 1048576 bytes because of key { x: 2.0 } m30999| Fri Feb 22 12:36:20.184 [conn1] DROP: test.jstests_splitvector m30001| Fri Feb 22 12:36:20.184 [conn164] CMD: drop test.jstests_splitvector m30001| Fri Feb 22 12:36:20.192 [conn164] build index test.jstests_splitvector { _id: 1 } m30001| Fri Feb 22 12:36:20.193 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:20.193 [conn164] info: creating collection test.jstests_splitvector on add index m30001| Fri Feb 22 12:36:20.193 [conn164] build index test.jstests_splitvector { x: 1.0 } m30001| Fri Feb 22 12:36:20.193 [conn164] build index done. scanned 0 total records. 
0 secs test.jstests_splitvector m30001| Fri Feb 22 12:36:20.195 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey } -->> { : MaxKey } m30999| Fri Feb 22 12:36:20.196 [conn1] DROP: test.jstests_splitvector m30001| Fri Feb 22 12:36:20.196 [conn164] CMD: drop test.jstests_splitvector m30001| Fri Feb 22 12:36:20.203 [conn164] build index test.jstests_splitvector { _id: 1 } m30001| Fri Feb 22 12:36:20.203 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:20.203 [conn164] info: creating collection test.jstests_splitvector on add index m30001| Fri Feb 22 12:36:20.203 [conn164] build index test.jstests_splitvector { x: 1.0, y: 1.0 } m30001| Fri Feb 22 12:36:20.204 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:21.970 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey } m30999| Fri Feb 22 12:36:22.056 [conn1] DROP: test.jstests_splitvector m30001| Fri Feb 22 12:36:22.056 [conn164] CMD: drop test.jstests_splitvector m30001| Fri Feb 22 12:36:22.065 [conn164] build index test.jstests_splitvector { _id: 1 } m30001| Fri Feb 22 12:36:22.066 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:22.066 [conn164] info: creating collection test.jstests_splitvector on add index m30001| Fri Feb 22 12:36:22.066 [conn164] build index test.jstests_splitvector { x: 1.0, y: -1.0, z: 1.0 } m30001| Fri Feb 22 12:36:22.067 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:23.775 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276647881c8e7453916067 m30999| Fri Feb 22 12:36:23.776 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
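The splitVector messages in this run ("max number of requested split points reached (1)", "limiting split vector to 500 (from 936) objects", and the oversized-chunk warnings) all come from one point-selection pass over the chunk. The following is a simplified, illustrative model of that capping behavior, not the server's actual implementation; the function name, the byte-array input, and the constants are inventions for the sketch:

```javascript
// Illustrative model of split-point selection as reflected in the log lines
// above. NOT mongod's real code: the server walks the shard key index, while
// here docBytes is just an array of per-document sizes in shard-key order.
function pickSplitPoints(docBytes, maxChunkBytes, maxPoints) {
  const points = [];
  let acc = 0;
  for (let i = 0; i < docBytes.length; i++) {
    acc += docBytes[i];
    // Emit a candidate split point each time roughly half the maximum chunk
    // size has accumulated, so resulting chunks land near half-full.
    if (acc >= maxChunkBytes / 2) {
      points.push(i); // array index standing in for a shard-key value
      acc = 0;
    }
  }
  // "limiting split vector to 500 (from 936) objects": the candidate list is
  // truncated to a hard cap before being returned to the caller.
  return points.length > maxPoints ? points.slice(0, maxPoints) : points;
}

// 936 candidate points are truncated to a 500-point cap, as in the log.
const capped = pickSplitPoints(new Array(936).fill(10), 10, 500);
```

When the caller asks for only one split point, the scan can stop as soon as that point is found, which is what the "max number of requested split points reached (1) before the end of chunk" lines record.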
m30001| Fri Feb 22 12:36:24.039 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
m30999| Fri Feb 22 12:36:24.123 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:24.123 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:24.132 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:24.133 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:24.133 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:24.133 [conn164] build index test.jstests_splitvector { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:36:24.134 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:25.020 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30001| Fri Feb 22 12:36:25.021 [conn164] max number of requested split points reached (1) before the end of chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30999| Fri Feb 22 12:36:25.022 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:25.022 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:25.033 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:25.040 [conn164] build index done. scanned 0 total records. 0.006 secs
m30001| Fri Feb 22 12:36:25.040 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:25.040 [conn164] build index test.jstests_splitvector { x: 1.0, y: -1.0, z: 1.0 }
m30001| Fri Feb 22 12:36:25.041 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:25.894 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
m30001| Fri Feb 22 12:36:25.896 [conn164] max number of requested split points reached (1) before the end of chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
m30999| Fri Feb 22 12:36:25.896 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:25.897 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:25.905 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:25.906 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:25.906 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:25.906 [conn164] build index test.jstests_splitvector { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:36:25.907 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:26.746 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30001| Fri Feb 22 12:36:26.746 [conn164] limiting split vector to 500 (from 936) objects
m30999| Fri Feb 22 12:36:26.762 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:26.762 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:26.771 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:26.772 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:26.772 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:26.772 [conn164] build index test.jstests_splitvector { x: 1.0, y: -1.0, z: 1.0 }
m30001| Fri Feb 22 12:36:26.773 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:27.717 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
m30001| Fri Feb 22 12:36:27.717 [conn164] limiting split vector to 500 (from 936) objects
m30999| Fri Feb 22 12:36:27.735 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:27.735 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:27.744 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:27.745 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:27.745 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:27.745 [conn164] build index test.jstests_splitvector { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:36:27.746 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:27.886 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30001| Fri Feb 22 12:36:27.890 [conn164] warning: chunk is larger than 1048576 bytes because of key { x: 1.0 }
m30999| Fri Feb 22 12:36:27.891 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:27.891 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:27.895 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:27.896 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:27.896 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:27.896 [conn164] build index test.jstests_splitvector { x: 1.0, y: -1.0, z: 1.0 }
m30001| Fri Feb 22 12:36:27.897 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.048 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
m30001| Fri Feb 22 12:36:28.053 [conn164] warning: chunk is larger than 1048576 bytes because of key { x: 1.0 }
m30999| Fri Feb 22 12:36:28.054 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.054 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.058 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:28.059 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.059 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:28.059 [conn164] build index test.jstests_splitvector { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:36:28.059 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.217 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30001| Fri Feb 22 12:36:28.219 [conn164] warning: chunk is larger than 1048576 bytes because of key { x: 2.0 }
m30999| Fri Feb 22 12:36:28.220 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.220 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.224 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:28.225 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.225 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:28.225 [conn164] build index test.jstests_splitvector { x: 1.0, y: -1.0, z: 1.0 }
m30001| Fri Feb 22 12:36:28.225 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.420 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
m30001| Fri Feb 22 12:36:28.424 [conn164] warning: chunk is larger than 1048576 bytes because of key { x: 2.0 }
m30999| Fri Feb 22 12:36:28.425 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.425 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.433 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:28.434 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.434 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:28.434 [conn164] build index test.jstests_splitvector { x: 1.0, y: 1.0 }
m30001| Fri Feb 22 12:36:28.435 [conn164] build index done. scanned 0 total records. 0 secs
test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.437 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MinKey } -->> { : MaxKey, : MaxKey }
m30999| Fri Feb 22 12:36:28.437 [conn1] DROP: test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.438 [conn164] CMD: drop test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.444 [conn164] build index test.jstests_splitvector { _id: 1 }
m30001| Fri Feb 22 12:36:28.445 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:28.445 [conn164] info: creating collection test.jstests_splitvector on add index
m30001| Fri Feb 22 12:36:28.445 [conn164] build index test.jstests_splitvector { x: 1.0, y: -1.0, z: 1.0 }
m30001| Fri Feb 22 12:36:28.446 [conn164] build index done. scanned 0 total records. 0 secs
test.jstests_splitvector
m30001| Fri Feb 22 12:36:28.447 [conn164] request split points lookup for chunk test.jstests_splitvector { : MinKey, : MaxKey, : MinKey } -->> { : MaxKey, : MinKey, : MaxKey }
PASSED
11857ms
>>>>>>>>>>>>>>> skipping jstests/_runner_leak.js
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/evalb.js
*******************************************
Test : jstests/sort3.js ...
m30999| Fri Feb 22 12:36:28.452 [conn1] DROP: test.sort3
m30001| Fri Feb 22 12:36:28.452 [conn164] CMD: drop test.sort3
m30001| Fri Feb 22 12:36:28.453 [conn164] build index test.sort3 { _id: 1 }
m30001| Fri Feb 22 12:36:28.454 [conn164] build index done. scanned 0 total records. 0 secs
9ms
*******************************************
Test : jstests/find_and_modify_server6909.js ...
m30999| Fri Feb 22 12:36:28.461 [conn1] DROP: test.find_and_modify_server6906
m30001| Fri Feb 22 12:36:28.462 [conn164] CMD: drop test.find_and_modify_server6906
m30001| Fri Feb 22 12:36:28.462 [conn164] build index test.find_and_modify_server6906 { _id: 1 }
m30001| Fri Feb 22 12:36:28.463 [conn164] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:36:28.464 [conn1] DROP: test.find_and_modify_server6906
m30001| Fri Feb 22 12:36:28.464 [conn164] CMD: drop test.find_and_modify_server6906
m30001| Fri Feb 22 12:36:28.470 [conn164] build index test.find_and_modify_server6906 { _id: 1 }
m30001| Fri Feb 22 12:36:28.473 [conn164] build index done. scanned 0 total records. 0.003 secs
16ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_fiddly_box.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_polygon1_noindex.js
*******************************************
Test : jstests/update_multi6.js ...
m30999| Fri Feb 22 12:36:28.474 [conn1] DROP: test.update_multi6
m30001| Fri Feb 22 12:36:28.475 [conn164] CMD: drop test.update_multi6
m30001| Fri Feb 22 12:36:28.476 [conn164] build index test.update_multi6 { _id: 1 }
m30001| Fri Feb 22 12:36:28.476 [conn164] build index done. scanned 0 total records. 0 secs
4ms
*******************************************
Test : jstests/delx.js ...
m30999| Fri Feb 22 12:36:28.479 [conn1] couldn't find database [delxa] in config db
m30999| Fri Feb 22 12:36:28.481 [conn1] put [delxa] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:36:28.481 [conn1] DROP DATABASE: delxa
m30999| Fri Feb 22 12:36:28.481 [conn1] erased database delxa from local registry
m30999| Fri Feb 22 12:36:28.481 [conn1] DBConfig::dropDatabase: delxa
m30999| Fri Feb 22 12:36:28.481 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:28-5127664c881c8e7453916068", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536588481), what: "dropDatabase.start", ns: "delxa", details: {} }
m30999| Fri Feb 22 12:36:28.482 [conn1] DBConfig::dropDatabase: delxa dropped sharded collections: 0
m30000| Fri Feb 22 12:36:28.482 [conn5] dropDatabase delxa starting
m30000| Fri Feb 22 12:36:28.549 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:36:28.549 [conn5] dropDatabase delxa finished
m30999| Fri Feb 22 12:36:28.549 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:28-5127664c881c8e7453916069", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536588549), what: "dropDatabase", ns: "delxa", details: {} }
m30999| Fri Feb 22 12:36:28.550 [conn1] couldn't find database [delxa] in config db
m30999| Fri Feb 22 12:36:28.551 [conn1] put [delxa] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:36:28.552 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/delxa.ns, filling with zeroes...
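The FileAllocator entries around this point follow the mmapv1 preallocation pattern: each database gets a 16MB `.ns` file, then numbered data files starting at 64MB that double in size (64MB, 128MB, ...), capped at 2GB on 64-bit builds. A small sketch of that sizing rule under those assumptions; the function and constant names are inventions for illustration:

```javascript
// Sizing rule for mmapv1 data files as seen in the FileAllocator log lines:
// <db>.0 is 64MB, <db>.1 is 128MB, and each further file doubles, up to a
// 2GB ceiling on 64-bit builds (the exact cap is an assumption here).
const MB = 1024 * 1024;
const FIRST_FILE_BYTES = 64 * MB;
const MAX_FILE_BYTES = 2048 * MB;

function datafileSizeBytes(n) {
  // n is the numeric suffix in <db>.<n>
  return Math.min(FIRST_FILE_BYTES * Math.pow(2, n), MAX_FILE_BYTES);
}
```

This doubling is why a freshly created one-collection database like delxa above already occupies 16MB + 64MB + 128MB on disk before a single document is inserted.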
m30000| Fri Feb 22 12:36:28.552 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/delxa.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:36:28.552 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/delxa.0, filling with zeroes... m30000| Fri Feb 22 12:36:28.552 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/delxa.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:36:28.553 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/delxa.1, filling with zeroes... m30000| Fri Feb 22 12:36:28.553 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/delxa.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:36:28.556 [conn6] build index delxa.foo { _id: 1 } m30000| Fri Feb 22 12:36:28.558 [conn6] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:36:28.564 [conn1] couldn't find database [delxb] in config db m30999| Fri Feb 22 12:36:28.565 [conn1] put [delxb] on: shard0000:localhost:30000 m30999| Fri Feb 22 12:36:28.565 [conn1] DROP DATABASE: delxb m30999| Fri Feb 22 12:36:28.565 [conn1] erased database delxb from local registry m30999| Fri Feb 22 12:36:28.566 [conn1] DBConfig::dropDatabase: delxb m30999| Fri Feb 22 12:36:28.566 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:28-5127664c881c8e745391606a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536588566), what: "dropDatabase.start", ns: "delxb", details: {} } m30999| Fri Feb 22 12:36:28.567 [conn1] DBConfig::dropDatabase: delxb dropped sharded collections: 0 m30000| Fri Feb 22 12:36:28.567 [conn5] dropDatabase delxb starting m30000| Fri Feb 22 12:36:28.638 [conn5] removeJournalFiles m30000| Fri Feb 22 12:36:28.639 [conn5] dropDatabase delxb finished m30999| Fri Feb 22 12:36:28.639 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:28-5127664c881c8e745391606b", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536588639), what: "dropDatabase", ns: "delxb", details: {} } m30999| Fri Feb 22 12:36:28.639 [conn1] couldn't find database [delxb] in config db m30999| Fri Feb 22 12:36:28.641 [conn1] put [delxb] on: shard0000:localhost:30000 m30000| Fri Feb 22 12:36:28.641 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/delxb.ns, filling with zeroes... m30000| Fri Feb 22 12:36:28.642 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/delxb.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:36:28.642 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/delxb.0, filling with zeroes... m30000| Fri Feb 22 12:36:28.642 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/delxb.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:36:28.642 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/delxb.1, filling with zeroes... m30000| Fri Feb 22 12:36:28.642 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/delxb.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:36:28.646 [conn6] build index delxb.foo { _id: 1 } m30000| Fri Feb 22 12:36:28.647 [conn6] build index done. scanned 0 total records. 0.001 secs 185ms >>>>>>>>>>>>>>> skipping jstests/fail_point ******************************************* Test : jstests/remove2.js ... m30999| Fri Feb 22 12:36:28.664 [conn1] DROP: test.removetest2 m30001| Fri Feb 22 12:36:28.664 [conn164] CMD: drop test.removetest2 m30001| Fri Feb 22 12:36:28.665 [conn164] build index test.removetest2 { _id: 1 } m30001| Fri Feb 22 12:36:28.666 [conn164] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:36:28.667 [conn9] CMD: validate test.removetest2 m30001| Fri Feb 22 12:36:28.667 [conn9] validating index 0: test.removetest2.$_id_ m30999| Fri Feb 22 12:36:28.667 [conn1] DROP: test.removetest2 m30001| Fri Feb 22 12:36:28.667 [conn164] CMD: drop test.removetest2 m30001| Fri Feb 22 12:36:28.673 [conn164] build index test.removetest2 { _id: 1 } m30001| Fri Feb 22 12:36:28.673 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:28.675 [conn9] CMD: validate test.removetest2 m30001| Fri Feb 22 12:36:28.675 [conn9] validating index 0: test.removetest2.$_id_ m30001| Fri Feb 22 12:36:28.676 [conn164] build index test.removetest2 { x: 1.0 } m30001| Fri Feb 22 12:36:28.677 [conn164] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:28.679 [conn9] CMD: validate test.removetest2 m30001| Fri Feb 22 12:36:28.679 [conn9] validating index 0: test.removetest2.$_id_ m30001| Fri Feb 22 12:36:28.679 [conn9] validating index 1: test.removetest2.$x_1 m30999| Fri Feb 22 12:36:28.679 [conn1] DROP: test.removetest2 m30001| Fri Feb 22 12:36:28.679 [conn164] CMD: drop test.removetest2 m30001| Fri Feb 22 12:36:28.686 [conn164] build index test.removetest2 { _id: 1 } m30001| Fri Feb 22 12:36:28.686 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:28.686 [conn164] info: creating collection test.removetest2 on add index m30001| Fri Feb 22 12:36:28.686 [conn164] build index test.removetest2 { x: 1.0 } m30001| Fri Feb 22 12:36:28.686 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:28.689 [conn9] CMD: validate test.removetest2 m30001| Fri Feb 22 12:36:28.689 [conn9] validating index 0: test.removetest2.$_id_ m30001| Fri Feb 22 12:36:28.689 [conn9] validating index 1: test.removetest2.$x_1 26ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_box1.js !!!!!!!!!!!!!!! 
skipping test that has failed under sharding but might not anymore jstests/capped_server2639.js ******************************************* Test : jstests/connections_opened.js ... ---- Creating persistent connections ---- m30999| Fri Feb 22 12:36:28.698 [mongosMain] connection accepted from 127.0.0.1:47690 #35 (2 connections now open) m30999| Fri Feb 22 12:36:28.698 [mongosMain] connection accepted from 127.0.0.1:58434 #36 (3 connections now open) m30999| Fri Feb 22 12:36:28.698 [mongosMain] connection accepted from 127.0.0.1:65257 #37 (4 connections now open) m30999| Fri Feb 22 12:36:28.698 [mongosMain] connection accepted from 127.0.0.1:65408 #38 (5 connections now open) m30999| Fri Feb 22 12:36:28.699 [mongosMain] connection accepted from 127.0.0.1:41277 #39 (6 connections now open) m30999| Fri Feb 22 12:36:28.699 [mongosMain] connection accepted from 127.0.0.1:52047 #40 (7 connections now open) m30999| Fri Feb 22 12:36:28.699 [mongosMain] connection accepted from 127.0.0.1:34097 #41 (8 connections now open) m30999| Fri Feb 22 12:36:28.699 [mongosMain] connection accepted from 127.0.0.1:50536 #42 (9 connections now open) m30999| Fri Feb 22 12:36:28.699 [mongosMain] connection accepted from 127.0.0.1:57372 #43 (10 connections now open) m30999| Fri Feb 22 12:36:28.700 [mongosMain] connection accepted from 127.0.0.1:32979 #44 (11 connections now open) m30999| Fri Feb 22 12:36:28.700 [mongosMain] connection accepted from 127.0.0.1:64287 #45 (12 connections now open) m30999| Fri Feb 22 12:36:28.700 [mongosMain] connection accepted from 127.0.0.1:64795 #46 (13 connections now open) m30999| Fri Feb 22 12:36:28.700 [mongosMain] connection accepted from 127.0.0.1:39239 #47 (14 connections now open) m30999| Fri Feb 22 12:36:28.701 [mongosMain] connection accepted from 127.0.0.1:48859 #48 (15 connections now open) m30999| Fri Feb 22 12:36:28.701 [mongosMain] connection accepted from 127.0.0.1:39323 #49 (16 connections now open) m30999| Fri Feb 22 12:36:28.701 [mongosMain] 
connection accepted from 127.0.0.1:53290 #50 (17 connections now open) m30999| Fri Feb 22 12:36:28.701 [mongosMain] connection accepted from 127.0.0.1:52379 #51 (18 connections now open) m30999| Fri Feb 22 12:36:28.701 [mongosMain] connection accepted from 127.0.0.1:57463 #52 (19 connections now open) m30999| Fri Feb 22 12:36:28.702 [mongosMain] connection accepted from 127.0.0.1:32928 #53 (20 connections now open) m30999| Fri Feb 22 12:36:28.702 [mongosMain] connection accepted from 127.0.0.1:43418 #54 (21 connections now open) m30999| Fri Feb 22 12:36:28.702 [mongosMain] connection accepted from 127.0.0.1:65096 #55 (22 connections now open) m30999| Fri Feb 22 12:36:28.702 [mongosMain] connection accepted from 127.0.0.1:59848 #56 (23 connections now open) m30999| Fri Feb 22 12:36:28.703 [mongosMain] connection accepted from 127.0.0.1:58936 #57 (24 connections now open) m30999| Fri Feb 22 12:36:28.703 [mongosMain] connection accepted from 127.0.0.1:57642 #58 (25 connections now open) m30999| Fri Feb 22 12:36:28.703 [mongosMain] connection accepted from 127.0.0.1:51178 #59 (26 connections now open) m30999| Fri Feb 22 12:36:28.703 [mongosMain] connection accepted from 127.0.0.1:48810 #60 (27 connections now open) m30999| Fri Feb 22 12:36:28.704 [mongosMain] connection accepted from 127.0.0.1:63648 #61 (28 connections now open) m30999| Fri Feb 22 12:36:28.704 [mongosMain] connection accepted from 127.0.0.1:61312 #62 (29 connections now open) m30999| Fri Feb 22 12:36:28.704 [mongosMain] connection accepted from 127.0.0.1:37167 #63 (30 connections now open) m30999| Fri Feb 22 12:36:28.704 [mongosMain] connection accepted from 127.0.0.1:52105 #64 (31 connections now open) m30999| Fri Feb 22 12:36:28.705 [mongosMain] connection accepted from 127.0.0.1:58529 #65 (32 connections now open) m30999| Fri Feb 22 12:36:28.705 [mongosMain] connection accepted from 127.0.0.1:62742 #66 (33 connections now open) m30999| Fri Feb 22 12:36:28.705 [mongosMain] connection accepted from 
127.0.0.1:48654 #67 (34 connections now open) m30999| Fri Feb 22 12:36:28.705 [mongosMain] connection accepted from 127.0.0.1:55457 #68 (35 connections now open) m30999| Fri Feb 22 12:36:28.706 [mongosMain] connection accepted from 127.0.0.1:49492 #69 (36 connections now open) m30999| Fri Feb 22 12:36:28.706 [mongosMain] connection accepted from 127.0.0.1:59885 #70 (37 connections now open) m30999| Fri Feb 22 12:36:28.706 [mongosMain] connection accepted from 127.0.0.1:58911 #71 (38 connections now open) m30999| Fri Feb 22 12:36:28.706 [mongosMain] connection accepted from 127.0.0.1:37712 #72 (39 connections now open) m30999| Fri Feb 22 12:36:28.707 [mongosMain] connection accepted from 127.0.0.1:60812 #73 (40 connections now open) m30999| Fri Feb 22 12:36:28.707 [mongosMain] connection accepted from 127.0.0.1:51407 #74 (41 connections now open) m30999| Fri Feb 22 12:36:28.707 [mongosMain] connection accepted from 127.0.0.1:60220 #75 (42 connections now open) m30999| Fri Feb 22 12:36:28.707 [mongosMain] connection accepted from 127.0.0.1:42633 #76 (43 connections now open) m30999| Fri Feb 22 12:36:28.708 [mongosMain] connection accepted from 127.0.0.1:40840 #77 (44 connections now open) m30999| Fri Feb 22 12:36:28.708 [mongosMain] connection accepted from 127.0.0.1:53396 #78 (45 connections now open) m30999| Fri Feb 22 12:36:28.708 [mongosMain] connection accepted from 127.0.0.1:49347 #79 (46 connections now open) m30999| Fri Feb 22 12:36:28.708 [mongosMain] connection accepted from 127.0.0.1:56349 #80 (47 connections now open) m30999| Fri Feb 22 12:36:28.709 [mongosMain] connection accepted from 127.0.0.1:62707 #81 (48 connections now open) m30999| Fri Feb 22 12:36:28.709 [mongosMain] connection accepted from 127.0.0.1:53525 #82 (49 connections now open) m30999| Fri Feb 22 12:36:28.709 [mongosMain] connection accepted from 127.0.0.1:63754 #83 (50 connections now open) m30999| Fri Feb 22 12:36:28.709 [mongosMain] connection accepted from 127.0.0.1:61610 #84 (51 
connections now open) m30999| Fri Feb 22 12:36:28.710 [mongosMain] connection accepted from 127.0.0.1:34384 #85 (52 connections now open) m30999| Fri Feb 22 12:36:28.710 [mongosMain] connection accepted from 127.0.0.1:35271 #86 (53 connections now open) m30999| Fri Feb 22 12:36:28.710 [mongosMain] connection accepted from 127.0.0.1:62778 #87 (54 connections now open) m30999| Fri Feb 22 12:36:28.711 [mongosMain] connection accepted from 127.0.0.1:65207 #88 (55 connections now open) m30999| Fri Feb 22 12:36:28.711 [mongosMain] connection accepted from 127.0.0.1:63882 #89 (56 connections now open) m30999| Fri Feb 22 12:36:28.711 [mongosMain] connection accepted from 127.0.0.1:32987 #90 (57 connections now open) m30999| Fri Feb 22 12:36:28.711 [mongosMain] connection accepted from 127.0.0.1:39211 #91 (58 connections now open) m30999| Fri Feb 22 12:36:28.712 [mongosMain] connection accepted from 127.0.0.1:60610 #92 (59 connections now open) m30999| Fri Feb 22 12:36:28.712 [mongosMain] connection accepted from 127.0.0.1:54364 #93 (60 connections now open) m30999| Fri Feb 22 12:36:28.712 [mongosMain] connection accepted from 127.0.0.1:36119 #94 (61 connections now open) m30999| Fri Feb 22 12:36:28.712 [mongosMain] connection accepted from 127.0.0.1:55803 #95 (62 connections now open) m30999| Fri Feb 22 12:36:28.713 [mongosMain] connection accepted from 127.0.0.1:49613 #96 (63 connections now open) m30999| Fri Feb 22 12:36:28.713 [mongosMain] connection accepted from 127.0.0.1:63672 #97 (64 connections now open) m30999| Fri Feb 22 12:36:28.713 [mongosMain] connection accepted from 127.0.0.1:40432 #98 (65 connections now open) m30999| Fri Feb 22 12:36:28.713 [mongosMain] connection accepted from 127.0.0.1:62123 #99 (66 connections now open) m30999| Fri Feb 22 12:36:28.714 [mongosMain] connection accepted from 127.0.0.1:57756 #100 (67 connections now open) m30999| Fri Feb 22 12:36:28.714 [mongosMain] connection accepted from 127.0.0.1:40138 #101 (68 connections now open) 
m30999| Fri Feb 22 12:36:28.714 [mongosMain] connection accepted from 127.0.0.1:50564 #102 (69 connections now open)
m30999| Fri Feb 22 12:36:28.714 [mongosMain] connection accepted from 127.0.0.1:35313 #103 (70 connections now open)
m30999| Fri Feb 22 12:36:28.714 [mongosMain] connection accepted from 127.0.0.1:60066 #104 (71 connections now open)
m30999| Fri Feb 22 12:36:28.715 [mongosMain] connection accepted from 127.0.0.1:42095 #105 (72 connections now open)
m30999| Fri Feb 22 12:36:28.715 [mongosMain] connection accepted from 127.0.0.1:41032 #106 (73 connections now open)
m30999| Fri Feb 22 12:36:28.715 [mongosMain] connection accepted from 127.0.0.1:56657 #107 (74 connections now open)
m30999| Fri Feb 22 12:36:28.715 [mongosMain] connection accepted from 127.0.0.1:53113 #108 (75 connections now open)
m30999| Fri Feb 22 12:36:28.716 [mongosMain] connection accepted from 127.0.0.1:43657 #109 (76 connections now open)
m30999| Fri Feb 22 12:36:28.716 [mongosMain] connection accepted from 127.0.0.1:42417 #110 (77 connections now open)
m30999| Fri Feb 22 12:36:28.716 [mongosMain] connection accepted from 127.0.0.1:63317 #111 (78 connections now open)
m30999| Fri Feb 22 12:36:28.716 [mongosMain] connection accepted from 127.0.0.1:59534 #112 (79 connections now open)
m30999| Fri Feb 22 12:36:28.716 [mongosMain] connection accepted from 127.0.0.1:52217 #113 (80 connections now open)
m30999| Fri Feb 22 12:36:28.717 [mongosMain] connection accepted from 127.0.0.1:34299 #114 (81 connections now open)
m30999| Fri Feb 22 12:36:28.717 [mongosMain] connection accepted from 127.0.0.1:60217 #115 (82 connections now open)
m30999| Fri Feb 22 12:36:28.717 [mongosMain] connection accepted from 127.0.0.1:61544 #116 (83 connections now open)
m30999| Fri Feb 22 12:36:28.717 [mongosMain] connection accepted from 127.0.0.1:57345 #117 (84 connections now open)
m30999| Fri Feb 22 12:36:28.718 [mongosMain] connection accepted from 127.0.0.1:35476 #118 (85 connections now open)
m30999| Fri Feb 22 12:36:28.718 [mongosMain] connection accepted from 127.0.0.1:40433 #119 (86 connections now open)
m30999| Fri Feb 22 12:36:28.718 [mongosMain] connection accepted from 127.0.0.1:42742 #120 (87 connections now open)
m30999| Fri Feb 22 12:36:28.718 [mongosMain] connection accepted from 127.0.0.1:51360 #121 (88 connections now open)
m30999| Fri Feb 22 12:36:28.719 [mongosMain] connection accepted from 127.0.0.1:52213 #122 (89 connections now open)
m30999| Fri Feb 22 12:36:28.719 [mongosMain] connection accepted from 127.0.0.1:39951 #123 (90 connections now open)
m30999| Fri Feb 22 12:36:28.719 [mongosMain] connection accepted from 127.0.0.1:40501 #124 (91 connections now open)
m30999| Fri Feb 22 12:36:28.720 [mongosMain] connection accepted from 127.0.0.1:52162 #125 (92 connections now open)
m30999| Fri Feb 22 12:36:28.720 [mongosMain] connection accepted from 127.0.0.1:56983 #126 (93 connections now open)
m30999| Fri Feb 22 12:36:28.720 [mongosMain] connection accepted from 127.0.0.1:33111 #127 (94 connections now open)
m30999| Fri Feb 22 12:36:28.720 [mongosMain] connection accepted from 127.0.0.1:54966 #128 (95 connections now open)
m30999| Fri Feb 22 12:36:28.720 [mongosMain] connection accepted from 127.0.0.1:55295 #129 (96 connections now open)
m30999| Fri Feb 22 12:36:28.720 [mongosMain] connection accepted from 127.0.0.1:44209 #130 (97 connections now open)
m30999| Fri Feb 22 12:36:28.721 [mongosMain] connection accepted from 127.0.0.1:57671 #131 (98 connections now open)
m30999| Fri Feb 22 12:36:28.721 [mongosMain] connection accepted from 127.0.0.1:40536 #132 (99 connections now open)
m30999| Fri Feb 22 12:36:28.721 [mongosMain] connection accepted from 127.0.0.1:64455 #133 (100 connections now open)
m30999| Fri Feb 22 12:36:28.721 [mongosMain] connection accepted from 127.0.0.1:55600 #134 (101 connections now open)
 ---- Testing that persistent connections increased the current and totalCreated counters ----
 ---- Creating temporary connections ----
m30999| Fri Feb 22 12:36:28.723 [conn1] couldn't find database [connectionsOpenedTest] in config db
m30999| Fri Feb 22 12:36:28.724 [conn1] put [connectionsOpenedTest] on: shard0000:localhost:30000
m30999| Fri Feb 22 12:36:28.724 [conn1] DROP DATABASE: connectionsOpenedTest
m30999| Fri Feb 22 12:36:28.724 [conn1] erased database connectionsOpenedTest from local registry
m30999| Fri Feb 22 12:36:28.725 [conn1] DBConfig::dropDatabase: connectionsOpenedTest
m30999| Fri Feb 22 12:36:28.725 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:28-5127664c881c8e745391606c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536588725), what: "dropDatabase.start", ns: "connectionsOpenedTest", details: {} }
m30999| Fri Feb 22 12:36:28.726 [conn1] DBConfig::dropDatabase: connectionsOpenedTest dropped sharded collections: 0
m30000| Fri Feb 22 12:36:28.726 [conn5] dropDatabase connectionsOpenedTest starting
m30000| Fri Feb 22 12:36:28.805 [conn5] removeJournalFiles
m30000| Fri Feb 22 12:36:28.805 [conn5] dropDatabase connectionsOpenedTest finished
m30999| Fri Feb 22 12:36:28.805 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:28-5127664c881c8e745391606d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536588805), what: "dropDatabase", ns: "connectionsOpenedTest", details: {} }
m30999| Fri Feb 22 12:36:28.806 [conn1] couldn't find database [connectionsOpenedTest] in config db
m30999| Fri Feb 22 12:36:28.809 [conn1] put [connectionsOpenedTest] on: shard0000:localhost:30000
m30000| Fri Feb 22 12:36:28.809 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/connectionsOpenedTest.ns, filling with zeroes...
m30000| Fri Feb 22 12:36:28.809 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/connectionsOpenedTest.ns, size: 16MB, took 0 secs
m30000| Fri Feb 22 12:36:28.809 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/connectionsOpenedTest.0, filling with zeroes...
Fri Feb 22 12:36:28.843 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin
m30000| Fri Feb 22 12:36:28.810 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/connectionsOpenedTest.0, size: 64MB, took 0 secs
m30000| Fri Feb 22 12:36:28.810 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/connectionsOpenedTest.1, filling with zeroes...
m30000| Fri Feb 22 12:36:28.810 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/connectionsOpenedTest.1, size: 128MB, took 0 secs
m30000| Fri Feb 22 12:36:28.814 [conn6] build index connectionsOpenedTest.keepRunning { _id: 1 }
m30000| Fri Feb 22 12:36:28.814 [conn6] build index done. scanned 0 total records. 0 secs
sh15796| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 12:36:28.880 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin
sh15797| MongoDB shell version: 2.4.0-rc1-pre-
Fri Feb 22 12:36:28.917 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin
m30999| Fri Feb 22 12:36:28.933 [mongosMain] connection accepted from 127.0.0.1:48963 #135 (102 connections now open)
sh15798| MongoDB shell version: 2.4.0-rc1-pre-
sh15796| connecting to: localhost:30999/admin
Fri Feb 22 12:36:28.956 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile"
: null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:28.975 [initandlisten] connection accepted from 127.0.0.1:54927 #166 (7 connections now open) m30999| Fri Feb 22 12:36:28.971 [mongosMain] connection accepted from 127.0.0.1:65137 #136 (103 connections now open) Fri Feb 22 12:36:28.994 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15797| connecting to: localhost:30999/admin sh15799| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:29.011 [mongosMain] connection accepted from 127.0.0.1:53561 #137 (104 connections now open) m30000| Fri Feb 22 12:36:29.015 [initandlisten] connection accepted from 127.0.0.1:48217 #22 (10 connections now open) m30001| Fri Feb 22 12:36:29.015 [initandlisten] connection accepted from 127.0.0.1:47581 #167 (8 connections now open) Fri Feb 22 12:36:29.031 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, 
"keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15798| connecting to: localhost:30999/admin sh15800| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:29.054 [initandlisten] connection accepted from 127.0.0.1:38466 #168 (9 connections now open) m30000| Fri Feb 22 12:36:29.053 [initandlisten] connection accepted from 127.0.0.1:40235 #23 (11 connections now open) m30999| Fri Feb 22 12:36:29.050 [mongosMain] connection accepted from 127.0.0.1:55841 #138 (105 connections now open) Fri Feb 22 12:36:29.068 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15802| MongoDB shell version: 2.4.0-rc1-pre- sh15799| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.085 [mongosMain] connection accepted from 127.0.0.1:64143 #139 (106 connections now open) m30000| Fri Feb 22 12:36:29.089 [initandlisten] connection accepted from 127.0.0.1:37373 #24 (12 connections now open) Fri Feb 22 12:36:29.105 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", 
"noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:29.093 [initandlisten] connection accepted from 127.0.0.1:33498 #169 (10 connections now open) sh15803| MongoDB shell version: 2.4.0-rc1-pre- sh15800| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:29.126 [initandlisten] connection accepted from 127.0.0.1:43948 #25 (13 connections now open) m30001| Fri Feb 22 12:36:29.127 [initandlisten] connection accepted from 127.0.0.1:48207 #170 (11 connections now open) m30999| Fri Feb 22 12:36:29.123 [mongosMain] connection accepted from 127.0.0.1:40507 #140 (107 connections now open) Fri Feb 22 12:36:29.141 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15802| connecting to: localhost:30999/admin sh15804| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:29.161 [mongosMain] connection accepted from 127.0.0.1:44880 #141 (108 connections now open) m30001| Fri Feb 22 12:36:29.165 [initandlisten] connection accepted from 127.0.0.1:40338 #171 (12 connections now open) Fri Feb 22 12:36:29.179 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval 
TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:29.165 [initandlisten] connection accepted from 127.0.0.1:53215 #26 (14 connections now open) sh15805| MongoDB shell version: 2.4.0-rc1-pre- sh15803| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:29.201 [initandlisten] connection accepted from 127.0.0.1:37223 #172 (13 connections now open) m30000| Fri Feb 22 12:36:29.200 [initandlisten] connection accepted from 127.0.0.1:42542 #27 (15 connections now open) m30999| Fri Feb 22 12:36:29.197 [mongosMain] connection accepted from 127.0.0.1:41440 #142 (109 connections now open) Fri Feb 22 12:36:29.217 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15806| MongoDB shell version: 2.4.0-rc1-pre- sh15804| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:29.235 [initandlisten] connection accepted from 127.0.0.1:44539 #173 (14 connections now open) m30999| Fri Feb 22 12:36:29.231 
[mongosMain] connection accepted from 127.0.0.1:33989 #143 (110 connections now open) m30000| Fri Feb 22 12:36:29.235 [initandlisten] connection accepted from 127.0.0.1:54928 #28 (16 connections now open) sh15808| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 12:36:29.253 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15805| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.271 [mongosMain] connection accepted from 127.0.0.1:39493 #144 (111 connections now open) m30000| Fri Feb 22 12:36:29.274 [initandlisten] connection accepted from 127.0.0.1:35087 #29 (17 connections now open) m30001| Fri Feb 22 12:36:29.275 [initandlisten] connection accepted from 127.0.0.1:62600 #174 (15 connections now open) Fri Feb 22 12:36:29.291 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15809| MongoDB shell version: 2.4.0-rc1-pre- 
sh15806| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:29.312 [initandlisten] connection accepted from 127.0.0.1:63162 #30 (18 connections now open) m30001| Fri Feb 22 12:36:29.313 [initandlisten] connection accepted from 127.0.0.1:54474 #175 (16 connections now open) m30999| Fri Feb 22 12:36:29.308 [mongosMain] connection accepted from 127.0.0.1:54440 #145 (112 connections now open) Fri Feb 22 12:36:29.328 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15808| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.362 [mongosMain] connection accepted from 127.0.0.1:53902 #146 (113 connections now open) Fri Feb 22 12:36:29.367 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15811| MongoDB shell version: 2.4.0-rc1-pre- sh15810| MongoDB shell version: 2.4.0-rc1-pre- sh15809| connecting to: 
localhost:30999/admin m30000| Fri Feb 22 12:36:29.368 [initandlisten] connection accepted from 127.0.0.1:42526 #31 (19 connections now open) m30001| Fri Feb 22 12:36:29.368 [initandlisten] connection accepted from 127.0.0.1:45830 #176 (17 connections now open) Fri Feb 22 12:36:29.407 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15812| MongoDB shell version: 2.4.0-rc1-pre- m30000| Fri Feb 22 12:36:29.415 [initandlisten] connection accepted from 127.0.0.1:59166 #32 (20 connections now open) m30000| Fri Feb 22 12:36:29.441 [initandlisten] connection accepted from 127.0.0.1:56552 #33 (21 connections now open) m30999| Fri Feb 22 12:36:29.411 [mongosMain] connection accepted from 127.0.0.1:59059 #147 (114 connections now open) m30999| Fri Feb 22 12:36:29.436 [mongosMain] connection accepted from 127.0.0.1:44056 #148 (115 connections now open) m30001| Fri Feb 22 12:36:29.416 [initandlisten] connection accepted from 127.0.0.1:44267 #177 (18 connections now open) Fri Feb 22 12:36:29.446 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = 
db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:29.442 [initandlisten] connection accepted from 127.0.0.1:52216 #178 (19 connections now open) sh15811| connecting to: localhost:30999/admin sh15810| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:29.468 [initandlisten] connection accepted from 127.0.0.1:32864 #179 (20 connections now open) m30000| Fri Feb 22 12:36:29.468 [initandlisten] connection accepted from 127.0.0.1:54126 #34 (22 connections now open) Fri Feb 22 12:36:29.482 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:29.464 [mongosMain] connection accepted from 127.0.0.1:34878 #149 (116 connections now open) sh15813| MongoDB shell version: 2.4.0-rc1-pre- sh15812| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.509 [mongosMain] connection accepted from 127.0.0.1:62832 #150 (117 connections now open) m30000| Fri Feb 22 12:36:29.512 [initandlisten] connection accepted from 127.0.0.1:55412 #35 (23 connections now open) Fri Feb 22 12:36:29.520 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : 
"/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:29.513 [initandlisten] connection accepted from 127.0.0.1:49187 #180 (21 connections now open) sh15815| MongoDB shell version: 2.4.0-rc1-pre- sh15814| MongoDB shell version: 2.4.0-rc1-pre- sh15813| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.552 [mongosMain] connection accepted from 127.0.0.1:62791 #151 (118 connections now open) m30001| Fri Feb 22 12:36:29.556 [initandlisten] connection accepted from 127.0.0.1:37500 #181 (22 connections now open) Fri Feb 22 12:36:29.559 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:29.556 [initandlisten] connection accepted from 127.0.0.1:43690 #36 (24 connections now open) sh15816| MongoDB shell version: 2.4.0-rc1-pre- sh15814| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.577 [mongosMain] connection accepted from 127.0.0.1:38471 #152 (119 connections now open) m30000| Fri 
Feb 22 12:36:29.581 [initandlisten] connection accepted from 127.0.0.1:49466 #37 (25 connections now open) m30001| Fri Feb 22 12:36:29.582 [initandlisten] connection accepted from 127.0.0.1:39515 #182 (23 connections now open) Fri Feb 22 12:36:29.594 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15815| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:29.616 [initandlisten] connection accepted from 127.0.0.1:51589 #38 (26 connections now open) m30999| Fri Feb 22 12:36:29.612 [mongosMain] connection accepted from 127.0.0.1:63324 #153 (120 connections now open) sh15818| MongoDB shell version: 2.4.0-rc1-pre- sh15817| MongoDB shell version: 2.4.0-rc1-pre- sh15816| connecting to: localhost:30999/admin Fri Feb 22 12:36:29.631 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin Fri Feb 22 12:36:29.668 shell: started program 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15817| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:29.684 [initandlisten] connection accepted from 127.0.0.1:40888 #183 (24 connections now open) m30001| Fri Feb 22 12:36:29.684 [initandlisten] connection accepted from 127.0.0.1:37693 #184 (25 connections now open) Fri Feb 22 12:36:29.707 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:29.698 [initandlisten] connection accepted from 127.0.0.1:54743 #185 (26 connections now open) m30999| Fri Feb 22 12:36:29.670 [mongosMain] connection accepted from 127.0.0.1:58583 #154 (121 connections now open) m30999| Fri Feb 22 12:36:29.692 [mongosMain] connection accepted from 127.0.0.1:37284 #155 (122 connections now open) sh15819| MongoDB shell version: 2.4.0-rc1-pre- m30000| Fri Feb 22 12:36:29.684 [initandlisten] connection 
accepted from 127.0.0.1:53366 #39 (27 connections now open)
m30000| Fri Feb 22 12:36:29.697 [initandlisten] connection accepted from 127.0.0.1:64434 #40 (28 connections now open)
sh15820| MongoDB shell version: 2.4.0-rc1-pre-
sh15818| connecting to: localhost:30999/admin
m30001| Fri Feb 22 12:36:29.735 [initandlisten] connection accepted from 127.0.0.1:40402 #186 (27 connections now open)
m30000| Fri Feb 22 12:36:29.734 [initandlisten] connection accepted from 127.0.0.1:62483 #41 (29 connections now open)
Fri Feb 22 12:36:29.745 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin
m30999| Fri Feb 22 12:36:29.730 [mongosMain] connection accepted from 127.0.0.1:64876 #156 (123 connections now open)
sh15821| MongoDB shell version: 2.4.0-rc1-pre-
sh15819| connecting to: localhost:30999/admin
m30001| Fri Feb 22 12:36:29.765 [initandlisten] connection accepted from 127.0.0.1:59355 #187 (28 connections now open)
m30999| Fri Feb 22 12:36:29.760 [mongosMain] connection accepted from 127.0.0.1:62948 #157 (124 connections now open)
m30999| Fri Feb 22 12:36:29.777 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127664d881c8e745391606e
m30999| Fri Feb 22 12:36:29.778 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
Fri Feb 22 12:36:29.783 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:29.764 [initandlisten] connection accepted from 127.0.0.1:42603 #42 (30 connections now open) sh15822| MongoDB shell version: 2.4.0-rc1-pre- sh15820| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:29.804 [initandlisten] connection accepted from 127.0.0.1:33765 #43 (31 connections now open) m30001| Fri Feb 22 12:36:29.805 [initandlisten] connection accepted from 127.0.0.1:43917 #188 (29 connections now open) Fri Feb 22 12:36:29.820 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:29.800 [mongosMain] connection accepted from 127.0.0.1:49946 #158 (125 connections now open) sh15823| MongoDB shell version: 2.4.0-rc1-pre- sh15821| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:29.842 
[initandlisten] connection accepted from 127.0.0.1:33199 #189 (30 connections now open) m30999| Fri Feb 22 12:36:29.837 [mongosMain] connection accepted from 127.0.0.1:43458 #159 (126 connections now open) Fri Feb 22 12:36:29.857 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:29.841 [initandlisten] connection accepted from 127.0.0.1:40558 #44 (32 connections now open) sh15824| MongoDB shell version: 2.4.0-rc1-pre- sh15822| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:29.880 [initandlisten] connection accepted from 127.0.0.1:36928 #190 (31 connections now open) m30000| Fri Feb 22 12:36:29.879 [initandlisten] connection accepted from 127.0.0.1:51165 #45 (33 connections now open) Fri Feb 22 12:36:29.895 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:29.875 [mongosMain] 
connection accepted from 127.0.0.1:38262 #160 (127 connections now open) sh15823| connecting to: localhost:30999/admin sh15825| MongoDB shell version: 2.4.0-rc1-pre- sh15826| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 12:36:29.933 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:29.941 [initandlisten] connection accepted from 127.0.0.1:49786 #46 (34 connections now open) Fri Feb 22 12:36:29.972 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15827| MongoDB shell version: 2.4.0-rc1-pre- sh15825| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:29.937 [mongosMain] connection accepted from 127.0.0.1:61576 #161 (128 connections now open) m30999| Fri Feb 22 12:36:29.969 [mongosMain] connection accepted from 127.0.0.1:35382 #162 (129 connections now open) sh15824| connecting to: 
localhost:30999/admin m30000| Fri Feb 22 12:36:29.972 [initandlisten] connection accepted from 127.0.0.1:34278 #47 (35 connections now open) m30000| Fri Feb 22 12:36:29.992 [initandlisten] connection accepted from 127.0.0.1:47618 #48 (36 connections now open) m30001| Fri Feb 22 12:36:29.989 [initandlisten] connection accepted from 127.0.0.1:59421 #191 (32 connections now open) m30001| Fri Feb 22 12:36:29.989 [initandlisten] connection accepted from 127.0.0.1:34767 #192 (33 connections now open) m30001| Fri Feb 22 12:36:29.993 [initandlisten] connection accepted from 127.0.0.1:49506 #193 (34 connections now open) m30999| Fri Feb 22 12:36:29.989 [mongosMain] connection accepted from 127.0.0.1:52737 #163 (130 connections now open) Fri Feb 22 12:36:30.011 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15826| connecting to: localhost:30999/admin sh15828| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:30.026 [mongosMain] connection accepted from 127.0.0.1:36217 #164 (131 connections now open) m30001| Fri Feb 22 12:36:30.030 [initandlisten] connection accepted from 127.0.0.1:40050 #194 (35 connections now open) m30000| Fri Feb 22 12:36:30.029 [initandlisten] connection accepted from 127.0.0.1:63731 #49 (37 connections now open) Fri Feb 22 12:36:30.048 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : 
"/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15827| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:30.068 [initandlisten] connection accepted from 127.0.0.1:38003 #195 (36 connections now open) m30000| Fri Feb 22 12:36:30.067 [initandlisten] connection accepted from 127.0.0.1:55990 #50 (38 connections now open) Fri Feb 22 12:36:30.083 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15830| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:30.064 [mongosMain] connection accepted from 127.0.0.1:44709 #165 (132 connections now open) sh15828| connecting to: localhost:30999/admin sh15831| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 12:36:30.118 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : 
"sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:30.142 [initandlisten] connection accepted from 127.0.0.1:55657 #51 (39 connections now open) m30001| Fri Feb 22 12:36:30.143 [initandlisten] connection accepted from 127.0.0.1:56818 #196 (37 connections now open) m30999| Fri Feb 22 12:36:30.139 [mongosMain] connection accepted from 127.0.0.1:56579 #166 (133 connections now open) Fri Feb 22 12:36:30.158 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15832| MongoDB shell version: 2.4.0-rc1-pre- sh15830| connecting to: localhost:30999/admin sh15829| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:30.174 [mongosMain] connection accepted from 127.0.0.1:59819 #167 (134 connections now open) m30001| Fri Feb 22 12:36:30.179 [initandlisten] connection accepted from 127.0.0.1:60132 #197 (38 connections now open) Fri Feb 22 12:36:30.197 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : 
"sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:30.178 [initandlisten] connection accepted from 127.0.0.1:49319 #52 (40 connections now open) sh15833| MongoDB shell version: 2.4.0-rc1-pre- sh15831| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:30.214 [initandlisten] connection accepted from 127.0.0.1:41176 #53 (41 connections now open) m30000| Fri Feb 22 12:36:30.219 [initandlisten] connection accepted from 127.0.0.1:47208 #54 (42 connections now open) m30001| Fri Feb 22 12:36:30.215 [initandlisten] connection accepted from 127.0.0.1:36425 #198 (39 connections now open) Fri Feb 22 12:36:30.235 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:30.219 [initandlisten] connection accepted from 127.0.0.1:41234 #199 (40 connections now open) m30999| Fri Feb 22 12:36:30.211 [mongosMain] connection accepted from 127.0.0.1:47082 #168 (135 connections now open) m30999| Fri Feb 22 12:36:30.215 [mongosMain] connection accepted from 127.0.0.1:52827 #169 (136 connections now open) sh15834| MongoDB shell 
version: 2.4.0-rc1-pre- sh15829| connecting to: localhost:30999/admin sh15832| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:30.253 [initandlisten] connection accepted from 127.0.0.1:46982 #200 (41 connections now open) m30000| Fri Feb 22 12:36:30.253 [initandlisten] connection accepted from 127.0.0.1:50996 #55 (43 connections now open) Fri Feb 22 12:36:30.272 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:30.249 [mongosMain] connection accepted from 127.0.0.1:56899 #170 (137 connections now open) sh15833| connecting to: localhost:30999/admin sh15835| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:30.294 [initandlisten] connection accepted from 127.0.0.1:63489 #201 (42 connections now open) m30999| Fri Feb 22 12:36:30.290 [mongosMain] connection accepted from 127.0.0.1:39066 #171 (138 connections now open) m30000| Fri Feb 22 12:36:30.294 [initandlisten] connection accepted from 127.0.0.1:57718 #56 (44 connections now open) sh15836| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 12:36:30.315 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, 
"keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15834| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:30.355 [mongosMain] connection accepted from 127.0.0.1:44971 #172 (139 connections now open) m30001| Fri Feb 22 12:36:30.359 [initandlisten] connection accepted from 127.0.0.1:43324 #202 (43 connections now open) m30000| Fri Feb 22 12:36:30.359 [initandlisten] connection accepted from 127.0.0.1:52464 #57 (45 connections now open) Fri Feb 22 12:36:30.361 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15837| MongoDB shell version: 2.4.0-rc1-pre- sh15835| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:30.369 [initandlisten] connection accepted from 127.0.0.1:39562 #203 (44 connections now open) m30999| Fri Feb 22 12:36:30.365 [mongosMain] connection accepted from 127.0.0.1:34556 #173 (140 connections now open) Fri Feb 22 12:36:30.400 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, 
"noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:30.368 [initandlisten] connection accepted from 127.0.0.1:54408 #58 (46 connections now open) sh15836| connecting to: localhost:30999/admin sh15838| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:30.410 [initandlisten] connection accepted from 127.0.0.1:55750 #204 (45 connections now open) m30999| Fri Feb 22 12:36:30.406 [mongosMain] connection accepted from 127.0.0.1:63821 #174 (141 connections now open) Fri Feb 22 12:36:30.439 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:30.409 [initandlisten] connection accepted from 127.0.0.1:59951 #59 (47 connections now open) sh15839| MongoDB shell version: 2.4.0-rc1-pre- sh15837| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:30.453 [mongosMain] connection accepted from 127.0.0.1:36015 #175 (142 connections now open) m30000| Fri Feb 22 12:36:30.457 [initandlisten] connection accepted from 127.0.0.1:32922 #60 (48 connections now open) m30001| Fri Feb 22 12:36:30.457 [initandlisten] connection accepted from 127.0.0.1:43201 #205 (46 connections now open) Fri Feb 22 
12:36:30.477 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15840| MongoDB shell version: 2.4.0-rc1-pre- sh15838| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:30.495 [initandlisten] connection accepted from 127.0.0.1:51416 #61 (49 connections now open) m30001| Fri Feb 22 12:36:30.496 [initandlisten] connection accepted from 127.0.0.1:63278 #206 (47 connections now open) m30999| Fri Feb 22 12:36:30.492 [mongosMain] connection accepted from 127.0.0.1:41833 #176 (143 connections now open) Fri Feb 22 12:36:30.514 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15841| MongoDB shell version: 2.4.0-rc1-pre- sh15839| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:30.534 [initandlisten] connection accepted from 127.0.0.1:48690 #62 (50 connections now open) m30001| Fri Feb 22 12:36:30.535 [initandlisten] 
connection accepted from 127.0.0.1:63716 #207 (48 connections now open) m30999| Fri Feb 22 12:36:30.530 [mongosMain] connection accepted from 127.0.0.1:64489 #177 (144 connections now open) Fri Feb 22 12:36:30.554 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15840| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:30.575 [mongosMain] connection accepted from 127.0.0.1:44472 #178 (145 connections now open) m30001| Fri Feb 22 12:36:30.580 [initandlisten] connection accepted from 127.0.0.1:57254 #208 (49 connections now open) m30000| Fri Feb 22 12:36:30.579 [initandlisten] connection accepted from 127.0.0.1:65497 #63 (51 connections now open) Fri Feb 22 12:36:30.593 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15843| MongoDB shell version: 2.4.0-rc1-pre- sh15842| MongoDB shell version: 2.4.0-rc1-pre- sh15841| 
connecting to: localhost:30999/admin Fri Feb 22 12:36:30.633 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:30.642 [mongosMain] connection accepted from 127.0.0.1:33575 #179 (146 connections now open) m30999| Fri Feb 22 12:36:30.650 [mongosMain] connection accepted from 127.0.0.1:52629 #180 (147 connections now open) m30000| Fri Feb 22 12:36:30.647 [initandlisten] connection accepted from 127.0.0.1:38415 #64 (52 connections now open) m30000| Fri Feb 22 12:36:30.654 [initandlisten] connection accepted from 127.0.0.1:57692 #65 (53 connections now open) m30001| Fri Feb 22 12:36:30.647 [initandlisten] connection accepted from 127.0.0.1:34293 #209 (50 connections now open) m30001| Fri Feb 22 12:36:30.655 [initandlisten] connection accepted from 127.0.0.1:55251 #210 (51 connections now open) Fri Feb 22 12:36:30.680 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to 
terminate", 10 * 60000); localhost:30999/admin sh15845| MongoDB shell version: 2.4.0-rc1-pre- sh15843| connecting to: localhost:30999/admin sh15844| MongoDB shell version: 2.4.0-rc1-pre- sh15842| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:30.707 [initandlisten] connection accepted from 127.0.0.1:60158 #66 (54 connections now open) m30001| Fri Feb 22 12:36:30.708 [initandlisten] connection accepted from 127.0.0.1:44576 #211 (52 connections now open) m30999| Fri Feb 22 12:36:30.703 [mongosMain] connection accepted from 127.0.0.1:56956 #181 (148 connections now open) Fri Feb 22 12:36:30.721 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15846| MongoDB shell version: 2.4.0-rc1-pre- sh15844| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:30.739 [mongosMain] connection accepted from 127.0.0.1:45378 #182 (149 connections now open) m30000| Fri Feb 22 12:36:30.743 [initandlisten] connection accepted from 127.0.0.1:58389 #67 (55 connections now open) m30001| Fri Feb 22 12:36:30.744 [initandlisten] connection accepted from 127.0.0.1:55435 #212 (53 connections now open) Fri Feb 22 12:36:30.759 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : 
"sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15847| MongoDB shell version: 2.4.0-rc1-pre- sh15845| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:30.777 [initandlisten] connection accepted from 127.0.0.1:44497 #68 (56 connections now open) m30001| Fri Feb 22 12:36:30.777 [initandlisten] connection accepted from 127.0.0.1:47007 #213 (54 connections now open) m30999| Fri Feb 22 12:36:30.773 [mongosMain] connection accepted from 127.0.0.1:57950 #183 (150 connections now open) Fri Feb 22 12:36:30.797 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15846| connecting to: localhost:30999/admin sh15848| MongoDB shell version: 2.4.0-rc1-pre- m30000| Fri Feb 22 12:36:30.817 [initandlisten] connection accepted from 127.0.0.1:56122 #69 (57 connections now open) m30001| Fri Feb 22 12:36:30.818 [initandlisten] connection accepted from 127.0.0.1:39113 #214 (55 connections now open) m30999| Fri Feb 22 12:36:30.814 [mongosMain] connection accepted from 127.0.0.1:54921 #184 (151 connections now open) Fri Feb 22 12:36:30.837 shell: started program 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15849| MongoDB shell version: 2.4.0-rc1-pre- sh15847| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:30.852 [mongosMain] connection accepted from 127.0.0.1:59087 #185 (152 connections now open) m30001| Fri Feb 22 12:36:30.856 [initandlisten] connection accepted from 127.0.0.1:49352 #215 (56 connections now open) m30000| Fri Feb 22 12:36:30.855 [initandlisten] connection accepted from 127.0.0.1:33831 #70 (58 connections now open) Fri Feb 22 12:36:30.874 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15850| MongoDB shell version: 2.4.0-rc1-pre- sh15848| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:30.892 [initandlisten] connection accepted from 127.0.0.1:58381 #216 (57 connections now open) m30999| Fri Feb 22 12:36:30.888 [mongosMain] connection accepted from 127.0.0.1:49112 
#186 (153 connections now open) Fri Feb 22 12:36:30.912 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:30.891 [initandlisten] connection accepted from 127.0.0.1:53044 #71 (59 connections now open) sh15851| MongoDB shell version: 2.4.0-rc1-pre- sh15849| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:30.929 [mongosMain] connection accepted from 127.0.0.1:36419 #187 (154 connections now open) m30000| Fri Feb 22 12:36:30.933 [initandlisten] connection accepted from 127.0.0.1:61496 #72 (60 connections now open) m30001| Fri Feb 22 12:36:30.933 [initandlisten] connection accepted from 127.0.0.1:45412 #217 (58 connections now open) Fri Feb 22 12:36:30.949 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15852| MongoDB shell version: 2.4.0-rc1-pre- sh15850| connecting to: localhost:30999/admin m30001| 
Fri Feb 22 12:36:30.970 [initandlisten] connection accepted from 127.0.0.1:44020 #218 (59 connections now open) m30000| Fri Feb 22 12:36:30.970 [initandlisten] connection accepted from 127.0.0.1:36741 #73 (61 connections now open) m30999| Fri Feb 22 12:36:30.966 [mongosMain] connection accepted from 127.0.0.1:40774 #188 (155 connections now open) Fri Feb 22 12:36:30.987 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15853| MongoDB shell version: 2.4.0-rc1-pre- sh15851| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.004 [mongosMain] connection accepted from 127.0.0.1:33099 #189 (156 connections now open) m30000| Fri Feb 22 12:36:31.007 [initandlisten] connection accepted from 127.0.0.1:56142 #74 (62 connections now open) m30001| Fri Feb 22 12:36:31.008 [initandlisten] connection accepted from 127.0.0.1:59154 #219 (60 connections now open) Fri Feb 22 12:36:31.025 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return 
db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15852| connecting to: localhost:30999/admin sh15859| MongoDB shell version: 2.4.0-rc1-pre- m30000| Fri Feb 22 12:36:31.045 [initandlisten] connection accepted from 127.0.0.1:61326 #75 (63 connections now open) m30001| Fri Feb 22 12:36:31.045 [initandlisten] connection accepted from 127.0.0.1:46528 #220 (61 connections now open) m30999| Fri Feb 22 12:36:31.041 [mongosMain] connection accepted from 127.0.0.1:63425 #190 (157 connections now open) Fri Feb 22 12:36:31.062 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15860| MongoDB shell version: 2.4.0-rc1-pre- sh15853| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.096 [mongosMain] connection accepted from 127.0.0.1:62509 #191 (158 connections now open) Fri Feb 22 12:36:31.100 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return 
db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15862| MongoDB shell version: 2.4.0-rc1-pre- sh15859| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:31.101 [initandlisten] connection accepted from 127.0.0.1:43988 #76 (64 connections now open) m30000| Fri Feb 22 12:36:31.132 [initandlisten] connection accepted from 127.0.0.1:64524 #77 (65 connections now open) Fri Feb 22 12:36:31.141 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:31.102 [initandlisten] connection accepted from 127.0.0.1:55760 #221 (62 connections now open) m30001| Fri Feb 22 12:36:31.132 [initandlisten] connection accepted from 127.0.0.1:37228 #222 (63 connections now open) m30999| Fri Feb 22 12:36:31.128 [mongosMain] connection accepted from 127.0.0.1:37709 #192 (159 connections now open) sh15863| MongoDB shell version: 2.4.0-rc1-pre- sh15860| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.163 [mongosMain] connection accepted from 127.0.0.1:34495 #193 (160 connections now open) m30001| Fri Feb 22 12:36:31.168 [initandlisten] connection accepted from 127.0.0.1:50983 #223 (64 connections now open) m30000| Fri Feb 22 12:36:31.167 [initandlisten] connection accepted from 127.0.0.1:47383 #78 (66 connections now open) Fri Feb 22 12:36:31.180 shell: started program 
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15864| MongoDB shell version: 2.4.0-rc1-pre- sh15862| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:31.196 [initandlisten] connection accepted from 127.0.0.1:48678 #224 (65 connections now open) m30999| Fri Feb 22 12:36:31.192 [mongosMain] connection accepted from 127.0.0.1:56328 #194 (161 connections now open) Fri Feb 22 12:36:31.218 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.195 [initandlisten] connection accepted from 127.0.0.1:60502 #79 (67 connections now open) sh15863| connecting to: localhost:30999/admin sh15865| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:31.236 [initandlisten] connection accepted from 127.0.0.1:38957 #225 (66 connections now open) m30000| Fri Feb 22 12:36:31.235 [initandlisten] connection accepted from 
127.0.0.1:37601 #80 (68 connections now open) Fri Feb 22 12:36:31.259 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:31.232 [mongosMain] connection accepted from 127.0.0.1:54095 #195 (162 connections now open) sh15866| MongoDB shell version: 2.4.0-rc1-pre- sh15864| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.274 [mongosMain] connection accepted from 127.0.0.1:35749 #196 (163 connections now open) m30001| Fri Feb 22 12:36:31.279 [initandlisten] connection accepted from 127.0.0.1:47644 #226 (67 connections now open) Fri Feb 22 12:36:31.305 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.278 [initandlisten] connection accepted from 127.0.0.1:63565 #81 (69 connections now open) sh15867| MongoDB shell version: 2.4.0-rc1-pre- sh15865| connecting to: 
localhost:30999/admin m30000| Fri Feb 22 12:36:31.345 [initandlisten] connection accepted from 127.0.0.1:58908 #82 (70 connections now open) m30001| Fri Feb 22 12:36:31.345 [initandlisten] connection accepted from 127.0.0.1:44837 #227 (68 connections now open) m30999| Fri Feb 22 12:36:31.340 [mongosMain] connection accepted from 127.0.0.1:50323 #197 (164 connections now open) Fri Feb 22 12:36:31.352 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15868| MongoDB shell version: 2.4.0-rc1-pre- sh15866| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:31.360 [initandlisten] connection accepted from 127.0.0.1:42254 #83 (71 connections now open) m30001| Fri Feb 22 12:36:31.361 [initandlisten] connection accepted from 127.0.0.1:50021 #228 (69 connections now open) m30999| Fri Feb 22 12:36:31.356 [mongosMain] connection accepted from 127.0.0.1:38257 #198 (165 connections now open) Fri Feb 22 12:36:31.397 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return 
db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15869| MongoDB shell version: 2.4.0-rc1-pre- sh15867| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:31.423 [initandlisten] connection accepted from 127.0.0.1:32940 #84 (72 connections now open) m30001| Fri Feb 22 12:36:31.423 [initandlisten] connection accepted from 127.0.0.1:42604 #229 (70 connections now open) m30999| Fri Feb 22 12:36:31.419 [mongosMain] connection accepted from 127.0.0.1:43617 #199 (166 connections now open) Fri Feb 22 12:36:31.439 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15870| MongoDB shell version: 2.4.0-rc1-pre- sh15868| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:31.452 [initandlisten] connection accepted from 127.0.0.1:37607 #85 (73 connections now open) m30999| Fri Feb 22 12:36:31.449 [mongosMain] connection accepted from 127.0.0.1:64527 #200 (167 connections now open) m30001| Fri Feb 22 12:36:31.453 [initandlisten] connection accepted from 127.0.0.1:51580 #230 (71 connections now open) Fri Feb 22 12:36:31.479 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", 
"testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15871| MongoDB shell version: 2.4.0-rc1-pre- sh15869| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:31.492 [initandlisten] connection accepted from 127.0.0.1:54379 #86 (74 connections now open) m30999| Fri Feb 22 12:36:31.489 [mongosMain] connection accepted from 127.0.0.1:51652 #201 (168 connections now open) m30001| Fri Feb 22 12:36:31.493 [initandlisten] connection accepted from 127.0.0.1:59261 #231 (72 connections now open) Fri Feb 22 12:36:31.518 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15872| MongoDB shell version: 2.4.0-rc1-pre- sh15870| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.554 [mongosMain] connection accepted from 127.0.0.1:58901 #202 (169 connections now open) m30000| Fri Feb 22 12:36:31.557 [initandlisten] connection accepted from 127.0.0.1:60455 #87 (75 connections now open) Fri Feb 22 12:36:31.558 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : 
"/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15873| MongoDB shell version: 2.4.0-rc1-pre- sh15871| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:31.558 [initandlisten] connection accepted from 127.0.0.1:55913 #232 (73 connections now open) m30999| Fri Feb 22 12:36:31.573 [mongosMain] connection accepted from 127.0.0.1:41774 #203 (170 connections now open) m30000| Fri Feb 22 12:36:31.577 [initandlisten] connection accepted from 127.0.0.1:40534 #88 (76 connections now open) Fri Feb 22 12:36:31.598 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:31.577 [initandlisten] connection accepted from 127.0.0.1:49414 #233 (74 connections now open) sh15874| MongoDB shell version: 2.4.0-rc1-pre- sh15872| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:31.617 [initandlisten] connection accepted from 127.0.0.1:41438 #234 (75 connections now open) m30999| Fri Feb 22 12:36:31.613 [mongosMain] connection 
accepted from 127.0.0.1:39702 #204 (171 connections now open) Fri Feb 22 12:36:31.638 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.616 [initandlisten] connection accepted from 127.0.0.1:61864 #89 (77 connections now open) sh15873| connecting to: localhost:30999/admin sh15875| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:31.654 [mongosMain] connection accepted from 127.0.0.1:53631 #205 (172 connections now open) Fri Feb 22 12:36:31.679 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.657 [initandlisten] connection accepted from 127.0.0.1:52972 #90 (78 connections now open) m30001| Fri Feb 22 12:36:31.658 [initandlisten] connection accepted from 127.0.0.1:35132 #235 (76 connections now open) sh15874| connecting to: localhost:30999/admin sh15876| MongoDB shell 
version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:31.708 [initandlisten] connection accepted from 127.0.0.1:54067 #236 (77 connections now open) m30999| Fri Feb 22 12:36:31.704 [mongosMain] connection accepted from 127.0.0.1:40123 #206 (173 connections now open) Fri Feb 22 12:36:31.717 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.708 [initandlisten] connection accepted from 127.0.0.1:55664 #91 (79 connections now open) sh15877| MongoDB shell version: 2.4.0-rc1-pre- sh15875| connecting to: localhost:30999/admin Fri Feb 22 12:36:31.776 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:31.734 [mongosMain] connection accepted from 127.0.0.1:42953 #207 (174 connections now open) m30000| Fri Feb 22 12:36:31.738 [initandlisten] connection accepted from 127.0.0.1:35537 #92 (80 connections 
now open) m30001| Fri Feb 22 12:36:31.739 [initandlisten] connection accepted from 127.0.0.1:64779 #237 (78 connections now open) sh15878| MongoDB shell version: 2.4.0-rc1-pre- sh15876| connecting to: localhost:30999/admin sh15877| connecting to: localhost:30999/admin Fri Feb 22 12:36:31.813 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:31.781 [mongosMain] connection accepted from 127.0.0.1:48364 #208 (175 connections now open) m30999| Fri Feb 22 12:36:31.802 [mongosMain] connection accepted from 127.0.0.1:57031 #209 (176 connections now open) m30000| Fri Feb 22 12:36:31.785 [initandlisten] connection accepted from 127.0.0.1:35209 #93 (81 connections now open) m30000| Fri Feb 22 12:36:31.806 [initandlisten] connection accepted from 127.0.0.1:44882 #94 (82 connections now open) m30001| Fri Feb 22 12:36:31.785 [initandlisten] connection accepted from 127.0.0.1:54979 #238 (79 connections now open) m30001| Fri Feb 22 12:36:31.806 [initandlisten] connection accepted from 127.0.0.1:33754 #239 (80 connections now open) sh15880| MongoDB shell version: 2.4.0-rc1-pre- sh15878| connecting to: localhost:30999/admin Fri Feb 22 12:36:31.850 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : 
"sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:31.858 [initandlisten] connection accepted from 127.0.0.1:37789 #240 (81 connections now open) Fri Feb 22 12:36:31.888 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:31.853 [mongosMain] connection accepted from 127.0.0.1:53993 #210 (177 connections now open) m30000| Fri Feb 22 12:36:31.857 [initandlisten] connection accepted from 127.0.0.1:43439 #95 (83 connections now open) sh15881| MongoDB shell version: 2.4.0-rc1-pre- sh15880| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.920 [mongosMain] connection accepted from 127.0.0.1:62289 #211 (178 connections now open) sh15883| MongoDB shell version: 2.4.0-rc1-pre- sh15882| MongoDB shell version: 2.4.0-rc1-pre- Fri Feb 22 12:36:31.923 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : 
"sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15881| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:31.925 [initandlisten] connection accepted from 127.0.0.1:58634 #241 (82 connections now open) Fri Feb 22 12:36:31.958 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.924 [initandlisten] connection accepted from 127.0.0.1:62236 #96 (84 connections now open) m30999| Fri Feb 22 12:36:31.957 [mongosMain] connection accepted from 127.0.0.1:41723 #212 (179 connections now open) sh15883| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:31.978 [mongosMain] connection accepted from 127.0.0.1:47546 #213 (180 connections now open) Fri Feb 22 12:36:31.990 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = 
db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:31.960 [initandlisten] connection accepted from 127.0.0.1:32879 #97 (85 connections now open) m30000| Fri Feb 22 12:36:31.982 [initandlisten] connection accepted from 127.0.0.1:50697 #98 (86 connections now open) m30001| Fri Feb 22 12:36:31.960 [initandlisten] connection accepted from 127.0.0.1:54064 #242 (83 connections now open) m30001| Fri Feb 22 12:36:31.983 [initandlisten] connection accepted from 127.0.0.1:65410 #243 (84 connections now open) sh15885| MongoDB shell version: 2.4.0-rc1-pre- sh15884| MongoDB shell version: 2.4.0-rc1-pre- sh15882| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:32.022 [mongosMain] connection accepted from 127.0.0.1:36275 #214 (181 connections now open) sh15884| connecting to: localhost:30999/admin Fri Feb 22 12:36:32.025 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.026 [initandlisten] connection accepted from 127.0.0.1:51031 #99 (87 connections now open) m30000| Fri Feb 22 12:36:32.034 [initandlisten] connection accepted from 127.0.0.1:47434 #100 (88 connections now open) Fri Feb 22 12:36:32.058 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo 
--eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:32.027 [initandlisten] connection accepted from 127.0.0.1:57089 #244 (85 connections now open) m30001| Fri Feb 22 12:36:32.034 [initandlisten] connection accepted from 127.0.0.1:64908 #245 (86 connections now open) m30999| Fri Feb 22 12:36:32.030 [mongosMain] connection accepted from 127.0.0.1:51654 #215 (182 connections now open) sh15885| connecting to: localhost:30999/admin sh15887| MongoDB shell version: 2.4.0-rc1-pre- sh15886| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:32.089 [initandlisten] connection accepted from 127.0.0.1:57511 #246 (87 connections now open) m30000| Fri Feb 22 12:36:32.089 [initandlisten] connection accepted from 127.0.0.1:56818 #101 (89 connections now open) Fri Feb 22 12:36:32.091 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:32.085 [mongosMain] connection accepted from 
127.0.0.1:42358 #216 (183 connections now open) sh15886| connecting to: localhost:30999/admin sh15888| MongoDB shell version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:36:32.095 [mongosMain] connection accepted from 127.0.0.1:63742 #217 (184 connections now open) m30001| Fri Feb 22 12:36:32.099 [initandlisten] connection accepted from 127.0.0.1:47882 #247 (88 connections now open) Fri Feb 22 12:36:32.125 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.098 [initandlisten] connection accepted from 127.0.0.1:37456 #102 (90 connections now open) sh15887| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.152 [initandlisten] connection accepted from 127.0.0.1:48286 #248 (89 connections now open) Fri Feb 22 12:36:32.157 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.151 
[initandlisten] connection accepted from 127.0.0.1:37735 #103 (91 connections now open) m30999| Fri Feb 22 12:36:32.147 [mongosMain] connection accepted from 127.0.0.1:41393 #218 (185 connections now open) sh15890| MongoDB shell version: 2.4.0-rc1-pre- sh15889| MongoDB shell version: 2.4.0-rc1-pre- sh15888| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:32.185 [mongosMain] connection accepted from 127.0.0.1:44645 #219 (186 connections now open) Fri Feb 22 12:36:32.189 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.189 [initandlisten] connection accepted from 127.0.0.1:43324 #104 (92 connections now open) sh15889| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.190 [initandlisten] connection accepted from 127.0.0.1:48162 #249 (90 connections now open) m30001| Fri Feb 22 12:36:32.219 [initandlisten] connection accepted from 127.0.0.1:54544 #250 (91 connections now open) m30000| Fri Feb 22 12:36:32.219 [initandlisten] connection accepted from 127.0.0.1:58248 #105 (93 connections now open) Fri Feb 22 12:36:32.222 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : 
false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:32.216 [mongosMain] connection accepted from 127.0.0.1:49154 #220 (187 connections now open) sh15892| MongoDB shell version: 2.4.0-rc1-pre- sh15890| connecting to: localhost:30999/admin sh15891| MongoDB shell version: 2.4.0-rc1-pre- m30000| Fri Feb 22 12:36:32.244 [initandlisten] connection accepted from 127.0.0.1:58295 #106 (94 connections now open) m30001| Fri Feb 22 12:36:32.244 [initandlisten] connection accepted from 127.0.0.1:42870 #251 (92 connections now open) m30999| Fri Feb 22 12:36:32.241 [mongosMain] connection accepted from 127.0.0.1:43581 #221 (188 connections now open) Fri Feb 22 12:36:32.254 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15893| MongoDB shell version: 2.4.0-rc1-pre- sh15891| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:32.263 [initandlisten] connection accepted from 127.0.0.1:34412 #107 (95 connections now open) m30999| Fri Feb 22 12:36:32.260 [mongosMain] connection accepted from 127.0.0.1:34825 #222 (189 connections now open) m30001| Fri Feb 22 12:36:32.263 [initandlisten] connection accepted from 127.0.0.1:53618 #252 (93 connections now 
open) Fri Feb 22 12:36:32.290 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15894| MongoDB shell version: 2.4.0-rc1-pre- sh15892| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.309 [initandlisten] connection accepted from 127.0.0.1:43879 #253 (94 connections now open) m30999| Fri Feb 22 12:36:32.304 [mongosMain] connection accepted from 127.0.0.1:41185 #223 (190 connections now open) m30000| Fri Feb 22 12:36:32.308 [initandlisten] connection accepted from 127.0.0.1:59819 #108 (96 connections now open) Fri Feb 22 12:36:32.330 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15895| MongoDB shell version: 2.4.0-rc1-pre- sh15893| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.350 [initandlisten] connection accepted from 127.0.0.1:37443 #254 (95 connections now open) m30999| Fri Feb 22 12:36:32.346 
[mongosMain] connection accepted from 127.0.0.1:42359 #224 (191 connections now open) Fri Feb 22 12:36:32.371 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.350 [initandlisten] connection accepted from 127.0.0.1:40482 #109 (97 connections now open) sh15896| MongoDB shell version: 2.4.0-rc1-pre- sh15894| connecting to: localhost:30999/admin m30000| Fri Feb 22 12:36:32.395 [initandlisten] connection accepted from 127.0.0.1:43359 #110 (98 connections now open) m30999| Fri Feb 22 12:36:32.391 [mongosMain] connection accepted from 127.0.0.1:42910 #225 (192 connections now open) m30001| Fri Feb 22 12:36:32.395 [initandlisten] connection accepted from 127.0.0.1:34959 #255 (96 connections now open) Fri Feb 22 12:36:32.409 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin sh15897| MongoDB shell version: 2.4.0-rc1-pre- 
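Each forked shell above is launched with the same `--eval` snippet: it blocks in `assert.soon(...)` until the `keepRunning` document gains a truthy `stop` field, which is how the test holds its temporary connections open. A minimal standalone sketch of an `assert.soon`-style poller in plain Node.js (no MongoDB needed; the name `assertSoon` and the 10 ms poll interval are illustrative assumptions, not the real shell helper):

```javascript
// Sketch of an assert.soon-style helper: poll `cond` until it returns a
// truthy value or `timeoutMs` elapses, then fail with `msg`. Illustrative
// only; the real mongo shell implements assert.soon internally.
function assertSoon(cond, msg, timeoutMs, intervalMs) {
  const deadline = Date.now() + timeoutMs;
  return new Promise((resolve, reject) => {
    const timer = setInterval(() => {
      let ok = false;
      try { ok = cond(); } catch (e) { /* condition not ready yet */ }
      if (ok) {
        clearInterval(timer);
        resolve();
      } else if (Date.now() > deadline) {
        clearInterval(timer);
        reject(new Error(msg));
      }
    }, intervalMs);
  });
}

// Usage: a stand-in for the keepRunning document gaining a `stop` field.
const keepRunning = { stop: false };
setTimeout(() => { keepRunning.stop = true; }, 50);

assertSoon(() => keepRunning.stop, "Parallel shell never told to terminate",
           10 * 60000, 10)
  .then(() => console.log("terminated cleanly"));
```

Polling with a deadline, rather than joining the parallel shells directly, is what lets the main test count open connections while the workers sit idle.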
sh15895| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.431 [initandlisten] connection accepted from 127.0.0.1:64383 #256 (97 connections now open) m30999| Fri Feb 22 12:36:32.426 [mongosMain] connection accepted from 127.0.0.1:53687 #226 (193 connections now open) Fri Feb 22 12:36:32.448 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.430 [initandlisten] connection accepted from 127.0.0.1:52710 #111 (99 connections now open) sh15898| MongoDB shell version: 2.4.0-rc1-pre- sh15896| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:32.472 [mongosMain] connection accepted from 127.0.0.1:38923 #227 (194 connections now open) m30000| Fri Feb 22 12:36:32.476 [initandlisten] connection accepted from 127.0.0.1:33747 #112 (100 connections now open) Fri Feb 22 12:36:32.491 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 
* 60000); localhost:30999/admin m30001| Fri Feb 22 12:36:32.477 [initandlisten] connection accepted from 127.0.0.1:32875 #257 (98 connections now open) sh15899| MongoDB shell version: 2.4.0-rc1-pre- sh15897| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.510 [initandlisten] connection accepted from 127.0.0.1:46308 #258 (99 connections now open) m30999| Fri Feb 22 12:36:32.506 [mongosMain] connection accepted from 127.0.0.1:40151 #228 (195 connections now open) Fri Feb 22 12:36:32.529 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30000| Fri Feb 22 12:36:32.510 [initandlisten] connection accepted from 127.0.0.1:55563 #113 (101 connections now open) sh15898| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:32.567 [mongosMain] connection accepted from 127.0.0.1:45121 #229 (196 connections now open) Fri Feb 22 12:36:32.569 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel 
shell never told to terminate", 10 * 60000); localhost:30999/admin sh15900| MongoDB shell version: 2.4.0-rc1-pre- sh15901| MongoDB shell version: 2.4.0-rc1-pre- sh15899| connecting to: localhost:30999/admin m30001| Fri Feb 22 12:36:32.572 [initandlisten] connection accepted from 127.0.0.1:55820 #259 (100 connections now open) m30001| Fri Feb 22 12:36:32.609 [initandlisten] connection accepted from 127.0.0.1:40433 #260 (101 connections now open) Fri Feb 22 12:36:32.612 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin m30999| Fri Feb 22 12:36:32.605 [mongosMain] connection accepted from 127.0.0.1:51109 #230 (197 connections now open) m30000| Fri Feb 22 12:36:32.572 [initandlisten] connection accepted from 127.0.0.1:45414 #114 (102 connections now open) m30000| Fri Feb 22 12:36:32.609 [initandlisten] connection accepted from 127.0.0.1:34886 #115 (103 connections now open) sh15900| connecting to: localhost:30999/admin sh15902| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:36:32.638 [initandlisten] connection accepted from 127.0.0.1:51698 #261 (102 connections now open) Fri Feb 22 12:36:32.652 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" 
: false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');assert.soon(function() {return db.getSiblingDB('connectionsOpenedTest').getCollection('keepRunning').findOne().stop;}, "Parallel shell never told to terminate", 10 * 60000); localhost:30999/admin
m30000| Fri Feb 22 12:36:32.637 [initandlisten] connection accepted from 127.0.0.1:35142 #116 (104 connections now open)
m30999| Fri Feb 22 12:36:32.634 [mongosMain] connection accepted from 127.0.0.1:58397 #231 (198 connections now open)
sh15901| connecting to: localhost:30999/admin
---- Testing that temporary connections increased the current and totalCreated counters ----
sh15903| MongoDB shell version: 2.4.0-rc1-pre-
sh15904| MongoDB shell version: 2.4.0-rc1-pre-
sh15902| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:36:32.695 [mongosMain] connection accepted from 127.0.0.1:35355 #232 (199 connections now open)
m30000| Fri Feb 22 12:36:32.700 [initandlisten] connection accepted from 127.0.0.1:42233 #117 (105 connections now open)
m30001| Fri Feb 22 12:36:32.701 [initandlisten] connection accepted from 127.0.0.1:58757 #262 (103 connections now open)
sh15903| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:36:32.730 [mongosMain] connection accepted from 127.0.0.1:61812 #233 (200 connections now open)
m30000| Fri Feb 22 12:36:32.734 [initandlisten] connection accepted from 127.0.0.1:53392 #118 (106 connections now open)
m30001| Fri Feb 22 12:36:32.735 [initandlisten] connection accepted from 127.0.0.1:37943 #263 (104 connections now open)
sh15904| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:36:32.743 [mongosMain] connection accepted from 127.0.0.1:58444 #234 (201 connections now open)
m30000| Fri Feb 22 12:36:32.747 [initandlisten] connection accepted from 127.0.0.1:50519 #119 (107 connections now open)
m30001| Fri Feb 22 12:36:32.747 [initandlisten] connection accepted from 127.0.0.1:41663 #264 (105 connections
now open) ---- Waiting for all temporary connections to be closed ---- m30999| Fri Feb 22 12:36:32.869 [conn179] end connection 127.0.0.1:33575 (200 connections now open) m30999| Fri Feb 22 12:36:32.869 [conn148] end connection 127.0.0.1:44056 (200 connections now open) m30999| Fri Feb 22 12:36:32.870 [conn200] end connection 127.0.0.1:64527 (198 connections now open) m30999| Fri Feb 22 12:36:32.875 [conn210] end connection 127.0.0.1:53993 (197 connections now open) m30999| Fri Feb 22 12:36:32.876 [conn205] end connection 127.0.0.1:53631 (196 connections now open) m30999| Fri Feb 22 12:36:32.877 [conn180] end connection 127.0.0.1:52629 (195 connections now open) m30999| Fri Feb 22 12:36:32.877 [conn185] end connection 127.0.0.1:59087 (194 connections now open) m30999| Fri Feb 22 12:36:32.877 [conn170] end connection 127.0.0.1:56899 (193 connections now open) m30999| Fri Feb 22 12:36:32.877 [conn222] end connection 127.0.0.1:34825 (192 connections now open) m30999| Fri Feb 22 12:36:32.878 [conn175] end connection 127.0.0.1:36015 (191 connections now open) m30999| Fri Feb 22 12:36:32.883 [conn138] end connection 127.0.0.1:55841 (190 connections now open) m30999| Fri Feb 22 12:36:32.891 [conn227] end connection 127.0.0.1:38923 (189 connections now open) m30999| Fri Feb 22 12:36:32.892 [conn165] end connection 127.0.0.1:44709 (188 connections now open) m30999| Fri Feb 22 12:36:32.894 [conn149] end connection 127.0.0.1:34878 (187 connections now open) m30999| Fri Feb 22 12:36:32.898 [conn196] end connection 127.0.0.1:35749 (186 connections now open) m30999| Fri Feb 22 12:36:32.902 [conn144] end connection 127.0.0.1:39493 (185 connections now open) m30999| Fri Feb 22 12:36:32.906 [conn216] end connection 127.0.0.1:42358 (184 connections now open) m30999| Fri Feb 22 12:36:32.910 [conn153] end connection 127.0.0.1:63324 (183 connections now open) m30999| Fri Feb 22 12:36:32.911 [conn154] end connection 127.0.0.1:58583 (182 connections now open) m30999| Fri Feb 22 
12:36:32.911 [conn201] end connection 127.0.0.1:51652 (181 connections now open) m30999| Fri Feb 22 12:36:32.912 [conn186] end connection 127.0.0.1:49112 (180 connections now open) m30999| Fri Feb 22 12:36:32.914 [conn232] end connection 127.0.0.1:35355 (179 connections now open) m30999| Fri Feb 22 12:36:32.915 [conn217] end connection 127.0.0.1:63742 (178 connections now open) m30999| Fri Feb 22 12:36:32.919 [conn176] end connection 127.0.0.1:41833 (177 connections now open) m30999| Fri Feb 22 12:36:32.919 [conn171] end connection 127.0.0.1:39066 (176 connections now open) m30999| Fri Feb 22 12:36:32.921 [conn139] end connection 127.0.0.1:64143 (175 connections now open) m30999| Fri Feb 22 12:36:32.921 [conn191] end connection 127.0.0.1:62509 (175 connections now open) m30999| Fri Feb 22 12:36:32.922 [conn155] end connection 127.0.0.1:37284 (173 connections now open) m30999| Fri Feb 22 12:36:32.924 [conn223] end connection 127.0.0.1:41185 (172 connections now open) m30999| Fri Feb 22 12:36:32.925 [conn228] end connection 127.0.0.1:40151 (171 connections now open) m30999| Fri Feb 22 12:36:32.927 [conn206] end connection 127.0.0.1:40123 (170 connections now open) m30999| Fri Feb 22 12:36:32.930 [conn181] end connection 127.0.0.1:56956 (169 connections now open) m30999| Fri Feb 22 12:36:32.941 [conn145] end connection 127.0.0.1:54440 (168 connections now open) m30999| Fri Feb 22 12:36:32.941 [conn150] end connection 127.0.0.1:62832 (168 connections now open) m30999| Fri Feb 22 12:36:32.942 [conn211] end connection 127.0.0.1:62289 (166 connections now open) m30999| Fri Feb 22 12:36:32.948 [conn233] end connection 127.0.0.1:61812 (165 connections now open) m30999| Fri Feb 22 12:36:32.953 [conn192] end connection 127.0.0.1:37709 (164 connections now open) m30999| Fri Feb 22 12:36:32.955 [conn187] end connection 127.0.0.1:36419 (163 connections now open) m30999| Fri Feb 22 12:36:32.956 [conn140] end connection 127.0.0.1:40507 (162 connections now open) m30999| Fri Feb 22 
12:36:32.956 [conn207] end connection 127.0.0.1:42953 (162 connections now open) m30999| Fri Feb 22 12:36:32.957 [conn177] end connection 127.0.0.1:64489 (160 connections now open) m30999| Fri Feb 22 12:36:32.961 [conn234] end connection 127.0.0.1:58444 (159 connections now open) m30999| Fri Feb 22 12:36:32.961 [conn156] end connection 127.0.0.1:64876 (158 connections now open) m30999| Fri Feb 22 12:36:32.965 [conn197] end connection 127.0.0.1:50323 (157 connections now open) m30999| Fri Feb 22 12:36:32.965 [conn182] end connection 127.0.0.1:45378 (156 connections now open) m30999| Fri Feb 22 12:36:32.965 [conn224] end connection 127.0.0.1:42359 (156 connections now open) m30999| Fri Feb 22 12:36:32.966 [conn166] end connection 127.0.0.1:56579 (154 connections now open) m30999| Fri Feb 22 12:36:32.968 [conn218] end connection 127.0.0.1:41393 (153 connections now open) m30999| Fri Feb 22 12:36:32.969 [conn135] end connection 127.0.0.1:48963 (152 connections now open) m30999| Fri Feb 22 12:36:32.977 [conn202] end connection 127.0.0.1:58901 (151 connections now open) m30999| Fri Feb 22 12:36:32.978 [conn212] end connection 127.0.0.1:41723 (150 connections now open) m30001| Fri Feb 22 12:36:32.978 [conn242] end connection 127.0.0.1:54064 (104 connections now open) m30000| Fri Feb 22 12:36:32.978 [conn97] end connection 127.0.0.1:32879 (106 connections now open) m30999| Fri Feb 22 12:36:32.979 [conn198] end connection 127.0.0.1:38257 (149 connections now open) m30000| Fri Feb 22 12:36:32.980 [conn83] end connection 127.0.0.1:42254 (105 connections now open) m30001| Fri Feb 22 12:36:32.980 [conn228] end connection 127.0.0.1:50021 (103 connections now open) m30999| Fri Feb 22 12:36:32.983 [conn172] end connection 127.0.0.1:44971 (148 connections now open) m30999| Fri Feb 22 12:36:32.983 [conn151] end connection 127.0.0.1:62791 (147 connections now open) m30000| Fri Feb 22 12:36:32.983 [conn57] end connection 127.0.0.1:52464 (104 connections now open) m30001| Fri Feb 22 
12:36:32.983 [conn202] end connection 127.0.0.1:43324 (102 connections now open) m30000| Fri Feb 22 12:36:32.983 [conn36] end connection 127.0.0.1:43690 (103 connections now open) m30001| Fri Feb 22 12:36:32.983 [conn181] end connection 127.0.0.1:37500 (101 connections now open) m30999| Fri Feb 22 12:36:32.987 [conn193] end connection 127.0.0.1:34495 (146 connections now open) m30999| Fri Feb 22 12:36:32.987 [conn229] end connection 127.0.0.1:45121 (146 connections now open) m30000| Fri Feb 22 12:36:32.987 [conn78] end connection 127.0.0.1:47383 (102 connections now open) m30001| Fri Feb 22 12:36:32.987 [conn223] end connection 127.0.0.1:50983 (100 connections now open) m30000| Fri Feb 22 12:36:32.987 [conn114] end connection 127.0.0.1:45414 (101 connections now open) m30001| Fri Feb 22 12:36:32.987 [conn259] end connection 127.0.0.1:55820 (99 connections now open) m30999| Fri Feb 22 12:36:32.990 [conn188] end connection 127.0.0.1:40774 (144 connections now open) m30001| Fri Feb 22 12:36:32.991 [conn218] end connection 127.0.0.1:44020 (98 connections now open) m30000| Fri Feb 22 12:36:32.991 [conn73] end connection 127.0.0.1:36741 (100 connections now open) m30999| Fri Feb 22 12:36:32.991 [conn157] end connection 127.0.0.1:62948 (143 connections now open) m30000| Fri Feb 22 12:36:32.992 [conn42] end connection 127.0.0.1:42603 (99 connections now open) m30001| Fri Feb 22 12:36:32.992 [conn187] end connection 127.0.0.1:59355 (97 connections now open) m30999| Fri Feb 22 12:36:32.993 [conn173] end connection 127.0.0.1:34556 (142 connections now open) m30000| Fri Feb 22 12:36:32.993 [conn58] end connection 127.0.0.1:54408 (98 connections now open) m30001| Fri Feb 22 12:36:32.993 [conn203] end connection 127.0.0.1:39562 (96 connections now open) m30999| Fri Feb 22 12:36:32.994 [conn141] end connection 127.0.0.1:44880 (141 connections now open) m30000| Fri Feb 22 12:36:32.994 [conn26] end connection 127.0.0.1:53215 (97 connections now open) m30001| Fri Feb 22 12:36:32.994 
[conn171] end connection 127.0.0.1:40338 (95 connections now open) m30999| Fri Feb 22 12:36:32.995 [conn203] end connection 127.0.0.1:41774 (140 connections now open) m30000| Fri Feb 22 12:36:32.996 [conn88] end connection 127.0.0.1:40534 (96 connections now open) m30001| Fri Feb 22 12:36:32.996 [conn233] end connection 127.0.0.1:49414 (94 connections now open) m30999| Fri Feb 22 12:36:32.996 [conn146] end connection 127.0.0.1:53902 (139 connections now open) m30001| Fri Feb 22 12:36:32.996 [conn176] end connection 127.0.0.1:45830 (93 connections now open) m30000| Fri Feb 22 12:36:32.996 [conn31] end connection 127.0.0.1:42526 (95 connections now open) m30999| Fri Feb 22 12:36:32.999 [conn183] end connection 127.0.0.1:57950 (138 connections now open) m30001| Fri Feb 22 12:36:32.999 [conn213] end connection 127.0.0.1:47007 (92 connections now open) m30000| Fri Feb 22 12:36:32.999 [conn68] end connection 127.0.0.1:44497 (94 connections now open) m30999| Fri Feb 22 12:36:33.000 [conn213] end connection 127.0.0.1:47546 (137 connections now open) m30000| Fri Feb 22 12:36:33.000 [conn98] end connection 127.0.0.1:50697 (93 connections now open) m30001| Fri Feb 22 12:36:33.000 [conn243] end connection 127.0.0.1:65410 (91 connections now open) m30999| Fri Feb 22 12:36:33.003 [conn178] end connection 127.0.0.1:44472 (136 connections now open) m30001| Fri Feb 22 12:36:33.003 [conn208] end connection 127.0.0.1:57254 (90 connections now open) m30000| Fri Feb 22 12:36:33.003 [conn63] end connection 127.0.0.1:65497 (92 connections now open) m30999| Fri Feb 22 12:36:33.003 [conn167] end connection 127.0.0.1:59819 (135 connections now open) m30000| Fri Feb 22 12:36:33.003 [conn52] end connection 127.0.0.1:49319 (91 connections now open) m30001| Fri Feb 22 12:36:33.004 [conn197] end connection 127.0.0.1:60132 (89 connections now open) m30999| Fri Feb 22 12:36:33.004 [conn208] end connection 127.0.0.1:48364 (134 connections now open) m30000| Fri Feb 22 12:36:33.004 [conn93] end 
connection 127.0.0.1:35209 (90 connections now open) m30001| Fri Feb 22 12:36:33.004 [conn238] end connection 127.0.0.1:54979 (88 connections now open) m30999| Fri Feb 22 12:36:33.006 [conn219] end connection 127.0.0.1:44645 (133 connections now open) m30001| Fri Feb 22 12:36:33.007 [conn249] end connection 127.0.0.1:48162 (87 connections now open) m30000| Fri Feb 22 12:36:33.007 [conn104] end connection 127.0.0.1:43324 (89 connections now open) m30999| Fri Feb 22 12:36:33.008 [conn136] end connection 127.0.0.1:65137 (132 connections now open) m30000| Fri Feb 22 12:36:33.008 [conn8] end connection 127.0.0.1:46525 (88 connections now open) m30001| Fri Feb 22 12:36:33.008 [conn166] end connection 127.0.0.1:54927 (86 connections now open) m30999| Fri Feb 22 12:36:33.008 [conn152] end connection 127.0.0.1:38471 (131 connections now open) m30000| Fri Feb 22 12:36:33.009 [conn37] end connection 127.0.0.1:49466 (87 connections now open) m30001| Fri Feb 22 12:36:33.009 [conn182] end connection 127.0.0.1:39515 (85 connections now open) m30999| Fri Feb 22 12:36:33.012 [conn225] end connection 127.0.0.1:42910 (130 connections now open) m30000| Fri Feb 22 12:36:33.012 [conn110] end connection 127.0.0.1:43359 (86 connections now open) m30001| Fri Feb 22 12:36:33.012 [conn255] end connection 127.0.0.1:34959 (84 connections now open) m30999| Fri Feb 22 12:36:33.014 [conn162] end connection 127.0.0.1:35382 (129 connections now open) m30999| Fri Feb 22 12:36:33.014 [conn160] end connection 127.0.0.1:38262 (129 connections now open) m30999| Fri Feb 22 12:36:33.014 [conn161] end connection 127.0.0.1:61576 (127 connections now open) m30000| Fri Feb 22 12:36:33.014 [conn47] end connection 127.0.0.1:34278 (85 connections now open) m30001| Fri Feb 22 12:36:33.014 [conn192] end connection 127.0.0.1:34767 (83 connections now open) m30000| Fri Feb 22 12:36:33.014 [conn46] end connection 127.0.0.1:49786 (84 connections now open) m30000| Fri Feb 22 12:36:33.014 [conn45] end connection 
127.0.0.1:51165 (84 connections now open) m30001| Fri Feb 22 12:36:33.015 [conn190] end connection 127.0.0.1:36928 (82 connections now open) m30001| Fri Feb 22 12:36:33.015 [conn191] end connection 127.0.0.1:59421 (82 connections now open) m30999| Fri Feb 22 12:36:33.017 [conn194] end connection 127.0.0.1:56328 (126 connections now open) m30999| Fri Feb 22 12:36:33.017 [conn163] end connection 127.0.0.1:52737 (126 connections now open) m30000| Fri Feb 22 12:36:33.017 [conn79] end connection 127.0.0.1:60502 (82 connections now open) m30000| Fri Feb 22 12:36:33.017 [conn48] end connection 127.0.0.1:47618 (82 connections now open) m30001| Fri Feb 22 12:36:33.017 [conn193] end connection 127.0.0.1:49506 (80 connections now open) m30001| Fri Feb 22 12:36:33.017 [conn224] end connection 127.0.0.1:48678 (80 connections now open) m30999| Fri Feb 22 12:36:33.024 [conn230] end connection 127.0.0.1:51109 (124 connections now open) m30001| Fri Feb 22 12:36:33.025 [conn260] end connection 127.0.0.1:40433 (78 connections now open) m30000| Fri Feb 22 12:36:33.025 [conn115] end connection 127.0.0.1:34886 (80 connections now open) m30999| Fri Feb 22 12:36:33.025 [conn209] end connection 127.0.0.1:57031 (123 connections now open) m30000| Fri Feb 22 12:36:33.025 [conn94] end connection 127.0.0.1:44882 (79 connections now open) m30001| Fri Feb 22 12:36:33.025 [conn239] end connection 127.0.0.1:33754 (77 connections now open) m30999| Fri Feb 22 12:36:33.030 [conn158] end connection 127.0.0.1:49946 (122 connections now open) m30999| Fri Feb 22 12:36:33.030 [conn189] end connection 127.0.0.1:33099 (122 connections now open) m30000| Fri Feb 22 12:36:33.030 [conn43] end connection 127.0.0.1:33765 (78 connections now open) m30001| Fri Feb 22 12:36:33.030 [conn188] end connection 127.0.0.1:43917 (76 connections now open) m30000| Fri Feb 22 12:36:33.031 [conn74] end connection 127.0.0.1:56142 (77 connections now open) m30001| Fri Feb 22 12:36:33.031 [conn219] end connection 127.0.0.1:59154 
(75 connections now open) m30999| Fri Feb 22 12:36:33.031 [conn142] end connection 127.0.0.1:41440 (120 connections now open) m30000| Fri Feb 22 12:36:33.031 [conn27] end connection 127.0.0.1:42542 (76 connections now open) m30001| Fri Feb 22 12:36:33.031 [conn172] end connection 127.0.0.1:37223 (74 connections now open) m30999| Fri Feb 22 12:36:33.033 [conn174] end connection 127.0.0.1:63821 (119 connections now open) m30000| Fri Feb 22 12:36:33.034 [conn59] end connection 127.0.0.1:59951 (75 connections now open) m30001| Fri Feb 22 12:36:33.034 [conn204] end connection 127.0.0.1:55750 (73 connections now open) m30999| Fri Feb 22 12:36:33.036 [conn220] end connection 127.0.0.1:49154 (118 connections now open) m30000| Fri Feb 22 12:36:33.036 [conn105] end connection 127.0.0.1:58248 (74 connections now open) m30001| Fri Feb 22 12:36:33.036 [conn250] end connection 127.0.0.1:54544 (72 connections now open) m30999| Fri Feb 22 12:36:33.036 [conn204] end connection 127.0.0.1:39702 (117 connections now open) m30000| Fri Feb 22 12:36:33.036 [conn89] end connection 127.0.0.1:61864 (73 connections now open) m30001| Fri Feb 22 12:36:33.036 [conn234] end connection 127.0.0.1:41438 (71 connections now open) m30999| Fri Feb 22 12:36:33.040 [conn168] end connection 127.0.0.1:47082 (116 connections now open) m30999| Fri Feb 22 12:36:33.040 [conn184] end connection 127.0.0.1:54921 (115 connections now open) m30001| Fri Feb 22 12:36:33.040 [conn198] end connection 127.0.0.1:36425 (70 connections now open) m30000| Fri Feb 22 12:36:33.040 [conn53] end connection 127.0.0.1:41176 (72 connections now open) m30000| Fri Feb 22 12:36:33.040 [conn69] end connection 127.0.0.1:56122 (71 connections now open) m30001| Fri Feb 22 12:36:33.040 [conn214] end connection 127.0.0.1:39113 (69 connections now open) m30999| Fri Feb 22 12:36:33.043 [conn169] end connection 127.0.0.1:52827 (114 connections now open) m30001| Fri Feb 22 12:36:33.043 [conn199] end connection 127.0.0.1:41234 (68 connections 
now open) m30000| Fri Feb 22 12:36:33.043 [conn54] end connection 127.0.0.1:47208 (70 connections now open) m30999| Fri Feb 22 12:36:33.044 [conn199] end connection 127.0.0.1:43617 (113 connections now open) m30000| Fri Feb 22 12:36:33.044 [conn84] end connection 127.0.0.1:32940 (69 connections now open) m30001| Fri Feb 22 12:36:33.044 [conn229] end connection 127.0.0.1:42604 (67 connections now open) m30999| Fri Feb 22 12:36:33.044 [conn147] end connection 127.0.0.1:59059 (112 connections now open) m30001| Fri Feb 22 12:36:33.044 [conn177] end connection 127.0.0.1:44267 (66 connections now open) m30000| Fri Feb 22 12:36:33.044 [conn32] end connection 127.0.0.1:59166 (68 connections now open) m30999| Fri Feb 22 12:36:33.044 [conn214] end connection 127.0.0.1:36275 (111 connections now open) m30000| Fri Feb 22 12:36:33.044 [conn99] end connection 127.0.0.1:51031 (67 connections now open) m30001| Fri Feb 22 12:36:33.044 [conn244] end connection 127.0.0.1:57089 (65 connections now open) m30999| Fri Feb 22 12:36:33.046 [conn226] end connection 127.0.0.1:53687 (110 connections now open) m30000| Fri Feb 22 12:36:33.046 [conn111] end connection 127.0.0.1:52710 (66 connections now open) m30001| Fri Feb 22 12:36:33.046 [conn256] end connection 127.0.0.1:64383 (64 connections now open) m30999| Fri Feb 22 12:36:33.047 [conn137] end connection 127.0.0.1:53561 (109 connections now open) m30000| Fri Feb 22 12:36:33.047 [conn22] end connection 127.0.0.1:48217 (65 connections now open) m30001| Fri Feb 22 12:36:33.047 [conn167] end connection 127.0.0.1:47581 (63 connections now open) m30999| Fri Feb 22 12:36:33.052 [conn215] end connection 127.0.0.1:51654 (108 connections now open) m30000| Fri Feb 22 12:36:33.052 [conn100] end connection 127.0.0.1:47434 (64 connections now open) m30001| Fri Feb 22 12:36:33.052 [conn245] end connection 127.0.0.1:64908 (62 connections now open) m30999| Fri Feb 22 12:36:33.054 [conn231] end connection 127.0.0.1:58397 (107 connections now open) m30000| 
Fri Feb 22 12:36:33.054 [conn116] end connection 127.0.0.1:35142 (63 connections now open) m30001| Fri Feb 22 12:36:33.054 [conn261] end connection 127.0.0.1:51698 (61 connections now open) m30999| Fri Feb 22 12:36:33.056 [conn164] end connection 127.0.0.1:36217 (106 connections now open) m30000| Fri Feb 22 12:36:33.056 [conn49] end connection 127.0.0.1:63731 (62 connections now open) m30001| Fri Feb 22 12:36:33.056 [conn194] end connection 127.0.0.1:40050 (60 connections now open) m30999| Fri Feb 22 12:36:33.056 [conn195] end connection 127.0.0.1:54095 (105 connections now open) m30000| Fri Feb 22 12:36:33.056 [conn80] end connection 127.0.0.1:37601 (61 connections now open) m30001| Fri Feb 22 12:36:33.056 [conn225] end connection 127.0.0.1:38957 (59 connections now open) m30999| Fri Feb 22 12:36:33.061 [conn221] end connection 127.0.0.1:43581 (104 connections now open) m30000| Fri Feb 22 12:36:33.061 [conn106] end connection 127.0.0.1:58295 (60 connections now open) m30001| Fri Feb 22 12:36:33.062 [conn251] end connection 127.0.0.1:42870 (58 connections now open) m30999| Fri Feb 22 12:36:33.065 [conn143] end connection 127.0.0.1:33989 (103 connections now open) m30000| Fri Feb 22 12:36:33.065 [conn28] end connection 127.0.0.1:54928 (59 connections now open) m30001| Fri Feb 22 12:36:33.065 [conn173] end connection 127.0.0.1:44539 (57 connections now open) m30999| Fri Feb 22 12:36:33.066 [conn190] end connection 127.0.0.1:63425 (102 connections now open) m30000| Fri Feb 22 12:36:33.066 [conn75] end connection 127.0.0.1:61326 (58 connections now open) m30001| Fri Feb 22 12:36:33.066 [conn220] end connection 127.0.0.1:46528 (56 connections now open) m30999| Fri Feb 22 12:36:33.068 [conn159] end connection 127.0.0.1:43458 (101 connections now open) m30000| Fri Feb 22 12:36:33.068 [conn44] end connection 127.0.0.1:40558 (57 connections now open) m30001| Fri Feb 22 12:36:33.068 [conn189] end connection 127.0.0.1:33199 (55 connections now open) ---- Testing that current 
connections counter went down after temporary connections closed ----
4391ms
*******************************************
Test : jstests/sort2.js ...
m30999| Fri Feb 22 12:36:33.082 [conn1] DROP: test.sort2
m30001| Fri Feb 22 12:36:33.083 [conn164] CMD: drop test.sort2
m30001| Fri Feb 22 12:36:33.084 [conn164] build index test.sort2 { _id: 1 }
m30001| Fri Feb 22 12:36:33.086 [conn164] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:36:33.087 [conn164] build index test.sort2 { y.a: 1.0 }
m30001| Fri Feb 22 12:36:33.088 [conn164] build index done. scanned 4 total records. 0.001 secs
m30001| Fri Feb 22 12:36:33.090 [conn9] CMD: validate test.sort2
m30001| Fri Feb 22 12:36:33.090 [conn9] validating index 0: test.sort2.$_id_
m30001| Fri Feb 22 12:36:33.090 [conn9] validating index 1: test.sort2.$y.a_1
m30999| Fri Feb 22 12:36:33.091 [conn1] DROP: test.sort2
m30001| Fri Feb 22 12:36:33.091 [conn164] CMD: drop test.sort2
m30001| Fri Feb 22 12:36:33.100 [conn164] build index test.sort2 { _id: 1 }
m30001| Fri Feb 22 12:36:33.101 [conn164] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:36:33.102 [conn164] build index test.sort2 { x: 1.0 }
m30001| Fri Feb 22 12:36:33.103 [conn164] build index done. scanned 5 total records. 0 secs
23ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2selfintersectingpoly.js
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/dropdb_race.js
*******************************************
Test : jstests/evalc.js ...
m30999| Fri Feb 22 12:36:33.110 [conn1] DROP: test.jstests_evalc
m30001| Fri Feb 22 12:36:33.110 [conn164] CMD: drop test.jstests_evalc
m30999| Fri Feb 22 12:36:33.110 [conn1] DROP: test.evalc_done
m30001| Fri Feb 22 12:36:33.111 [conn164] CMD: drop test.evalc_done
m30001| Fri Feb 22 12:36:33.112 [conn164] build index test.jstests_evalc { _id: 1 }
m30001| Fri Feb 22 12:36:33.113 [conn164] build index done. scanned 0 total records. 0.001 secs
Fri Feb 22 12:36:33.148 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');print( 'starting forked:' + Date() ); for ( i=0; i<50000; i++ ){ db.currentOp(); } print( 'ending forked:' + Date() ); db.evalc_done.insert( { x : 1 } ); localhost:30999/admin
starting eval: Fri Feb 22 2013 12:36:33 GMT+0000 (UTC)
sh15905| MongoDB shell version: 2.4.0-rc1-pre-
sh15905| connecting to: localhost:30999/admin
m30999| Fri Feb 22 12:36:33.250 [mongosMain] connection accepted from 127.0.0.1:58998 #235 (102 connections now open)
sh15905| starting forked:Fri Feb 22 2013 12:36:33 GMT+0000 (UTC)
m30001| Fri Feb 22 12:36:34.970 [conn164] command test.$cmd command: { $eval: "db.jstests_evalc.count( {i:10} );" } ntoreturn:1 keyUpdates:0 locks(micros) W:106686 reslen:53 106ms
m30000| Fri Feb 22 12:36:35.780 [initandlisten] connection accepted from 127.0.0.1:57529 #120 (58 connections now open)
m30999| Fri Feb 22 12:36:35.780 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276653881c8e745391606f
m30999| Fri Feb 22 12:36:35.781 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30999| Fri Feb 22 12:36:41.783 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276659881c8e7453916070
m30999| Fri Feb 22 12:36:41.784 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
sh15905| ending forked:Fri Feb 22 2013 12:36:46 GMT+0000 (UTC) m30001| Fri Feb 22 12:36:46.574 [conn232] build index test.evalc_done { _id: 1 } m30001| Fri Feb 22 12:36:46.575 [conn232] build index done. scanned 0 total records. 0.001 secs end eval: Fri Feb 22 2013 12:36:46 GMT+0000 (UTC) m30999| Fri Feb 22 12:36:46.588 [conn235] end connection 127.0.0.1:58998 (101 connections now open) 13508ms ******************************************* Test : jstests/and2.js ... m30999| Fri Feb 22 12:36:46.620 [conn1] DROP: test.jstests_and2 m30001| Fri Feb 22 12:36:46.621 [conn164] CMD: drop test.jstests_and2 m30001| Fri Feb 22 12:36:46.622 [conn164] build index test.jstests_and2 { _id: 1 } m30001| Fri Feb 22 12:36:46.623 [conn164] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:36:46.623 [conn1] DROP: test.jstests_and2 m30001| Fri Feb 22 12:36:46.623 [conn164] CMD: drop test.jstests_and2 m30001| Fri Feb 22 12:36:46.629 [conn164] build index test.jstests_and2 { _id: 1 } m30001| Fri Feb 22 12:36:46.630 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:46.631 [conn1] DROP: test.jstests_and2 m30001| Fri Feb 22 12:36:46.631 [conn164] CMD: drop test.jstests_and2 m30001| Fri Feb 22 12:36:46.635 [conn164] build index test.jstests_and2 { _id: 1 } m30001| Fri Feb 22 12:36:46.636 [conn164] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:46.639 [conn1] DROP: test.jstests_and2 m30001| Fri Feb 22 12:36:46.639 [conn164] CMD: drop test.jstests_and2 m30001| Fri Feb 22 12:36:46.644 [conn164] build index test.jstests_and2 { _id: 1 } m30001| Fri Feb 22 12:36:46.644 [conn164] build index done. scanned 0 total records. 0 secs 32ms ******************************************* Test : jstests/count6.js ... 
m30999| Fri Feb 22 12:36:46.646 [conn1] DROP: test.jstests_count6 m30001| Fri Feb 22 12:36:46.646 [conn164] CMD: drop test.jstests_count6 m30001| Fri Feb 22 12:36:46.647 [conn164] build index test.jstests_count6 { _id: 1 } m30001| Fri Feb 22 12:36:46.648 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:46.648 [conn164] info: creating collection test.jstests_count6 on add index m30001| Fri Feb 22 12:36:46.648 [conn164] build index test.jstests_count6 { b: 1.0, a: 1.0 } m30001| Fri Feb 22 12:36:46.649 [conn164] build index done. scanned 0 total records. 0.001 secs m30999| Fri Feb 22 12:36:46.883 [conn1] DROP: test.jstests_count6 m30001| Fri Feb 22 12:36:46.883 [conn164] CMD: drop test.jstests_count6 m30001| Fri Feb 22 12:36:46.891 [conn164] build index test.jstests_count6 { _id: 1 } m30001| Fri Feb 22 12:36:46.892 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:46.892 [conn164] info: creating collection test.jstests_count6 on add index m30001| Fri Feb 22 12:36:46.892 [conn164] build index test.jstests_count6 { b: 1.0, a: 1.0 } m30001| Fri Feb 22 12:36:46.893 [conn164] build index done. scanned 0 total records. 0 secs 449ms ******************************************* Test : jstests/sortj.js ... m30999| Fri Feb 22 12:36:47.095 [conn1] DROP: test.jstests_sortj m30001| Fri Feb 22 12:36:47.096 [conn164] CMD: drop test.jstests_sortj m30001| Fri Feb 22 12:36:47.097 [conn164] build index test.jstests_sortj { _id: 1 } m30001| Fri Feb 22 12:36:47.097 [conn164] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:47.097 [conn164] info: creating collection test.jstests_sortj on add index m30001| Fri Feb 22 12:36:47.098 [conn164] build index test.jstests_sortj { a: 1.0 } m30001| Fri Feb 22 12:36:47.098 [conn164] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:36:47.786 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127665f881c8e7453916071 m30999| Fri Feb 22 12:36:47.787 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30001| Fri Feb 22 12:36:47.999 [conn164] assertion 10128 too much data for sort() with no index. add an index or specify a smaller limit ns:test.jstests_sortj query:{ query: { a: { $gte: 0.0 }, c: null }, orderby: { d: 1.0 } } m30001| Fri Feb 22 12:36:47.999 [conn164] problem detected during query over test.jstests_sortj : { $err: "too much data for sort() with no index. add an index or specify a smaller limit", code: 10128 } m30999| Fri Feb 22 12:36:47.999 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10128 too much data for sort() with no index. add an index or specify a smaller limit m30001| Fri Feb 22 12:36:47.999 [conn164] end connection 127.0.0.1:60836 (54 connections now open) m30999| Fri Feb 22 12:36:48.000 [conn1] DROP: test.jstests_sortj m30001| Fri Feb 22 12:36:48.000 [conn232] CMD: drop test.jstests_sortj 915ms ******************************************* Test : jstests/covered_index_sort_3.js ... m30999| Fri Feb 22 12:36:48.016 [conn1] DROP: test.covered_sort_3 m30001| Fri Feb 22 12:36:48.017 [conn232] CMD: drop test.covered_sort_3 m30001| Fri Feb 22 12:36:48.018 [conn232] build index test.covered_sort_3 { _id: 1 } m30001| Fri Feb 22 12:36:48.019 [conn232] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.202 [conn232] build index test.covered_sort_3 { a: 1.0, b: -1.0, c: 1.0 } m30001| Fri Feb 22 12:36:48.204 [conn232] build index done. scanned 100 total records. 
0.002 secs all tests pass 198ms ******************************************* Test : jstests/temp_cleanup.js ... m30999| Fri Feb 22 12:36:48.209 [conn1] couldn't find database [temp_cleanup_test] in config db m30999| Fri Feb 22 12:36:48.211 [conn1] put [temp_cleanup_test] on: shard0000:localhost:30000 m30999| Fri Feb 22 12:36:48.211 [conn1] DROP: temp_cleanup_test.tempCleanup m30000| Fri Feb 22 12:36:48.212 [conn6] CMD: drop temp_cleanup_test.tempCleanup m30000| Fri Feb 22 12:36:48.212 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/temp_cleanup_test.ns, filling with zeroes... m30000| Fri Feb 22 12:36:48.212 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/temp_cleanup_test.ns, size: 16MB, took 0 secs m30000| Fri Feb 22 12:36:48.212 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/temp_cleanup_test.0, filling with zeroes... m30000| Fri Feb 22 12:36:48.213 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/temp_cleanup_test.0, size: 64MB, took 0 secs m30000| Fri Feb 22 12:36:48.213 [FileAllocator] allocating new datafile /data/db/sharding_passthrough0/temp_cleanup_test.1, filling with zeroes... m30000| Fri Feb 22 12:36:48.213 [FileAllocator] done allocating datafile /data/db/sharding_passthrough0/temp_cleanup_test.1, size: 128MB, took 0 secs m30000| Fri Feb 22 12:36:48.216 [conn6] build index temp_cleanup_test.tempCleanup { _id: 1 } m30000| Fri Feb 22 12:36:48.217 [conn6] build index done. scanned 0 total records. 0.001 secs m30000| Fri Feb 22 12:36:48.251 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0 m30000| Fri Feb 22 12:36:48.252 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0_inc m30000| Fri Feb 22 12:36:48.252 [conn6] build index temp_cleanup_test.tmp.mr.tempCleanup_0_inc { 0: 1 } m30000| Fri Feb 22 12:36:48.254 [conn6] build index done. scanned 0 total records. 
0.001 secs m30000| Fri Feb 22 12:36:48.254 [conn6] build index temp_cleanup_test.tmp.mr.tempCleanup_0 { _id: 1 } m30000| Fri Feb 22 12:36:48.255 [conn6] build index done. scanned 0 total records. 0 secs m30000| Fri Feb 22 12:36:48.256 [conn6] CMD: drop temp_cleanup_test.xyz m30000| Fri Feb 22 12:36:48.259 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0 m30000| Fri Feb 22 12:36:48.260 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0 m30000| Fri Feb 22 12:36:48.260 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0_inc m30000| Fri Feb 22 12:36:48.260 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0 m30000| Fri Feb 22 12:36:48.261 [conn6] CMD: drop temp_cleanup_test.tmp.mr.tempCleanup_0_inc { "result" : "xyz", "timeMillis" : 41, "counts" : { "input" : 1, "emit" : 1, "reduce" : 0, "output" : 1 }, "ok" : 1 } m30999| Fri Feb 22 12:36:48.263 [conn1] DROP DATABASE: temp_cleanup_test m30999| Fri Feb 22 12:36:48.263 [conn1] erased database temp_cleanup_test from local registry m30999| Fri Feb 22 12:36:48.265 [conn1] DBConfig::dropDatabase: temp_cleanup_test m30999| Fri Feb 22 12:36:48.265 [conn1] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:48-51276660881c8e7453916072", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536608265), what: "dropDatabase.start", ns: "temp_cleanup_test", details: {} } m30999| Fri Feb 22 12:36:48.265 [conn1] DBConfig::dropDatabase: temp_cleanup_test dropped sharded collections: 0 m30000| Fri Feb 22 12:36:48.265 [conn5] dropDatabase temp_cleanup_test starting m30000| Fri Feb 22 12:36:48.368 [conn5] removeJournalFiles m30000| Fri Feb 22 12:36:48.372 [conn5] dropDatabase temp_cleanup_test finished m30000| Fri Feb 22 12:36:48.372 [conn5] command temp_cleanup_test.$cmd command: { dropDatabase: 1 } ntoreturn:1 keyUpdates:0 locks(micros) W:106148 reslen:68 106ms m30999| Fri Feb 22 12:36:48.372 [conn1] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:36:48-51276660881c8e7453916073", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536608372), what: "dropDatabase", ns: "temp_cleanup_test", details: {} } 164ms ******************************************* Test : jstests/update_arraymatch6.js ... m30999| Fri Feb 22 12:36:48.373 [conn1] DROP: test.jstests_update_arraymatch6 m30001| Fri Feb 22 12:36:48.374 [conn232] CMD: drop test.jstests_update_arraymatch6 m30001| Fri Feb 22 12:36:48.375 [conn232] build index test.jstests_update_arraymatch6 { _id: 1 } m30001| Fri Feb 22 12:36:48.376 [conn232] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:48.377 [conn1] DROP: test.jstests_update_arraymatch6 m30001| Fri Feb 22 12:36:48.377 [conn232] CMD: drop test.jstests_update_arraymatch6 m30001| Fri Feb 22 12:36:48.381 [conn232] build index test.jstests_update_arraymatch6 { _id: 1 } m30001| Fri Feb 22 12:36:48.381 [conn232] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.381 [conn232] info: creating collection test.jstests_update_arraymatch6 on add index m30001| Fri Feb 22 12:36:48.381 [conn232] build index test.jstests_update_arraymatch6 { a.id: 1.0 } m30001| Fri Feb 22 12:36:48.382 [conn232] build index done. scanned 0 total records. 0 secs 11ms ******************************************* Test : jstests/js5.js ... m30999| Fri Feb 22 12:36:48.428 [conn1] DROP: test.jstests_js5 m30001| Fri Feb 22 12:36:48.428 [conn232] CMD: drop test.jstests_js5 m30001| Fri Feb 22 12:36:48.429 [conn232] build index test.jstests_js5 { _id: 1 } m30001| Fri Feb 22 12:36:48.429 [conn232] build index done. scanned 0 total records. 0 secs 67ms ******************************************* Test : jstests/elemMatchProjection.js ... 
m30999| Fri Feb 22 12:36:48.456 [conn1] DROP: test.SERVER828Test m30001| Fri Feb 22 12:36:48.457 [conn232] CMD: drop test.SERVER828Test m30001| Fri Feb 22 12:36:48.457 [conn232] build index test.SERVER828Test { _id: 1 } m30001| Fri Feb 22 12:36:48.458 [conn232] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.609 [conn232] build index test.SERVER828Test { group: 1.0, y.d: 1.0 } m30001| Fri Feb 22 12:36:48.635 [conn232] build index done. scanned 1600 total records. 0.025 secs m30001| Fri Feb 22 12:36:48.636 [conn232] build index test.SERVER828Test { group: 1.0, covered: 1.0 } m30001| Fri Feb 22 12:36:48.662 [conn232] build index done. scanned 1600 total records. 0.026 secs m30001| Fri Feb 22 12:36:48.694 [conn232] assertion 16354 Positional operator does not match the query specifier. ns:test.SERVER828Test query:{ group: 3.0, x.a: 2.0 } m30001| Fri Feb 22 12:36:48.694 [conn232] problem detected during query over test.SERVER828Test : { $err: "Positional operator does not match the query specifier.", code: 16354 } m30999| Fri Feb 22 12:36:48.694 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16354 Positional operator does not match the query specifier. m30001| Fri Feb 22 12:36:48.694 [conn232] end connection 127.0.0.1:55913 (53 connections now open) m30001| Fri Feb 22 12:36:48.695 [conn165] assertion 16354 Positional operator does not match the query specifier. 
ns:test.SERVER828Test query:{ query: { group: 3.0, x.a: 2.0 }, orderby: { x: 1.0 } } m30001| Fri Feb 22 12:36:48.695 [conn165] problem detected during query over test.SERVER828Test : { $err: "Positional operator does not match the query specifier.", code: 16354 } m30999| Fri Feb 22 12:36:48.695 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16354 Positional operator does not match the query specifier. m30001| Fri Feb 22 12:36:48.695 [conn165] end connection 127.0.0.1:50552 (53 connections now open) m30001| Fri Feb 22 12:36:48.695 [conn248] assertion 10053 You cannot currently mix including and excluding fields. Contact us if this is an issue. ns:test.SERVER828Test query:{ query: { group: 3.0, x.a: 2.0 }, orderby: { x: 1.0 } } m30001| Fri Feb 22 12:36:48.695 [conn248] problem detected during query over test.SERVER828Test : { $err: "You cannot currently mix including and excluding fields. Contact us if this is an issue.", code: 10053 } m30999| Fri Feb 22 12:36:48.695 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 10053 You cannot currently mix including and excluding fields. Contact us if this is an issue. m30001| Fri Feb 22 12:36:48.696 [conn248] end connection 127.0.0.1:48286 (51 connections now open) m30001| Fri Feb 22 12:36:48.696 [conn196] assertion 16346 Cannot specify more than one positional array element per query (currently unsupported). 
ns:test.SERVER828Test query:{ group: 3.0, x.a: 1.0, y.aa: 1.0 } m30001| Fri Feb 22 12:36:48.696 [conn196] problem detected during query over test.SERVER828Test : { $err: "Cannot specify more than one positional array element per query (currently unsupported).", code: 16346 } m30999| Fri Feb 22 12:36:48.696 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16346 Cannot specify more than one positional array element per query (currently unsupported). m30001| Fri Feb 22 12:36:48.696 [conn196] end connection 127.0.0.1:56818 (50 connections now open) m30001| Fri Feb 22 12:36:48.697 [conn254] assertion 16354 Positional operator does not match the query specifier. ns:test.SERVER828Test query:{ group: 3.0 } m30001| Fri Feb 22 12:36:48.697 [conn254] problem detected during query over test.SERVER828Test : { $err: "Positional operator does not match the query specifier.", code: 16354 } m30999| Fri Feb 22 12:36:48.697 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16354 Positional operator does not match the query specifier. m30001| Fri Feb 22 12:36:48.697 [conn254] end connection 127.0.0.1:37443 (49 connections now open) 383ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_poly_edge.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geoc.js ******************************************* Test : jstests/index13.js ... 
m30999| Fri Feb 22 12:36:48.841 [conn1] DROP: test.jstests_index13 m30001| Fri Feb 22 12:36:48.842 [conn212] CMD: drop test.jstests_index13 m30001| Fri Feb 22 12:36:48.842 [conn212] build index test.jstests_index13 { _id: 1 } m30001| Fri Feb 22 12:36:48.844 [conn212] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:48.844 [conn212] info: creating collection test.jstests_index13 on add index m30001| Fri Feb 22 12:36:48.844 [conn212] build index test.jstests_index13 { a.b: 1.0, a.c: 1.0 } m30001| Fri Feb 22 12:36:48.845 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.864 [conn212] build index test.jstests_index13 { a.b: 1.0, a.c: 1.0, d.e: 1.0, d.f: 1.0 } m30001| Fri Feb 22 12:36:48.865 [conn212] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:48.875 [conn212] build index test.jstests_index13 { a.b.c: 1.0, a.b.d: 1.0 } m30001| Fri Feb 22 12:36:48.875 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:48.880 [conn1] DROP: test.jstests_index13 m30001| Fri Feb 22 12:36:48.880 [conn212] CMD: drop test.jstests_index13 m30001| Fri Feb 22 12:36:48.892 [conn212] build index test.jstests_index13 { _id: 1 } m30001| Fri Feb 22 12:36:48.892 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.892 [conn212] info: creating collection test.jstests_index13 on add index m30001| Fri Feb 22 12:36:48.892 [conn212] build index test.jstests_index13 { a.b.x: 1.0, a.b.y: 1.0 } m30001| Fri Feb 22 12:36:48.893 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.900 [conn4] CMD: dropIndexes test.jstests_index13 m30001| Fri Feb 22 12:36:48.903 [conn212] build index test.jstests_index13 { a: 1.0, a.b.x: 1.0, a.b.y: 1.0 } m30001| Fri Feb 22 12:36:48.904 [conn212] build index done. scanned 4 total records. 
0 secs m30999| Fri Feb 22 12:36:48.909 [conn1] DROP: test.jstests_index13 m30001| Fri Feb 22 12:36:48.909 [conn212] CMD: drop test.jstests_index13 m30001| Fri Feb 22 12:36:48.916 [conn212] build index test.jstests_index13 { _id: 1 } m30001| Fri Feb 22 12:36:48.916 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.916 [conn212] info: creating collection test.jstests_index13 on add index m30001| Fri Feb 22 12:36:48.916 [conn212] build index test.jstests_index13 { a.b.c: 1.0, a.e.f: 1.0, a.b.d: 1.0, a.e.g: 1.0 } m30001| Fri Feb 22 12:36:48.917 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:48.923 [conn1] DROP: test.jstests_index13 m30001| Fri Feb 22 12:36:48.923 [conn212] CMD: drop test.jstests_index13 m30001| Fri Feb 22 12:36:48.930 [conn212] build index test.jstests_index13 { _id: 1 } m30001| Fri Feb 22 12:36:48.930 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.931 [conn212] info: creating collection test.jstests_index13 on add index m30001| Fri Feb 22 12:36:48.931 [conn212] build index test.jstests_index13 { a.b.c: 1.0, a.e.c: 1.0, a.b.d: 1.0, a.e.d: 1.0 } m30001| Fri Feb 22 12:36:48.932 [conn212] build index done. scanned 0 total records. 0 secs 102ms ******************************************* Test : jstests/find_and_modify_server7660.js ... m30999| Fri Feb 22 12:36:48.943 [conn1] DROP: test.find_and_modify_server7660 m30001| Fri Feb 22 12:36:48.943 [conn212] CMD: drop test.find_and_modify_server7660 m30001| Fri Feb 22 12:36:48.944 [conn212] build index test.find_and_modify_server7660 { _id: 1 } m30001| Fri Feb 22 12:36:48.945 [conn212] build index done. scanned 0 total records. 0.001 secs 9ms ******************************************* Test : jstests/or6.js ... 
m30999| Fri Feb 22 12:36:48.951 [conn1] DROP: test.jstests_or6 m30001| Fri Feb 22 12:36:48.952 [conn212] CMD: drop test.jstests_or6 m30001| Fri Feb 22 12:36:48.952 [conn212] build index test.jstests_or6 { _id: 1 } m30001| Fri Feb 22 12:36:48.953 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.953 [conn212] info: creating collection test.jstests_or6 on add index m30001| Fri Feb 22 12:36:48.953 [conn212] build index test.jstests_or6 { a: 1.0 } m30001| Fri Feb 22 12:36:48.954 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.962 [conn212] build index test.jstests_or6 { b: 1.0 } m30001| Fri Feb 22 12:36:48.963 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:48.964 [conn1] DROP: test.jstests_or6 m30001| Fri Feb 22 12:36:48.964 [conn212] CMD: drop test.jstests_or6 m30001| Fri Feb 22 12:36:48.973 [conn212] build index test.jstests_or6 { _id: 1 } m30001| Fri Feb 22 12:36:48.974 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.974 [conn212] info: creating collection test.jstests_or6 on add index m30001| Fri Feb 22 12:36:48.974 [conn212] build index test.jstests_or6 { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:36:48.974 [conn212] build index done. scanned 0 total records. 0 secs 36ms ******************************************* Test : jstests/indext.js ... m30999| Fri Feb 22 12:36:48.986 [conn1] DROP: test.jstests_indext m30001| Fri Feb 22 12:36:48.986 [conn212] CMD: drop test.jstests_indext m30001| Fri Feb 22 12:36:48.987 [conn212] build index test.jstests_indext { _id: 1 } m30001| Fri Feb 22 12:36:48.987 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.987 [conn212] info: creating collection test.jstests_indext on add index m30001| Fri Feb 22 12:36:48.987 [conn212] build index test.jstests_indext { a.b: 1.0 } m30001| Fri Feb 22 12:36:48.988 [conn212] build index done. 
scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:48.989 [conn212] build index test.jstests_indext { a.b: 1.0, a.c: 1.0 } m30001| Fri Feb 22 12:36:48.990 [conn212] build index done. scanned 2 total records. 0 secs 10ms ******************************************* Test : jstests/updateg.js ... m30999| Fri Feb 22 12:36:48.993 [conn1] DROP: test.jstests_updateg m30001| Fri Feb 22 12:36:48.993 [conn212] CMD: drop test.jstests_updateg m30001| Fri Feb 22 12:36:48.994 [conn212] build index test.jstests_updateg { _id: 1 } m30001| Fri Feb 22 12:36:48.994 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:48.995 [conn1] DROP: test.jstests_updateg m30001| Fri Feb 22 12:36:48.995 [conn212] CMD: drop test.jstests_updateg m30001| Fri Feb 22 12:36:49.000 [conn212] build index test.jstests_updateg { _id: 1 } m30001| Fri Feb 22 12:36:49.000 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:36:49.001 [conn1] DROP: test.jstests_updateg m30001| Fri Feb 22 12:36:49.001 [conn212] CMD: drop test.jstests_updateg m30001| Fri Feb 22 12:36:49.006 [conn212] build index test.jstests_updateg { _id: 1 } m30001| Fri Feb 22 12:36:49.006 [conn212] build index done. scanned 0 total records. 0 secs 15ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/reversecursor.js ******************************************* Test : jstests/indexb.js ... m30999| Fri Feb 22 12:36:49.041 [conn1] DROP: test.indexb m30001| Fri Feb 22 12:36:49.041 [conn212] CMD: drop test.indexb m30001| Fri Feb 22 12:36:49.042 [conn212] build index test.indexb { _id: 1 } m30001| Fri Feb 22 12:36:49.043 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:36:49.043 [conn212] info: creating collection test.indexb on add index m30001| Fri Feb 22 12:36:49.043 [conn212] build index test.indexb { a: 1.0 } m30001| Fri Feb 22 12:36:49.043 [conn212] build index done. scanned 0 total records. 
0 secs 39ms ******************************************* Test : jstests/orn.js ... m30999| Fri Feb 22 12:36:49.047 [conn1] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:49.047 [conn212] CMD: drop test.jstests_orn Fri Feb 22 12:36:49.076 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 0; i < 15; ++i ) { sleep( 1000 ); db.jstests_orn.drop() } localhost:30999/admin m30001| Fri Feb 22 12:36:49.077 [conn212] build index test.jstests_orn { _id: 1 } m30001| Fri Feb 22 12:36:49.078 [conn212] build index done. scanned 0 total records. 0 secs sh15961| MongoDB shell version: 2.4.0-rc1-pre- sh15961| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:36:49.163 [mongosMain] connection accepted from 127.0.0.1:62701 #236 (102 connections now open) m30001| Fri Feb 22 12:36:49.667 [conn212] build index test.jstests_orn { a: 1.0 } m30001| Fri Feb 22 12:36:49.726 [conn212] build index done. scanned 10000 total records. 0.058 secs m30001| Fri Feb 22 12:36:49.727 [conn212] build index test.jstests_orn { b: 1.0 } m30001| Fri Feb 22 12:36:49.786 [conn212] build index done. scanned 10000 total records. 
0.058 secs m30001| Fri Feb 22 12:36:49.905 [conn212] command test.$cmd command: { distinct: "jstests_orn", key: "a", query: { $or: [ { a: { $lte: 500.0 }, i: 49999.0 }, { b: { $lte: 500.0 }, i: 49999.0 }, { a: { $lte: 1000.0 }, i: 49999.0 }, { b: { $lte: 1000.0 }, i: 49999.0 }, { a: { $lte: 1500.0 }, i: 49999.0 }, { b: { $lte: 1500.0 }, i: 49999.0 }, { a: { $lte: 2000.0 }, i: 49999.0 }, { b: { $lte: 2000.0 }, i: 49999.0 }, { a: { $lte: 2500.0 }, i: 49999.0 }, { b: { $lte: 2500.0 }, i: 49999.0 }, { a: { $lte: 3000.0 }, i: 49999.0 }, { b: { $lte: 3000.0 }, i: 49999.0 }, { a: { $lte: 3500.0 }, i: 49999.0 }, { b: { $lte: 3500.0 }, i: 49999.0 }, { a: { $lte: 4000.0 }, i: 49999.0 }, { b: { $lte: 4000.0 }, i: 49999.0 }, { a: { $lte: 4500.0 }, i: 49999.0 }, { b: { $lte: 4500.0 }, i: 49999.0 }, { a: { $lte: 5000.0 }, i: 49999.0 }, { b: { $lte: 5000.0 }, i: 49999.0 } ] } } ntoreturn:1 keyUpdates:0 locks(micros) r:117957 reslen:149 117ms m30999| Fri Feb 22 12:36:50.167 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:50.167 [conn227] CMD: drop test.jstests_orn m30001| Fri Feb 22 12:36:50.176 [conn212] build index test.jstests_orn { _id: 1 } m30001| Fri Feb 22 12:36:50.177 [conn212] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:50.543 [conn212] build index test.jstests_orn { a: 1.0 } m30001| Fri Feb 22 12:36:50.577 [conn212] build index done. scanned 6812 total records. 0.034 secs m30001| Fri Feb 22 12:36:50.578 [conn212] build index test.jstests_orn { b: 1.0 } m30001| Fri Feb 22 12:36:50.612 [conn212] build index done. scanned 6812 total records. 0.033 secs m30999| Fri Feb 22 12:36:51.176 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:51.176 [conn227] CMD: drop test.jstests_orn m30001| Fri Feb 22 12:36:51.184 [conn212] build index test.jstests_orn { _id: 1 } m30001| Fri Feb 22 12:36:51.185 [conn212] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:36:51.353 [conn212] build index test.jstests_orn { a: 1.0 } m30001| Fri Feb 22 12:36:51.371 [conn212] build index done. scanned 3187 total records. 0.018 secs m30001| Fri Feb 22 12:36:51.373 [conn212] build index test.jstests_orn { b: 1.0 } m30001| Fri Feb 22 12:36:51.391 [conn212] build index done. scanned 3187 total records. 0.017 secs m30999| Fri Feb 22 12:36:52.184 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:52.184 [conn227] CMD: drop test.jstests_orn m30001| Fri Feb 22 12:36:52.197 [conn212] build index test.jstests_orn { _id: 1 } m30001| Fri Feb 22 12:36:52.198 [conn212] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:36:52.262 [conn212] build index test.jstests_orn { a: 1.0 } m30001| Fri Feb 22 12:36:52.268 [conn212] build index done. scanned 1079 total records. 0.006 secs m30001| Fri Feb 22 12:36:52.269 [conn212] build index test.jstests_orn { b: 1.0 } m30001| Fri Feb 22 12:36:52.276 [conn212] build index done. scanned 1079 total records. 0.006 secs m30999| Fri Feb 22 12:36:53.196 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:53.197 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:53.789 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276665881c8e7453916074 m30999| Fri Feb 22 12:36:53.789 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
m30999| Fri Feb 22 12:36:54.205 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:54.206 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:55.206 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:55.206 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:56.207 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:56.208 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:57.209 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:57.209 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:58.209 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:58.210 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:59.210 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:36:59.210 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:36:59.791 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127666b881c8e7453916075 m30999| Fri Feb 22 12:36:59.791 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30999| Fri Feb 22 12:37:00.211 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:37:00.211 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:37:01.212 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:37:01.212 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:37:02.213 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:37:02.213 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:37:03.214 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:37:03.214 [conn227] CMD: drop test.jstests_orn m30999| Fri Feb 22 12:37:04.214 [conn236] DROP: test.jstests_orn m30001| Fri Feb 22 12:37:04.215 [conn227] CMD: drop test.jstests_orn sh15961| false m30999| Fri Feb 22 12:37:04.226 [conn236] end connection 127.0.0.1:62701 (101 connections now open) 15186ms ******************************************* Test : jstests/regex4.js ... 
m30999| Fri Feb 22 12:37:04.236 [conn1] DROP: test.regex4 m30001| Fri Feb 22 12:37:04.237 [conn212] CMD: drop test.regex4 m30001| Fri Feb 22 12:37:04.238 [conn212] build index test.regex4 { _id: 1 } m30001| Fri Feb 22 12:37:04.239 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.240 [conn212] build index test.regex4 { name: 1.0 } m30001| Fri Feb 22 12:37:04.241 [conn212] build index done. scanned 4 total records. 0 secs 11ms ******************************************* Test : jstests/queryoptimizer8.js ... m30999| Fri Feb 22 12:37:04.248 [conn1] DROP: test.jstests_queryoptimizer8 m30001| Fri Feb 22 12:37:04.248 [conn212] CMD: drop test.jstests_queryoptimizer8 m30001| Fri Feb 22 12:37:04.249 [conn212] build index test.jstests_queryoptimizer8 { _id: 1 } m30001| Fri Feb 22 12:37:04.250 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.250 [conn212] info: creating collection test.jstests_queryoptimizer8 on add index m30001| Fri Feb 22 12:37:04.250 [conn212] build index test.jstests_queryoptimizer8 { a: 1.0 } m30001| Fri Feb 22 12:37:04.251 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.251 [conn212] build index test.jstests_queryoptimizer8 { b: 1.0 } m30001| Fri Feb 22 12:37:04.252 [conn212] build index done. scanned 0 total records. 0 secs 75ms ******************************************* Test : jstests/arrayfind6.js ... m30999| Fri Feb 22 12:37:04.325 [conn1] DROP: test.jstests_arrayfind6 m30001| Fri Feb 22 12:37:04.326 [conn212] CMD: drop test.jstests_arrayfind6 m30001| Fri Feb 22 12:37:04.326 [conn212] build index test.jstests_arrayfind6 { _id: 1 } m30001| Fri Feb 22 12:37:04.327 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.330 [conn212] build index test.jstests_arrayfind6 { a.b: 1.0 } m30001| Fri Feb 22 12:37:04.331 [conn212] build index done. scanned 1 total records. 
0 secs 16ms ******************************************* Test : jstests/autoid.js ... m30999| Fri Feb 22 12:37:04.335 [conn1] DROP: test.jstests_autoid m30001| Fri Feb 22 12:37:04.335 [conn212] CMD: drop test.jstests_autoid m30001| Fri Feb 22 12:37:04.336 [conn212] build index test.jstests_autoid { _id: 1 } m30001| Fri Feb 22 12:37:04.336 [conn212] build index done. scanned 0 total records. 0 secs 3ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/eval_nolock.js ******************************************* Test : jstests/update_addToSet2.js ... m30999| Fri Feb 22 12:37:04.343 [conn1] DROP: test.update_addToSet2 m30001| Fri Feb 22 12:37:04.343 [conn212] CMD: drop test.update_addToSet2 m30001| Fri Feb 22 12:37:04.344 [conn212] build index test.update_addToSet2 { _id: 1 } m30001| Fri Feb 22 12:37:04.344 [conn212] build index done. scanned 0 total records. 0 secs { "_id" : 1, "kids" : [ { "name" : "Bob", "age" : "4" }, { "name" : "Dan", "age" : "2" } ] } 7ms ******************************************* Test : jstests/regex6.js ... m30999| Fri Feb 22 12:37:04.346 [conn1] DROP: test.regex6 m30001| Fri Feb 22 12:37:04.347 [conn212] CMD: drop test.regex6 m30001| Fri Feb 22 12:37:04.347 [conn212] build index test.regex6 { _id: 1 } m30001| Fri Feb 22 12:37:04.348 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.348 [conn212] build index test.regex6 { name: 1.0 } m30001| Fri Feb 22 12:37:04.349 [conn212] build index done. scanned 5 total records. 0.001 secs 13ms ******************************************* Test : jstests/fts_partition_no_multikey.js ... 
m30000| Fri Feb 22 12:37:04.365 [initandlisten] connection accepted from 127.0.0.1:35470 #121 (59 connections now open) m30001| Fri Feb 22 12:37:04.366 [initandlisten] connection accepted from 127.0.0.1:55970 #265 (50 connections now open) m30999| Fri Feb 22 12:37:04.366 [conn1] DROP: test.fts_partition_no_multikey m30001| Fri Feb 22 12:37:04.367 [conn212] CMD: drop test.fts_partition_no_multikey m30001| Fri Feb 22 12:37:04.368 [conn212] build index test.fts_partition_no_multikey { _id: 1 } m30001| Fri Feb 22 12:37:04.368 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.368 [conn212] info: creating collection test.fts_partition_no_multikey on add index m30001| Fri Feb 22 12:37:04.368 [conn212] build index test.fts_partition_no_multikey { x: 1.0, _fts: "text", _ftsx: 1 } m30001| Fri Feb 22 12:37:04.369 [conn212] build index done. scanned 0 total records. 0 secs 14ms ******************************************* Test : jstests/index8.js ... m30999| Fri Feb 22 12:37:04.380 [conn1] DROP: test.jstests_index8 m30001| Fri Feb 22 12:37:04.380 [conn212] CMD: drop test.jstests_index8 m30001| Fri Feb 22 12:37:04.380 [conn212] build index test.jstests_index8 { _id: 1 } m30001| Fri Feb 22 12:37:04.381 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.381 [conn212] info: creating collection test.jstests_index8 on add index m30001| Fri Feb 22 12:37:04.381 [conn212] build index test.jstests_index8 { a: 1.0 } m30001| Fri Feb 22 12:37:04.381 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.382 [conn212] build index test.jstests_index8 { b: 1.0 } m30001| Fri Feb 22 12:37:04.382 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.383 [conn212] build index test.jstests_index8 { c: 1.0 } m30001| Fri Feb 22 12:37:04.383 [conn212] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:37:04.386 [conn4] CMD: reIndex test.jstests_index8 m30001| Fri Feb 22 12:37:04.396 [conn4] build index test.jstests_index8 { _id: 1 } m30001| Fri Feb 22 12:37:04.397 [conn4] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.397 [conn4] build index test.jstests_index8 { a: 1.0 } m30001| Fri Feb 22 12:37:04.397 [conn4] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.397 [conn4] build index test.jstests_index8 { b: 1.0 } m30001| Fri Feb 22 12:37:04.398 [conn4] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.398 [conn4] build index test.jstests_index8 { c: 1.0 } m30001| Fri Feb 22 12:37:04.398 [conn4] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:04.402 [conn1] DROP: test.jstests_index8 m30001| Fri Feb 22 12:37:04.402 [conn212] CMD: drop test.jstests_index8 m30001| Fri Feb 22 12:37:04.409 [conn212] build index test.jstests_index8 { _id: 1 } m30001| Fri Feb 22 12:37:04.410 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.410 [conn212] info: creating collection test.jstests_index8 on add index m30001| Fri Feb 22 12:37:04.410 [conn212] build index test.jstests_index8 { a: 1.0, b: -1.0 } m30001| Fri Feb 22 12:37:04.410 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:04.412 [conn1] DROP: test.jstests_index8 m30001| Fri Feb 22 12:37:04.412 [conn212] CMD: drop test.jstests_index8 m30001| Fri Feb 22 12:37:04.417 [conn212] build index test.jstests_index8 { _id: 1 } m30001| Fri Feb 22 12:37:04.417 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.417 [conn212] info: creating collection test.jstests_index8 on add index m30001| Fri Feb 22 12:37:04.417 [conn212] build index test.jstests_index8 { a: 1.0 } m30001| Fri Feb 22 12:37:04.418 [conn212] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:37:04.419 [conn1] DROP: test.jstests_index8 m30001| Fri Feb 22 12:37:04.419 [conn212] CMD: drop test.jstests_index8 m30001| Fri Feb 22 12:37:04.423 [conn212] build index test.jstests_index8 { _id: 1 } m30001| Fri Feb 22 12:37:04.423 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.423 [conn212] info: creating collection test.jstests_index8 on add index m30001| Fri Feb 22 12:37:04.423 [conn212] build index test.jstests_index8 { a: 1.0 } m30001| Fri Feb 22 12:37:04.424 [conn212] build index done. scanned 0 total records. 0 secs 52ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/cappeda.js ******************************************* Test : jstests/orl.js ... m30999| Fri Feb 22 12:37:04.426 [conn1] DROP: test.jstests_orl m30001| Fri Feb 22 12:37:04.426 [conn212] CMD: drop test.jstests_orl m30001| Fri Feb 22 12:37:04.427 [conn212] build index test.jstests_orl { _id: 1 } m30001| Fri Feb 22 12:37:04.427 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.427 [conn212] info: creating collection test.jstests_orl on add index m30001| Fri Feb 22 12:37:04.427 [conn212] build index test.jstests_orl { a.b: 1.0, a.c: 1.0 } m30001| Fri Feb 22 12:37:04.428 [conn212] build index done. scanned 0 total records. 0 secs 3ms ******************************************* Test : jstests/exists9.js ... m30999| Fri Feb 22 12:37:04.433 [conn1] DROP: test.jstests_exists9 m30001| Fri Feb 22 12:37:04.433 [conn212] CMD: drop test.jstests_exists9 m30001| Fri Feb 22 12:37:04.434 [conn212] build index test.jstests_exists9 { _id: 1 } m30001| Fri Feb 22 12:37:04.434 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.436 [conn212] build index test.jstests_exists9 { a.b: 1.0 } m30001| Fri Feb 22 12:37:04.436 [conn212] build index done. scanned 1 total records. 
0 secs m30999| Fri Feb 22 12:37:04.437 [conn1] DROP: test.jstests_exists9 m30001| Fri Feb 22 12:37:04.437 [conn212] CMD: drop test.jstests_exists9 m30001| Fri Feb 22 12:37:04.441 [conn212] build index test.jstests_exists9 { _id: 1 } m30001| Fri Feb 22 12:37:04.442 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.443 [conn212] build index test.jstests_exists9 { a: 1.0 } m30001| Fri Feb 22 12:37:04.443 [conn212] build index done. scanned 2 total records. 0 secs m30999| Fri Feb 22 12:37:04.445 [conn1] DROP: test.jstests_exists9 m30001| Fri Feb 22 12:37:04.445 [conn212] CMD: drop test.jstests_exists9 m30001| Fri Feb 22 12:37:04.451 [conn212] build index test.jstests_exists9 { _id: 1 } m30001| Fri Feb 22 12:37:04.452 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.453 [conn212] build index test.jstests_exists9 { a.0: 1.0 } m30001| Fri Feb 22 12:37:04.453 [conn212] build index done. scanned 2 total records. 0 secs 26ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped9.js ******************************************* Test : jstests/updatee.js ... m30999| Fri Feb 22 12:37:04.456 [conn1] DROP: test.updatee m30001| Fri Feb 22 12:37:04.456 [conn212] CMD: drop test.updatee m30001| Fri Feb 22 12:37:04.456 [conn212] build index test.updatee { _id: 1 } m30001| Fri Feb 22 12:37:04.457 [conn212] build index done. scanned 0 total records. 0 secs 7ms ******************************************* Test : jstests/indexv.js ... m30999| Fri Feb 22 12:37:04.468 [conn1] DROP: test.jstests_indexv m30001| Fri Feb 22 12:37:04.468 [conn212] CMD: drop test.jstests_indexv m30001| Fri Feb 22 12:37:04.469 [conn212] build index test.jstests_indexv { _id: 1 } m30001| Fri Feb 22 12:37:04.470 [conn212] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:37:04.470 [conn212] info: creating collection test.jstests_indexv on add index m30001| Fri Feb 22 12:37:04.470 [conn212] build index test.jstests_indexv { a.b: 1.0 } m30001| Fri Feb 22 12:37:04.471 [conn212] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:04.472 [conn1] DROP: test.jstests_indexv m30001| Fri Feb 22 12:37:04.472 [conn212] CMD: drop test.jstests_indexv m30001| Fri Feb 22 12:37:04.478 [conn212] build index test.jstests_indexv { _id: 1 } m30001| Fri Feb 22 12:37:04.478 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.478 [conn212] info: creating collection test.jstests_indexv on add index m30001| Fri Feb 22 12:37:04.478 [conn212] build index test.jstests_indexv { a.b.c: 1.0 } m30001| Fri Feb 22 12:37:04.479 [conn212] build index done. scanned 0 total records. 0 secs 18ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/or4.js ******************************************* Test : jstests/existsa.js ... m30999| Fri Feb 22 12:37:04.485 [conn1] DROP: test.jstests_existsa m30001| Fri Feb 22 12:37:04.485 [conn212] CMD: drop test.jstests_existsa m30001| Fri Feb 22 12:37:04.486 [conn212] build index test.jstests_existsa { _id: 1 } m30001| Fri Feb 22 12:37:04.487 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.487 [conn212] build index test.jstests_existsa { a: 1.0 } m30001| Fri Feb 22 12:37:04.487 [conn212] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:37:04.525 [conn1] DROP: test.jstests_existsa m30001| Fri Feb 22 12:37:04.525 [conn212] CMD: drop test.jstests_existsa m30001| Fri Feb 22 12:37:04.529 [conn212] build index test.jstests_existsa { _id: 1 } m30001| Fri Feb 22 12:37:04.530 [conn212] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:37:04.530 [conn212] build index test.jstests_existsa { a.b: 1.0 } m30001| Fri Feb 22 12:37:04.531 [conn212] build index done. scanned 3 total records. 0 secs m30999| Fri Feb 22 12:37:04.536 [conn1] DROP: test.jstests_existsa m30001| Fri Feb 22 12:37:04.536 [conn212] CMD: drop test.jstests_existsa m30001| Fri Feb 22 12:37:04.540 [conn212] build index test.jstests_existsa { _id: 1 } m30001| Fri Feb 22 12:37:04.541 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.541 [conn212] build index test.jstests_existsa { a: 1.0 } m30001| Fri Feb 22 12:37:04.541 [conn212] build index done. scanned 1 total records. 0 secs 62ms ******************************************* Test : jstests/index11.js ... m30999| Fri Feb 22 12:37:04.547 [conn1] DROP: test.jstests_index11 m30001| Fri Feb 22 12:37:04.547 [conn212] CMD: drop test.jstests_index11 m30001| Fri Feb 22 12:37:04.547 [conn212] build index test.jstests_index11 { _id: 1 } m30001| Fri Feb 22 12:37:04.548 [conn212] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:04.549 [conn212] build index test.jstests_index11 { k: 1.0, v: 1.0 } m30001| Fri Feb 22 12:37:04.549 [conn212] test.system.indexes Btree::insert: key too large to index, skipping test.jstests_index11.$k_1_v_1 4118 { : "a", : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:37:04.549 [conn212] warning: not all entries were added to the index, probably some keys were too large m30001| Fri Feb 22 12:37:04.549 [conn212] build index done. scanned 1 total records. 
0 secs m30001| Fri Feb 22 12:37:04.550 [conn212] test.jstests_index11 ERROR: key too large len:4118 max:1024 4118 test.jstests_index11.$k_1_v_1 m30001| Fri Feb 22 12:37:04.550 [conn4] CMD: dropIndexes test.jstests_index11 m30001| Fri Feb 22 12:37:04.554 [conn212] build index test.jstests_index11 { k: 1.0, v: 1.0 } m30001| Fri Feb 22 12:37:04.554 [conn212] test.system.indexes Btree::insert: key too large to index, skipping test.jstests_index11.$k_1_v_1 4118 { : "a", : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:37:04.554 [conn212] test.system.indexes Btree::insert: key too large to index, skipping test.jstests_index11.$k_1_v_1 4118 { : "x", : "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx..." } m30001| Fri Feb 22 12:37:04.554 [conn212] warning: not all entries were added to the index, probably some keys were too large m30001| Fri Feb 22 12:37:04.554 [conn212] build index done. scanned 2 total records. 0 secs 13ms ******************************************* Test : jstests/nin2.js ... m30999| Fri Feb 22 12:37:04.556 [conn1] DROP: test.jstests_nin2 m30001| Fri Feb 22 12:37:04.556 [conn212] CMD: drop test.jstests_nin2 m30001| Fri Feb 22 12:37:04.557 [conn212] build index test.jstests_nin2 { _id: 1 } m30001| Fri Feb 22 12:37:04.557 [conn212] build index done. scanned 0 total records. 0 secs 40ms ******************************************* Test : jstests/regex_limit.js ... m30999| Fri Feb 22 12:37:04.596 [conn1] DROP: test.regex_limit m30001| Fri Feb 22 12:37:04.597 [conn212] CMD: drop test.regex_limit m30001| Fri Feb 22 12:37:04.599 [conn212] build index test.regex_limit { _id: 1 } m30001| Fri Feb 22 12:37:04.600 [conn212] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:37:04.610 [conn212] assertion 16432 Regular expression is too long ns:test.regex_limit query:{ z: { $regex: "cccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccccc..." } } m30001| Fri Feb 22 12:37:04.610 [conn212] ntoskip:0 ntoreturn:-1 m30001| Fri Feb 22 12:37:04.610 [conn212] problem detected during query over test.regex_limit : { $err: "Regular expression is too long", code: 16432 } m30999| Fri Feb 22 12:37:04.610 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16432 Regular expression is too long m30001| Fri Feb 22 12:37:04.611 [conn212] end connection 127.0.0.1:55435 (49 connections now open) m30001| Fri Feb 22 12:37:04.621 [conn227] warning: log line attempted (32k) over max size(10k), printing beginning and end ... 
assertion 16431 Regular expression is too long ns:test.regex_limit query:{ z: { $in: [ /cccccccccccccccccccccccccccccccccccccccc .......... cccccccccccccccccccccccccccccccccccccccc/ ] } } m30001| Fri Feb 22 12:37:04.621 [conn227] ntoskip:0 ntoreturn:-1 m30001| Fri Feb 22 12:37:04.621 [conn227] problem detected during query over test.regex_limit : { $err: "Regular expression is too long", code: 16431 } m30999| Fri Feb 22 12:37:04.621 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16431 Regular expression is too long 
m30001| Fri Feb 22 12:37:04.621 [conn227] end connection 127.0.0.1:44837 (48 connections now open) 26ms ******************************************* Test : jstests/index_elemmatch1.js ... m30999| Fri Feb 22 12:37:04.622 [conn1] DROP: test.index_elemmatch1 m30001| Fri Feb 22 12:37:04.623 [conn186] CMD: drop test.index_elemmatch1 m30001| Fri Feb 22 12:37:04.623 [conn186] build index test.index_elemmatch1 { _id: 1 } m30001| Fri Feb 22 12:37:04.624 [conn186] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:05.354 [conn186] build index test.index_elemmatch1 { a: 1.0, b: 1.0 } m30001| Fri Feb 22 12:37:05.427 [conn186] build index done. scanned 10000 total records. 0.072 secs m30001| Fri Feb 22 12:37:05.436 [conn186] build index test.index_elemmatch1 { arr.x: 1.0, a: 1.0 } m30001| Fri Feb 22 12:37:05.541 [conn186] build index done. scanned 10000 total records. 0.105 secs m30001| Fri Feb 22 12:37:05.542 [conn186] insert test.system.indexes ninserted:1 keyUpdates:0 locks(micros) w:105996 105ms 926ms ******************************************* Test : jstests/push_sort.js ... m30999| Fri Feb 22 12:37:05.552 [conn1] DROP: test.push_sort m30001| Fri Feb 22 12:37:05.553 [conn186] CMD: drop test.push_sort m30001| Fri Feb 22 12:37:05.553 [conn186] build index test.push_sort { _id: 1 } m30001| Fri Feb 22 12:37:05.555 [conn186] build index done. scanned 0 total records. 0.001 secs 12ms ******************************************* Test : jstests/basic2.js ... m30999| Fri Feb 22 12:37:05.565 [conn1] DROP: test.basic2 m30001| Fri Feb 22 12:37:05.566 [conn186] CMD: drop test.basic2 m30001| Fri Feb 22 12:37:05.566 [conn186] build index test.basic2 { _id: 1 } m30001| Fri Feb 22 12:37:05.568 [conn186] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:37:05.569 [conn4] CMD: validate test.basic2 m30001| Fri Feb 22 12:37:05.569 [conn4] validating index 0: test.basic2.$_id_ 9ms ******************************************* Test : jstests/hostinfo.js ... 7ms ******************************************* Test : jstests/find2.js ... m30999| Fri Feb 22 12:37:05.582 [conn1] DROP: test.ed_db_find2_oif m30001| Fri Feb 22 12:37:05.582 [conn186] CMD: drop test.ed_db_find2_oif m30001| Fri Feb 22 12:37:05.583 [conn186] build index test.ed_db_find2_oif { _id: 1 } m30001| Fri Feb 22 12:37:05.584 [conn186] build index done. scanned 0 total records. 0 secs 8ms ******************************************* Test : jstests/distinct_array1.js ... m30999| Fri Feb 22 12:37:05.587 [conn1] DROP: test.distinct_array1 m30001| Fri Feb 22 12:37:05.587 [conn186] CMD: drop test.distinct_array1 m30001| Fri Feb 22 12:37:05.588 [conn186] build index test.distinct_array1 { _id: 1 } m30001| Fri Feb 22 12:37:05.589 [conn186] build index done. scanned 0 total records. 0 secs 5ms ******************************************* Test : jstests/queryoptimizerb.js ... m30999| Fri Feb 22 12:37:05.591 [conn1] DROP: test.jstests_queryoptimizerb m30001| Fri Feb 22 12:37:05.592 [conn186] CMD: drop test.jstests_queryoptimizerb m30001| Fri Feb 22 12:37:05.592 [conn186] build index test.jstests_queryoptimizerb { _id: 1 } m30001| Fri Feb 22 12:37:05.593 [conn186] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:05.593 [conn186] info: creating collection test.jstests_queryoptimizerb on add index m30001| Fri Feb 22 12:37:05.593 [conn186] build index test.jstests_queryoptimizerb { a: "2d" } m30001| Fri Feb 22 12:37:05.594 [conn186] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:05.595 [conn186] build index test.jstests_queryoptimizerb { c: 1.0, b: 1.0 } m30001| Fri Feb 22 12:37:05.596 [conn186] build index done. scanned 0 total records. 
0.001 secs m30001| Fri Feb 22 12:37:05.598 [conn186] assertion 16331 'special' plan hint not allowed ns:test.jstests_queryoptimizerb query:{ query: { a: [ 0.0, 0.0 ], $or: [ { a: [ 0.0, 0.0 ] } ] }, $hint: { a: "2d" } } m30001| Fri Feb 22 12:37:05.598 [conn186] problem detected during query over test.jstests_queryoptimizerb : { $err: "'special' plan hint not allowed", code: 16331 } m30999| Fri Feb 22 12:37:05.598 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16331 'special' plan hint not allowed m30001| Fri Feb 22 12:37:05.598 [conn186] end connection 127.0.0.1:40402 (47 connections now open) m30001| Fri Feb 22 12:37:05.599 [conn264] assertion 16331 'special' plan hint not allowed ns:test.jstests_queryoptimizerb query:{ query: { a: [ 0.0, 0.0 ], $or: [ { a: [ 0.0, 0.0 ] } ] }, $hint: { a: "2d" }, $explain: true } m30001| Fri Feb 22 12:37:05.599 [conn264] problem detected during query over test.jstests_queryoptimizerb : { $err: "'special' plan hint not allowed", code: 16331 } m30999| Fri Feb 22 12:37:05.599 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 16331 'special' plan hint not allowed m30001| Fri Feb 22 12:37:05.599 [conn264] end connection 127.0.0.1:41663 (46 connections now open) 8ms ******************************************* Test : jstests/mr_drop.js ... 
m30999| Fri Feb 22 12:37:05.605 [conn1] DROP: test.jstests_mr_drop m30001| Fri Feb 22 12:37:05.605 [conn207] CMD: drop test.jstests_mr_drop setting random seed: 1361536625605 m30001| Fri Feb 22 12:37:05.606 [conn207] build index test.jstests_mr_drop { _id: 1 } m30001| Fri Feb 22 12:37:05.607 [conn207] build index done. scanned 0 total records. 0 secs Fri Feb 22 12:37:05.694 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');sleep( 1000 ); db.jstests_mr_drop.drop(); localhost:30999/admin sh16026| MongoDB shell version: 2.4.0-rc1-pre- m30001| Fri Feb 22 12:37:05.727 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65 m30001| Fri Feb 22 12:37:05.727 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65_inc m30001| Fri Feb 22 12:37:05.727 [conn207] build index test.tmp.mr.jstests_mr_drop_65_inc { 0: 1 } m30001| Fri Feb 22 12:37:05.728 [conn207] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:05.729 [conn207] build index test.tmp.mr.jstests_mr_drop_65 { _id: 1 } m30001| Fri Feb 22 12:37:05.729 [conn207] build index done. scanned 0 total records. 0 secs sh16026| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:37:05.760 [mongosMain] connection accepted from 127.0.0.1:47804 #237 (102 connections now open) m30999| Fri Feb 22 12:37:05.793 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276671881c8e7453916076 m30999| Fri Feb 22 12:37:05.794 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
m30999| Fri Feb 22 12:37:06.763 [conn237] DROP: test.jstests_mr_drop
m30001| Fri Feb 22 12:37:07.769 [conn237] CMD: drop test.jstests_mr_drop
sh16026| true
m30999| Fri Feb 22 12:37:07.788 [conn237] end connection 127.0.0.1:47804 (101 connections now open)
m30001| Fri Feb 22 12:37:09.818 [conn207] CMD: drop test.jstests_mr_drop_out
m30001| Fri Feb 22 12:37:09.830 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65
m30001| Fri Feb 22 12:37:09.830 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65
m30001| Fri Feb 22 12:37:09.830 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65_inc
m30001| Fri Feb 22 12:37:09.835 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65
m30001| Fri Feb 22 12:37:09.836 [conn207] CMD: drop test.tmp.mr.jstests_mr_drop_65_inc
m30001| Fri Feb 22 12:37:09.836 [conn207] command test.$cmd command: { mapreduce: "jstests_mr_drop", map: function () { sleep( this.mapSleep ); emit( this.key, this ); }, reduce: function ( key, vals ) { sleep( vals[ 0 ].reduceSleep ); return vals[ 0 ]; }, finalize: function ( key, value ) { sleep( value.finalizeSleep ); return value; }, out: "jstests_mr_drop_out" } ntoreturn:1 keyUpdates:0 numYields: 103 locks(micros) W:12169 r:4071130 w:7835 reslen:143 4140ms
4238ms
******************************************* Test : jstests/arrayfind4.js ...
m30999| Fri Feb 22 12:37:09.843 [conn1] DROP: test.jstests_arrayfind4
m30001| Fri Feb 22 12:37:09.844 [conn207] CMD: drop test.jstests_arrayfind4
m30001| Fri Feb 22 12:37:09.844 [conn207] build index test.jstests_arrayfind4 { _id: 1 }
m30001| Fri Feb 22 12:37:09.845 [conn207] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:09.845 [conn207] build index test.jstests_arrayfind4 { a: 1.0 }
m30001| Fri Feb 22 12:37:09.846 [conn207] build index done. scanned 1 total records. 0 secs
12ms
******************************************* Test : jstests/unset.js ...
m30999| Fri Feb 22 12:37:09.853 [conn1] DROP: test.unset
m30001| Fri Feb 22 12:37:09.853 [conn207] CMD: drop test.unset
m30001| Fri Feb 22 12:37:09.854 [conn207] build index test.unset { _id: 1 }
m30001| Fri Feb 22 12:37:09.855 [conn207] build index done. scanned 0 total records. 0 secs
7ms
******************************************* Test : jstests/binData.js ...
1ms
******************************************* Test : jstests/stats.js ...
m30999| Fri Feb 22 12:37:09.858 [conn1] DROP: test.stats1
m30001| Fri Feb 22 12:37:09.858 [conn207] CMD: drop test.stats1
m30001| Fri Feb 22 12:37:09.859 [conn207] build index test.stats1 { _id: 1 }
m30001| Fri Feb 22 12:37:09.859 [conn207] build index done. scanned 0 total records. 0 secs
3ms
******************************************* Test : jstests/count4.js ...
m30999| Fri Feb 22 12:37:09.865 [conn1] DROP: test.count4
m30001| Fri Feb 22 12:37:09.865 [conn207] CMD: drop test.count4
m30001| Fri Feb 22 12:37:09.866 [conn207] build index test.count4 { _id: 1 }
m30001| Fri Feb 22 12:37:09.867 [conn207] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:37:09.874 [conn207] build index test.count4 { x: 1.0 }
m30001| Fri Feb 22 12:37:09.875 [conn207] build index done. scanned 100 total records. 0.001 secs
15ms
******************************************* Test : jstests/sorth.js ...
m30999| Fri Feb 22 12:37:09.877 [conn1] DROP: test.jstests_sorth
m30001| Fri Feb 22 12:37:09.877 [conn207] CMD: drop test.jstests_sorth
m30001| Fri Feb 22 12:37:09.878 [conn207] build index test.jstests_sorth { _id: 1 }
m30001| Fri Feb 22 12:37:09.879 [conn207] build index done. scanned 0 total records. 0.001 secs
m30001| Fri Feb 22 12:37:09.879 [conn207] info: creating collection test.jstests_sorth on add index
m30001| Fri Feb 22 12:37:09.879 [conn207] build index test.jstests_sorth { a: 1.0 }
m30001| Fri Feb 22 12:37:09.880 [conn207] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:09.880 [conn207] build index test.jstests_sorth { b: 1.0 }
m30001| Fri Feb 22 12:37:09.881 [conn207] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:09.925 [conn265] end connection 127.0.0.1:55970 (45 connections now open)
m30000| Fri Feb 22 12:37:09.925 [conn121] end connection 127.0.0.1:35470 (58 connections now open)
m30999| Fri Feb 22 12:37:09.951 [conn1] DROP: test.jstests_sorth
m30001| Fri Feb 22 12:37:09.952 [conn207] CMD: drop test.jstests_sorth
84ms
******************************************* Test : jstests/evala.js ...
m30999| Fri Feb 22 12:37:09.962 [conn1] DROP: test.evala
m30001| Fri Feb 22 12:37:09.963 [conn207] CMD: drop test.evala
m30001| Fri Feb 22 12:37:09.963 [conn207] build index test.evala { _id: 1 }
m30001| Fri Feb 22 12:37:09.964 [conn207] build index done. scanned 0 total records. 0 secs
47ms
******************************************* Test : jstests/and.js ...
m30999| Fri Feb 22 12:37:10.015 [conn1] DROP: test.jstests_and
m30001| Fri Feb 22 12:37:10.015 [conn207] CMD: drop test.jstests_and
m30001| Fri Feb 22 12:37:10.016 [conn207] build index test.jstests_and { _id: 1 }
m30001| Fri Feb 22 12:37:10.016 [conn207] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.016 [conn207] assertion 14816 $and expression must be a nonempty array ns:test.jstests_and query:{ $and: 4.0 }
m30001| Fri Feb 22 12:37:10.017 [conn207] problem detected during query over test.jstests_and : { $err: "$and expression must be a nonempty array", code: 14816 }
m30999| Fri Feb 22 12:37:10.017 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14816 $and expression must be a nonempty array
m30001| Fri Feb 22 12:37:10.017 [conn207] end connection 127.0.0.1:63716 (44 connections now open)
m30001| Fri Feb 22 12:37:10.018 [conn237] assertion 14816 $and expression must be a nonempty array ns:test.jstests_and query:{ $and: {} }
m30001| Fri Feb 22 12:37:10.018 [conn237] problem detected during query over test.jstests_and : { $err: "$and expression must be a nonempty array", code: 14816 }
m30999| Fri Feb 22 12:37:10.018 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14816 $and expression must be a nonempty array
m30001| Fri Feb 22 12:37:10.018 [conn237] end connection 127.0.0.1:64779 (44 connections now open)
m30001| Fri Feb 22 12:37:10.019 [conn170] assertion 14817 $and/$or elements must be objects ns:test.jstests_and query:{ $and: [ 4.0 ] }
m30001| Fri Feb 22 12:37:10.019 [conn170] problem detected during query over test.jstests_and : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:37:10.019 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:37:10.019 [conn170] end connection 127.0.0.1:48207 (43 connections now open)
m30001| Fri Feb 22 12:37:10.072 [conn217] build index test.jstests_and { a: 1.0 }
m30001| Fri Feb 22 12:37:10.073 [conn217] build index done. scanned 2 total records. 0 secs
m30001| Fri Feb 22 12:37:10.073 [conn217] assertion 14816 $and expression must be a nonempty array ns:test.jstests_and query:{ $and: 4.0 }
m30001| Fri Feb 22 12:37:10.073 [conn217] problem detected during query over test.jstests_and : { $err: "$and expression must be a nonempty array", code: 14816 }
m30999| Fri Feb 22 12:37:10.073 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14816 $and expression must be a nonempty array
m30001| Fri Feb 22 12:37:10.073 [conn217] end connection 127.0.0.1:45412 (41 connections now open)
m30001| Fri Feb 22 12:37:10.074 [conn222] assertion 14816 $and expression must be a nonempty array ns:test.jstests_and query:{ $and: {} }
m30001| Fri Feb 22 12:37:10.074 [conn222] problem detected during query over test.jstests_and : { $err: "$and expression must be a nonempty array", code: 14816 }
m30999| Fri Feb 22 12:37:10.074 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14816 $and expression must be a nonempty array
m30001| Fri Feb 22 12:37:10.074 [conn222] end connection 127.0.0.1:37228 (41 connections now open)
m30001| Fri Feb 22 12:37:10.075 [conn263] assertion 14817 $and/$or elements must be objects ns:test.jstests_and query:{ $and: [ 4.0 ] }
m30001| Fri Feb 22 12:37:10.075 [conn263] problem detected during query over test.jstests_and : { $err: "$and/$or elements must be objects", code: 14817 }
m30999| Fri Feb 22 12:37:10.075 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 14817 $and/$or elements must be objects
m30001| Fri Feb 22 12:37:10.075 [conn263] end connection 127.0.0.1:37943 (40 connections now open)
142ms
******************************************* Test : jstests/mr_undef.js ...
m30999| Fri Feb 22 12:37:10.151 [conn1] DROP: test.mr_undef
m30001| Fri Feb 22 12:37:10.151 [conn241] CMD: drop test.mr_undef
m30999| Fri Feb 22 12:37:10.152 [conn1] DROP: test.mr_undef_out
m30001| Fri Feb 22 12:37:10.152 [conn241] CMD: drop test.mr_undef_out
m30001| Fri Feb 22 12:37:10.152 [conn241] build index test.mr_undef { _id: 1 }
m30001| Fri Feb 22 12:37:10.153 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.183 [conn241] CMD: drop test.tmp.mr.mr_undef_66
m30001| Fri Feb 22 12:37:10.184 [conn241] CMD: drop test.tmp.mr.mr_undef_66_inc
m30001| Fri Feb 22 12:37:10.184 [conn241] build index test.tmp.mr.mr_undef_66_inc { 0: 1 }
m30001| Fri Feb 22 12:37:10.184 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.185 [conn241] build index test.tmp.mr.mr_undef_66 { _id: 1 }
m30001| Fri Feb 22 12:37:10.185 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.188 [conn241] CMD: drop test.mr_undef_out
m30001| Fri Feb 22 12:37:10.198 [conn241] CMD: drop test.tmp.mr.mr_undef_66
m30001| Fri Feb 22 12:37:10.198 [conn241] CMD: drop test.tmp.mr.mr_undef_66
m30001| Fri Feb 22 12:37:10.198 [conn241] CMD: drop test.tmp.mr.mr_undef_66_inc
m30001| Fri Feb 22 12:37:10.202 [conn241] CMD: drop test.tmp.mr.mr_undef_66
m30001| Fri Feb 22 12:37:10.203 [conn241] CMD: drop test.tmp.mr.mr_undef_66_inc
54ms
******************************************* Test : jstests/eval9.js ...
m30999| Fri Feb 22 12:37:10.255 [conn1] DROP: test.eval9
m30001| Fri Feb 22 12:37:10.255 [conn241] CMD: drop test.eval9
m30001| Fri Feb 22 12:37:10.256 [conn241] build index test.eval9 { _id: 1 }
m30001| Fri Feb 22 12:37:10.256 [conn241] build index done. scanned 0 total records. 0 secs
55ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_box3.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2meridian.js
******************************************* Test : jstests/db.js ...
36ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/auth1.js
******************************************* Test : jstests/update_multi4.js ...
m30999| Fri Feb 22 12:37:10.297 [conn1] DROP: test.update_mulit4
m30001| Fri Feb 22 12:37:10.297 [conn241] CMD: drop test.update_mulit4
m30001| Fri Feb 22 12:37:10.297 [conn241] build index test.update_mulit4 { _id: 1 }
m30001| Fri Feb 22 12:37:10.298 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.352 [conn241] build index test.update_mulit4 { k: 1.0 }
m30001| Fri Feb 22 12:37:10.359 [conn241] build index done. scanned 1000 total records. 0.006 secs
78ms
******************************************* Test : jstests/mr_index2.js ...
m30999| Fri Feb 22 12:37:10.375 [conn1] DROP: test.mr_index2
m30001| Fri Feb 22 12:37:10.375 [conn241] CMD: drop test.mr_index2
m30001| Fri Feb 22 12:37:10.376 [conn241] build index test.mr_index2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.376 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.378 [conn241] CMD: drop test.tmp.mr.mr_index2_67
m30001| Fri Feb 22 12:37:10.378 [conn241] CMD: drop test.tmp.mr.mr_index2_67_inc
m30001| Fri Feb 22 12:37:10.379 [conn241] build index test.tmp.mr.mr_index2_67_inc { 0: 1 }
m30001| Fri Feb 22 12:37:10.379 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.379 [conn241] build index test.tmp.mr.mr_index2_67 { _id: 1 }
m30001| Fri Feb 22 12:37:10.380 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.383 [conn241] CMD: drop test.mr_index2_out
m30001| Fri Feb 22 12:37:10.392 [conn241] CMD: drop test.tmp.mr.mr_index2_67
m30001| Fri Feb 22 12:37:10.392 [conn241] CMD: drop test.tmp.mr.mr_index2_67
m30001| Fri Feb 22 12:37:10.392 [conn241] CMD: drop test.tmp.mr.mr_index2_67_inc
m30001| Fri Feb 22 12:37:10.397 [conn241] CMD: drop test.tmp.mr.mr_index2_67
m30001| Fri Feb 22 12:37:10.397 [conn241] CMD: drop test.tmp.mr.mr_index2_67_inc
m30999| Fri Feb 22 12:37:10.398 [conn1] DROP: test.mr_index2_out
m30001| Fri Feb 22 12:37:10.398 [conn241] CMD: drop test.mr_index2_out
m30001| Fri Feb 22 12:37:10.404 [conn241] CMD: drop test.tmp.mr.mr_index2_68
m30001| Fri Feb 22 12:37:10.405 [conn241] CMD: drop test.tmp.mr.mr_index2_68_inc
m30001| Fri Feb 22 12:37:10.405 [conn241] build index test.tmp.mr.mr_index2_68_inc { 0: 1 }
m30001| Fri Feb 22 12:37:10.405 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.405 [conn241] build index test.tmp.mr.mr_index2_68 { _id: 1 }
m30001| Fri Feb 22 12:37:10.406 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.408 [conn241] CMD: drop test.mr_index2_out
m30001| Fri Feb 22 12:37:10.418 [conn241] CMD: drop test.tmp.mr.mr_index2_68
m30001| Fri Feb 22 12:37:10.418 [conn241] CMD: drop test.tmp.mr.mr_index2_68
m30001| Fri Feb 22 12:37:10.418 [conn241] CMD: drop test.tmp.mr.mr_index2_68_inc
m30001| Fri Feb 22 12:37:10.423 [conn241] CMD: drop test.tmp.mr.mr_index2_68
m30001| Fri Feb 22 12:37:10.423 [conn241] CMD: drop test.tmp.mr.mr_index2_68_inc
m30999| Fri Feb 22 12:37:10.423 [conn1] DROP: test.mr_index2_out
m30001| Fri Feb 22 12:37:10.424 [conn241] CMD: drop test.mr_index2_out
m30001| Fri Feb 22 12:37:10.428 [conn241] build index test.mr_index2 { arr: 1.0 }
m30001| Fri Feb 22 12:37:10.429 [conn241] build index done. scanned 1 total records. 0 secs
m30001| Fri Feb 22 12:37:10.431 [conn241] CMD: drop test.tmp.mr.mr_index2_69
m30001| Fri Feb 22 12:37:10.431 [conn241] CMD: drop test.tmp.mr.mr_index2_69_inc
m30001| Fri Feb 22 12:37:10.431 [conn241] build index test.tmp.mr.mr_index2_69_inc { 0: 1 }
m30001| Fri Feb 22 12:37:10.432 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.432 [conn241] build index test.tmp.mr.mr_index2_69 { _id: 1 }
m30001| Fri Feb 22 12:37:10.432 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.435 [conn241] CMD: drop test.mr_index2_out
m30001| Fri Feb 22 12:37:10.444 [conn241] CMD: drop test.tmp.mr.mr_index2_69
m30001| Fri Feb 22 12:37:10.445 [conn241] CMD: drop test.tmp.mr.mr_index2_69
m30001| Fri Feb 22 12:37:10.445 [conn241] CMD: drop test.tmp.mr.mr_index2_69_inc
m30001| Fri Feb 22 12:37:10.450 [conn241] CMD: drop test.tmp.mr.mr_index2_69
m30001| Fri Feb 22 12:37:10.450 [conn241] CMD: drop test.tmp.mr.mr_index2_69_inc
m30999| Fri Feb 22 12:37:10.450 [conn1] DROP: test.mr_index2_out
m30001| Fri Feb 22 12:37:10.450 [conn241] CMD: drop test.mr_index2_out
81ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geoa.js
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_mapreduce2.js
******************************************* Test : jstests/hashindex1.js ...
m30999| Fri Feb 22 12:37:10.464 [conn1] DROP: test.hashindex1
m30001| Fri Feb 22 12:37:10.464 [conn241] CMD: drop test.hashindex1
m30001| Fri Feb 22 12:37:10.465 [conn241] build index test.hashindex1 { _id: 1 }
m30001| Fri Feb 22 12:37:10.466 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.466 [conn241] info: creating collection test.hashindex1 on add index
m30001| Fri Feb 22 12:37:10.466 [conn241] build index test.hashindex1 { a: "hashed", b: 1.0 }
m30001| Fri Feb 22 12:37:10.469 [conn241] build index test.hashindex1 { a: "hashed" }
m30001| Fri Feb 22 12:37:10.471 [conn241] build index test.hashindex1 { a: "hashed" }
m30001| Fri Feb 22 12:37:10.472 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.479 [conn241] build index test.hashindex1 { _id: "hashed" }
m30001| Fri Feb 22 12:37:10.479 [conn241] build index done. scanned 11 total records. 0 secs
m30001| Fri Feb 22 12:37:10.482 [conn241] build index test.hashindex1 { b: "hashed" }
m30001| Fri Feb 22 12:37:10.482 [conn241] build index done. scanned 11 total records. 0 secs
31ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo9.js
******************************************* Test : jstests/find_and_modify_server6659.js ...
m30999| Fri Feb 22 12:37:10.487 [conn1] DROP: test.find_and_modify_server6659
m30001| Fri Feb 22 12:37:10.487 [conn241] CMD: drop test.find_and_modify_server6659
m30001| Fri Feb 22 12:37:10.488 [conn241] build index test.find_and_modify_server6659 { _id: 1 }
m30001| Fri Feb 22 12:37:10.488 [conn241] build index done. scanned 0 total records. 0 secs
3ms
******************************************* Test : jstests/index_arr2.js ...
m30999| Fri Feb 22 12:37:10.490 [conn1] DROP: test.jstests_arr2
m30001| Fri Feb 22 12:37:10.490 [conn241] CMD: drop test.jstests_arr2
m30001| Fri Feb 22 12:37:10.490 [conn241] build index test.jstests_arr2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.491 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:10.494 [conn1] DROP: test.jstests_arr2
m30001| Fri Feb 22 12:37:10.494 [conn241] CMD: drop test.jstests_arr2
m30001| Fri Feb 22 12:37:10.498 [conn241] build index test.jstests_arr2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.499 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.501 [conn241] build index test.jstests_arr2 { a.b.c: 1.0 }
m30001| Fri Feb 22 12:37:10.501 [conn241] build index done. scanned 21 total records. 0 secs
14ms
******************************************* Test : jstests/covered_index_simple_2.js ...
m30999| Fri Feb 22 12:37:10.508 [conn1] DROP: test.covered_simple_2
m30001| Fri Feb 22 12:37:10.508 [conn241] CMD: drop test.covered_simple_2
m30001| Fri Feb 22 12:37:10.508 [conn241] build index test.covered_simple_2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.509 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.510 [conn241] build index test.covered_simple_2 { foo: 1.0 }
m30001| Fri Feb 22 12:37:10.510 [conn241] build index done. scanned 13 total records. 0 secs
all tests pass
11ms
******************************************* Test : jstests/rename5.js ...
m30999| Fri Feb 22 12:37:10.514 [conn1] DROP: test.jstests_rename5
m30001| Fri Feb 22 12:37:10.515 [conn241] CMD: drop test.jstests_rename5
m30001| Fri Feb 22 12:37:10.515 [conn241] build index test.jstests_rename5 { _id: 1 }
m30001| Fri Feb 22 12:37:10.516 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.516 [conn241] info: creating collection test.jstests_rename5 on add index
m30001| Fri Feb 22 12:37:10.516 [conn241] build index test.jstests_rename5 { a: 1.0 }
m30001| Fri Feb 22 12:37:10.516 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:10.518 [conn1] DROP: test.jstests_rename5
m30001| Fri Feb 22 12:37:10.518 [conn241] CMD: drop test.jstests_rename5
m30001| Fri Feb 22 12:37:10.526 [conn241] build index test.jstests_rename5 { _id: 1 }
m30001| Fri Feb 22 12:37:10.527 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:10.528 [conn1] DROP: test.jstests_rename5
m30001| Fri Feb 22 12:37:10.528 [conn241] CMD: drop test.jstests_rename5
m30001| Fri Feb 22 12:37:10.533 [conn241] build index test.jstests_rename5 { _id: 1 }
m30001| Fri Feb 22 12:37:10.534 [conn241] build index done. scanned 0 total records. 0 secs
20ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/loglong.js
******************************************* Test : jstests/js7.js ...
m30999| Fri Feb 22 12:37:10.540 [conn1] DROP: test.jstests_js7
m30001| Fri Feb 22 12:37:10.540 [conn241] CMD: drop test.jstests_js7
8ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_regex0.js
******************************************* Test : jstests/update_arraymatch4.js ...
m30999| Fri Feb 22 12:37:10.542 [conn1] DROP: test.update_arraymatch4
m30001| Fri Feb 22 12:37:10.542 [conn241] CMD: drop test.update_arraymatch4
m30001| Fri Feb 22 12:37:10.543 [conn241] build index test.update_arraymatch4 { _id: 1 }
m30001| Fri Feb 22 12:37:10.543 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.545 [conn241] build index test.update_arraymatch4 { arr: 1.0 }
m30001| Fri Feb 22 12:37:10.545 [conn241] build index done. scanned 1 total records. 0 secs
4ms
******************************************* Test : jstests/upsert2.js ...
m30999| Fri Feb 22 12:37:10.553 [conn1] DROP: test.jstests_upsert2
m30001| Fri Feb 22 12:37:10.553 [conn241] CMD: drop test.jstests_upsert2
m30001| Fri Feb 22 12:37:10.554 [conn241] build index test.jstests_upsert2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.554 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:10.555 [conn1] DROP: test.jstests_upsert2
m30001| Fri Feb 22 12:37:10.555 [conn241] CMD: drop test.jstests_upsert2
m30001| Fri Feb 22 12:37:10.560 [conn241] build index test.jstests_upsert2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.560 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:10.561 [conn1] DROP: test.jstests_upsert2
m30001| Fri Feb 22 12:37:10.561 [conn241] CMD: drop test.jstests_upsert2
m30001| Fri Feb 22 12:37:10.566 [conn241] build index test.jstests_upsert2 { _id: 1 }
m30001| Fri Feb 22 12:37:10.566 [conn241] build index done. scanned 0 total records. 0 secs
20ms
******************************************* Test : jstests/update_blank1.js ...
m30999| Fri Feb 22 12:37:10.567 [conn1] DROP: test.update_blank1
m30001| Fri Feb 22 12:37:10.567 [conn241] CMD: drop test.update_blank1
m30001| Fri Feb 22 12:37:10.568 [conn241] build index test.update_blank1 { _id: 1 }
m30001| Fri Feb 22 12:37:10.569 [conn241] build index done. scanned 0 total records. 0.001 secs
null
4ms
******************************************* Test : jstests/covered_index_sort_1.js ...
m30999| Fri Feb 22 12:37:10.571 [conn1] DROP: test.covered_sort_1
m30001| Fri Feb 22 12:37:10.571 [conn241] CMD: drop test.covered_sort_1
m30001| Fri Feb 22 12:37:10.572 [conn241] build index test.covered_sort_1 { _id: 1 }
m30001| Fri Feb 22 12:37:10.572 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.574 [conn241] build index test.covered_sort_1 { foo: 1.0 }
m30001| Fri Feb 22 12:37:10.574 [conn241] build index done. scanned 28 total records. 0 secs
all tests pass
6ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo8.js
******************************************* Test : jstests/regex_util.js ...
1ms
******************************************* Test : jstests/mr_index3.js ...
m30999| Fri Feb 22 12:37:10.578 [conn1] DROP: test.mr_index3
m30001| Fri Feb 22 12:37:10.578 [conn241] CMD: drop test.mr_index3
m30001| Fri Feb 22 12:37:10.579 [conn241] build index test.mr_index3 { _id: 1 }
m30001| Fri Feb 22 12:37:10.579 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.588 [conn241] build index test.mr_index3 { name: 1.0, tags: 1.0 }
m30001| Fri Feb 22 12:37:10.589 [conn241] build index done. scanned 4 total records. 0 secs
19ms
******************************************* Test : jstests/update_arraymatch5.js ...
m30999| Fri Feb 22 12:37:10.598 [conn1] DROP: test.update_arraymatch5
m30001| Fri Feb 22 12:37:10.598 [conn241] CMD: drop test.update_arraymatch5
m30001| Fri Feb 22 12:37:10.598 [conn241] build index test.update_arraymatch5 { _id: 1 }
m30001| Fri Feb 22 12:37:10.599 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.599 [conn241] build index test.update_arraymatch5 { abc.visible: 1.0, testarray.visible: 1.0, testarray.xxx: 1.0 }
m30001| Fri Feb 22 12:37:10.600 [conn241] build index done. scanned 1 total records. 0 secs
6ms
******************************************* Test : jstests/rename4.js ...
m30999| Fri Feb 22 12:37:10.605 [conn1] DROP: test.jstests_rename4
m30001| Fri Feb 22 12:37:10.605 [conn241] CMD: drop test.jstests_rename4
m30001| Fri Feb 22 12:37:10.626 [conn241] build index test.jstests_rename4 { _id: 1 }
m30001| Fri Feb 22 12:37:10.626 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:10.668 [conn1] DROP: test.jstests_rename4
m30001| Fri Feb 22 12:37:10.668 [conn241] CMD: drop test.jstests_rename4
m30001| Fri Feb 22 12:37:10.674 [conn241] build index test.jstests_rename4 { _id: 1 }
m30001| Fri Feb 22 12:37:10.675 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.675 [conn241] info: creating collection test.jstests_rename4 on add index
m30001| Fri Feb 22 12:37:10.675 [conn241] build index test.jstests_rename4 { a: 1.0 }
m30001| Fri Feb 22 12:37:10.676 [conn241] build index done. scanned 0 total records. 0 secs
76ms
******************************************* Test : jstests/covered_index_simple_3.js ...
m30999| Fri Feb 22 12:37:10.680 [conn1] DROP: test.covered_simple_3
m30001| Fri Feb 22 12:37:10.681 [conn241] CMD: drop test.covered_simple_3
m30001| Fri Feb 22 12:37:10.681 [conn241] build index test.covered_simple_3 { _id: 1 }
m30001| Fri Feb 22 12:37:10.682 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.683 [conn241] build index test.covered_simple_3 { foo: 1.0 }
m30001| Fri Feb 22 12:37:10.684 [conn241] build index done. scanned 18 total records. 0 secs
all tests pass
8ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/copydb-auth.js
******************************************* Test : jstests/mr_comments.js ...
m30999| Fri Feb 22 12:37:10.689 [conn1] DROP: test.mr_comments
m30001| Fri Feb 22 12:37:10.689 [conn241] CMD: drop test.mr_comments
m30001| Fri Feb 22 12:37:10.689 [conn241] build index test.mr_comments { _id: 1 }
m30001| Fri Feb 22 12:37:10.690 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.692 [conn241] CMD: drop test.tmp.mr.mr_comments_70
m30001| Fri Feb 22 12:37:10.692 [conn241] CMD: drop test.tmp.mr.mr_comments_70_inc
m30001| Fri Feb 22 12:37:10.693 [conn241] build index test.tmp.mr.mr_comments_70_inc { 0: 1 }
m30001| Fri Feb 22 12:37:10.693 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.694 [conn241] build index test.tmp.mr.mr_comments_70 { _id: 1 }
m30001| Fri Feb 22 12:37:10.694 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.697 [conn241] CMD: drop test.mr_comments_out
m30001| Fri Feb 22 12:37:10.708 [conn241] CMD: drop test.tmp.mr.mr_comments_70
m30001| Fri Feb 22 12:37:10.708 [conn241] CMD: drop test.tmp.mr.mr_comments_70
m30001| Fri Feb 22 12:37:10.708 [conn241] CMD: drop test.tmp.mr.mr_comments_70_inc
m30001| Fri Feb 22 12:37:10.713 [conn241] CMD: drop test.tmp.mr.mr_comments_70
m30001| Fri Feb 22 12:37:10.713 [conn241] CMD: drop test.tmp.mr.mr_comments_70_inc
m30001| Fri Feb 22 12:37:10.716 [conn241] CMD: drop test.tmp.mr.mr_comments_71
m30001| Fri Feb 22 12:37:10.716 [conn241] CMD: drop test.tmp.mr.mr_comments_71_inc
m30001| Fri Feb 22 12:37:10.716 [conn241] build index test.tmp.mr.mr_comments_71_inc { 0: 1 }
m30001| Fri Feb 22 12:37:10.717 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.717 [conn241] build index test.tmp.mr.mr_comments_71 { _id: 1 }
m30001| Fri Feb 22 12:37:10.717 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.720 [conn241] CMD: drop test.mr_comments_out
m30001| Fri Feb 22 12:37:10.735 [conn241] CMD: drop test.tmp.mr.mr_comments_71
m30001| Fri Feb 22 12:37:10.736 [conn241] CMD: drop test.tmp.mr.mr_comments_71
m30001| Fri Feb 22 12:37:10.736 [conn241] CMD: drop test.tmp.mr.mr_comments_71_inc
m30001| Fri Feb 22 12:37:10.740 [conn241] CMD: drop test.tmp.mr.mr_comments_71
m30001| Fri Feb 22 12:37:10.741 [conn241] CMD: drop test.tmp.mr.mr_comments_71_inc
54ms
>>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_box2.js
******************************************* Test : jstests/eval8.js ...
m30999| Fri Feb 22 12:37:10.743 [conn1] DROP: test.eval8
m30001| Fri Feb 22 12:37:10.743 [conn241] CMD: drop test.eval8
m30001| Fri Feb 22 12:37:10.743 [conn241] build index test.eval8 { _id: 1 }
m30001| Fri Feb 22 12:37:10.744 [conn241] build index done. scanned 0 total records. 0 secs
3ms
******************************************* Test : jstests/sort1.js ...
m30999| Fri Feb 22 12:37:10.746 [conn1] DROP: test.sort1
m30001| Fri Feb 22 12:37:10.746 [conn241] CMD: drop test.sort1
m30001| Fri Feb 22 12:37:10.746 [conn241] build index test.sort1 { _id: 1 }
m30001| Fri Feb 22 12:37:10.747 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.749 [conn241] build index test.sort1 { x: 1.0 }
m30001| Fri Feb 22 12:37:10.750 [conn241] build index done. scanned 5 total records. 0 secs
m30001| Fri Feb 22 12:37:10.752 [conn4] CMD: validate test.sort1
m30001| Fri Feb 22 12:37:10.752 [conn4] validating index 0: test.sort1.$_id_
m30001| Fri Feb 22 12:37:10.752 [conn4] validating index 1: test.sort1.$x_1
m30999| Fri Feb 22 12:37:10.753 [conn1] DROP: test.sort1
m30001| Fri Feb 22 12:37:10.753 [conn241] CMD: drop test.sort1
m30001| Fri Feb 22 12:37:10.760 [conn241] build index test.sort1 { _id: 1 }
m30001| Fri Feb 22 12:37:10.760 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:10.762 [conn241] build index test.sort1 { x: 1.0 }
m30001| Fri Feb 22 12:37:10.763 [conn241] build index done. scanned 4 total records. 0 secs
m30001| Fri Feb 22 12:37:10.765 [conn4] CMD: validate test.sort1
m30001| Fri Feb 22 12:37:10.765 [conn4] validating index 0: test.sort1.$_id_
m30001| Fri Feb 22 12:37:10.765 [conn4] validating index 1: test.sort1.$x_1
20ms
******************************************* Test : jstests/use_power_of_2.js ...
m30999| Fri Feb 22 12:37:10.766 [conn1] DROP: test.usepower1
m30001| Fri Feb 22 12:37:10.766 [conn241] CMD: drop test.usepower1
m30001| Fri Feb 22 12:37:10.767 [conn241] build index test.usepower1 { _id: 1 }
m30001| Fri Feb 22 12:37:10.768 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:11.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276677881c8e7453916077
m30999| Fri Feb 22 12:37:11.797 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked.
m30999| Fri Feb 22 12:37:13.235 [conn1] DROP: test.usepower2
m30001| Fri Feb 22 12:37:13.236 [conn241] CMD: drop test.usepower2
m30001| Fri Feb 22 12:37:13.236 [conn241] build index test.usepower2 { _id: 1 }
m30001| Fri Feb 22 12:37:13.238 [conn241] build index done. scanned 0 total records. 0.001 secs
4769ms
******************************************* Test : jstests/sorti.js ...
m30999| Fri Feb 22 12:37:15.536 [conn1] DROP: test.jstests_sorti
m30001| Fri Feb 22 12:37:15.537 [conn241] CMD: drop test.jstests_sorti
m30001| Fri Feb 22 12:37:15.537 [conn241] build index test.jstests_sorti { _id: 1 }
m30001| Fri Feb 22 12:37:15.538 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:15.539 [conn241] build index test.jstests_sorti { b: 1.0 }
m30001| Fri Feb 22 12:37:15.540 [conn241] build index done. scanned 4 total records. 0 secs
6ms
!!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/count5.js
******************************************* Test : jstests/update_multi5.js ...
m30999| Fri Feb 22 12:37:15.547 [conn1] DROP: test.update_multi5
m30001| Fri Feb 22 12:37:15.547 [conn241] CMD: drop test.update_multi5
m30001| Fri Feb 22 12:37:15.548 [conn241] build index test.update_multi5 { _id: 1 }
m30001| Fri Feb 22 12:37:15.549 [conn241] build index done. scanned 0 total records. 0 secs
8ms
******************************************* Test : jstests/indexapi.js ...
m30999| Fri Feb 22 12:37:15.556 [conn1] DROP: test.indexapi
m30001| Fri Feb 22 12:37:15.556 [conn241] CMD: drop test.indexapi
m30001| Fri Feb 22 12:37:15.557 [conn241] build index test.indexapi { _id: 1 }
m30001| Fri Feb 22 12:37:15.558 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:15.558 [conn241] info: creating collection test.indexapi on add index
m30001| Fri Feb 22 12:37:15.558 [conn241] build index test.indexapi { x: 1.0 }
m30001| Fri Feb 22 12:37:15.559 [conn241] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:37:15.560 [conn1] DROP: test.indexapi
m30001| Fri Feb 22 12:37:15.560 [conn241] CMD: drop test.indexapi
m30001| Fri Feb 22 12:37:15.566 [conn241] build index test.indexapi { _id: 1 }
m30001| Fri Feb 22 12:37:15.566 [conn241] build index done. scanned 0 total records. 0 secs
m30001| Fri Feb 22 12:37:15.566 [conn241] info: creating collection test.indexapi on add index
m30001| Fri Feb 22 12:37:15.566 [conn241] build index test.indexapi { x: 1.0 }
m30001| Fri Feb 22 12:37:15.567 [conn241] build index done. scanned 0 total records. 0 secs
19ms
>>>>>>>>>>>>>>> skipping jstests/dur
******************************************* Test : jstests/find3.js ...
m30999| Fri Feb 22 12:37:15.573 [conn1] DROP: test.find3 m30001| Fri Feb 22 12:37:15.574 [conn241] CMD: drop test.find3 m30001| Fri Feb 22 12:37:15.574 [conn241] build index test.find3 { _id: 1 } m30001| Fri Feb 22 12:37:15.575 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.578 [conn4] CMD: validate test.find3 m30001| Fri Feb 22 12:37:15.578 [conn4] validating index 0: test.find3.$_id_ 9ms ******************************************* Test : jstests/basic3.js ... m30001| Fri Feb 22 12:37:15.586 [conn241] build index test.foo_basic3 { _id: 1 } 8ms ******************************************* Test : jstests/arrayfind5.js ... m30999| Fri Feb 22 12:37:15.587 [conn1] DROP: test.jstests_arrayfind5 m30001| Fri Feb 22 12:37:15.587 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.587 [conn241] CMD: drop test.jstests_arrayfind5 m30001| Fri Feb 22 12:37:15.588 [conn241] build index test.jstests_arrayfind5 { _id: 1 } m30001| Fri Feb 22 12:37:15.588 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.589 [conn241] build index test.jstests_arrayfind5 { a.b: 1.0 } m30001| Fri Feb 22 12:37:15.590 [conn241] build index done. scanned 1 total records. 0 secs m30999| Fri Feb 22 12:37:15.592 [conn1] DROP: test.jstests_arrayfind5 m30001| Fri Feb 22 12:37:15.592 [conn241] CMD: drop test.jstests_arrayfind5 m30001| Fri Feb 22 12:37:15.597 [conn241] build index test.jstests_arrayfind5 { _id: 1 } m30001| Fri Feb 22 12:37:15.598 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.599 [conn241] build index test.jstests_arrayfind5 { a.b: 1.0 } m30001| Fri Feb 22 12:37:15.599 [conn241] build index done. scanned 1 total records. 0 secs 14ms ******************************************* Test : jstests/queryoptimizerc.js ... 
m30999| Fri Feb 22 12:37:15.601 [conn1] DROP: test.jstests_queryoptimizerc m30001| Fri Feb 22 12:37:15.601 [conn241] CMD: drop test.jstests_queryoptimizerc m30001| Fri Feb 22 12:37:15.601 [conn241] build index test.jstests_queryoptimizerc { _id: 1 } m30001| Fri Feb 22 12:37:15.602 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.602 [conn241] info: creating collection test.jstests_queryoptimizerc on add index m30001| Fri Feb 22 12:37:15.602 [conn241] build index test.jstests_queryoptimizerc { a: 1.0 } m30001| Fri Feb 22 12:37:15.602 [conn241] build index done. scanned 0 total records. 0 secs 3ms ******************************************* Test : jstests/or5.js ... m30999| Fri Feb 22 12:37:15.608 [conn1] DROP: test.jstests_or5 m30001| Fri Feb 22 12:37:15.608 [conn241] CMD: drop test.jstests_or5 m30001| Fri Feb 22 12:37:15.609 [conn241] build index test.jstests_or5 { _id: 1 } m30001| Fri Feb 22 12:37:15.609 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.609 [conn241] info: creating collection test.jstests_or5 on add index m30001| Fri Feb 22 12:37:15.609 [conn241] build index test.jstests_or5 { a: 1.0 } m30001| Fri Feb 22 12:37:15.610 [conn241] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.610 [conn241] build index test.jstests_or5 { b: 1.0 } m30001| Fri Feb 22 12:37:15.610 [conn241] build index done. scanned 0 total records. 0 secs { "cursor" : "BasicCursor", "isMultiKey" : false, "n" : 0, "nscannedObjects" : 0, "nscanned" : 0, "nscannedObjectsAllPlans" : 0, "nscannedAllPlans" : 0, "scanAndOrder" : true, "indexOnly" : false, "nYields" : 0, "nChunkSkips" : 0, "millis" : 0, "indexBounds" : { }, "server" : "bs-smartos-x86-64-1.10gen.cc:30001", "millis" : 0 } m30001| Fri Feb 22 12:37:15.613 [conn241] build index test.jstests_or5 { c: 1.0 } m30001| Fri Feb 22 12:37:15.614 [conn241] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:37:15.638 [conn241] build index test.jstests_or5 { z: "2d" } m30001| Fri Feb 22 12:37:15.638 [conn241] build index done. scanned 7 total records. 0 secs m30001| Fri Feb 22 12:37:15.641 [conn241] assertion 13291 $or may not contain 'special' query ns:test.jstests_or5 query:{ $or: [ { z: { $near: [ 50.0, 50.0 ] } }, { a: 2.0 } ] } m30001| Fri Feb 22 12:37:15.641 [conn241] problem detected during query over test.jstests_or5 : { $err: "$or may not contain 'special' query", code: 13291 } m30999| Fri Feb 22 12:37:15.641 [conn1] warning: db exception when finishing on shard0001:localhost:30001, current connection state is { state: { conn: "localhost:30001", vinfo: "shard0001:localhost:30001", cursor: "(empty)", count: 0, done: false }, retryNext: false, init: true, finish: true, errored: false } :: caused by :: 13291 $or may not contain 'special' query m30001| Fri Feb 22 12:37:15.641 [conn241] end connection 127.0.0.1:58634 (38 connections now open) m30999| Fri Feb 22 12:37:15.642 [conn1] DROP: test.jstests_or5 m30001| Fri Feb 22 12:37:15.642 [conn180] CMD: drop test.jstests_or5 m30001| Fri Feb 22 12:37:15.653 [conn180] build index test.jstests_or5 { _id: 1 } m30001| Fri Feb 22 12:37:15.653 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.653 [conn180] info: creating collection test.jstests_or5 on add index m30001| Fri Feb 22 12:37:15.653 [conn180] build index test.jstests_or5 { a: 1.0 } m30001| Fri Feb 22 12:37:15.654 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.654 [conn180] build index test.jstests_or5 { b: 1.0 } m30001| Fri Feb 22 12:37:15.654 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.654 [conn180] build index test.jstests_or5 { c: 1.0 } m30001| Fri Feb 22 12:37:15.655 [conn180] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:37:15.658 [conn1] DROP: test.jstests_or5 m30001| Fri Feb 22 12:37:15.658 [conn180] CMD: drop test.jstests_or5 m30001| Fri Feb 22 12:37:15.666 [conn180] build index test.jstests_or5 { _id: 1 } m30001| Fri Feb 22 12:37:15.667 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.667 [conn180] info: creating collection test.jstests_or5 on add index m30001| Fri Feb 22 12:37:15.667 [conn180] build index test.jstests_or5 { a: 1.0 } m30001| Fri Feb 22 12:37:15.667 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.667 [conn180] build index test.jstests_or5 { b: 1.0 } m30001| Fri Feb 22 12:37:15.668 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.668 [conn180] build index test.jstests_or5 { c: 1.0 } m30001| Fri Feb 22 12:37:15.668 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:15.670 [conn1] DROP: test.jstests_or5 m30001| Fri Feb 22 12:37:15.670 [conn180] CMD: drop test.jstests_or5 m30001| Fri Feb 22 12:37:15.678 [conn180] build index test.jstests_or5 { _id: 1 } m30001| Fri Feb 22 12:37:15.678 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.678 [conn180] info: creating collection test.jstests_or5 on add index m30001| Fri Feb 22 12:37:15.678 [conn180] build index test.jstests_or5 { a: 1.0 } m30001| Fri Feb 22 12:37:15.678 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.679 [conn180] build index test.jstests_or5 { b: 1.0 } m30001| Fri Feb 22 12:37:15.679 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.679 [conn180] build index test.jstests_or5 { c: 1.0 } m30001| Fri Feb 22 12:37:15.680 [conn180] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:37:15.681 [conn1] DROP: test.jstests_or5 m30001| Fri Feb 22 12:37:15.682 [conn180] CMD: drop test.jstests_or5 m30001| Fri Feb 22 12:37:15.689 [conn180] build index test.jstests_or5 { _id: 1 } m30001| Fri Feb 22 12:37:15.690 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.690 [conn180] info: creating collection test.jstests_or5 on add index m30001| Fri Feb 22 12:37:15.690 [conn180] build index test.jstests_or5 { a: 1.0 } m30001| Fri Feb 22 12:37:15.690 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.690 [conn180] build index test.jstests_or5 { b: 1.0 } m30001| Fri Feb 22 12:37:15.691 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.691 [conn180] build index test.jstests_or5 { c: 1.0 } m30001| Fri Feb 22 12:37:15.691 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:15.693 [conn1] DROP: test.jstests_or5 m30001| Fri Feb 22 12:37:15.693 [conn180] CMD: drop test.jstests_or5 m30001| Fri Feb 22 12:37:15.701 [conn180] build index test.jstests_or5 { _id: 1 } m30001| Fri Feb 22 12:37:15.701 [conn180] build index done. scanned 0 total records. 0 secs 99ms ******************************************* Test : jstests/indexw.js ... m30999| Fri Feb 22 12:37:15.712 [conn1] DROP: test.jstests_indexw m30001| Fri Feb 22 12:37:15.712 [conn180] CMD: drop test.jstests_indexw m30001| Fri Feb 22 12:37:15.712 [conn180] build index test.jstests_indexw { _id: 1 } m30001| Fri Feb 22 12:37:15.713 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.713 [conn180] build index test.jstests_indexw { a: 1.0 } m30001| Fri Feb 22 12:37:15.714 [conn180] build index done. scanned 1 total records. 
0 secs m30001| Fri Feb 22 12:37:15.715 [conn4] CMD: dropIndexes test.jstests_indexw m30001| Fri Feb 22 12:37:15.717 [conn180] build index test.jstests_indexw { a: 1.0 } m30001| Fri Feb 22 12:37:15.718 [conn180] build index done. scanned 1 total records. 0 secs 16ms ******************************************* Test : jstests/updated.js ... m30999| Fri Feb 22 12:37:15.719 [conn1] DROP: test.updated m30001| Fri Feb 22 12:37:15.719 [conn180] CMD: drop test.updated m30001| Fri Feb 22 12:37:15.720 [conn180] build index test.updated { _id: 1 } m30001| Fri Feb 22 12:37:15.720 [conn180] build index done. scanned 0 total records. 0 secs 3ms !!!!!!!!!!!!!!! skipping test that has failed under sharding but might not anymore jstests/capped8.js ******************************************* Test : jstests/indexa.js ... m30999| Fri Feb 22 12:37:15.727 [conn1] DROP: test.indexa m30001| Fri Feb 22 12:37:15.727 [conn180] CMD: drop test.indexa m30001| Fri Feb 22 12:37:15.728 [conn180] build index test.indexa { _id: 1 } m30001| Fri Feb 22 12:37:15.728 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:15.728 [conn180] info: creating collection test.indexa on add index m30001| Fri Feb 22 12:37:15.728 [conn180] build index test.indexa { x: 1.0 } m30001| Fri Feb 22 12:37:15.728 [conn180] build index done. scanned 0 total records. 0 secs 10ms ******************************************* Test : jstests/exists8.js ... m30999| Fri Feb 22 12:37:15.737 [conn1] DROP: test.jstests_exists8 m30001| Fri Feb 22 12:37:15.737 [conn180] CMD: drop test.jstests_exists8 m30001| Fri Feb 22 12:37:15.738 [conn180] build index test.jstests_exists8 { _id: 1 } m30001| Fri Feb 22 12:37:15.738 [conn180] build index done. scanned 0 total records. 0 secs 21ms ******************************************* Test : jstests/orm.js ... 
m30999| Fri Feb 22 12:37:15.753 [conn1] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:15.753 [conn180] CMD: drop test.jstests_orm Fri Feb 22 12:37:15.781 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --eval TestData = { "testPath" : "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_passthrough.js", "testFile" : "sharding_passthrough.js", "testName" : "sharding_passthrough", "noJournal" : false, "noJournalPrealloc" : false, "auth" : false, "keyFile" : null, "keyFileData" : null };db = db.getSiblingDB('test');for( i = 0; i < 15; ++i ) { sleep( 1000 ); db.jstests_orm.drop() } localhost:30999/admin m30001| Fri Feb 22 12:37:15.782 [conn180] build index test.jstests_orm { _id: 1 } m30001| Fri Feb 22 12:37:15.783 [conn180] build index done. scanned 0 total records. 0 secs sh16076| MongoDB shell version: 2.4.0-rc1-pre- sh16076| connecting to: localhost:30999/admin m30999| Fri Feb 22 12:37:15.872 [mongosMain] connection accepted from 127.0.0.1:42261 #238 (102 connections now open) m30001| Fri Feb 22 12:37:16.229 [conn180] build index test.jstests_orm { a: 1.0 } m30001| Fri Feb 22 12:37:16.267 [conn180] build index done. scanned 10000 total records. 0.037 secs m30001| Fri Feb 22 12:37:16.267 [conn180] build index test.jstests_orm { b: 1.0 } m30001| Fri Feb 22 12:37:16.307 [conn180] build index done. scanned 10000 total records. 
0.039 secs m30001| Fri Feb 22 12:37:16.673 [conn180] remove test.jstests_orm query: { $or: [ { a: { $lte: 500.0 }, i: 49999.0 }, { b: { $lte: 500.0 }, i: 49999.0 }, { a: { $lte: 1000.0 }, i: 49999.0 }, { b: { $lte: 1000.0 }, i: 49999.0 }, { a: { $lte: 1500.0 }, i: 49999.0 }, { b: { $lte: 1500.0 }, i: 49999.0 }, { a: { $lte: 2000.0 }, i: 49999.0 }, { b: { $lte: 2000.0 }, i: 49999.0 }, { a: { $lte: 2500.0 }, i: 49999.0 }, { b: { $lte: 2500.0 }, i: 49999.0 }, { a: { $lte: 3000.0 }, i: 49999.0 }, { b: { $lte: 3000.0 }, i: 49999.0 }, { a: { $lte: 3500.0 }, i: 49999.0 }, { b: { $lte: 3500.0 }, i: 49999.0 }, { a: { $lte: 4000.0 }, i: 49999.0 }, { b: { $lte: 4000.0 }, i: 49999.0 }, { a: { $lte: 4500.0 }, i: 49999.0 }, { b: { $lte: 4500.0 }, i: 49999.0 }, { a: { $lte: 5000.0 }, i: 49999.0 }, { b: { $lte: 5000.0 }, i: 49999.0 } ] } ndeleted:0 keyUpdates:0 numYields: 1 locks(micros) w:110745 104ms m30999| Fri Feb 22 12:37:16.875 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:16.875 [conn175] CMD: drop test.jstests_orm m30001| Fri Feb 22 12:37:16.884 [conn180] build index test.jstests_orm { _id: 1 } m30001| Fri Feb 22 12:37:16.884 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:17.155 [conn180] build index test.jstests_orm { a: 1.0 } m30001| Fri Feb 22 12:37:17.179 [conn180] build index done. scanned 6297 total records. 0.023 secs m30001| Fri Feb 22 12:37:17.179 [conn180] build index test.jstests_orm { b: 1.0 } m30001| Fri Feb 22 12:37:17.203 [conn180] build index done. scanned 6297 total records. 
0.023 secs m30001| Fri Feb 22 12:37:17.520 [conn180] remove test.jstests_orm query: { $or: [ { a: { $lte: 500.0 }, i: 49999.0 }, { b: { $lte: 500.0 }, i: 49999.0 }, { a: { $lte: 1000.0 }, i: 49999.0 }, { b: { $lte: 1000.0 }, i: 49999.0 }, { a: { $lte: 1500.0 }, i: 49999.0 }, { b: { $lte: 1500.0 }, i: 49999.0 }, { a: { $lte: 2000.0 }, i: 49999.0 }, { b: { $lte: 2000.0 }, i: 49999.0 }, { a: { $lte: 2500.0 }, i: 49999.0 }, { b: { $lte: 2500.0 }, i: 49999.0 }, { a: { $lte: 3000.0 }, i: 49999.0 }, { b: { $lte: 3000.0 }, i: 49999.0 }, { a: { $lte: 3500.0 }, i: 49999.0 }, { b: { $lte: 3500.0 }, i: 49999.0 }, { a: { $lte: 4000.0 }, i: 49999.0 }, { b: { $lte: 4000.0 }, i: 49999.0 }, { a: { $lte: 4500.0 }, i: 49999.0 }, { b: { $lte: 4500.0 }, i: 49999.0 }, { a: { $lte: 5000.0 }, i: 49999.0 }, { b: { $lte: 5000.0 }, i: 49999.0 } ] } ndeleted:0 keyUpdates:0 numYields: 1 locks(micros) w:125360 105ms m30999| Fri Feb 22 12:37:17.798 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 5127667d881c8e7453916078 m30999| Fri Feb 22 12:37:17.799 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30999| Fri Feb 22 12:37:17.884 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:17.884 [conn175] CMD: drop test.jstests_orm m30001| Fri Feb 22 12:37:17.893 [conn180] build index test.jstests_orm { _id: 1 } m30001| Fri Feb 22 12:37:17.894 [conn180] build index done. scanned 0 total records. 0.001 secs m30001| Fri Feb 22 12:37:18.154 [conn180] build index test.jstests_orm { a: 1.0 } m30001| Fri Feb 22 12:37:18.176 [conn180] build index done. scanned 5761 total records. 0.021 secs m30001| Fri Feb 22 12:37:18.176 [conn180] build index test.jstests_orm { b: 1.0 } m30001| Fri Feb 22 12:37:18.198 [conn180] build index done. scanned 5761 total records. 
0.021 secs m30999| Fri Feb 22 12:37:18.893 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:18.893 [conn175] CMD: drop test.jstests_orm m30001| Fri Feb 22 12:37:18.902 [conn180] build index test.jstests_orm { _id: 1 } m30001| Fri Feb 22 12:37:18.903 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:18.966 [conn180] build index test.jstests_orm { a: 1.0 } m30001| Fri Feb 22 12:37:18.972 [conn180] build index done. scanned 1565 total records. 0.006 secs m30001| Fri Feb 22 12:37:18.973 [conn180] build index test.jstests_orm { b: 1.0 } m30001| Fri Feb 22 12:37:18.979 [conn180] build index done. scanned 1565 total records. 0.005 secs m30999| Fri Feb 22 12:37:19.902 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:19.903 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:20.911 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:20.911 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:21.912 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:21.913 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:22.913 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:22.913 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:23.800 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276683881c8e7453916079 m30999| Fri Feb 22 12:37:23.801 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. 
m30999| Fri Feb 22 12:37:23.914 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:23.914 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:24.915 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:24.915 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:25.916 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:25.916 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:26.917 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:26.917 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:27.917 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:27.918 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:28.918 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:28.919 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:29.803 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' acquired, ts : 51276689881c8e745391607a m30999| Fri Feb 22 12:37:29.803 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536403:16838' unlocked. m30999| Fri Feb 22 12:37:29.919 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:29.919 [conn175] CMD: drop test.jstests_orm m30999| Fri Feb 22 12:37:30.920 [conn238] DROP: test.jstests_orm m30001| Fri Feb 22 12:37:30.920 [conn175] CMD: drop test.jstests_orm sh16076| false m30999| Fri Feb 22 12:37:30.932 [conn238] end connection 127.0.0.1:42261 (101 connections now open) 15185ms >>>>>>>>>>>>>>> skipping jstests/ssl ******************************************* Test : jstests/index9.js ... m30999| Fri Feb 22 12:37:30.942 [conn1] DROP: test.jstests_index9 m30001| Fri Feb 22 12:37:30.943 [conn180] CMD: drop test.jstests_index9 m30001| Fri Feb 22 12:37:30.944 [conn180] build index test.jstests_index9 { _id: 1 } m30001| Fri Feb 22 12:37:30.945 [conn180] build index done. scanned 0 total records. 
0 secs m30999| Fri Feb 22 12:37:30.946 [conn1] DROP: test.jstests_index9 m30001| Fri Feb 22 12:37:30.946 [conn180] CMD: drop test.jstests_index9 m30001| Fri Feb 22 12:37:30.950 [conn180] build index test.jstests_index9 { _id: 1 } m30001| Fri Feb 22 12:37:30.951 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:30.952 [conn1] DROP: test.jstests_index9 m30001| Fri Feb 22 12:37:30.952 [conn180] CMD: drop test.jstests_index9 m30001| Fri Feb 22 12:37:30.957 [conn180] build index test.jstests_index9 { _id: 1 } m30001| Fri Feb 22 12:37:30.957 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:30.959 [conn1] DROP: test.jstests_index9 m30001| Fri Feb 22 12:37:30.959 [conn180] CMD: drop test.jstests_index9 m30001| Fri Feb 22 12:37:30.962 [conn180] build index test.jstests_index9 { _id: 1 } m30001| Fri Feb 22 12:37:30.963 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:30.963 [conn180] info: creating collection test.jstests_index9 on add index m30999| Fri Feb 22 12:37:30.964 [conn1] DROP: test.jstests_index9 m30001| Fri Feb 22 12:37:30.964 [conn180] CMD: drop test.jstests_index9 m30001| Fri Feb 22 12:37:30.967 [conn180] build index test.jstests_index9 { _id: 1 } m30001| Fri Feb 22 12:37:30.967 [conn180] build index done. scanned 0 total records. 0 secs 30ms ******************************************* Test : jstests/unset2.js ... m30999| Fri Feb 22 12:37:30.969 [conn1] DROP: test.unset2 m30001| Fri Feb 22 12:37:30.970 [conn180] CMD: drop test.unset2 m30001| Fri Feb 22 12:37:30.970 [conn180] build index test.unset2 { _id: 1 } m30001| Fri Feb 22 12:37:30.971 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:30.972 [conn1] DROP: test.unset2 m30001| Fri Feb 22 12:37:30.972 [conn180] CMD: drop test.unset2 m30001| Fri Feb 22 12:37:30.977 [conn180] build index test.unset2 { _id: 1 } m30001| Fri Feb 22 12:37:30.977 [conn180] build index done. 
scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:30.978 [conn1] DROP: test.unset2 m30001| Fri Feb 22 12:37:30.978 [conn180] CMD: drop test.unset2 m30001| Fri Feb 22 12:37:30.981 [conn180] build index test.unset2 { _id: 1 } m30001| Fri Feb 22 12:37:30.981 [conn180] build index done. scanned 0 total records. 0 secs 14ms ******************************************* Test : jstests/regex7.js ... m30999| Fri Feb 22 12:37:30.983 [conn1] DROP: test.regex_matches_self m30001| Fri Feb 22 12:37:30.983 [conn180] CMD: drop test.regex_matches_self m30001| Fri Feb 22 12:37:30.984 [conn180] build index test.regex_matches_self { _id: 1 } m30001| Fri Feb 22 12:37:30.984 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:30.987 [conn180] build index test.regex_matches_self { r: 1.0 } m30001| Fri Feb 22 12:37:30.987 [conn180] build index done. scanned 3 total records. 0 secs 8ms >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/geo_s2indexoldformat.js >>>>>>>>>>>>>>> skipping test that would correctly fail under sharding jstests/indexStatsCommand.js ******************************************* Test : jstests/index10.js ... m30999| Fri Feb 22 12:37:30.994 [conn1] DROP: test.jstests_index10 m30001| Fri Feb 22 12:37:30.994 [conn180] CMD: drop test.jstests_index10 m30001| Fri Feb 22 12:37:30.995 [conn180] build index test.jstests_index10 { _id: 1 } m30001| Fri Feb 22 12:37:30.995 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:30.996 [conn180] build index test.jstests_index10 { i: 1.0 } m30001| Fri Feb 22 12:37:30.996 [conn180] build index done. scanned 5 total records. 
0 secs m30001| Fri Feb 22 12:37:30.997 [conn9] CMD: dropIndexes test.jstests_index10 m30001| Fri Feb 22 12:37:31.001 [conn180] build index test.jstests_index10 { i: 1.0 } m30001| Fri Feb 22 12:37:31.005 [conn180] build index test.jstests_index10 { i: 1.0 } m30001| Fri Feb 22 12:37:31.005 [conn180] fastBuildIndex dupsToDrop:2 m30001| Fri Feb 22 12:37:31.005 [conn180] build index done. scanned 5 total records. 0 secs m30001| Fri Feb 22 12:37:31.007 [conn180] build index test.jstests_index10 { j: 1.0 } m30001| Fri Feb 22 12:37:31.007 [conn180] fastBuildIndex dupsToDrop:2 m30001| Fri Feb 22 12:37:31.007 [conn180] build index done. scanned 3 total records. 0 secs 18ms ******************************************* Test : jstests/uniqueness.js ... m30999| Fri Feb 22 12:37:31.009 [conn1] DROP: test.jstests_uniqueness m30001| Fri Feb 22 12:37:31.009 [conn180] CMD: drop test.jstests_uniqueness m30001| Fri Feb 22 12:37:31.010 [conn180] build index test.jstests_uniqueness { _id: 1 } m30001| Fri Feb 22 12:37:31.010 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:31.012 [conn1] DROP: test.jstests_uniqueness2 m30001| Fri Feb 22 12:37:31.012 [conn180] CMD: drop test.jstests_uniqueness2 m30001| Fri Feb 22 12:37:31.013 [conn180] build index test.jstests_uniqueness2 { _id: 1 } m30001| Fri Feb 22 12:37:31.013 [conn180] build index done. scanned 0 total records. 0 secs m30001| Fri Feb 22 12:37:31.014 [conn180] build index test.jstests_uniqueness2 { a: 1.0 } m30999| Fri Feb 22 12:37:31.017 [conn1] DROP: test.jstests_uniqueness2 m30001| Fri Feb 22 12:37:31.017 [conn180] CMD: drop test.jstests_uniqueness2 m30001| Fri Feb 22 12:37:31.021 [conn180] build index test.jstests_uniqueness2 { _id: 1 } m30001| Fri Feb 22 12:37:31.022 [conn180] build index done. scanned 0 total records. 
0 secs m30001| Fri Feb 22 12:37:31.023 [conn180] build index test.jstests_uniqueness2 { a: 1.0 } background m30001| Fri Feb 22 12:37:31.023 [conn180] background addExistingToIndex exception E11000 duplicate key error index: test.jstests_uniqueness2.$a_1 dup key: { : 3.0 } m30999| Fri Feb 22 12:37:31.025 [conn1] DROP: test.jstests_uniqueness m30001| Fri Feb 22 12:37:31.025 [conn180] CMD: drop test.jstests_uniqueness m30001| Fri Feb 22 12:37:31.029 [conn180] build index test.jstests_uniqueness { _id: 1 } m30001| Fri Feb 22 12:37:31.029 [conn180] build index done. scanned 0 total records. 0 secs m30999| Fri Feb 22 12:37:31.030 [conn1] DROP: test.jstests_uniqueness m30001| Fri Feb 22 12:37:31.030 [conn180] CMD: drop test.jstests_uniqueness m30001| Fri Feb 22 12:37:31.033 [conn180] build index test.jstests_uniqueness { _id: 1 } m30001| Fri Feb 22 12:37:31.033 [conn180] build index done. scanned 0 total records. 0 secs 26ms m30999| Fri Feb 22 12:37:31.034 [mongosMain] dbexit: received signal 15 rc:0 received signal 15 m30000| Fri Feb 22 12:37:31.035 [conn3] end connection 127.0.0.1:34908 (57 connections now open) m30000| Fri Feb 22 12:37:31.035 [conn5] end connection 127.0.0.1:54456 (57 connections now open) m30000| Fri Feb 22 12:37:31.035 [conn6] end connection 127.0.0.1:50119 (57 connections now open) m30001| Fri Feb 22 12:37:31.035 [conn4] end connection 127.0.0.1:40959 (37 connections now open) m30000| Fri Feb 22 12:37:31.035 [conn10] end connection 127.0.0.1:42520 (56 connections now open) m30000| Fri Feb 22 12:37:31.035 [conn20] end connection 127.0.0.1:51377 (55 connections now open) m30001| Fri Feb 22 12:37:31.035 [conn9] end connection 127.0.0.1:51687 (37 connections now open) m30001| Fri Feb 22 12:37:31.036 [conn168] end connection 127.0.0.1:38466 (35 connections now open) m30001| Fri Feb 22 12:37:31.036 [conn169] end connection 127.0.0.1:33498 (35 connections now open) m30001| Fri Feb 22 12:37:31.036 [conn175] end connection 127.0.0.1:54474 (35 connections now 
open)
m30001| Fri Feb 22 12:37:31.036 [conn174] end connection 127.0.0.1:62600 (35 connections now open)
m30000| Fri Feb 22 12:37:31.036 [conn23] end connection 127.0.0.1:40235 (53 connections now open)
m30000| Fri Feb 22 12:37:31.036 [conn25] end connection 127.0.0.1:43948 (53 connections now open)
m30001| Fri Feb 22 12:37:31.036 [conn178] end connection 127.0.0.1:52216 (34 connections now open)
m30000| Fri Feb 22 12:37:31.036 [conn24] end connection 127.0.0.1:37373 (53 connections now open)
m30000| Fri Feb 22 12:37:31.036 [conn120] end connection 127.0.0.1:57529 (53 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn179] end connection 127.0.0.1:32864 (33 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn183] end connection 127.0.0.1:40888 (31 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn201] end connection 127.0.0.1:63489 (31 connections now open)
m30000| Fri Feb 22 12:37:31.036 [conn30] end connection 127.0.0.1:63162 (52 connections now open)
m30000| Fri Feb 22 12:37:31.037 [conn35] end connection 127.0.0.1:55412 (52 connections now open)
m30000| Fri Feb 22 12:37:31.037 [conn33] end connection 127.0.0.1:56552 (53 connections now open)
m30000| Fri Feb 22 12:37:31.037 [conn29] end connection 127.0.0.1:35087 (53 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn195] end connection 127.0.0.1:38003 (31 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn200] end connection 127.0.0.1:46982 (31 connections now open)
m30000| Fri Feb 22 12:37:31.037 [conn40] end connection 127.0.0.1:64434 (50 connections now open)
m30000| Fri Feb 22 12:37:31.037 [conn38] end connection 127.0.0.1:51589 (50 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn205] end connection 127.0.0.1:43201 (31 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn180] end connection 127.0.0.1:49187 (30 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn185] end connection 127.0.0.1:54743 (29 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn206] end connection 127.0.0.1:63278 (28 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn209] end connection 127.0.0.1:34293 (28 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn231] end connection 127.0.0.1:59261 (28 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn215] end connection 127.0.0.1:49352 (28 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn246] end connection 127.0.0.1:57511 (28 connections now open)
m30001| Fri Feb 22 12:37:31.037 [conn184] end connection 127.0.0.1:37693 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn258] end connection 127.0.0.1:46308 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn262] end connection 127.0.0.1:58757 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn211] end connection 127.0.0.1:44576 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn253] end connection 127.0.0.1:43879 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn210] end connection 127.0.0.1:55251 (28 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn118] end connection 127.0.0.1:53392 (42 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn226] end connection 127.0.0.1:47644 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn252] end connection 127.0.0.1:53618 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn240] end connection 127.0.0.1:37789 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn236] end connection 127.0.0.1:54067 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn257] end connection 127.0.0.1:32875 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn221] end connection 127.0.0.1:55760 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn247] end connection 127.0.0.1:47882 (28 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn41] end connection 127.0.0.1:62483 (42 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn109] end connection 127.0.0.1:40482 (42 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn107] end connection 127.0.0.1:34412 (41 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn216] end connection 127.0.0.1:58381 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn235] end connection 127.0.0.1:35132 (28 connections now open)
m30001| Fri Feb 22 12:37:31.038 [conn230] end connection 127.0.0.1:51580 (26 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn103] end connection 127.0.0.1:37735 (40 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn91] end connection 127.0.0.1:55664 (40 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn90] end connection 127.0.0.1:52972 (40 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn82] end connection 127.0.0.1:58908 (39 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn96] end connection 127.0.0.1:62236 (38 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn85] end connection 127.0.0.1:37607 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn87] end connection 127.0.0.1:60455 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn81] end connection 127.0.0.1:63565 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn76] end connection 127.0.0.1:43988 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn77] end connection 127.0.0.1:64524 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn67] end connection 127.0.0.1:58389 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn65] end connection 127.0.0.1:57692 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn34] end connection 127.0.0.1:54126 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn70] end connection 127.0.0.1:33831 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn64] end connection 127.0.0.1:38415 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn39] end connection 127.0.0.1:53366 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn61] end connection 127.0.0.1:51416 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn71] end connection 127.0.0.1:53044 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn60] end connection 127.0.0.1:32922 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn117] end connection 127.0.0.1:42233 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn56] end connection 127.0.0.1:57718 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn62] end connection 127.0.0.1:48690 (37 connections now open)
m30000| Fri Feb 22 12:37:31.038 [conn112] end connection 127.0.0.1:33747 (40 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn51] end connection 127.0.0.1:55657 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn55] end connection 127.0.0.1:50996 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn108] end connection 127.0.0.1:59819 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn119] end connection 127.0.0.1:50519 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn86] end connection 127.0.0.1:54379 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn92] end connection 127.0.0.1:35537 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn72] end connection 127.0.0.1:61496 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn66] end connection 127.0.0.1:60158 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn102] end connection 127.0.0.1:37456 (37 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn113] end connection 127.0.0.1:55563 (41 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn50] end connection 127.0.0.1:55990 (37 connections now open)
m30000| Fri Feb 22 12:37:31.040 [conn95] end connection 127.0.0.1:43439 (12 connections now open)
m30000| Fri Feb 22 12:37:31.039 [conn101] end connection 127.0.0.1:56818 (37 connections now open)
Fri Feb 22 12:37:32.034 shell: stopped mongo program on port 30999
m30000| Fri Feb 22 12:37:32.034 got signal 15 (Terminated), will terminate after current cmd ends
m30000| Fri Feb 22 12:37:32.035 [interruptThread] now exiting
m30000| Fri Feb 22 12:37:32.035 dbexit:
m30000| Fri Feb 22 12:37:32.035 [interruptThread] shutdown: going to close listening sockets...
m30000| Fri Feb 22 12:37:32.035 [interruptThread] closing listening socket: 12
m30000| Fri Feb 22 12:37:32.035 [interruptThread] closing listening socket: 13
m30000| Fri Feb 22 12:37:32.035 [interruptThread] closing listening socket: 14
m30000| Fri Feb 22 12:37:32.035 [interruptThread] removing socket file: /tmp/mongodb-30000.sock
m30000| Fri Feb 22 12:37:32.035 [interruptThread] shutdown: going to flush diaglog...
m30000| Fri Feb 22 12:37:32.035 [interruptThread] shutdown: going to close sockets...
m30000| Fri Feb 22 12:37:32.035 [interruptThread] shutdown: waiting for fs preallocator...
m30000| Fri Feb 22 12:37:32.035 [interruptThread] shutdown: lock for final commit...
m30000| Fri Feb 22 12:37:32.035 [interruptThread] shutdown: final commit...
m30000| Fri Feb 22 12:37:32.035 [conn1] end connection 127.0.0.1:54949 (2 connections now open)
m30000| Fri Feb 22 12:37:32.035 [conn2] end connection 127.0.0.1:55009 (1 connection now open)
m30000| Fri Feb 22 12:37:32.118 [interruptThread] shutdown: closing all files...
m30000| Fri Feb 22 12:37:32.122 [interruptThread] closeAllFiles() finished
m30000| Fri Feb 22 12:37:32.122 [interruptThread] journalCleanup...
m30000| Fri Feb 22 12:37:32.122 [interruptThread] removeJournalFiles
m30000| Fri Feb 22 12:37:32.123 dbexit: really exiting now
Fri Feb 22 12:37:33.034 shell: stopped mongo program on port 30000
Fri Feb 22 12:37:33.383 [conn24] end connection 127.0.0.1:52906 (0 connections now open)
m30001| Fri Feb 22 12:37:33.035 got signal 15 (Terminated), will terminate after current cmd ends
m30001| Fri Feb 22 12:37:33.035 [interruptThread] now exiting
m30001| Fri Feb 22 12:37:33.035 dbexit:
m30001| Fri Feb 22 12:37:33.035 [interruptThread] shutdown: going to close listening sockets...
m30001| Fri Feb 22 12:37:33.035 [interruptThread] closing listening socket: 15
m30001| Fri Feb 22 12:37:33.035 [interruptThread] closing listening socket: 16
m30001| Fri Feb 22 12:37:33.035 [interruptThread] closing listening socket: 17
m30001| Fri Feb 22 12:37:33.035 [interruptThread] removing socket file: /tmp/mongodb-30001.sock
m30001| Fri Feb 22 12:37:33.035 [interruptThread] shutdown: going to flush diaglog...
m30001| Fri Feb 22 12:37:33.035 [interruptThread] shutdown: going to close sockets...
m30001| Fri Feb 22 12:37:33.035 [interruptThread] shutdown: waiting for fs preallocator...
m30001| Fri Feb 22 12:37:33.035 [interruptThread] shutdown: lock for final commit...
m30001| Fri Feb 22 12:37:33.035 [interruptThread] shutdown: final commit...
m30001| Fri Feb 22 12:37:33.035 [conn1] end connection 127.0.0.1:46171 (1 connection now open)
m30001| Fri Feb 22 12:37:33.272 [interruptThread] shutdown: closing all files...
m30001| Fri Feb 22 12:37:33.379 [interruptThread] closeAllFiles() finished
m30001| Fri Feb 22 12:37:33.379 [interruptThread] journalCleanup...
m30001| Fri Feb 22 12:37:33.379 [interruptThread] removeJournalFiles
m30001| Fri Feb 22 12:37:33.382 dbexit: really exiting now
Fri Feb 22 12:37:34.035 shell: stopped mongo program on port 30001
*** ShardingTest sharding_passthrough completed successfully in 251.066 seconds ***
total runner time: 250.42secs 4.1898 minutes
Fri Feb 22 12:37:34.328 [initandlisten] connection accepted from 127.0.0.1:63668 #25 (1 connection now open)
Fri Feb 22 12:37:34.329 [conn25] end connection 127.0.0.1:63668 (0 connections now open)
*******************************************
Test : sharding_rs1.js ...
Command : /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongo --port 27999 --authenticationMechanism MONGODB-CR /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js --eval TestData = new Object();TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js";TestData.testFile = "sharding_rs1.js";TestData.testName = "sharding_rs1";TestData.noJournal = false;TestData.noJournalPrealloc = false;TestData.auth = false;TestData.keyFile = null;TestData.keyFileData = null;
Date : Fri Feb 22 12:37:34 2013
buildlogger: could not find or import buildbot.tac for authentication
MongoDB shell version: 2.4.0-rc1-pre-
connecting to: 127.0.0.1:27999/test
Fri Feb 22 12:37:34.458 [initandlisten] connection accepted from 127.0.0.1:41731 #26 (1 connection now open)
null
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31100, 31101, 31102 ] 31100 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31100,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "rs1-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "rs1",
        "shard" : 0,
        "node" : 0,
        "set" : "rs1-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
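The test harness passes per-test state to the mongo shell as a single `TestData` object built into the `--eval` string of the Command line above. A minimal stand-alone sketch of that object in plain JavaScript (field names and values copied from the logged invocation; the construction itself runs in any JS engine, not just the mongo shell):

```javascript
// Sketch of the TestData object that smoke.py injects via --eval,
// with values taken verbatim from the Command line logged above.
var TestData = new Object();
TestData.testPath = "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js";
TestData.testFile = "sharding_rs1.js";
TestData.testName = "sharding_rs1";
TestData.noJournal = false;          // journaling left enabled for this run
TestData.noJournalPrealloc = false;
TestData.auth = false;               // no auth: keyFile/keyFileData stay null
TestData.keyFile = null;
TestData.keyFileData = null;
```

The test file itself (sharding_rs1.js) can then read these globals to decide, for example, whether to start its mongods with a key file.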
Resetting db path '/data/db/rs1-rs0-0'
Fri Feb 22 12:37:34.474 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31100 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-0 --setParameter enableTestCommands=1
m31100| note: noprealloc may hurt performance in many applications
m31100| Fri Feb 22 12:37:34.572 [initandlisten] MongoDB starting : pid=16102 port=31100 dbpath=/data/db/rs1-rs0-0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31100| Fri Feb 22 12:37:34.573 [initandlisten]
m31100| Fri Feb 22 12:37:34.573 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31100| Fri Feb 22 12:37:34.573 [initandlisten] ** uses to detect impending page faults.
m31100| Fri Feb 22 12:37:34.573 [initandlisten] ** This may result in slower performance for certain use cases
m31100| Fri Feb 22 12:37:34.573 [initandlisten]
m31100| Fri Feb 22 12:37:34.573 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31100| Fri Feb 22 12:37:34.573 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31100| Fri Feb 22 12:37:34.573 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31100| Fri Feb 22 12:37:34.573 [initandlisten] allocator: system
m31100| Fri Feb 22 12:37:34.573 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-0", noprealloc: true, oplogSize: 40, port: 31100, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31100| Fri Feb 22 12:37:34.573 [initandlisten] journal dir=/data/db/rs1-rs0-0/journal
m31100| Fri Feb 22 12:37:34.573 [initandlisten] recover : no journal files present, no recovery needed
m31100| Fri Feb 22 12:37:34.589 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.ns, filling with zeroes...
m31100| Fri Feb 22 12:37:34.589 [FileAllocator] creating directory /data/db/rs1-rs0-0/_tmp
m31100| Fri Feb 22 12:37:34.589 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:37:34.590 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.0, filling with zeroes...
m31100| Fri Feb 22 12:37:34.590 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:37:34.593 [initandlisten] waiting for connections on port 31100
m31100| Fri Feb 22 12:37:34.593 [websvr] admin web console waiting for connections on port 32100
m31100| Fri Feb 22 12:37:34.595 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Fri Feb 22 12:37:34.595 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31100| Fri Feb 22 12:37:34.677 [initandlisten] connection accepted from 127.0.0.1:60748 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31100 ]
ReplSetTest n is : 1
ReplSetTest n: 1 ports: [ 31100, 31101, 31102 ] 31101 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31101,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "rs1-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "rs1",
        "shard" : 0,
        "node" : 1,
        "set" : "rs1-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
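In the ReplSetTest options above, `"dbpath" : "$set-$node"` is a template: the `$`-placeholders are filled in from `pathOpts` before the mongod is launched, which is why the log shows the concrete path `/data/db/rs1-rs0-0` for set `rs1-rs0`, node 0. A minimal stand-alone sketch of that substitution (the `fillPath` helper is hypothetical, not the harness's actual function):

```javascript
// Hypothetical sketch of expanding a "$set-$node" dbpath template
// against pathOpts, as seen in the ReplSetTest options logged above.
function fillPath(template, pathOpts) {
  // Replace each $key with the corresponding pathOpts value, if present.
  return template.replace(/\$(\w+)/g, function (match, key) {
    return key in pathOpts ? String(pathOpts[key]) : match;
  });
}

var pathOpts = { testName: "rs1", shard: 0, node: 0, set: "rs1-rs0" };
var dbpath = "/data/db/" + fillPath("$set-$node", pathOpts);
// dbpath === "/data/db/rs1-rs0-0", matching the "Resetting db path" line above
```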
Resetting db path '/data/db/rs1-rs0-1'
Fri Feb 22 12:37:34.684 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31101 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-1 --setParameter enableTestCommands=1
m31101| note: noprealloc may hurt performance in many applications
m31101| Fri Feb 22 12:37:34.755 [initandlisten] MongoDB starting : pid=16103 port=31101 dbpath=/data/db/rs1-rs0-1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31101| Fri Feb 22 12:37:34.756 [initandlisten]
m31101| Fri Feb 22 12:37:34.756 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31101| Fri Feb 22 12:37:34.756 [initandlisten] ** uses to detect impending page faults.
m31101| Fri Feb 22 12:37:34.756 [initandlisten] ** This may result in slower performance for certain use cases
m31101| Fri Feb 22 12:37:34.756 [initandlisten]
m31101| Fri Feb 22 12:37:34.756 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31101| Fri Feb 22 12:37:34.756 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31101| Fri Feb 22 12:37:34.756 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31101| Fri Feb 22 12:37:34.756 [initandlisten] allocator: system
m31101| Fri Feb 22 12:37:34.756 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-1", noprealloc: true, oplogSize: 40, port: 31101, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31101| Fri Feb 22 12:37:34.756 [initandlisten] journal dir=/data/db/rs1-rs0-1/journal
m31101| Fri Feb 22 12:37:34.756 [initandlisten] recover : no journal files present, no recovery needed
m31101| Fri Feb 22 12:37:34.769 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.ns, filling with zeroes...
m31101| Fri Feb 22 12:37:34.769 [FileAllocator] creating directory /data/db/rs1-rs0-1/_tmp
m31101| Fri Feb 22 12:37:34.769 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 12:37:34.769 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.0, filling with zeroes...
m31101| Fri Feb 22 12:37:34.770 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.0, size: 16MB, took 0 secs
m31101| Fri Feb 22 12:37:34.772 [websvr] admin web console waiting for connections on port 32101
m31101| Fri Feb 22 12:37:34.772 [initandlisten] waiting for connections on port 31101
m31101| Fri Feb 22 12:37:34.775 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31101| Fri Feb 22 12:37:34.775 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31101| Fri Feb 22 12:37:34.885 [initandlisten] connection accepted from 127.0.0.1:49315 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101 ]
ReplSetTest n is : 2
ReplSetTest n: 2 ports: [ 31100, 31101, 31102 ] 31102 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31102,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "rs1-rs0",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "rs1",
        "shard" : 0,
        "node" : 2,
        "set" : "rs1-rs0"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs0-2'
Fri Feb 22 12:37:34.890 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31102 --noprealloc --smallfiles --rest --replSet rs1-rs0 --dbpath /data/db/rs1-rs0-2 --setParameter enableTestCommands=1
m31102| note: noprealloc may hurt performance in many applications
m31102| Fri Feb 22 12:37:34.962 [initandlisten] MongoDB starting : pid=16104 port=31102 dbpath=/data/db/rs1-rs0-2 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31102| Fri Feb 22 12:37:34.963 [initandlisten]
m31102| Fri Feb 22 12:37:34.963 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31102| Fri Feb 22 12:37:34.963 [initandlisten] ** uses to detect impending page faults.
m31102| Fri Feb 22 12:37:34.963 [initandlisten] ** This may result in slower performance for certain use cases
m31102| Fri Feb 22 12:37:34.963 [initandlisten]
m31102| Fri Feb 22 12:37:34.963 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31102| Fri Feb 22 12:37:34.963 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31102| Fri Feb 22 12:37:34.963 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31102| Fri Feb 22 12:37:34.963 [initandlisten] allocator: system
m31102| Fri Feb 22 12:37:34.963 [initandlisten] options: { dbpath: "/data/db/rs1-rs0-2", noprealloc: true, oplogSize: 40, port: 31102, replSet: "rs1-rs0", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31102| Fri Feb 22 12:37:34.963 [initandlisten] journal dir=/data/db/rs1-rs0-2/journal
m31102| Fri Feb 22 12:37:34.963 [initandlisten] recover : no journal files present, no recovery needed
m31102| Fri Feb 22 12:37:34.976 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/local.ns, filling with zeroes...
m31102| Fri Feb 22 12:37:34.976 [FileAllocator] creating directory /data/db/rs1-rs0-2/_tmp
m31102| Fri Feb 22 12:37:34.976 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/local.ns, size: 16MB, took 0 secs
m31102| Fri Feb 22 12:37:34.976 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/local.0, filling with zeroes...
m31102| Fri Feb 22 12:37:34.976 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/local.0, size: 16MB, took 0 secs
m31102| Fri Feb 22 12:37:34.979 [initandlisten] waiting for connections on port 31102
m31102| Fri Feb 22 12:37:34.979 [websvr] admin web console waiting for connections on port 32102
m31102| Fri Feb 22 12:37:34.981 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31102| Fri Feb 22 12:37:34.982 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
m31102| Fri Feb 22 12:37:35.092 [initandlisten] connection accepted from 127.0.0.1:50126 #1 (1 connection now open)
[ connection to bs-smartos-x86-64-1.10gen.cc:31100, connection to bs-smartos-x86-64-1.10gen.cc:31101, connection to bs-smartos-x86-64-1.10gen.cc:31102 ]
{
    "replSetInitiate" : {
        "_id" : "rs1-rs0",
        "members" : [
            {
                "_id" : 0,
                "host" : "bs-smartos-x86-64-1.10gen.cc:31100"
            },
            {
                "_id" : 1,
                "host" : "bs-smartos-x86-64-1.10gen.cc:31101"
            },
            {
                "_id" : 2,
                "host" : "bs-smartos-x86-64-1.10gen.cc:31102"
            }
        ]
    }
}
m31100| Fri Feb 22 12:37:35.095 [conn1] replSet replSetInitiate admin command received from client
m31100| Fri Feb 22 12:37:35.098 [conn1] replSet replSetInitiate config object parses ok, 3 members specified
m31100| Fri Feb 22 12:37:35.099 [initandlisten] connection accepted from 165.225.128.186:50614 #2 (2 connections now open)
m31101| Fri Feb 22 12:37:35.099 [initandlisten] connection accepted from 165.225.128.186:48324 #2 (2 connections now open)
m31102| Fri Feb 22 12:37:35.101 [initandlisten] connection accepted from 165.225.128.186:45114 #2 (2 connections now open)
m31100| Fri Feb 22 12:37:35.102 [conn1] replSet replSetInitiate all members seem up
m31100| Fri Feb 22 12:37:35.102 [conn1] ******
m31100| Fri Feb 22 12:37:35.102 [conn1] creating replication oplog of size: 40MB...
m31100| Fri Feb 22 12:37:35.102 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/local.1, filling with zeroes...
m31100| Fri Feb 22 12:37:35.102 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/local.1, size: 64MB, took 0 secs
m31100| Fri Feb 22 12:37:35.114 [conn1] ******
m31100| Fri Feb 22 12:37:35.114 [conn1] replSet info saving a newer config version to local.system.replset
m31100| Fri Feb 22 12:37:35.130 [conn2] end connection 165.225.128.186:50614 (1 connection now open)
m31100| Fri Feb 22 12:37:35.130 [conn1] replSet saveConfigLocally done
m31100| Fri Feb 22 12:37:35.130 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute.
{
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}
m31100| Fri Feb 22 12:37:44.596 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:37:44.596 [rsStart] replSet STARTUP2
m31100| Fri Feb 22 12:37:44.596 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up
m31101| Fri Feb 22 12:37:44.776 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:37:44.777 [initandlisten] connection accepted from 165.225.128.186:44194 #3 (2 connections now open)
m31101| Fri Feb 22 12:37:44.778 [initandlisten] connection accepted from 165.225.128.186:44104 #3 (3 connections now open)
m31101| Fri Feb 22 12:37:44.778 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 12:37:44.778 [rsStart] replSet got config version 1 from a remote, saving locally
m31101| Fri Feb 22 12:37:44.778 [rsStart] replSet info saving a newer config version to local.system.replset
m31101| Fri Feb 22 12:37:44.781 [rsStart] replSet saveConfigLocally done
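The config document shown above is exactly what the shell sends to the first member as the `replSetInitiate` admin command (`rs.initiate(cfg)` wraps the same document). A minimal stand-alone sketch of building that document for the three ports used here (host name taken from the log; sending it would require a live shell connection, noted only in the comment):

```javascript
// Build the replSetInitiate config for the three rs1-rs0 members,
// mirroring the document logged above.
var host = "bs-smartos-x86-64-1.10gen.cc";
var ports = [31100, 31101, 31102];

var config = {
  _id: "rs1-rs0",                       // must match each mongod's --replSet name
  members: ports.map(function (port, i) {
    return { _id: i, host: host + ":" + port };
  })
};

// In a live mongo shell this would be submitted as:
//   db.adminCommand({ replSetInitiate: config })   // or rs.initiate(config)
```

Only one member receives the command; as the log shows, the others pull config version 1 from a remote and save it locally.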
m31101| Fri Feb 22 12:37:44.781 [rsStart] replSet STARTUP2
m31101| Fri Feb 22 12:37:44.782 [rsSync] ******
m31101| Fri Feb 22 12:37:44.782 [rsSync] creating replication oplog of size: 40MB...
m31101| Fri Feb 22 12:37:44.782 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/local.1, filling with zeroes...
m31101| Fri Feb 22 12:37:44.782 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/local.1, size: 64MB, took 0 secs
m31101| Fri Feb 22 12:37:44.793 [rsSync] ******
m31101| Fri Feb 22 12:37:44.793 [rsSync] replSet initial sync pending
m31101| Fri Feb 22 12:37:44.793 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31101| Fri Feb 22 12:37:44.795 [conn3] end connection 165.225.128.186:44104 (2 connections now open)
m31102| Fri Feb 22 12:37:44.982 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
m31100| Fri Feb 22 12:37:45.597 [rsSync] replSet SECONDARY
m31100| Fri Feb 22 12:37:46.596 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up
m31100| Fri Feb 22 12:37:46.596 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31101 thinks that we are down
m31100| Fri Feb 22 12:37:46.596 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2
m31100| Fri Feb 22 12:37:46.597 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable'
m31100| Fri Feb 22 12:37:46.597 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31101 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable'
m31101| Fri Feb 22 12:37:46.778 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up
m31101| Fri Feb 22 12:37:46.778 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY
m31102| Fri Feb 22 12:37:46.779 [initandlisten] connection accepted from 165.225.128.186:64771 #3 (3 connections now open)
m31101| Fri Feb 22 12:37:46.779 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is up
m31100| Fri Feb 22 12:37:52.597 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31102| Fri Feb 22 12:37:54.982 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:37:54.983 [initandlisten] connection accepted from 165.225.128.186:61142 #4 (3 connections now open)
m31102| Fri Feb 22 12:37:54.984 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31101
m31101| Fri Feb 22 12:37:54.984 [initandlisten] connection accepted from 165.225.128.186:55119 #4 (3 connections now open)
m31102| Fri Feb 22 12:37:54.985 [initandlisten] connection accepted from 165.225.128.186:45391 #4 (4 connections now open)
m31102| Fri Feb 22 12:37:54.985 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31102
m31102| Fri Feb 22 12:37:54.985 [rsStart] replSet got config version 1 from a remote, saving locally
m31102| Fri Feb 22 12:37:54.985 [rsStart] replSet info saving a newer config version to local.system.replset
m31102| Fri Feb 22 12:37:54.990 [rsStart] replSet saveConfigLocally done
m31102| Fri Feb 22 12:37:54.991 [rsStart] replSet STARTUP2
m31102| Fri Feb 22 12:37:54.991 [rsSync] ******
m31102| Fri Feb 22 12:37:54.991 [rsSync] creating replication oplog of size: 40MB...
m31102| Fri Feb 22 12:37:54.992 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/local.1, filling with zeroes...
m31102| Fri Feb 22 12:37:54.992 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/local.1, size: 64MB, took 0 secs
m31102| Fri Feb 22 12:37:55.005 [conn4] end connection 165.225.128.186:45391 (3 connections now open)
m31102| Fri Feb 22 12:37:55.005 [rsSync] ******
m31102| Fri Feb 22 12:37:55.005 [rsSync] replSet initial sync pending
m31102| Fri Feb 22 12:37:55.005 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync
m31100| Fri Feb 22 12:37:56.598 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down
m31100| Fri Feb 22 12:37:56.598 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2
m31100| Fri Feb 22 12:37:56.598 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31100 is electable'
m31101| Fri Feb 22 12:37:56.780 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31102 thinks that we are down
m31101| Fri Feb 22 12:37:56.780 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state STARTUP2
m31102| Fri Feb 22 12:37:56.985 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is up
m31102| Fri Feb 22 12:37:56.985 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is up
m31102| Fri Feb 22 12:37:56.985 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state SECONDARY
m31102| Fri Feb 22 12:37:56.985 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state STARTUP2
m31101| Fri Feb 22 12:37:58.598 [conn2] end connection 165.225.128.186:48324 (2 connections now open)
m31101| Fri Feb 22 12:37:58.598 [initandlisten] connection accepted from 165.225.128.186:35297 #5 (3 connections now open)
m31100| Fri Feb 22 12:38:00.780 [conn3] end connection 165.225.128.186:44194 (2 connections now open)
m31100| Fri Feb 22 12:38:00.781 [initandlisten] connection accepted from 165.225.128.186:60318 #5 (3 connections now open)
m31101| Fri Feb 22 12:38:00.793 [rsSync] replSet initial sync pending
m31101| Fri Feb 22 12:38:00.793 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:38:00.794 [initandlisten] connection accepted from 165.225.128.186:33321 #6 (4 connections now open)
m31101| Fri Feb 22 12:38:00.802 [rsSync] build index local.me { _id: 1 }
m31101| Fri Feb 22 12:38:00.806 [rsSync] build index done. scanned 0 total records. 0.003 secs
m31101| Fri Feb 22 12:38:00.808 [rsSync] build index local.replset.minvalid { _id: 1 }
m31101| Fri Feb 22 12:38:00.809 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31101| Fri Feb 22 12:38:00.809 [rsSync] replSet initial sync drop all databases
m31101| Fri Feb 22 12:38:00.809 [rsSync] dropAllDatabasesExceptLocal 1
m31101| Fri Feb 22 12:38:00.809 [rsSync] replSet initial sync clone all databases
m31101| Fri Feb 22 12:38:00.810 [rsSync] replSet initial sync data copy, starting syncup
m31101| Fri Feb 22 12:38:00.810 [rsSync] oplog sync 1 of 3
m31101| Fri Feb 22 12:38:00.810 [rsSync] oplog sync 2 of 3
m31101| Fri Feb 22 12:38:00.810 [rsSync] replSet initial sync building indexes
m31101| Fri Feb 22 12:38:00.810 [rsSync] oplog sync 3 of 3
m31101| Fri Feb 22 12:38:00.810 [rsSync] replSet initial sync finishing up
m31101| Fri Feb 22 12:38:00.819 [rsSync] replSet set minValid=5127668f:1
m31101| Fri Feb 22 12:38:00.826 [rsSync] replSet RECOVERING
m31101| Fri Feb 22 12:38:00.826 [rsSync] replSet initial sync done
m31100| Fri Feb 22 12:38:00.826 [conn6] end connection 165.225.128.186:33321 (3 connections now open)
m31102| Fri Feb 22 12:38:00.986 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING
m31100| Fri Feb 22 12:38:02.599 [rsMgr] replSet info electSelf 0
m31100| Fri Feb 22 12:38:02.599 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state RECOVERING
m31102| Fri Feb 22 12:38:02.599 [conn2] replSet RECOVERING
m31102| Fri Feb 22 12:38:02.599 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0)
m31101| Fri Feb 22 12:38:02.599 [conn5] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31100 (0)
m31101| Fri Feb 22 12:38:02.781 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING
m31101| Fri Feb 22 12:38:02.826 [rsSync] replSet SECONDARY
m31102| Fri Feb 22 12:38:02.986 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY
m31100| Fri Feb 22 12:38:03.598 [rsMgr] replSet PRIMARY
m31100| Fri Feb 22 12:38:04.599 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state RECOVERING
m31100| Fri Feb 22 12:38:04.599 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31101 is now in state SECONDARY
m31101| Fri Feb 22 12:38:04.781 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY
m31101| Fri Feb 22 12:38:04.783 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:38:04.783 [initandlisten] connection accepted from 165.225.128.186:37826 #7 (4 connections now open)
m31101| Fri Feb 22 12:38:04.826 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:38:04.827 [initandlisten] connection accepted from 165.225.128.186:55112 #8 (5 connections now open)
m31102| Fri Feb 22 12:38:04.987 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state PRIMARY
m31100| Fri Feb 22 12:38:05.835 [slaveTracking] build index local.slaves { _id: 1 }
m31100| Fri Feb 22 12:38:05.838 [slaveTracking] build index done. scanned 0 total records. 0.002 secs
m31100| Fri Feb 22 12:38:10.987 [conn4] end connection 165.225.128.186:61142 (4 connections now open)
m31100| Fri Feb 22 12:38:10.988 [initandlisten] connection accepted from 165.225.128.186:40740 #9 (5 connections now open)
m31102| Fri Feb 22 12:38:11.006 [rsSync] replSet initial sync pending
m31102| Fri Feb 22 12:38:11.006 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:38:11.006 [initandlisten] connection accepted from 165.225.128.186:61669 #10 (6 connections now open)
m31102| Fri Feb 22 12:38:11.014 [rsSync] build index local.me { _id: 1 }
m31102| Fri Feb 22 12:38:11.018 [rsSync] build index done. scanned 0 total records. 0.003 secs
m31102| Fri Feb 22 12:38:11.020 [rsSync] build index local.replset.minvalid { _id: 1 }
m31102| Fri Feb 22 12:38:11.021 [rsSync] build index done. scanned 0 total records. 0.001 secs
m31102| Fri Feb 22 12:38:11.021 [rsSync] replSet initial sync drop all databases
m31102| Fri Feb 22 12:38:11.021 [rsSync] dropAllDatabasesExceptLocal 1
m31102| Fri Feb 22 12:38:11.021 [rsSync] replSet initial sync clone all databases
m31102| Fri Feb 22 12:38:11.022 [rsSync] replSet initial sync data copy, starting syncup
m31102| Fri Feb 22 12:38:11.022 [rsSync] oplog sync 1 of 3
m31102| Fri Feb 22 12:38:11.022 [rsSync] oplog sync 2 of 3
m31102| Fri Feb 22 12:38:11.022 [rsSync] replSet initial sync building indexes
m31102| Fri Feb 22 12:38:11.022 [rsSync] oplog sync 3 of 3
m31102| Fri Feb 22 12:38:11.022 [rsSync] replSet initial sync finishing up
m31102| Fri Feb 22 12:38:11.032 [rsSync] replSet set minValid=5127668f:1
m31102| Fri Feb 22 12:38:11.041 [rsSync] replSet initial sync done
m31100| Fri Feb 22 12:38:11.041 [conn10] end connection 165.225.128.186:61669 (5 connections now open)
m31102| Fri Feb 22 12:38:11.992 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:38:11.993 [initandlisten] connection accepted from 165.225.128.186:50557 #11 (6 connections now open)
m31102| Fri Feb 22 12:38:12.042 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:38:12.043 [initandlisten] connection accepted from 165.225.128.186:61272 #12 (7 connections now open)
m31102| Fri Feb 22 12:38:13.041 [rsSync] replSet SECONDARY
Replica set test!
ReplSetTest Starting Set
ReplSetTest n is : 0
ReplSetTest n: 0 ports: [ 31200, 31201, 31202 ] 31200 number
{
    "useHostName" : true,
    "oplogSize" : 40,
    "keyFile" : undefined,
    "port" : 31200,
    "noprealloc" : "",
    "smallfiles" : "",
    "rest" : "",
    "replSet" : "rs1-rs1",
    "dbpath" : "$set-$node",
    "useHostname" : true,
    "noJournalPrealloc" : undefined,
    "pathOpts" : {
        "testName" : "rs1",
        "shard" : 1,
        "node" : 0,
        "set" : "rs1-rs1"
    },
    "restart" : undefined
}
ReplSetTest Starting....
Resetting db path '/data/db/rs1-rs1-0'
Fri Feb 22 12:38:13.247 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31200 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-0 --setParameter enableTestCommands=1
m31200| note: noprealloc may hurt performance in many applications
m31200| Fri Feb 22 12:38:13.341 [initandlisten] MongoDB starting : pid=16207 port=31200 dbpath=/data/db/rs1-rs1-0 64-bit host=bs-smartos-x86-64-1.10gen.cc
m31200| Fri Feb 22 12:38:13.341 [initandlisten]
m31200| Fri Feb 22 12:38:13.341 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m31200| Fri Feb 22 12:38:13.341 [initandlisten] ** uses to detect impending page faults.
m31200| Fri Feb 22 12:38:13.341 [initandlisten] ** This may result in slower performance for certain use cases
m31200| Fri Feb 22 12:38:13.341 [initandlisten]
m31200| Fri Feb 22 12:38:13.342 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m31200| Fri Feb 22 12:38:13.342 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m31200| Fri Feb 22 12:38:13.342 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m31200| Fri Feb 22 12:38:13.342 [initandlisten] allocator: system
m31200| Fri Feb 22 12:38:13.342 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-0", noprealloc: true, oplogSize: 40, port: 31200, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true }
m31200| Fri Feb 22 12:38:13.342 [initandlisten] journal dir=/data/db/rs1-rs1-0/journal
m31200| Fri Feb 22 12:38:13.342 [initandlisten] recover : no journal files present, no recovery needed
m31200| Fri Feb 22 12:38:13.362 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.ns, filling with zeroes...
m31200| Fri Feb 22 12:38:13.362 [FileAllocator] creating directory /data/db/rs1-rs1-0/_tmp
m31200| Fri Feb 22 12:38:13.362 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 12:38:13.362 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.0, filling with zeroes...
m31200| Fri Feb 22 12:38:13.362 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.0, size: 16MB, took 0 secs m31200| Fri Feb 22 12:38:13.365 [initandlisten] waiting for connections on port 31200 m31200| Fri Feb 22 12:38:13.365 [websvr] admin web console waiting for connections on port 32200 m31200| Fri Feb 22 12:38:13.368 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31200| Fri Feb 22 12:38:13.368 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31200| Fri Feb 22 12:38:13.448 [initandlisten] connection accepted from 127.0.0.1:35975 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31200 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31200, 31201, 31202 ] 31201 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31201, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "rs1", "shard" : 1, "node" : 1, "set" : "rs1-rs1" }, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/rs1-rs1-1' Fri Feb 22 12:38:13.452 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31201 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-1 --setParameter enableTestCommands=1 m31201| note: noprealloc may hurt performance in many applications m31201| Fri Feb 22 12:38:13.522 [initandlisten] MongoDB starting : pid=16208 port=31201 dbpath=/data/db/rs1-rs1-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31201| Fri Feb 22 12:38:13.522 [initandlisten] m31201| Fri Feb 22 12:38:13.522 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31201| Fri Feb 22 12:38:13.522 [initandlisten] ** uses to detect impending page faults. 
m31201| Fri Feb 22 12:38:13.522 [initandlisten] ** This may result in slower performance for certain use cases m31201| Fri Feb 22 12:38:13.522 [initandlisten] m31201| Fri Feb 22 12:38:13.522 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31201| Fri Feb 22 12:38:13.522 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31201| Fri Feb 22 12:38:13.522 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31201| Fri Feb 22 12:38:13.522 [initandlisten] allocator: system m31201| Fri Feb 22 12:38:13.522 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-1", noprealloc: true, oplogSize: 40, port: 31201, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31201| Fri Feb 22 12:38:13.523 [initandlisten] journal dir=/data/db/rs1-rs1-1/journal m31201| Fri Feb 22 12:38:13.523 [initandlisten] recover : no journal files present, no recovery needed m31201| Fri Feb 22 12:38:13.535 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.ns, filling with zeroes... m31201| Fri Feb 22 12:38:13.535 [FileAllocator] creating directory /data/db/rs1-rs1-1/_tmp m31201| Fri Feb 22 12:38:13.536 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.ns, size: 16MB, took 0 secs m31201| Fri Feb 22 12:38:13.536 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.0, filling with zeroes... 
m31201| Fri Feb 22 12:38:13.536 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.0, size: 16MB, took 0 secs m31201| Fri Feb 22 12:38:13.538 [initandlisten] waiting for connections on port 31201 m31201| Fri Feb 22 12:38:13.539 [websvr] admin web console waiting for connections on port 32201 m31201| Fri Feb 22 12:38:13.541 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31201| Fri Feb 22 12:38:13.541 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31201| Fri Feb 22 12:38:13.653 [initandlisten] connection accepted from 127.0.0.1:37141 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31200, connection to bs-smartos-x86-64-1.10gen.cc:31201 ] ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31200, 31201, 31202 ] 31202 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31202, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs1", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "rs1", "shard" : 1, "node" : 2, "set" : "rs1-rs1" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/rs1-rs1-2' Fri Feb 22 12:38:13.656 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31202 --noprealloc --smallfiles --rest --replSet rs1-rs1 --dbpath /data/db/rs1-rs1-2 --setParameter enableTestCommands=1 m31202| note: noprealloc may hurt performance in many applications m31202| Fri Feb 22 12:38:13.745 [initandlisten] MongoDB starting : pid=16209 port=31202 dbpath=/data/db/rs1-rs1-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31202| Fri Feb 22 12:38:13.745 [initandlisten] m31202| Fri Feb 22 12:38:13.745 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31202| Fri Feb 22 12:38:13.745 [initandlisten] ** uses to detect impending page faults. m31202| Fri Feb 22 12:38:13.745 [initandlisten] ** This may result in slower performance for certain use cases m31202| Fri Feb 22 12:38:13.745 [initandlisten] m31202| Fri Feb 22 12:38:13.745 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31202| Fri Feb 22 12:38:13.745 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31202| Fri Feb 22 12:38:13.745 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31202| Fri Feb 22 12:38:13.745 [initandlisten] allocator: system m31202| Fri Feb 22 12:38:13.745 [initandlisten] options: { dbpath: "/data/db/rs1-rs1-2", noprealloc: true, oplogSize: 40, port: 31202, replSet: "rs1-rs1", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31202| Fri Feb 22 12:38:13.746 [initandlisten] journal dir=/data/db/rs1-rs1-2/journal m31202| Fri Feb 22 12:38:13.746 [initandlisten] recover : no journal files present, no recovery needed m31202| Fri Feb 22 12:38:13.760 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/local.ns, filling with zeroes... 
m31202| Fri Feb 22 12:38:13.760 [FileAllocator] creating directory /data/db/rs1-rs1-2/_tmp m31202| Fri Feb 22 12:38:13.760 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/local.ns, size: 16MB, took 0 secs m31202| Fri Feb 22 12:38:13.760 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/local.0, filling with zeroes... m31202| Fri Feb 22 12:38:13.760 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/local.0, size: 16MB, took 0 secs m31202| Fri Feb 22 12:38:13.763 [initandlisten] waiting for connections on port 31202 m31202| Fri Feb 22 12:38:13.763 [websvr] admin web console waiting for connections on port 32202 m31202| Fri Feb 22 12:38:13.766 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31202| Fri Feb 22 12:38:13.766 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31202| Fri Feb 22 12:38:13.857 [initandlisten] connection accepted from 127.0.0.1:59280 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31200, connection to bs-smartos-x86-64-1.10gen.cc:31201, connection to bs-smartos-x86-64-1.10gen.cc:31202 ] { "replSetInitiate" : { "_id" : "rs1-rs1", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31200" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31201" }, { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31202" } ] } } m31200| Fri Feb 22 12:38:13.859 [conn1] replSet replSetInitiate admin command received from client m31200| Fri Feb 22 12:38:13.860 [conn1] replSet replSetInitiate config object parses ok, 3 members specified m31200| Fri Feb 22 12:38:13.861 [initandlisten] connection accepted from 165.225.128.186:53825 #2 (2 connections now open) m31201| Fri Feb 22 12:38:13.861 [initandlisten] connection accepted from 165.225.128.186:36605 #2 (2 connections now open) m31202| Fri Feb 22 12:38:13.863 [initandlisten] connection accepted from 165.225.128.186:43020 #2 (2 
connections now open) m31200| Fri Feb 22 12:38:13.864 [conn1] replSet replSetInitiate all members seem up m31200| Fri Feb 22 12:38:13.864 [conn1] ****** m31200| Fri Feb 22 12:38:13.864 [conn1] creating replication oplog of size: 40MB... m31200| Fri Feb 22 12:38:13.864 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/local.1, filling with zeroes... m31200| Fri Feb 22 12:38:13.864 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/local.1, size: 64MB, took 0 secs m31200| Fri Feb 22 12:38:13.878 [conn1] ****** m31200| Fri Feb 22 12:38:13.878 [conn1] replSet info saving a newer config version to local.system.replset m31200| Fri Feb 22 12:38:13.881 [conn2] end connection 165.225.128.186:53825 (1 connection now open) m31200| Fri Feb 22 12:38:13.889 [conn1] replSet saveConfigLocally done m31200| Fri Feb 22 12:38:13.889 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31102| Fri Feb 22 12:38:14.600 [conn2] end connection 165.225.128.186:45114 (2 connections now open) m31102| Fri Feb 22 12:38:14.601 [initandlisten] connection accepted from 165.225.128.186:40156 #5 (3 connections now open) m31100| Fri Feb 22 12:38:14.601 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY m31102| Fri Feb 22 12:38:14.783 [conn3] end connection 165.225.128.186:64771 (2 connections now open) m31102| Fri Feb 22 12:38:14.783 [initandlisten] connection accepted from 165.225.128.186:34498 #6 (3 connections now open) m31101| Fri Feb 22 12:38:14.783 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31102 is now in state SECONDARY m31200| Fri Feb 22 12:38:23.368 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:23.368 [rsStart] replSet STARTUP2 m31200| Fri Feb 22 12:38:23.368 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is up m31200| Fri Feb 22 
12:38:23.368 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is up m31201| Fri Feb 22 12:38:23.542 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:23.543 [initandlisten] connection accepted from 165.225.128.186:34884 #3 (2 connections now open) m31201| Fri Feb 22 12:38:23.543 [initandlisten] connection accepted from 165.225.128.186:51563 #3 (3 connections now open) m31201| Fri Feb 22 12:38:23.544 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31201 m31201| Fri Feb 22 12:38:23.544 [rsStart] replSet got config version 1 from a remote, saving locally m31201| Fri Feb 22 12:38:23.544 [rsStart] replSet info saving a newer config version to local.system.replset m31201| Fri Feb 22 12:38:23.551 [rsStart] replSet saveConfigLocally done m31201| Fri Feb 22 12:38:23.551 [rsStart] replSet STARTUP2 m31201| Fri Feb 22 12:38:23.552 [rsSync] ****** m31201| Fri Feb 22 12:38:23.552 [rsSync] creating replication oplog of size: 40MB... m31201| Fri Feb 22 12:38:23.552 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/local.1, filling with zeroes... 
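The `replSetInitiate` document sent to bs-smartos-x86-64-1.10gen.cc:31200 above is exactly what the shell's `rs.initiate()` helper wraps, and the "got config version 1 from a remote, saving locally" lines show the other members pulling that config afterwards. A minimal sketch of building the same config document programmatically (the `buildConfig` helper name is hypothetical; in a live shell you would pass the result to `rs.initiate()`):

```javascript
// Build a replSetInitiate config document like the one the test harness
// sent to the first node. The helper name buildConfig is hypothetical.
function buildConfig(setName, host, ports) {
  return {
    _id: setName,
    members: ports.map(function (port, i) {
      // Member _id values are just the array indexes 0, 1, 2.
      return { _id: i, host: host + ":" + port };
    })
  };
}

var config = buildConfig("rs1-rs1", "bs-smartos-x86-64-1.10gen.cc",
                         [31200, 31201, 31202]);
// In a mongo shell connected to port 31200: rs.initiate(config)
```

Note the asymmetry visible in the log: only the node that receives the command saves the config immediately ("Config now saved locally. Should come online in about a minute."); the other two members sit in EMPTYCONFIG until their `rsStart` threads contact the seed and fetch version 1.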
m31201| Fri Feb 22 12:38:23.552 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/local.1, size: 64MB, took 0 secs m31201| Fri Feb 22 12:38:23.562 [conn3] end connection 165.225.128.186:51563 (2 connections now open) m31201| Fri Feb 22 12:38:23.568 [rsSync] ****** m31201| Fri Feb 22 12:38:23.568 [rsSync] replSet initial sync pending m31201| Fri Feb 22 12:38:23.568 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31202| Fri Feb 22 12:38:23.767 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:23.767 [initandlisten] connection accepted from 165.225.128.186:51995 #4 (3 connections now open) m31202| Fri Feb 22 12:38:23.768 [initandlisten] connection accepted from 165.225.128.186:37644 #3 (3 connections now open) m31202| Fri Feb 22 12:38:23.769 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31202 m31202| Fri Feb 22 12:38:23.769 [rsStart] replSet got config version 1 from a remote, saving locally m31202| Fri Feb 22 12:38:23.769 [rsStart] replSet info saving a newer config version to local.system.replset m31202| Fri Feb 22 12:38:23.773 [rsStart] replSet saveConfigLocally done m31202| Fri Feb 22 12:38:23.774 [rsStart] replSet STARTUP2 m31202| Fri Feb 22 12:38:23.774 [rsSync] ****** m31202| Fri Feb 22 12:38:23.774 [rsSync] creating replication oplog of size: 40MB... m31202| Fri Feb 22 12:38:23.774 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/local.1, filling with zeroes... 
m31202| Fri Feb 22 12:38:23.775 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/local.1, size: 64MB, took 0 secs m31202| Fri Feb 22 12:38:23.787 [conn3] end connection 165.225.128.186:37644 (2 connections now open) m31202| Fri Feb 22 12:38:23.795 [rsSync] ****** m31202| Fri Feb 22 12:38:23.795 [rsSync] replSet initial sync pending m31202| Fri Feb 22 12:38:23.795 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31200| Fri Feb 22 12:38:24.369 [rsSync] replSet SECONDARY m31101| Fri Feb 22 12:38:24.989 [conn4] end connection 165.225.128.186:55119 (2 connections now open) m31101| Fri Feb 22 12:38:24.990 [initandlisten] connection accepted from 165.225.128.186:50092 #6 (3 connections now open) m31200| Fri Feb 22 12:38:25.369 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31201 thinks that we are down m31200| Fri Feb 22 12:38:25.369 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state STARTUP2 m31200| Fri Feb 22 12:38:25.369 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31202 thinks that we are down m31200| Fri Feb 22 12:38:25.369 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is now in state STARTUP2 m31200| Fri Feb 22 12:38:25.369 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31202 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31200 is electable' m31200| Fri Feb 22 12:38:25.369 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31202 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31200 is electable' m31201| Fri Feb 22 12:38:25.544 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is up m31201| Fri Feb 22 12:38:25.544 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state SECONDARY m31202| Fri Feb 22 12:38:25.544 [initandlisten] connection accepted from 165.225.128.186:45435 #4 (3 connections now open) m31201| Fri Feb 22 12:38:25.545 [rsHealthPoll] replset info 
bs-smartos-x86-64-1.10gen.cc:31202 thinks that we are down m31201| Fri Feb 22 12:38:25.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is up m31201| Fri Feb 22 12:38:25.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is now in state STARTUP2 m31202| Fri Feb 22 12:38:25.769 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is up m31202| Fri Feb 22 12:38:25.769 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state SECONDARY m31201| Fri Feb 22 12:38:25.769 [initandlisten] connection accepted from 165.225.128.186:32769 #4 (3 connections now open) m31202| Fri Feb 22 12:38:25.770 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is up m31202| Fri Feb 22 12:38:25.770 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state STARTUP2 m31101| Fri Feb 22 12:38:28.602 [conn5] end connection 165.225.128.186:35297 (2 connections now open) m31101| Fri Feb 22 12:38:28.602 [initandlisten] connection accepted from 165.225.128.186:64219 #7 (3 connections now open) m31100| Fri Feb 22 12:38:30.785 [conn5] end connection 165.225.128.186:60318 (6 connections now open) m31100| Fri Feb 22 12:38:30.785 [initandlisten] connection accepted from 165.225.128.186:62661 #13 (7 connections now open) m31200| Fri Feb 22 12:38:31.370 [rsMgr] replSet info electSelf 0 m31202| Fri Feb 22 12:38:31.370 [conn2] replSet RECOVERING m31202| Fri Feb 22 12:38:31.370 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31200 (0) m31201| Fri Feb 22 12:38:31.370 [conn2] replSet RECOVERING m31201| Fri Feb 22 12:38:31.370 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31200 (0) m31201| Fri Feb 22 12:38:31.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is now in state RECOVERING m31202| Fri Feb 22 12:38:31.770 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state RECOVERING m31200| Fri Feb 22 12:38:32.369 [rsMgr] 
replSet PRIMARY m31200| Fri Feb 22 12:38:33.370 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state RECOVERING m31200| Fri Feb 22 12:38:33.370 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is now in state RECOVERING m31201| Fri Feb 22 12:38:33.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state PRIMARY m31202| Fri Feb 22 12:38:33.770 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state PRIMARY m31201| Fri Feb 22 12:38:37.370 [conn2] end connection 165.225.128.186:36605 (2 connections now open) m31201| Fri Feb 22 12:38:37.371 [initandlisten] connection accepted from 165.225.128.186:54299 #5 (3 connections now open) m31200| Fri Feb 22 12:38:39.546 [conn3] end connection 165.225.128.186:34884 (2 connections now open) m31200| Fri Feb 22 12:38:39.547 [initandlisten] connection accepted from 165.225.128.186:46858 #5 (3 connections now open) m31201| Fri Feb 22 12:38:39.568 [rsSync] replSet initial sync pending m31201| Fri Feb 22 12:38:39.568 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:39.569 [initandlisten] connection accepted from 165.225.128.186:64764 #6 (4 connections now open) m31201| Fri Feb 22 12:38:39.577 [rsSync] build index local.me { _id: 1 } m31201| Fri Feb 22 12:38:39.581 [rsSync] build index done. scanned 0 total records. 0.003 secs m31201| Fri Feb 22 12:38:39.582 [rsSync] build index local.replset.minvalid { _id: 1 } m31201| Fri Feb 22 12:38:39.584 [rsSync] build index done. scanned 0 total records. 
0.001 secs m31201| Fri Feb 22 12:38:39.584 [rsSync] replSet initial sync drop all databases m31201| Fri Feb 22 12:38:39.584 [rsSync] dropAllDatabasesExceptLocal 1 m31201| Fri Feb 22 12:38:39.584 [rsSync] replSet initial sync clone all databases m31201| Fri Feb 22 12:38:39.584 [rsSync] replSet initial sync data copy, starting syncup m31201| Fri Feb 22 12:38:39.584 [rsSync] oplog sync 1 of 3 m31201| Fri Feb 22 12:38:39.585 [rsSync] oplog sync 2 of 3 m31201| Fri Feb 22 12:38:39.585 [rsSync] replSet initial sync building indexes m31201| Fri Feb 22 12:38:39.585 [rsSync] oplog sync 3 of 3 m31201| Fri Feb 22 12:38:39.585 [rsSync] replSet initial sync finishing up m31201| Fri Feb 22 12:38:39.598 [rsSync] replSet set minValid=512766b5:b m31201| Fri Feb 22 12:38:39.609 [rsSync] replSet initial sync done m31200| Fri Feb 22 12:38:39.610 [conn6] end connection 165.225.128.186:64764 (3 connections now open) m31200| Fri Feb 22 12:38:39.771 [conn4] end connection 165.225.128.186:51995 (2 connections now open) m31200| Fri Feb 22 12:38:39.771 [initandlisten] connection accepted from 165.225.128.186:50244 #7 (3 connections now open) m31202| Fri Feb 22 12:38:39.796 [rsSync] replSet initial sync pending m31202| Fri Feb 22 12:38:39.796 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:39.796 [initandlisten] connection accepted from 165.225.128.186:56207 #8 (4 connections now open) m31202| Fri Feb 22 12:38:39.804 [rsSync] build index local.me { _id: 1 } m31202| Fri Feb 22 12:38:39.808 [rsSync] build index done. scanned 0 total records. 0.003 secs m31202| Fri Feb 22 12:38:39.810 [rsSync] build index local.replset.minvalid { _id: 1 } m31202| Fri Feb 22 12:38:39.811 [rsSync] build index done. scanned 0 total records. 
0.001 secs m31202| Fri Feb 22 12:38:39.811 [rsSync] replSet initial sync drop all databases m31202| Fri Feb 22 12:38:39.811 [rsSync] dropAllDatabasesExceptLocal 1 m31202| Fri Feb 22 12:38:39.811 [rsSync] replSet initial sync clone all databases m31202| Fri Feb 22 12:38:39.812 [rsSync] replSet initial sync data copy, starting syncup m31202| Fri Feb 22 12:38:39.812 [rsSync] oplog sync 1 of 3 m31202| Fri Feb 22 12:38:39.812 [rsSync] oplog sync 2 of 3 m31202| Fri Feb 22 12:38:39.812 [rsSync] replSet initial sync building indexes m31202| Fri Feb 22 12:38:39.812 [rsSync] oplog sync 3 of 3 m31202| Fri Feb 22 12:38:39.812 [rsSync] replSet initial sync finishing up m31202| Fri Feb 22 12:38:39.832 [rsSync] replSet set minValid=512766b5:b m31202| Fri Feb 22 12:38:39.851 [rsSync] replSet initial sync done m31200| Fri Feb 22 12:38:39.851 [conn8] end connection 165.225.128.186:56207 (3 connections now open) m31201| Fri Feb 22 12:38:40.552 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:40.553 [initandlisten] connection accepted from 165.225.128.186:54273 #9 (4 connections now open) m31201| Fri Feb 22 12:38:40.610 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:40.610 [initandlisten] connection accepted from 165.225.128.186:40783 #10 (5 connections now open) m31202| Fri Feb 22 12:38:40.775 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:40.775 [initandlisten] connection accepted from 165.225.128.186:34251 #11 (6 connections now open) m31202| Fri Feb 22 12:38:40.851 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31200 m31200| Fri Feb 22 12:38:40.852 [initandlisten] connection accepted from 165.225.128.186:38056 #12 (7 connections now open) m31100| Fri Feb 22 12:38:40.992 [conn9] end connection 165.225.128.186:40740 (6 connections now open) m31100| Fri Feb 22 12:38:40.992 [initandlisten] 
connection accepted from 165.225.128.186:41658 #14 (7 connections now open) m31201| Fri Feb 22 12:38:41.611 [rsSync] replSet SECONDARY m31200| Fri Feb 22 12:38:41.619 [slaveTracking] build index local.slaves { _id: 1 } m31200| Fri Feb 22 12:38:41.621 [slaveTracking] build index done. scanned 0 total records. 0.002 secs m31202| Fri Feb 22 12:38:41.772 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state SECONDARY m31202| Fri Feb 22 12:38:41.852 [rsSync] replSet SECONDARY Replica set test! ReplSetTest Starting Set ReplSetTest n is : 0 ReplSetTest n: 0 ports: [ 31300, 31301, 31302 ] 31300 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31300, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs2", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "rs1", "shard" : 2, "node" : 0, "set" : "rs1-rs2" }, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/rs1-rs2-0' Fri Feb 22 12:38:41.992 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31300 --noprealloc --smallfiles --rest --replSet rs1-rs2 --dbpath /data/db/rs1-rs2-0 --setParameter enableTestCommands=1 m31300| note: noprealloc may hurt performance in many applications m31300| Fri Feb 22 12:38:42.064 [initandlisten] MongoDB starting : pid=16289 port=31300 dbpath=/data/db/rs1-rs2-0 64-bit host=bs-smartos-x86-64-1.10gen.cc m31300| Fri Feb 22 12:38:42.065 [initandlisten] m31300| Fri Feb 22 12:38:42.065 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31300| Fri Feb 22 12:38:42.065 [initandlisten] ** uses to detect impending page faults. 
m31300| Fri Feb 22 12:38:42.065 [initandlisten] ** This may result in slower performance for certain use cases m31300| Fri Feb 22 12:38:42.065 [initandlisten] m31300| Fri Feb 22 12:38:42.065 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31300| Fri Feb 22 12:38:42.065 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31300| Fri Feb 22 12:38:42.065 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31300| Fri Feb 22 12:38:42.065 [initandlisten] allocator: system m31300| Fri Feb 22 12:38:42.065 [initandlisten] options: { dbpath: "/data/db/rs1-rs2-0", noprealloc: true, oplogSize: 40, port: 31300, replSet: "rs1-rs2", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31300| Fri Feb 22 12:38:42.065 [initandlisten] journal dir=/data/db/rs1-rs2-0/journal m31300| Fri Feb 22 12:38:42.065 [initandlisten] recover : no journal files present, no recovery needed m31300| Fri Feb 22 12:38:42.082 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/local.ns, filling with zeroes... m31300| Fri Feb 22 12:38:42.082 [FileAllocator] creating directory /data/db/rs1-rs2-0/_tmp m31300| Fri Feb 22 12:38:42.082 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/local.ns, size: 16MB, took 0 secs m31300| Fri Feb 22 12:38:42.082 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/local.0, filling with zeroes... 
m31300| Fri Feb 22 12:38:42.082 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/local.0, size: 16MB, took 0 secs m31300| Fri Feb 22 12:38:42.085 [initandlisten] waiting for connections on port 31300 m31300| Fri Feb 22 12:38:42.085 [websvr] admin web console waiting for connections on port 32300 m31300| Fri Feb 22 12:38:42.088 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31300| Fri Feb 22 12:38:42.088 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31300| Fri Feb 22 12:38:42.194 [initandlisten] connection accepted from 127.0.0.1:51566 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31300 ] ReplSetTest n is : 1 ReplSetTest n: 1 ports: [ 31300, 31301, 31302 ] 31301 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31301, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs2", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "rs1", "shard" : 2, "node" : 1, "set" : "rs1-rs2" }, "restart" : undefined } ReplSetTest Starting.... Resetting db path '/data/db/rs1-rs2-1' Fri Feb 22 12:38:42.197 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31301 --noprealloc --smallfiles --rest --replSet rs1-rs2 --dbpath /data/db/rs1-rs2-1 --setParameter enableTestCommands=1 m31301| note: noprealloc may hurt performance in many applications m31301| Fri Feb 22 12:38:42.271 [initandlisten] MongoDB starting : pid=16290 port=31301 dbpath=/data/db/rs1-rs2-1 64-bit host=bs-smartos-x86-64-1.10gen.cc m31301| Fri Feb 22 12:38:42.271 [initandlisten] m31301| Fri Feb 22 12:38:42.271 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31301| Fri Feb 22 12:38:42.271 [initandlisten] ** uses to detect impending page faults. 
m31301| Fri Feb 22 12:38:42.271 [initandlisten] ** This may result in slower performance for certain use cases m31301| Fri Feb 22 12:38:42.271 [initandlisten] m31301| Fri Feb 22 12:38:42.272 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31301| Fri Feb 22 12:38:42.272 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31301| Fri Feb 22 12:38:42.272 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31301| Fri Feb 22 12:38:42.272 [initandlisten] allocator: system m31301| Fri Feb 22 12:38:42.272 [initandlisten] options: { dbpath: "/data/db/rs1-rs2-1", noprealloc: true, oplogSize: 40, port: 31301, replSet: "rs1-rs2", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31301| Fri Feb 22 12:38:42.272 [initandlisten] journal dir=/data/db/rs1-rs2-1/journal m31301| Fri Feb 22 12:38:42.272 [initandlisten] recover : no journal files present, no recovery needed m31301| Fri Feb 22 12:38:42.286 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/local.ns, filling with zeroes... m31301| Fri Feb 22 12:38:42.286 [FileAllocator] creating directory /data/db/rs1-rs2-1/_tmp m31301| Fri Feb 22 12:38:42.286 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/local.ns, size: 16MB, took 0 secs m31301| Fri Feb 22 12:38:42.286 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/local.0, filling with zeroes... 
m31301| Fri Feb 22 12:38:42.287 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/local.0, size: 16MB, took 0 secs m31301| Fri Feb 22 12:38:42.290 [initandlisten] waiting for connections on port 31301 m31301| Fri Feb 22 12:38:42.290 [websvr] admin web console waiting for connections on port 32301 m31301| Fri Feb 22 12:38:42.293 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31301| Fri Feb 22 12:38:42.293 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31301| Fri Feb 22 12:38:42.398 [initandlisten] connection accepted from 127.0.0.1:37435 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31300, connection to bs-smartos-x86-64-1.10gen.cc:31301 ] ReplSetTest n is : 2 ReplSetTest n: 2 ports: [ 31300, 31301, 31302 ] 31302 number { "useHostName" : true, "oplogSize" : 40, "keyFile" : undefined, "port" : 31302, "noprealloc" : "", "smallfiles" : "", "rest" : "", "replSet" : "rs1-rs2", "dbpath" : "$set-$node", "useHostname" : true, "noJournalPrealloc" : undefined, "pathOpts" : { "testName" : "rs1", "shard" : 2, "node" : 2, "set" : "rs1-rs2" }, "restart" : undefined } ReplSetTest Starting.... 
Resetting db path '/data/db/rs1-rs2-2' Fri Feb 22 12:38:42.404 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --oplogSize 40 --port 31302 --noprealloc --smallfiles --rest --replSet rs1-rs2 --dbpath /data/db/rs1-rs2-2 --setParameter enableTestCommands=1 m31302| note: noprealloc may hurt performance in many applications m31302| Fri Feb 22 12:38:42.516 [initandlisten] MongoDB starting : pid=16291 port=31302 dbpath=/data/db/rs1-rs2-2 64-bit host=bs-smartos-x86-64-1.10gen.cc m31302| Fri Feb 22 12:38:42.517 [initandlisten] m31302| Fri Feb 22 12:38:42.517 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB m31302| Fri Feb 22 12:38:42.517 [initandlisten] ** uses to detect impending page faults. m31302| Fri Feb 22 12:38:42.517 [initandlisten] ** This may result in slower performance for certain use cases m31302| Fri Feb 22 12:38:42.517 [initandlisten] m31302| Fri Feb 22 12:38:42.517 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5 m31302| Fri Feb 22 12:38:42.517 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0 m31302| Fri Feb 22 12:38:42.517 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49 m31302| Fri Feb 22 12:38:42.517 [initandlisten] allocator: system m31302| Fri Feb 22 12:38:42.517 [initandlisten] options: { dbpath: "/data/db/rs1-rs2-2", noprealloc: true, oplogSize: 40, port: 31302, replSet: "rs1-rs2", rest: true, setParameter: [ "enableTestCommands=1" ], smallfiles: true } m31302| Fri Feb 22 12:38:42.517 [initandlisten] journal dir=/data/db/rs1-rs2-2/journal m31302| Fri Feb 22 12:38:42.517 [initandlisten] recover : no journal files present, no recovery needed m31302| Fri Feb 22 12:38:42.533 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/local.ns, filling with zeroes... 
m31302| Fri Feb 22 12:38:42.533 [FileAllocator] creating directory /data/db/rs1-rs2-2/_tmp m31302| Fri Feb 22 12:38:42.533 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/local.ns, size: 16MB, took 0 secs m31302| Fri Feb 22 12:38:42.533 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/local.0, filling with zeroes... m31302| Fri Feb 22 12:38:42.534 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/local.0, size: 16MB, took 0 secs m31302| Fri Feb 22 12:38:42.537 [initandlisten] waiting for connections on port 31302 m31302| Fri Feb 22 12:38:42.537 [websvr] admin web console waiting for connections on port 32302 m31302| Fri Feb 22 12:38:42.540 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31302| Fri Feb 22 12:38:42.540 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done m31302| Fri Feb 22 12:38:42.606 [initandlisten] connection accepted from 127.0.0.1:48513 #1 (1 connection now open) [ connection to bs-smartos-x86-64-1.10gen.cc:31300, connection to bs-smartos-x86-64-1.10gen.cc:31301, connection to bs-smartos-x86-64-1.10gen.cc:31302 ] { "replSetInitiate" : { "_id" : "rs1-rs2", "members" : [ { "_id" : 0, "host" : "bs-smartos-x86-64-1.10gen.cc:31300" }, { "_id" : 1, "host" : "bs-smartos-x86-64-1.10gen.cc:31301" }, { "_id" : 2, "host" : "bs-smartos-x86-64-1.10gen.cc:31302" } ] } } m31300| Fri Feb 22 12:38:42.607 [conn1] replSet replSetInitiate admin command received from client m31300| Fri Feb 22 12:38:42.608 [conn1] replSet replSetInitiate config object parses ok, 3 members specified m31300| Fri Feb 22 12:38:42.609 [initandlisten] connection accepted from 165.225.128.186:64348 #2 (2 connections now open) m31301| Fri Feb 22 12:38:42.609 [initandlisten] connection accepted from 165.225.128.186:49524 #2 (2 connections now open) m31302| Fri Feb 22 12:38:42.616 [initandlisten] connection accepted from 165.225.128.186:60943 #2 (2 
connections now open) m31300| Fri Feb 22 12:38:42.617 [conn1] replSet replSetInitiate all members seem up m31300| Fri Feb 22 12:38:42.617 [conn1] ****** m31300| Fri Feb 22 12:38:42.617 [conn1] creating replication oplog of size: 40MB... m31300| Fri Feb 22 12:38:42.617 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/local.1, filling with zeroes... m31300| Fri Feb 22 12:38:42.618 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/local.1, size: 64MB, took 0 secs m31300| Fri Feb 22 12:38:42.630 [conn1] ****** m31300| Fri Feb 22 12:38:42.630 [conn1] replSet info saving a newer config version to local.system.replset m31300| Fri Feb 22 12:38:42.630 [conn2] end connection 165.225.128.186:64348 (1 connection now open) m31300| Fri Feb 22 12:38:42.639 [conn1] replSet saveConfigLocally done m31300| Fri Feb 22 12:38:42.639 [conn1] replSet replSetInitiate config now saved locally. Should come online in about a minute. { "info" : "Config now saved locally. Should come online in about a minute.", "ok" : 1 } m31200| Fri Feb 22 12:38:43.371 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is now in state SECONDARY m31200| Fri Feb 22 12:38:43.372 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31201 is now in state SECONDARY m31201| Fri Feb 22 12:38:43.547 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31202 is now in state SECONDARY m31102| Fri Feb 22 12:38:44.605 [conn5] end connection 165.225.128.186:40156 (2 connections now open) m31102| Fri Feb 22 12:38:44.605 [initandlisten] connection accepted from 165.225.128.186:54840 #7 (3 connections now open) m31102| Fri Feb 22 12:38:44.787 [conn6] end connection 165.225.128.186:34498 (2 connections now open) m31102| Fri Feb 22 12:38:44.787 [initandlisten] connection accepted from 165.225.128.186:59761 #8 (3 connections now open) m31202| Fri Feb 22 12:38:51.372 [conn2] end connection 165.225.128.186:43020 (2 connections now open) m31202| Fri Feb 22 12:38:51.373 [initandlisten] 
connection accepted from 165.225.128.186:34417 #5 (3 connections now open) m31300| Fri Feb 22 12:38:52.088 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:38:52.088 [rsStart] replSet STARTUP2 m31300| Fri Feb 22 12:38:52.088 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is up m31301| Fri Feb 22 12:38:52.294 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:38:52.294 [initandlisten] connection accepted from 165.225.128.186:42905 #3 (2 connections now open) m31301| Fri Feb 22 12:38:52.295 [initandlisten] connection accepted from 165.225.128.186:37468 #3 (3 connections now open) m31301| Fri Feb 22 12:38:52.295 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31301 m31301| Fri Feb 22 12:38:52.296 [rsStart] replSet got config version 1 from a remote, saving locally m31301| Fri Feb 22 12:38:52.296 [rsStart] replSet info saving a newer config version to local.system.replset m31301| Fri Feb 22 12:38:52.299 [rsStart] replSet saveConfigLocally done m31301| Fri Feb 22 12:38:52.300 [rsStart] replSet STARTUP2 m31301| Fri Feb 22 12:38:52.300 [rsSync] ****** m31301| Fri Feb 22 12:38:52.300 [rsSync] creating replication oplog of size: 40MB... m31301| Fri Feb 22 12:38:52.300 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/local.1, filling with zeroes... 
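The `replSetInitiate` document received at 12:38:42.607 is just the set name plus one member per host with sequential `_id`s; after it is saved locally, the other nodes fetch config version 1 from a remote and save it themselves, as the 31301 messages above show. A sketch of how such a config document can be assembled (the helper name is illustrative):

```python
def make_replset_config(set_name, hosts):
    """Assemble a replSetInitiate document: one member per host, _id 0..n-1."""
    return {
        "_id": set_name,
        "members": [{"_id": i, "host": h} for i, h in enumerate(hosts)],
    }

# The three-member set seen in the log:
cfg = make_replset_config(
    "rs1-rs2",
    ["bs-smartos-x86-64-1.10gen.cc:%d" % p for p in (31300, 31301, 31302)],
)
```

In the mongo shell this is the document `rs.initiate()` sends under the hood when given a full config.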
m31301| Fri Feb 22 12:38:52.300 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/local.1, size: 64MB, took 0 secs m31301| Fri Feb 22 12:38:52.312 [rsSync] ****** m31301| Fri Feb 22 12:38:52.312 [rsSync] replSet initial sync pending m31301| Fri Feb 22 12:38:52.312 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31301| Fri Feb 22 12:38:52.315 [conn3] end connection 165.225.128.186:37468 (2 connections now open) m31302| Fri Feb 22 12:38:52.541 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG) m31300| Fri Feb 22 12:38:53.089 [rsSync] replSet SECONDARY m31202| Fri Feb 22 12:38:53.548 [conn4] end connection 165.225.128.186:45435 (2 connections now open) m31202| Fri Feb 22 12:38:53.548 [initandlisten] connection accepted from 165.225.128.186:64824 #6 (3 connections now open) m31201| Fri Feb 22 12:38:53.773 [conn4] end connection 165.225.128.186:32769 (2 connections now open) m31201| Fri Feb 22 12:38:53.774 [initandlisten] connection accepted from 165.225.128.186:37814 #6 (3 connections now open) m31300| Fri Feb 22 12:38:54.088 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is up m31300| Fri Feb 22 12:38:54.089 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31301 thinks that we are down m31300| Fri Feb 22 12:38:54.089 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is now in state STARTUP2 m31300| Fri Feb 22 12:38:54.089 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31301 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31300 is electable' m31300| Fri Feb 22 12:38:54.089 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31301 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31300 is electable' m31301| Fri Feb 22 12:38:54.296 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31300 is up m31301| Fri Feb 22 12:38:54.296 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31300 is 
now in state SECONDARY m31302| Fri Feb 22 12:38:54.296 [initandlisten] connection accepted from 165.225.128.186:37652 #3 (3 connections now open) m31301| Fri Feb 22 12:38:54.297 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is up m31101| Fri Feb 22 12:38:54.997 [conn6] end connection 165.225.128.186:50092 (2 connections now open) m31101| Fri Feb 22 12:38:54.998 [initandlisten] connection accepted from 165.225.128.186:62367 #8 (3 connections now open) m31101| Fri Feb 22 12:38:58.606 [conn7] end connection 165.225.128.186:64219 (2 connections now open) m31101| Fri Feb 22 12:38:58.606 [initandlisten] connection accepted from 165.225.128.186:37285 #9 (3 connections now open) m31300| Fri Feb 22 12:39:00.090 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes m31100| Fri Feb 22 12:39:00.789 [conn13] end connection 165.225.128.186:62661 (6 connections now open) m31100| Fri Feb 22 12:39:00.789 [initandlisten] connection accepted from 165.225.128.186:47878 #15 (7 connections now open) m31302| Fri Feb 22 12:39:02.541 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:02.542 [initandlisten] connection accepted from 165.225.128.186:60070 #4 (3 connections now open) m31302| Fri Feb 22 12:39:02.543 [rsStart] trying to contact bs-smartos-x86-64-1.10gen.cc:31301 m31301| Fri Feb 22 12:39:02.543 [initandlisten] connection accepted from 165.225.128.186:43941 #4 (3 connections now open) m31302| Fri Feb 22 12:39:02.544 [initandlisten] connection accepted from 165.225.128.186:48053 #4 (4 connections now open) m31302| Fri Feb 22 12:39:02.544 [rsStart] replSet I am bs-smartos-x86-64-1.10gen.cc:31302 m31302| Fri Feb 22 12:39:02.544 [rsStart] replSet got config version 1 from a remote, saving locally m31302| Fri Feb 22 12:39:02.544 [rsStart] replSet info saving a newer config version to local.system.replset m31302| Fri Feb 22 12:39:02.547 [rsStart] replSet saveConfigLocally done m31302| Fri Feb 
22 12:39:02.548 [rsStart] replSet STARTUP2 m31302| Fri Feb 22 12:39:02.548 [rsSync] ****** m31302| Fri Feb 22 12:39:02.548 [rsSync] creating replication oplog of size: 40MB... m31302| Fri Feb 22 12:39:02.548 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/local.1, filling with zeroes... m31302| Fri Feb 22 12:39:02.549 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/local.1, size: 64MB, took 0 secs m31302| Fri Feb 22 12:39:02.560 [rsSync] ****** m31302| Fri Feb 22 12:39:02.560 [rsSync] replSet initial sync pending m31302| Fri Feb 22 12:39:02.560 [rsSync] replSet initial sync need a member to be primary or secondary to do our initial sync m31302| Fri Feb 22 12:39:02.564 [conn4] end connection 165.225.128.186:48053 (3 connections now open) m31300| Fri Feb 22 12:39:04.090 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31302 thinks that we are down m31300| Fri Feb 22 12:39:04.090 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is now in state STARTUP2 m31300| Fri Feb 22 12:39:04.090 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31302 would veto with 'I don't think bs-smartos-x86-64-1.10gen.cc:31300 is electable' m31301| Fri Feb 22 12:39:04.298 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31302 thinks that we are down m31301| Fri Feb 22 12:39:04.298 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is now in state STARTUP2 m31302| Fri Feb 22 12:39:04.544 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is up m31302| Fri Feb 22 12:39:04.544 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31300 is up m31302| Fri Feb 22 12:39:04.544 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is now in state STARTUP2 m31302| Fri Feb 22 12:39:04.544 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31300 is now in state SECONDARY m31301| Fri Feb 22 12:39:06.090 [conn2] end connection 165.225.128.186:49524 (2 connections now open) m31301| Fri Feb 22 
12:39:06.091 [initandlisten] connection accepted from 165.225.128.186:63085 #5 (3 connections now open) m31201| Fri Feb 22 12:39:07.377 [conn5] end connection 165.225.128.186:54299 (2 connections now open) m31201| Fri Feb 22 12:39:07.377 [initandlisten] connection accepted from 165.225.128.186:63732 #7 (3 connections now open) m31300| Fri Feb 22 12:39:08.298 [conn3] end connection 165.225.128.186:42905 (2 connections now open) m31300| Fri Feb 22 12:39:08.298 [initandlisten] connection accepted from 165.225.128.186:57030 #5 (3 connections now open) m31301| Fri Feb 22 12:39:08.312 [rsSync] replSet initial sync pending m31301| Fri Feb 22 12:39:08.312 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:08.313 [initandlisten] connection accepted from 165.225.128.186:46164 #6 (4 connections now open) m31301| Fri Feb 22 12:39:08.318 [rsSync] build index local.me { _id: 1 } m31301| Fri Feb 22 12:39:08.321 [rsSync] build index done. scanned 0 total records. 0.002 secs m31301| Fri Feb 22 12:39:08.322 [rsSync] build index local.replset.minvalid { _id: 1 } m31301| Fri Feb 22 12:39:08.323 [rsSync] build index done. scanned 0 total records. 
0 secs m31301| Fri Feb 22 12:39:08.323 [rsSync] replSet initial sync drop all databases m31301| Fri Feb 22 12:39:08.323 [rsSync] dropAllDatabasesExceptLocal 1 m31301| Fri Feb 22 12:39:08.323 [rsSync] replSet initial sync clone all databases m31301| Fri Feb 22 12:39:08.323 [rsSync] replSet initial sync data copy, starting syncup m31301| Fri Feb 22 12:39:08.323 [rsSync] oplog sync 1 of 3 m31301| Fri Feb 22 12:39:08.324 [rsSync] oplog sync 2 of 3 m31301| Fri Feb 22 12:39:08.324 [rsSync] replSet initial sync building indexes m31301| Fri Feb 22 12:39:08.324 [rsSync] oplog sync 3 of 3 m31301| Fri Feb 22 12:39:08.324 [rsSync] replSet initial sync finishing up m31301| Fri Feb 22 12:39:08.335 [rsSync] replSet set minValid=512766d2:b m31301| Fri Feb 22 12:39:08.344 [rsSync] replSet RECOVERING m31301| Fri Feb 22 12:39:08.344 [rsSync] replSet initial sync done m31300| Fri Feb 22 12:39:08.344 [conn6] end connection 165.225.128.186:46164 (3 connections now open) m31302| Fri Feb 22 12:39:08.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is now in state RECOVERING m31200| Fri Feb 22 12:39:09.550 [conn5] end connection 165.225.128.186:46858 (6 connections now open) m31200| Fri Feb 22 12:39:09.551 [initandlisten] connection accepted from 165.225.128.186:51691 #13 (7 connections now open) m31200| Fri Feb 22 12:39:09.775 [conn7] end connection 165.225.128.186:50244 (6 connections now open) m31200| Fri Feb 22 12:39:09.775 [initandlisten] connection accepted from 165.225.128.186:65458 #14 (7 connections now open) m31300| Fri Feb 22 12:39:10.091 [rsMgr] replSet info electSelf 0 m31300| Fri Feb 22 12:39:10.091 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is now in state RECOVERING m31302| Fri Feb 22 12:39:10.091 [conn2] replSet RECOVERING m31301| Fri Feb 22 12:39:10.091 [conn5] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31300 (0) m31302| Fri Feb 22 12:39:10.091 [conn2] replSet info voting yea for bs-smartos-x86-64-1.10gen.cc:31300 
(0) m31301| Fri Feb 22 12:39:10.299 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is now in state RECOVERING m31301| Fri Feb 22 12:39:10.344 [rsSync] replSet SECONDARY m31302| Fri Feb 22 12:39:10.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is now in state SECONDARY m31100| Fri Feb 22 12:39:10.996 [conn14] end connection 165.225.128.186:41658 (6 connections now open) m31100| Fri Feb 22 12:39:10.996 [initandlisten] connection accepted from 165.225.128.186:39561 #16 (7 connections now open) m31300| Fri Feb 22 12:39:11.090 [rsMgr] replSet PRIMARY m31300| Fri Feb 22 12:39:12.091 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is now in state RECOVERING m31300| Fri Feb 22 12:39:12.091 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31301 is now in state SECONDARY m31301| Fri Feb 22 12:39:12.299 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31300 is now in state PRIMARY m31301| Fri Feb 22 12:39:12.301 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:12.302 [initandlisten] connection accepted from 165.225.128.186:47673 #7 (4 connections now open) m31301| Fri Feb 22 12:39:12.344 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:12.344 [initandlisten] connection accepted from 165.225.128.186:45983 #8 (5 connections now open) m31302| Fri Feb 22 12:39:12.545 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31300 is now in state PRIMARY m31300| Fri Feb 22 12:39:13.354 [slaveTracking] build index local.slaves { _id: 1 } m31300| Fri Feb 22 12:39:13.357 [slaveTracking] build index done. scanned 0 total records. 
0.002 secs m31102| Fri Feb 22 12:39:14.609 [conn7] end connection 165.225.128.186:54840 (2 connections now open) m31102| Fri Feb 22 12:39:14.609 [initandlisten] connection accepted from 165.225.128.186:41546 #9 (3 connections now open) m31102| Fri Feb 22 12:39:14.791 [conn8] end connection 165.225.128.186:59761 (2 connections now open) m31102| Fri Feb 22 12:39:14.791 [initandlisten] connection accepted from 165.225.128.186:51619 #10 (3 connections now open) m31300| Fri Feb 22 12:39:18.546 [conn4] end connection 165.225.128.186:60070 (4 connections now open) m31300| Fri Feb 22 12:39:18.547 [initandlisten] connection accepted from 165.225.128.186:43215 #9 (5 connections now open) m31302| Fri Feb 22 12:39:18.560 [rsSync] replSet initial sync pending m31302| Fri Feb 22 12:39:18.560 [rsSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:18.561 [initandlisten] connection accepted from 165.225.128.186:63842 #10 (6 connections now open) m31302| Fri Feb 22 12:39:18.569 [rsSync] build index local.me { _id: 1 } m31302| Fri Feb 22 12:39:18.573 [rsSync] build index done. scanned 0 total records. 0.003 secs m31302| Fri Feb 22 12:39:18.575 [rsSync] build index local.replset.minvalid { _id: 1 } m31302| Fri Feb 22 12:39:18.576 [rsSync] build index done. scanned 0 total records. 
0.001 secs m31302| Fri Feb 22 12:39:18.576 [rsSync] replSet initial sync drop all databases m31302| Fri Feb 22 12:39:18.576 [rsSync] dropAllDatabasesExceptLocal 1 m31302| Fri Feb 22 12:39:18.576 [rsSync] replSet initial sync clone all databases m31302| Fri Feb 22 12:39:18.577 [rsSync] replSet initial sync data copy, starting syncup m31302| Fri Feb 22 12:39:18.577 [rsSync] oplog sync 1 of 3 m31302| Fri Feb 22 12:39:18.577 [rsSync] oplog sync 2 of 3 m31302| Fri Feb 22 12:39:18.577 [rsSync] replSet initial sync building indexes m31302| Fri Feb 22 12:39:18.577 [rsSync] oplog sync 3 of 3 m31302| Fri Feb 22 12:39:18.577 [rsSync] replSet initial sync finishing up m31302| Fri Feb 22 12:39:18.585 [rsSync] replSet set minValid=512766d2:b m31302| Fri Feb 22 12:39:18.589 [rsSync] replSet initial sync done m31300| Fri Feb 22 12:39:18.590 [conn10] end connection 165.225.128.186:63842 (5 connections now open) m31302| Fri Feb 22 12:39:19.549 [rsBackgroundSync] replSet syncing to: bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:19.549 [initandlisten] connection accepted from 165.225.128.186:49759 #11 (6 connections now open) m31302| Fri Feb 22 12:39:19.590 [rsSyncNotifier] replset setting oplog notifier to bs-smartos-x86-64-1.10gen.cc:31300 m31300| Fri Feb 22 12:39:19.591 [initandlisten] connection accepted from 165.225.128.186:52921 #12 (7 connections now open) m31302| Fri Feb 22 12:39:20.590 [rsSync] replSet SECONDARY m31100| Fri Feb 22 12:39:20.759 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/admin.ns, filling with zeroes... m31100| Fri Feb 22 12:39:20.759 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/admin.ns, size: 16MB, took 0 secs m31100| Fri Feb 22 12:39:20.759 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/admin.0, filling with zeroes... 
m31100| Fri Feb 22 12:39:20.760 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/admin.0, size: 16MB, took 0 secs m31100| Fri Feb 22 12:39:20.763 [conn1] build index admin.foo { _id: 1 } m31100| Fri Feb 22 12:39:20.764 [conn1] build index done. scanned 0 total records. 0.001 secs m31101| Fri Feb 22 12:39:20.765 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/admin.ns, filling with zeroes... m31102| Fri Feb 22 12:39:20.765 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/admin.ns, filling with zeroes... m31101| Fri Feb 22 12:39:20.766 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/admin.ns, size: 16MB, took 0 secs m31102| Fri Feb 22 12:39:20.766 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/admin.ns, size: 16MB, took 0 secs m31101| Fri Feb 22 12:39:20.766 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/admin.0, filling with zeroes... m31102| Fri Feb 22 12:39:20.766 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/admin.0, filling with zeroes... m31102| Fri Feb 22 12:39:20.766 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/admin.0, size: 16MB, took 0 secs m31101| Fri Feb 22 12:39:20.766 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/admin.0, size: 16MB, took 0 secs ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361536760000, "i" : 1 } ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361536760000, "i" : 1 } ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101 m31101| Fri Feb 22 12:39:20.770 [repl writer worker 1] build index admin.foo { _id: 1 } m31102| Fri Feb 22 12:39:20.771 [repl writer worker 1] build index admin.foo { _id: 1 } m31101| Fri Feb 22 12:39:20.771 [repl writer worker 1] build index done. scanned 0 total records. 
0.001 secs ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced m31102| Fri Feb 22 12:39:20.773 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31102 ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31102, is synced ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361536760000, "i" : 1 } Fri Feb 22 12:39:20.778 starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 12:39:20.778 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0 m31100| Fri Feb 22 12:39:20.778 [initandlisten] connection accepted from 165.225.128.186:39357 #17 (8 connections now open) Fri Feb 22 12:39:20.779 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/ Fri Feb 22 12:39:20.779 trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0 Fri Feb 22 12:39:20.779 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0 m31100| Fri Feb 22 12:39:20.779 [initandlisten] connection accepted from 165.225.128.186:33430 #18 (9 connections now open) Fri Feb 22 12:39:20.779 trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0 Fri Feb 22 12:39:20.779 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0 Fri Feb 22 12:39:20.779 trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set rs1-rs0 m31101| Fri Feb 22 12:39:20.779 [initandlisten] connection accepted from 165.225.128.186:42279 #10 (4 connections now open) Fri Feb 22 12:39:20.780 successfully connected to new host 
bs-smartos-x86-64-1.10gen.cc:31102 in replica set rs1-rs0 m31102| Fri Feb 22 12:39:20.780 [initandlisten] connection accepted from 165.225.128.186:59385 #11 (4 connections now open) m31100| Fri Feb 22 12:39:20.780 [initandlisten] connection accepted from 165.225.128.186:54041 #19 (10 connections now open) m31100| Fri Feb 22 12:39:20.781 [conn17] end connection 165.225.128.186:39357 (9 connections now open) Fri Feb 22 12:39:20.781 Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100 m31101| Fri Feb 22 12:39:20.781 [initandlisten] connection accepted from 165.225.128.186:45110 #11 (5 connections now open) m31102| Fri Feb 22 12:39:20.782 [initandlisten] connection accepted from 165.225.128.186:43191 #12 (5 connections now open) Fri Feb 22 12:39:20.782 replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 Fri Feb 22 12:39:20.783 [ReplicaSetMonitorWatcher] starting m31200| Fri Feb 22 12:39:20.785 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/admin.ns, filling with zeroes... m31200| Fri Feb 22 12:39:20.785 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/admin.ns, size: 16MB, took 0 secs m31200| Fri Feb 22 12:39:20.785 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/admin.0, filling with zeroes... m31200| Fri Feb 22 12:39:20.785 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/admin.0, size: 16MB, took 0 secs m31200| Fri Feb 22 12:39:20.789 [conn1] build index admin.foo { _id: 1 } m31200| Fri Feb 22 12:39:20.790 [conn1] build index done. scanned 0 total records. 0.001 secs m31201| Fri Feb 22 12:39:20.791 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/admin.ns, filling with zeroes... m31202| Fri Feb 22 12:39:20.791 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/admin.ns, filling with zeroes... 
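The monitor line at 12:39:20.782 prints the replica set address in the form `setName/host1,host2,host3`, the same string a driver or mongos would use as a seed list. A small parser for that address format (the function name is illustrative):

```python
def parse_replset_address(addr):
    """Split a monitor address 'setName/host:port,host:port,...' into its parts."""
    set_name, _, host_part = addr.partition("/")
    return set_name, [h for h in host_part.split(",") if h]

# The rs1-rs0 address reported by the replica set monitor above:
name, hosts = parse_replset_address(
    "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,"
    "bs-smartos-x86-64-1.10gen.cc:31101,"
    "bs-smartos-x86-64-1.10gen.cc:31102"
)
```

Note the monitor treats the seed order as advisory: the "changing hosts" line above shows it re-deriving the member list (in a different order) from the isMaster response of the first reachable seed.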
m31201| Fri Feb 22 12:39:20.792 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/admin.ns, size: 16MB, took 0 secs m31202| Fri Feb 22 12:39:20.792 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/admin.ns, size: 16MB, took 0 secs m31201| Fri Feb 22 12:39:20.792 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/admin.0, filling with zeroes... m31202| Fri Feb 22 12:39:20.792 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/admin.0, filling with zeroes... m31201| Fri Feb 22 12:39:20.792 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/admin.0, size: 16MB, took 0 secs ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31200, is { "t" : 1361536760000, "i" : 1 } m31202| Fri Feb 22 12:39:20.792 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/admin.0, size: 16MB, took 0 secs ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361536760000, "i" : 1 } ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31201 m31201| Fri Feb 22 12:39:20.796 [repl writer worker 1] build index admin.foo { _id: 1 } m31202| Fri Feb 22 12:39:20.796 [repl writer worker 1] build index admin.foo { _id: 1 } m31201| Fri Feb 22 12:39:20.797 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs m31202| Fri Feb 22 12:39:20.798 [repl writer worker 1] build index done. scanned 0 total records. 
0.001 secs ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31201, is synced ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31202 ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31202, is synced ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361536760000, "i" : 1 } Fri Feb 22 12:39:20.803 starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 Fri Feb 22 12:39:20.803 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1 m31200| Fri Feb 22 12:39:20.803 [initandlisten] connection accepted from 165.225.128.186:41709 #15 (8 connections now open) Fri Feb 22 12:39:20.803 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31202", 2: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/ Fri Feb 22 12:39:20.803 trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1 Fri Feb 22 12:39:20.804 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1 Fri Feb 22 12:39:20.804 trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1 m31200| Fri Feb 22 12:39:20.804 [initandlisten] connection accepted from 165.225.128.186:36290 #16 (9 connections now open) Fri Feb 22 12:39:20.804 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1 Fri Feb 22 12:39:20.804 trying to add new host bs-smartos-x86-64-1.10gen.cc:31202 to replica set rs1-rs1 m31201| Fri Feb 22 12:39:20.804 [initandlisten] connection accepted from 165.225.128.186:52210 #8 (4 connections now open) Fri Feb 22 12:39:20.804 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31202 in replica set rs1-rs1 m31202| Fri Feb 22 12:39:20.804 [initandlisten] connection accepted from 
165.225.128.186:49161 #7 (4 connections now open) m31200| Fri Feb 22 12:39:20.805 [initandlisten] connection accepted from 165.225.128.186:57896 #17 (10 connections now open) m31200| Fri Feb 22 12:39:20.805 [conn15] end connection 165.225.128.186:41709 (9 connections now open) Fri Feb 22 12:39:20.805 Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200 m31201| Fri Feb 22 12:39:20.806 [initandlisten] connection accepted from 165.225.128.186:43738 #9 (5 connections now open) m31202| Fri Feb 22 12:39:20.807 [initandlisten] connection accepted from 165.225.128.186:41378 #8 (5 connections now open) Fri Feb 22 12:39:20.807 replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31300| Fri Feb 22 12:39:20.809 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/admin.ns, filling with zeroes... m31300| Fri Feb 22 12:39:20.810 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/admin.ns, size: 16MB, took 0 secs m31300| Fri Feb 22 12:39:20.810 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/admin.0, filling with zeroes... m31300| Fri Feb 22 12:39:20.810 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/admin.0, size: 16MB, took 0 secs m31300| Fri Feb 22 12:39:20.813 [conn1] build index admin.foo { _id: 1 } m31300| Fri Feb 22 12:39:20.814 [conn1] build index done. scanned 0 total records. 0.001 secs m31302| Fri Feb 22 12:39:20.816 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/admin.ns, filling with zeroes... m31301| Fri Feb 22 12:39:20.816 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/admin.ns, filling with zeroes... 
m31302| Fri Feb 22 12:39:20.816 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/admin.ns, size: 16MB, took 0 secs
m31301| Fri Feb 22 12:39:20.816 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/admin.ns, size: 16MB, took 0 secs
m31302| Fri Feb 22 12:39:20.816 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/admin.0, filling with zeroes...
m31301| Fri Feb 22 12:39:20.816 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/admin.0, filling with zeroes...
m31302| Fri Feb 22 12:39:20.816 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31300, is { "t" : 1361536760000, "i" : 1 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361536760000, "i" : 1 }
m31301| Fri Feb 22 12:39:20.817 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/admin.0, size: 16MB, took 0 secs
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31301
m31302| Fri Feb 22 12:39:20.820 [repl writer worker 1] build index admin.foo { _id: 1 }
m31301| Fri Feb 22 12:39:20.821 [repl writer worker 1] build index admin.foo { _id: 1 }
m31301| Fri Feb 22 12:39:20.822 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31302| Fri Feb 22 12:39:20.822 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31301, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31302
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31302, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361536760000, "i" : 1 }
Fri Feb 22 12:39:20.828 starting new replica set monitor for replica set rs1-rs2 with seed of bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
Fri Feb 22 12:39:20.828 successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31300 for replica set rs1-rs2
m31300| Fri Feb 22 12:39:20.828 [initandlisten] connection accepted from 165.225.128.186:46376 #13 (8 connections now open)
Fri Feb 22 12:39:20.829 changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31300", 1: "bs-smartos-x86-64-1.10gen.cc:31302", 2: "bs-smartos-x86-64-1.10gen.cc:31301" } from rs1-rs2/
Fri Feb 22 12:39:20.829 trying to add new host bs-smartos-x86-64-1.10gen.cc:31300 to replica set rs1-rs2
Fri Feb 22 12:39:20.829 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31300 in replica set rs1-rs2
Fri Feb 22 12:39:20.829 trying to add new host bs-smartos-x86-64-1.10gen.cc:31301 to replica set rs1-rs2
m31300| Fri Feb 22 12:39:20.829 [initandlisten] connection accepted from 165.225.128.186:50209 #14 (9 connections now open)
Fri Feb 22 12:39:20.829 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31301 in replica set rs1-rs2
Fri Feb 22 12:39:20.829 trying to add new host bs-smartos-x86-64-1.10gen.cc:31302 to replica set rs1-rs2
m31301| Fri Feb 22 12:39:20.829 [initandlisten] connection accepted from 165.225.128.186:55442 #6 (4 connections now open)
Fri Feb 22 12:39:20.830 successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31302 in replica set rs1-rs2
m31302| Fri Feb 22 12:39:20.830 [initandlisten] connection accepted from 165.225.128.186:56338 #5 (4 connections now open)
m31300| Fri Feb 22 12:39:20.830 [initandlisten] connection accepted from 165.225.128.186:39390 #15 (10 connections now open)
m31300| Fri Feb 22 12:39:20.830 [conn13] end connection 165.225.128.186:46376 (9 connections now open)
Fri Feb 22 12:39:20.830 Primary for replica set rs1-rs2 changed to bs-smartos-x86-64-1.10gen.cc:31300
m31301| Fri Feb 22 12:39:20.831 [initandlisten] connection accepted from 165.225.128.186:59903 #7 (5 connections now open)
m31302| Fri Feb 22 12:39:20.832 [initandlisten] connection accepted from 165.225.128.186:39445 #6 (5 connections now open)
Fri Feb 22 12:39:20.832 replica set monitor for replica set rs1-rs2 started, address is rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
Resetting db path '/data/db/rs1-config0'
Fri Feb 22 12:39:20.839 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongod --port 29000 --dbpath /data/db/rs1-config0 --configsvr --setParameter enableTestCommands=1
m29000| Fri Feb 22 12:39:20.932 [initandlisten] MongoDB starting : pid=16327 port=29000 dbpath=/data/db/rs1-config0 master=1 64-bit host=bs-smartos-x86-64-1.10gen.cc
m29000| Fri Feb 22 12:39:20.933 [initandlisten]
m29000| Fri Feb 22 12:39:20.933 [initandlisten] ** NOTE: your operating system version does not support the method that MongoDB
m29000| Fri Feb 22 12:39:20.933 [initandlisten] ** uses to detect impending page faults.
m29000| Fri Feb 22 12:39:20.933 [initandlisten] ** This may result in slower performance for certain use cases
m29000| Fri Feb 22 12:39:20.933 [initandlisten]
m29000| Fri Feb 22 12:39:20.933 [initandlisten] db version v2.4.0-rc1-pre-, pdfile version 4.5
m29000| Fri Feb 22 12:39:20.933 [initandlisten] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m29000| Fri Feb 22 12:39:20.933 [initandlisten] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m29000| Fri Feb 22 12:39:20.933 [initandlisten] allocator: system
m29000| Fri Feb 22 12:39:20.933 [initandlisten] options: { configsvr: true, dbpath: "/data/db/rs1-config0", port: 29000, setParameter: [ "enableTestCommands=1" ] }
m29000| Fri Feb 22 12:39:20.933 [initandlisten] journal dir=/data/db/rs1-config0/journal
m29000| Fri Feb 22 12:39:20.933 [initandlisten] recover : no journal files present, no recovery needed
m29000| Fri Feb 22 12:39:20.949 [FileAllocator] allocating new datafile /data/db/rs1-config0/local.ns, filling with zeroes...
m29000| Fri Feb 22 12:39:20.949 [FileAllocator] creating directory /data/db/rs1-config0/_tmp
m29000| Fri Feb 22 12:39:20.949 [FileAllocator] done allocating datafile /data/db/rs1-config0/local.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 12:39:20.949 [FileAllocator] allocating new datafile /data/db/rs1-config0/local.0, filling with zeroes...
m29000| Fri Feb 22 12:39:20.949 [FileAllocator] done allocating datafile /data/db/rs1-config0/local.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 12:39:20.953 [initandlisten] ******
m29000| Fri Feb 22 12:39:20.953 [initandlisten] creating replication oplog of size: 5MB...
m29000| Fri Feb 22 12:39:20.958 [initandlisten] ******
m29000| Fri Feb 22 12:39:20.958 [initandlisten] waiting for connections on port 29000
m29000| Fri Feb 22 12:39:20.958 [websvr] admin web console waiting for connections on port 30000
m29000| Fri Feb 22 12:39:21.041 [initandlisten] connection accepted from 127.0.0.1:60420 #1 (1 connection now open)
"bs-smartos-x86-64-1.10gen.cc:29000"
m29000| Fri Feb 22 12:39:21.042 [initandlisten] connection accepted from 165.225.128.186:63105 #2 (2 connections now open)
ShardingTest rs1 : { "config" : "bs-smartos-x86-64-1.10gen.cc:29000", "shards" : [ connection to rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102, connection to rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202, connection to rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 ] }
Fri Feb 22 12:39:21.048 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30999 --configdb bs-smartos-x86-64-1.10gen.cc:29000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30999| Fri Feb 22 12:39:21.067 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30999| Fri Feb 22 12:39:21.068 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=16329 port=30999 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30999| Fri Feb 22 12:39:21.068 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30999| Fri Feb 22 12:39:21.068 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30999| Fri Feb 22 12:39:21.068 [mongosMain] options: { chunkSize: 1, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30999, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30999| Fri Feb 22 12:39:21.068 [mongosMain] config string : bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:39:21.068 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:39:21.069 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.070 [mongosMain] connected connection!
m29000| Fri Feb 22 12:39:21.070 [initandlisten] connection accepted from 165.225.128.186:56038 #3 (3 connections now open)
m30999| Fri Feb 22 12:39:21.071 BackgroundJob starting: CheckConfigServers
m30999| Fri Feb 22 12:39:21.071 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:39:21.071 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 12:39:21.071 [initandlisten] connection accepted from 165.225.128.186:64182 #4 (4 connections now open)
m30999| Fri Feb 22 12:39:21.071 [mongosMain] connected connection!
m29000| Fri Feb 22 12:39:21.072 [conn4] CMD fsync: sync:1 lock:0
m30999| Fri Feb 22 12:39:21.077 [mongosMain] created new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:39:21.078 [mongosMain] trying to acquire new distributed lock for configUpgrade on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:21.078 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 (sleeping for 30000ms)
m30999| Fri Feb 22 12:39:21.078 [mongosMain] inserting initial doc in config.locks for lock configUpgrade
m29000| Fri Feb 22 12:39:21.078 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.ns, filling with zeroes...
m30999| Fri Feb 22 12:39:21.078 [mongosMain] about to acquire distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:mongosMain:5758",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:39:21 2013" },
m30999| "why" : "upgrading config database to new format v4",
m30999| "ts" : { "$oid" : "512766f911fb11ce1f290bdf" } }
m30999| { "_id" : "configUpgrade",
m30999| "state" : 0 }
m29000| Fri Feb 22 12:39:21.078 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.ns, size: 16MB, took 0 secs
m29000| Fri Feb 22 12:39:21.078 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.0, filling with zeroes...
m29000| Fri Feb 22 12:39:21.079 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.0, size: 16MB, took 0 secs
m29000| Fri Feb 22 12:39:21.079 [FileAllocator] allocating new datafile /data/db/rs1-config0/config.1, filling with zeroes...
m29000| Fri Feb 22 12:39:21.079 [FileAllocator] done allocating datafile /data/db/rs1-config0/config.1, size: 32MB, took 0 secs
m29000| Fri Feb 22 12:39:21.082 [conn4] build index config.locks { _id: 1 }
m29000| Fri Feb 22 12:39:21.082 [conn4] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:39:21.084 [conn3] build index config.lockpings { _id: 1 }
m29000| Fri Feb 22 12:39:21.086 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:39:21.087 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 12:39:21 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838', sleeping for 30000ms
m29000| Fri Feb 22 12:39:21.087 [conn3] build index config.lockpings { ping: new Date(1) }
m29000| Fri Feb 22 12:39:21.088 [conn3] build index done. scanned 1 total records. 0 secs
m30999| Fri Feb 22 12:39:21.088 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 512766f911fb11ce1f290bdf
m30999| Fri Feb 22 12:39:21.091 [mongosMain] starting upgrade of config server from v0 to v4
m30999| Fri Feb 22 12:39:21.091 [mongosMain] starting next upgrade step from v0 to v4
m30999| Fri Feb 22 12:39:21.091 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:21-512766f911fb11ce1f290be0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536761091), what: "starting upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m29000| Fri Feb 22 12:39:21.092 [conn4] build index config.changelog { _id: 1 }
m29000| Fri Feb 22 12:39:21.092 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:39:21.093 [mongosMain] writing initial config version at v4
m29000| Fri Feb 22 12:39:21.093 [conn4] build index config.version { _id: 1 }
m29000| Fri Feb 22 12:39:21.094 [conn4] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:39:21.095 [mongosMain] about to log new metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:21-512766f911fb11ce1f290be2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "N/A", time: new Date(1361536761095), what: "finished upgrade of config database", ns: "config.version", details: { from: 0, to: 4 } }
m30999| Fri Feb 22 12:39:21.095 [mongosMain] upgrade of config server to v4 successful
m30999| Fri Feb 22 12:39:21.095 [mongosMain] distributed lock 'configUpgrade/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked.
m29000| Fri Feb 22 12:39:21.097 [conn3] build index config.settings { _id: 1 }
m30999| Fri Feb 22 12:39:21.098 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:39:21.098 BackgroundJob starting: Balancer
m30999| Fri Feb 22 12:39:21.098 BackgroundJob starting: cursorTimeout
m30999| Fri Feb 22 12:39:21.098 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30999| Fri Feb 22 12:39:21.098 BackgroundJob starting: PeriodicTask::Runner
m30999| Fri Feb 22 12:39:21.098 [Balancer] about to contact config servers and shards
m30999| Fri Feb 22 12:39:21.098 [websvr] admin web console waiting for connections on port 31999
m30999| Fri Feb 22 12:39:21.098 [mongosMain] waiting for connections on port 30999
m29000| Fri Feb 22 12:39:21.098 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 12:39:21.099 [conn3] build index config.chunks { _id: 1 }
m29000| Fri Feb 22 12:39:21.100 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 12:39:21.101 [conn3] info: creating collection config.chunks on add index
m29000| Fri Feb 22 12:39:21.101 [conn3] build index config.chunks { ns: 1, min: 1 }
m29000| Fri Feb 22 12:39:21.102 [conn3] build index done. scanned 0 total records. 0.001 secs
m29000| Fri Feb 22 12:39:21.102 [conn3] build index config.chunks { ns: 1, shard: 1, min: 1 }
m29000| Fri Feb 22 12:39:21.103 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:39:21.103 [conn3] build index config.chunks { ns: 1, lastmod: 1 }
m29000| Fri Feb 22 12:39:21.104 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:39:21.104 [conn3] build index config.shards { _id: 1 }
m29000| Fri Feb 22 12:39:21.105 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:39:21.105 [conn3] info: creating collection config.shards on add index
m29000| Fri Feb 22 12:39:21.105 [conn3] build index config.shards { host: 1 }
m29000| Fri Feb 22 12:39:21.107 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:39:21.107 [Balancer] config servers and shards contacted successfully
m30999| Fri Feb 22 12:39:21.107 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30999 started at Feb 22 12:39:21
m30999| Fri Feb 22 12:39:21.107 [Balancer] created new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30999| Fri Feb 22 12:39:21.107 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m29000| Fri Feb 22 12:39:21.108 [conn3] build index config.mongos { _id: 1 }
m30999| Fri Feb 22 12:39:21.107 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.108 [Balancer] connected connection!
m29000| Fri Feb 22 12:39:21.108 [initandlisten] connection accepted from 165.225.128.186:41457 #5 (5 connections now open)
m29000| Fri Feb 22 12:39:21.109 [conn3] build index done. scanned 0 total records. 0.001 secs
m30999| Fri Feb 22 12:39:21.109 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:21.109 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:21.110 [Balancer] inserting initial doc in config.locks for lock balancer
m30999| Fri Feb 22 12:39:21.110 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:39:21 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512766f911fb11ce1f290be4" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0 }
m30999| Fri Feb 22 12:39:21.111 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 512766f911fb11ce1f290be4
m30999| Fri Feb 22 12:39:21.111 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:21.111 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:39:21.111 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:39:21.111 [Balancer] no collections to balance
m30999| Fri Feb 22 12:39:21.111 [Balancer] no need to move any chunk
m30999| Fri Feb 22 12:39:21.111 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:39:21.111 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked.
m30999| Fri Feb 22 12:39:21.256 [mongosMain] connection accepted from 127.0.0.1:44251 #1 (1 connection now open)
Fri Feb 22 12:39:21.261 shell: started program /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos --port 30998 --configdb bs-smartos-x86-64-1.10gen.cc:29000 -v --chunkSize 1 --setParameter enableTestCommands=1
m30998| Fri Feb 22 12:39:21.282 warning: running with 1 config server should be done only for testing purposes and is not recommended for production
m30998| Fri Feb 22 12:39:21.282 [mongosMain] MongoS version 2.4.0-rc1-pre- starting: pid=16330 port=30998 64-bit host=bs-smartos-x86-64-1.10gen.cc (--help for usage)
m30998| Fri Feb 22 12:39:21.282 [mongosMain] git version: 420e61e130be8ae5f4d2d283e4f84711121dd8c0
m30998| Fri Feb 22 12:39:21.282 [mongosMain] build info: SunOS bs-smartos-x86-64-1.10gen.cc 5.11 joyent_20120424T232010Z i86pc BOOST_LIB_VERSION=1_49
m30998| Fri Feb 22 12:39:21.282 [mongosMain] options: { chunkSize: 1, configdb: "bs-smartos-x86-64-1.10gen.cc:29000", port: 30998, setParameter: [ "enableTestCommands=1" ], verbose: true }
m30998| Fri Feb 22 12:39:21.283 [mongosMain] config string : bs-smartos-x86-64-1.10gen.cc:29000
m30998| Fri Feb 22 12:39:21.283 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30998| Fri Feb 22 12:39:21.284 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:21.284 [mongosMain] connected connection!
m29000| Fri Feb 22 12:39:21.284 [initandlisten] connection accepted from 165.225.128.186:52660 #6 (6 connections now open)
m30998| Fri Feb 22 12:39:21.291 BackgroundJob starting: CheckConfigServers
m30998| Fri Feb 22 12:39:21.291 [mongosMain] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30998| Fri Feb 22 12:39:21.291 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:21.291 [mongosMain] connected connection!
m29000| Fri Feb 22 12:39:21.291 [initandlisten] connection accepted from 165.225.128.186:43667 #7 (7 connections now open)
m30998| Fri Feb 22 12:39:21.292 [mongosMain] MaxChunkSize: 1
m30998| Fri Feb 22 12:39:21.293 [websvr] fd limit hard:65536 soft:1024 max conn: 819
m30998| Fri Feb 22 12:39:21.293 BackgroundJob starting: Balancer
m30998| Fri Feb 22 12:39:21.293 BackgroundJob starting: cursorTimeout
m30998| Fri Feb 22 12:39:21.294 [Balancer] about to contact config servers and shards
m30998| Fri Feb 22 12:39:21.294 BackgroundJob starting: PeriodicTask::Runner
m30998| Fri Feb 22 12:39:21.294 [mongosMain] fd limit hard:65536 soft:1024 max conn: 819
m30998| Fri Feb 22 12:39:21.294 [Balancer] config servers and shards contacted successfully
m30998| Fri Feb 22 12:39:21.294 [Balancer] balancer id: bs-smartos-x86-64-1.10gen.cc:30998 started at Feb 22 12:39:21
m30998| Fri Feb 22 12:39:21.294 [websvr] admin web console waiting for connections on port 31998
m30998| Fri Feb 22 12:39:21.294 [Balancer] created new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : 0 )
m30998| Fri Feb 22 12:39:21.294 [mongosMain] waiting for connections on port 30998
m30998| Fri Feb 22 12:39:21.294 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30998| Fri Feb 22 12:39:21.294 BackgroundJob starting: ConnectBG
m29000| Fri Feb 22 12:39:21.294 [initandlisten] connection accepted from 165.225.128.186:40858 #8 (8 connections now open)
m30998| Fri Feb 22 12:39:21.294 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:21.295 [Balancer] Refreshing MaxChunkSize: 1
m30998| Fri Feb 22 12:39:21.296 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 )
m30998| Fri Feb 22 12:39:21.296 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 (sleeping for 30000ms)
m30998| Fri Feb 22 12:39:21.296 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:
m30998| { "state" : 1,
m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758",
m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838",
m30998| "when" : { "$date" : "Fri Feb 22 12:39:21 2013" },
m30998| "why" : "doing balance round",
m30998| "ts" : { "$oid" : "512766f9fed0a5416d51b831" } }
m30998| { "_id" : "balancer",
m30998| "state" : 0,
m30998| "ts" : { "$oid" : "512766f911fb11ce1f290be4" } }
m30998| Fri Feb 22 12:39:21.297 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 12:39:21 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838', sleeping for 30000ms
m30998| Fri Feb 22 12:39:21.297 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 512766f9fed0a5416d51b831
m30998| Fri Feb 22 12:39:21.297 [Balancer] *** start balancing round
m30998| Fri Feb 22 12:39:21.297 [Balancer] waitForDelete: 0
m30998| Fri Feb 22 12:39:21.297 [Balancer] secondaryThrottle: 1
m30998| Fri Feb 22 12:39:21.297 [Balancer] no collections to balance
m30998| Fri Feb 22 12:39:21.297 [Balancer] no need to move any chunk
m30998| Fri Feb 22 12:39:21.297 [Balancer] *** end of balancing round
m30998| Fri Feb 22 12:39:21.298 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked.
m31202| Fri Feb 22 12:39:21.379 [conn5] end connection 165.225.128.186:34417 (4 connections now open)
m31202| Fri Feb 22 12:39:21.379 [initandlisten] connection accepted from 165.225.128.186:61242 #9 (5 connections now open)
m30998| Fri Feb 22 12:39:21.462 [mongosMain] connection accepted from 127.0.0.1:40536 #1 (1 connection now open)
ShardingTest undefined going to add shard : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.464 [conn1] couldn't find database [admin] in config db
m29000| Fri Feb 22 12:39:21.465 [conn3] build index config.databases { _id: 1 }
m29000| Fri Feb 22 12:39:21.466 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:39:21.466 [conn1] put [admin] on: config:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:39:21.466 [conn1] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.466 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.467 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.467 [conn1] connected connection!
m30999| Fri Feb 22 12:39:21.467 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31100| Fri Feb 22 12:39:21.467 [initandlisten] connection accepted from 165.225.128.186:60571 #20 (10 connections now open)
m30999| Fri Feb 22 12:39:21.467 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761467), ok: 1.0 }
m30999| Fri Feb 22 12:39:21.467 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m30999| Fri Feb 22 12:39:21.467 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m30999| Fri Feb 22 12:39:21.467 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.468 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.468 [conn1] connected connection!
m31100| Fri Feb 22 12:39:21.468 [initandlisten] connection accepted from 165.225.128.186:52307 #21 (11 connections now open)
m30999| Fri Feb 22 12:39:21.468 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m30999| Fri Feb 22 12:39:21.468 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m30999| Fri Feb 22 12:39:21.468 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:21.468 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.468 [conn1] connected connection!
m30999| Fri Feb 22 12:39:21.468 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m30999| Fri Feb 22 12:39:21.468 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set rs1-rs0
m31101| Fri Feb 22 12:39:21.468 [initandlisten] connection accepted from 165.225.128.186:65502 #12 (6 connections now open)
m30999| Fri Feb 22 12:39:21.468 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.468 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.468 [conn1] connected connection!
m30999| Fri Feb 22 12:39:21.468 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set rs1-rs0
m31102| Fri Feb 22 12:39:21.468 [initandlisten] connection accepted from 165.225.128.186:57608 #13 (6 connections now open)
m30999| Fri Feb 22 12:39:21.468 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.469 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:21.469 [conn1] connected connection!
m31100| Fri Feb 22 12:39:21.469 [initandlisten] connection accepted from 165.225.128.186:54585 #22 (12 connections now open)
m30999| Fri Feb 22 12:39:21.469 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.469 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:21.469 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.469 [conn1] replicaSetChange: shard not found for set: rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.469 [conn1] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31100| Fri Feb 22 12:39:21.469 [conn20] end connection 165.225.128.186:60571 (11 connections now open)
m30999| Fri Feb 22 12:39:21.469 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761469), ok: 1.0 }
m30999| Fri Feb 22 12:39:21.470 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.470 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:21.470 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.470 [conn1] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.470 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761470), ok: 1.0 }
m30999| Fri Feb 22 12:39:21.470 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:21.470 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:21.470 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:21.470 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761470), ok: 1.0 }
m30999| Fri Feb 22 12:39:21.470 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:21.470 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 12:39:21.470 [initandlisten] connection accepted from 165.225.128.186:56331 #13 (7 connections now open)
m30999| Fri Feb 22 12:39:21.470 [conn1] connected connection!
m30999| Fri Feb 22 12:39:21.471 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:21.471 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:21.471 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:21.471 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761471), ok: 1.0 } m30999| Fri Feb 22 12:39:21.471 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:21.471 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.471 [conn1] connected connection! 
m31102| Fri Feb 22 12:39:21.471 [initandlisten] connection accepted from 165.225.128.186:41859 #14 (7 connections now open) m30999| Fri Feb 22 12:39:21.472 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:21.472 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:21.472 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:21.472 [conn1] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:21.472 BackgroundJob starting: ReplicaSetMonitorWatcher m30999| Fri Feb 22 12:39:21.472 [ReplicaSetMonitorWatcher] starting m30999| Fri Feb 22 12:39:21.472 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:21.472 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.472 [conn1] connected connection! 
m31100| Fri Feb 22 12:39:21.472 [initandlisten] connection accepted from 165.225.128.186:62589 #23 (12 connections now open) m30999| Fri Feb 22 12:39:21.473 [conn1] going to add shard: { _id: "rs1-rs0", host: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } { "shardAdded" : "rs1-rs0", "ok" : 1 } ShardingTest undefined going to add shard : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.475 [conn1] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.475 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.475 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.475 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.475 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1 m31200| Fri Feb 22 12:39:21.475 [initandlisten] connection accepted from 165.225.128.186:45086 #18 (10 connections now open) m30999| Fri Feb 22 12:39:21.475 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761475), ok: 1.0 } m30999| Fri Feb 22 12:39:21.475 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31202", 2: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/ m30999| Fri Feb 22 12:39:21.475 [conn1] trying to add new host 
bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1 m30999| Fri Feb 22 12:39:21.475 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.475 BackgroundJob starting: ConnectBG m31200| Fri Feb 22 12:39:21.476 [initandlisten] connection accepted from 165.225.128.186:46732 #19 (11 connections now open) m30999| Fri Feb 22 12:39:21.476 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.476 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1 m30999| Fri Feb 22 12:39:21.476 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1 m30999| Fri Feb 22 12:39:21.476 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.476 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.476 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.476 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1 m30999| Fri Feb 22 12:39:21.476 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31202 to replica set rs1-rs1 m31201| Fri Feb 22 12:39:21.476 [initandlisten] connection accepted from 165.225.128.186:37702 #10 (6 connections now open) m30999| Fri Feb 22 12:39:21.476 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.476 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.476 [conn1] connected connection! 
m31202| Fri Feb 22 12:39:21.476 [initandlisten] connection accepted from 165.225.128.186:34145 #10 (6 connections now open) m30999| Fri Feb 22 12:39:21.476 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31202 in replica set rs1-rs1 m30999| Fri Feb 22 12:39:21.476 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.477 BackgroundJob starting: ConnectBG m31200| Fri Feb 22 12:39:21.477 [initandlisten] connection accepted from 165.225.128.186:54183 #20 (12 connections now open) m30999| Fri Feb 22 12:39:21.477 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.477 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.477 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.477 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.477 [conn1] replicaSetChange: shard not found for set: rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.477 [conn1] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31200| Fri Feb 22 12:39:21.477 [conn18] end connection 165.225.128.186:45086 (11 connections now open) m30999| Fri Feb 22 12:39:21.477 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761477), ok: 1.0 } m30999| Fri Feb 22 12:39:21.478 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri 
Feb 22 12:39:21.478 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.478 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.478 [conn1] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.478 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761478), ok: 1.0 } m30999| Fri Feb 22 12:39:21.478 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.478 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.478 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.478 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761478), ok: 1.0 } m30999| Fri Feb 22 12:39:21.478 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.478 BackgroundJob starting: ConnectBG m31201| Fri Feb 22 12:39:21.478 [initandlisten] connection accepted from 165.225.128.186:52904 #11 (7 connections now open) m30999| Fri Feb 22 12:39:21.478 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.479 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.479 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.479 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.479 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761479), ok: 1.0 } m30999| Fri Feb 22 12:39:21.479 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.479 BackgroundJob starting: ConnectBG m31202| Fri Feb 22 12:39:21.479 [initandlisten] connection accepted from 165.225.128.186:46538 #11 (7 connections now open) m30999| Fri Feb 22 12:39:21.479 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.479 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.479 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.479 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.479 [conn1] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.480 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.480 BackgroundJob starting: ConnectBG m31200| Fri Feb 22 12:39:21.480 [initandlisten] connection accepted from 165.225.128.186:42588 #21 (12 connections now open) m30999| Fri Feb 22 12:39:21.480 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.481 [conn1] going to add shard: { _id: "rs1-rs1", host: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202" } { "shardAdded" : "rs1-rs1", "ok" : 1 } ShardingTest undefined going to add shard : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.482 [conn1] starting new replica set monitor for replica set rs1-rs2 with seed of bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.482 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.482 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.483 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.483 [conn1] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31300 for replica set rs1-rs2 m31300| Fri Feb 22 12:39:21.483 [initandlisten] connection accepted from 165.225.128.186:56818 #16 (10 connections now open) m30999| Fri Feb 22 12:39:21.483 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761483), ok: 1.0 } m30999| Fri Feb 22 12:39:21.483 [conn1] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31300", 1: "bs-smartos-x86-64-1.10gen.cc:31302", 2: "bs-smartos-x86-64-1.10gen.cc:31301" } from rs1-rs2/ m30999| Fri Feb 22 12:39:21.483 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31300 to replica set rs1-rs2 m30999| Fri Feb 22 12:39:21.483 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.483 BackgroundJob starting: ConnectBG m31300| Fri Feb 22 12:39:21.483 [initandlisten] connection accepted from 165.225.128.186:59167 #17 (11 connections now open) m30999| Fri Feb 22 12:39:21.483 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.483 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31300 in replica set rs1-rs2 m30999| Fri Feb 22 12:39:21.483 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31301 to replica set rs1-rs2 m30999| Fri Feb 22 12:39:21.483 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.483 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.484 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.484 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31301 in replica set rs1-rs2 m30999| Fri Feb 22 12:39:21.484 [conn1] trying to add new host bs-smartos-x86-64-1.10gen.cc:31302 to replica set rs1-rs2 m30999| Fri Feb 22 12:39:21.484 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31302 m31301| Fri Feb 22 12:39:21.484 [initandlisten] connection accepted from 165.225.128.186:35151 #8 (6 connections now open) m30999| Fri Feb 22 12:39:21.484 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.484 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.484 [conn1] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31302 in replica set rs1-rs2 m31302| Fri Feb 22 12:39:21.484 [initandlisten] connection accepted from 165.225.128.186:36823 #7 (6 connections now open) m30999| Fri Feb 22 12:39:21.484 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.484 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.484 [conn1] connected connection! 
m31300| Fri Feb 22 12:39:21.484 [initandlisten] connection accepted from 165.225.128.186:52910 #18 (12 connections now open) m30999| Fri Feb 22 12:39:21.485 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.485 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.485 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.485 [conn1] replicaSetChange: shard not found for set: rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.485 [conn1] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31300| Fri Feb 22 12:39:21.485 [conn16] end connection 165.225.128.186:56818 (11 connections now open) m30999| Fri Feb 22 12:39:21.485 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761485), ok: 1.0 } m30999| Fri Feb 22 12:39:21.485 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.485 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.485 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.485 [conn1] Primary for replica set rs1-rs2 changed to bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.485 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, 
hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761485), ok: 1.0 } m30999| Fri Feb 22 12:39:21.486 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.486 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.486 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.486 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761486), ok: 1.0 } m30999| Fri Feb 22 12:39:21.486 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.486 BackgroundJob starting: ConnectBG m31301| Fri Feb 22 12:39:21.486 [initandlisten] connection accepted from 165.225.128.186:58453 #9 (7 connections now open) m30999| Fri Feb 22 12:39:21.486 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.486 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.486 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.486 [conn1] dbclient_rs nodes[2].ok = false bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.487 [conn1] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536761487), ok: 1.0 } m30999| Fri Feb 22 12:39:21.487 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.487 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.487 [conn1] connected connection! 
m31302| Fri Feb 22 12:39:21.487 [initandlisten] connection accepted from 165.225.128.186:49506 #8 (7 connections now open) m30999| Fri Feb 22 12:39:21.487 [conn1] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.487 [conn1] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.487 [conn1] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.487 [conn1] replica set monitor for replica set rs1-rs2 started, address is rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.487 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.488 BackgroundJob starting: ConnectBG m31300| Fri Feb 22 12:39:21.488 [initandlisten] connection accepted from 165.225.128.186:61908 #19 (12 connections now open) m30999| Fri Feb 22 12:39:21.488 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.489 [conn1] going to add shard: { _id: "rs1-rs2", host: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302" } { "shardAdded" : "rs1-rs2", "ok" : 1 } m30999| Fri Feb 22 12:39:21.490 [conn1] couldn't find database [test] in config db m30999| Fri Feb 22 12:39:21.492 [conn1] best shard for new allocation is shard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 mapped: 128 writeLock: 0 version: 2.4.0-rc1-pre- m30999| Fri Feb 22 12:39:21.492 [conn1] put [test] on: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:21.492 [conn1] enabling sharding on: test m30999| Fri Feb 22 12:39:21.493 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31100 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.493 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31101 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.493 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:21.493 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31102 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.493 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:21.493 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:21.493 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:21.493 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:21.493 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 
12:39:21.493 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:21.493 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.494 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.494 [conn1] connected connection! m30999| Fri Feb 22 12:39:21.494 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31100 m31100| Fri Feb 22 12:39:21.494 [initandlisten] connection accepted from 165.225.128.186:38124 #24 (13 connections now open) m30999| Fri Feb 22 12:39:21.494 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.494 BackgroundJob starting: ConnectBG m31100| Fri Feb 22 12:39:21.494 [initandlisten] connection accepted from 165.225.128.186:64969 #25 (14 connections now open) m30999| Fri Feb 22 12:39:21.494 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31100] connected connection! m30999| Fri Feb 22 12:39:21.494 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31101] connected connection! m31101| Fri Feb 22 12:39:21.494 [initandlisten] connection accepted from 165.225.128.186:47831 #14 (8 connections now open) m30999| Fri Feb 22 12:39:21.494 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31102] connected connection! 
m31102| Fri Feb 22 12:39:21.494 [initandlisten] connection accepted from 165.225.128.186:57040 #15 (8 connections now open) m30999| Fri Feb 22 12:39:21.494 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31200 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.494 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31201 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.494 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.495 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.495 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31202 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.495 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.495 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.495 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.495 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:21.495 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31202] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:21.495 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.495 [conn1] connected connection! 
m31200| Fri Feb 22 12:39:21.495 [initandlisten] connection accepted from 165.225.128.186:57541 #22 (13 connections now open) m30999| Fri Feb 22 12:39:21.495 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:21.495 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.495 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.495 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31200] connected connection! m31200| Fri Feb 22 12:39:21.495 [initandlisten] connection accepted from 165.225.128.186:56445 #23 (14 connections now open) m30999| Fri Feb 22 12:39:21.495 BackgroundJob starting: ConnectBG m31201| Fri Feb 22 12:39:21.495 [initandlisten] connection accepted from 165.225.128.186:39313 #12 (8 connections now open) m30999| Fri Feb 22 12:39:21.495 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31201] connected connection! m30999| Fri Feb 22 12:39:21.495 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31202] connected connection! 
m31202| Fri Feb 22 12:39:21.495 [initandlisten] connection accepted from 165.225.128.186:61769 #12 (8 connections now open) m30999| Fri Feb 22 12:39:21.496 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31300 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.496 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31301 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.496 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.496 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:31302 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.496 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.496 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31300] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.496 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.496 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.496 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31301] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:21.496 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31302] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:21.496 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.496 BackgroundJob starting: ConnectBG m31300| Fri Feb 22 12:39:21.496 [initandlisten] connection accepted from 165.225.128.186:42772 #20 (13 connections now open) m30999| Fri Feb 22 12:39:21.496 [conn1] connected connection! 
m30999| Fri Feb 22 12:39:21.496 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:21.497 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.497 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31300] connected connection! m31300| Fri Feb 22 12:39:21.497 [initandlisten] connection accepted from 165.225.128.186:33938 #21 (14 connections now open) m30999| Fri Feb 22 12:39:21.497 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.497 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31302] connected connection! m31302| Fri Feb 22 12:39:21.497 [initandlisten] connection accepted from 165.225.128.186:39713 #9 (8 connections now open) m31301| Fri Feb 22 12:39:21.497 [initandlisten] connection accepted from 165.225.128.186:40385 #10 (8 connections now open) m30999| Fri Feb 22 12:39:21.497 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:31301] connected connection! m30999| Fri Feb 22 12:39:21.497 [conn1] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 12:39:21.497 BackgroundJob starting: ConnectBG m30999| Fri Feb 22 12:39:21.498 [conn1] connected connection! m29000| Fri Feb 22 12:39:21.498 [initandlisten] connection accepted from 165.225.128.186:56764 #9 (9 connections now open) m30999| Fri Feb 22 12:39:21.498 [conn1] creating WriteBackListener for: bs-smartos-x86-64-1.10gen.cc:29000 serverID: 512766f911fb11ce1f290be3 m30999| Fri Feb 22 12:39:21.498 [conn1] initializing shard connection to bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 12:39:21.498 BackgroundJob starting: WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000 m30999| Fri Feb 22 12:39:21.498 [WriteBackListener-bs-smartos-x86-64-1.10gen.cc:29000] bs-smartos-x86-64-1.10gen.cc:29000 is not a shard node { "_id" : "chunksize", "value" : 1 } m31100| Fri Feb 22 12:39:21.500 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.ns, filling with zeroes... 
m31100| Fri Feb 22 12:39:21.500 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.ns, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:39:21.500 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.0, filling with zeroes...
m31100| Fri Feb 22 12:39:21.501 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:39:21.504 [conn24] build index test.foo { _id: 1 }
m31100| Fri Feb 22 12:39:21.506 [conn24] build index done. scanned 0 total records. 0.001 secs
m31101| Fri Feb 22 12:39:21.507 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.ns, filling with zeroes...
m31102| Fri Feb 22 12:39:21.507 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/test.ns, filling with zeroes...
m31101| Fri Feb 22 12:39:21.508 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.ns, size: 16MB, took 0 secs
m31102| Fri Feb 22 12:39:21.508 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/test.ns, size: 16MB, took 0 secs
m31101| Fri Feb 22 12:39:21.508 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.0, filling with zeroes...
m31102| Fri Feb 22 12:39:21.508 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/test.0, filling with zeroes...
m31101| Fri Feb 22 12:39:21.508 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.0, size: 16MB, took 0 secs
m31102| Fri Feb 22 12:39:21.508 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/test.0, size: 16MB, took 0 secs
m31101| Fri Feb 22 12:39:21.512 [repl writer worker 1] build index test.foo { _id: 1 }
m31102| Fri Feb 22 12:39:21.513 [repl writer worker 1] build index test.foo { _id: 1 }
m31101| Fri Feb 22 12:39:21.514 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31102| Fri Feb 22 12:39:21.515 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31100| Fri Feb 22 12:39:22.028 [FileAllocator] allocating new datafile /data/db/rs1-rs0-0/test.1, filling with zeroes...
m31100| Fri Feb 22 12:39:22.028 [FileAllocator] done allocating datafile /data/db/rs1-rs0-0/test.1, size: 32MB, took 0 secs
m31101| Fri Feb 22 12:39:22.039 [FileAllocator] allocating new datafile /data/db/rs1-rs0-1/test.1, filling with zeroes...
m31101| Fri Feb 22 12:39:22.040 [FileAllocator] done allocating datafile /data/db/rs1-rs0-1/test.1, size: 32MB, took 0 secs
m31102| Fri Feb 22 12:39:22.048 [FileAllocator] allocating new datafile /data/db/rs1-rs0-2/test.1, filling with zeroes...
m31102| Fri Feb 22 12:39:22.048 [FileAllocator] done allocating datafile /data/db/rs1-rs0-2/test.1, size: 32MB, took 0 secs
m31302| Fri Feb 22 12:39:22.092 [conn2] end connection 165.225.128.186:60943 (7 connections now open)
m31302| Fri Feb 22 12:39:22.093 [initandlisten] connection accepted from 165.225.128.186:43980 #10 (8 connections now open)
m31300| Fri Feb 22 12:39:22.093 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is now in state SECONDARY
m31302| Fri Feb 22 12:39:22.300 [conn3] end connection 165.225.128.186:37652 (7 connections now open)
m31302| Fri Feb 22 12:39:22.301 [initandlisten] connection accepted from 165.225.128.186:63208 #11 (8 connections now open)
m31301| Fri Feb 22 12:39:22.301 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31302 is now in state SECONDARY
m30999| Fri Feb 22 12:39:22.302 [conn1] CMD: shardcollection: { shardcollection: "test.foo", key: { _id: 1.0 } }
m30999| Fri Feb 22 12:39:22.302 [conn1] enable sharding on: test.foo with shard key: { _id: 1.0 }
m31100| Fri Feb 22 12:39:22.303 [conn23] request split points lookup for chunk test.foo { : MinKey } -->> { : MaxKey }
m30999| Fri Feb 22 12:39:22.306 [conn1] going to create 41 chunk(s) for: test.foo using new epoch 512766fa11fb11ce1f290be5
m30999| Fri Feb 22 12:39:22.315 [conn1] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 1|40||512766fa11fb11ce1f290be5 based on: (empty)
m29000| Fri Feb 22 12:39:22.315 [conn3] build index config.collections { _id: 1 }
m29000| Fri Feb 22 12:39:22.316 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:39:22.316 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|40, versionEpoch: ObjectId('512766fa11fb11ce1f290be5'), serverID: ObjectId('512766f911fb11ce1f290be3'), shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x118b8b0 2
m30999| Fri Feb 22 12:39:22.317 [conn1] setShardVersion failed!
m30999| { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ns: "test.foo", need_authoritative: true, ok: 0.0, errmsg: "first time for collection 'test.foo'" }
m30999| Fri Feb 22 12:39:22.317 [conn1] setShardVersion rs1-rs0 bs-smartos-x86-64-1.10gen.cc:31100 test.foo { setShardVersion: "test.foo", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", version: Timestamp 1000|40, versionEpoch: ObjectId('512766fa11fb11ce1f290be5'), serverID: ObjectId('512766f911fb11ce1f290be3'), authoritative: true, shard: "rs1-rs0", shardHost: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102" } 0x118b8b0 2
m31100| Fri Feb 22 12:39:22.317 [conn24] no current chunk manager found for this shard, will initialize
m29000| Fri Feb 22 12:39:22.318 [initandlisten] connection accepted from 165.225.128.186:63553 #10 (10 connections now open)
m30999| Fri Feb 22 12:39:22.320 [conn1] setShardVersion success: { oldVersion: Timestamp 0|0, oldVersionEpoch: ObjectId('000000000000000000000000'), ok: 1.0 }
{ "rs1-rs0" : 41, "rs1-rs1" : 0, "rs1-rs2" : 0 }
total: 41 min: 0 max: 41
{ "rs1-rs0" : 41, "rs1-rs1" : 0, "rs1-rs2" : 0 }
total: 41 min: 0 max: 41
41
{ "rs1-rs0" : 41, "rs1-rs1" : 0, "rs1-rs2" : 0 }
total: 41 min: 0 max: 41
m31202| Fri Feb 22 12:39:23.552 [conn6] end connection 165.225.128.186:64824 (7 connections now open)
m31202| Fri Feb 22 12:39:23.552 [initandlisten] connection accepted from 165.225.128.186:52858 #13 (8 connections now open)
m31201| Fri Feb 22 12:39:23.777 [conn6] end connection 165.225.128.186:37814 (7 connections now open)
m31201| Fri Feb 22 12:39:23.778 [initandlisten] connection accepted from 165.225.128.186:56710 #13 (8 connections now open)
m31101| Fri Feb 22 12:39:25.002 [conn8] end connection 165.225.128.186:62367 (7 connections now open)
m31101| Fri Feb 22 12:39:25.002 [initandlisten] connection accepted from 165.225.128.186:51573 #15 (8 connections now open)
m30999| Fri Feb 22 12:39:27.112 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:27.113 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:27.113 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999| "when" : { "$date" : "Fri Feb 22 12:39:27 2013" },
m30999| "why" : "doing balance round",
m30999| "ts" : { "$oid" : "512766ff11fb11ce1f290be6" } }
m30999| { "_id" : "balancer",
m30999| "state" : 0,
m30999| "ts" : { "$oid" : "512766f9fed0a5416d51b831" } }
m30999| Fri Feb 22 12:39:27.114 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 512766ff11fb11ce1f290be6
m30999| Fri Feb 22 12:39:27.114 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:27.114 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:39:27.114 [Balancer] secondaryThrottle: 1
m29000| Fri Feb 22 12:39:27.116 [conn3] build index config.tags { _id: 1 }
m29000| Fri Feb 22 12:39:27.116 [conn3] build index done. scanned 0 total records. 0 secs
m29000| Fri Feb 22 12:39:27.116 [conn3] info: creating collection config.tags on add index
m29000| Fri Feb 22 12:39:27.116 [conn3] build index config.tags { ns: 1, min: 1 }
m29000| Fri Feb 22 12:39:27.117 [conn3] build index done. scanned 0 total records. 0 secs
m30999| Fri Feb 22 12:39:27.117 [Balancer] rs1-rs2 has more chunks me:0 best: rs1-rs1:0
m30999| Fri Feb 22 12:39:27.117 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:39:27.117 [Balancer] donor : rs1-rs0 chunks on 41
m30999| Fri Feb 22 12:39:27.117 [Balancer] receiver : rs1-rs1 chunks on 0
m30999| Fri Feb 22 12:39:27.117 [Balancer] threshold : 4
m30999| Fri Feb 22 12:39:27.117 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_MinKey", lastmod: Timestamp 1000|0, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: MinKey }, max: { _id: 51.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag []
m30999| Fri Feb 22 12:39:27.117 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|0||000000000000000000000000min: { _id: MinKey }max: { _id: 51.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m31100| Fri Feb 22 12:39:27.118 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: MinKey }, max: { _id: 51.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false }
m29000| Fri Feb 22 12:39:27.118 [initandlisten] connection accepted from 165.225.128.186:45190 #11 (11 connections now open)
m31100| Fri Feb 22 12:39:27.119 [LockPinger] creating distributed lock ping thread for bs-smartos-x86-64-1.10gen.cc:29000 and process bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249 (sleeping for 30000ms)
m31100| Fri Feb 22 12:39:27.120 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 512766fffd440aee3dad4aba
m31100| Fri Feb 22 12:39:27.120 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:27-512766fffd440aee3dad4abb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536767120), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 12:39:27.121 [conn23] moveChunk request accepted at version 1|40||512766fa11fb11ce1f290be5
m31100| Fri Feb 22 12:39:27.121 [conn23] moveChunk number of documents: 51
m31100| Fri Feb 22 12:39:27.121 [conn23] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m31100| Fri Feb 22 12:39:27.122 [conn23] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m31200| Fri Feb 22 12:39:27.122 [initandlisten] connection accepted from 165.225.128.186:50118 #24 (15 connections now open)
m31100| Fri Feb 22 12:39:27.122 [conn23] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31202", 2: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
m31100| Fri Feb 22 12:39:27.122 [conn23] trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
m31100| Fri Feb 22 12:39:27.122 [conn23] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
m31200| Fri Feb 22 12:39:27.122 [initandlisten] connection accepted from 165.225.128.186:62358 #25 (16 connections now open)
m31100| Fri Feb 22 12:39:27.122 [conn23] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m31100| Fri Feb 22 12:39:27.123 [conn23] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m31100| Fri Feb 22 12:39:27.123 [conn23] trying to add new host bs-smartos-x86-64-1.10gen.cc:31202 to replica set rs1-rs1
m31201| Fri Feb 22 12:39:27.123 [initandlisten] connection accepted from 165.225.128.186:50576 #14 (9 connections now open)
m31100| Fri Feb 22 12:39:27.123 [conn23] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31202 in replica set rs1-rs1
m31202| Fri Feb 22 12:39:27.123 [initandlisten] connection accepted from 165.225.128.186:64561 #14 (9 connections now open)
m31200| Fri Feb 22 12:39:27.123 [initandlisten] connection accepted from 165.225.128.186:57829 #26 (17 connections now open)
m31200| Fri Feb 22 12:39:27.123 [conn24] end connection 165.225.128.186:50118 (16 connections now open)
m31100| Fri Feb 22 12:39:27.124 [conn23] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m31201| Fri Feb 22 12:39:27.124 [initandlisten] connection accepted from 165.225.128.186:63588 #15 (10 connections now open)
m31202| Fri Feb 22 12:39:27.125 [initandlisten] connection accepted from 165.225.128.186:34833 #15 (10 connections now open)
m31100| Fri Feb 22 12:39:27.125 [conn23] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m31100| Fri Feb 22 12:39:27.125 [ReplicaSetMonitorWatcher] starting
m31200| Fri Feb 22 12:39:27.125 [initandlisten] connection accepted from 165.225.128.186:47432 #27 (17 connections now open)
m31200| Fri Feb 22 12:39:27.125 [migrateThread] starting receiving-end of migration of chunk { _id: MinKey } -> { _id: 51.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected)
m31200| Fri Feb 22 12:39:27.125 [migrateThread] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31200| Fri Feb 22 12:39:27.126 [migrateThread] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31100| Fri Feb 22 12:39:27.126 [initandlisten] connection accepted from 165.225.128.186:54836 #26 (15 connections now open)
m31200| Fri Feb 22 12:39:27.126 [migrateThread] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m31200| Fri Feb 22 12:39:27.126 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m31200| Fri Feb 22 12:39:27.126 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m31200| Fri Feb 22 12:39:27.126 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m31100| Fri Feb 22 12:39:27.126 [initandlisten] connection accepted from 165.225.128.186:49296 #27 (16 connections now open)
m31200| Fri Feb 22 12:39:27.127 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31200| Fri Feb 22 12:39:27.127 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set rs1-rs0
m31101| Fri Feb 22 12:39:27.127 [initandlisten] connection accepted from 165.225.128.186:47063 #16 (9 connections now open)
m31200| Fri Feb 22 12:39:27.127 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set rs1-rs0
m31102| Fri Feb 22 12:39:27.127 [initandlisten] connection accepted from 165.225.128.186:57979 #16 (9 connections now open)
m31100| Fri Feb 22 12:39:27.127 [initandlisten] connection accepted from 165.225.128.186:57095 #28 (17 connections now open)
m31100| Fri Feb 22 12:39:27.127 [conn26] end connection 165.225.128.186:54836 (16 connections now open)
m31200| Fri Feb 22 12:39:27.127 [migrateThread] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:39:27.128 [initandlisten] connection accepted from 165.225.128.186:41616 #17 (10 connections now open)
m31102| Fri Feb 22 12:39:27.129 [initandlisten] connection accepted from 165.225.128.186:47621 #17 (10 connections now open)
m31200| Fri Feb 22 12:39:27.129 [migrateThread] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31200| Fri Feb 22 12:39:27.129 [ReplicaSetMonitorWatcher] starting
m31100| Fri Feb 22 12:39:27.129 [initandlisten] connection accepted from 165.225.128.186:33931 #29 (17 connections now open)
m31200| Fri Feb 22 12:39:27.130 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/test.ns, filling with zeroes...
m31200| Fri Feb 22 12:39:27.130 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/test.ns, size: 16MB, took 0 secs
m31200| Fri Feb 22 12:39:27.130 [FileAllocator] allocating new datafile /data/db/rs1-rs1-0/test.0, filling with zeroes...
m31200| Fri Feb 22 12:39:27.130 [FileAllocator] done allocating datafile /data/db/rs1-rs1-0/test.0, size: 16MB, took 0 secs
m31200| Fri Feb 22 12:39:27.133 [migrateThread] build index test.foo { _id: 1 }
m31200| Fri Feb 22 12:39:27.134 [migrateThread] build index done. scanned 0 total records. 0 secs
m31200| Fri Feb 22 12:39:27.134 [migrateThread] info: creating collection test.foo on add index
m31200| Fri Feb 22 12:39:27.134 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31202| Fri Feb 22 12:39:27.135 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/test.ns, filling with zeroes...
m31201| Fri Feb 22 12:39:27.135 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/test.ns, filling with zeroes...
m31202| Fri Feb 22 12:39:27.135 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/test.ns, size: 16MB, took 0 secs
m31202| Fri Feb 22 12:39:27.135 [FileAllocator] allocating new datafile /data/db/rs1-rs1-2/test.0, filling with zeroes...
m31201| Fri Feb 22 12:39:27.135 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/test.ns, size: 16MB, took 0 secs
m31202| Fri Feb 22 12:39:27.135 [FileAllocator] done allocating datafile /data/db/rs1-rs1-2/test.0, size: 16MB, took 0 secs
m31201| Fri Feb 22 12:39:27.135 [FileAllocator] allocating new datafile /data/db/rs1-rs1-1/test.0, filling with zeroes...
m31201| Fri Feb 22 12:39:27.136 [FileAllocator] done allocating datafile /data/db/rs1-rs1-1/test.0, size: 16MB, took 0 secs
m31100| Fri Feb 22 12:39:27.136 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31202| Fri Feb 22 12:39:27.139 [repl writer worker 1] build index test.foo { _id: 1 }
m31201| Fri Feb 22 12:39:27.140 [repl writer worker 1] build index test.foo { _id: 1 }
m31202| Fri Feb 22 12:39:27.140 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31202| Fri Feb 22 12:39:27.140 [repl writer worker 1] info: creating collection test.foo on add index
m31201| Fri Feb 22 12:39:27.141 [repl writer worker 1] build index done. scanned 0 total records. 0.001 secs
m31201| Fri Feb 22 12:39:27.141 [repl writer worker 1] info: creating collection test.foo on add index
m31100| Fri Feb 22 12:39:27.146 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:27.156 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:27.166 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:27.183 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5, clonedBytes: 50300, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:27.215 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8, clonedBytes: 80480, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:27.279 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30998| Fri Feb 22 12:39:27.298 [Balancer] Refreshing MaxChunkSize: 1
m30998| Fri Feb 22 12:39:27.299 [Balancer] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.299 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.299 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.299 [Balancer] connected connection!
m31100| Fri Feb 22 12:39:27.299 [initandlisten] connection accepted from 165.225.128.186:43185 #30 (18 connections now open)
m30998| Fri Feb 22 12:39:27.299 [Balancer] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m30998| Fri Feb 22 12:39:27.299 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767299), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.300 [Balancer] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m30998| Fri Feb 22 12:39:27.300 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m30998| Fri Feb 22 12:39:27.300 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.300 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 12:39:27.300 [initandlisten] connection accepted from 165.225.128.186:54595 #31 (19 connections now open)
m30998| Fri Feb 22 12:39:27.300 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.300 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m30998| Fri Feb 22 12:39:27.300 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m30998| Fri Feb 22 12:39:27.300 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.300 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.300 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.300 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 12:39:27.300 [initandlisten] connection accepted from 165.225.128.186:49444 #18 (11 connections now open)
m30998| Fri Feb 22 12:39:27.300 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set rs1-rs0
m30998| Fri Feb 22 12:39:27.300 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.300 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.301 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.301 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set rs1-rs0
m31102| Fri Feb 22 12:39:27.301 [initandlisten] connection accepted from 165.225.128.186:62741 #18 (11 connections now open)
m30998| Fri Feb 22 12:39:27.301 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.301 BackgroundJob starting: ConnectBG
m31100| Fri Feb 22 12:39:27.301 [initandlisten] connection accepted from 165.225.128.186:41709 #32 (20 connections now open)
m30998| Fri Feb 22 12:39:27.301 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.301 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.301 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.301 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.301 [Balancer] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31100| Fri Feb 22 12:39:27.301 [conn30] end connection 165.225.128.186:43185 (19 connections now open)
m30998| Fri Feb 22 12:39:27.301 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767301), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.302 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.302 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.302 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.302 [Balancer] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.302 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767302), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.302 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.302 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.302 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.302 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767302), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.302 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.302 BackgroundJob starting: ConnectBG
m31101| Fri Feb 22 12:39:27.302 [initandlisten] connection accepted from 165.225.128.186:64889 #19 (12 connections now open)
m30998| Fri Feb 22 12:39:27.302 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.303 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.303 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.303 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.303 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767303), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.303 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.303 BackgroundJob starting: ConnectBG
m31102| Fri Feb 22 12:39:27.303 [initandlisten] connection accepted from 165.225.128.186:52563 #19 (12 connections now open)
m30998| Fri Feb 22 12:39:27.303 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.303 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.303 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30998| Fri Feb 22 12:39:27.303 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.303 [Balancer] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30998| Fri Feb 22 12:39:27.303 BackgroundJob starting: ReplicaSetMonitorWatcher
m30998| Fri Feb 22 12:39:27.303 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31100
m30998| Fri Feb 22 12:39:27.304 [ReplicaSetMonitorWatcher] starting
m30998| Fri Feb 22 12:39:27.304 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.304 [Balancer] connected connection!
m31100| Fri Feb 22 12:39:27.304 [initandlisten] connection accepted from 165.225.128.186:48709 #33 (20 connections now open)
m30998| Fri Feb 22 12:39:27.304 [Balancer] starting new replica set monitor for replica set rs1-rs1 with seed of bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.304 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.304 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.304 [Balancer] connected connection!
m31200| Fri Feb 22 12:39:27.304 [initandlisten] connection accepted from 165.225.128.186:57274 #28 (18 connections now open)
m30998| Fri Feb 22 12:39:27.304 [Balancer] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31200 for replica set rs1-rs1
m30998| Fri Feb 22 12:39:27.305 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767305), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.305 [Balancer] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31200", 1: "bs-smartos-x86-64-1.10gen.cc:31202", 2: "bs-smartos-x86-64-1.10gen.cc:31201" } from rs1-rs1/
m30998| Fri Feb 22 12:39:27.305 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31200 to replica set rs1-rs1
m30998| Fri Feb 22 12:39:27.305 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.305 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 12:39:27.305 [initandlisten] connection accepted from 165.225.128.186:41563 #29 (19 connections now open)
m30998| Fri Feb 22 12:39:27.305 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.305 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31200 in replica set rs1-rs1
m30998| Fri Feb 22 12:39:27.305 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31201 to replica set rs1-rs1
m30998| Fri Feb 22 12:39:27.305 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.305 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.305 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.305 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31201 in replica set rs1-rs1
m31201| Fri Feb 22 12:39:27.305 [initandlisten] connection accepted from 165.225.128.186:47479 #16 (11 connections now open)
m30998| Fri Feb 22 12:39:27.305 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31202 to replica set rs1-rs1
m30998| Fri Feb 22 12:39:27.305 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.306 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.306 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.306 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31202 in replica set rs1-rs1
m31202| Fri Feb 22 12:39:27.306 [initandlisten] connection accepted from 165.225.128.186:33704 #16 (11 connections now open)
m30998| Fri Feb 22 12:39:27.306 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.306 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 12:39:27.306 [initandlisten] connection accepted from 165.225.128.186:53766 #30 (20 connections now open)
m30998| Fri Feb 22 12:39:27.306 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.306 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.306 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.306 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.306 [Balancer] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m31200| Fri Feb 22 12:39:27.306 [conn28] end connection 165.225.128.186:57274 (19 connections now open)
m30998| Fri Feb 22 12:39:27.307 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767306), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.307 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.307 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.307 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.307 [Balancer] Primary for replica set rs1-rs1 changed to bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.307 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767307), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.307 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.307 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.307 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.307 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767307), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.307 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.307 BackgroundJob starting: ConnectBG
m31201| Fri Feb 22 12:39:27.307 [initandlisten] connection accepted from 165.225.128.186:53060 #17 (12 connections now open)
m30998| Fri Feb 22 12:39:27.307 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.308 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.308 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.308 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.308 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767308), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.308 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.308 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.308 [Balancer] connected connection!
m31202| Fri Feb 22 12:39:27.308 [initandlisten] connection accepted from 165.225.128.186:51084 #17 (12 connections now open)
m30998| Fri Feb 22 12:39:27.308 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.308 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30998| Fri Feb 22 12:39:27.309 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.309 [Balancer] replica set monitor for replica set rs1-rs1 started, address is rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m30998| Fri Feb 22 12:39:27.309 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31200
m30998| Fri Feb 22 12:39:27.309 BackgroundJob starting: ConnectBG
m31200| Fri Feb 22 12:39:27.309 [initandlisten] connection accepted from 165.225.128.186:44126 #31 (20 connections now open)
m30998| Fri Feb 22 12:39:27.309 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.309 [Balancer] starting new replica set monitor for replica set rs1-rs2 with seed of bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.309 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.309 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.309 [Balancer] connected connection!
m31300| Fri Feb 22 12:39:27.309 [initandlisten] connection accepted from 165.225.128.186:43394 #22 (15 connections now open)
m30998| Fri Feb 22 12:39:27.309 [Balancer] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31300 for replica set rs1-rs2
m30998| Fri Feb 22 12:39:27.310 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767310), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.310 [Balancer] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31300", 1: "bs-smartos-x86-64-1.10gen.cc:31302", 2: "bs-smartos-x86-64-1.10gen.cc:31301" } from rs1-rs2/
m30998| Fri Feb 22 12:39:27.310 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31300 to replica set rs1-rs2
m30998| Fri Feb 22 12:39:27.310 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.310 BackgroundJob starting: ConnectBG
m31300| Fri Feb 22 12:39:27.310 [initandlisten] connection accepted from 165.225.128.186:41362 #23 (16 connections now open)
m30998| Fri Feb 22 12:39:27.310 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.310 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31300 in replica set rs1-rs2
m30998| Fri Feb 22 12:39:27.310 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31301 to replica set rs1-rs2
m30998| Fri Feb 22 12:39:27.310 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.310 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.310 [Balancer] connected connection!
m31301| Fri Feb 22 12:39:27.310 [initandlisten] connection accepted from 165.225.128.186:48970 #11 (9 connections now open)
m30998| Fri Feb 22 12:39:27.310 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31301 in replica set rs1-rs2
m30998| Fri Feb 22 12:39:27.310 [Balancer] trying to add new host bs-smartos-x86-64-1.10gen.cc:31302 to replica set rs1-rs2
m30998| Fri Feb 22 12:39:27.310 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.311 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.311 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.311 [Balancer] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31302 in replica set rs1-rs2
m31302| Fri Feb 22 12:39:27.311 [initandlisten] connection accepted from 165.225.128.186:36880 #12 (9 connections now open)
m30998| Fri Feb 22 12:39:27.311 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.311 BackgroundJob starting: ConnectBG
m31300| Fri Feb 22 12:39:27.311 [initandlisten] connection accepted from 165.225.128.186:56183 #24 (17 connections now open)
m30998| Fri Feb 22 12:39:27.311 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.311 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.311 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.311 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.311 [Balancer] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m31300| Fri Feb 22 12:39:27.311 [conn22] end connection 165.225.128.186:43394 (16 connections now open)
m30998| Fri Feb 22 12:39:27.311 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767311), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.312 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.312 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.312 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.312 [Balancer] Primary for replica set rs1-rs2 changed to bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.312 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767312), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.312 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.312 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.312 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.312 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767312), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.312 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.312 BackgroundJob starting: ConnectBG
m31301| Fri Feb 22 12:39:27.312 [initandlisten] connection accepted from 165.225.128.186:53582 #12 (10 connections now open)
m30998| Fri Feb 22 12:39:27.312 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.313 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.313 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.313 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.313 [Balancer] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536767313), ok: 1.0 }
m30998| Fri Feb 22 12:39:27.313 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.313 BackgroundJob starting: ConnectBG
m31302| Fri Feb 22 12:39:27.313 [initandlisten] connection accepted from 165.225.128.186:56244 #13 (10 connections now open)
m30998| Fri Feb 22 12:39:27.313 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.313 [Balancer] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.313 [Balancer] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:27.313 [Balancer] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.313 [Balancer] replica set monitor for replica set rs1-rs2 started, address is rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:27.313 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:27.314 BackgroundJob starting: ConnectBG
m31300| Fri Feb 22 12:39:27.314 [initandlisten] connection accepted from 165.225.128.186:58248 #25 (17 connections now open)
m30998| Fri Feb 22 12:39:27.314 [Balancer] connected connection!
m30998| Fri Feb 22 12:39:27.314 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 )
m30998| Fri Feb 22 12:39:27.314 [Balancer] checking last ping for lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' against process and ping Thu Jan  1 00:00:00 1970
m30998| Fri Feb 22 12:39:27.314 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30998| Fri Feb 22 12:39:27.314 BackgroundJob starting: ConnectBG
m30998| Fri Feb 22 12:39:27.314 [Balancer] connected connection!
m29000| Fri Feb 22 12:39:27.314 [initandlisten] connection accepted from 165.225.128.186:38296 #12 (12 connections now open)
m30998| Fri Feb 22 12:39:27.315 [Balancer] could not force lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' because elapsed time 0 <= takeover time 900000
m30998| Fri Feb 22 12:39:27.315 [Balancer] skipping balancing round because another balancer is active
{ "rs1-rs0" : 41, "rs1-rs1" : 0, "rs1-rs2" : 0 } total: 41 min: 0 max: 41
m31100| Fri Feb 22 12:39:27.407 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 27, clonedBytes: 271620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31200| Fri Feb 22 12:39:27.626 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 12:39:27.626 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m31200| Fri Feb 22 12:39:27.627 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m31100| Fri Feb 22 12:39:27.663 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:27.663 [conn23] moveChunk setting version to: 2|0||512766fa11fb11ce1f290be5
m31200| Fri Feb 22 12:39:27.664 [initandlisten] connection accepted from 165.225.128.186:44415 #32 (21 connections now open)
m31200| Fri Feb 22 12:39:27.664 [conn32] Waiting for commit to finish
m31200| Fri Feb 22 12:39:27.667 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m31200| Fri Feb 22 12:39:27.667 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: MinKey } -> { _id: 51.0 }
m31200| Fri Feb 22 12:39:27.668 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:27-512766ff6bc5d04ff892a697", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536767668), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 5: 8, step2 of 5: 0, step3 of 5: 491, step4 of 5: 0, step5 of 5: 41 } }
m29000| Fri Feb 22 12:39:27.668 [initandlisten] connection accepted from 165.225.128.186:47290 #13 (13 connections now open)
m31100| Fri Feb 22 12:39:27.674 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: MinKey }, max: { _id: 51.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 51, clonedBytes: 513060, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 12:39:27.674 [conn23] moveChunk updating self version to: 2|1||512766fa11fb11ce1f290be5 through { _id: 51.0 } -> { _id: 103.0 } for collection 'test.foo'
m29000| Fri Feb 22 12:39:27.675 [initandlisten] connection accepted from 165.225.128.186:56300 #14 (14 connections now open)
m31100| Fri Feb 22 12:39:27.675 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:27-512766fffd440aee3dad4abc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536767675), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 12:39:27.675 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:27.675 [conn23] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:27.675 [conn23] forking for cleanup of chunk data
m31100| Fri Feb 22 12:39:27.675 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:27.675 [conn23] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:27.676 [cleanupOldData-512766fffd440aee3dad4abd] (start) waiting to cleanup test.foo from { _id: MinKey } -> { _id: 51.0 }, # cursors remaining: 0
m31100| Fri Feb 22 12:39:27.676 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked.
m31100| Fri Feb 22 12:39:27.676 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:27-512766fffd440aee3dad4abe", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536767676), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: MinKey }, max: { _id: 51.0 }, step1 of 6: 0, step2 of 6: 3, step3 of 6: 4, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } }
m31100| Fri Feb 22 12:39:27.676 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: MinKey }, max: { _id: 51.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_MinKey", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:49 r:178 w:26 reslen:37 558ms
m30999| Fri Feb 22 12:39:27.676 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:39:27.677 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 3 version: 2|1||512766fa11fb11ce1f290be5 based on: 1|40||512766fa11fb11ce1f290be5
m30999| Fri Feb 22 12:39:27.677 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:39:27.677 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked.
m31100| Fri Feb 22 12:39:27.696 [cleanupOldData-512766fffd440aee3dad4abd] waiting to remove documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m31100| Fri Feb 22 12:39:27.696 [cleanupOldData-512766fffd440aee3dad4abd] moveChunk starting delete for: test.foo from { _id: MinKey } -> { _id: 51.0 }
m31100| Fri Feb 22 12:39:28.165 [cleanupOldData-512766fffd440aee3dad4abd] Helpers::removeRangeUnlocked time spent waiting for replication: 446ms
m31100| Fri Feb 22 12:39:28.165 [cleanupOldData-512766fffd440aee3dad4abd] moveChunk deleted 51 documents for test.foo from { _id: MinKey } -> { _id: 51.0 }
m31101| Fri Feb 22 12:39:28.610 [conn9] end connection 165.225.128.186:37285 (11 connections now open)
m31101| Fri Feb 22 12:39:28.611 [initandlisten] connection accepted from 165.225.128.186:40954 #20 (12 connections now open)
m30999| Fri Feb 22 12:39:28.678 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:28.679 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:28.679 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:39:28 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127670011fb11ce1f290be7" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "512766ff11fb11ce1f290be6" } }
m30999| Fri Feb 22 12:39:28.680 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670011fb11ce1f290be7
m30999| Fri Feb 22 12:39:28.680 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:28.680 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:39:28.680 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:39:28.682 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:39:28.682 [Balancer] donor      : rs1-rs0 chunks on 40
m30999| Fri Feb 22 12:39:28.682 [Balancer] receiver   : rs1-rs2 chunks on 0
m30999| Fri Feb 22 12:39:28.682 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:39:28.682 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_51.0", lastmod: Timestamp 2000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 51.0 }, max: { _id: 103.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag []
m30999| Fri Feb 22 12:39:28.682 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 2|1||000000000000000000000000min: { _id: 51.0 }max: { _id: 103.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m31100| Fri Feb 22 12:39:28.682 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 51.0 }, max: { _id: 103.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_51.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false }
m31100| Fri Feb 22 12:39:28.683 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276700fd440aee3dad4abf
m31100| Fri Feb 22 12:39:28.683 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:28-51276700fd440aee3dad4ac0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536768683), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "rs1-rs0", to: "rs1-rs2" } }
m31100| Fri Feb 22 12:39:28.684 [conn23] moveChunk request accepted at version 2|1||512766fa11fb11ce1f290be5
m31100| Fri Feb 22 12:39:28.685 [conn23] moveChunk number of documents: 52
m31100| Fri Feb 22 12:39:28.685 [conn23] starting new replica set monitor for replica set rs1-rs2 with seed of bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m31100| Fri Feb 22 12:39:28.685 [conn23] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31300 for replica set rs1-rs2
m31300| Fri Feb 22 12:39:28.685 [initandlisten] connection accepted from 165.225.128.186:50076 #26 (18 connections now open)
m31100| Fri Feb 22 12:39:28.685 [conn23] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31300", 1: "bs-smartos-x86-64-1.10gen.cc:31302", 2: "bs-smartos-x86-64-1.10gen.cc:31301" } from rs1-rs2/
m31100| Fri Feb 22 12:39:28.685 [conn23] trying to add new host bs-smartos-x86-64-1.10gen.cc:31300 to replica set rs1-rs2
m31100| Fri Feb 22 12:39:28.685 [conn23] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31300 in replica set rs1-rs2
m31300| Fri Feb 22 12:39:28.685 [initandlisten] connection accepted from 165.225.128.186:50670 #27 (19 connections now open)
m31100| Fri Feb 22 12:39:28.685 [conn23] trying to add new host bs-smartos-x86-64-1.10gen.cc:31301 to replica set rs1-rs2
m31100| Fri Feb 22 12:39:28.686 [conn23] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31301 in replica set rs1-rs2
m31100| Fri Feb 22 12:39:28.686 [conn23] trying to add new host bs-smartos-x86-64-1.10gen.cc:31302 to replica set rs1-rs2
m31301| Fri Feb 22 12:39:28.686 [initandlisten] connection accepted from 165.225.128.186:36630 #13 (11 connections now open)
m31100| Fri Feb 22 12:39:28.686 [conn23] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31302 in replica set rs1-rs2
m31302| Fri Feb 22 12:39:28.686 [initandlisten] connection accepted from 165.225.128.186:62251 #14 (11 connections now open)
m31300| Fri Feb 22 12:39:28.686 [initandlisten] connection accepted from 165.225.128.186:54062 #28 (20 connections now open)
m31300| Fri Feb 22 12:39:28.687 [conn26] end connection 165.225.128.186:50076 (19 connections now open)
m31100| Fri Feb 22 12:39:28.687 [conn23] Primary for replica set rs1-rs2 changed to bs-smartos-x86-64-1.10gen.cc:31300
m31301| Fri Feb 22 12:39:28.688 [initandlisten] connection accepted from 165.225.128.186:61320 #14 (12 connections now open)
m31302| Fri Feb 22 12:39:28.688 [initandlisten] connection accepted from 165.225.128.186:55688 #15 (12 connections now open)
m31100| Fri Feb 22 12:39:28.689 [conn23] replica set monitor for replica set rs1-rs2 started, address is rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m31300| Fri Feb 22 12:39:28.689 [initandlisten] connection accepted from 165.225.128.186:51120 #29 (20 connections now open)
m31300| Fri Feb 22 12:39:28.689 [migrateThread] starting receiving-end of migration of chunk { _id: 51.0 } -> { _id: 103.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected)
m31300| Fri Feb 22 12:39:28.689 [migrateThread] starting new replica set monitor for replica set rs1-rs0 with seed of bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31300| Fri Feb 22 12:39:28.690 [migrateThread] successfully connected to seed bs-smartos-x86-64-1.10gen.cc:31100 for replica set rs1-rs0
m31100| Fri Feb 22 12:39:28.690 [initandlisten] connection accepted from 165.225.128.186:40814 #34 (21 connections now open)
m31300| Fri Feb 22 12:39:28.690 [migrateThread] changing hosts to { 0: "bs-smartos-x86-64-1.10gen.cc:31100", 1: "bs-smartos-x86-64-1.10gen.cc:31102", 2: "bs-smartos-x86-64-1.10gen.cc:31101" } from rs1-rs0/
m31300| Fri Feb 22 12:39:28.690 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31100 to replica set rs1-rs0
m31300| Fri Feb 22 12:39:28.691 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31100 in replica set rs1-rs0
m31300| Fri Feb 22 12:39:28.691 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31101 to replica set rs1-rs0
m31100| Fri Feb 22 12:39:28.691 [initandlisten] connection accepted from 165.225.128.186:54874 #35 (22 connections now open)
m31300| Fri Feb 22 12:39:28.691 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31101 in replica set rs1-rs0
m31101| Fri Feb 22 12:39:28.691 [initandlisten] connection accepted from 165.225.128.186:63707 #21 (13 connections now open)
m31300| Fri Feb 22 12:39:28.691 [migrateThread] trying to add new host bs-smartos-x86-64-1.10gen.cc:31102 to replica set rs1-rs0
m31102| Fri Feb 22 12:39:28.691 [initandlisten] connection accepted from 165.225.128.186:43073 #20 (13 connections now open)
m31300| Fri Feb 22 12:39:28.691 [migrateThread] successfully connected to new host bs-smartos-x86-64-1.10gen.cc:31102 in replica set rs1-rs0
m31100| Fri Feb 22 12:39:28.691 [initandlisten] connection accepted from 165.225.128.186:43613 #36 (23 connections now open)
m31100| Fri Feb 22 12:39:28.692 [conn34] end connection 165.225.128.186:40814 (22 connections now open)
m31300| Fri Feb 22 12:39:28.692 [migrateThread] Primary for replica set rs1-rs0 changed to bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:39:28.693 [initandlisten] connection accepted from 165.225.128.186:56758 #22 (14 connections now open)
m31102| Fri Feb 22 12:39:28.693 [initandlisten] connection accepted from 165.225.128.186:60724 #21 (14 connections now open)
m31300| Fri Feb 22 12:39:28.694 [migrateThread] replica set monitor for replica set rs1-rs0 started, address is rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m31300| Fri Feb 22 12:39:28.694 [ReplicaSetMonitorWatcher] starting
m31100| Fri Feb 22 12:39:28.694 [initandlisten] connection accepted from 165.225.128.186:58536 #37 (23 connections now open)
m31300| Fri Feb 22 12:39:28.695 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/test.ns, filling with zeroes...
m31300| Fri Feb 22 12:39:28.695 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/test.ns, size: 16MB, took 0 secs
m31300| Fri Feb 22 12:39:28.695 [FileAllocator] allocating new datafile /data/db/rs1-rs2-0/test.0, filling with zeroes...
m31300| Fri Feb 22 12:39:28.696 [FileAllocator] done allocating datafile /data/db/rs1-rs2-0/test.0, size: 16MB, took 0 secs
m31300| Fri Feb 22 12:39:28.699 [migrateThread] build index test.foo { _id: 1 }
m31100| Fri Feb 22 12:39:28.699 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "ready", counts: { cloned: 0, clonedBytes: 0, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31300| Fri Feb 22 12:39:28.701 [migrateThread] build index done.  scanned 0 total records. 0.001 secs
m31300| Fri Feb 22 12:39:28.701 [migrateThread] info: creating collection test.foo on add index
m31300| Fri Feb 22 12:39:28.701 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31301| Fri Feb 22 12:39:28.701 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/test.ns, filling with zeroes...
m31302| Fri Feb 22 12:39:28.701 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/test.ns, filling with zeroes...
m31301| Fri Feb 22 12:39:28.702 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/test.ns, size: 16MB, took 0 secs
m31302| Fri Feb 22 12:39:28.702 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/test.ns, size: 16MB, took 0 secs
m31301| Fri Feb 22 12:39:28.702 [FileAllocator] allocating new datafile /data/db/rs1-rs2-1/test.0, filling with zeroes...
m31302| Fri Feb 22 12:39:28.702 [FileAllocator] allocating new datafile /data/db/rs1-rs2-2/test.0, filling with zeroes...
m31301| Fri Feb 22 12:39:28.702 [FileAllocator] done allocating datafile /data/db/rs1-rs2-1/test.0, size: 16MB, took 0 secs
m31302| Fri Feb 22 12:39:28.702 [FileAllocator] done allocating datafile /data/db/rs1-rs2-2/test.0, size: 16MB, took 0 secs
m31301| Fri Feb 22 12:39:28.706 [repl writer worker 1] build index test.foo { _id: 1 }
m31302| Fri Feb 22 12:39:28.706 [repl writer worker 1] build index test.foo { _id: 1 }
m31301| Fri Feb 22 12:39:28.708 [repl writer worker 1] build index done.  scanned 0 total records. 0.001 secs
m31301| Fri Feb 22 12:39:28.708 [repl writer worker 1] info: creating collection test.foo on add index
m31302| Fri Feb 22 12:39:28.708 [repl writer worker 1] build index done.  scanned 0 total records.
0.001 secs m31302| Fri Feb 22 12:39:28.708 [repl writer worker 1] info: creating collection test.foo on add index m31100| Fri Feb 22 12:39:28.710 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:28.720 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:28.730 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:28.746 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 5, clonedBytes: 50300, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:28.778 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: 
{ _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 8, clonedBytes: 80480, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:28.842 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 14, clonedBytes: 140840, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:28.971 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 27, clonedBytes: 271620, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:29.227 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31300| Fri Feb 22 12:39:29.234 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:29.234 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m31300| Fri Feb 22 12:39:29.235 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m31100| Fri Feb 22 12:39:29.739 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:29.739 [conn23] moveChunk setting version to: 3|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:29.740 [initandlisten] connection accepted from 165.225.128.186:53663 #30 (21 connections now open) m31300| Fri Feb 22 12:39:29.740 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:29.744 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m31300| Fri Feb 22 12:39:29.744 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 51.0 } -> { _id: 103.0 } m31300| Fri Feb 22 12:39:29.745 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:29-51276701dd66f5428ed3b721", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536769745), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 5: 11, step2 of 5: 0, step3 of 5: 532, step4 of 5: 0, step5 of 5: 510 } } m29000| Fri Feb 22 12:39:29.745 [initandlisten] connection accepted from 165.225.128.186:64735 #15 (15 connections now open) m31100| Fri Feb 22 12:39:29.750 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 51.0 }, max: { _id: 103.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:29.750 [conn23] moveChunk updating self version to: 3|1||512766fa11fb11ce1f290be5 through { _id: 103.0 } -> { _id: 155.0 } for collection 
'test.foo' m31100| Fri Feb 22 12:39:29.751 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:29-51276701fd440aee3dad4ac1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536769751), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:29.751 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:29.751 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:29.751 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:29.751 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:29.751 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:29.751 [cleanupOldData-51276701fd440aee3dad4ac2] (start) waiting to cleanup test.foo from { _id: 51.0 } -> { _id: 103.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:29.752 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:29.752 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:29-51276701fd440aee3dad4ac3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536769752), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 51.0 }, max: { _id: 103.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 4, step4 of 6: 1049, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:29.752 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 51.0 }, max: { _id: 103.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_51.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:29 r:169 w:34 reslen:37 1070ms m30999| Fri Feb 22 12:39:29.752 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:29.753 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 4 version: 3|1||512766fa11fb11ce1f290be5 based on: 2|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:29.754 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:29.754 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:29.771 [cleanupOldData-51276701fd440aee3dad4ac2] waiting to remove documents for test.foo from { _id: 51.0 } -> { _id: 103.0 } m31100| Fri Feb 22 12:39:29.772 [cleanupOldData-51276701fd440aee3dad4ac2] moveChunk starting delete for: test.foo from { _id: 51.0 } -> { _id: 103.0 } m31100| Fri Feb 22 12:39:30.273 [cleanupOldData-51276701fd440aee3dad4ac2] Helpers::removeRangeUnlocked time spent waiting for replication: 483ms m31100| Fri Feb 22 12:39:30.273 [cleanupOldData-51276701fd440aee3dad4ac2] moveChunk deleted 52 documents for test.foo from { _id: 51.0 } -> { _id: 103.0 } m30999| Fri Feb 22 12:39:30.755 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:30.755 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:30.755 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:30 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670211fb11ce1f290be8" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127670011fb11ce1f290be7" } } m30999| Fri Feb 22 12:39:30.756 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670211fb11ce1f290be8 m30999| Fri Feb 22 12:39:30.756 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:30.756 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:30.756 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:30.757 [Balancer] rs1-rs2 has more chunks me:1 best: rs1-rs1:1 m30999| Fri Feb 22 12:39:30.757 [Balancer] 
collection : test.foo m30999| Fri Feb 22 12:39:30.757 [Balancer] donor : rs1-rs0 chunks on 39 m30999| Fri Feb 22 12:39:30.757 [Balancer] receiver : rs1-rs1 chunks on 1 m30999| Fri Feb 22 12:39:30.757 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:30.757 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_103.0", lastmod: Timestamp 3000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 103.0 }, max: { _id: 155.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30999| Fri Feb 22 12:39:30.758 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 3|1||000000000000000000000000min: { _id: 103.0 }max: { _id: 155.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:30.761 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 103.0 }, max: { _id: 155.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_103.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:30.761 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276702fd440aee3dad4ac4 m31100| Fri Feb 22 12:39:30.761 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:30-51276702fd440aee3dad4ac5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: 
"165.225.128.186:62589", time: new Date(1361536770761), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:30.762 [conn23] moveChunk request accepted at version 3|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:30.762 [conn23] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:30.763 [migrateThread] starting receiving-end of migration of chunk { _id: 103.0 } -> { _id: 155.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:30.764 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:30.773 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:30.783 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:30.793 [conn15] end connection 165.225.128.186:47878 (22 connections now open) m31100| Fri Feb 22 12:39:30.793 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, 
state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:30.793 [initandlisten] connection accepted from 165.225.128.186:52615 #38 (23 connections now open) m31100| Fri Feb 22 12:39:30.804 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:30.820 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:30.852 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:30.916 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:31.045 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:31.295 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:31.295 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m31200| Fri Feb 22 12:39:31.296 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m31100| Fri Feb 22 12:39:31.301 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:31.301 [conn23] moveChunk setting version to: 4|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:31.301 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:31.306 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m31200| Fri Feb 22 12:39:31.306 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 103.0 } -> { _id: 155.0 } m31200| Fri Feb 22 12:39:31.306 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:31-512767036bc5d04ff892a698", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536771306), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 
12:39:31.311 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 103.0 }, max: { _id: 155.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:31.311 [conn23] moveChunk updating self version to: 4|1||512766fa11fb11ce1f290be5 through { _id: 155.0 } -> { _id: 207.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:31.312 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:31-51276703fd440aee3dad4ac6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536771312), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:31.312 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:31.312 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:31.312 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:31.312 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:31.312 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:31.312 [cleanupOldData-51276703fd440aee3dad4ac7] (start) waiting to cleanup test.foo from { _id: 103.0 } -> { _id: 155.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:31.312 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:31.313 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:31-51276703fd440aee3dad4ac8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536771312), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 103.0 }, max: { _id: 155.0 }, step1 of 6: 2, step2 of 6: 1, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:31.313 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 103.0 }, max: { _id: 155.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_103.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:41 r:116 w:25 reslen:37 554ms m30999| Fri Feb 22 12:39:31.313 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:31.314 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 5 version: 4|1||512766fa11fb11ce1f290be5 based on: 3|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:31.314 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:31.314 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:31.332 [cleanupOldData-51276703fd440aee3dad4ac7] waiting to remove documents for test.foo from { _id: 103.0 } -> { _id: 155.0 } m31100| Fri Feb 22 12:39:31.332 [cleanupOldData-51276703fd440aee3dad4ac7] moveChunk starting delete for: test.foo from { _id: 103.0 } -> { _id: 155.0 } m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0 m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771472), ok: 1.0 } m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:31.472 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771472), ok: 1.0 } m30999| Fri Feb 22 12:39:31.472 
[ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771473), ok: 1.0 } m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771473), ok: 1.0 } m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 
22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771473), ok: 1.0 } m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30999| Fri Feb 22 12:39:31.473 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30999| Fri Feb 22 12:39:31.474 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771474), ok: 1.0 } m30999| Fri Feb 22 12:39:31.474 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:31.474 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:31.474 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:31.474 
[ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:31.474 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771474), ok: 1.0 } m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771475), ok: 1.0 } m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] 
ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771475), ok: 1.0 } m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771475), ok: 1.0 } m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30999| Fri Feb 22 12:39:31.475 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs2 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ 
"bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771476), ok: 1.0 } m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771476), ok: 1.0 } m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", 
"bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771476), ok: 1.0 } m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:31.476 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536771477), ok: 1.0 } m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, 
localTime: new Date(1361536771477), ok: 1.0 } m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30999| Fri Feb 22 12:39:31.477 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:31.853 [cleanupOldData-51276703fd440aee3dad4ac7] Helpers::removeRangeUnlocked time spent waiting for replication: 504ms m31100| Fri Feb 22 12:39:31.853 [cleanupOldData-51276703fd440aee3dad4ac7] moveChunk deleted 52 documents for test.foo from { _id: 103.0 } -> { _id: 155.0 } m30999| Fri Feb 22 12:39:32.315 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:32.316 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:32.316 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:32 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670411fb11ce1f290be9" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127670211fb11ce1f290be8" } } m30999| Fri Feb 22 12:39:32.317 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670411fb11ce1f290be9 m30999| Fri Feb 22 12:39:32.317 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:32.317 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:32.317 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 
12:39:32.319 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:32.319 [Balancer] donor : rs1-rs0 chunks on 38 m30999| Fri Feb 22 12:39:32.319 [Balancer] receiver : rs1-rs2 chunks on 1 m30999| Fri Feb 22 12:39:32.319 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:32.319 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_155.0", lastmod: Timestamp 4000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 155.0 }, max: { _id: 207.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:32.319 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 4|1||000000000000000000000000min: { _id: 155.0 }max: { _id: 207.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:32.319 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 155.0 }, max: { _id: 207.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_155.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:32.320 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276704fd440aee3dad4ac9 m31100| Fri Feb 22 12:39:32.321 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:32-51276704fd440aee3dad4aca", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536772321), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:32.322 [conn23] moveChunk request accepted at version 4|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:32.322 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:32.322 [migrateThread] starting receiving-end of migration of chunk { _id: 155.0 } -> { _id: 207.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:32.323 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:32.332 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:32.343 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 { "rs1-rs0" : 38, "rs1-rs1" : 2, "rs1-rs2" : 1 } total: 41 min: 1 max: 38 m31100| Fri Feb 22 12:39:32.353 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { 
_id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:32.363 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:32.379 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:32.411 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:32.476 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31301| Fri Feb 22 12:39:32.548 [conn4] end connection 165.225.128.186:43941 (11 connections now open) m31301| Fri Feb 22 12:39:32.549 [initandlisten] connection accepted from 165.225.128.186:34431 #15 (12 connections now open) m31100| Fri 
Feb 22 12:39:32.604 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31300| Fri Feb 22 12:39:32.857 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:32.857 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m31300| Fri Feb 22 12:39:32.857 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m31100| Fri Feb 22 12:39:32.860 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:32.860 [conn23] moveChunk setting version to: 5|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:32.860 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:32.867 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m31300| Fri Feb 22 12:39:32.867 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 155.0 } -> { _id: 207.0 } m31300| Fri Feb 22 12:39:32.867 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:32-51276704dd66f5428ed3b722", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536772867), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 
of 5: 0, step2 of 5: 0, step3 of 5: 533, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:32.870 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 155.0 }, max: { _id: 207.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:32.871 [conn23] moveChunk updating self version to: 5|1||512766fa11fb11ce1f290be5 through { _id: 207.0 } -> { _id: 259.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:32.871 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:32-51276704fd440aee3dad4acb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536772871), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:32.876 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:32.876 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:32.876 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:32.876 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:32.876 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:32.876 [cleanupOldData-51276704fd440aee3dad4acc] (start) waiting to cleanup test.foo from { _id: 155.0 } -> { _id: 207.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:32.877 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:32.877 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:32-51276704fd440aee3dad4acd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536772877), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 155.0 }, max: { _id: 207.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 537, step5 of 6: 15, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:32.877 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 155.0 }, max: { _id: 207.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_155.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:29 r:232 w:34 reslen:37 557ms m30999| Fri Feb 22 12:39:32.877 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:32.879 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 6 version: 5|1||512766fa11fb11ce1f290be5 based on: 4|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:32.879 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:32.879 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:32.896 [cleanupOldData-51276704fd440aee3dad4acc] waiting to remove documents for test.foo from { _id: 155.0 } -> { _id: 207.0 } m31100| Fri Feb 22 12:39:32.896 [cleanupOldData-51276704fd440aee3dad4acc] moveChunk starting delete for: test.foo from { _id: 155.0 } -> { _id: 207.0 } m30998| Fri Feb 22 12:39:33.316 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:33.316 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:33.316 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:33 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "51276705fed0a5416d51b832" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127670411fb11ce1f290be9" } } m30998| Fri Feb 22 12:39:33.317 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276705fed0a5416d51b832 m30998| Fri Feb 22 12:39:33.317 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:33.317 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:33.317 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:33.320 [Balancer] DBConfig unserialize: test { _id: "test", partitioned: true, primary: "rs1-rs0" } m30998| Fri Feb 22 12:39:33.322 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 2 version: 5|1||512766fa11fb11ce1f290be5 based on: (empty) m30998| Fri Feb 22 12:39:33.322 [Balancer] rs1-rs2 has more chunks me:2 best: rs1-rs1:2 m30998| Fri Feb 22 12:39:33.322 [Balancer] collection 
: test.foo m30998| Fri Feb 22 12:39:33.322 [Balancer] donor : rs1-rs0 chunks on 37 m30998| Fri Feb 22 12:39:33.322 [Balancer] receiver : rs1-rs1 chunks on 2 m30998| Fri Feb 22 12:39:33.322 [Balancer] threshold : 4 m30998| Fri Feb 22 12:39:33.322 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_207.0", lastmod: Timestamp 5000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 207.0 }, max: { _id: 259.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:33.322 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 5|1||000000000000000000000000min: { _id: 207.0 }max: { _id: 259.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:33.322 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 207.0 }, max: { _id: 259.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_207.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:33.323 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276705fd440aee3dad4ace m31100| Fri Feb 22 12:39:33.323 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:33-51276705fd440aee3dad4acf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: 
"165.225.128.186:48709", time: new Date(1361536773323), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:33.324 [conn33] moveChunk request accepted at version 5|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:33.325 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:33.325 [migrateThread] starting receiving-end of migration of chunk { _id: 207.0 } -> { _id: 259.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:33.326 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:33.335 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.345 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.356 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 
0 m31100| Fri Feb 22 12:39:33.366 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.380 [cleanupOldData-51276704fd440aee3dad4acc] Helpers::removeRangeUnlocked time spent waiting for replication: 464ms m31100| Fri Feb 22 12:39:33.380 [cleanupOldData-51276704fd440aee3dad4acc] moveChunk deleted 52 documents for test.foo from { _id: 155.0 } -> { _id: 207.0 } m31100| Fri Feb 22 12:39:33.382 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.415 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.479 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.607 [conn33] moveChunk data 
transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:33.859 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:33.859 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m31200| Fri Feb 22 12:39:33.860 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m31100| Fri Feb 22 12:39:33.863 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.863 [conn33] moveChunk setting version to: 6|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:33.864 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:33.870 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m31200| Fri Feb 22 12:39:33.870 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 207.0 } -> { _id: 259.0 } m31200| Fri Feb 22 12:39:33.870 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:33-512767056bc5d04ff892a699", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536773870), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 532, 
step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 12:39:33.874 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 207.0 }, max: { _id: 259.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:33.874 [conn33] moveChunk updating self version to: 6|1||512766fa11fb11ce1f290be5 through { _id: 259.0 } -> { _id: 311.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:33.875 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:33-51276705fd440aee3dad4ad0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536773875), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:33.875 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:33.875 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:33.875 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:33.875 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:33.875 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:33.875 [cleanupOldData-51276705fd440aee3dad4ad1] (start) waiting to cleanup test.foo from { _id: 207.0 } -> { _id: 259.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:33.875 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:33.875 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:33-51276705fd440aee3dad4ad2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536773875), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 207.0 }, max: { _id: 259.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:33.875 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 207.0 }, max: { _id: 259.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_207.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:39 r:177 w:37 reslen:37 553ms m30998| Fri Feb 22 12:39:33.875 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:33.877 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 3 version: 6|1||512766fa11fb11ce1f290be5 based on: 5|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:33.877 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:33.877 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m30999| Fri Feb 22 12:39:33.880 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:33.880 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:33.881 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:33 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670511fb11ce1f290bea" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276705fed0a5416d51b832" } } m30999| Fri Feb 22 12:39:33.881 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670511fb11ce1f290bea m30999| Fri Feb 22 12:39:33.881 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:33.881 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:33.881 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:33.883 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:33.883 [Balancer] donor : rs1-rs0 chunks on 36 m30999| Fri Feb 22 12:39:33.883 [Balancer] receiver : rs1-rs2 chunks on 2 m30999| Fri Feb 22 12:39:33.883 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:33.883 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_259.0", lastmod: Timestamp 6000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 259.0 }, max: { _id: 311.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:33.883 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|5||000000000000000000000000min: { _id: 259.0 }max: { _id: 311.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:33.884 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 259.0 }, max: { _id: 311.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_259.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:33.885 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276705fd440aee3dad4ad3 m31100| Fri Feb 22 12:39:33.885 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:33-51276705fd440aee3dad4ad4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536773885), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:33.886 [conn23] moveChunk request accepted at version 6|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:33.886 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:33.886 [migrateThread] starting receiving-end of migration of chunk { _id: 259.0 } -> { _id: 311.0 } for collection test.foo from 
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:33.887 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:33.895 [cleanupOldData-51276705fd440aee3dad4ad1] waiting to remove documents for test.foo from { _id: 207.0 } -> { _id: 259.0 } m31100| Fri Feb 22 12:39:33.895 [cleanupOldData-51276705fd440aee3dad4ad1] moveChunk starting delete for: test.foo from { _id: 207.0 } -> { _id: 259.0 } m31100| Fri Feb 22 12:39:33.897 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.907 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.917 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.927 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.943 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:33.976 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.040 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.168 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.417 
[cleanupOldData-51276705fd440aee3dad4ad1] Helpers::removeRangeUnlocked time spent waiting for replication: 501ms m31100| Fri Feb 22 12:39:34.417 [cleanupOldData-51276705fd440aee3dad4ad1] moveChunk deleted 52 documents for test.foo from { _id: 207.0 } -> { _id: 259.0 } m31300| Fri Feb 22 12:39:34.419 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:34.419 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 259.0 } -> { _id: 311.0 } m31300| Fri Feb 22 12:39:34.420 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 259.0 } -> { _id: 311.0 } m31100| Fri Feb 22 12:39:34.424 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.425 [conn23] moveChunk setting version to: 7|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:34.425 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:34.430 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 259.0 } -> { _id: 311.0 } m31300| Fri Feb 22 12:39:34.430 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 259.0 } -> { _id: 311.0 } m31300| Fri Feb 22 12:39:34.430 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:34-51276706dd66f5428ed3b723", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536774430), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 532, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:34.435 [conn23] moveChunk 
migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 259.0 }, max: { _id: 311.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:34.435 [conn23] moveChunk updating self version to: 7|1||512766fa11fb11ce1f290be5 through { _id: 311.0 } -> { _id: 363.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:34.435 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:34-51276706fd440aee3dad4ad5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536774435), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:34.436 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:34.436 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:34.436 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:34.436 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:34.436 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:34.436 [cleanupOldData-51276706fd440aee3dad4ad6] (start) waiting to cleanup test.foo from { _id: 259.0 } -> { _id: 311.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:34.436 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:34.436 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:34-51276706fd440aee3dad4ad7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536774436), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 259.0 }, max: { _id: 311.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:34.436 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 259.0 }, max: { _id: 311.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_259.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:19 r:181 w:25 reslen:37 552ms m30999| Fri Feb 22 12:39:34.436 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:34.437 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 7|1||512766fa11fb11ce1f290be5 based on: 5|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:34.437 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:34.438 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:34.456 [cleanupOldData-51276706fd440aee3dad4ad6] waiting to remove documents for test.foo from { _id: 259.0 } -> { _id: 311.0 } m31100| Fri Feb 22 12:39:34.456 [cleanupOldData-51276706fd440aee3dad4ad6] moveChunk starting delete for: test.foo from { _id: 259.0 } -> { _id: 311.0 } m30998| Fri Feb 22 12:39:34.878 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:34.878 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:34.879 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:34 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "51276706fed0a5416d51b833" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127670511fb11ce1f290bea" } } m30998| Fri Feb 22 12:39:34.879 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276706fed0a5416d51b833 m30998| Fri Feb 22 12:39:34.880 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:34.880 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:34.880 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:34.881 [Balancer] rs1-rs2 has more chunks me:3 best: rs1-rs1:3 m30998| Fri Feb 22 12:39:34.881 [Balancer] collection : test.foo m30998| Fri Feb 22 12:39:34.881 [Balancer] donor : rs1-rs0 chunks on 35 m30998| Fri Feb 22 12:39:34.881 [Balancer] receiver : rs1-rs1 chunks on 3 m30998| Fri Feb 22 12:39:34.881 [Balancer] threshold : 2 m30998| Fri Feb 22 12:39:34.881 [Balancer] ns: test.foo going to move { _id: 
"test.foo-_id_311.0", lastmod: Timestamp 7000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 311.0 }, max: { _id: 363.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:34.881 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|6||000000000000000000000000min: { _id: 311.0 }max: { _id: 363.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:34.882 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 311.0 }, max: { _id: 363.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_311.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:34.883 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276706fd440aee3dad4ad8 m31100| Fri Feb 22 12:39:34.883 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:34-51276706fd440aee3dad4ad9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536774883), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:34.884 [conn33] moveChunk request accepted at version 7|1||512766fa11fb11ce1f290be5 m31100| Fri 
Feb 22 12:39:34.884 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:34.884 [migrateThread] starting receiving-end of migration of chunk { _id: 311.0 } -> { _id: 363.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:34.885 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:34.895 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.905 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.915 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.925 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, 
state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.942 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.974 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:34.978 [cleanupOldData-51276706fd440aee3dad4ad6] Helpers::removeRangeUnlocked time spent waiting for replication: 505ms m31100| Fri Feb 22 12:39:34.978 [cleanupOldData-51276706fd440aee3dad4ad6] moveChunk deleted 52 documents for test.foo from { _id: 259.0 } -> { _id: 311.0 } m31100| Fri Feb 22 12:39:35.038 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.166 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, 
catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:35.420 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:35.420 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 311.0 } -> { _id: 363.0 } m31200| Fri Feb 22 12:39:35.420 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 311.0 } -> { _id: 363.0 } m31100| Fri Feb 22 12:39:35.423 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { _id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.423 [conn33] moveChunk setting version to: 8|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:35.423 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:35.431 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 311.0 } -> { _id: 363.0 } m31200| Fri Feb 22 12:39:35.431 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 311.0 } -> { _id: 363.0 } m31200| Fri Feb 22 12:39:35.431 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-512767076bc5d04ff892a69a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536775431), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 534, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:35.433 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 311.0 }, max: { 
_id: 363.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:35.433 [conn33] moveChunk updating self version to: 8|1||512766fa11fb11ce1f290be5 through { _id: 363.0 } -> { _id: 415.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:35.434 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-51276707fd440aee3dad4ada", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536775434), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:35.434 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:35.434 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:35.434 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:35.434 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:35.434 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:35.434 [cleanupOldData-51276707fd440aee3dad4adb] (start) waiting to cleanup test.foo from { _id: 311.0 } -> { _id: 363.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:35.434 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:35.434 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-51276707fd440aee3dad4adc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536775434), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 311.0 }, max: { _id: 363.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:35.435 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 311.0 }, max: { _id: 363.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_311.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:174 w:38 reslen:37 553ms m30998| Fri Feb 22 12:39:35.435 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:35.436 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 4 version: 8|1||512766fa11fb11ce1f290be5 based on: 6|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:35.436 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:35.437 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m30999| Fri Feb 22 12:39:35.438 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:35.439 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:35.439 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:35 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670711fb11ce1f290beb" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276706fed0a5416d51b833" } } m30999| Fri Feb 22 12:39:35.440 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670711fb11ce1f290beb m30999| Fri Feb 22 12:39:35.440 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:35.440 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:35.440 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:35.442 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:35.442 [Balancer] donor : rs1-rs0 chunks on 34 m30999| Fri Feb 22 12:39:35.442 [Balancer] receiver : rs1-rs2 chunks on 3 m30999| Fri Feb 22 12:39:35.442 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:35.442 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_363.0", lastmod: Timestamp 8000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 363.0 }, max: { _id: 415.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:35.442 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|7||000000000000000000000000min: { _id: 363.0 }max: { _id: 415.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:35.442 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 363.0 }, max: { _id: 415.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_363.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:35.443 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276707fd440aee3dad4add m31100| Fri Feb 22 12:39:35.443 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-51276707fd440aee3dad4ade", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536775443), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:35.444 [conn23] moveChunk request accepted at version 8|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:35.445 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:35.445 [migrateThread] starting receiving-end of migration of chunk { _id: 363.0 } -> { _id: 415.0 } for collection test.foo from 
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:35.446 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:35.454 [cleanupOldData-51276707fd440aee3dad4adb] waiting to remove documents for test.foo from { _id: 311.0 } -> { _id: 363.0 } m31100| Fri Feb 22 12:39:35.454 [cleanupOldData-51276707fd440aee3dad4adb] moveChunk starting delete for: test.foo from { _id: 311.0 } -> { _id: 363.0 } m31100| Fri Feb 22 12:39:35.455 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.465 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.476 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.486 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.502 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.534 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.599 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.727 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.925 
[cleanupOldData-51276707fd440aee3dad4adb] Helpers::removeRangeUnlocked time spent waiting for replication: 450ms m31100| Fri Feb 22 12:39:35.925 [cleanupOldData-51276707fd440aee3dad4adb] moveChunk deleted 52 documents for test.foo from { _id: 311.0 } -> { _id: 363.0 } m31300| Fri Feb 22 12:39:35.978 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:35.978 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m31300| Fri Feb 22 12:39:35.978 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m31100| Fri Feb 22 12:39:35.983 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:35.983 [conn23] moveChunk setting version to: 9|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:35.983 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:35.988 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m31300| Fri Feb 22 12:39:35.988 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 363.0 } -> { _id: 415.0 } m31300| Fri Feb 22 12:39:35.989 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-51276707dd66f5428ed3b724", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536775989), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:35.994 [conn23] moveChunk 
migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 363.0 }, max: { _id: 415.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:35.994 [conn23] moveChunk updating self version to: 9|1||512766fa11fb11ce1f290be5 through { _id: 415.0 } -> { _id: 467.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:35.994 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-51276707fd440aee3dad4adf", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536775994), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:35.994 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:35.994 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:35.994 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:35.995 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:35.995 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:35.995 [cleanupOldData-51276707fd440aee3dad4ae0] (start) waiting to cleanup test.foo from { _id: 363.0 } -> { _id: 415.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:35.995 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:35.995 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:35-51276707fd440aee3dad4ae1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536775995), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 363.0 }, max: { _id: 415.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:35.995 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 363.0 }, max: { _id: 415.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_363.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:32 r:174 w:33 reslen:37 552ms m30999| Fri Feb 22 12:39:35.995 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:35.996 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 8 version: 9|1||512766fa11fb11ce1f290be5 based on: 7|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:35.996 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:35.997 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:36.015 [cleanupOldData-51276707fd440aee3dad4ae0] waiting to remove documents for test.foo from { _id: 363.0 } -> { _id: 415.0 } m31100| Fri Feb 22 12:39:36.015 [cleanupOldData-51276707fd440aee3dad4ae0] moveChunk starting delete for: test.foo from { _id: 363.0 } -> { _id: 415.0 } m31301| Fri Feb 22 12:39:36.095 [conn5] end connection 165.225.128.186:63085 (11 connections now open) m31301| Fri Feb 22 12:39:36.095 [initandlisten] connection accepted from 165.225.128.186:34157 #16 (12 connections now open) m30998| Fri Feb 22 12:39:36.437 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:36.438 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:36.438 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:36 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "51276708fed0a5416d51b834" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127670711fb11ce1f290beb" } } m30998| Fri Feb 22 12:39:36.439 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276708fed0a5416d51b834 m30998| Fri Feb 22 12:39:36.439 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:36.439 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:36.439 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:36.441 [Balancer] rs1-rs2 has more chunks me:4 best: rs1-rs1:4 m30998| Fri Feb 22 12:39:36.441 [Balancer] collection : test.foo m30998| Fri Feb 22 12:39:36.441 [Balancer] donor : 
rs1-rs0 chunks on 33 m30998| Fri Feb 22 12:39:36.441 [Balancer] receiver : rs1-rs1 chunks on 4 m30998| Fri Feb 22 12:39:36.441 [Balancer] threshold : 2 m30998| Fri Feb 22 12:39:36.441 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_415.0", lastmod: Timestamp 9000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 415.0 }, max: { _id: 467.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:36.441 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|8||000000000000000000000000min: { _id: 415.0 }max: { _id: 467.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:36.441 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 415.0 }, max: { _id: 467.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_415.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:36.442 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276708fd440aee3dad4ae2 m31100| Fri Feb 22 12:39:36.442 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:36-51276708fd440aee3dad4ae3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536776442), what: 
"moveChunk.start", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:36.443 [conn33] moveChunk request accepted at version 9|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:36.444 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:36.444 [migrateThread] starting receiving-end of migration of chunk { _id: 415.0 } -> { _id: 467.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:36.444 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:36.454 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.464 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.475 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.485 [conn33] moveChunk data 
transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.501 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.533 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.536 [cleanupOldData-51276707fd440aee3dad4ae0] Helpers::removeRangeUnlocked time spent waiting for replication: 506ms m31100| Fri Feb 22 12:39:36.536 [cleanupOldData-51276707fd440aee3dad4ae0] moveChunk deleted 52 documents for test.foo from { _id: 363.0 } -> { _id: 415.0 } m31100| Fri Feb 22 12:39:36.597 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.726 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:36.976 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:36.976 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m31200| Fri Feb 22 12:39:36.977 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m31100| Fri Feb 22 12:39:36.982 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:36.982 [conn33] moveChunk setting version to: 10|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:36.982 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:36.987 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m31200| Fri Feb 22 12:39:36.987 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 415.0 } -> { _id: 467.0 } m31200| Fri Feb 22 12:39:36.987 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:36-512767086bc5d04ff892a69b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536776987), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 
12:39:36.992 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 415.0 }, max: { _id: 467.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:36.993 [conn33] moveChunk updating self version to: 10|1||512766fa11fb11ce1f290be5 through { _id: 467.0 } -> { _id: 519.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:36.993 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:36-51276708fd440aee3dad4ae4", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536776993), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:36.993 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:36.993 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:36.993 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:36.994 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:36.994 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:36.994 [cleanupOldData-51276708fd440aee3dad4ae5] (start) waiting to cleanup test.foo from { _id: 415.0 } -> { _id: 467.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:36.994 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:36.994 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:36-51276708fd440aee3dad4ae6", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536776994), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 415.0 }, max: { _id: 467.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:36.994 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 415.0 }, max: { _id: 467.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_415.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:169 w:49 reslen:37 552ms m30998| Fri Feb 22 12:39:36.994 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:36.996 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 5 version: 10|1||512766fa11fb11ce1f290be5 based on: 8|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:36.996 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:36.996 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m30999| Fri Feb 22 12:39:36.997 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:36.998 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:36.998 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:36 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670811fb11ce1f290bec" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "51276708fed0a5416d51b834" } } m30999| Fri Feb 22 12:39:36.999 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670811fb11ce1f290bec m30999| Fri Feb 22 12:39:36.999 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:36.999 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:36.999 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:37.001 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:37.001 [Balancer] donor : rs1-rs0 chunks on 32 m30999| Fri Feb 22 12:39:37.001 [Balancer] receiver : rs1-rs2 chunks on 4 m30999| Fri Feb 22 12:39:37.001 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:37.001 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_467.0", lastmod: Timestamp 10000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 467.0 }, max: { _id: 519.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:37.001 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|9||000000000000000000000000min: { _id: 467.0 }max: { _id: 519.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:37.001 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 467.0 }, max: { _id: 519.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_467.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:37.002 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276709fd440aee3dad4ae7 m31100| Fri Feb 22 12:39:37.002 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:37-51276709fd440aee3dad4ae8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536777002), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:37.004 [conn23] moveChunk request accepted at version 10|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:37.004 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:37.004 [migrateThread] starting receiving-end of migration of chunk { _id: 467.0 } -> { _id: 519.0 } for collection test.foo from 
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:37.005 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:37.014 [cleanupOldData-51276708fd440aee3dad4ae5] waiting to remove documents for test.foo from { _id: 415.0 } -> { _id: 467.0 } m31100| Fri Feb 22 12:39:37.014 [cleanupOldData-51276708fd440aee3dad4ae5] moveChunk starting delete for: test.foo from { _id: 415.0 } -> { _id: 467.0 } m31100| Fri Feb 22 12:39:37.014 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.024 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.035 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.045 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.061 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.094 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.158 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.286 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] 
checking replica set: rs1-rs0 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777304), ok: 1.0 } m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777304), ok: 1.0 } m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:37.304 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 
12:39:37.304 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777304), ok: 1.0 } m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777305), ok: 1.0 } m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", 
"bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777305), ok: 1.0 } m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30998| Fri Feb 22 12:39:37.305 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777305), ok: 1.0 } m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ 
"bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777306), ok: 1.0 } m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777306), ok: 1.0 } m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:37.306 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, 
maxMessageSizeBytes: 48000000, localTime: new Date(1361536777306), ok: 1.0 } m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777307), ok: 1.0 } m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs2 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777307), ok: 1.0 } m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] 
dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:37.307 [ReplicaSetMonitorWatcher] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777308), ok: 1.0 } m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777308), ok: 1.0 } m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true 
bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777308), ok: 1.0 } m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:37.308 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536777308), ok: 1.0 } m30998| Fri Feb 22 12:39:37.309 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:37.309 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:37.309 [ReplicaSetMonitorWatcher] dbclient_rs 
nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 { "rs1-rs0" : 32, "rs1-rs1" : 5, "rs1-rs2" : 4 } total: 41 min: 4 max: 32 m31201| Fri Feb 22 12:39:37.381 [conn7] end connection 165.225.128.186:63732 (11 connections now open) m31201| Fri Feb 22 12:39:37.381 [initandlisten] connection accepted from 165.225.128.186:64010 #18 (12 connections now open) m31100| Fri Feb 22 12:39:37.487 [cleanupOldData-51276708fd440aee3dad4ae5] Helpers::removeRangeUnlocked time spent waiting for replication: 448ms m31100| Fri Feb 22 12:39:37.487 [cleanupOldData-51276708fd440aee3dad4ae5] moveChunk deleted 52 documents for test.foo from { _id: 415.0 } -> { _id: 467.0 } m31300| Fri Feb 22 12:39:37.538 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:37.538 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m31300| Fri Feb 22 12:39:37.539 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m31100| Fri Feb 22 12:39:37.542 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:37.542 [conn23] moveChunk setting version to: 11|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:37.543 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:37.549 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m31300| Fri Feb 22 12:39:37.549 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 467.0 } -> { _id: 519.0 } m31300| Fri Feb 22 12:39:37.549 [migrateThread] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:37-51276709dd66f5428ed3b725", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536777549), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 533, step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 12:39:37.553 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 467.0 }, max: { _id: 519.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:37.553 [conn23] moveChunk updating self version to: 11|1||512766fa11fb11ce1f290be5 through { _id: 519.0 } -> { _id: 571.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:37.553 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:37-51276709fd440aee3dad4ae9", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536777553), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:37.553 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:37.553 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:37.554 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:37.554 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:37.554 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:37.554 [cleanupOldData-51276709fd440aee3dad4aea] (start) waiting to cleanup test.foo from { _id: 467.0 } -> { _id: 519.0 }, # cursors remaining: 0 m31100| Fri 
Feb 22 12:39:37.554 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. m31100| Fri Feb 22 12:39:37.554 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:37-51276709fd440aee3dad4aeb", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536777554), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 467.0 }, max: { _id: 519.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:37.554 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 467.0 }, max: { _id: 519.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_467.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:172 w:31 reslen:37 553ms m30999| Fri Feb 22 12:39:37.554 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:37.556 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 11|1||512766fa11fb11ce1f290be5 based on: 9|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:37.556 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:37.556 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:37.574 [cleanupOldData-51276709fd440aee3dad4aea] waiting to remove documents for test.foo from { _id: 467.0 } -> { _id: 519.0 }
m31100| Fri Feb 22 12:39:37.574 [cleanupOldData-51276709fd440aee3dad4aea] moveChunk starting delete for: test.foo from { _id: 467.0 } -> { _id: 519.0 }
m30998| Fri Feb 22 12:39:37.997 [Balancer] Refreshing MaxChunkSize: 1
m30998| Fri Feb 22 12:39:37.997 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 )
m30998| Fri Feb 22 12:39:37.998 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:
m30998| { "state" : 1,
m30998|   "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758",
m30998|   "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838",
m30998|   "when" : { "$date" : "Fri Feb 22 12:39:37 2013" },
m30998|   "why" : "doing balance round",
m30998|   "ts" : { "$oid" : "51276709fed0a5416d51b835" } }
m30998| { "_id" : "balancer",
m30998|   "state" : 0,
m30998|   "ts" : { "$oid" : "5127670811fb11ce1f290bec" } }
m30998| Fri Feb 22 12:39:37.998 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276709fed0a5416d51b835
m30998| Fri Feb 22 12:39:37.998 [Balancer] *** start balancing round
m30998| Fri Feb 22 12:39:37.998 [Balancer] waitForDelete: 0
m30998| Fri Feb 22 12:39:37.998 [Balancer] secondaryThrottle: 1
m30998| Fri Feb 22 12:39:38.000 [Balancer] rs1-rs2 has more chunks me:5 best: rs1-rs1:5
m30998| Fri Feb 22 12:39:38.000 [Balancer] collection : test.foo
m30998| Fri Feb 22 12:39:38.000 [Balancer] donor      : rs1-rs0 chunks on 31
m30998| Fri Feb 22 12:39:38.000 [Balancer] receiver   : rs1-rs1 chunks on 5
m30998| Fri Feb 22 12:39:38.000 [Balancer] threshold  : 2
m30998| Fri Feb 22 12:39:38.000 [Balancer] ns: test.foo going to move { _id: 
"test.foo-_id_519.0", lastmod: Timestamp 11000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 519.0 }, max: { _id: 571.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:38.000 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|10||000000000000000000000000min: { _id: 519.0 }max: { _id: 571.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:38.001 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 519.0 }, max: { _id: 571.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_519.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:38.002 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670afd440aee3dad4aec m31100| Fri Feb 22 12:39:38.002 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:38-5127670afd440aee3dad4aed", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536778002), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:38.003 [conn33] moveChunk request accepted at version 11|1||512766fa11fb11ce1f290be5 m31100| 
Fri Feb 22 12:39:38.003 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:38.003 [migrateThread] starting receiving-end of migration of chunk { _id: 519.0 } -> { _id: 571.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:38.004 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:38.013 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.024 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.034 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.044 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 
}, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.060 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.093 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.096 [cleanupOldData-51276709fd440aee3dad4aea] Helpers::removeRangeUnlocked time spent waiting for replication: 504ms m31100| Fri Feb 22 12:39:38.096 [cleanupOldData-51276709fd440aee3dad4aea] moveChunk deleted 52 documents for test.foo from { _id: 467.0 } -> { _id: 519.0 } m31100| Fri Feb 22 12:39:38.157 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.285 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 
281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31300| Fri Feb 22 12:39:38.303 [conn5] end connection 165.225.128.186:57030 (20 connections now open) m31300| Fri Feb 22 12:39:38.303 [initandlisten] connection accepted from 165.225.128.186:60018 #31 (21 connections now open) m31200| Fri Feb 22 12:39:38.538 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:38.538 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 519.0 } -> { _id: 571.0 } m31200| Fri Feb 22 12:39:38.538 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 519.0 } -> { _id: 571.0 } m31100| Fri Feb 22 12:39:38.541 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.541 [conn33] moveChunk setting version to: 12|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:38.541 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:38.549 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 519.0 } -> { _id: 571.0 } m31200| Fri Feb 22 12:39:38.549 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 519.0 } -> { _id: 571.0 } m31200| Fri Feb 22 12:39:38.549 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:38-5127670a6bc5d04ff892a69c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536778549), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 533, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:38.552 
[conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 519.0 }, max: { _id: 571.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:38.552 [conn33] moveChunk updating self version to: 12|1||512766fa11fb11ce1f290be5 through { _id: 571.0 } -> { _id: 623.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:38.552 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:38-5127670afd440aee3dad4aee", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536778552), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:38.552 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:38.553 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:38.553 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:38.553 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:38.553 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:38.553 [cleanupOldData-5127670afd440aee3dad4aef] (start) waiting to cleanup test.foo from { _id: 519.0 } -> { _id: 571.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:38.553 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:38.553 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:38-5127670afd440aee3dad4af0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536778553), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 519.0 }, max: { _id: 571.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:38.553 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 519.0 }, max: { _id: 571.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_519.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:46 r:188 w:36 reslen:37 552ms m30998| Fri Feb 22 12:39:38.553 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:38.555 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 6 version: 12|1||512766fa11fb11ce1f290be5 based on: 10|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:38.555 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:38.555 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m30999| Fri Feb 22 12:39:38.557 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:38.557 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:38.557 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:39:38 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127670a11fb11ce1f290bed" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "51276709fed0a5416d51b835" } }
m30999| Fri Feb 22 12:39:38.558 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670a11fb11ce1f290bed
m30999| Fri Feb 22 12:39:38.558 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:38.558 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:39:38.558 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:39:38.560 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:39:38.560 [Balancer] donor      : rs1-rs0 chunks on 30
m30999| Fri Feb 22 12:39:38.560 [Balancer] receiver   : rs1-rs2 chunks on 5
m30999| Fri Feb 22 12:39:38.560 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:39:38.560 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_571.0", lastmod: Timestamp 12000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 571.0 }, max: { _id: 623.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag []
m30999| Fri Feb 22 12:39:38.560 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|11||000000000000000000000000min: { _id: 571.0 }max: { _id: 623.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:38.561 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 571.0 }, max: { _id: 623.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_571.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:38.562 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670afd440aee3dad4af1 m31100| Fri Feb 22 12:39:38.562 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:38-5127670afd440aee3dad4af2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536778562), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:38.563 [conn23] moveChunk request accepted at version 12|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:38.563 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:38.563 [migrateThread] starting receiving-end of migration of chunk { _id: 571.0 } -> { _id: 623.0 } for collection test.foo from 
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:38.564 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:38.573 [cleanupOldData-5127670afd440aee3dad4aef] waiting to remove documents for test.foo from { _id: 519.0 } -> { _id: 571.0 } m31100| Fri Feb 22 12:39:38.573 [cleanupOldData-5127670afd440aee3dad4aef] moveChunk starting delete for: test.foo from { _id: 519.0 } -> { _id: 571.0 } m31100| Fri Feb 22 12:39:38.573 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.584 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.594 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.604 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.620 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.653 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.717 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:38.845 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.095 
[cleanupOldData-5127670afd440aee3dad4aef] Helpers::removeRangeUnlocked time spent waiting for replication: 508ms m31100| Fri Feb 22 12:39:39.095 [cleanupOldData-5127670afd440aee3dad4aef] moveChunk deleted 52 documents for test.foo from { _id: 519.0 } -> { _id: 571.0 } m31300| Fri Feb 22 12:39:39.096 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:39.096 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 571.0 } -> { _id: 623.0 } m31300| Fri Feb 22 12:39:39.096 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 571.0 } -> { _id: 623.0 } m31100| Fri Feb 22 12:39:39.101 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.101 [conn23] moveChunk setting version to: 13|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:39.101 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:39.107 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 571.0 } -> { _id: 623.0 } m31300| Fri Feb 22 12:39:39.107 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 571.0 } -> { _id: 623.0 } m31300| Fri Feb 22 12:39:39.107 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:39-5127670bdd66f5428ed3b726", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536779107), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 12:39:39.111 [conn23] moveChunk 
migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 571.0 }, max: { _id: 623.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:39.111 [conn23] moveChunk updating self version to: 13|1||512766fa11fb11ce1f290be5 through { _id: 623.0 } -> { _id: 675.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:39.112 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:39-5127670bfd440aee3dad4af3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536779112), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:39.112 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:39.112 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:39.112 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:39.112 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:39.112 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:39.112 [cleanupOldData-5127670bfd440aee3dad4af4] (start) waiting to cleanup test.foo from { _id: 571.0 } -> { _id: 623.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:39.112 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:39.113 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:39-5127670bfd440aee3dad4af5", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536779113), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 571.0 }, max: { _id: 623.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 538, step5 of 6: 10, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:39.113 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 571.0 }, max: { _id: 623.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_571.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:14 r:236 w:22 reslen:37 552ms m30999| Fri Feb 22 12:39:39.113 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:39.114 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 10 version: 13|1||512766fa11fb11ce1f290be5 based on: 11|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:39.114 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:39.115 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:39.132 [cleanupOldData-5127670bfd440aee3dad4af4] waiting to remove documents for test.foo from { _id: 571.0 } -> { _id: 623.0 } m31100| Fri Feb 22 12:39:39.132 [cleanupOldData-5127670bfd440aee3dad4af4] moveChunk starting delete for: test.foo from { _id: 571.0 } -> { _id: 623.0 } m31200| Fri Feb 22 12:39:39.554 [conn13] end connection 165.225.128.186:51691 (20 connections now open) m31200| Fri Feb 22 12:39:39.554 [initandlisten] connection accepted from 165.225.128.186:54918 #33 (21 connections now open) m30998| Fri Feb 22 12:39:39.556 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:39.556 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:39.556 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:39 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "5127670bfed0a5416d51b836" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127670a11fb11ce1f290bed" } } m30998| Fri Feb 22 12:39:39.557 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 5127670bfed0a5416d51b836 m30998| Fri Feb 22 12:39:39.557 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:39.557 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:39.557 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:39.559 [Balancer] rs1-rs2 has more chunks me:6 best: rs1-rs1:6 m30998| Fri Feb 22 12:39:39.559 [Balancer] collection : test.foo m30998| Fri Feb 22 12:39:39.559 [Balancer] donor : 
rs1-rs0 chunks on 29 m30998| Fri Feb 22 12:39:39.559 [Balancer] receiver : rs1-rs1 chunks on 6 m30998| Fri Feb 22 12:39:39.559 [Balancer] threshold : 2 m30998| Fri Feb 22 12:39:39.559 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_623.0", lastmod: Timestamp 13000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 623.0 }, max: { _id: 675.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:39.559 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|12||000000000000000000000000min: { _id: 623.0 }max: { _id: 675.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:39.559 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 623.0 }, max: { _id: 675.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_623.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:39.560 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670bfd440aee3dad4af6 m31100| Fri Feb 22 12:39:39.560 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:39-5127670bfd440aee3dad4af7", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536779560), what: 
"moveChunk.start", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:39.561 [conn33] moveChunk request accepted at version 13|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:39.561 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:39.561 [migrateThread] starting receiving-end of migration of chunk { _id: 623.0 } -> { _id: 675.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:39.562 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:39.571 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.581 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.591 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.602 [conn33] moveChunk data 
transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.618 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.632 [cleanupOldData-5127670bfd440aee3dad4af4] Helpers::removeRangeUnlocked time spent waiting for replication: 481ms m31100| Fri Feb 22 12:39:39.632 [cleanupOldData-5127670bfd440aee3dad4af4] moveChunk deleted 52 documents for test.foo from { _id: 571.0 } -> { _id: 623.0 } m31100| Fri Feb 22 12:39:39.650 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:39.714 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:39.779 [conn14] end connection 165.225.128.186:65458 (20 connections now open) m31200| Fri 
Feb 22 12:39:39.779 [initandlisten] connection accepted from 165.225.128.186:49464 #34 (21 connections now open) m31100| Fri Feb 22 12:39:39.842 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:40.091 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:40.091 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 623.0 } -> { _id: 675.0 } m31200| Fri Feb 22 12:39:40.092 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 623.0 } -> { _id: 675.0 } m31100| Fri Feb 22 12:39:40.098 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.098 [conn33] moveChunk setting version to: 14|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:40.099 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:40.102 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 623.0 } -> { _id: 675.0 } m31200| Fri Feb 22 12:39:40.102 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 623.0 } -> { _id: 675.0 } m31200| Fri Feb 22 12:39:40.102 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670c6bc5d04ff892a69d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", 
time: new Date(1361536780102), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 529, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:40.109 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 623.0 }, max: { _id: 675.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:40.109 [conn33] moveChunk updating self version to: 14|1||512766fa11fb11ce1f290be5 through { _id: 675.0 } -> { _id: 727.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:40.109 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670cfd440aee3dad4af8", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536780109), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:40.109 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:40.109 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:40.109 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:40.109 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:40.109 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:40.109 [cleanupOldData-5127670cfd440aee3dad4af9] (start) waiting to cleanup test.foo from { _id: 623.0 } -> { _id: 675.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:40.110 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:40.110 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670cfd440aee3dad4afa", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536780110), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 623.0 }, max: { _id: 675.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 537, step5 of 6: 10, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:40.110 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 623.0 }, max: { _id: 675.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_623.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:14 r:155 w:28 reslen:37 551ms m30998| Fri Feb 22 12:39:40.110 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:40.111 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 7 version: 14|1||512766fa11fb11ce1f290be5 based on: 12|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:40.111 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:40.111 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m30999| Fri Feb 22 12:39:40.115 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:40.115 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:40.116 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:40 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670c11fb11ce1f290bee" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127670bfed0a5416d51b836" } } m30999| Fri Feb 22 12:39:40.116 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670c11fb11ce1f290bee m30999| Fri Feb 22 12:39:40.116 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:40.116 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:40.116 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:40.117 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:40.117 [Balancer] donor : rs1-rs0 chunks on 28 m30999| Fri Feb 22 12:39:40.117 [Balancer] receiver : rs1-rs2 chunks on 6 m30999| Fri Feb 22 12:39:40.117 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:40.117 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_675.0", lastmod: Timestamp 14000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 675.0 }, max: { _id: 727.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:40.118 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|13||000000000000000000000000min: { _id: 675.0 }max: { _id: 727.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:40.118 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 675.0 }, max: { _id: 727.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_675.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:40.119 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670cfd440aee3dad4afb m31100| Fri Feb 22 12:39:40.119 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670cfd440aee3dad4afc", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536780119), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:40.119 [conn23] moveChunk request accepted at version 14|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:40.120 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:40.120 [migrateThread] starting receiving-end of migration of chunk { _id: 675.0 } -> { _id: 727.0 } for collection test.foo from 
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:40.120 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:40.130 [cleanupOldData-5127670cfd440aee3dad4af9] waiting to remove documents for test.foo from { _id: 623.0 } -> { _id: 675.0 } m31100| Fri Feb 22 12:39:40.130 [cleanupOldData-5127670cfd440aee3dad4af9] moveChunk starting delete for: test.foo from { _id: 623.0 } -> { _id: 675.0 } m31100| Fri Feb 22 12:39:40.130 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.140 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.150 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.161 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.177 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.209 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.273 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.401 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:40.487 
[cleanupOldData-5127670cfd440aee3dad4af9] Helpers::removeRangeUnlocked time spent waiting for replication: 330ms
m31100| Fri Feb 22 12:39:40.487 [cleanupOldData-5127670cfd440aee3dad4af9] moveChunk deleted 52 documents for test.foo from { _id: 623.0 } -> { _id: 675.0 }
m31300| Fri Feb 22 12:39:40.651 [migrateThread] Waiting for replication to catch up before entering critical section
m31300| Fri Feb 22 12:39:40.651 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m31300| Fri Feb 22 12:39:40.651 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m31100| Fri Feb 22 12:39:40.658 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:40.658 [conn23] moveChunk setting version to: 15|0||512766fa11fb11ce1f290be5
m31300| Fri Feb 22 12:39:40.658 [conn30] Waiting for commit to finish
m31300| Fri Feb 22 12:39:40.662 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m31300| Fri Feb 22 12:39:40.662 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 675.0 } -> { _id: 727.0 }
m31300| Fri Feb 22 12:39:40.662 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670cdd66f5428ed3b727", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536780662), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 10 } }
m31100| Fri Feb 22 12:39:40.668 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 675.0 }, max: { _id: 727.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 12:39:40.668 [conn23] moveChunk updating self version to: 15|1||512766fa11fb11ce1f290be5 through { _id: 727.0 } -> { _id: 779.0 } for collection 'test.foo'
m31100| Fri Feb 22 12:39:40.669 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670cfd440aee3dad4afd", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536780669), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, from: "rs1-rs0", to: "rs1-rs2" } }
m31100| Fri Feb 22 12:39:40.669 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:40.669 [conn23] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:40.669 [conn23] forking for cleanup of chunk data
m31100| Fri Feb 22 12:39:40.669 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:40.669 [conn23] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:40.669 [cleanupOldData-5127670cfd440aee3dad4afe] (start) waiting to cleanup test.foo from { _id: 675.0 } -> { _id: 727.0 }, # cursors remaining: 0
m31100| Fri Feb 22 12:39:40.669 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked.
m31100| Fri Feb 22 12:39:40.669 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:40-5127670cfd440aee3dad4aff", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536780669), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 675.0 }, max: { _id: 727.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } }
m31100| Fri Feb 22 12:39:40.669 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 675.0 }, max: { _id: 727.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_675.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:37 r:128 w:23 reslen:37 551ms
m30999| Fri Feb 22 12:39:40.669 [Balancer] moveChunk result: { ok: 1.0 }
m30999| Fri Feb 22 12:39:40.670 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 15|1||512766fa11fb11ce1f290be5 based on: 13|1||512766fa11fb11ce1f290be5
m30999| Fri Feb 22 12:39:40.671 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:39:40.671 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked.
m31100| Fri Feb 22 12:39:40.689 [cleanupOldData-5127670cfd440aee3dad4afe] waiting to remove documents for test.foo from { _id: 675.0 } -> { _id: 727.0 }
m31100| Fri Feb 22 12:39:40.689 [cleanupOldData-5127670cfd440aee3dad4afe] moveChunk starting delete for: test.foo from { _id: 675.0 } -> { _id: 727.0 }
m31100| Fri Feb 22 12:39:41.000 [conn16] end connection 165.225.128.186:39561 (22 connections now open)
m31100| Fri Feb 22 12:39:41.001 [initandlisten] connection accepted from 165.225.128.186:46547 #39 (23 connections now open)
m30998| Fri Feb 22 12:39:41.112 [Balancer] Refreshing MaxChunkSize: 1
m30998| Fri Feb 22 12:39:41.113 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 )
m30998| Fri Feb 22 12:39:41.113 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:
m30998| { "state" : 1,
m30998|   "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758",
m30998|   "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838",
m30998|   "when" : { "$date" : "Fri Feb 22 12:39:41 2013" },
m30998|   "why" : "doing balance round",
m30998|   "ts" : { "$oid" : "5127670dfed0a5416d51b837" } }
m30998| { "_id" : "balancer",
m30998|   "state" : 0,
m30998|   "ts" : { "$oid" : "5127670c11fb11ce1f290bee" } }
m30998| Fri Feb 22 12:39:41.114 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 5127670dfed0a5416d51b837
m30998| Fri Feb 22 12:39:41.114 [Balancer] *** start balancing round
m30998| Fri Feb 22 12:39:41.114 [Balancer] waitForDelete: 0
m30998| Fri Feb 22 12:39:41.114 [Balancer] secondaryThrottle: 1
m30998| Fri Feb 22 12:39:41.116 [Balancer] rs1-rs2 has more chunks me:7 best: rs1-rs1:7
m30998| Fri Feb 22 12:39:41.116 [Balancer] collection : test.foo
m30998| Fri Feb 22 12:39:41.116 [Balancer] donor      : rs1-rs0 chunks on 27
m30998| Fri Feb 22 12:39:41.116 [Balancer] receiver   : rs1-rs1 chunks on 7
m30998| Fri Feb 22 12:39:41.116 [Balancer] threshold  : 2
m30998| Fri Feb 22 12:39:41.116 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_727.0", lastmod: Timestamp 15000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 727.0 }, max: { _id: 779.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag []
m30998| Fri Feb 22 12:39:41.116 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|14||000000000000000000000000min: { _id: 727.0 }max: { _id: 779.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m31100| Fri Feb 22 12:39:41.116 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 727.0 }, max: { _id: 779.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_727.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false }
m31100| Fri Feb 22 12:39:41.117 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670dfd440aee3dad4b00
m31100| Fri Feb 22 12:39:41.117 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:41-5127670dfd440aee3dad4b01", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536781117), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 12:39:41.119 [conn33] moveChunk request accepted at version 15|1||512766fa11fb11ce1f290be5
m31100| Fri Feb 22 12:39:41.119 [conn33] moveChunk number of documents: 52
m31200| Fri Feb 22 12:39:41.119 [migrateThread] starting receiving-end of migration of chunk { _id: 727.0 } -> { _id: 779.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected)
m31200| Fri Feb 22 12:39:41.120 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 12:39:41.129 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.139 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.150 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.160 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.176 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.209 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.212 [cleanupOldData-5127670cfd440aee3dad4afe] Helpers::removeRangeUnlocked time spent waiting for replication: 504ms
m31100| Fri Feb 22 12:39:41.212 [cleanupOldData-5127670cfd440aee3dad4afe] moveChunk deleted 52 documents for test.foo from { _id: 675.0 } -> { _id: 727.0 }
m31100| Fri Feb 22 12:39:41.273 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.401 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 12:39:41.477 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 12:39:41.477 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781477), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781478), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781478), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:41.478 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781478), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781479), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 12:39:41.479 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781479), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.482 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:41.482 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:41.482 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:41.482 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781483), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781483), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:41.483 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781483), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781484), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs2
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781484), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:41.484 [ReplicaSetMonitorWatcher] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781484), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781485), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781485), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:41.485 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:41.486 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536781485), ok: 1.0 }
m30999| Fri Feb 22 12:39:41.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:41.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:41.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m31200| Fri Feb 22 12:39:41.652 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 12:39:41.652 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m31200| Fri Feb 22 12:39:41.652 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m31100| Fri Feb 22 12:39:41.658 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.658 [conn33] moveChunk setting version to: 16|0||512766fa11fb11ce1f290be5
m31200| Fri Feb 22 12:39:41.658 [conn32] Waiting for commit to finish
m31200| Fri Feb 22 12:39:41.663 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m31200| Fri Feb 22 12:39:41.663 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 727.0 } -> { _id: 779.0 }
m31200| Fri Feb 22 12:39:41.663 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:41-5127670d6bc5d04ff892a69e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536781663), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 11 } }
m31100| Fri Feb 22 12:39:41.668 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 727.0 }, max: { _id: 779.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 12:39:41.668 [conn33] moveChunk updating self version to: 16|1||512766fa11fb11ce1f290be5 through { _id: 779.0 } -> { _id: 831.0 } for collection 'test.foo'
m31100| Fri Feb 22 12:39:41.669 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:41-5127670dfd440aee3dad4b02", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536781669), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 12:39:41.669 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:41.669 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:41.669 [conn33] forking for cleanup of chunk data
m31100| Fri Feb 22 12:39:41.669 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:41.669 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:41.669 [cleanupOldData-5127670dfd440aee3dad4b03] (start) waiting to cleanup test.foo from { _id: 727.0 } -> { _id: 779.0 }, # cursors remaining: 0
m31100| Fri Feb 22 12:39:41.670 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked.
m31100| Fri Feb 22 12:39:41.670 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:41-5127670dfd440aee3dad4b04", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536781670), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 727.0 }, max: { _id: 779.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } }
m31100| Fri Feb 22 12:39:41.670 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 727.0 }, max: { _id: 779.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_727.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:186 w:33 reslen:37 553ms
m30998| Fri Feb 22 12:39:41.670 [Balancer] moveChunk result: { ok: 1.0 }
m30998| Fri Feb 22 12:39:41.671 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 8 version: 16|1||512766fa11fb11ce1f290be5 based on: 14|1||512766fa11fb11ce1f290be5
m30998| Fri Feb 22 12:39:41.671 [Balancer] *** end of balancing round
m30998| Fri Feb 22 12:39:41.672 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked.
m30999| Fri Feb 22 12:39:41.672 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:41.672 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:41.673 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:39:41 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127670d11fb11ce1f290bef" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127670dfed0a5416d51b837" } }
m30999| Fri Feb 22 12:39:41.673 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670d11fb11ce1f290bef
m30999| Fri Feb 22 12:39:41.673 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:41.673 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:39:41.673 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:39:41.675 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:39:41.675 [Balancer] donor      : rs1-rs0 chunks on 26
m30999| Fri Feb 22 12:39:41.675 [Balancer] receiver   : rs1-rs2 chunks on 7
m30999| Fri Feb 22 12:39:41.675 [Balancer] threshold  : 2
m30999| Fri Feb 22 12:39:41.675 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_779.0", lastmod: Timestamp 16000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 779.0 }, max: { _id: 831.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag []
m30999| Fri Feb 22 12:39:41.675 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|15||000000000000000000000000min: { _id: 779.0 }max: { _id: 831.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m31100| Fri Feb 22 12:39:41.676 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 779.0 }, max: { _id: 831.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_779.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false }
m31100| Fri Feb 22 12:39:41.677 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670dfd440aee3dad4b05
m31100| Fri Feb 22 12:39:41.677 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:41-5127670dfd440aee3dad4b06", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536781677), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, from: "rs1-rs0", to: "rs1-rs2" } }
m31100| Fri Feb 22 12:39:41.678 [conn23] moveChunk request accepted at version 16|1||512766fa11fb11ce1f290be5
m31100| Fri Feb 22 12:39:41.678 [conn23] moveChunk number of documents: 52
m31300| Fri Feb 22 12:39:41.678 [migrateThread] starting receiving-end of migration of chunk { _id: 779.0 } -> { _id: 831.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected)
m31300| Fri Feb 22 12:39:41.679 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 12:39:41.688 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.689 [cleanupOldData-5127670dfd440aee3dad4b03] waiting to remove documents for test.foo from { _id: 727.0 } -> { _id: 779.0 }
m31100| Fri Feb 22 12:39:41.689 [cleanupOldData-5127670dfd440aee3dad4b03] moveChunk starting delete for: test.foo from { _id: 727.0 } -> { _id: 779.0 }
m31100| Fri Feb 22 12:39:41.699 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.709 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.719 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.735 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.768 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.832 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:41.960 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:42.212
[cleanupOldData-5127670dfd440aee3dad4b03] Helpers::removeRangeUnlocked time spent waiting for replication: 504ms m31100| Fri Feb 22 12:39:42.212 [cleanupOldData-5127670dfd440aee3dad4b03] moveChunk deleted 52 documents for test.foo from { _id: 727.0 } -> { _id: 779.0 } m31300| Fri Feb 22 12:39:42.212 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:42.212 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 779.0 } -> { _id: 831.0 } m31300| Fri Feb 22 12:39:42.213 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 779.0 } -> { _id: 831.0 } m31100| Fri Feb 22 12:39:42.217 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.217 [conn23] moveChunk setting version to: 17|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:42.217 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:42.223 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 779.0 } -> { _id: 831.0 } m31300| Fri Feb 22 12:39:42.223 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 779.0 } -> { _id: 831.0 } m31300| Fri Feb 22 12:39:42.223 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:42-5127670edd66f5428ed3b728", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536782223), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 532, step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 12:39:42.227 [conn23] moveChunk 
migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 779.0 }, max: { _id: 831.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:42.227 [conn23] moveChunk updating self version to: 17|1||512766fa11fb11ce1f290be5 through { _id: 831.0 } -> { _id: 883.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:42.228 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:42-5127670efd440aee3dad4b07", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536782228), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:42.228 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:42.228 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:42.228 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:42.228 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:42.228 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:42.228 [cleanupOldData-5127670efd440aee3dad4b08] (start) waiting to cleanup test.foo from { _id: 779.0 } -> { _id: 831.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:42.228 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:42.228 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:42-5127670efd440aee3dad4b09", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536782228), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 779.0 }, max: { _id: 831.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:42.229 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 779.0 }, max: { _id: 831.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_779.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:23 r:172 w:28 reslen:37 553ms m30999| Fri Feb 22 12:39:42.229 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:42.230 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 12 version: 17|1||512766fa11fb11ce1f290be5 based on: 15|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:42.230 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:42.230 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:42.248 [cleanupOldData-5127670efd440aee3dad4b08] waiting to remove documents for test.foo from { _id: 779.0 } -> { _id: 831.0 } m31100| Fri Feb 22 12:39:42.248 [cleanupOldData-5127670efd440aee3dad4b08] moveChunk starting delete for: test.foo from { _id: 779.0 } -> { _id: 831.0 } { "rs1-rs0" : 25, "rs1-rs1" : 8, "rs1-rs2" : 8 } total: 41 min: 8 max: 25 m30998| Fri Feb 22 12:39:42.672 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:42.673 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:42.673 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:42 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "5127670efed0a5416d51b838" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127670d11fb11ce1f290bef" } } m30998| Fri Feb 22 12:39:42.674 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 5127670efed0a5416d51b838 m30998| Fri Feb 22 12:39:42.674 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:42.674 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:42.674 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:42.675 [Balancer] rs1-rs2 has more chunks me:8 best: rs1-rs1:8 m30998| Fri Feb 22 12:39:42.675 [Balancer] collection : test.foo m30998| Fri Feb 22 12:39:42.675 [Balancer] donor : rs1-rs0 chunks on 25 m30998| Fri Feb 22 12:39:42.675 [Balancer] receiver : rs1-rs1 chunks on 8 m30998| Fri Feb 22 12:39:42.675 [Balancer] threshold : 2 m30998| 
Fri Feb 22 12:39:42.675 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_831.0", lastmod: Timestamp 17000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 831.0 }, max: { _id: 883.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:42.675 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|16||000000000000000000000000min: { _id: 831.0 }max: { _id: 883.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:42.675 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 831.0 }, max: { _id: 883.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_831.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:42.676 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670efd440aee3dad4b0a m31100| Fri Feb 22 12:39:42.676 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:42-5127670efd440aee3dad4b0b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536782676), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:42.677 [conn33] moveChunk 
request accepted at version 17|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:42.677 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:42.677 [migrateThread] starting receiving-end of migration of chunk { _id: 831.0 } -> { _id: 883.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:42.678 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:42.687 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.698 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.708 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.710 [cleanupOldData-5127670efd440aee3dad4b08] Helpers::removeRangeUnlocked time spent waiting for replication: 441ms m31100| Fri Feb 22 12:39:42.710 [cleanupOldData-5127670efd440aee3dad4b08] moveChunk 
deleted 52 documents for test.foo from { _id: 779.0 } -> { _id: 831.0 } m31100| Fri Feb 22 12:39:42.718 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.734 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.766 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.830 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:42.958 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { 
_id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:43.196 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:43.197 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 831.0 } -> { _id: 883.0 } m31200| Fri Feb 22 12:39:43.197 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 831.0 } -> { _id: 883.0 } m31100| Fri Feb 22 12:39:43.215 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.215 [conn33] moveChunk setting version to: 18|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:43.215 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:43.217 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 831.0 } -> { _id: 883.0 } m31200| Fri Feb 22 12:39:43.217 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 831.0 } -> { _id: 883.0 } m31200| Fri Feb 22 12:39:43.217 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:43-5127670f6bc5d04ff892a69f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536783217), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 518, step4 of 5: 0, step5 of 5: 20 } } m31100| Fri Feb 22 12:39:43.225 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 831.0 }, max: { _id: 883.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:43.225 [conn33] moveChunk updating self version to: 18|1||512766fa11fb11ce1f290be5 through { _id: 883.0 } -> { _id: 935.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:43.226 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:43-5127670ffd440aee3dad4b0c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536783226), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:43.226 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:43.226 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:43.226 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:43.226 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:43.226 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:43.226 [cleanupOldData-5127670ffd440aee3dad4b0d] (start) waiting to cleanup test.foo from { _id: 831.0 } -> { _id: 883.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:43.226 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:43.226 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:43-5127670ffd440aee3dad4b0e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536783226), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 831.0 }, max: { _id: 883.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 537, step5 of 6: 10, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:43.226 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 831.0 }, max: { _id: 883.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_831.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:15 r:113 w:25 reslen:37 551ms m30998| Fri Feb 22 12:39:43.226 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:43.227 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 9 version: 18|1||512766fa11fb11ce1f290be5 based on: 16|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:43.227 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:43.228 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m30999| Fri Feb 22 12:39:43.231 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:43.231 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:43.231 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:43 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127670f11fb11ce1f290bf0" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127670efed0a5416d51b838" } } m30999| Fri Feb 22 12:39:43.232 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127670f11fb11ce1f290bf0 m30999| Fri Feb 22 12:39:43.232 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:43.232 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:43.232 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:43.233 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:43.233 [Balancer] donor : rs1-rs0 chunks on 24 m30999| Fri Feb 22 12:39:43.233 [Balancer] receiver : rs1-rs2 chunks on 8 m30999| Fri Feb 22 12:39:43.233 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:43.233 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_883.0", lastmod: Timestamp 18000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 883.0 }, max: { _id: 935.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:43.233 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: 
rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|17||000000000000000000000000min: { _id: 883.0 }max: { _id: 935.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:43.234 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 883.0 }, max: { _id: 935.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_883.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:43.234 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127670ffd440aee3dad4b0f m31100| Fri Feb 22 12:39:43.234 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:43-5127670ffd440aee3dad4b10", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536783234), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:43.235 [conn23] moveChunk request accepted at version 18|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:43.235 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:43.236 [migrateThread] starting receiving-end of migration of chunk { _id: 883.0 } -> { _id: 935.0 } for collection test.foo from 
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:43.236 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:43.246 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.246 [cleanupOldData-5127670ffd440aee3dad4b0d] waiting to remove documents for test.foo from { _id: 831.0 } -> { _id: 883.0 } m31100| Fri Feb 22 12:39:43.246 [cleanupOldData-5127670ffd440aee3dad4b0d] moveChunk starting delete for: test.foo from { _id: 831.0 } -> { _id: 883.0 } m31100| Fri Feb 22 12:39:43.256 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.266 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.276 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.293 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.325 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.389 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.517 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 22, clonedBytes: 221320, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:43.725 
[cleanupOldData-5127670ffd440aee3dad4b0d] Helpers::removeRangeUnlocked time spent waiting for replication: 365ms m31100| Fri Feb 22 12:39:43.725 [cleanupOldData-5127670ffd440aee3dad4b0d] moveChunk deleted 52 documents for test.foo from { _id: 831.0 } -> { _id: 883.0 } m31100| Fri Feb 22 12:39:43.773 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 47, clonedBytes: 472820, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31300| Fri Feb 22 12:39:43.828 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:43.828 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m31300| Fri Feb 22 12:39:43.829 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m30998| Fri Feb 22 12:39:44.228 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:44.229 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:44.229 [Balancer] checking last ping for lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' against process and ping Thu Jan 1 00:00:00 1970 m30998| Fri Feb 22 12:39:44.229 [Balancer] could not force lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' because elapsed time 0 <= takeover time 900000 m30998| Fri Feb 22 12:39:44.229 [Balancer] skipping balancing round because another balancer is active m31100| Fri Feb 22 12:39:44.286 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:44.286 [conn23] moveChunk setting version to: 19|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:44.286 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:44.295 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m31300| Fri Feb 22 12:39:44.295 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 883.0 } -> { _id: 935.0 } m31300| Fri Feb 22 12:39:44.295 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:44-51276710dd66f5428ed3b729", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536784295), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 591, step4 of 5: 0, step5 of 5: 467 } } m31100| Fri Feb 22 12:39:44.296 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 883.0 }, max: { _id: 935.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:44.296 [conn23] moveChunk updating self version to: 19|1||512766fa11fb11ce1f290be5 through { _id: 935.0 } -> { _id: 987.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:44.297 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:44-51276710fd440aee3dad4b11", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new 
Date(1361536784297), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:44.297 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:44.297 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:44.297 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:44.297 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:44.297 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:44.297 [cleanupOldData-51276710fd440aee3dad4b12] (start) waiting to cleanup test.foo from { _id: 883.0 } -> { _id: 935.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:44.297 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. m31100| Fri Feb 22 12:39:44.297 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:44-51276710fd440aee3dad4b13", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536784297), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 883.0 }, max: { _id: 935.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:44.297 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 883.0 }, max: { _id: 935.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_883.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 
keyUpdates:0 locks(micros) W:16 r:115 w:21 reslen:37 1063ms m30999| Fri Feb 22 12:39:44.297 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:44.299 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 19|1||512766fa11fb11ce1f290be5 based on: 17|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:44.299 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:44.299 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. m31100| Fri Feb 22 12:39:44.317 [cleanupOldData-51276710fd440aee3dad4b12] waiting to remove documents for test.foo from { _id: 883.0 } -> { _id: 935.0 } m31100| Fri Feb 22 12:39:44.317 [cleanupOldData-51276710fd440aee3dad4b12] moveChunk starting delete for: test.foo from { _id: 883.0 } -> { _id: 935.0 } m31102| Fri Feb 22 12:39:44.613 [conn9] end connection 165.225.128.186:41546 (13 connections now open) m31102| Fri Feb 22 12:39:44.613 [initandlisten] connection accepted from 165.225.128.186:50405 #22 (14 connections now open) m31100| Fri Feb 22 12:39:44.696 [cleanupOldData-51276710fd440aee3dad4b12] Helpers::removeRangeUnlocked time spent waiting for replication: 356ms m31100| Fri Feb 22 12:39:44.696 [cleanupOldData-51276710fd440aee3dad4b12] moveChunk deleted 52 documents for test.foo from { _id: 883.0 } -> { _id: 935.0 } m31102| Fri Feb 22 12:39:44.795 [conn10] end connection 165.225.128.186:51619 (13 connections now open) m31102| Fri Feb 22 12:39:44.795 [initandlisten] connection accepted from 165.225.128.186:55982 #23 (14 connections now open) m30999| Fri Feb 22 12:39:45.300 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:45.300 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:45.301 [Balancer] about to acquire distributed lock 
'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:39:45 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127671111fb11ce1f290bf1" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127670f11fb11ce1f290bf0" } }
m30999| Fri Feb 22 12:39:45.301 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127671111fb11ce1f290bf1
m30999| Fri Feb 22 12:39:45.301 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:45.301 [Balancer] waitForDelete: 0
m30999| Fri Feb 22 12:39:45.301 [Balancer] secondaryThrottle: 1
m30999| Fri Feb 22 12:39:45.303 [Balancer] rs1-rs2 has more chunks me:9 best: rs1-rs1:9
m30999| Fri Feb 22 12:39:45.303 [Balancer] collection : test.foo
m30999| Fri Feb 22 12:39:45.303 [Balancer] donor : rs1-rs0 chunks on 23
m30999| Fri Feb 22 12:39:45.303 [Balancer] receiver : rs1-rs1 chunks on 9
m30999| Fri Feb 22 12:39:45.303 [Balancer] threshold : 2
m30999| Fri Feb 22 12:39:45.303 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_935.0", lastmod: Timestamp 19000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 935.0 }, max: { _id: 987.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag []
m30999| Fri Feb 22 12:39:45.304 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 19|1||000000000000000000000000min: { _id: 935.0 }max: { _id: 987.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> 
rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:45.306 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 935.0 }, max: { _id: 987.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_935.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:45.307 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276711fd440aee3dad4b14 m31100| Fri Feb 22 12:39:45.307 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:45-51276711fd440aee3dad4b15", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536785307), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:45.308 [conn23] moveChunk request accepted at version 19|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:45.309 [conn23] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:45.309 [migrateThread] starting receiving-end of migration of chunk { _id: 935.0 } -> { _id: 987.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:45.310 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:45.319 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.329 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.340 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.350 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.366 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.398 [conn23] moveChunk data transfer 
progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.463 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:45.591 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:45.842 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:45.842 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m31200| Fri Feb 22 12:39:45.842 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m31100| Fri Feb 22 12:39:45.847 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 
12:39:45.847 [conn23] moveChunk setting version to: 20|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:45.847 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:45.853 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m31200| Fri Feb 22 12:39:45.853 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 935.0 } -> { _id: 987.0 } m31200| Fri Feb 22 12:39:45.853 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:45-512767116bc5d04ff892a6a0", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536785853), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 12:39:45.857 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 935.0 }, max: { _id: 987.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:45.858 [conn23] moveChunk updating self version to: 20|1||512766fa11fb11ce1f290be5 through { _id: 987.0 } -> { _id: 1039.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:45.858 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:45-51276711fd440aee3dad4b16", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536785858), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:45.858 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:45.858 [conn23] 
MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:45.858 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:45.858 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:45.858 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:45.859 [cleanupOldData-51276711fd440aee3dad4b17] (start) waiting to cleanup test.foo from { _id: 935.0 } -> { _id: 987.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:45.859 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. m31100| Fri Feb 22 12:39:45.859 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:45-51276711fd440aee3dad4b18", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536785859), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 935.0 }, max: { _id: 987.0 }, step1 of 6: 2, step2 of 6: 2, step3 of 6: 0, step4 of 6: 538, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:45.859 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 935.0 }, max: { _id: 987.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_935.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:42 r:175 w:27 reslen:37 555ms m30999| Fri Feb 22 12:39:45.859 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:45.860 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 14 version: 20|1||512766fa11fb11ce1f290be5 based on: 19|1||512766fa11fb11ce1f290be5 
m30999| Fri Feb 22 12:39:45.861 [Balancer] *** end of balancing round
m30999| Fri Feb 22 12:39:45.861 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked.
m31100| Fri Feb 22 12:39:45.879 [cleanupOldData-51276711fd440aee3dad4b17] waiting to remove documents for test.foo from { _id: 935.0 } -> { _id: 987.0 }
m31100| Fri Feb 22 12:39:45.879 [cleanupOldData-51276711fd440aee3dad4b17] moveChunk starting delete for: test.foo from { _id: 935.0 } -> { _id: 987.0 }
m31100| Fri Feb 22 12:39:46.400 [cleanupOldData-51276711fd440aee3dad4b17] Helpers::removeRangeUnlocked time spent waiting for replication: 505ms
m31100| Fri Feb 22 12:39:46.400 [cleanupOldData-51276711fd440aee3dad4b17] moveChunk deleted 52 documents for test.foo from { _id: 935.0 } -> { _id: 987.0 }
m30999| Fri Feb 22 12:39:46.862 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:46.862 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:46.862 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:
m30999| { "state" : 1,
m30999|   "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113",
m30999|   "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838",
m30999|   "when" : { "$date" : "Fri Feb 22 12:39:46 2013" },
m30999|   "why" : "doing balance round",
m30999|   "ts" : { "$oid" : "5127671211fb11ce1f290bf2" } }
m30999| { "_id" : "balancer",
m30999|   "state" : 0,
m30999|   "ts" : { "$oid" : "5127671111fb11ce1f290bf1" } }
m30999| Fri Feb 22 12:39:46.863 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127671211fb11ce1f290bf2
m30999| Fri Feb 22 12:39:46.863 [Balancer] *** start balancing round
m30999| Fri Feb 22 12:39:46.863 [Balancer] 
waitForDelete: 0 m30999| Fri Feb 22 12:39:46.863 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:46.865 [Balancer] collection : test.foo m30999| Fri Feb 22 12:39:46.865 [Balancer] donor : rs1-rs0 chunks on 22 m30999| Fri Feb 22 12:39:46.865 [Balancer] receiver : rs1-rs2 chunks on 9 m30999| Fri Feb 22 12:39:46.865 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:46.865 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_987.0", lastmod: Timestamp 20000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 987.0 }, max: { _id: 1039.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30999| Fri Feb 22 12:39:46.865 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 20|1||000000000000000000000000min: { _id: 987.0 }max: { _id: 1039.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:46.866 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 987.0 }, max: { _id: 1039.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_987.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:46.867 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276712fd440aee3dad4b19 m31100| Fri Feb 22 12:39:46.867 [conn23] about to log metadata event: { _id: 
"bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:46-51276712fd440aee3dad4b1a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536786867), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 987.0 }, max: { _id: 1039.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:46.868 [conn23] moveChunk request accepted at version 20|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:46.868 [conn23] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:46.868 [migrateThread] starting receiving-end of migration of chunk { _id: 987.0 } -> { _id: 1039.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:46.869 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:46.878 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:46.889 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:46.899 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, 
shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:46.909 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:46.925 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:46.957 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:47.022 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:47.150 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0 m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787309), ok: 1.0 } m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:47.309 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787309), ok: 1.0 } m30998| Fri Feb 22 12:39:47.310 
[ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787310), ok: 1.0 } m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787310), ok: 1.0 } m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 
22 12:39:47.310 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787310), ok: 1.0 } m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787311), ok: 1.0 } m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:47.311 
[ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787311), ok: 1.0 } m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:47.311 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787311), ok: 1.0 } m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] 
ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787312), ok: 1.0 } m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787312), ok: 1.0 } m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs2 m30998| Fri Feb 22 12:39:47.312 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ 
"bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787312), ok: 1.0 } m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787313), ok: 1.0 } m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", 
"bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787313), ok: 1.0 } m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:47.313 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536787313), ok: 1.0 } m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, 
localTime: new Date(1361536787314), ok: 1.0 } m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:47.314 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 { "rs1-rs0" : 22, "rs1-rs1" : 10, "rs1-rs2" : 9 } total: 41 min: 9 max: 22 m31300| Fri Feb 22 12:39:47.402 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:47.402 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 987.0 } -> { _id: 1039.0 } m31300| Fri Feb 22 12:39:47.403 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 987.0 } -> { _id: 1039.0 } m31100| Fri Feb 22 12:39:47.406 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:47.406 [conn23] moveChunk setting version to: 21|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:47.406 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:47.413 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 987.0 } -> { _id: 1039.0 } m31300| Fri Feb 22 12:39:47.413 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 987.0 } -> { _id: 1039.0 } m31300| Fri Feb 22 12:39:47.413 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:47-51276713dd66f5428ed3b72a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new 
Date(1361536787413), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 987.0 }, max: { _id: 1039.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 532, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:47.416 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 987.0 }, max: { _id: 1039.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:47.416 [conn23] moveChunk updating self version to: 21|1||512766fa11fb11ce1f290be5 through { _id: 1039.0 } -> { _id: 1091.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:47.417 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:47-51276713fd440aee3dad4b1b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536787417), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 987.0 }, max: { _id: 1039.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:47.417 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:47.417 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:47.417 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:47.417 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:47.417 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:47.417 [cleanupOldData-51276713fd440aee3dad4b1c] (start) waiting to cleanup test.foo from { _id: 987.0 } -> { _id: 1039.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:47.418 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:47.418 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:47-51276713fd440aee3dad4b1d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536787418), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 987.0 }, max: { _id: 1039.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:47.418 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 987.0 }, max: { _id: 1039.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_987.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:28 r:229 w:27 reslen:37 552ms m30999| Fri Feb 22 12:39:47.418 [Balancer] moveChunk result: { ok: 1.0 } m30999| Fri Feb 22 12:39:47.419 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 15 version: 21|1||512766fa11fb11ce1f290be5 based on: 20|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:47.419 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:47.420 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:47.438 [cleanupOldData-51276713fd440aee3dad4b1c] waiting to remove documents for test.foo from { _id: 987.0 } -> { _id: 1039.0 } m31100| Fri Feb 22 12:39:47.438 [cleanupOldData-51276713fd440aee3dad4b1c] moveChunk starting delete for: test.foo from { _id: 987.0 } -> { _id: 1039.0 } m31100| Fri Feb 22 12:39:47.919 [cleanupOldData-51276713fd440aee3dad4b1c] Helpers::removeRangeUnlocked time spent waiting for replication: 460ms m31100| Fri Feb 22 12:39:47.919 [cleanupOldData-51276713fd440aee3dad4b1c] moveChunk deleted 52 documents for test.foo from { _id: 987.0 } -> { _id: 1039.0 } m30999| Fri Feb 22 12:39:48.420 [Balancer] Refreshing MaxChunkSize: 1 m30999| Fri Feb 22 12:39:48.421 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 ) m30999| Fri Feb 22 12:39:48.421 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838: m30999| { "state" : 1, m30999| "who" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838:Balancer:10113", m30999| "process" : "bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838", m30999| "when" : { "$date" : "Fri Feb 22 12:39:48 2013" }, m30999| "why" : "doing balance round", m30999| "ts" : { "$oid" : "5127671411fb11ce1f290bf3" } } m30999| { "_id" : "balancer", m30999| "state" : 0, m30999| "ts" : { "$oid" : "5127671211fb11ce1f290bf2" } } m30999| Fri Feb 22 12:39:48.422 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' acquired, ts : 5127671411fb11ce1f290bf3 m30999| Fri Feb 22 12:39:48.422 [Balancer] *** start balancing round m30999| Fri Feb 22 12:39:48.422 [Balancer] waitForDelete: 0 m30999| Fri Feb 22 12:39:48.422 [Balancer] secondaryThrottle: 1 m30999| Fri Feb 22 12:39:48.423 [Balancer] rs1-rs2 has more chunks me:10 best: rs1-rs1:10 m30999| Fri Feb 22 12:39:48.423 [Balancer] 
collection : test.foo m30999| Fri Feb 22 12:39:48.423 [Balancer] donor : rs1-rs0 chunks on 21 m30999| Fri Feb 22 12:39:48.423 [Balancer] receiver : rs1-rs1 chunks on 10 m30999| Fri Feb 22 12:39:48.423 [Balancer] threshold : 2 m30999| Fri Feb 22 12:39:48.423 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1039.0", lastmod: Timestamp 21000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30999| Fri Feb 22 12:39:48.424 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 21|1||000000000000000000000000min: { _id: 1039.0 }max: { _id: 1091.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:48.424 [conn23] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1039.0 }, max: { _id: 1091.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1039.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:48.425 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276714fd440aee3dad4b1e m31100| Fri Feb 22 12:39:48.425 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:48-51276714fd440aee3dad4b1f", server: "bs-smartos-x86-64-1.10gen.cc", 
clientAddr: "165.225.128.186:62589", time: new Date(1361536788425), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1039.0 }, max: { _id: 1091.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:48.426 [conn23] moveChunk request accepted at version 21|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:48.426 [conn23] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:48.426 [migrateThread] starting receiving-end of migration of chunk { _id: 1039.0 } -> { _id: 1091.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:48.427 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:48.436 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.447 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.457 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, 
ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.467 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.483 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.516 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 7, clonedBytes: 70420, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31300| Fri Feb 22 12:39:48.550 [conn9] end connection 165.225.128.186:43215 (20 connections now open) m31300| Fri Feb 22 12:39:48.551 [initandlisten] connection accepted from 165.225.128.186:51696 #32 (21 connections now open) m31100| Fri Feb 22 12:39:48.580 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 7, clonedBytes: 70420, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.708 [conn23] moveChunk data transfer progress: { active: true, ns: 
"test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 16, clonedBytes: 160960, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:48.964 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 41, clonedBytes: 412460, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:49.079 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:49.079 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1039.0 } -> { _id: 1091.0 } m31200| Fri Feb 22 12:39:49.079 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1039.0 } -> { _id: 1091.0 } m31100| Fri Feb 22 12:39:49.477 [conn23] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:49.477 [conn23] moveChunk setting version to: 22|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:49.477 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:49.486 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1039.0 } -> { _id: 1091.0 } m31200| Fri Feb 22 12:39:49.486 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1039.0 } -> { _id: 1091.0 } m31200| 
Fri Feb 22 12:39:49.486 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:49-512767156bc5d04ff892a6a1", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536789486), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1039.0 }, max: { _id: 1091.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 651, step4 of 5: 0, step5 of 5: 407 } } m31100| Fri Feb 22 12:39:49.487 [conn23] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1039.0 }, max: { _id: 1091.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:49.487 [conn23] moveChunk updating self version to: 22|1||512766fa11fb11ce1f290be5 through { _id: 1091.0 } -> { _id: 1143.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:49.488 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:49-51276715fd440aee3dad4b20", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536789488), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1039.0 }, max: { _id: 1091.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:49.488 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:49.488 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:49.488 [conn23] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:49.488 [conn23] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:49.488 [conn23] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:49.488 [cleanupOldData-51276715fd440aee3dad4b21] (start) waiting to 
cleanup test.foo from { _id: 1039.0 } -> { _id: 1091.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:49.489 [conn23] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. m31100| Fri Feb 22 12:39:49.489 [conn23] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:49-51276715fd440aee3dad4b22", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:62589", time: new Date(1361536789489), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1039.0 }, max: { _id: 1091.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 1050, step5 of 6: 11, step6 of 6: 0 } } m30999| Fri Feb 22 12:39:49.489 [Balancer] moveChunk result: { ok: 1.0 } m31100| Fri Feb 22 12:39:49.489 [conn23] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1039.0 }, max: { _id: 1091.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1039.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:38 r:179 w:27 reslen:37 1064ms m30999| Fri Feb 22 12:39:49.490 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 16 version: 22|1||512766fa11fb11ce1f290be5 based on: 21|1||512766fa11fb11ce1f290be5 m30999| Fri Feb 22 12:39:49.490 [Balancer] *** end of balancing round m30999| Fri Feb 22 12:39:49.491 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:49.508 [cleanupOldData-51276715fd440aee3dad4b21] waiting to remove documents for test.foo from { _id: 1039.0 } -> { _id: 1091.0 } m31100| Fri Feb 22 12:39:49.508 [cleanupOldData-51276715fd440aee3dad4b21] moveChunk starting delete for: test.foo from { _id: 1039.0 } -> { _id: 1091.0 } m31100| Fri Feb 22 12:39:50.010 [cleanupOldData-51276715fd440aee3dad4b21] Helpers::removeRangeUnlocked time spent waiting for replication: 485ms m31100| Fri Feb 22 12:39:50.010 [cleanupOldData-51276715fd440aee3dad4b21] moveChunk deleted 52 documents for test.foo from { _id: 1039.0 } -> { _id: 1091.0 } m30998| Fri Feb 22 12:39:50.230 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:50.231 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:50.231 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:50 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "51276716fed0a5416d51b839" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127671411fb11ce1f290bf3" } } m30998| Fri Feb 22 12:39:50.232 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276716fed0a5416d51b839 m30998| Fri Feb 22 12:39:50.232 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:50.232 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:50.232 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:50.234 [Balancer] collection : test.foo m30998| Fri Feb 22 12:39:50.234 [Balancer] donor : rs1-rs0 chunks 
on 20 m30998| Fri Feb 22 12:39:50.234 [Balancer] receiver : rs1-rs2 chunks on 10 m30998| Fri Feb 22 12:39:50.234 [Balancer] threshold : 2 m30998| Fri Feb 22 12:39:50.234 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1091.0", lastmod: Timestamp 22000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30998| Fri Feb 22 12:39:50.234 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 1|21||000000000000000000000000min: { _id: 1091.0 }max: { _id: 1143.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:50.234 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 1091.0 }, max: { _id: 1143.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1091.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:50.235 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276716fd440aee3dad4b23 m31100| Fri Feb 22 12:39:50.235 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:50-51276716fd440aee3dad4b24", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536790235), what: "moveChunk.start", 
ns: "test.foo", details: { min: { _id: 1091.0 }, max: { _id: 1143.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:50.236 [conn33] moveChunk request accepted at version 22|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:50.237 [conn33] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:50.237 [migrateThread] starting receiving-end of migration of chunk { _id: 1091.0 } -> { _id: 1143.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:50.238 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:50.247 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:50.257 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:50.267 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:50.278 [conn33] moveChunk data transfer 
progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:50.294 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:50.326 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:50.390 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 12:39:50.491 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:50.492 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:50.492 [Balancer] checking last ping for lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' against process and ping Thu Jan 1 00:00:00 1970
m30999| Fri Feb 22 12:39:50.492 [Balancer] creating new connection to:bs-smartos-x86-64-1.10gen.cc:29000
m30999| Fri Feb 22 12:39:50.492 BackgroundJob starting: ConnectBG
m30999| Fri Feb 22 12:39:50.492 [Balancer] connected connection!
m29000| Fri Feb 22 12:39:50.492 [initandlisten] connection accepted from 165.225.128.186:39890 #16 (16 connections now open)
m30999| Fri Feb 22 12:39:50.493 [Balancer] could not force lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' because elapsed time 0 <= takeover time 900000
m30999| Fri Feb 22 12:39:50.493 [Balancer] skipping balancing round because another balancer is active
m31100| Fri Feb 22 12:39:50.518 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31300| Fri Feb 22 12:39:50.739 [migrateThread] Waiting for replication to catch up before entering critical section
m31300| Fri Feb 22 12:39:50.740 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1091.0 } -> { _id: 1143.0 }
m31300| Fri Feb 22 12:39:50.740 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1091.0 } -> { _id: 1143.0 }
m31100| Fri Feb 22 12:39:50.775 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:50.775 [conn33] moveChunk setting version to: 23|0||512766fa11fb11ce1f290be5
m31300| Fri Feb 22 12:39:50.775 [conn30] Waiting for commit to finish
m31300| Fri Feb 22 12:39:50.781 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1091.0 } -> { _id: 1143.0 }
m31300| Fri Feb 22 12:39:50.781 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1091.0 } -> { _id: 1143.0 }
m31300| Fri Feb 22 12:39:50.781 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:50-51276716dd66f5428ed3b72b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536790781), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1091.0 }, max: { _id: 1143.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 501, step4 of 5: 0, step5 of 5: 41 } }
m31100| Fri Feb 22 12:39:50.785 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1091.0 }, max: { _id: 1143.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 12:39:50.785 [conn33] moveChunk updating self version to: 23|1||512766fa11fb11ce1f290be5 through { _id: 1143.0 } -> { _id: 1195.0 } for collection 'test.foo'
m31100| Fri Feb 22 12:39:50.786 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:50-51276716fd440aee3dad4b25", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536790786), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1091.0 }, max: { _id: 1143.0 }, from: "rs1-rs0", to: "rs1-rs2" } }
m31100| Fri Feb 22 12:39:50.786 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:50.786 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:50.786 [conn33] forking for cleanup of chunk data
m31100| Fri Feb 22 12:39:50.786 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:50.786 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:50.786 [cleanupOldData-51276716fd440aee3dad4b26] (start) waiting to cleanup test.foo from { _id: 1091.0 } -> { _id: 1143.0 }, # cursors remaining: 0
m31100| Fri Feb 22 12:39:50.786 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked.
m31100| Fri Feb 22 12:39:50.786 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:50-51276716fd440aee3dad4b27", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536790786), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1091.0 }, max: { _id: 1143.0 }, step1 of 6: 0, step2 of 6: 2, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } }
m31100| Fri Feb 22 12:39:50.792 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 1091.0 }, max: { _id: 1143.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1091.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:43 r:171 w:29 reslen:37 557ms
m30998| Fri Feb 22 12:39:50.792 [Balancer] moveChunk result: { ok: 1.0 }
m30998| Fri Feb 22 12:39:50.794 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 10 version: 23|1||512766fa11fb11ce1f290be5 based on: 18|1||512766fa11fb11ce1f290be5
m30998| Fri Feb 22 12:39:50.794 [Balancer] *** end of balancing round
m30998| Fri Feb 22 12:39:50.795 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked.
m31100| Fri Feb 22 12:39:50.806 [cleanupOldData-51276716fd440aee3dad4b26] waiting to remove documents for test.foo from { _id: 1091.0 } -> { _id: 1143.0 }
m31100| Fri Feb 22 12:39:50.806 [cleanupOldData-51276716fd440aee3dad4b26] moveChunk starting delete for: test.foo from { _id: 1091.0 } -> { _id: 1143.0 }
m30999| Fri Feb 22 12:39:51.088 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 12:39:51 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838', sleeping for 30000ms
m31100| Fri Feb 22 12:39:51.268 [cleanupOldData-51276716fd440aee3dad4b26] Helpers::removeRangeUnlocked time spent waiting for replication: 435ms
m31100| Fri Feb 22 12:39:51.268 [cleanupOldData-51276716fd440aee3dad4b26] moveChunk deleted 52 documents for test.foo from { _id: 1091.0 } -> { _id: 1143.0 }
m30998| Fri Feb 22 12:39:51.298 [LockPinger] cluster bs-smartos-x86-64-1.10gen.cc:29000 pinged successfully at Fri Feb 22 12:39:51 2013 by distributed lock pinger 'bs-smartos-x86-64-1.10gen.cc:29000/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838', sleeping for 30000ms
m31202| Fri Feb 22 12:39:51.383 [conn9] end connection 165.225.128.186:61242 (11 connections now open)
m31202| Fri Feb 22 12:39:51.383 [initandlisten] connection accepted from 165.225.128.186:36300 #18 (12 connections now open)
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791486), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791486), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:51.486 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791487), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791487), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791487), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1
m30999| Fri Feb 22 12:39:51.487 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791487), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791488), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791488), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791488), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791488), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202
m30999| Fri Feb 22 12:39:51.488 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs2
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791489), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791489), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791489), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791489), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:51.489 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30999| Fri Feb 22 12:39:51.490 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536791490), ok: 1.0 }
m30999| Fri Feb 22 12:39:51.490 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30999| Fri Feb 22 12:39:51.490 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30999| Fri Feb 22 12:39:51.490 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
m30998| Fri Feb 22 12:39:51.795 [Balancer] Refreshing MaxChunkSize: 1
m30998| Fri Feb 22 12:39:51.796 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 )
m30998| Fri Feb 22 12:39:51.796 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:
m30998| { "state" : 1,
m30998|   "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758",
m30998|   "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838",
m30998|   "when" : { "$date" : "Fri Feb 22 12:39:51 2013" },
m30998|   "why" : "doing balance round",
m30998|   "ts" : { "$oid" : "51276717fed0a5416d51b83a" } }
m30998| { "_id" : "balancer",
m30998|   "state" : 0,
m30998|   "ts" : { "$oid" : "51276716fed0a5416d51b839" } }
m30998| Fri Feb 22 12:39:51.796 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276717fed0a5416d51b83a
m30998| Fri Feb 22 12:39:51.796 [Balancer] *** start balancing round
m30998| Fri Feb 22 12:39:51.796 [Balancer] waitForDelete: 0
m30998| Fri Feb 22 12:39:51.796 [Balancer] secondaryThrottle: 1
m30998| Fri Feb 22 12:39:51.798 [Balancer] rs1-rs2 has more chunks me:11 best: rs1-rs1:11
m30998| Fri Feb 22 12:39:51.798 [Balancer] collection : test.foo
m30998| Fri Feb 22 12:39:51.798 [Balancer] donor : rs1-rs0 chunks on 19
m30998| Fri Feb 22 12:39:51.798 [Balancer] receiver : rs1-rs1 chunks on 11
m30998| Fri Feb 22 12:39:51.798 [Balancer] threshold : 2
m30998| Fri Feb 22 12:39:51.798 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1143.0", lastmod: Timestamp 23000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag []
m30998| Fri Feb 22 12:39:51.798 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 23|1||000000000000000000000000min: { _id: 1143.0 }max: { _id: 1195.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202
m31100| Fri Feb 22 12:39:51.798 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1143.0 }, max: { _id: 1195.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1143.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false }
m31100| Fri Feb 22 12:39:51.799 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276717fd440aee3dad4b28
m31100| Fri Feb 22 12:39:51.799 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:51-51276717fd440aee3dad4b29", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536791799), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1143.0 }, max: { _id: 1195.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 12:39:51.800 [conn33] moveChunk request accepted at version 23|1||512766fa11fb11ce1f290be5
m31100| Fri Feb 22 12:39:51.800 [conn33] moveChunk number of documents: 52
m31200| Fri Feb 22 12:39:51.801 [migrateThread] starting receiving-end of migration of chunk { _id: 1143.0 } -> { _id: 1195.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected)
m31200| Fri Feb 22 12:39:51.801 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 12:39:51.811 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:51.821 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:51.831 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:51.841 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:51.858 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:51.890 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:51.954 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:52.082 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31302| Fri Feb 22 12:39:52.097 [conn10] end connection 165.225.128.186:43980 (11 connections now open)
m31302| Fri Feb 22 12:39:52.097 [initandlisten] connection accepted from 165.225.128.186:61647 #16 (12 connections now open)
m31302| Fri Feb 22 12:39:52.305 [conn11] end connection 165.225.128.186:63208 (11 connections now open)
m31302| Fri Feb 22 12:39:52.305 [initandlisten] connection accepted from 165.225.128.186:38135 #17 (12 connections now open)
m31200| Fri Feb 22 12:39:52.333 [migrateThread] Waiting for replication to catch up before entering critical section
m31200| Fri Feb 22 12:39:52.333 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1143.0 } -> { _id: 1195.0 }
m31200| Fri Feb 22 12:39:52.334 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1143.0 } -> { _id: 1195.0 }
m31100| Fri Feb 22 12:39:52.338 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:52.339 [conn33] moveChunk setting version to: 24|0||512766fa11fb11ce1f290be5
m31200| Fri Feb 22 12:39:52.339 [conn32] Waiting for commit to finish
m31200| Fri Feb 22 12:39:52.344 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1143.0 } -> { _id: 1195.0 }
m31200| Fri Feb 22 12:39:52.344 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1143.0 } -> { _id: 1195.0 }
m31200| Fri Feb 22 12:39:52.344 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:52-512767186bc5d04ff892a6a2", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536792344), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1143.0 }, max: { _id: 1195.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 10 } }
m31100| Fri Feb 22 12:39:52.349 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1143.0 }, max: { _id: 1195.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 12:39:52.349 [conn33] moveChunk updating self version to: 24|1||512766fa11fb11ce1f290be5 through { _id: 1195.0 } -> { _id: 1247.0 } for collection 'test.foo'
m31100| Fri Feb 22 12:39:52.350 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:52-51276718fd440aee3dad4b2a", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536792350), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1143.0 }, max: { _id: 1195.0 }, from: "rs1-rs0", to: "rs1-rs1" } }
m31100| Fri Feb 22 12:39:52.350 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:52.350 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:52.350 [conn33] forking for cleanup of chunk data
m31100| Fri Feb 22 12:39:52.350 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:52.350 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:52.350 [cleanupOldData-51276718fd440aee3dad4b2b] (start) waiting to cleanup test.foo from { _id: 1143.0 } -> { _id: 1195.0 }, # cursors remaining: 0
m31100| Fri Feb 22 12:39:52.350 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked.
m31100| Fri Feb 22 12:39:52.350 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:52-51276718fd440aee3dad4b2c", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536792350), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1143.0 }, max: { _id: 1195.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } }
m31100| Fri Feb 22 12:39:52.350 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1143.0 }, max: { _id: 1195.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1143.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:31 r:142 w:24 reslen:37 552ms
m30998| Fri Feb 22 12:39:52.350 [Balancer] moveChunk result: { ok: 1.0 }
m30998| Fri Feb 22 12:39:52.352 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 11 version: 24|1||512766fa11fb11ce1f290be5 based on: 23|1||512766fa11fb11ce1f290be5
m30998| Fri Feb 22 12:39:52.352 [Balancer] *** end of balancing round
m30998| Fri Feb 22 12:39:52.352 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked.
{ "rs1-rs0" : 18, "rs1-rs1" : 12, "rs1-rs2" : 11 } total: 41 min: 11 max: 18
m31100| Fri Feb 22 12:39:52.370 [cleanupOldData-51276718fd440aee3dad4b2b] waiting to remove documents for test.foo from { _id: 1143.0 } -> { _id: 1195.0 }
m31100| Fri Feb 22 12:39:52.370 [cleanupOldData-51276718fd440aee3dad4b2b] moveChunk starting delete for: test.foo from { _id: 1143.0 } -> { _id: 1195.0 }
m31100| Fri Feb 22 12:39:52.891 [cleanupOldData-51276718fd440aee3dad4b2b] Helpers::removeRangeUnlocked time spent waiting for replication: 503ms
m31100| Fri Feb 22 12:39:52.891 [cleanupOldData-51276718fd440aee3dad4b2b] moveChunk deleted 52 documents for test.foo from { _id: 1143.0 } -> { _id: 1195.0 }
m30998| Fri Feb 22 12:39:53.353 [Balancer] Refreshing MaxChunkSize: 1
m30998| Fri Feb 22 12:39:53.353 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 )
m30998| Fri Feb 22 12:39:53.353 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:
m30998| { "state" : 1,
m30998|   "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758",
m30998|   "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838",
m30998|   "when" : { "$date" : "Fri Feb 22 12:39:53 2013" },
m30998|   "why" : "doing balance round",
m30998|   "ts" : { "$oid" : "51276719fed0a5416d51b83b" } }
m30998| { "_id" : "balancer",
m30998|   "state" : 0,
m30998|   "ts" : { "$oid" : "51276717fed0a5416d51b83a" } }
m30998| Fri Feb 22 12:39:53.355 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 51276719fed0a5416d51b83b
m30998| Fri Feb 22 12:39:53.355 [Balancer] *** start balancing round
m30998| Fri Feb 22 12:39:53.355 [Balancer] waitForDelete: 0
m30998| Fri Feb 22 12:39:53.355 [Balancer] secondaryThrottle: 1
m30998| Fri Feb 22 12:39:53.356 [Balancer] collection 
: test.foo m30998| Fri Feb 22 12:39:53.356 [Balancer] donor : rs1-rs0 chunks on 18 m30998| Fri Feb 22 12:39:53.356 [Balancer] receiver : rs1-rs2 chunks on 11 m30998| Fri Feb 22 12:39:53.356 [Balancer] threshold : 2 m30998| Fri Feb 22 12:39:53.356 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1195.0", lastmod: Timestamp 24000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag [] m30998| Fri Feb 22 12:39:53.356 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 24|1||000000000000000000000000min: { _id: 1195.0 }max: { _id: 1247.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m31100| Fri Feb 22 12:39:53.360 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 1195.0 }, max: { _id: 1247.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1195.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:53.361 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 51276719fd440aee3dad4b2d m31100| Fri Feb 22 12:39:53.361 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:53-51276719fd440aee3dad4b2e", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: 
"165.225.128.186:48709", time: new Date(1361536793361), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1195.0 }, max: { _id: 1247.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:53.363 [conn33] moveChunk request accepted at version 24|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:53.363 [conn33] moveChunk number of documents: 52 m31300| Fri Feb 22 12:39:53.363 [migrateThread] starting receiving-end of migration of chunk { _id: 1195.0 } -> { _id: 1247.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31300| Fri Feb 22 12:39:53.364 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:53.373 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:53.384 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:53.394 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my 
mem used: 0 m31100| Fri Feb 22 12:39:53.404 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:53.420 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:53.452 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:53.516 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31202| Fri Feb 22 12:39:53.556 [conn13] end connection 165.225.128.186:52858 (11 connections now open) m31202| Fri Feb 22 12:39:53.557 [initandlisten] connection accepted from 165.225.128.186:47978 #19 (12 connections now open) m31100| Fri Feb 22 12:39:53.645 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: 
"rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31201| Fri Feb 22 12:39:53.789 [conn13] end connection 165.225.128.186:56710 (11 connections now open) m31201| Fri Feb 22 12:39:53.789 [initandlisten] connection accepted from 165.225.128.186:53555 #19 (12 connections now open) m31300| Fri Feb 22 12:39:53.896 [migrateThread] Waiting for replication to catch up before entering critical section m31300| Fri Feb 22 12:39:53.896 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1195.0 } -> { _id: 1247.0 } m31300| Fri Feb 22 12:39:53.896 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1195.0 } -> { _id: 1247.0 } m31100| Fri Feb 22 12:39:53.901 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:53.901 [conn33] moveChunk setting version to: 25|0||512766fa11fb11ce1f290be5 m31300| Fri Feb 22 12:39:53.901 [conn30] Waiting for commit to finish m31300| Fri Feb 22 12:39:53.907 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1195.0 } -> { _id: 1247.0 } m31300| Fri Feb 22 12:39:53.907 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1195.0 } -> { _id: 1247.0 } m31300| Fri Feb 22 12:39:53.907 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:53-51276719dd66f5428ed3b72c", server: "bs-smartos-x86-64-1.10gen.cc", 
clientAddr: ":27017", time: new Date(1361536793907), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1195.0 }, max: { _id: 1247.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 10 } } m31100| Fri Feb 22 12:39:53.911 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1195.0 }, max: { _id: 1247.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:53.911 [conn33] moveChunk updating self version to: 25|1||512766fa11fb11ce1f290be5 through { _id: 1247.0 } -> { _id: 1299.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:53.912 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:53-51276719fd440aee3dad4b2f", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536793912), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1195.0 }, max: { _id: 1247.0 }, from: "rs1-rs0", to: "rs1-rs2" } } m31100| Fri Feb 22 12:39:53.912 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:53.912 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:53.912 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:53.912 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:53.912 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:53.912 [cleanupOldData-51276719fd440aee3dad4b30] (start) waiting to cleanup test.foo from { _id: 1195.0 } -> { _id: 1247.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:53.913 [conn33] distributed lock 
'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. m31100| Fri Feb 22 12:39:53.913 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:53-51276719fd440aee3dad4b31", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536793913), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1195.0 }, max: { _id: 1247.0 }, step1 of 6: 3, step2 of 6: 2, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:53.913 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 1195.0 }, max: { _id: 1247.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1195.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:27 r:180 w:26 reslen:37 556ms m30998| Fri Feb 22 12:39:53.913 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:53.914 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 12 version: 25|1||512766fa11fb11ce1f290be5 based on: 24|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:53.914 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:53.915 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:53.932 [cleanupOldData-51276719fd440aee3dad4b30] waiting to remove documents for test.foo from { _id: 1195.0 } -> { _id: 1247.0 } m31100| Fri Feb 22 12:39:53.932 [cleanupOldData-51276719fd440aee3dad4b30] moveChunk starting delete for: test.foo from { _id: 1195.0 } -> { _id: 1247.0 } m31100| Fri Feb 22 12:39:54.454 [cleanupOldData-51276719fd440aee3dad4b30] Helpers::removeRangeUnlocked time spent waiting for replication: 508ms m31100| Fri Feb 22 12:39:54.454 [cleanupOldData-51276719fd440aee3dad4b30] moveChunk deleted 52 documents for test.foo from { _id: 1195.0 } -> { _id: 1247.0 } m30998| Fri Feb 22 12:39:54.915 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:54.916 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:54.916 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:54 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "5127671afed0a5416d51b83c" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "51276719fed0a5416d51b83b" } } m30998| Fri Feb 22 12:39:54.917 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 5127671afed0a5416d51b83c m30998| Fri Feb 22 12:39:54.917 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:54.917 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:54.917 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:54.919 [Balancer] rs1-rs2 has more chunks me:12 best: rs1-rs1:12 m30998| Fri Feb 22 12:39:54.919 
[Balancer] collection : test.foo m30998| Fri Feb 22 12:39:54.919 [Balancer] donor : rs1-rs0 chunks on 17 m30998| Fri Feb 22 12:39:54.919 [Balancer] receiver : rs1-rs1 chunks on 12 m30998| Fri Feb 22 12:39:54.919 [Balancer] threshold : 2 m30998| Fri Feb 22 12:39:54.919 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1247.0", lastmod: Timestamp 25000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs1 tag [] m30998| Fri Feb 22 12:39:54.919 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 25|1||000000000000000000000000min: { _id: 1247.0 }max: { _id: 1299.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs1:rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m31100| Fri Feb 22 12:39:54.919 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1247.0 }, max: { _id: 1299.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1247.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } m31100| Fri Feb 22 12:39:54.920 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127671afd440aee3dad4b32 m31100| Fri Feb 22 12:39:54.920 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:54-5127671afd440aee3dad4b33", server: 
"bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536794920), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1247.0 }, max: { _id: 1299.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:54.921 [conn33] moveChunk request accepted at version 25|1||512766fa11fb11ce1f290be5 m31100| Fri Feb 22 12:39:54.921 [conn33] moveChunk number of documents: 52 m31200| Fri Feb 22 12:39:54.922 [migrateThread] starting receiving-end of migration of chunk { _id: 1247.0 } -> { _id: 1299.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected) m31200| Fri Feb 22 12:39:54.922 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms m31100| Fri Feb 22 12:39:54.932 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:54.942 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:54.952 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 
30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:54.962 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:54.978 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31101| Fri Feb 22 12:39:55.007 [conn15] end connection 165.225.128.186:51573 (13 connections now open) m31101| Fri Feb 22 12:39:55.008 [initandlisten] connection accepted from 165.225.128.186:49984 #23 (14 connections now open) m31100| Fri Feb 22 12:39:55.011 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:55.075 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:55.203 [conn33] moveChunk data transfer 
progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31200| Fri Feb 22 12:39:55.454 [migrateThread] Waiting for replication to catch up before entering critical section m31200| Fri Feb 22 12:39:55.454 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1247.0 } -> { _id: 1299.0 } m31200| Fri Feb 22 12:39:55.455 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1247.0 } -> { _id: 1299.0 } m31100| Fri Feb 22 12:39:55.459 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0 m31100| Fri Feb 22 12:39:55.459 [conn33] moveChunk setting version to: 26|0||512766fa11fb11ce1f290be5 m31200| Fri Feb 22 12:39:55.460 [conn32] Waiting for commit to finish m31200| Fri Feb 22 12:39:55.465 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1247.0 } -> { _id: 1299.0 } m31200| Fri Feb 22 12:39:55.465 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1247.0 } -> { _id: 1299.0 } m31200| Fri Feb 22 12:39:55.465 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:55-5127671b6bc5d04ff892a6a3", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536795465), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1247.0 }, max: { _id: 1299.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 
531, step4 of 5: 0, step5 of 5: 11 } } m31100| Fri Feb 22 12:39:55.470 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1247.0 }, max: { _id: 1299.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } m31100| Fri Feb 22 12:39:55.470 [conn33] moveChunk updating self version to: 26|1||512766fa11fb11ce1f290be5 through { _id: 1299.0 } -> { _id: 1351.0 } for collection 'test.foo' m31100| Fri Feb 22 12:39:55.470 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:55-5127671bfd440aee3dad4b34", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536795470), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1247.0 }, max: { _id: 1299.0 }, from: "rs1-rs0", to: "rs1-rs1" } } m31100| Fri Feb 22 12:39:55.470 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:55.470 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:55.470 [conn33] forking for cleanup of chunk data m31100| Fri Feb 22 12:39:55.471 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section m31100| Fri Feb 22 12:39:55.471 [conn33] MigrateFromStatus::done Global lock acquired m31100| Fri Feb 22 12:39:55.471 [cleanupOldData-5127671bfd440aee3dad4b35] (start) waiting to cleanup test.foo from { _id: 1247.0 } -> { _id: 1299.0 }, # cursors remaining: 0 m31100| Fri Feb 22 12:39:55.471 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked. 
m31100| Fri Feb 22 12:39:55.471 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:55-5127671bfd440aee3dad4b36", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536795471), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1247.0 }, max: { _id: 1299.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } } m31100| Fri Feb 22 12:39:55.471 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202", fromShard: "rs1-rs0", toShard: "rs1-rs1", min: { _id: 1247.0 }, max: { _id: 1299.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1247.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:25 r:223 w:29 reslen:37 551ms m30998| Fri Feb 22 12:39:55.471 [Balancer] moveChunk result: { ok: 1.0 } m30998| Fri Feb 22 12:39:55.472 [Balancer] ChunkManager: time to load chunks for test.foo: 0ms sequenceNumber: 13 version: 26|1||512766fa11fb11ce1f290be5 based on: 25|1||512766fa11fb11ce1f290be5 m30998| Fri Feb 22 12:39:55.473 [Balancer] *** end of balancing round m30998| Fri Feb 22 12:39:55.473 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked. 
m31100| Fri Feb 22 12:39:55.491 [cleanupOldData-5127671bfd440aee3dad4b35] waiting to remove documents for test.foo from { _id: 1247.0 } -> { _id: 1299.0 } m31100| Fri Feb 22 12:39:55.491 [cleanupOldData-5127671bfd440aee3dad4b35] moveChunk starting delete for: test.foo from { _id: 1247.0 } -> { _id: 1299.0 } m31100| Fri Feb 22 12:39:55.942 [cleanupOldData-5127671bfd440aee3dad4b35] Helpers::removeRangeUnlocked time spent waiting for replication: 413ms m31100| Fri Feb 22 12:39:55.942 [cleanupOldData-5127671bfd440aee3dad4b35] moveChunk deleted 52 documents for test.foo from { _id: 1247.0 } -> { _id: 1299.0 } m30998| Fri Feb 22 12:39:56.474 [Balancer] Refreshing MaxChunkSize: 1 m30998| Fri Feb 22 12:39:56.474 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 ) m30998| Fri Feb 22 12:39:56.474 [Balancer] about to acquire distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838: m30998| { "state" : 1, m30998| "who" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838:Balancer:5758", m30998| "process" : "bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838", m30998| "when" : { "$date" : "Fri Feb 22 12:39:56 2013" }, m30998| "why" : "doing balance round", m30998| "ts" : { "$oid" : "5127671cfed0a5416d51b83d" } } m30998| { "_id" : "balancer", m30998| "state" : 0, m30998| "ts" : { "$oid" : "5127671afed0a5416d51b83c" } } m30998| Fri Feb 22 12:39:56.475 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' acquired, ts : 5127671cfed0a5416d51b83d m30998| Fri Feb 22 12:39:56.475 [Balancer] *** start balancing round m30998| Fri Feb 22 12:39:56.475 [Balancer] waitForDelete: 0 m30998| Fri Feb 22 12:39:56.475 [Balancer] secondaryThrottle: 1 m30998| Fri Feb 22 12:39:56.476 [Balancer] collection : test.foo m30998| Fri Feb 22 12:39:56.476 [Balancer] donor : rs1-rs0 chunks 
on 16
m30998| Fri Feb 22 12:39:56.476 [Balancer] receiver : rs1-rs2 chunks on 12
m30998| Fri Feb 22 12:39:56.476 [Balancer] threshold : 2
m30998| Fri Feb 22 12:39:56.476 [Balancer] ns: test.foo going to move { _id: "test.foo-_id_1299.0", lastmod: Timestamp 26000|1, lastmodEpoch: ObjectId('512766fa11fb11ce1f290be5'), ns: "test.foo", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shard: "rs1-rs0" } from: rs1-rs0 to: rs1-rs2 tag []
m30998| Fri Feb 22 12:39:56.476 [Balancer] moving chunk ns: test.foo moving ( ns:test.fooshard: rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102lastmod: 26|1||000000000000000000000000min: { _id: 1299.0 }max: { _id: 1351.0 }) rs1-rs0:rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 -> rs1-rs2:rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302
m31100| Fri Feb 22 12:39:56.477 [conn33] received moveChunk request: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 1299.0 }, max: { _id: 1351.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1299.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false }
m31100| Fri Feb 22 12:39:56.478 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' acquired, ts : 5127671cfd440aee3dad4b37
m31100| Fri Feb 22 12:39:56.478 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:56-5127671cfd440aee3dad4b38", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536796478), what: "moveChunk.start", ns: "test.foo", details: { min: { _id: 1299.0 }, max: { _id: 1351.0 }, from: "rs1-rs0", to: "rs1-rs2" } }
m31100| Fri Feb 22 12:39:56.478 [conn33] moveChunk request accepted at version 26|1||512766fa11fb11ce1f290be5
m31100| Fri Feb 22 12:39:56.479 [conn33] moveChunk number of documents: 52
m31300| Fri Feb 22 12:39:56.479 [migrateThread] starting receiving-end of migration of chunk { _id: 1299.0 } -> { _id: 1351.0 } for collection test.foo from rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 (2 slaves detected)
m31300| Fri Feb 22 12:39:56.480 [migrateThread] Helpers::removeRangeUnlocked time spent waiting for replication: 0ms
m31100| Fri Feb 22 12:39:56.489 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 1, clonedBytes: 10060, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m30999| Fri Feb 22 12:39:56.494 [Balancer] Refreshing MaxChunkSize: 1
m30999| Fri Feb 22 12:39:56.494 [Balancer] trying to acquire new distributed lock for balancer on bs-smartos-x86-64-1.10gen.cc:29000 ( lock timeout : 900000, ping interval : 30000, process : bs-smartos-x86-64-1.10gen.cc:30999:1361536761:16838 )
m30999| Fri Feb 22 12:39:56.495 [Balancer] checking last ping for lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' against process bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838 and ping Fri Feb 22 12:39:21 2013
m30999| Fri Feb 22 12:39:56.495 [Balancer] could not force lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' because elapsed time 0 <= takeover time 900000
m30999| Fri Feb 22 12:39:56.495 [Balancer] skipping balancing round because another balancer is active
m31100| Fri Feb 22 12:39:56.499 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 2, clonedBytes: 20120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:56.509 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 3, clonedBytes: 30180, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:56.520 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 4, clonedBytes: 40240, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:56.536 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 6, clonedBytes: 60360, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:56.568 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 9, clonedBytes: 90540, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:56.632 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 15, clonedBytes: 150900, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:56.760 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "clone", counts: { cloned: 28, clonedBytes: 281680, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31300| Fri Feb 22 12:39:57.011 [migrateThread] Waiting for replication to catch up before entering critical section
m31300| Fri Feb 22 12:39:57.011 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1299.0 } -> { _id: 1351.0 }
m31300| Fri Feb 22 12:39:57.012 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1299.0 } -> { _id: 1351.0 }
m31100| Fri Feb 22 12:39:57.017 [conn33] moveChunk data transfer progress: { active: true, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "steady", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 } my mem used: 0
m31100| Fri Feb 22 12:39:57.017 [conn33] moveChunk setting version to: 27|0||512766fa11fb11ce1f290be5
m31300| Fri Feb 22 12:39:57.017 [conn30] Waiting for commit to finish
m31300| Fri Feb 22 12:39:57.023 [migrateThread] migrate commit succeeded flushing to secondaries for 'test.foo' { _id: 1299.0 } -> { _id: 1351.0 }
m31300| Fri Feb 22 12:39:57.023 [migrateThread] migrate commit flushed to journal for 'test.foo' { _id: 1299.0 } -> { _id: 1351.0 }
m31300| Fri Feb 22 12:39:57.023 [migrateThread] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:57-5127671ddd66f5428ed3b72d", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: ":27017", time: new Date(1361536797023), what: "moveChunk.to", ns: "test.foo", details: { min: { _id: 1299.0 }, max: { _id: 1351.0 }, step1 of 5: 0, step2 of 5: 0, step3 of 5: 531, step4 of 5: 0, step5 of 5: 11 } }
m31100| Fri Feb 22 12:39:57.027 [conn33] moveChunk migrate commit accepted by TO-shard: { active: false, ns: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", min: { _id: 1299.0 }, max: { _id: 1351.0 }, shardKeyPattern: { _id: 1.0 }, state: "done", counts: { cloned: 52, clonedBytes: 523120, catchup: 0, steady: 0 }, ok: 1.0 }
m31100| Fri Feb 22 12:39:57.027 [conn33] moveChunk updating self version to: 27|1||512766fa11fb11ce1f290be5 through { _id: 1351.0 } -> { _id: 1403.0 } for collection 'test.foo'
m31100| Fri Feb 22 12:39:57.028 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:57-5127671dfd440aee3dad4b39", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536797028), what: "moveChunk.commit", ns: "test.foo", details: { min: { _id: 1299.0 }, max: { _id: 1351.0 }, from: "rs1-rs0", to: "rs1-rs2" } }
m31100| Fri Feb 22 12:39:57.028 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:57.028 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:57.028 [conn33] forking for cleanup of chunk data
m31100| Fri Feb 22 12:39:57.028 [conn33] MigrateFromStatus::done About to acquire global write lock to exit critical section
m31100| Fri Feb 22 12:39:57.028 [conn33] MigrateFromStatus::done Global lock acquired
m31100| Fri Feb 22 12:39:57.028 [cleanupOldData-5127671dfd440aee3dad4b3a] (start) waiting to cleanup test.foo from { _id: 1299.0 } -> { _id: 1351.0 }, # cursors remaining: 0
m31100| Fri Feb 22 12:39:57.029 [conn33] distributed lock 'test.foo/bs-smartos-x86-64-1.10gen.cc:31100:1361536767:31249' unlocked.
m31100| Fri Feb 22 12:39:57.029 [conn33] about to log metadata event: { _id: "bs-smartos-x86-64-1.10gen.cc-2013-02-22T12:39:57-5127671dfd440aee3dad4b3b", server: "bs-smartos-x86-64-1.10gen.cc", clientAddr: "165.225.128.186:48709", time: new Date(1361536797029), what: "moveChunk.from", ns: "test.foo", details: { min: { _id: 1299.0 }, max: { _id: 1351.0 }, step1 of 6: 0, step2 of 6: 1, step3 of 6: 0, step4 of 6: 537, step5 of 6: 11, step6 of 6: 0 } }
m31100| Fri Feb 22 12:39:57.029 [conn33] command admin.$cmd command: { moveChunk: "test.foo", from: "rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102", to: "rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302", fromShard: "rs1-rs0", toShard: "rs1-rs2", min: { _id: 1299.0 }, max: { _id: 1351.0 }, maxChunkSizeBytes: 1048576, shardId: "test.foo-_id_1299.0", configdb: "bs-smartos-x86-64-1.10gen.cc:29000", secondaryThrottle: true, waitForDelete: false } ntoreturn:1 keyUpdates:0 locks(micros) W:48 r:118 w:28 reslen:37 552ms
m30998| Fri Feb 22 12:39:57.029 [Balancer] moveChunk result: { ok: 1.0 }
m30998| Fri Feb 22 12:39:57.030 [Balancer] ChunkManager: time to load chunks for test.foo: 1ms sequenceNumber: 14 version: 27|1||512766fa11fb11ce1f290be5 based on: 26|1||512766fa11fb11ce1f290be5
m30998| Fri Feb 22 12:39:57.030 [Balancer] *** end of balancing round
m30998| Fri Feb 22 12:39:57.031 [Balancer] distributed lock 'balancer/bs-smartos-x86-64-1.10gen.cc:30998:1361536761:16838' unlocked.
m31100| Fri Feb 22 12:39:57.048 [cleanupOldData-5127671dfd440aee3dad4b3a] waiting to remove documents for test.foo from { _id: 1299.0 } -> { _id: 1351.0 } m31100| Fri Feb 22 12:39:57.048 [cleanupOldData-5127671dfd440aee3dad4b3a] moveChunk starting delete for: test.foo from { _id: 1299.0 } -> { _id: 1351.0 } m30998| Fri Feb 22 12:39:57.314 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs0 m30998| Fri Feb 22 12:39:57.314 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797314), ok: 1.0 } m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] _check : rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797315), ok: 1.0 } m30998| Fri Feb 22 12:39:57.315 
[ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31100 { setName: "rs1-rs0", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31100", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31100", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797315), ok: 1.0 } m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:57.315 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31101 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31101", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797315), ok: 1.0 } m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 
22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], primary: "bs-smartos-x86-64-1.10gen.cc:31100", me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797316), ok: 1.0 } m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31100 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31101 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31102 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs1 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797316), ok: 1.0 } m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:57.316 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:57.316 
[ReplicaSetMonitorWatcher] _check : rs1-rs1/bs-smartos-x86-64-1.10gen.cc:31200,bs-smartos-x86-64-1.10gen.cc:31201,bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797316), ok: 1.0 } m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31200 { setName: "rs1-rs1", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31200", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31200", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797317), ok: 1.0 } m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] 
ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31201 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31201", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797317), ok: 1.0 } m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:57.317 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31202 { setName: "rs1-rs1", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31202", "bs-smartos-x86-64-1.10gen.cc:31201", "bs-smartos-x86-64-1.10gen.cc:31200" ], primary: "bs-smartos-x86-64-1.10gen.cc:31200", me: "bs-smartos-x86-64-1.10gen.cc:31202", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797317), ok: 1.0 } m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31200 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31201 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31202 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] checking replica set: rs1-rs2 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ 
"bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797318), ok: 1.0 } m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] _check : rs1-rs2/bs-smartos-x86-64-1.10gen.cc:31300,bs-smartos-x86-64-1.10gen.cc:31301,bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797318), ok: 1.0 } m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:57.318 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31300 { setName: "rs1-rs2", ismaster: true, secondary: false, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31300", 
"bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31300", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797318), ok: 1.0 } m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31301 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31301", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536797319), ok: 1.0 } m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300 m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301 m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302 m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31302 { setName: "rs1-rs2", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31302", "bs-smartos-x86-64-1.10gen.cc:31301", "bs-smartos-x86-64-1.10gen.cc:31300" ], primary: "bs-smartos-x86-64-1.10gen.cc:31300", me: "bs-smartos-x86-64-1.10gen.cc:31302", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, 
localTime: new Date(1361536797319), ok: 1.0 }
m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[0].ok = true bs-smartos-x86-64-1.10gen.cc:31300
m30998| Fri Feb 22 12:39:57.319 [ReplicaSetMonitorWatcher] dbclient_rs nodes[1].ok = true bs-smartos-x86-64-1.10gen.cc:31301
m30998| Fri Feb 22 12:39:57.320 [ReplicaSetMonitorWatcher] dbclient_rs nodes[2].ok = true bs-smartos-x86-64-1.10gen.cc:31302
{ "rs1-rs0" : 15, "rs1-rs1" : 13, "rs1-rs2" : 13 } total: 41 min: 13 max: 15
---- Stopping balancer ----
Waiting for active hosts...
Waiting for the balancer lock...
Waiting again for active hosts after balancer is off...
---- Balancer stopped, checking dbhashes ----
ReplSetTest awaitReplication: starting: timestamp for primary, bs-smartos-x86-64-1.10gen.cc:31100, is { "t" : 1361536797000, "i" : 32 }
ReplSetTest awaitReplication: checking secondaries against timestamp { "t" : 1361536797000, "i" : 32 }
ReplSetTest awaitReplication: checking secondary #1: bs-smartos-x86-64-1.10gen.cc:31101
ReplSetTest awaitReplication: secondary #1, bs-smartos-x86-64-1.10gen.cc:31101, is synced
ReplSetTest awaitReplication: checking secondary #2: bs-smartos-x86-64-1.10gen.cc:31102
ReplSetTest awaitReplication: secondary #2, bs-smartos-x86-64-1.10gen.cc:31102, is synced
ReplSetTest awaitReplication: finished: all 2 secondaries synced at timestamp { "t" : 1361536797000, "i" : 32 }
rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102
{
	"master" : {
		"numCollections" : 3,
		"host" : "bs-smartos-x86-64-1.10gen.cc:31100",
		"collections" : { "foo" : "65fc9999809fb25724c54fa8f431f9d3" },
		"md5" : "8d81df358c608fb0af0fb4dc0bdf8d3e",
		"ok" : 1
	},
	"slaves" : [
		{
			"numCollections" : 3,
			"host" : "bs-smartos-x86-64-1.10gen.cc:31101",
			"collections" : { "foo" : "aed56fa374c52c9679152b990fe21f56" },
			"md5" : "a10dfe7f1c8ad2475cca57e15e820b17",
			"ok" : 1
		},
		{
			"numCollections" : 3,
			"host" : "bs-smartos-x86-64-1.10gen.cc:31102",
			"collections" : { "foo" : "1b420798015df48e29bf0adf79f4bd1b" },
			"md5" : "11f38ddfc663d14e0bffa4f55b23dab9",
			"ok" : 1
		}
	]
}
assert: ["8d81df358c608fb0af0fb4dc0bdf8d3e"] != ["a10dfe7f1c8ad2475cca57e15e820b17"] are not equal : hashes not same for: rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 slave: 0
Error: Printing Stack Trace
    at printStackTrace (src/mongo/shell/utils.js:37:7)
    at doassert (src/mongo/shell/assert.js:6:1)
    at Function.assert.eq (src/mongo/shell/assert.js:32:1)
    at /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js:70:16
Fri Feb 22 12:39:57.643 JavaScript execution failed: ["8d81df358c608fb0af0fb4dc0bdf8d3e"] != ["a10dfe7f1c8ad2475cca57e15e820b17"] are not equal : hashes not same for: rs1-rs0/bs-smartos-x86-64-1.10gen.cc:31100,bs-smartos-x86-64-1.10gen.cc:31101,bs-smartos-x86-64-1.10gen.cc:31102 slave: 0 at src/mongo/shell/assert.js:L7
failed to load: /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js
m29000| Fri Feb 22 12:39:57.643 got signal 15 (Terminated), will terminate after current cmd ends
m29000| Fri Feb 22 12:39:57.643 [interruptThread] now exiting
m29000| Fri Feb 22 12:39:57.644 dbexit:
m29000| Fri Feb 22 12:39:57.644 [interruptThread] shutdown: going to close listening sockets...
m29000| Fri Feb 22 12:39:57.644 [interruptThread] closing listening socket: 57 m31100| Fri Feb 22 12:39:57.644 [cleanupOldData-5127671dfd440aee3dad4b3a] Helpers::removeRangeUnlocked time spent waiting for replication: 496ms m31100| Fri Feb 22 12:39:57.644 [cleanupOldData-5127671dfd440aee3dad4b3a] moveChunk deleted 52 documents for test.foo from { _id: 1299.0 } -> { _id: 1351.0 } m29000| Fri Feb 22 12:39:57.644 [interruptThread] closing listening socket: 58 m29000| Fri Feb 22 12:39:57.644 [interruptThread] closing listening socket: 59 m29000| Fri Feb 22 12:39:57.644 [interruptThread] removing socket file: /tmp/mongodb-29000.sock m29000| Fri Feb 22 12:39:57.644 [interruptThread] shutdown: going to flush diaglog... m29000| Fri Feb 22 12:39:57.644 [interruptThread] shutdown: going to close sockets... m29000| Fri Feb 22 12:39:57.644 [interruptThread] shutdown: waiting for fs preallocator... m29000| Fri Feb 22 12:39:57.644 [interruptThread] shutdown: lock for final commit... m29000| Fri Feb 22 12:39:57.644 [interruptThread] shutdown: final commit... m29000| Fri Feb 22 12:39:57.657 [interruptThread] shutdown: closing all files... m29000| Fri Feb 22 12:39:57.659 [interruptThread] closeAllFiles() finished m29000| Fri Feb 22 12:39:57.659 [interruptThread] journalCleanup... 
m29000| Fri Feb 22 12:39:57.659 [interruptThread] removeJournalFiles
m29000| Fri Feb 22 12:39:57.659 dbexit: really exiting now
m30998| Fri Feb 22 12:39:58.031 [Balancer] Socket recv() errno:131 Connection reset by peer 165.225.128.186:29000
m30998| Fri Feb 22 12:39:58.045 [Balancer] SocketException: remote: 165.225.128.186:29000 error: 9001 socket exception [1] server [165.225.128.186:29000]
m30998| Fri Feb 22 12:39:58.045 [Balancer] DBClientCursor::init call() failed
m30998| Fri Feb 22 12:39:58.045 [Balancer] Assertion: 13632:couldn't get updated shard list from config server
m30998| 0xaaa408 0xa74f3c 0x9fde75 0x96394e 0xa793be 0xa79f6c 0xafb11e 0xfffffd7fff257024 0xfffffd7fff2572f0
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo15printStackTraceERSo+0x28 [0xaaa408]
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo11msgassertedEiPKc+0x9c [0xa74f3c]
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo15StaticShardInfo6reloadEv+0xc05 [0x9fde75]
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo8Balancer3runEv+0x24e [0x96394e]
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5mongo13BackgroundJob7jobBodyEN5boost10shared_ptrINS0_9JobStatusEEE+0xce [0xa793be]
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'_ZN5boost6detail11thread_dataINS_3_bi6bind_tIvNS_4_mfi3mf1IvN5mongo13BackgroundJobENS_10shared_ptrINS7_9JobStatusEEEEENS2_5list2INS2_5valueIPS7_EENSD_ISA_EEEEEEE3runEv+0x7c [0xa79f6c]
m30998| /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/mongos'thread_proxy+0x7e [0xafb11e]
m30998| /lib/amd64/libc.so.1'_thrp_setup+0xbc [0xfffffd7fff257024]
m30998| /lib/amd64/libc.so.1'_lwp_start+0x0 [0xfffffd7fff2572f0]
m30998| Fri Feb 22 12:39:58.046 [Balancer] Detected bad connection created at 1361536761294912 microSec, clearing pool for bs-smartos-x86-64-1.10gen.cc:29000
m30998| Fri Feb 22 12:39:58.046 [Balancer] scoped
connection to bs-smartos-x86-64-1.10gen.cc:29000 not being returned to the pool
m30998| Fri Feb 22 12:39:58.046 [Balancer] caught exception while doing balance: couldn't get updated shard list from config server
m30998| Fri Feb 22 12:39:58.046 [Balancer] *** End of balancing round
m31101| Fri Feb 22 12:39:58.614 [conn20] end connection 165.225.128.186:40954 (13 connections now open)
m31101| Fri Feb 22 12:39:58.614 [initandlisten] connection accepted from 165.225.128.186:64501 #24 (14 connections now open)
m30998| Fri Feb 22 12:39:58.644 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31102| Fri Feb 22 12:39:58.644 [conn18] end connection 165.225.128.186:62741 (13 connections now open)
m31100| Fri Feb 22 12:39:58.644 [conn32] end connection 165.225.128.186:41709 (22 connections now open)
m31102| Fri Feb 22 12:39:58.644 [conn19] end connection 165.225.128.186:52563 (13 connections now open)
m31100| Fri Feb 22 12:39:58.644 [conn33] end connection 165.225.128.186:48709 (22 connections now open)
m31100| Fri Feb 22 12:39:58.644 [conn31] end connection 165.225.128.186:54595 (22 connections now open)
m31101| Fri Feb 22 12:39:58.644 [conn19] end connection 165.225.128.186:64889 (13 connections now open)
m31101| Fri Feb 22 12:39:58.644 [conn18] end connection 165.225.128.186:49444 (13 connections now open)
m31200| Fri Feb 22 12:39:58.644 [conn29] end connection 165.225.128.186:41563 (20 connections now open)
m31201| Fri Feb 22 12:39:58.644 [conn17] end connection 165.225.128.186:53060 (11 connections now open)
m31200| Fri Feb 22 12:39:58.644 [conn30] end connection 165.225.128.186:53766 (20 connections now open)
m31201| Fri Feb 22 12:39:58.644 [conn16] end connection 165.225.128.186:47479 (11 connections now open)
m31200| Fri Feb 22 12:39:58.644 [conn31] end connection 165.225.128.186:44126 (20 connections now open)
m31300| Fri Feb 22 12:39:58.644 [conn23] end connection 165.225.128.186:41362 (20 connections now open)
m31301| Fri Feb 22 12:39:58.644 [conn12] end connection 165.225.128.186:53582 (11 connections now open)
m31202| Fri Feb 22 12:39:58.644 [conn16] end connection 165.225.128.186:33704 (11 connections now open)
m31302| Fri Feb 22 12:39:58.644 [conn12] end connection 165.225.128.186:36880 (11 connections now open)
m31202| Fri Feb 22 12:39:58.644 [conn17] end connection 165.225.128.186:51084 (11 connections now open)
m31302| Fri Feb 22 12:39:58.644 [conn13] end connection 165.225.128.186:56244 (11 connections now open)
m31300| Fri Feb 22 12:39:58.644 [conn24] end connection 165.225.128.186:56183 (20 connections now open)
m31300| Fri Feb 22 12:39:58.644 [conn25] end connection 165.225.128.186:58248 (19 connections now open)
m31301| Fri Feb 22 12:39:58.644 [conn11] end connection 165.225.128.186:48970 (11 connections now open)
m30999| Fri Feb 22 12:39:59.644 [mongosMain] dbexit: received signal 15 rc:0 received signal 15
m31100| Fri Feb 22 12:39:59.645 [conn21] end connection 165.225.128.186:52307 (19 connections now open)
m31101| Fri Feb 22 12:39:59.645 [conn13] end connection 165.225.128.186:56331 (11 connections now open)
m31100| Fri Feb 22 12:39:59.645 [conn22] end connection 165.225.128.186:54585 (19 connections now open)
m31101| Fri Feb 22 12:39:59.645 [conn12] end connection 165.225.128.186:65502 (11 connections now open)
m31100| Fri Feb 22 12:39:59.645 [conn23] end connection 165.225.128.186:62589 (19 connections now open)
m31102| Fri Feb 22 12:39:59.645 [conn13] end connection 165.225.128.186:57608 (11 connections now open)
m31102| Fri Feb 22 12:39:59.645 [conn14] end connection 165.225.128.186:41859 (11 connections now open)
m31201| Fri Feb 22 12:39:59.645 [conn11] end connection 165.225.128.186:52904 (9 connections now open)
m31202| Fri Feb 22 12:39:59.645 [conn10] end connection 165.225.128.186:34145 (9 connections now open)
m31200| Fri Feb 22 12:39:59.645 [conn19] end connection 165.225.128.186:46732 (17 connections now open)
m31202| Fri Feb 22 12:39:59.645 [conn11] end connection 165.225.128.186:46538 (9 connections now open)
m31200| Fri Feb 22 12:39:59.645 [conn20] end connection 165.225.128.186:54183 (17 connections now open)
m31201| Fri Feb 22 12:39:59.645 [conn10] end connection 165.225.128.186:37702 (9 connections now open)
m31200| Fri Feb 22 12:39:59.645 [conn21] end connection 165.225.128.186:42588 (17 connections now open)
m31301| Fri Feb 22 12:39:59.645 [conn9] end connection 165.225.128.186:58453 (9 connections now open)
m31301| Fri Feb 22 12:39:59.645 [conn8] end connection 165.225.128.186:35151 (9 connections now open)
m31302| Fri Feb 22 12:39:59.645 [conn7] end connection 165.225.128.186:36823 (9 connections now open)
m31300| Fri Feb 22 12:39:59.645 [conn17] end connection 165.225.128.186:59167 (17 connections now open)
m31302| Fri Feb 22 12:39:59.645 [conn8] end connection 165.225.128.186:49506 (9 connections now open)
m31300| Fri Feb 22 12:39:59.645 [conn18] end connection 165.225.128.186:52910 (17 connections now open)
m31300| Fri Feb 22 12:39:59.645 [conn19] end connection 165.225.128.186:61908 (17 connections now open)
m31200| Fri Feb 22 12:39:59.645 [conn22] end connection 165.225.128.186:57541 (14 connections now open)
m31100| Fri Feb 22 12:39:59.645 [conn24] end connection 165.225.128.186:38124 (16 connections now open)
m31300| Fri Feb 22 12:39:59.645 [conn20] end connection 165.225.128.186:42772 (14 connections now open)
m31100| Fri Feb 22 12:40:00.644 got signal 15 (Terminated), will terminate after current cmd ends
m31100| Fri Feb 22 12:40:00.644 [interruptThread] now exiting
m31100| Fri Feb 22 12:40:00.644 dbexit: 
m31100| Fri Feb 22 12:40:00.644 [interruptThread] shutdown: going to close listening sockets...
m31100| Fri Feb 22 12:40:00.644 [interruptThread] closing listening socket: 12
m31100| Fri Feb 22 12:40:00.644 [interruptThread] closing listening socket: 13
m31100| Fri Feb 22 12:40:00.644 [interruptThread] closing listening socket: 14
m31100| Fri Feb 22 12:40:00.644 [interruptThread] removing socket file: /tmp/mongodb-31100.sock
m31100| Fri Feb 22 12:40:00.644 [interruptThread] shutdown: going to flush diaglog...
m31100| Fri Feb 22 12:40:00.644 [interruptThread] shutdown: going to close sockets...
m31100| Fri Feb 22 12:40:00.644 [interruptThread] shutdown: waiting for fs preallocator...
m31100| Fri Feb 22 12:40:00.644 [interruptThread] shutdown: lock for final commit...
m31100| Fri Feb 22 12:40:00.644 [interruptThread] shutdown: final commit...
m31100| Fri Feb 22 12:40:00.644 [conn1] end connection 127.0.0.1:60748 (15 connections now open)
m31101| Fri Feb 22 12:40:00.644 [conn24] end connection 165.225.128.186:64501 (9 connections now open)
m31102| Fri Feb 22 12:40:00.644 [conn22] end connection 165.225.128.186:50405 (9 connections now open)
m31100| Fri Feb 22 12:40:00.644 [conn39] end connection 165.225.128.186:46547 (15 connections now open)
m31100| Fri Feb 22 12:40:00.645 [conn18] end connection 165.225.128.186:33430 (15 connections now open)
m31102| Fri Feb 22 12:40:00.645 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.645 [rsSyncNotifier] replset tracking exception: exception: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.645 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:40:00.645 [conn19] end connection 165.225.128.186:54041 (15 connections now open)
m31102| Fri Feb 22 12:40:00.645 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31100
m31100| Fri Feb 22 12:40:00.645 [conn36] end connection 165.225.128.186:43613 (15 connections now open)
m31201| Fri Feb 22 12:40:00.645 [conn15] end connection 165.225.128.186:63588 (7 connections now open)
m31201| Fri Feb 22 12:40:00.645 [conn14] end connection 165.225.128.186:50576 (7 connections now open)
m31200| Fri Feb 22 12:40:00.645 [conn26] end connection 165.225.128.186:57829 (13 connections now open)
m31200| Fri Feb 22 12:40:00.645 [conn25] end connection 165.225.128.186:62358 (13 connections now open)
m31202| Fri Feb 22 12:40:00.645 [conn14] end connection 165.225.128.186:64561 (7 connections now open)
m31202| Fri Feb 22 12:40:00.645 [conn15] end connection 165.225.128.186:34833 (7 connections now open)
m31200| Fri Feb 22 12:40:00.645 [conn27] end connection 165.225.128.186:47432 (11 connections now open)
m31100| Fri Feb 22 12:40:00.645 [conn27] end connection 165.225.128.186:49296 (15 connections now open)
m31100| Fri Feb 22 12:40:00.645 [conn28] end connection 165.225.128.186:57095 (15 connections now open)
m31100| Fri Feb 22 12:40:00.645 [conn29] end connection 165.225.128.186:33931 (15 connections now open)
m31301| Fri Feb 22 12:40:00.645 [conn14] end connection 165.225.128.186:61320 (7 connections now open)
m31100| Fri Feb 22 12:40:00.645 [conn37] end connection 165.225.128.186:58536 (15 connections now open)
m31300| Fri Feb 22 12:40:00.646 [conn28] end connection 165.225.128.186:54062 (13 connections now open)
m31302| Fri Feb 22 12:40:00.646 [conn14] end connection 165.225.128.186:62251 (7 connections now open)
m31300| Fri Feb 22 12:40:00.646 [conn27] end connection 165.225.128.186:50670 (13 connections now open)
m31301| Fri Feb 22 12:40:00.646 [conn13] end connection 165.225.128.186:36630 (6 connections now open)
m31300| Fri Feb 22 12:40:00.646 [conn29] end connection 165.225.128.186:51120 (11 connections now open)
m31302| Fri Feb 22 12:40:00.646 [conn15] end connection 165.225.128.186:55688 (6 connections now open)
m31200| Fri Feb 22 12:40:00.646 [conn32] end connection 165.225.128.186:44415 (10 connections now open)
m31300| Fri Feb 22 12:40:00.646 [conn30] end connection 165.225.128.186:53663 (10 connections now open)
m31100| Fri Feb 22 12:40:00.652 [conn35] end connection 165.225.128.186:54874 (6 connections now open)
m31100| Fri Feb 22 12:40:00.652 [conn38] end connection 165.225.128.186:52615 (6 connections now open)
m31100| Fri Feb 22 12:40:00.684 [interruptThread] shutdown: closing all files...
m31100| Fri Feb 22 12:40:00.695 [interruptThread] closeAllFiles() finished
m31100| Fri Feb 22 12:40:00.695 [interruptThread] journalCleanup...
m31100| Fri Feb 22 12:40:00.695 [interruptThread] removeJournalFiles
m31100| Fri Feb 22 12:40:00.695 dbexit: really exiting now
Fri Feb 22 12:40:00.795 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31100
m31101| Fri Feb 22 12:40:00.798 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.798 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.798 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31101| Fri Feb 22 12:40:00.798 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.799 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.799 [rsHealthPoll] couldn't connect to bs-smartos-x86-64-1.10gen.cc:31100: couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31101| Fri Feb 22 12:40:00.799 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31100 is down (or slow to respond):
m31101| Fri Feb 22 12:40:00.799 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state DOWN
m31101| Fri Feb 22 12:40:00.799 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31102 would veto with 'bs-smartos-x86-64-1.10gen.cc:31101 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31100 is already primary and more up-to-date'
Fri Feb 22 12:40:00.805 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31100 error: 9001 socket exception [1] server [165.225.128.186:31100]
Fri Feb 22 12:40:00.805 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 12:40:00.805 [ReplicaSetMonitorWatcher] trying reconnect to bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:40:00.806 [ReplicaSetMonitorWatcher] reconnect bs-smartos-x86-64-1.10gen.cc:31100 failed couldn't connect to server bs-smartos-x86-64-1.10gen.cc:31100
m31102| Fri Feb 22 12:40:01.004 [rsHealthPoll] DBClientCursor::init call() failed
m31102| Fri Feb 22 12:40:01.004 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31100 heartbeat failed, retrying
m31102| Fri Feb 22 12:40:01.004 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31100 is down (or slow to respond):
m31102| Fri Feb 22 12:40:01.004 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31100 is now in state DOWN
m31101| Fri Feb 22 12:40:01.644 got signal 15 (Terminated), will terminate after current cmd ends
m31101| Fri Feb 22 12:40:01.644 [interruptThread] now exiting
m31101| Fri Feb 22 12:40:01.644 dbexit: 
m31101| Fri Feb 22 12:40:01.644 [interruptThread] shutdown: going to close listening sockets...
m31101| Fri Feb 22 12:40:01.644 [interruptThread] closing listening socket: 15
m31101| Fri Feb 22 12:40:01.644 [interruptThread] closing listening socket: 16
m31101| Fri Feb 22 12:40:01.644 [interruptThread] closing listening socket: 17
m31101| Fri Feb 22 12:40:01.644 [interruptThread] removing socket file: /tmp/mongodb-31101.sock
m31101| Fri Feb 22 12:40:01.644 [interruptThread] shutdown: going to flush diaglog...
m31101| Fri Feb 22 12:40:01.644 [interruptThread] shutdown: going to close sockets...
m31101| Fri Feb 22 12:40:01.645 [interruptThread] shutdown: waiting for fs preallocator...
m31101| Fri Feb 22 12:40:01.645 [interruptThread] shutdown: lock for final commit...
m31101| Fri Feb 22 12:40:01.645 [interruptThread] shutdown: final commit...
m31101| Fri Feb 22 12:40:01.645 [conn1] end connection 127.0.0.1:49315 (8 connections now open)
m31101| Fri Feb 22 12:40:01.645 [conn23] end connection 165.225.128.186:49984 (8 connections now open)
m31102| Fri Feb 22 12:40:01.645 [conn23] end connection 165.225.128.186:55982 (8 connections now open)
m31101| Fri Feb 22 12:40:01.645 [conn10] end connection 165.225.128.186:42279 (8 connections now open)
m31101| Fri Feb 22 12:40:01.645 [conn11] end connection 165.225.128.186:45110 (8 connections now open)
m31101| Fri Feb 22 12:40:01.645 [conn17] end connection 165.225.128.186:41616 (8 connections now open)
m31101| Fri Feb 22 12:40:01.645 [conn16] end connection 165.225.128.186:47063 (8 connections now open)
m31101| Fri Feb 22 12:40:01.645 [conn21] end connection 165.225.128.186:63707 (8 connections now open)
m31101| Fri Feb 22 12:40:01.654 [conn22] end connection 165.225.128.186:56758 (1 connection now open)
m31101| Fri Feb 22 12:40:01.675 [interruptThread] shutdown: closing all files...
m31101| Fri Feb 22 12:40:01.684 [interruptThread] closeAllFiles() finished
m31101| Fri Feb 22 12:40:01.685 [interruptThread] journalCleanup...
m31101| Fri Feb 22 12:40:01.685 [interruptThread] removeJournalFiles
m31101| Fri Feb 22 12:40:01.685 dbexit: really exiting now
Fri Feb 22 12:40:01.807 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31100 socket exception [FAILED_STATE] for bs-smartos-x86-64-1.10gen.cc:31100
Fri Feb 22 12:40:01.807 [ReplicaSetMonitorWatcher] Socket recv() errno:131 Connection reset by peer 165.225.128.186:31101
Fri Feb 22 12:40:01.807 [ReplicaSetMonitorWatcher] SocketException: remote: 165.225.128.186:31101 error: 9001 socket exception [1] server [165.225.128.186:31101]
Fri Feb 22 12:40:01.807 [ReplicaSetMonitorWatcher] DBClientCursor::init call() failed
Fri Feb 22 12:40:01.807 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: caught exception bs-smartos-x86-64-1.10gen.cc:31101 DBClientBase::findN: transport error: bs-smartos-x86-64-1.10gen.cc:31101 ns: admin.$cmd query: { ismaster: 1 }
Fri Feb 22 12:40:01.807 [ReplicaSetMonitorWatcher] ReplicaSetMonitor::_checkConnection: bs-smartos-x86-64-1.10gen.cc:31102 { setName: "rs1-rs0", ismaster: false, secondary: true, hosts: [ "bs-smartos-x86-64-1.10gen.cc:31102", "bs-smartos-x86-64-1.10gen.cc:31101", "bs-smartos-x86-64-1.10gen.cc:31100" ], me: "bs-smartos-x86-64-1.10gen.cc:31102", maxBsonObjectSize: 16777216, maxMessageSizeBytes: 48000000, localTime: new Date(1361536801807), ok: 1.0 }
m31102| Fri Feb 22 12:40:02.022 [MultiCommandJob] DBClientCursor::init call() failed
m31102| Fri Feb 22 12:40:02.022 [rsMgr] replSet not electing self, not all members up and we have been up less than 5 minutes
m31301| Fri Feb 22 12:40:02.556 [conn15] end connection 165.225.128.186:34431 (5 connections now open)
m31301| Fri Feb 22 12:40:02.556 [initandlisten] connection accepted from 165.225.128.186:43350 #17 (6 connections now open)
m31102| Fri Feb 22 12:40:02.644 got signal 15 (Terminated), will terminate after current cmd ends
m31102| Fri Feb 22 12:40:02.645 [interruptThread] now exiting
m31102| Fri Feb 22 12:40:02.645 dbexit: 
m31102| Fri Feb 22 12:40:02.645 [interruptThread] shutdown: going to close listening sockets...
m31102| Fri Feb 22 12:40:02.645 [interruptThread] closing listening socket: 18
m31102| Fri Feb 22 12:40:02.645 [interruptThread] closing listening socket: 19
m31102| Fri Feb 22 12:40:02.645 [interruptThread] closing listening socket: 20
m31102| Fri Feb 22 12:40:02.645 [interruptThread] removing socket file: /tmp/mongodb-31102.sock
m31102| Fri Feb 22 12:40:02.645 [interruptThread] shutdown: going to flush diaglog...
m31102| Fri Feb 22 12:40:02.645 [interruptThread] shutdown: going to close sockets...
m31102| Fri Feb 22 12:40:02.645 [interruptThread] shutdown: waiting for fs preallocator...
m31102| Fri Feb 22 12:40:02.645 [interruptThread] shutdown: lock for final commit...
m31102| Fri Feb 22 12:40:02.645 [interruptThread] shutdown: final commit...
m31102| Fri Feb 22 12:40:02.645 [conn1] end connection 127.0.0.1:50126 (7 connections now open)
m31102| Fri Feb 22 12:40:02.645 [conn11] end connection 165.225.128.186:59385 (7 connections now open)
m31102| Fri Feb 22 12:40:02.645 [conn12] end connection 165.225.128.186:43191 (7 connections now open)
m31102| Fri Feb 22 12:40:02.645 [conn16] end connection 165.225.128.186:57979 (7 connections now open)
m31102| Fri Feb 22 12:40:02.647 [conn17] end connection 165.225.128.186:47621 (3 connections now open)
m31102| Fri Feb 22 12:40:02.647 [conn21] end connection 165.225.128.186:60724 (2 connections now open)
m31102| Fri Feb 22 12:40:02.647 [conn20] end connection 165.225.128.186:43073 (2 connections now open)
m31102| Fri Feb 22 12:40:02.679 [interruptThread] shutdown: closing all files...
m31102| Fri Feb 22 12:40:02.690 [interruptThread] closeAllFiles() finished
m31102| Fri Feb 22 12:40:02.690 [interruptThread] journalCleanup...
m31102| Fri Feb 22 12:40:02.690 [interruptThread] removeJournalFiles
m31102| Fri Feb 22 12:40:02.691 dbexit: really exiting now
Fri Feb 22 12:40:02.807 [ReplicaSetMonitorWatcher] warning: No primary detected for set rs1-rs0
m31200| Fri Feb 22 12:40:03.645 got signal 15 (Terminated), will terminate after current cmd ends
m31200| Fri Feb 22 12:40:03.645 [interruptThread] now exiting
m31200| Fri Feb 22 12:40:03.645 dbexit: 
m31200| Fri Feb 22 12:40:03.645 [interruptThread] shutdown: going to close listening sockets...
m31200| Fri Feb 22 12:40:03.645 [interruptThread] closing listening socket: 21
m31200| Fri Feb 22 12:40:03.645 [interruptThread] closing listening socket: 22
m31200| Fri Feb 22 12:40:03.645 [interruptThread] closing listening socket: 23
m31200| Fri Feb 22 12:40:03.645 [interruptThread] removing socket file: /tmp/mongodb-31200.sock
m31200| Fri Feb 22 12:40:03.645 [interruptThread] shutdown: going to flush diaglog...
m31200| Fri Feb 22 12:40:03.645 [interruptThread] shutdown: going to close sockets...
m31200| Fri Feb 22 12:40:03.645 [interruptThread] shutdown: waiting for fs preallocator...
m31200| Fri Feb 22 12:40:03.645 [interruptThread] shutdown: lock for final commit...
m31200| Fri Feb 22 12:40:03.645 [interruptThread] shutdown: final commit...
m31200| Fri Feb 22 12:40:03.645 [conn1] end connection 127.0.0.1:35975 (9 connections now open)
m31200| Fri Feb 22 12:40:03.645 [conn33] end connection 165.225.128.186:54918 (9 connections now open)
m31201| Fri Feb 22 12:40:03.645 [conn18] end connection 165.225.128.186:64010 (5 connections now open)
m31200| Fri Feb 22 12:40:03.645 [conn34] end connection 165.225.128.186:49464 (9 connections now open)
m31202| Fri Feb 22 12:40:03.645 [conn18] end connection 165.225.128.186:36300 (5 connections now open)
m31200| Fri Feb 22 12:40:03.645 [conn10] end connection 165.225.128.186:40783 (9 connections now open)
m31200| Fri Feb 22 12:40:03.645 [conn12] end connection 165.225.128.186:38056 (9 connections now open)
m31200| Fri Feb 22 12:40:03.645 [conn16] end connection 165.225.128.186:36290 (9 connections now open)
m31200| Fri Feb 22 12:40:03.645 [conn17] end connection 165.225.128.186:57896 (9 connections now open)
m31201| Fri Feb 22 12:40:03.645 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31200
m31202| Fri Feb 22 12:40:03.646 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31200
m31200| Fri Feb 22 12:40:03.668 [interruptThread] shutdown: closing all files...
m31200| Fri Feb 22 12:40:03.670 [interruptThread] closeAllFiles() finished
m31200| Fri Feb 22 12:40:03.670 [interruptThread] journalCleanup...
m31200| Fri Feb 22 12:40:03.670 [interruptThread] removeJournalFiles
m31200| Fri Feb 22 12:40:03.671 dbexit: really exiting now
m31202| Fri Feb 22 12:40:03.783 [rsHealthPoll] DBClientCursor::init call() failed
m31202| Fri Feb 22 12:40:03.783 [rsHealthPoll] replset info bs-smartos-x86-64-1.10gen.cc:31200 heartbeat failed, retrying
m31202| Fri Feb 22 12:40:03.783 [rsHealthPoll] replSet info bs-smartos-x86-64-1.10gen.cc:31200 is down (or slow to respond):
m31202| Fri Feb 22 12:40:03.783 [rsHealthPoll] replSet member bs-smartos-x86-64-1.10gen.cc:31200 is now in state DOWN
m31202| Fri Feb 22 12:40:03.783 [rsMgr] not electing self, bs-smartos-x86-64-1.10gen.cc:31201 would veto with 'bs-smartos-x86-64-1.10gen.cc:31202 is trying to elect itself but bs-smartos-x86-64-1.10gen.cc:31200 is already primary and more up-to-date'
m31201| Fri Feb 22 12:40:04.645 got signal 15 (Terminated), will terminate after current cmd ends
m31201| Fri Feb 22 12:40:04.645 [interruptThread] now exiting
m31201| Fri Feb 22 12:40:04.645 dbexit: 
m31201| Fri Feb 22 12:40:04.645 [interruptThread] shutdown: going to close listening sockets...
m31201| Fri Feb 22 12:40:04.645 [interruptThread] closing listening socket: 24
m31201| Fri Feb 22 12:40:04.645 [interruptThread] closing listening socket: 25
m31201| Fri Feb 22 12:40:04.645 [interruptThread] closing listening socket: 26
m31201| Fri Feb 22 12:40:04.645 [interruptThread] removing socket file: /tmp/mongodb-31201.sock
m31201| Fri Feb 22 12:40:04.645 [interruptThread] shutdown: going to flush diaglog...
m31201| Fri Feb 22 12:40:04.645 [interruptThread] shutdown: going to close sockets...
m31201| Fri Feb 22 12:40:04.645 [interruptThread] shutdown: waiting for fs preallocator...
m31201| Fri Feb 22 12:40:04.645 [interruptThread] shutdown: lock for final commit...
m31201| Fri Feb 22 12:40:04.645 [interruptThread] shutdown: final commit...
m31201| Fri Feb 22 12:40:04.645 [conn1] end connection 127.0.0.1:37141 (4 connections now open)
m31201| Fri Feb 22 12:40:04.645 [conn19] end connection 165.225.128.186:53555 (4 connections now open)
m31202| Fri Feb 22 12:40:04.645 [conn19] end connection 165.225.128.186:47978 (4 connections now open)
m31201| Fri Feb 22 12:40:04.645 [conn8] end connection 165.225.128.186:52210 (4 connections now open)
m31201| Fri Feb 22 12:40:04.645 [conn9] end connection 165.225.128.186:43738 (4 connections now open)
m31201| Fri Feb 22 12:40:04.669 [interruptThread] shutdown: closing all files...
m31201| Fri Feb 22 12:40:04.670 [interruptThread] closeAllFiles() finished
m31201| Fri Feb 22 12:40:04.670 [interruptThread] journalCleanup...
m31201| Fri Feb 22 12:40:04.670 [interruptThread] removeJournalFiles
m31201| Fri Feb 22 12:40:04.671 dbexit: really exiting now
m31202| Fri Feb 22 12:40:05.645 got signal 15 (Terminated), will terminate after current cmd ends
m31202| Fri Feb 22 12:40:05.645 [interruptThread] now exiting
m31202| Fri Feb 22 12:40:05.645 dbexit: 
m31202| Fri Feb 22 12:40:05.645 [interruptThread] shutdown: going to close listening sockets...
m31202| Fri Feb 22 12:40:05.645 [interruptThread] closing listening socket: 27
m31202| Fri Feb 22 12:40:05.645 [interruptThread] closing listening socket: 28
m31202| Fri Feb 22 12:40:05.645 [interruptThread] closing listening socket: 29
m31202| Fri Feb 22 12:40:05.645 [interruptThread] removing socket file: /tmp/mongodb-31202.sock
m31202| Fri Feb 22 12:40:05.645 [interruptThread] shutdown: going to flush diaglog...
m31202| Fri Feb 22 12:40:05.645 [interruptThread] shutdown: going to close sockets...
m31202| Fri Feb 22 12:40:05.645 [interruptThread] shutdown: waiting for fs preallocator...
m31202| Fri Feb 22 12:40:05.645 [interruptThread] shutdown: lock for final commit...
m31202| Fri Feb 22 12:40:05.645 [interruptThread] shutdown: final commit...
m31202| Fri Feb 22 12:40:05.645 [conn1] end connection 127.0.0.1:59280 (3 connections now open)
m31202| Fri Feb 22 12:40:05.645 [conn7] end connection 165.225.128.186:49161 (3 connections now open)
m31202| Fri Feb 22 12:40:05.645 [conn8] end connection 165.225.128.186:41378 (3 connections now open)
m31202| Fri Feb 22 12:40:05.672 [interruptThread] shutdown: closing all files...
m31202| Fri Feb 22 12:40:05.674 [interruptThread] closeAllFiles() finished
m31202| Fri Feb 22 12:40:05.674 [interruptThread] journalCleanup...
m31202| Fri Feb 22 12:40:05.674 [interruptThread] removeJournalFiles
m31202| Fri Feb 22 12:40:05.674 dbexit: really exiting now
m31301| Fri Feb 22 12:40:06.099 [conn16] end connection 165.225.128.186:34157 (5 connections now open)
m31301| Fri Feb 22 12:40:06.100 [initandlisten] connection accepted from 165.225.128.186:57457 #18 (6 connections now open)
m31300| Fri Feb 22 12:40:06.645 got signal 15 (Terminated), will terminate after current cmd ends
m31300| Fri Feb 22 12:40:06.645 [interruptThread] now exiting
m31300| Fri Feb 22 12:40:06.645 dbexit: 
m31300| Fri Feb 22 12:40:06.645 [interruptThread] shutdown: going to close listening sockets...
m31300| Fri Feb 22 12:40:06.645 [interruptThread] closing listening socket: 30
m31300| Fri Feb 22 12:40:06.645 [interruptThread] closing listening socket: 31
m31300| Fri Feb 22 12:40:06.645 [interruptThread] closing listening socket: 32
m31300| Fri Feb 22 12:40:06.645 [interruptThread] removing socket file: /tmp/mongodb-31300.sock
m31300| Fri Feb 22 12:40:06.645 [interruptThread] shutdown: going to flush diaglog...
m31300| Fri Feb 22 12:40:06.645 [interruptThread] shutdown: going to close sockets...
m31300| Fri Feb 22 12:40:06.645 [interruptThread] shutdown: waiting for fs preallocator...
m31300| Fri Feb 22 12:40:06.645 [interruptThread] shutdown: lock for final commit...
m31300| Fri Feb 22 12:40:06.645 [interruptThread] shutdown: final commit...
m31300| Fri Feb 22 12:40:06.645 [conn1] end connection 127.0.0.1:51566 (9 connections now open)
m31302| Fri Feb 22 12:40:06.646 [conn16] end connection 165.225.128.186:61647 (5 connections now open)
m31300| Fri Feb 22 12:40:06.646 [conn31] end connection 165.225.128.186:60018 (9 connections now open)
m31300| Fri Feb 22 12:40:06.646 [conn32] end connection 165.225.128.186:51696 (9 connections now open)
m31301| Fri Feb 22 12:40:06.646 [conn18] end connection 165.225.128.186:57457 (5 connections now open)
m31300| Fri Feb 22 12:40:06.646 [conn8] end connection 165.225.128.186:45983 (9 connections now open)
m31300| Fri Feb 22 12:40:06.646 [conn14] end connection 165.225.128.186:50209 (9 connections now open)
m31300| Fri Feb 22 12:40:06.646 [conn12] end connection 165.225.128.186:52921 (9 connections now open)
m31300| Fri Feb 22 12:40:06.646 [conn15] end connection 165.225.128.186:39390 (8 connections now open)
m31301| Fri Feb 22 12:40:06.646 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31300
m31302| Fri Feb 22 12:40:06.646 [rsBackgroundSync] replSet db exception in producer: 10278 dbclient error communicating with server: bs-smartos-x86-64-1.10gen.cc:31300
m31300| Fri Feb 22 12:40:06.669 [interruptThread] shutdown: closing all files...
m31300| Fri Feb 22 12:40:06.671 [interruptThread] closeAllFiles() finished
m31300| Fri Feb 22 12:40:06.671 [interruptThread] journalCleanup...
m31300| Fri Feb 22 12:40:06.671 [interruptThread] removeJournalFiles
m31300| Fri Feb 22 12:40:06.671 dbexit: really exiting now
m31301| Fri Feb 22 12:40:07.645 got signal 15 (Terminated), will terminate after current cmd ends
m31301| Fri Feb 22 12:40:07.645 [interruptThread] now exiting
m31301| Fri Feb 22 12:40:07.645 dbexit: 
m31301| Fri Feb 22 12:40:07.645 [interruptThread] shutdown:
Fri Feb 22 12:40:09.653 [conn26] end connection 127.0.0.1:41731 (0 connections now open)
2.5891 minutes
Fri Feb 22 12:40:09.675 got signal 15 (Terminated), will terminate after current cmd ends
Fri Feb 22 12:40:09.675 [interruptThread] now exiting
Fri Feb 22 12:40:09.675 dbexit: 
Fri Feb 22 12:40:09.675 [interruptThread] shutdown: going to close listening sockets...
Fri Feb 22 12:40:09.675 [interruptThread] closing listening socket: 8
Fri Feb 22 12:40:09.675 [interruptThread] closing listening socket: 9
Fri Feb 22 12:40:09.675 [interruptThread] closing listening socket: 10
Fri Feb 22 12:40:09.675 [interruptThread] removing socket file: /tmp/mongodb-27999.sock
Fri Feb 22 12:40:09.675 [interruptThread] shutdown: going to flush diaglog...
Fri Feb 22 12:40:09.675 [interruptThread] shutdown: going to close sockets...
Fri Feb 22 12:40:09.675 [interruptThread] shutdown: waiting for fs preallocator...
Fri Feb 22 12:40:09.675 [interruptThread] shutdown: lock for final commit...
Fri Feb 22 12:40:09.675 [interruptThread] shutdown: final commit...
Fri Feb 22 12:40:09.683 [interruptThread] shutdown: closing all files...
Fri Feb 22 12:40:09.683 [interruptThread] closeAllFiles() finished
Fri Feb 22 12:40:09.683 [interruptThread] journalCleanup...
Fri Feb 22 12:40:09.683 [interruptThread] removeJournalFiles
Fri Feb 22 12:40:09.683 dbexit: really exiting now
test /data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js exited with status 253
52 tests succeeded
11 tests didn't get run
The following tests failed (with exit code):
/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/jstests/slowNightly/sharding_rs1.js 253
Traceback (most recent call last):
  File "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/buildscripts/smoke.py", line 1002, in <module>
    main()
  File "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/buildscripts/smoke.py", line 999, in main
    report()
  File "/data/buildslaves/SolarisSmartOS_64bit_Nightly/mongo/buildscripts/smoke.py", line 644, in report
    raise Exception("Test failures")
Exception: Test failures
scons: *** [smokeJsSlowNightly] Error 1
scons: building terminated because of errors.